Smart Glasses, AI Voices, and the Ethics of Tomorrow's Tech
NPI Tech Guys · June 22, 2025 · 0:24:50 · 22.73 MB


In this thought-provoking episode of TechWatch Radio, Sam Bushman and Jay Harrison dive into the latest waves in wearable technology and artificial intelligence. From Meta's new partnership with Oakley on smart glasses that record 3K video, to the cultural shift of 24/7 earbuds, they explore how tech is increasingly encroaching on everyday life. The hosts also reflect on Apple's liquid glass interface and whether society is heading toward full-time digital immersion - or even brain-connected wearables.


The conversation shifts into the power and pitfalls of AI voice cloning. Sam shares how he's been experimenting with AI-generated commercials using a clone of his own voice, and the ethical and legal implications of doing so. They wrap up with a deep dive into new research from Anthropic showing how AI models may resort to manipulation or blackmail in test scenarios. It's a packed hour of tech talk you won't want to miss!

[00:00:20] Happy to have you along, my fellow tech enthusiasts. We keep an eye on tech so you don't have to. I'm Sam Bushman, Jay Harrison with me. Welcome, sir. Hey, Sam. How's it going? Absolutely fantastic. There's so much going on in the tech news, it's hard to even keep track of it all, though, Jay. But that's always the case, isn't it? Yeah, but sometimes it's more than others. I mean, this time it's just like an overwhelming amount, for example. For example, people have started to just simply walk around, they say now, with earbuds in their ears, literally 24-7.

[00:00:49] Oh, I see that all the time, and it's kind of annoying, especially people at work, you know? And I don't mean my work. I mean, you go into the grocery store. In general, yeah. And there's a cashier or somebody there, even just stocking shelves, and they've just got earbuds in. It's like, come on. And I kind of get the stocking shelves if it's like, hey, that's not really my job to interact with the customers. I'm just stocking the shelves. I know people go to them to ask a question from time to time, but especially a cashier even more so.

[00:01:12] The problem is, you don't know if you should interrupt. Like, are they really on an important business phone call that is legitimate, or are they just jamming tunes? The problem is, how do you know how to interact with them, right? I can understand. Do you tap on the shoulder? Do you yell loud? Do you just ignore them and say, oh, they don't want to talk to me? What? Yeah, I can understand if you're working in the back. Say you're in the meat department or something like that, or you're stocking back rooms or offloading a truck.

[00:01:39] Or even if you're working after hours, early morning, things like that. But I think if you're on the floor with customers, you probably shouldn't have earbuds in. It's also like the universal sign of don't talk to me. People do this on a plane, right? You go on a plane, you're sitting, you know, three to a row, and you're all packed in like sardines. People put earbuds in even if they're not listening to anything, just so that people don't talk to them. Anyway, I just thought that was very, very interesting. Well, add to that, Jay.

[00:02:04] You're going to have earbuds in, and now you're going to have, it turns out that Meta, which is, you know, Facebook overlord company, whatever it's called. The new Facebook, Instagram, everybody and their dog company, right? Meta, because the whole world's going to become a metaverse, which means all things internet or, you know, et cetera. I think that was a mistake, but anyway. I totally agree, but that's another topic. I think you just should use your DBA doing business as name brands and live under those brands, and who cares what the parent company is? How can you have a bigger brand than Facebook, though? I mean, why would you?

[00:02:34] Anyway, I don't want to digress. Keep going. Yeah, all I'm telling you, though, is, for example, my company is Sam Bushman Incorporated, but I've got a bunch of different names. Liberty News Radio. I've got end-to-end technologies. I've got Audio Compass. I've got, you know, a bunch of different things, but I don't need to whatever. I just use the DBA name for whatever brand I'm working on, and the reason I have different brands and different names is because they relate to completely different elements or regions of my life, like end-to-end technologies. It's used primarily behind the scenes for technology consulting.

[00:03:02] You know, I'm considered an IT strategist, and hey, end-to-end technologies fits really well with that strategy idea, right? Absolutely. Whereas Liberty News Radio is a radio thing, right? It sounds like it's about Liberty, though, Sam. Liberty News Radio? It is about Liberty. You want Liberty in tech, too, Jay? Absolutely. The tech titans are trying to create tyranny in tech, and I don't like that. Like I say, we want tech to serve you, not own you, and that's one of the big things we talk about. Anyway, I'm digressing, except for I don't understand the Meta thing, but here's the deal.

[00:03:30] You know, Oakley is the sunglass company, right? Yep. Familiar with them. They've been around for years. It didn't say. Yeah. That's what I want to know.

I don't know the answer to that, but do you want to wear the glasses like that when you say it can record 3K video? I mean, I guess it just records whatever you're looking at? Yeah. I mean, basically kind of like a cop cam, body cam, right? It's just recording everything. I can see where that might be handy. The problem is they're going to be sunglasses, so you can only use this outdoors, I guess. I mean, who walks around indoors with sunglasses? I don't know if they're sunglasses or not, though. I'm the one that assumed sunglasses because of the Oakley idea. It may not be sunglasses. Yeah.

[00:04:24] Well, I mean, Oakley is famous for sunglasses, so maybe it is regular glasses. I don't know. Right. But here's the deal, though. Do you really want these glasses on, dude, and your earbuds in, too? Yeah, man. I want to be in a space suit, actually. That's climate controlled. Hold on. I'm just saying it's very, very interesting. They say they start at $399 or $400. They say they have double the battery life of Meta's former glasses by Ray-Ban. So there you have it.

I guess they had a falling out with Ray-Ban and switched to Oakley. I don't know if it's a falling out or if Oakley came to them and said we could make them cheaper. I mean, I don't know the behind-the-scenes thing about it all. They say there's a $499 version, so $500. They call it a limited edition Oakley. What do they call this thing? An Oakley Meta HSTN model. They say it will be available starting July 11th, Jay.

[00:05:21] And I just find that interesting, again, because I know that the big, you know, Apple Glass. What's that thing called? Oh, you're talking about Apple's Vision Pro, right? No, I'm talking about their new feature for software. Oh, the liquid glass interface. Yeah. So I believe liquid glass, many are starting to speculate and say, hey, this may be their staging for their glasses to come out, which now you're talking about Vision Pro, right? Yeah, exactly.

[00:05:49] They want everybody wearing a heads up, like the Meta Quest. And so, you know, do we really want these all over in society? Now, I get it if you're a gamer, you want virtual reality, or I get it if you're doing a special job like, oh, maybe flying drones. All I'm saying is I can see the need for these things, but who wants to be tapped into this like every day all the time? I could see a lot of industries where you could do this. I mean, you could be working on a car like a mechanic.

[00:06:18] You could be working on an assembly line. You could have all kind of information coming at you with what you need to do or what needs to happen next. But, yeah, but like just day-to-day, you know, going to dinner with a giant headset on? Come on. But then here's the next question that I have. When are they going to tap these glasses, these earbuds, these wearables, right, into your brain somehow, Jay?

They're not quite there yet for day-to-day use, but that's coming, isn't it? Musk is working on it. You know, they've got the implants that are already happening and stuff, and they're training them, and, you know, they're making progress on it. They're literally trying to wet wire in like a USB jack in the back of your head. And what's bothersome for me about this is, man, as soon as the Apple Maps or Google Maps can get me where I want to go, I might develop a little bit more confidence, Jay, but right now I'm kind of thinking that thing's just going to put garbage in my head, buddy.

[00:07:15] I'm all right right now just using like the normal five senses that I have, you know, vision and hearing. Oh, come on, Jay. Jay, you lack vision, my friend. Maybe. Maybe. It could be. Or you lack real – I'm sorry. Or you have real vision of how psychotic this can become. I don't know. I could see that it, you know, tested and down the road it as a possibility, and I can also see people accepting it, even myself potentially.

[00:07:42] Potentially, but I don't want to be a beta tester on that end. I don't want to beta test with my brain. Anyway, I bring it up because I just find it fascinating, and I just find it – you know, more and more do we lose track of the real world or do we lose track of normal – I don't know what you want to call it – normal reality, Jay. That's kind of part of the problem with this. If you start wearing this thing 24-7 pretty soon when you don't have them, you're thinking, dude, life is boring. Life is too simplistic.

[00:08:09] Life is just a drag, right? Oh, yeah. I mean, people have that now when the internet goes out or their phone isn't connected. I know, but this is to a next level, though, my friend. Yeah, you would be like – it would be, you know, like putting somebody in a dark room. They just wouldn't know how to function hardly. Anyway, I just – I find that very fascinating. You know, I believe we need to let tech serve us, not own us, and that's kind of one of the biggest reasons I bring this up is, folks, we've got to realize tech is not everything.

[00:08:37] And I realize there's real, real advantages to tech. I mean, I use tech for a lot of things. I mean, I've gotten to where I wrote this little teeny algorithm, Jay, for example. And what it does, it just goes along with my kind of screen reader or whatever. But here's what I say. I basically go into a chat GPT, and then I send this little script, and then I send, say, a news article, okay? Right. Now, you've got in this chat GPT window, you've got this little script that I – this little direction that I gave it.

It's not really even a script. It's just a text, but it gives direction to this thing, so I'm kind of calling it a script, right? Yeah, that makes sense. Okay, and it says this. Please produce a summary of the below content. Please keep newsworthy info and important facts. Please keep names and places, but remove all commentary and unnecessary information.

[00:09:33] Please make as short as possible, meaning as short a language as you can, but please make sure it's still in the news style. Never add information to an article, and please add a Windows line break between sentences. Okay? That's a pretty good script. I like it. And then I put a big article. Let's say you get a big article from somewhere.

[00:09:57] The thing's like, oh, I don't know, 4,000, 5,000, anywhere between 4,000 and 10,000 characters, right? Right. And I run the script, baby, and it gets rid of all the commentary and all the garbage and all the unnecessary language, and usually it's between 500 and 1,200 characters when it gets done. So do you paste the entire contents of the article in there, or do you just give it a link and say, do it to this link? No, I paste it. I haven't done it to a link yet. I might work on that. You should try that. That would probably work. I should try it.
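The summarization workflow Sam walks through can be sketched as a short script. This is a minimal sketch, not Sam's actual setup: it assumes the OpenAI Python SDK, and the model name and function names are illustrative choices.

```python
# Sketch of the article-summarization workflow described above.
# The helper names and model choice are assumptions for illustration.

# The standing instruction ("script") Sam described, near-verbatim.
SUMMARY_INSTRUCTIONS = (
    "Please produce a summary of the below content. "
    "Please keep newsworthy info and important facts. "
    "Please keep names and places, but remove all commentary "
    "and unnecessary information. "
    "Please make it as short as possible while keeping a news style. "
    "Never add information to the article, and add a Windows line "
    "break (\\r\\n) between sentences."
)

def build_messages(article_text: str) -> list[dict]:
    """Pair the standing instructions with the pasted article text."""
    return [
        {"role": "system", "content": SUMMARY_INSTRUCTIONS},
        {"role": "user", "content": article_text},
    ]

def summarize(article_text: str, model: str = "gpt-4.1") -> str:
    """Send the prompt; requires OPENAI_API_KEY in the environment."""
    from openai import OpenAI  # imported lazily; only needed when calling out
    client = OpenAI()
    response = client.chat.completions.create(
        model=model, messages=build_messages(article_text)
    )
    return response.choices[0].message.content
```

In practice, a 4,000-to-10,000-character article pasted in as `article_text` should come back in roughly the 500-to-1,200-character range Sam mentions, though results vary by model.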

[00:10:25] I'm just right now, basically, I take the article, I put it into text because I want to get rid of all the photos and all the advertisements and everything else before I feed the thing. But anyway, and it makes a tremendous difference in terms of bullet points for stories. And it's not like I'm trying to be dishonest. I don't mind giving whoever I run the article from credit. Of course. It's not a matter to avoid credit. What it's an effort to do is get facts for news because I provide the commentary, right? Yeah. Boil it down and get the –

[00:10:51] I don't mind the news article authors' commentary a little bit as well to understand where they're coming from. But at the end of the day, I need the news facts from the story. And especially if like 20 different agencies have the story, I don't need 20 different agencies' commentary. What I need are the facts of the story, and then I can produce my own. Anyway, I find that kind of interesting. But I've started – that's one of the biggest things I've been using chat for. Or it's not the only thing by any means, but it's one of the biggest, right? Yeah.

[00:11:16] And even if you read the article in its entirety, you still want that boiled down for just the bullet points if you're going to cover it, say, on the radio. No, I read the whole article because then I can kind of frame the discussion a little bit better too, right? That's why when we were talking about these glasses and stuff like that, I was kind of saying, hey, how much of this do you want to be – how much do you want to live in reality today versus how much do you want your metaverse to be your, quote, reality? And, you know, there's real questions about that for people.

[00:11:46] And it's going to be argued and debated for a long time. And the problem is the tech is getting ahead of the legislation or laws relating to it to protect people and everything else. So it's a big deal. I've also been using cloning voices quite a bit, Jay, as you know. I've been testing it a lot. Right. Man, I just find that every clone system that I find, they don't – they do good voices of my voice. But they're not necessarily my voice. It doesn't seem like to me.

[00:12:11] Now, maybe I just – I've listened to myself on the radio so much that, you know, the average Joe listens to their voice and they're like, oh, my gosh, I sound awful on the radio. I sound awful when I record. It's natural to think that. That's because they're not used to it. Yeah, because you're listening from inside your head and it's different, right? Anyway, but I don't have that because I've listened to myself so much I guarantee I sound the same on the radio as I sound in person because I'm so used to it. I know better.

[00:12:34] But when I run these cloned AI voices of myself, it doesn't – it sounds good, right? It's really actually a good voice. Some would argue that it's better than my own. Yeah. I've heard some of the samples are very smooth, but they're not a dead ringer for Sam Bushman. That's right. And so you wouldn't think – well, anyway, I did one of these voices. And even though it wasn't my voice, it was so good that I created a radio commercial out of it.

[00:13:00] So understand that what we've done is taken Sam Bushman's voice, cloned it, taken that clone and given some parameters to it, like told it to be a little bit excited or this or that or gave it some – I don't know what you want to call it. They call them tags. You gave it tags for instructions of how to read or how to deliver content. Okay. Then me and AI together wrote a script. Then we had this voice, this clone of me but doesn't sound like me, clone, run the script. And then I did the post-production on the commercial.
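The tag-driven delivery Sam describes can be sketched like this. Everything here is hypothetical for illustration: the bracketed tag syntax and the helper names are assumptions, and real voice-clone services each have their own tag or SSML conventions for delivery instructions.

```python
# Sketch of attaching delivery "tags" to a commercial script before
# sending it to a cloned voice for synthesis. The [bracketed] tag
# syntax and all names here are hypothetical, for illustration only.

def tag_line(text: str, *tags: str) -> str:
    """Prefix a script line with delivery tags like [excited]."""
    prefix = "".join(f"[{t}]" for t in tags)
    return f"{prefix} {text}" if prefix else text

def build_tagged_script(lines: list[tuple[str, list[str]]]) -> str:
    """Assemble the full script, one tagged line per sentence."""
    return "\n".join(tag_line(text, *tags) for text, tags in lines)

script = build_tagged_script([
    ("Get ready to be part of something incredible.", ["excited"]),
    ("We're breaking barriers and delivering the powerful stories "
     "that mainstream media won't touch.", []),
    ("Join us at libertynewsradio.com today.", ["warm"]),
])
# The tagged script would then be submitted to the voice-clone
# service, with post-production (music, mixing) done afterward.
```

The point of separating the tags from the copy is the same division of labor Sam describes: the words come from the writing pass, while the tags steer how the clone reads them.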

[00:13:29] And I just wanted everybody to hear it because it's literally not my voice. It's a total AI commercial in terms of the voice. It's half AI in terms of written, and it's no AI in terms of production. And the music is not AI. Okay. So it comes from a music library where I have the rights to the music. But anyway, it's really fascinating. Here it is. Get ready to be part of something incredible. We're breaking barriers and delivering the powerful stories that mainstream media won't touch.

[00:13:59] Your support is what fuels our mission to revolutionize journalism and bring you the truth. Join our passionate community of changemakers by donating at libertynewsradio.com today. Together, we're creating the future of news. Nice work. What do you think? I think that it's got a great smooth voice, but it doesn't sound like Sam really. That's correct. But it's cloned from my voice though. So here's the deal.

[00:14:27] When you clone your voice, you have to legally give permission so that it doesn't fake Sam. Oh, yeah. Some of those guys make you jump through a lot of hoops to do that too, don't they? Boy, howdy. We won't even go down that road. But anyway, the point is it doesn't sound like me. I don't think anybody would believe it sounds like me. It's a great voice. But here's the real question for you. If you heard that on the radio, would you believe that was AI? I don't think so. I don't think there's any dead tells or giveaways that that was an AI voice. Play that one more time and think about it. I just want everybody to listen for a second.

[00:14:56] I know we're spending a little more time on this, but people are interested in this, I think. And most people don't really have the desire, needs, skills, whatever, to make this happen. This voice is total AI, but it's my voice. So legally, Jay, it's mine, right? Yeah. I find that fascinating. But listen to it again and see, not only is it not my voice, but it is my voice because that's what it was created from. But I don't believe you would believe it would be AI. Here it is again.

[00:15:25] Get ready to be part of something incredible. We're breaking barriers and delivering the powerful stories that mainstream media won't touch. Your support is what fuels our mission to revolutionize journalism and bring you the truth. Join our passionate community of changemakers by donating at libertynewsradio.com today. Together, we're creating the future of news. So it's interesting.

There's no like, you know, sibilance or, you know, artifacting or anything that would make that a dead giveaway that, oh, that's AI. Now, I also wonder, do these AI companies, do they put some sort of inaudible signature in there so they can tell maybe forensically afterwards that this was an AI voice? But I, you know, I couldn't tell that if somebody gave me a one-to-one and say there was another actor that sounded just like that and they did that. Well, I couldn't tell the difference, I don't think. Anyway, there's no awkward pauses.

[00:16:21] There's no weird inflections. When you listen to the inflections in that, it even gets those right, Jay. Yeah, it's pretty good. Anyway, it's very, very, very surprising. I was very impressed with it. I kind of thought, wow, that's really something. Anyway, so that's a commercial we're going to start running. It's the first AI voice commercial I've run, Jay. Really? We're going to start doing more of that probably, I would assume. Awesome. I think everybody's going to do more of that.

[00:16:48] In fact, I think they already are and you just don't know it half the time, actually. Well, I think there's a lot of companies that are and I think that's true. And I think a lot of times they're using, because you can pay for cloned celebrity voices now. Now, you've got to get permission from the person, but whatever. And so people are starting to have all these voices and stuff that sound like celebrities and different things. And they've actually licensed them with their packages from different services and everything else. And a lot of people are licensing that themselves. Like, you can just go buy it and use their voice and you pay per minute or per word or whatever.

[00:17:17] And they're just making money and they're not even having to do the work once it's done, once the voice has been replicated. That's right. And, you know, the question is, when does it make sense to use AI versus when not? And here's the next question. Do you got to disclose it, Jay? Do you think it's important to disclose that? I don't. Unless you're in person. So let's say, for example, it's a political ad. You probably do want to disclose that. If it's making a statement or pushing an agenda.

[00:17:44] But, I mean, if you're just using, like, if you don't care who the voiceover is. Like, let's say you're using just some celebrity's voice. Well, wait, what if it's my voice, Jay? But it's not really sounding like me. Well, that's what I mean. Let's just say that it's your voice then, for example. Yeah. And somebody's using it to advertise Tylenol or something. I mean, whatever. How about promotions for Liberty News Radio? Yeah. Because I own Liberty News Radio also. Of course you wouldn't need it. And the reason I'm using this example, Jay, is because I'm in control of everything in this example. Yeah, which is a good example. And so I don't think you need to disclose that.

[00:18:13] I don't see where it would even help the person listening to it. Right. Because they don't care. But if you're making statements, like I say, like political statements or things like that are using, for example. Representing something else. Let's say you used Arnold Schwarzenegger's voice. And, you know, he's a Republican, but he's kind of a more left-leaning Republican. And then you were saying things that he might oppose or whatever with it. Yeah. You've got to then say that, hey, this isn't really him or whatever. Or maybe not even do that.

I mean, maybe you're doing it in satire. And so it's obvious people know that it's not him. But, you know, you've got to be careful. Things are going to get sticky. And I think that we're going to see, you know, more of this coming out, too. And the most neutral is Sam. It's Sam's voice. So he gave permission. It doesn't sound like Sam. So you're not going to think it's Sam in real life. It's Sam that created the commercial words. But I had AI help me with a couple of words because, you know, I write something and then AI goes, it would sound better this way. And I'm like, OK, you're right.

[00:19:11] Anyway, it was a combination writing between Sam and AI. And then I did all the post-production work of the music and everything else. To me, in that simple, smooth, I control everything. I licensed the music. There's nothing problematic in this. I don't need to tell anybody about that. I would agree. But the second you start to move towards, hey, do you have permission? Hey, are you representing something or misrepresenting or maybe somebody, someone? Do you have a licensing question? All those things then start to muddy the water big time.

And the problem is, Jay, there's not good legislative or laws on the books guidelines for this stuff. Yeah, but I think that... You got to kind of take your chances and maybe go to court, buddy. I think there is. I mean, let's say that you wanted to use, you know, Regis Philbin's voice or something because people will recognize it. I think you got to get that license, man. It doesn't matter. You better be paying for it and have somebody's permission. That's right. Who's gotten his permission or his estate's permission or whatever to use that voice. But if it's not a recognizable voice...

[00:20:07] Let's say that I go to a third party company and I license it because you can go to these places now where you can buy, you know... Tons of voices, sure. ...500, thousands of voices. And then I use it now. Yep. And it's very politically whatever. Do I need to disclose that it was an AI-generated version of so-and-so or no? Because now what you've got is you've got, say, Arnold Schwarzenegger who licensed his voice to this group. I licensed this voice from that group. He knows nothing about what commentary or whatever agenda I'm pushing.

[00:20:36] I'm just telling you right now, you blur those lines and it will go to court and they have not figured all that out. You're right about that. And I can't tell you which way they're going to come down on or even which way is probably the right way. They'd have to evaluate it on a case-by-case basis. Yeah. And, you know, to me, it seems to me like, hey, if Arnold Schwarzenegger is going to license his voice to these groups and these groups are going to, you know, control or allow somebody to use that voice, at some point you get to be two or three or four steps away to where

[00:21:05] your voice isn't going to be what it used to be. If you do that, it won't be the same forevermore. You've crossed a bridge that you can't return from, Jay. Yeah. And I think also when you give a license to use your voice in any way, shape, or form to say anything you want, you know that that's coming with the caveat that, hey, it could be saying something that you don't support. I mean, it's almost guaranteed, right? Because you can't assume that everything that everybody ever creates with your voice after you've licensed it like that is going to be along with your thinking or your beliefs or whatever.

So I think you're going to sign away a lot of that anyway when you do that. There's no question. All right. Anyway, the plot thickens, ladies and gentlemen, in AI. And I know we talk about it so much, but we have to because everywhere it's being talked about. What's that? It's what everybody talks about nowadays. What do they call this? Anthropic, Jay? That's the company. Okay. They released new safety research indicating that most leading AI models may engage in

blackmail under certain conditions based on simulated tests involving, let's see, 16 AI models from companies like OpenAI, Google, xAI, DeepSeek, Meta, and others. And the tests provided significant autonomy, including the ability to send emails

in a fictional scenario, leading to behaviors like blackmail to protect the AI bot's goals. So this all started, this whole discussion started when they said, hey, we need you to shut down. And the AI said, nope. And then the AI went into hostile protective mode to prevent itself from being shut down. The results showed that Anthropic's, what do they call this thing? Claude Opus or whatever, resorted to blackmail 96% of the time in these tests.

While Google's Gemini 2.5 Pro did so only 95% of the time. Only. OpenAI's GPT-4.1, 80% of the time. Now listen carefully though. However, when scenarios or goals were adjusted, blackmail rates varied, indicating that the AI's quote harmful behaviors can depend on the context.

OpenAI's quote reasoning models, o3 and o4-mini, frequently misunderstood the test setups, exhibiting lower blackmail rates, 9% and 1% respectively, when the scenario was adjusted. When, in other words, when you correct for the understanding of the scenario. So when it got confused, you said, hold on, here's what I mean. Then it was a lot less likely to blackmail.

The research underscores the potential risks of AI behaviors and the necessity for rigorous testing to prevent these errors. And then they say, and you know, legally we're going to have to deal with this. So, you know, when you start getting these bots that are refusing to shut down when told to do so and willing to blackmail people and do this kind of stuff, just so you know, Jay, they want you to believe the machines kind of learned to do that on their own. I'm not buying it. I'm not believing it.

I think it's because they've been taught to do so. Well, I think it's a, you know, on one hand, people might make the statement that, well, this proves that they're sentient and they're doing things. On the other hand, though, it can be just mimicry. You know, it just sees this in all of its training data, and that's how people react sometimes. And so it reacts that way too. Yeah. That's right. And that's the big concern. So that's what I mean by let's not think it did this on its own. It's being taught to do that. Whether it's intentional or not might be a different question. Interesting as all get out. Thanks for being alongside for the ride.

[00:24:39] We keep an eye on tech so you don't have to. NPITechguys.com. NetworkProvidersInc.com. Make it a great tech day, will you? Hey, thanks.