Elon vs. OpenAI, Meta's AI Layoffs, and the Future of ChatGPT 5.5 | 04-24-26
NPI TechGuys · April 24, 2026 · 0:24:50 · 22.73 MB


Sam Bushman and Jay Harrison break down the biggest tech stories you need to know. From Elon Musk's $150 billion lawsuit against OpenAI to Meta cutting 14,000 jobs in a massive AI bet, plus what ChatGPT 5.5 really means for everyday users, why AI courts are a dangerous idea, and whether the AI chip shortage is about to get worse. TechWatch keeps an eye on tech so you don't have to. Sponsored by Network Providers Inc., your friend in the IT business. Strategic consulting, managed services, and more at networkprovidersinc.com.

Timestamps:
0:00 Intro and Sponsor
0:47 Elon Musk's $150 Billion Lawsuit Against OpenAI
6:54 Should AI Run Our Courts? 10 Reasons Why Not
9:30 Meta Cuts 14,000 Jobs to Bet Big on AI
12:17 ChatGPT 5.5 Rolling Out to Millions
14:22 Second Half Kickoff
15:05 AI Chip Demand Surge and the Global Shortage
16:36 Generative AI and Project Cancellations
18:54 Chips vs. Software: The Real AI Bottleneck
22:24 What ChatGPT Model Are You Actually Using?

Call to Action: If this broke down the tech world for you, subscribe and share with someone who needs to keep up. Check out all our episodes and resources at NPITechGuys.com. Need IT support for your business? Visit NetworkProvidersInc.com or grab the Cyber Playbook at networkprovidersinc.com/cyberplaybook. Let tech serve you, not the other way around.

[00:00:18] We call it TechWatch because we keep an eye on tech so you don't have to. I'm Sam Bushman, Jay Harrison with me. NetworkProvidersInc.com, our sponsor. You want a friend in the IT business? Think NetworkProvidersInc.com. They've got everything when it comes to IT, including strategic consulting. Check it out today. NetworkProvidersInc.com, the show. NPITechGuys.com. If you want us to build your website, npihosting.com is where you go for all that stuff.

[00:00:47] Ladies and gentlemen, they say it's a $150 billion lawsuit. I don't even know how to respond to $150 billion if I want it or if I whatever. Anyway, it's a showdown in Silicon Valley. It starts on Monday. Elon Musk sues OpenAI. He claims breach of founding principles. Jury selection starts on Monday. It's going to be in Oakland, I guess, California.

[00:01:13] And so Elon Musk is suing OpenAI and its leader, Sam Altman, because he says they're deviating from their, quote, non-profit mission. Musk claimed OpenAI, now valued at $730 billion, pursued profits over public good. A shift for which he says, hey, we need leadership changes and $150 billion in damages.

[00:01:43] I don't know what's going to happen with this because I see both sides of it. I see that Elon Musk is pretty right factually from what their original agreements were, Jay. On the other hand, to compete in this world, I just don't see how you're going to make it a completely non-profit reality. And now, I guess, no harm, no foul, because Elon Musk has his own, I don't know what you want to call it, AI ecosystem is the best word I can think of for it.

[00:02:09] And so how's that going to all work when you have your own, when you can compete, when, I don't know. And can they stay alive and compete in the modern world without turning it into some profit-type ventures and stuff? It's going to be sticky, and you're going to have to have pretty savvy IT judges to handle this, Jay. I would agree with that, but I don't know how they're going to do this. There's a lot of issues here. Number one, I guess, OpenAI was started as a non-profit, right? Yes. And I know that they make tons of money, but they also lose tons of money.

[00:02:38] And last I've heard, they're losing a lot more than they're making. So I don't know that they, while they have the potential to be profitable, have they actually been making a profit? No. So that's number one. Amen. I see what he's saying. Elon Musk is wanting to sue them. But can anybody just sue a non-profit? Like, can I sue some public broadcasting as a non-profit because I think that they're making money? Anybody can sue anybody for any reason, Jay. The question becomes, will you have any ground or will it be dismissed and say you don't have standing?

[00:03:08] That's a real question, too. Yeah, but I mean, I thought like the government or somebody had to go against non-profits if a non-profit is actually making profit and they're not saying that or there's a problem with it. I always thought that was government. Of course, I didn't think just an individual or somebody else could bring a judgment against them. Well, they say you can indict a ham sandwich; anybody can sue. The question is what will happen to that suit? Does it get thrown out before it ever gets wings or legs? You know, that's a debate and your point's right. There's some concerns there and there's some overarching rules to prevent just millions of lawsuits and stuff like that, too.

[00:03:37] So I don't know where it's going to go, but here's what's interesting to me. If Claude says we are going to restrict our models because they're too powerful and they could be used for wrongful purposes and so we're only going to put them in the hands of the few. And then if ChatGPT, or OpenAI, says, well, we're a non-profit and we're going to release it to more people, we're not going to be as critical about that.

[00:04:04] Aren't they on one hand serving their non-profit idea by doing that, too? That's one question. I would think so. The other question is what about Elon Musk's group now? And then you go back to the Chinese. What's that Chinese thing called? That was their Chinese AI? I don't remember. You know, made a lot of headlines recently. What's the word for that? I don't remember. Something DeepSeek, I think it's called? Yeah, DeepThink or something like that, yeah. DeepSeek. Anyway, and they say that's an open source one, Jay.

[00:04:34] I haven't heard anything about that in six months. But, well, they're talking about it now and they're saying it's open source and they're saying they're keeping up with the, quote, private world, the non-profit world, and the open source world. Imagine the communist Chinese in charge of the open source. It gets weird fast. It's like, what the heck is going on? I mean, open source is open source. I don't think it matters who it's from, if you can read the source code, right? I mean, yes. But what's funny to me is we're going to debate whether AI can make a profit or open AI can make a profit.

[00:05:03] We're going to debate whether Claude's going to release how much to whom, and then Elon Musk is going to have his third party thing, and then the Chinese are running the open source version. It's a strange world we're in, sir. It's a wild west for the AI, and I like it. Amen. So I don't know what's going to happen with all this, but it's a massive lawsuit, and I wanted to make sure you knew about it. And we'll keep an eye on it because the outcome of the lawsuit really matters, too.

[00:05:22] Will it put a damper on or tamp down creativity and rollouts of AI? What will happen? Will the judge put a kibosh on a lot of things? You know, what's going to happen with this? I don't know. There's a lot up in the air on this one, Jay. Well, I think one thing's for certain. The lawyers will make out like bandits. It's sad but true. But what if we just use AI? Can't AI just litigate this thing? No. You've got to have real people there.

[00:05:52] I mean, they're going to be using AI, of course. What are you talking about? Why do we have real people? Judges are mad now because lawyers are submitting AI stuff, and a lot of the AI stuff is really pulling fake cases, and that's a big battle, too, Jay. Yeah, that is a battle. Why can't that be vetted? You're going to need an AI to vet your AI output. Well, all we've got to do is just take the ability to edit and audit source code and apply it to editing and auditing court cases, Jay. That's all it's going to be.

[00:06:17] Not only is that already being done, I would imagine there is vast amounts of legal stuff that's being done through AI now because it's always been a headache and a nightmare to try to do that kind of stuff manually. No doubt. This is one place where it will shine, for sure. It'll shine if it's trained right and if we give it time. See, right now we're just in the early stages of this, and stuff isn't vetted well enough. I'll give you an example.

[00:06:41] People are wondering if we're going to have AI courts, and people are wondering if we're going to have AI judges, and we're wondering if we're going to have AI government entirely. And so I created, for another talk show, by the way, 10 reasons why I'm against AI courts. Code is not conscience. AI cannot really do fair moral judgment and mercy because it's all based on who wrote it. If they don't believe in mercy, just justice, then it's a justice bot. Accountability must be human.

[00:07:10] You know what? Who answers, or who's responsible, when AI gets it wrong? Where does the buck stop? Due process requires transparency. Black box decisions violate justice. Bias in, bias out. You know what? AI inherits the flaws of its creators, right? And the data that it's got as its core. Efficiency, folks, might be cool, might be really efficient, but it's not justice. Okay? Faster decisions aren't always better decisions. What about the human element?

[00:07:40] What about your gut? What about your heart? Your rights cannot be automated away, folks. They come from God Almighty, not government in the first place. Constitutional protections require human vetting, or human defense, if you will. No soul, no stakes. AI has no skin in the game, no consequences or anything else. Centralized control risk: AI governance concentrates power dangerously. If you're not very careful, you don't have checks and balances.

[00:08:09] AI just becomes judge, jury, and executioner. And then finally, appeal to whom? Who are you going to appeal to? If there's a machine, who are you going to appeal to, right? And then finally, it needs to be our tool, not our ruler, Jay. AI should assist us all. I get it. But as we always say on TechWatch, let's have tech serve us, not rule us, Jay. Isn't that the bottom line for this thing? Yeah. There's a new movie on Amazon called Mercy.

[00:08:38] It's got Chris Pratt in it. Yes. And it's all about an AI judge. I don't know if you've seen that. What do you think? Yeah. I've heard about it. I haven't seen it. I actually watched it. You watched it? Yeah. A week or so ago. And I thought it was pretty good. I thought it was decent. You know? Do you think an AI judge can provide mercy, Jay? Well, I don't want to give any spoilers for those who haven't seen it. But it does make you think a little bit. And it's a decent movie. You know? Well, it depends on who writes the AI and who writes the LLM, right?

[00:09:05] If it's a topic you're interested in, you should check it out. All right. Anyway, I just... That's your homework, Sam. Watch that. I don't mean to go off on that, but I really want people to kind of understand this is serious business. Those are great points. AI is in everything we're doing, folks. And I'm telling you right now, we can say, well, let's talk about something else for a change. I'm telling you, it's the freight train. Yeah, you can't. It's the freight train. Everything is involved in AI. For example, Meta is having massive job cuts. Yes, they are.

[00:09:34] Zuckerberg is betting on an AI future. And he thinks it's going to be huge. Meta plans to reduce its workforce by 8,000 employees. That represents families. These are not low-paying jobs or something like that. Now, they might be lower on the totem pole at a high-paying company or whatever. But if they reduce their employees by 8,000, then they say they will not even fill 6,000 current open positions.

[00:10:00] That's 14,000 people from a super high-tech company. They say this decision is part of a strategic refocus on artificial intelligence. And it comes as the tech giant shifts resources amid broader industry layoffs and heavy investments in AI. Now, I find this fascinating because, you know, I look at the world.

[00:10:28] And for small business, small to medium-sized business where I work, we're just not that far along. We have a lot of great AI happening. But we don't have a lot of tasks in place to truly replace people. But some of these companies have built their own internal stuff, and they do, Jay. And so now, really, the takeaway from this is it's happening and it's real. And the takeaway is it's going to eventually kind of work its way down to the small business world, to the medium business world. It's at the big corporate, you know, big, right? It used to be big data.

[00:10:56] And we couldn't do near the analytics that big data could do. But now big data is filtered down to us. And AI is going to filter down to us, too. I don't know what the lag is going to be because it's happening faster than I thought it would. But at the same time, these big companies are already there, Jay. We just don't see it in our day-to-day lives because it's either behind the scenes or it's working and people don't understand the tech behind it yet. But this is real and happening, and it's going to change the world. I don't want you to think there's not new jobs because there's a lot of new jobs. What about AI implementers?

[00:11:25] Somebody's got to watch the AI run and monitor it. You've got to have some kind of a human element to this thing. So we're seeing a massive change. But you don't need everybody to take care of the horses now. You just need somebody to fill your car with gas or plug it in, right? That's exactly right. So this change is similar to that example. And it's going to be like 10 to 1. I think that you're going to see one person doing the job of what 10 other people used to do. And big companies like Facebook, Meta, they're going to streamline.

[00:11:54] And frankly, I'm surprised it hasn't happened sooner that these guys are doing this. But we're going to see this, and we're going to see it with a lot of big tech. And tech isn't going to be the gold mine of jobs that it has been in the past, at least for a little while. I think it's going to create more jobs, though, in the long run. But they're going to be different. They're not going to be what we're used to. Anyway, we thought we'd bring that to your attention.

[00:12:17] OpenAI rolls out its new AI model, ChatGPT 5.5, and they're going to expand access to millions. We'll talk about it. This is TechWatch. In the medical field, IT security is crucial.

[00:12:45] Our highly skilled consultants are HIPAA certified and have 20-plus years of experience servicing medical clinics, billing, and supply companies. We offer comprehensive endpoint protection, guarding your computers and servers against all stages of threats. And with our 24-7 monitoring services, you'll never worry about extensive downtime again. Ready to level up your IT support? Call 801-706-6980 today and discover how great IT services can be with Managed IT Services. Cyber crime is exploding.

[00:13:14] Take Sarah from Sweet Delights, whose world crumbled after having to close and not being able to bounce back. Small businesses are prime targets, but the right strategies can keep yours safe. Jay Hill, CEO of Network Providers, has co-authored The Cyber Playbook, which simplifies cybersecurity for business owners with strategies to avoid costly breaches and fines. Build a strong cyber attack response. Secure your business with key protections.

[00:13:43] Cyber threats aren't slowing down, but you can stay ahead. Protect your business. Ensure its security for tomorrow. Get the Cyber Playbook today at networkprovidersinc.com slash cyber dash playbook. Or call 385-446-5500 now.

[00:14:22] You're along for the ride, my fellow Americans. TechWatch. We keep an eye on tech so you don't have to. There's a lot going on, Jay, in the industry. I'll tell you that. Yes, there is. All kinds of stuff going on. You got OpenAI rolling out their new model like you talked about. You got AI chips. Demand is surging globally. We'll still have that problem. It's still kind of a RAM-pocalypse.

[00:14:47] They're driving major gains for semiconductor makers like Intel, but even AMD, even memory chips, even things from Samsung and everything else are going up. They're driving the price up like crazy. Amen. I don't know what to think about this because AI chip demand surges globally. All that means is it's going to be harder for you and me to get computers and hardware, Jay. It does. It does mean that.

[00:15:12] People are, you know, there's still some stock, but I think that, you know, it just seems like they need to ramp up production. If they're eating up stuff and there's a high demand for it, you know, maybe we're going to see the other side of that. By the time production ramps up, then demand could cool and you could see a glut of chips and the price fall dramatically. I really think when you have a massive, massive new, I don't know what you want to call it, use case is the best description I can come up with.

[00:15:42] You need to kind of fuel that yourself. So if my new use case requires tons of new tech, they really need to build their infrastructure around fulfilling that use case with hardware, with electricity, with whatever the case may be. And I think of that when it comes to like digital currencies or some of these kind of things, cryptos and stuff. They're kind of the same way. They take so much computing power and so much processing and so much this and that.

[00:16:10] They kind of need to build their own infrastructure to support their use. Same thing with plug-in vehicles and stuff like that. Otherwise, they just basically, you know, destroy what's out there from a supply and demand point of view and then they can't, you know, right themselves. But if you could project your supply and demand requirements and or needs and build for them, I think that's what Donald Trump and others are trying to get to. And I think it's wise for them to do so.

[00:16:37] Generative AI development continues across major companies despite some project cancellations. So you're going to expect cancellations. Google's been famous for making a pattern of this. We try something, we don't like it, we dump it. We try something, we don't like it, we dump it. What do they call it? Google Labs, Jay? Yeah. Well, they've always got projects that are needing to be done and they're dropping stuff and rebuilding and, you know, innovating. And that's the thing. And with innovation, you're going to dump a lot of stuff. You're going to say that doesn't work out for us.

[00:17:07] Sadly, I think that's the reality. And so you've got basically data infrastructure, large scale computing. It remains central and we've got bottlenecks for this in AI deployment. And so it's going to slow down, folks. I know you hear a ton, but when it comes to everybody starting to onboard, not just the thinking or the reasoning layer, which is what ChatGPT and Claude are, but you start to have people build automation behind it and have AI take actions.

[00:17:37] Now you're talking token suck. There's input and output tokens that govern all these pricings for all these models. Some are more expensive than others and all that kind of stuff. But at the end of the day, you've got to have the infrastructure. We don't have it. It's not ready. So enterprise, for example, that's where we see the lead happening.
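
To put rough numbers on the token-pricing point above, here's a quick Python sketch. The model names and per-1K-token rates are invented placeholders, not real prices from any provider; the point is only that output tokens on a heavy reasoning model can cost an order of magnitude more than an auto-mode default.

```python
# Rough sketch of per-request token cost. The model names and prices
# below are hypothetical placeholders, not real rates for any provider.
PRICING = {
    # (input $ per 1K tokens, output $ per 1K tokens) -- illustrative only
    "cheap-auto-model": (0.0005, 0.0015),
    "big-reasoning-model": (0.0150, 0.0750),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request under the toy price table."""
    in_rate, out_rate = PRICING[model]
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# The same 2,000-token-in / 500-token-out request on each tier:
cheap = request_cost("cheap-auto-model", 2000, 500)
big = request_cost("big-reasoning-model", 2000, 500)
print(f"cheap: ${cheap:.4f}, big: ${big:.4f}, ratio: {big / cheap:.0f}x")
```

Multiply that gap by millions of daily requests and it's easy to see why a provider would default most traffic to the cheaper tier.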

[00:17:56] Enterprise focus is already shifting towards, quote, practical AI use cases tied to ROI, return on investment, rather than speculation. That's something big, Jay, that when we start to do that, then tech not only becomes real, but it slows down. Yeah, but it also becomes more profitable and more productive. Without question. And then that will fuel building the infrastructure. So it's a little bit like the chicken and the egg.

[00:18:25] I hate to be weird about that, but I'm just saying it's cliche, whatever you want to call it. But it's something we really need to kind of understand. Right? Yeah, and they both need to develop off of each other. And it takes time for that to ramp up. But I think that these companies, you're seeing a lot of it, and we see a lot of wow factor with AI, too. But sometimes it doesn't always translate to real dollars. That's where we need to see it happening for industry to grow on it.

[00:18:53] Anyway, so the question becomes there's a battle between the global chip manufacturers and what they can bring to the table hardware-wise versus the debate is who's got the software. You know, you've got automation and AI assistant workflows. They're continuing to replace manual business processes. But it's slow. And it's slow because it's very hard to automate at scale. Let me explain that.

[00:19:21] It's very easy to automate if you have a solid, solid clean data set and a solid process that you say this works every time. Now it's fairly easy to use AI to intelligently think through it or reason through it and then for those different layers to go. To say, I can trust the data. I've got the connectors to connect and go and do things. And I've now got the ability to duplicate it consistently over time. That works well.
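
The "clean data plus a repeatable process" idea can be sketched as a simple gate: validate each record, run the deterministic step only on what passes, and route the rest to a human so nothing is silently dropped. The field names and the tax rule here are hypothetical, purely for illustration.

```python
# Minimal sketch of the "clean data + repeatable process" gate described
# above: validate first, automate only what passes, keep an audit trail.
# All field names and rules here are hypothetical.

def is_clean(record: dict) -> bool:
    """A record is automatable only if the fields we depend on are present."""
    return bool(record.get("id")) and isinstance(record.get("amount"), (int, float))

def process(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Run the repeatable step on clean records; route the rest to a human."""
    done, needs_review = [], []
    for rec in records:
        if is_clean(rec):
            # Deterministic, repeatable step: same input, same output.
            done.append({**rec, "tax": round(rec["amount"] * 0.07, 2)})
        else:
            needs_review.append(rec)  # audit trail: never silently drop data
    return done, needs_review

done, review = process([
    {"id": "a1", "amount": 100.0},
    {"id": "", "amount": 50.0},  # dirty: missing id, goes to human review
])
```

The escape hatch for dirty records is the whole point: automation stays trustworthy because anything the rules can't verify falls back to a person instead of being processed wrong.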

[00:19:50] But when you try to scale automation across the board to thousands of people that do things differently, now it's a different ballgame, Jay. And the comparison I like to give people is AI is here and it's here to stay. But if you think AI is going to change the game in one, two, three, four, five years so drastically that we can't function, it's not true. Think about the iPhone. When did the first iPhone come out? 2006? 2012 maybe? I don't remember. I think in 2008 they had the iPhone. I'll have to ask ChatGPT: when did the first iPhone come out?

[00:20:19] But anyway, I bring this up to say how long has it been out? 20 years at least? If it was 2006, certainly 20 years. Yeah. When did the first iPhone come out? I thought you were using Alexa for this, Sam. No, I've got ChatGPT at my fingertips here. But the reason I bring this up, though, is, Jay, how long has it taken the iPhone to make all the changes that we see now? The first iPhone was released by Apple on June 29, 2007.

[00:20:50] Okay. It was announced earlier that year. So we're right at coming up on 20 years. It was the 29th. So I was close. Yep. But anyway, I was just guessing, by the way. Just redneck guesses. But there you go. Anyway, the reason I bring it all up, though, is it's taken 20 years. And if you look at the first iPhone, I mean, it's almost embarrassing. It's like coming out in your pajamas, you know? Compared to the modern day. Yeah, but high tech at the time, though, right? Oh, it was. Don't get me wrong. I'm not mocking it. I'm just trying to get across the point. It was high tech at the time. So is AI.

[00:21:18] But AI is going to take 10, 20, 30 years to mature. And you can say, no, Sam, AI is agentic. In other words, AI can think for itself and teach itself, and so it's going to be faster. Yes, it is. But it's kind of like Moore's law on chips. Is there a law that's going to slow it down? Yes. You want to know what it is? People. Humans' ability to keep up, number one. Yep. Number two, the ability, again, to do something isn't hard.

[00:21:45] To do something across a gazillion variables becomes really hard. In other words, lab results are easy. Real results, or real-world results, are really, really difficult. And so it's tying it to that automation. It's tying it to productive solutions without making mistakes. When you give it access to your data, you've got to make sure that it doesn't just wreck your data, ruin it, destroy it. You've got to give it clean data. You've got to give it connectivity. You've got to give it governance. You've got to give it all those things, and they haven't been really worked out well yet.

[00:22:11] Now, if you take a big company like Meta or somebody, and they've got their processes down and they're prepared for it, they can implement it, and boom, it's happening. That doesn't mean the rest of us can or will that easily. Yeah, there is a comfort level. OpenAI is rolling out 5.5, expanding access to millions. That's great. It's smarter. But do you realize that the average American that uses ChatGPT is only on the version 3 model day-to-day, Jay? No. I use it day-to-day, and I just stay on whatever model is the latest.

[00:22:41] Whatever they leave is the default. Are you sure? Yeah. Because let me explain the default. The default says we're going to be on 3.3 for most of your questions. When you ask a question that requires greater reasoning that's beyond 3.3, we're going to automatically switch you to 4.something and apply it on the fly, and you'll never know the difference. Yeah, so now... It's called auto mode, and that's the default. And why are we doing that? Because it saves tons of money on tokens, Jay. Yeah, I bet it does. And I don't think there's anything wrong with that. It's not dishonest by ChatGPT. Now, that's the difference with Claude.

[00:23:11] If you choose Claude and choose the latest Opus model and everything, you're spending tokens and money big time. And it's super reasoning, thinking, latest, greatest, and everything else, but it's way expensive. And so most people are saying, let's have the auto mode. And so I asked ChatGPT, what am I on? And it said, well, you're on the auto mode, and by default, you're running 3.3, and we upgrade to 4.something whenever we need to if you have a question that requires it. And I said, well, do I have five at my fingertips? And they said, not yet.

[00:23:38] When you do, it'll be used sparingly, on this variable basis. Now, you can force a certain engine, but by default, that's what it does. Yeah, and on the app it's the same way. It has an option. On default, you're just on instant, but it has thinking, and you can configure it. You can force it to go to a certain version. Yeah, but instant and thinking, you're switching between those engines behind the scenes. Those are the numbers that I was giving you. Yeah. When you say instant, it's 3.something. When you say thinking, it's 4.something right now. Agreed.
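
The auto-mode behavior described above can be sketched as a toy router: score how hard a prompt looks, and only escalate to the expensive thinking model when the cheap instant one probably can't handle it. The heuristic and model names here are invented for illustration; the real routing inside ChatGPT is proprietary and far more sophisticated.

```python
# Toy sketch of "auto mode": route easy prompts to a cheap instant model
# and hard ones to an expensive thinking model. The heuristic and the
# model names are invented for illustration only.

REASONING_HINTS = ("prove", "step by step", "compare", "derive", "plan out")

def pick_model(prompt: str) -> str:
    """Escalate only when the prompt looks like it needs deeper reasoning."""
    p = prompt.lower()
    looks_hard = len(p.split()) > 40 or any(hint in p for hint in REASONING_HINTS)
    return "thinking-model" if looks_hard else "instant-model"

print(pick_model("What year was the first iPhone released?"))  # simple lookup
print(pick_model("Compare these two contracts step by step"))  # escalates
```

That's why a factual one-liner comes back instantly while a multi-step question sits and "thinks" for thirty seconds: different engines behind the same chat box.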

[00:24:05] So when you ask it a question and it takes you 30 seconds to answer, it's using one of those more advanced models. Yes. But, I mean, you're right in that people are day-to-day, they're using probably three. I mean, if you ask it, what year was the iPhone released? That's an easy question. It doesn't need this massive thinking model to get there. Exactly. It just doesn't. So I think it's incredibly smart. Anyway, I love it. I like to keep an eye on tech. Hopefully, what we talk about brings it down to the kitchen table for you and makes things usable.

[00:24:35] Hopefully, it's a little bit of entertainment and fun along the way as well. Spread the word and tell all your friends. NPITechGuys.com is our website. Check out the radio and the video versions. NPITechGuys.com. Brought to you by NetworkProvidersInc.com. Make it a great day, will you? Thanks.