
DeepSeek has everyone freaking out; we'll look at what's legitimately fascinating, what bits have been an overreaction, and the big mistake that made this all possible. Plus, there's some bad news for Java fans.
This is Coder Radio, episode 605 for January 28th, 2025. Hey friend, welcome in to Jupiter Broadcasting's weekly talk show, taking a pragmatic look at the art and the business of software development and the world of technology. Over there checking his charts, it's our host, Mr. Dominick. Hey Mike. Hello. Hey handsome.
Are you hodling, I guess, your investments as the stock market crashes around us?
I'm so dramatic. It's fine. It's really fine. Amazing panic, though. CNBC was on fire yesterday.
I had it up in the background just as I worked, just for the fun. I love it. I love the panic. But while all of that was happening, the news about a Java survey of 2,039 Java professionals globally came out, and nobody noticed because we were all talking about NVIDIA. So before we get into everything that happened, just a quick little detour here.
This report, which I'll have linked in the show notes, it titles itself bravely The State of Java in 2025, put out by Azul, which is a firm that focuses on Java. They report that 88% of companies are contemplating leaving Oracle Java. 88% of companies are thinking about this. Two years ago, as you might recall, we talked about this, Oracle shifted their Java licensing model.
And they went from something that was actually affordable to something that's absolutely ridiculous. And so 42% of the customers surveyed cited the new costs as a reason. 40% cited they wanted to move to open source solutions, which is huge in my opinion. 37% were just discontent and pissed off by Oracle's sales practices.
36% cited uncertainty around licensing changes, and 33% cited restrictive Oracle policies. But 88% of companies are considering leaving Oracle Java. They also cite cloud expenses for their Java workloads as being more expensive than other types of workloads. You think this is legit? I don't know why, but it doesn't ring true to me for some reason.
I mean, like, you can be really discontent with something in the enterprise and continue to use it for a decade.
Yeah. I mean, especially if you're using, like, legacy Oracle Java, it seems like it'd be a pretty big lift.
Yeah. Yeah. Like, I mean, people make their entire infrastructure purchasing decisions around this software.
I mean, I would almost guess that new projects aren't starting with Oracle Java because I feel like we've kind of moved on from proprietary languages you have to license for quite some time now, right? So yeah, sure, if you were to start a new Java project, I think you'd almost certainly use OpenJDK, right? Why wouldn't you?
It's a legacy thing, commercial enterprise solutions, where they get sold a product. This is the requirements of the product. There isn't even a discussion of the Java runtime.
I'm thinking insurance companies, the government, banks, people who have ancient systems. Yeah, sure. Wes makes a good point in the chat. Yeah, they're all the way back on Java 8. Yeah, yeah, exactly. Probably. Sure, they're paying Oracle for the license because they need security updates and maintenance and all that good stuff.
And it probably is extremely frustrating. I believe that.
Yeah, but it's probably one of those situations where the lift to change is just, you know, it's too much.
Right. Well, if you contacted Jupiter Broadcasting and they asked, who's your current ISP? And I said, Comcast. And they said, are you considering moving off Comcast? Well, yeah, 100% considering. Can I? No. Would love to. So there's that. You mentioned the live chat. I just want to give that a quick plug. It's really popping today. Thank you, everybody in there.
We do the show typically on Tuesdays at noon Pacific, 3 p.m. Eastern. I try to update the calendar if we have to move it around for some reason, and then I also mark it as pending in your fancy Podcasting 2.0 apps. Then we have the Matrix chat, the Coder Radio general chat, where you can bang, suggest, and help title the episode. And it's nice. It gives the show a good vibe.
Also, I'll just note this episode has no sponsor, so episode 605 is made possible by our members and our boosters and anybody who uses any kind of affiliate link when we have a deal like with Bitcoin Well or something like that. So if you get some kind of value out of today's show, please do consider sending some value back to us.
But let's talk about why NVIDIA lost almost $600 billion in market cap in one day, the single biggest drop ever for a U.S. company. Its shares plunged 17% on Monday, January 27, 2025, resulting in a market cap loss of almost $600 billion.
And, of course, any companies tangentially related, like data center companies, Oracle, Dell, Supermicro, they all saw 5% to 10% drops at some point during the day. So to be clear here, Mike, what's happened? One group out of China released an open source model. The entire industry had a heart attack for the day.
And I think this underscores a key thesis of ours, that the market has been immensely fragile since the rate hikes began. We never resolved fundamental problems. Big money needed somewhere to go, and when AI was an opportunity, they pumped the hell out of it and papered over the problems, which you can see demonstrated in the data by monitoring the S&P 500.
This is why I believe the S&P 500 has been dominated by the Mag 7 for so long now. NVIDIA's market cap alone is equivalent to around 11 to almost 12% of US GDP. That's more than twice the relative valuation of Cisco at the height of the dot-com bubble. And all of that was shaken because one group released a series of open source models. All right, can we just take that in for a second?
That's a signal. And it shows you how desperately they need this AI bubble.
Yeah.
Now, okay. Now, do you think this is Nvidia-Cisco moment? Or is that being overblown, right? Right during the dot-com bubble, Cisco was the most valuable company in the world. Then its stock fell like 80% after the bubble burst. And there's ironically just parallels here, right? On Friday, NVIDIA was the most valuable company in the world.
And then on Monday, they lost $600 billion. Well, you know, listen, you win some, you lose some. You know how it goes. No? That's not how we feel?
I mean, I don't think it's over for NVIDIA, I guess is what I'm saying. You know, I've been playing around with DeepSeek. I'm sure you got a chance to play around with it too. I sure did. It's good. It's really good. It's good. But it's not going to tear down the US AI companies. Right. I mean, what we saw, I think, is a massive overreaction.
There's just, there's a lot of doubts, and the market just acted on them. I think that's what happened. Because here's what's interesting: I did a little bit of digging into this, and a lot of what is uniquely innovative in version 3 of DeepSeek really surfaced in version 2, around Christmas of 2024. Some of the core stuff, the core innovations, was already announced back in December.
But nobody seemed to care then. And then this week, all of a sudden, we realized it when version 3 came out, and panicked. So here's a little background on the V2 model; maybe we should have been paying attention then. DeepSeek version 2 introduced what are considered two pretty big breakthroughs: DeepSeekMoE and DeepSeek MLA. Now, MoE stands for Mixture of Experts.
Unlike GPT-3.5, which activates the entire model, MoE only activates the relevant parts, or the experts, for a given task. GPT-4 reportedly does this with 16 experts of about 110 billion parameters each. DeepSeekMoE in version 2 improved on this by introducing specialized and generalized experts along with better load balancing and routing.
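To make the mixture-of-experts idea concrete, here's a minimal top-k gating sketch in Python. It's purely illustrative: the toy experts, gate scores, and top_k value are made-up assumptions, not DeepSeek's or GPT-4's actual routing.

```python
import math

def softmax(xs):
    """Standard softmax, stabilized by subtracting the max score."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, gate_scores, top_k=2):
    """Run a token through only its top-k experts instead of all of them.

    This selective activation is where a mixture-of-experts model saves
    compute relative to a dense model that runs every parameter.
    """
    probs = softmax(gate_scores)
    # Pick the k highest-scoring experts for this token.
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Renormalize the gate over the chosen experts and mix their outputs.
    denom = sum(probs[i] for i in chosen)
    return sum((probs[i] / denom) * experts[i](token) for i in chosen)

# Toy "experts": each is just a different linear function of the input.
experts = [lambda x, a=a: a * x for a in (1.0, 2.0, 3.0, 4.0)]
result = moe_forward(10.0, experts, gate_scores=[0.1, 0.3, 2.0, 0.2], top_k=2)
# Only the two highest-gated experts actually execute for this token.
```

The load-balancing work the hosts mention addresses a known failure mode of exactly this scheme: if the gate keeps picking the same few experts, the rest of the model sits idle.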
Then you combine that with DeepSeek MLA, which tackled the memory issue in inference. Typically, memory use skyrockets due to the context window; I see this on my laptop. Each token requires a key and a value. DeepSeek MLA, also known as multi-head latent attention, compresses the key-value store, so it significantly reduces the memory demands during inference.
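The key-and-value memory point is easy to see with back-of-envelope arithmetic. A sketch, assuming illustrative 7B-class model dimensions (32 layers, 32 KV heads, head dimension 128, fp16), not DeepSeek's actual configuration:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_val=2):
    """Rough size of a transformer's KV cache during inference.

    Each token in the context stores one key vector and one value vector
    per layer, so memory grows linearly with the context window.
    """
    per_token = n_layers * n_kv_heads * head_dim * 2 * bytes_per_val  # K and V
    return per_token * context_len

# Assumed 7B-class dimensions with a 4,096-token context at fp16:
cache_gib = kv_cache_bytes(32, 32, 128, 4096) / 2**30
# The cache alone is about 2 GiB here, before any model weights.
# Compressing the KV store, which is what MLA does, attacks exactly this cost.
```

Double the context window and the cache doubles too, which is why long contexts hurt so much on a laptop.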
So then version 3 comes out yesterday and builds on top of all of that. Those were some of the big breakthroughs, but version 3 adds an even better approach to load balancing, which further reduces communication overhead, plus multi-token prediction in training, which made it a lot cheaper to train.
And then you combine the fact that the folks behind DeepSeek completely bypassed using CUDA, went to a lower-level programming language, gained more optimizations out of their H800s, and they had a cheaper way to train this thing with a more optimized training path.
Yeah, it's actually feasible that they trained this thing for around $6 million, because DeepSeek released all the data; I'll link to the report from them. In there, they report that training the model required 2.788 million H800 GPU hours, and at $2 per GPU hour, the total cost came in at $5.576 million.
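The arithmetic in the report checks out; here's a quick sanity check (note the $2/GPU-hour rental rate is the assumption the report itself uses, not a measured cost):

```python
# Figures as reported for DeepSeek-V3 training.
gpu_hours = 2_788_000      # H800 GPU-hours from the report
rate_per_hour = 2.00       # assumed rental cost per GPU-hour

total_cost = gpu_hours * rate_per_hour
# 2.788M hours * $2/hour = $5,576,000, i.e. the "around $6 million" figure
```

That multiplication is the entire basis of the headline number, which is why the caveat that follows, about what the figure excludes, matters so much.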
So around $6 million to train this version 3, which is massive because, you know, ChatGPT was billions of dollars for the same thing. But what we all have to kind of keep in mind
is that the $6 million does not include the costs of training version 1 or version 2 of DeepSeek, or any of the models like Llama, or even ChatGPT, that they also used to help train this thing, and the investments that went into all of those, right? So the $6 million figure that everybody's freaking out about is just for version 3.
And a lot of what was done there could be done anywhere. That's my assessment, at least. What do you think, Mike?
Yeah, I mean, you know, I did play around with it some. It's not terrible, right? And I particularly like that it's open source, MIT licensed, so you can run it on your machine or, you know, your servers. I think you make a good point about the money, though, because I think that's where the rubber meets the road, or the cash meets the cashier's machine, right? Yeah.
I have, and I think you've agreed with me pretty much, we've been on our hobby horse here that the math doesn't seem to be mathing with the AI investments.
Especially when you start talking like $500 billion to build a bunch of data centers just to train these things.
Hang on, Chris. Hang on. So no chevrons are encoding? So we're just going to – all right, screw it. Send SG-13. We don't give a shit about them anyway.
I mean, isn't it kind of ironic that like two days after all that executive action stuff – Is it ironic or was it intentional?
I mean, come on. Right. That's what I'm asking. Well, we can't.
I mean, it's a little shot across the bow, a little bit.
You know, it is the year of the snake, Chris, and you got to strike.
Also, there's some irony that DeepSeek is put together by, like, an investment firm, like a banking group, and OpenAI is supposed to be this open, community-beneficial thing? And isn't there some irony?
Wait, Chris, we have to be positive on DeepSeek, which I keep calling DeepSync, and I had to proofread my stupid blog post like 10 times because of that. CB in the chat asked DeepSeek what it thinks about Coder Radio. I think you should read that out. Okay, you ready? Go.
Putting all this together, I think the Coder Radio podcast is likely reliable because it features top-tier hosts with engaging content that addresses various tech topics in informative ways. The consistency and depth of episodes make it trustworthy.
The Coder Radio podcast from Jupiter Broadcasting is highly reliable due to its expert hosts who deliver engaging, detailed discussions on diverse tech topics. There you go. We have consistent programming, rich content, etc.
If you can't trust the Chinese AI about your podcasting choices, honestly, who do you trust? You got nothing. So I like that. But kidding aside. No, I mean, it's open source. Hold on. Hold on. Hold on.
Hold on. Hold on. You joke. But isn't that always going to be a competitive downside for DeepSeek in the U.S. or any NATO country or anywhere in the West really? Oh, yeah. What U.S. government is going to want to bake in DeepSeek versus something that a U.S. company created?
Oh, no. I wrote up my little brief for executives, business people that I send out and I post on the Alice blog.
Uh, yeah, I mean, the big drawback to DeepSeek is that it is, in fact, Chinese, right? And there's always going to be, I would say until things calm down, hopefully they do, a kernel of doubt, right? You know, we don't have it in the doc today, but there's a VS Code fork out of China using their own AI-powered stuff too that I just find highly suspicious. I don't know, right? Like this?
Yeah, of course. I think being MIT licensed makes it a little more trustworthy. I'm not sure that I would want to use a, are we calling it AI as a service now? Is that the term?
I mean, I could go with that.
It's a term I've been using, but I don't know if it's like a thing.
Yeah. Are you talking like AI that you reach out over an API, it does something for you and spits back the results?
Yeah. I mean, I think there's a big difference between, you know, I am sure there are hundreds of people auditing this code right now that's on GitHub.
Yeah. This is what I'm wondering. And my question to the audience, if they want to boost in, is because it's MIT licensed, some of the training weights are open. Is that enough? Is that sufficient enough for US companies and institutions to eventually trust something like this? Because you see similar trajectories in Linux. There's countries and groups that contribute code to the Linux kernel.
The NSA contributed SELinux, and SELinux is used throughout the cloud industry, even in countries that are not the United States. And they seem to have made peace with it because it was open source. So I guess my question to the audience is, is that enough for DeepSeek to be used in U.S. companies and U.S. government business?
Because if it could get accepted, it would make the cost of development for these things a lot cheaper.
So why wouldn't this follow the trajectory of all other technology trends, since we have been doing the show and even before: unless you are in a regulated industry, i.e., the military, certain banking things, certain medical things, maybe certain avionics and necessary infrastructure, you always end up going for the cheapest solution.
We have a decade of me bitching and moaning about people not wanting to do Objective-C native development or pay for it, right? So you're suggesting this was probably inevitable. I am pretty skeptical of our friends over in China, right? Because, you know, why wouldn't they do everything we've done, basically? Yeah.
But something like this, that's MIT-licensed, I mean... Oh, can I go way out on a Naboo limb here?
Yeah.
I think this is the first thing out of, quote, AI that is potentially, and I'm not, like, putting my Gungan flag down, but potentially, an opportunity for independent developers to actually leverage this in their own products and their own IP, where they're not paying what is effectively a tax to OpenAI.
I know someone's going to scream Llama, Llama, Llama Red Pajama at me, but Llama's not that good. Llama, it's just not there. Yeah, it's not as good. In fact, I A/B tested DeepSeek, not Sync, DeepSeek, against Llama in that beautiful LM Studio app you shared with me. And it's just, there's no, it's not a contest. It's like the New York Yankees playing a Little League team.
I've got to underscore this because, you see, OpenAI was really kind of fluffing their feathers about being the ones with the special sauce that Anthropic hadn't yet copied. They had this reasoning model that you could charge $200 a month for. Right? I mean, Meta is working on one. They've released papers for a Llama-based reasoning model.
Google also has released an early prototype for a reasoning model. But OpenAI has shipped it. And now DeepSeek just shipped one as well. So it kind of deflates the OpenAI hype in multiple different ways, which I think is something we're just going to have to watch play out. It's hard to make a call on that.
Is it just me being pedantic, though? Like, isn't the OpenAI problem just arithmetic? That they're just spending too much money at an insane rate.
Well, that seems clear now. The question before was, the idea was, we do all this spend now, so we grab the land, so we own the beach and build out the services. Yes. Which is why you saw Sam play out his moat strategy, which you and I talked about extensively, by scaring the crap out of the White House and leaders around the world.
Which, yeah, which, I mean, people who listen to the show, which I think is mostly regulars, right? They know that I personally hate the idea of "let's have the government make us a monopoly because safety." Let's not forget, I mean, I am not, like, you know, turning traitor here.
But, you know, the American AI companies are the ones who brought us African-American Nazis because they were so worried about, you know, media criticism and bullshit like that. They way over-indexed on safety, in retrospect. Well, not only that, they spent like drunken sailors. They over-indexed on political nonsense. They're too busy fighting with Sarah Silverman, which, I mean, come on.
When China just – I mean the deep-seek people and God knows what else they've got going on over there are – I think they're going to eat their lunch. Yeah, and maybe Trump will regulate some safety. Trump might put up a moat. But it being open source, an MIT license, is going to make it really hard for the government to put up a moat to prevent this.
See, I don't think you're quite there all the way yet. And I think the U.S. tech companies and the federal government need to get their heads around this. What happened yesterday, the $600 billion that was lost, can be put squarely on this moat-building effort and the Biden White House executive orders around NVIDIA chips.
Sam fearmongered his way into the White House, scared everybody about the powers of AI. They coaxed the White House into taking action before Congress because Congress is too slow. Biden watched a movie and supposedly saw AI fighter jets and got scared about this is out there. You can search it. He watched the latest Top Gun movie. Seriously? Go search.
Biden watches Top Gun movie and then signs executive order banning H-100s from going to China. That's their story. That forced these China builders with these kind of constraints where they had to go to optimize. They had to work with what they could get access to, like the H800s.
China was forced, these developers in China were forced, to bypass CUDA to squeeze even more efficiency out of this thing. So now they're even getting around some of NVIDIA's moat. It forced them to come up with a cheaper way to train this thing. And then two days after Trump signs his new White House EO on AI, China drops DeepSeek version 3/R1. And NVIDIA's stock drops $600 billion.
I think you can draw a straight line between our attempt to regulate software development and safety and technology and chips from the White House, and this moment where the market is shocked that this was possible, because a lot of it was backed up by two arrogant assumptions made in the United States. One, there was, yeah.
One, I think we have had an arrogant assumption that China's development capabilities do not match our own in this area. Ridiculous. Two, I think there was a false sense of security that this stupid EO that banned H100s from going to China was going to constrain them and prevent them from innovating.
Because we've always been locked in on the mindset that you've got to build more and spend more. And NVIDIA has been happy to scale up and create bigger and better chips. So we never had to try to make it work on smaller chips, worse chips, slower chips. They were always there providing us the next chip, providing that next pump for the stock market.
And we tried to regulate the entire thing from the top down. And what we got is DeepSeek, version 3, and $600 billion shaved off NVIDIA.
Which, if it's true that they found a way to not have to leverage CUDA, and they're not just lying and somehow black market imported the GPUs, which I kind of feel like they probably did find a way around CUDA. That long term is super bad for NVIDIA, right? Their moat has been, you know, four letters, CUDA.
Especially because a lot of what, you know, makes DeepSeek hum here is inference, and NVIDIA's focused so much on building chips for training. Now, they do, of course, have inference chips, but you know who else has cheap inference chips? Your MacBook M4. Oh, and by the way, because of its shared memory model, it has access to more RAM than the NVIDIA inference chips do.
I don't even know which M I'm running this on. Real-time double-check. My M2 Max, I'm going to tell you, DeepSeek runs real fast on this thing.
Everybody was laughing that Apple stock didn't go down yesterday with the crash, because Apple's AI is so bad it's not even in the AI crash. I think the actual reality is the market was shifting to the realization that local, cheap AI is possible, and that inference chips like the neural processors, with a shared memory system between the GPU, the disk, and the NPU, are actually going to be extremely competitive in this environment, to the point
that I can run DeepSeek version 3 on my M1 Max. It's kind of slow, but I can do it. No, it flies on my M2, granted I have stupid RAM, but yeah.
And I didn't need a brand new NVIDIA GPU at all. This is, and once again, Uncle Tim, right, our 3D chess player of the day, somehow comes out on top no matter what happens.
I mean, in the near term, it does seem like there's going to have to be some reconsideration of what's the demand for NVIDIA, although I think they're rebounding. It does feel like a bit of a shakeup.
Well, I mean, this is where I'm going to turn it to you. I think all that's happened here is we guaranteed that the federal government is going to put in a new keg and let's keep the drunk money flowing. Maybe, yeah, and help everybody build, build, build. Also, that Stargate project, I decided to foolishly read the plan. Or the publicly available plan? My level of disappointment is pretty high.
So, Chris, for the amount of money they're spending, right? And you know it's going to be more. How many jobs, like real jobs, full-time jobs, do you think they're creating directly? Give me like a high-low.
I'm going to say low of 300, high of 50,000. 57,000.
1,000?
57,000? No, no, no. 57. Just 57. The gains that they're claiming are tertiary stuff like, oh, you know, the bar in town will do better, there will be more people buying houses. Because it's data centers, and they're going to be largely automated. And of course, like any other functional data center, they're going to contract out most of their support staff, right?
And by support staff, I mean people who aren't tech heads, you know, cleaning, maintenance, all that good stuff. Right. Yeah. Remember Foxconn in Wisconsin? Poor, innocent Wisconsin that got just rolled. Yeah, this is going to be Foxconn 2.0.
Probably. Now, you know, Mike, I think what happens, my bet would be, is that Microsoft is already going to spend 80 bill on data center infrastructure. SoftBank.
Satya came right out and was like, yeah, I was already doing this.
I know. Yep. And so is SoftBank. So is Oracle. And, you know, if OpenAI can afford to, they will too. Will it equal 500 bill total? Probably not. But the money comes from the companies, not from the federal government. So we'll see how far they get. Which is like the one good thing about this program. It's Stargate.
So the tech CEOs reacted to DeepSeek, and Satya said, Jevons paradox strikes again. As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of.
So the Jevons, or however you say it, paradox in economics occurs when a technological advancement makes a resource more efficient to use, thereby reducing the amount needed for a single application, but total consumption of the resource ends up rising anyway. Yeah, okay. Sam Altman responded about DeepSeek, saying, DeepSeek's R1 is an impressive model, particularly around what they're able to deliver for the price.
We will obviously deliver much better models, and also it's legit invigorating to have a new competitor. We will pull up some releases.
Oh, I'm sure he's invigorated. I have no doubt. I mean, this is like the kind of thing you say when you have to say something and you're just deeply pissed off.
I also like the term "we will pull up some releases," which I believe is a remix of a Satya Nadella term from when Microsoft was down. And what they're doing is basically just releasing beta versions and then shipping patches after the fact to fix it up. So I don't know if that's what he's talking about, but... If I were Sam, I would use this moment to concentrate my power even more.
So I would, I mean, the problem is you've got to get through your ex-boyfriend, Elon. But once you're done with that, because if you can get near Trump... Seriously, if I was Sam, I would find whatever the Top Gun equivalent of the movie Biden saw, and even if they were Russians in the movie, just paint a Chinese red star on the plane, do some kind of filter. Yeah.
And to scare the shit out of Trump about China and be like, they said Trump is, like, ugly and orange. I would really just mess with him and just be like, hey, you should, like, not let them in the country.
Which is what he did to startups. So, hey. I think this is Sam's opportunity to say, I told you we needed to move faster.
Wait, wait. Can I back that out a little bit? It's kind of funny, right? It's just what OpenAI did to startups, by saying, if you let them have the technology and you let them compete with us, they're going to post a bunch of racist crap. Yeah. Oh, can't do that. So now he's just got to change his tune and be anti-Chinese.
I think Sam says, we should have been moving faster. I told you we needed to productize this. I told you we needed to worry less about safety. And he uses this to make his case.
get even more powerful. Oh my God, you blame Biden. That's the move. You say, Mr. President, I was going to move faster, but the damn Democrats with all their stupid XYZ anti-racist woke stuff made me hold back. That's the move. That is the S-tier move that you make. Because then Trump would be like, oh, Sam, I'm sorry they did this to you. Oh, my God.
And then Trump gets to tweet or social truth, whatever the hell he does, come out and say, I'm invigorating our AI market, our AI companies, and we're not going to hold them back with woke nonsense.
Right. Woke nonsense is no longer going to hold American leadership back. Something like that, right?
Sam, the invoice for consulting is on its way.
And, you know, that also would line up really nicely because last weekend, Sam, after the whole trip to the White House to talk about Stargate, Sam tweeted on – looks like – Oh, did I save it? I shared it with you is his tweet about how he's. Oh, yeah. Here we go. Here we go. I don't know. I didn't save the date, but this is a few days ago. Six days ago, I think.
Watching @POTUS, the President of the United States, more carefully recently has really changed my perspective on him. Of course. Shocker. I wish I had done more of my own thinking and definitely fell in the NPC trap. Oh, God. I'm not going to agree with him on everything, but I think he will be incredible for the country in many ways.
Now, previous Sam tweets about Trump were more about how he's going back to work to take him down, complimenting Reid Hoffman for trying to take out Trump. He said, you won't believe the things I'm building to help stop Trump. He's tweeted very aggressively against Trump. And now you're right. All of a sudden, he's like, I didn't think about it right. I didn't carefully observe him. I was an NPC.
He could totally parlay that into, I didn't want to do it, Mr. President. The Biden White House made us be woke.
Mr. President, you have shown me the light. I was lost. I was making, I mean, really, treat it like a religious conversion. Oh, God.
I would at least get a laugh out of it. It'd be pretty gross to watch, but I'd at least get a laugh out of that. You know, that'd be the funniest way to go. Would you get a laugh? Watching Sam shapeshift like Odo on Deep Space Nine is amusing to me. You know, it's even better special effects now.
That was a fantastic... His hair. Did you see his hair with Satya today? It's doing some Odo stuff there. I don't know what's going on, but it's like... His hair, for folks listening, this is obviously a podcast, not visual, is straight up as though he's in a battle. Maybe it's like a battle headdress kind of situation.
Mm-hmm.
Four score and seven boosts to go. All right. Well, let's get to the boosts, and guess who's back. And he's back in a big, big way. It's our podcast with 125,000 sats. Oh, yeah. Beating last week's episode right off the top here. He writes, greetings, Mike and Chris. I have quite the corporate IT policy to share. Oh, good, good, good.
Ha ha!
At least they didn't make you watch a Top Gun movie to scare the shit out of you.
He says, thank goodness SSH still works so I can log into the machine. Oh my goodness. That's a good one. Thank you, our podcast. That makes me feel better. We have a much laxer IT policy here at Jupiter Broadcasting than that, so appreciate that.
And it's good to hear from you. He's doing all kinds of data stuff over at our podcast and on LinkedIn. We're connected. It's kind of – I don't know, man. I think he just needs to get his Hollywood video card and send Trump a movie. I'm sorry. I can't get over the fact that they showed Biden a movie and he did an executive order. Yeah.
I don't believe it. I think it was a cover story.
You know what the sad part is? I believe either of them, Trump or Biden, would totally do that.
Yeah, yeah. So I do have it right here. Speaking to the Associated Press, Deputy White House Chief of Staff Bruce Reed recalled that Biden had grown concerned over the use of AI to generate fake images of himself and clone users' voices. It was during a screening of Mission Impossible Dead Reckoning Part One at Camp David.
It's even worse than Top Gun.
Yeah. Sorry. I thought it was Top Gun. It was a screening of Mission Impossible Dead Reckoning Part One at Camp David that particularly alarmed the president. It is a Tom Cruise film, so I guess I get credit there.
I was going to say, this is TomCruise.ai. There we go.
Yeah. He says, in the film, Cruise and his Mission Impossible Force team race against time to contain the Entity, which is a Russian-made AI that turns on its creators and sinks a next-generation submarine, killing all on board within the first few minutes of the movie.
Whoever showed him this film did not do it by accident. There's no way.
Reed, again, this is a staffer of the White House, Reed told the AP, quote, if he hadn't already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about. In the words of Joe Biden, come on, man.
Really? A Tom Cruise movie? Come on, man. Isn't that great? You have, like, all the classified information. You could wake up on a Tuesday and be like, all right, guys, here's what's going on in Area 51. Instead, you're like, Tom Cruise is going to show me the way.
Hey. Could have been worse. He could have watched Discovery. Remember the season with Control? That would have really freaked him out.
Oh, my God. Oh, no, no, no. How about... really anything like the actual Stargate, where they take over people's bodies? That's pretty rough.
Yeah, yeah. Anyways, our podcast, thank you very much for the Baller Boost. He's a good guy. He's a real good guy. No, he's a great guy. And not only is he here live, but he is also one of our boosters at the top of the charts. It's Adversaries 17 with 70,000 sats. I hoard that which your kind covet. Let's hear it, good buddy. And Adversaries writes, here's a boost to boost your day.
I've been loving the source feed. Just finished the episode, and I thought those boost numbers needed to be a little higher. Thank you. We really appreciate that. The show really needs the love, so thank you very much. Thank you very much, Adversaries. Mr. Borgander comes in with 8,999 sats. Here's some more sats for you. I must not stink. Stink is the showstopper.
Stink is the missing episode of Coder Radio that brings total obliteration. I will face my stink and will shower. Only sats will remain. Shower listener, I take it.
Thank you. That might be our first poem written into Coder Radio.
Yeah, and I'm going to believe that it was written in the shower. I'm going to believe that.
Preferably with an Apple product that then was permanently water damaged.
Producer Jeff comes in with 10,000 sats. Boy, they are doing a lot with mayo these days. Here's a dumb corporate policy. My work gives us a locked down iPhone and Windows laptops. We use the Microsoft suite with lots of teams. The policies on the iPhone do not allow me to copy text from Teams that I send myself to paste into the company-built documentation app.
But I can copy text from the documentation app, and I can paste that into Teams. Wow. As for the low boost, people must be recovering from the holiday debts. Could be.
So wait, wait, wait, wait, wait, wait. You can copy confidential information out of the company.
Into Teams. Yeah, or into a Teams app, which you could be in anybody's team, I suppose, right? Yeah.
But you can't go from Teams into the documentation app. Once it's on the clipboard, couldn't you just paste it elsewhere?
Yeah.
But you can't copy from Teams to the doc, which presumably, since you said custom app, is some sort of like lockdown secure document viewer app. Okay. Well, that's pretty stupid.
It's B-Ed. As the kids would say, Jeff, it's B-Ed. That's real B-Ed. Yeah, that's real B-Ed. Thanks for the boost. Red 5Ds here with a row of ducks, 2,222 sats. I'm behind on my podcast, so not sure if this has been mentioned yet, but Chris, you were asking about remote Windows services that you could RDP into for occasional use.
Well, unless you need more powerful hardware than what you have, you could use this Docker image that lets you run Windows in a container and RDP into it. I have thought about that, Red. I have thought about it. There's also one for macOS. One died, and I think one still remains out there. But I feel like if I'm going to use a Windows box, I probably need the hardware acceleration.
I don't know why else I'd be using it. I don't really have a reason to use Windows, so perhaps that's why I'm having a hard time conceiving of it. But I like the idea. I flip a switch and I have fully hardware accelerated Windows. I turn it off and it goes away and I never have to think about it again.
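For anyone curious, the container Red is likely describing works along these lines. This is a hedged sketch based on the dockur/windows project; the image name, ports, and options shown are the commonly documented defaults, not something confirmed on the show:

```yaml
# Illustrative docker-compose sketch for running Windows in a container.
# Values here are assumptions drawn from the dockur/windows README.
services:
  windows:
    image: dockurr/windows
    container_name: windows
    environment:
      VERSION: "11"          # which Windows version to install
    devices:
      - /dev/kvm             # hardware virtualization, Chris's acceleration point
    cap_add:
      - NET_ADMIN
    ports:
      - "8006:8006"          # browser-based viewer
      - "3389:3389/tcp"      # RDP, as Red suggests
    stop_grace_period: 2m
```

The flip-a-switch workflow Chris describes would then just be `docker compose up` and `docker compose down`.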
Essentially what you get with NVIDIA GeForce Now if they just didn't force me to play video games. All right. Here's our last one, Mr. Dominic. It's DG at PTC.com. With 5,000 sats, that's a Jar Jar Boost. You supposed! I didn't realize that prompt engineering was becoming such a thing. So many AI solutions are just really an interface to an LLM with a clever prompt wrapping each query. Yeah.
There are subreddits dedicated to this. He says you can go search GitHub for Ollama OCR, for example. And you know what? You really get better results with a properly engineered prompt. You do. It matters. And some of my best stuff is things that I've found either on social media or on Reddit or somebody sent it in and said, hey, I've tried this. And then I tweak it for my use case. Yeah.
And also the more context you can give them. Yeah, DG, it is a whole thing. And just like for a while with search engines, there were people that knew how to compose a really good search and could get results out of older search engines. And so they had an advantage over other folks. I think we're there with prompt engineering. But thank you for the boost. Appreciate it. All right.
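To make DG's point concrete, here is a minimal sketch of the "clever prompt wrapping" pattern: a structured prompt builder plus a call to a local Ollama server. The instruction wording, model name, and helper names are illustrative assumptions, not taken from any specific Ollama OCR project:

```python
# Sketch of an "LLM with a clever prompt wrapper" in the Ollama OCR style.
# The prompt text and model name below are illustrative, not from the show.
import json
import urllib.request


def build_ocr_prompt(task_hint: str = "") -> str:
    """Compose a structured prompt: role, constraints, then output format."""
    parts = [
        "You are an OCR assistant. Transcribe all text in the image exactly.",
        "Preserve line breaks, tables, and reading order.",
        "Do not add commentary; output only the transcribed text.",
    ]
    if task_hint:
        # DG's point: extra context tends to improve results.
        parts.append(f"Context for this document: {task_hint}")
    return "\n".join(parts)


def query_ollama(prompt: str, image_b64: str, model: str = "llama3.2-vision") -> str:
    """Send the wrapped prompt to a local Ollama server (assumes one is running)."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "images": [image_b64],  # Ollama accepts base64-encoded images here
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The value these wrappers add is almost entirely in `build_ocr_prompt`: the same model with a vague prompt gives noticeably worse transcriptions.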
Thank you, everybody who streams sats as well. We had 14 of you just stream sats as you listen to the pod. And collectively, you stacked 9,526 sats. When you combine that with our six boosters, we stacked a grand total of a much, much better 230,747 sats. Of course, a portion of that goes to editor Drew, myself, the network, and then we also include Mr. Dominic. So all of that goes.
And, you know, from time to time, I also will split out to like an open source project or, you know, a foundation that's raising funds. So from time to time, we do that. We also include some of the podcasting 2.0 developers and the podcast index as well. If you'd like to get a new podcast app, just go to podcastapps.com. There's lots to choose from.
I think the easiest ways to get started are just linked at the top of our show notes. And if you got some value out of this episode, we really do appreciate you sending a little bit back to us. Boost! Thank you, everybody who participates. Now, back to the show, Mr. Dominic. You seem to have had quite a bad time.
And I didn't catch all of it, but the keywords I got were: something involving a CI/CD pipeline you were using, something that was ridiculous, and basically a total non-starter for the type of work that you're doing. And I was like, all right, don't tell me any more. Save it for the show.
Yeah, yeah. Oh, man. So GitHub, you know, they have their GitHub actions, and I guess now they're doing forced updates on them, which, you know, you can kind of see why, right? I mean, I'm not trying to be a total curmudgeon here, but I had a situation where an old client called, hey, we want to change something a little, you know, can you just do that for us real fast?
I haven't deployed to this thing in forever. They've kind of been, you know, it's behind their firewall. They're doing their own thing. They're running it, right? It's just running. So I'm like, sure, no problem. Little did I know. How hard could it be? This is a slam dunk, right? Wrong. Uh-oh. I had to rewrite my action script. Now, it wasn't so bad.
Because I got lucky: the ones I had were fairly minor, and with a straight upgrade they still worked. But there is definitely a real possibility, and I saw people complaining on Reddit about this, that your stuff just breaks and you can't deploy anymore. And it doesn't warn you. It just doesn't deploy. It refuses, right?
Anybody familiar with the GitHub Actions UI? Right when you push, if you open the GitHub Actions UI in your browser, it tells you straight out: I'm refusing to run. This version is deprecated. Blah, blah, blah. You were warned, basically.
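For reference, the kind of forced update Mike hit usually means bumping pinned action versions in the workflow file after GitHub retires old majors (for example, versions built on deprecated Node.js runtimes). A hedged sketch; the specific actions and versions here are illustrative, not Mike's actual workflow:

```yaml
# Illustrative workflow fragment. Retired major versions of first-party
# actions stop running entirely, so pins like these get bumped in place.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # was @v2, since deprecated by GitHub
      - uses: actions/setup-node@v4      # was @v2
        with:
          node-version: "20"
      - uses: actions/upload-artifact@v4 # early majors were shut off outright
        with:
          name: build
          path: dist/
```

Usually the inputs carry over unchanged, which is the "straight upgrade" Mike got lucky with; the painful cases are actions whose inputs or behavior changed between majors.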
Okay.
Okay, so that is a maybe viable thing for companies whose product is their software, or when you have a valid maintenance contract, which me and this client do not.
Uh-huh, yeah.
So it put me in a horrible position. I did fix it, but then I had to go be like, so this is going to be more money, right? I had to call them and be like, you've got to pay me more money. Not fun. Not a good conversation. I mean, they were fine. I've got to pay you more money? Why exactly?
Well, because, you know, the quote unquote open source thing we're using on GitHub decided to just like deprecate that version. We had to do an in-place upgrade.
Yeah, it's like not our fault, but it's our fault because we used it, but it's not our fault that they changed it and we can't just fix it for free.
Yeah, I find this incredibly irritating because, you know, I know there'll be the alpha nerds out there who are like, well, you know, you got to keep your stuff up to date. Have you ever sold anything yourself is what I would say to that person. Because people, particularly non-technical business people, hate paying for maintenance contracts.
Because it's true, like if you've done a good job and there's not a lot of issues, they're paying you every month or every quarter to do nothing most of the time, right? Just routine updates and, you know, checks and a little bit of maintenance here and there. But what happens if you don't have one is you're either doing the work for free as the vendor, right? Updating to match stuff like this.
Or when they want to change something small, it's a bigger job than anybody would expect.
Yeah.
Which was the case this time. So I don't know. I think GitHub, which is weird because they're a Microsoft company now, really needs to... sure, put up the nasty warning that this is deprecated. And that's what I wish they had done. Like a big red warning. You can pump it into my Slack, because that's how I do my CI deployments.
And then I would take that warning and go to the customer and say, we really need to update this, and this is what it would cost to do that.
Hey, you've got 90 days or 180 days and this is going to quit working. When should we schedule this? Instead of just leaving me in the lurch. Yeah.
Well, no, no, no. No, no. They did that, right? But nobody had deployed anything for 90 days because they were just running it. Right, right. It should run indefinitely. And the warnings should just become increasingly aggressive until you update. Use the magic S-word, security, right? That is the way you get business types to upgrade things. Just say security over and over again.
I mean, it wasn't a big job, right? It took me like an hour, but it's just, I hate this kind of thing. Maybe I'm a little scarred from my bad old days with my Mac app.
Well, when I try to put myself in the shoes of calling this customer up or sending them an email and being like, surprise, here's something you didn't expect that's going to cost you money, that's a difficult position to be in. It's a bad conversation. Right. As a business person, yeah. Oh. I don't like that. It makes me feel uncomfortable just thinking about it.
Yeah, it's just not, you know, especially in today's age, there's always like SaaS salespeople sniffing around looking to get in there. And yeah, I don't know. I think GitHub needs to maybe go to, not Cupertino, go to Redmond and talk to some of their Microsoft colleagues about legacy support.
Yeah, yeah, enterprise. All right, well, Mr. Dominic, is there anywhere you want to send the good folks before we scoot?
Go to alice.dev and let me know if you want anything automated or data management.
That would be my suggestion: alice.dev. Go there once a week, you know, if not more often. Make it just part of your web routine. Maybe even put it in your bookmark toolbar: alice.dev. You can find me on the wild side of the internet at chrislas.com. That's my handle on Weapon X as well, if you do that. The show over there is at Coder Radio Show.
And links to what we talked about today, those are over at coder.show slash 605. You're going to find our contact form and our RSS feed there as well. And ways to join our live chat, Matrix chat, all of that, which are just extended ways to enjoy the show. Take it a little bit further. Of course, we love it when you boost the show and support us as well. And a big shout out to our members.
Now, I hope you join us next Tuesday, or Wednesday if you get it via the RSS feed, because we will be right back here next week.