The insidious undercurrents threatening to crush open-source AI projects, plus our thoughts on Microsoft's "big changes" to Windows post-CrowdStrike.
This is Coder Radio, episode 588, recorded on September 17th, 2024. Hey friend, welcome in to Jupiter Broadcasting's weekly talk show, taking a pragmatic look at the art and the business of software development and the whole world of technology. My name is Chris, and joining us, ready to go, probably fully caffeinated, I don't know, it's our host, Mr. Dominick. Hello, Mike. Meesa, bye.
Now, Jar Jar, can you put Mike on, because I've got a question for him. Okay, but Mike's a little sad. Oh, what happened?
Get out of there. Well, so I don't know if you know that Florida is crazy. I've heard. You know, tons, tons of our schools have had shooting threats or otherwise violent threats since Friday. Oh, like just since Friday? Since Friday. Yeah, tons. Like it was – my little town here was in like lockdown mode. What's happened in the last few days? Do you know?
What's going – I mean – The one nearest to me, where one of my stepkids goes to school, and it's the same district as my youngest: a kid went on the internet, posted a threat that he intended to shoot the school up, right? Yeah.
They don't like that.
They don't like that. Right. Of course not. Right. Correctly, they respond pretty aggressively. Well, as the sheriff's department is clearing the school, this person decides, I'm assuming it's a boy, that he is going to follow the media, the social media, the local news, and taunt the school and the sheriff. Oh, man. You're kidding me. Very poor decision. Yeah.
In a county over, an 11-year-old was perp walked and arrested because that sheriff also had the same experience. It was some kind of meme or something going around, some coordinated – I don't think it's like a conspiracy, but like something in the – just like in the air, right? Like a coordinated thing. It's a copycat type syndrome perhaps?
Copycat because of what happened in Georgia a couple weeks ago or – yeah, a week and a half ago. And an 11-year-old did a similar thing. He claims to have been joking, right? The sheriff of that county, a different county than the one I live in, found him and perp-walked him, the whole thing: booking, pat-down, in the cage, all of it, on tape, and put it on social media. And the kid is 11.
I have many thoughts that would get us into political water here. I would lose a number of Republican points. It's too much in my opinion.
Oh, man. This is really awful. As a dad, this is just really hard to hear.
This is terrible. The kid is stupid. He deserves to be punished. Publicly named. Oh, wow. Public video, parents named, terrible thing. The school in my district that my kids go to, they are now introducing mandatory backpack searches every day. Metal detectors. It reminds me a lot of the TSA or jail. Yeah. This is the last thing and then you can go. But the –
Because we have all these Facebook groups that are like the local – the parents, the school, whatever. That's like the community, right? The baying for blood of – granted, this kid is wrong and should definitely be punished for making these threats, right? He should absolutely be punished. But the just like Roman Colosseum level of bloodlust here is –
from frankly soccer moms, is disturbing as hell to me. And the willingness to turn school – which is hard for some kids, right? High school wasn't the best experience for me either; just going to say, I was a nerd, I hung out in the computer lab – into a prison-like environment with security and metal detectors is wild. It's just crazy to me.
So one, aggressively spy on your kids, folks, because they may be joking. Like I got to say, that poor 11-year-old, again, he did a very bad thing in that other county. He probably was joking, right? Also, he's probably not capable of doing anything. But his life is – I mean this – he better change his name if they don't send him to juvie.
Yeah, we've had some incidents in my kid's social group where somebody makes a stupid comment in like a group chat, and one of the kids in that group chat reports it to a parent. The parent reports it to the school, and then the next thing you know, it's a whole incident, and that kid didn't expect that.
He just thought he was – he was saying the forbidden thing, which is so funny when you're a kid.
Oh, I mean it's like cancellation, but with actual legal weight behind it now. Yeah. Now, having said that, something – obviously these kids need help and to be punished, right? To be clear, I'm not super pinko commie. Like, yeah, you do something, you make a threat, you're going to get the hammer. But maybe being a juvenile, and in this case 11, you shouldn't be paraded publicly, right?
You should have a chance to reform yourself.
Now, serious question. Does this set the context for some of the comments from Larry Ellison last week about his omnipresent AI cameras that will ensure good behavior?
Well, so I think it does, right? So this is the problem, right? Everybody makes mistakes, but in particular, children make mistakes. That's kind of – we call that growing up. And can we be old men for just a minute? I feel like we are already. I was a huge nerd in high school, right? I got picked on. I had a hard time. And then one day, somebody said something gross about my mother.
I turned around and punched him in the face. I'm a little younger than you. So they took violence seriously back then. So the cops got called, but I was okay. Nothing really happened. The cops were like, this is stupid. It's a fist fight. To me, that is part of growing up, right? Boys in particular, they get in fights. They insult each other. They do aggressive things to each other.
They say aggressive things to each other. And it's bad behavior, no doubt. But is it – I mean how did we get from Chris made fun of Mike or Mike made fun of Chris and somebody ended up with a bloody lip, which is all it used to be. I mean I'm sure maybe – I don't know if you directly have this experience. But boys got in fights and surely in your school as well.
Yep, yep. I mean I never started them, but yeah.
But nobody got killed, right?
It generally was – it didn't even really get to punches or anything. It was like one or two punches. Yeah, most was like a shove.
I think the maximum I ever saw was like a bloody lip. And when I say ever saw, that was me. So, I did that. So I get the fear, but we've become so – And this ties into the Larry Ellison thing and it ties into the OpenAI safety thing where I'm actually on Sam Altman's side for a change.
So scared of these kids getting like a little pinprick, we act like it's a sword through the chest when I think we haven't let them – and again, I'm going to say in particular boys and I know I'm going to get shit for this. But like boys, especially going through their changes right through puberty, there's an amount of aggression that they need to learn to either –
You know, suppress, control, or just, you know, handle, right? Your hormones are on overdrive. You can't even like get a shouting argument now without getting suspended. And God forbid you say a naughty word during that argument.
and now it's, oh, there's therapists, there's this, there's that, when now they're killing each other, or they're threatening to, or they're sitting in a dark room because they never leave their house, posting, frankly, insane things online, which leads to a huge expense for the people of the county, and I think a wild overreaction of the community. And I understand the fear, right?
What happened in Uvalde, what happened in Georgia, terrible. But the only thing that – people blame social media, and I think that is a part of it. But we also changed how we handle – I'm not going to – I don't want to get us tagged explicit, but let's say two boys in middle school say, F you, F your mother, right? Oh, you get in trouble for that. Even back in our day, right, Chris?
You got in trouble for that. Let's say somebody – boy B shoves boy A. You get in trouble. Now the police are there. There's anger management. There's therapy. It's – It's overboard.
And we've created the situation where these kids now, instead of just getting in the shoving match or the fist fight or the shouting match, they go to crazy online threats. And given the very few that actually do these kinds of things, I understand why the sheriff's department responds the way it does. Especially, like this kid in my district, taunting the sheriff is very stupid. Right.
It's this man's mission now to get you. And he's elected, I would say, and so is the DA, and so are all the judges. So when they catch this kid, which they will, I have a feeling if he's old enough, he'll be looking at some, shall we say, older gentlemen in prison that might want to teach him a few things that he probably doesn't want to learn.
Yeah, there's probably not going to be a lot of sympathy there. There's going to be no mercy. So – and I think the problem, which the show is not really probably going to be able to get to, but the core of it is like so many other problems in society –
Multifaceted, extremely, massively huge. You know, you mentioned social media; it obviously does play a factor. When social media first came along, you had people that would behave one way on social media and then they would behave completely differently in real life. In fact, you'd even say, like, you'd never have that argument in real life. These days people behave in real life like they behave on social media. The two have blurred the lines for some folks.
So if they get radicalized in social media, they're radical in real life. Also, I think you have a very litigious society. So the schools, the whole structures around it are everybody's trying to avoid lawsuits. And I think that adds to a lot of this hyper, hyper response. And then obviously, I could list 10 other things, but somewhere in that list,
The typical family structure has been pretty weakened and, you know, typical being whatever your definition of that is. But even the very best parent or set of parents, even the ones that are trying to be the most attentive, are extremely, extremely busy.
It's interesting, and I wonder if it's not a knock-on effect, a bit, of inflation and the state of the economy for the middle class for the last couple of years. Since COVID, people are so, so busy. I've seen it reflected in... I just don't really see my family at all anymore. Most of our family events have kind of fallen apart, and nobody has the energy, the will, or the time to reorganize.
And even if one person in the family did, nobody else in the family has the time, energy, or will to participate. And it's just sort of been this massive tax because everyone is so busy. So we've had this huge time tax, probably because... You know, we work for money and that money does less.
So when we work, we store our time and energy in the money and then we buy goods and services with that money without having to do those goods and services directly. But now those goods and services we're getting less and less for. So we have to work harder and do more. That's part of it for sure. But, you know, there's just a lot going on in society. Lots being thrown at us in modern days.
And I think all of this is such a huge, massive, unsolvable problem. It's a train that only goes in one direction. So you see people get hyperbolic, and you see opportunities for salesmen like Larry Ellison from Oracle to come in and say, let me sell you an AI surveillance system.
On Thursday, during a Q&A in their quarterly call, he said, quote, citizens will be on their best behavior because we are constantly recording and reporting everything that's going on. That's ridiculous.
I mean he's just – he's so clearly a carpetbagger in this situation.
Well, I know. Talk about taking advantage of the worst of society.
That's like the crazy people who say an armed society is a polite society. Yeah, because somebody tailgating you should end in a gunfight.
You're right. It is that same logic. It's the opposite side of that same logic. That's a great insight. Right.
It's the liberal version of that, right, where the conservative one is have all the guns.
And here's where I think – he also, by the way, he continued to say, quote, we're going to have supervision. Maybe not immediately. You don't have these cameras on every corner like we're London.
But to your point at the start of the show, couldn't you see a compelling argument for classes and schools where you could observe student behavior and try to pre-crime out somebody who's – I'm not saying this actually would work, but I'm saying that's how they would sell it. Try to pre-crime out a kid who's maybe doing something that looks a little destructive and dangerous.
And maybe on top of this, you're monitoring messages. Maybe parents, you're encouraged to install an app on their phone so that way you can scan their photos and messages for you and participate in the system and feed the network. Make your kids safe. Think of the children.
I could totally see that, right? Yeah. I mean, I go back way into the past with this. When I was in, let's call it middle school or high school, I had a couple – I knew them, right? It was a smaller town up north. They weren't bad boys per se, but they were a little hyper, right? They liked to do – it was a very traditional school I went to, right?
So sitting in rows and sitting there for hours on end and being told what to do and yes, ma'am, yes, sir was not exactly their wheelhouse. And then a magic little pill called Ritalin came along. See, I think the problem is you and I are old enough to remember what this was like before. And people just actually dealt with the kids and taught them how to control themselves naturally. Yeah.
Without NSA level surveillance.
Yeah, but now you can do it with a nice little AI summary report on each student. And then, you know, what if the student participates in the program via the parents' permission? What if on the report cards you got like a little summary as the parent? You get like a little, you know, like you get a credit score on your credit card bill.
Yep, yep, yep.
What if you got like a student behavior score as determined?
Well, you kind of do, right? They give you like – they've always done that. They give you notes. I mean – but yeah, you're talking about – That's what I'm saying. You're talking about, like, actual on-their-device spying.
Well, no. They would AI-ify the report and they would have like these categories where the AI observed. We recommend you work with your student in this area. And the whole thing would be bogus, but it would just be more stuff that the school can claim they're providing. The value, quote-unquote. I mean, the more I think about it, it's just what a boondoggle.
And, you know, it's taxpayer money, so what could go wrong?
Yeah, so I think that could actually happen. My big concern with this particular case down here and all of them down here, but I think a lot of states are like this. In Florida, like I said, the sheriff is elected in every county. The judges are all elected. And the DA is elected.
No one who is running for election wants to be the guy that believed a kid who made a stupid Facebook post or Snap or whatever the kids use now or TikTok, right? And said, I'm going to give this kid a break and just force him to go to therapy or something like that instead of charging him criminally. No one wants to be that dude.
Because your opponent in the next election, I mean, the ads write themselves, right? I could see the local Tampa Bay CBS and Fox stations running them day in, day out. So what they're going to do and what they've been doing is treating children like they are Osama bin Laden. And that's pretty bad. And I know people are going to be like, well, if it was your – but like it was my kids, right?
Like it was the schools my kids go to. I don't – I'm not happy that this happened. The only reason I'm making this a public thing is because the wildness of the overreaction. And don't get me wrong. I got very little work done. I was nervous as a cat. I was texting my stepdaughter. I checked my son's school's page. They said we're not – we're locked – we're –
taking measures, but nothing has been directed at this school particularly. There's three or four schools within like two blocks of each other, which is why that matters. But just, like, it's hard for me to describe this in polite terms, but, like, ladies driving Equinoxes or, you know, Escalades – I live in a fancy area – baying for blood as if they're Caligula is something I did not think I would see in my lifetime.
Especially blood knowing it's a kid. We all know this is a kid. It's just which kid? So, I don't know. And maybe I've become too much of a softie. I don't know.
Well, Larry, you know, he continued. By the way, he's 80 years old. And in this finance call, he said every police officer is also going to get monitored by AI. So there's, you know, a little bit of a rough shot. And then he added… You're going to have AI drones replace police in high-speed pursuits. You'll just have the car get followed by the drone system.
It's very simple in the age of autonomous drones.
Oh, and does Oracle get a percentage of the speeding tickets? Yeah. Just saying.
How do you know if someone's telling you the truth? I mean, really, especially when it comes to content creators. I don't know about you, but recently I've become very aware of nearly all the information that we receive has either been spun in some gross way, curated, reduced down with certain facts omitted or other facts amplified for some agenda of which I'm usually completely unaware.
And it seems like especially during an election year here in the States, it's only getting worse everywhere. So realistically, how does one solve for that? I don't think it can be done with a central management plan. I think it's at the individual level. I think you have to be more active in selecting your information diet. And I think all of us have to do a little bit more work.
We have to figure out the incentives of the people creating the content. In the mainstream media, it's the corporations that run all of it. In the independent content creator, it's who funds them. And for over a year, the Coder Program has been funded by our listeners, which makes us ferociously loyal to our listeners.
The Coder Program tells you like it is in the world of technology, software development, small business, and all the things that get touched by that stuff. If you zoom out a bit, too... Protecting the environment that makes this possible, podcasting, is what the open podcasting 2.0 standards are about. It's what things like the boosts and memberships to podcasts directly are about.
It's about taking out the middleman. It's about realigning the incentives to something that you can trust based on the way it is because of how it is. It's fundamental to how the content is created. And I just can't express to you how important I think it is that we save and preserve this environment for podcasting.
And so your support directly either through Autopilot with our QA membership program or by sending a boost on your terms with the amount you like, it means a lot. It's not just one show, but we're really trying to change something for all of podcasting to preserve this medium so it can be this trustworthy medium like no other medium can be. It's a big goal.
But we're getting there, and this show is proving it can be done. This show has transitioned to fully audience-funded, and that's really remarkable. So thank you for your support. If you're a member or if you've boosted before, we really appreciate it.
And if you haven't done it yet, consider either by going to coder.show slash membership or go get something like Fountain, which just keeps getting so good, Fountain.fm, or Castamatic, which is like the Cadillac of podcasting 2.0 apps for iOS, and Podverse, which is working on an incredible rebuild, cross-platform and open source.
And there's so many other great apps, like True Fans and more, at podcastapps.com. Try one out or become a member and participate in actively selecting the media that you can trust versus the stuff that has incentives you don't even fully understand.
So I'm not sure what to make of this AI regulation bill that's just passed the California House and Senate and is now sitting on Governor Gavin Newsom's desk. He needs to sign it by the end of September. And there is a lot I want to cover in this because it does have some good in it, especially around whistleblower stuff and employees being able to speak out.
But there's also just this complete whitewashing of all of the threats it seems to pose for open source development. And the pro side is essentially lying about what the bill does to make their case. And where better to get your tech advice than from an actor who played the Hulk?
Hey, Governor Newsom, Mark Ruffalo here. First, I want to thank you for championing progressive policies and all the great work you're doing in the state of California. This is about SB 1047, which is to regulate the explosion of AI into the world.
All the big tech companies and billionaire tech boys in Silicon Valley don't want to see this happen, which should make us all start looking at why immediately. But AI is about to explode and in a way that we have no idea what the consequences are.
So all this bill is doing is asking these companies to test these products before they come out to the consumer, just like any industry would have to do to make sure they're safe. Right now, they can be used for terrorist attacks. They can steal jobs. They can create deepfakes that are influencing our elections and our ideas and our laws. And...
And there's just no regulation that's put in the books right now on AI. So this bill does that. It does it in a really thoughtful and competent way and would make the tech industry have to really make sure that they're not harming us in any way before they release this technology into the world. I mean, it's very powerful technology. We all know it could be positive and it has negative effects.
And just like any other industry, It should be regulated. So, Governor Newsom, please do the right thing. Don't bow to the billionaires and protect us.
Now, of course, his motivations are probably based around Hollywood's incentives to avoid AI replacing them. You know, it's funny, too, to hear somebody who's probably worth a couple million complain about the billionaires. There's also some irony there. Now, to zoom out a little bit,
At a high level, this regulation applies to huge models, massive models, which at present day prices would cost hundreds of millions of dollars to be able to manage and create. So we're talking initially about something that applies to massive scale operations. But, you know, you go ahead about 20 years or so and that's going to be, you know, fifteen thousand dollars.
So, of course, today it might take a ton of money, but to get to that compute in the future, it might, you know, it could be a lot less. And those numbers can always be changed. Things will be much cheaper in the future. There's also going to be new architectures and advances in chips that are going to result in the kind of compute power that they're trying to target at cheaper prices.
So I'm not a big fan of where they set the bar. But the claim that the regulation just makes it so they have to test is so far from the truth. So it's SB 1047 and Reason.com, which we'll link, does a rundown of how this impacts open source development.
And they write that it would disincentivize developers from open sourcing their models by mandating complex safety protocols and holding developers liable for harmful modifications and misuse by bad actors. Could you imagine if open source developers were held liable, first of all?
The bill offers no concrete guidance on the types of features or guardrails that developers could build in to avoid liability at all. And it defines open source AI tools as, quote, artificial intelligence models that are made freely available. And the other thing that's incredible is the developers are held responsible for any derivatives of their model.
And that includes if somebody just straight up copies their model, doesn't make any changes, and uses it for something bad like misinformation or domestic terrorism of some kind. So any derivative, any copy, anything – like if they just took their model as it was and integrated it into a piece of software that was bad – the developer of that LLM would be held liable.
It would require open source developers to implement, quote, reasonable safeguards to prevent, quote, creation or use of chemical, biological, radiological, or nuclear weapons, or, quote, mass casualties or at least $500 million of damage resulting from cyber attacks. I mean, how do you even measure $500 million of damage from cyber attacks, or any, quote, harms to public safety and security?
So essentially, it requires developers to build censorship into their LLMs, because it could harm public safety. That's a pretty big area, a pretty wide margin. The bill also mandates that open source developers take steps to prevent, quote, critical harms – just vague critical harms. They have to take steps to prevent critical harms,
which seems like that's designed to be interpreted very broadly. Also, it imposes extensive reporting and auditing requirements on open source developers. Developers would have to identify, quote, specific tests and results that they're using to prevent critical harm and report that to the state.
The bill would also require open source developers to submit an annual certification under penalty of perjury of compliance and self-report, quote, each artificial intelligence safety incident within 72 hours. My God, could you imagine? Put something out on the Internet, and if somebody looks something up and hurts themselves, and you hear about it, you have to report it within 72 hours.
And starting in 2028, developers of open source models would need to, quote, annually retain a third-party auditor to confirm compliance. Could you imagine an open source project having to retain an auditor?
AI Compliance LLC. Yeah, right. Delaware. I'm sorry, what were you saying?
Yeah, you're right. There is a business opportunity. Oh, yeah. And my last bit, developers would have to re-evaluate the, quote, procedures, policies, protections, capabilities, and safeguards implemented on an annual basis. That's bad. Not if you're a big company. That's fantastic. Right. These are just open source regulations. I just focused on the open source stuff.
I mean, I feel like... This is basically creating a moat for the big boys, right?
For sure. I think, and I hate to be cynical about this, but I've just watched all this play out, Mike. To me, it feels like it's truly about information control. They don't want anything that's open source and not under control of one of the big five to get traction.
These are not insurmountable odds, but you're not going to have two founders that are visionary and passionate be able to make this happen. They're going to have to get funding. They're going to have to kind of become big tech. And then if they have anything that's good, they'll just get acquired.
Right, right. So they'll never become big, right? What will happen is, if they have anything interesting, their VCs will farm them out to the big boys.
The big boys will hire the employees, like we've seen, or buy the whole company.
Yeah.
Yeah. And so the people that are in favor and the people that are against it is interesting. So a lot of OpenAI staffers, well, like 120 of them, former and current staffers,
are in favor of this, and I think they're in favor of this because, like I mentioned at the top, it does have some pretty strong AI whistleblower protections, which we've seen a clear need for. OpenAI managed, right at the beginning, to screw that up for everybody and make this something that has to be baked into everything. Thanks, Sam. And so they like that, but
Anybody who really is kind of looking at a larger, higher level picture, I think, doesn't support it. And I can't believe I'm going to say this, but I agree with Nancy Pelosi and other California members of Congress. We should reject this bill just because of the way it impacts open source.
And additionally, I just don't want to see a situation where we have every individual state implement their own bespoke AI regulation to whatever nutter group that runs that state. That's going to be awful. And then if you do it in California, of course, it's going to be pretty impactful to the tech sector. And the ones that have money will just go somewhere else and develop it somewhere else.
Yeah. Of course, Elon's in favor of it.
It's amazing. One of the richest men in the world likes something that makes it expensive to do things.
Well, you know, yeah, I think exactly. It would slow down OpenAI. It would give him time to develop Grok and XAI.
I'm not convinced it would slow down OpenAI. Actually, I think OpenAI mostly would benefit from this.
Yeah, probably. You're right. Yeah, you're right. I hate to see this effort to – because it really – why all of these things? Here's my argument. Like why – Why burden the open source developers with all these requirements and all of these loopholes and all of this stuff?
Because do you really think that if something bad happened, like a national security incident happened because of an LLM, that we don't already have national security laws on the books that enable the government to do whatever is needed to stop that threat? Of course we have that. I mean, just during COVID, we saw the presidents, both of them,
conscript different corporations to build things under some law from World War Two.
Ah, yes. The Defense Production Act. Yes. Yeah.
Something like that. And my point is, like, if some open source – thank you, yes, there it is – if some open source LLM came along and it was spewing such dangerous information that the security of the state was at risk, I've got to believe there are most likely laws on the books right now that they can address that threat with. So why burden open source with all of this?
Well, one, I don't even think the people who wrote this law understand open source at all. It's not even something that comes on their radar. Two, I think they've definitely bought into the hype. So they really think AI is going to put a bunch of people out of business, or out of jobs, I should say.
I really think in our political class, there is this wave that the social media platforms really did decide one of our recent elections. And I find that so hard to actually believe because its underlying premise is that people are that stupid, which I just can't buy. I reject the premise completely.
But if you believe that premise and you could put out an effectively infinite number of social media memes and posts to push people one way or the other, then yeah, maybe this makes sense to you. And I think that's really what it all comes back to.
Had there been no 2016 election, or at least had it gone the other way, I don't think we'd be seeing this level of, one, tech backlash and certainly AI regulation.
Well, and you know, it's also another thing that's pointed to as COVID, you know, all the misinformation during COVID. But then when you kind of look back in totality, a lot of the misinformation came from the federal government. I'm not saying it was intentional, but I was like, it depends on which month you're looking at. What was the misinformation? Right. Exactly. Exactly.
It is interesting that misinformation is sort of accepted as such a significant threat.
What is the difference between me seeing on – I mean I'm old. I use Facebook. I know the kids don't use it anymore but whatever. Some crazy thing from Uncle Jimbo or me just like being in an airport bar waiting for my flight and talking to a guy who's somebody else's Uncle Jimbo and tells me something wacky he heard on some weird podcast, right?
Sure. Well, I mean, you want to know the original threat to democracy? Believing that JFK was assassinated and the federal government was involved. That sounds like you're some sort of insurrectionist to me. What a threat to democracy to believe that your own federal government murdered a sitting president.
So you could see how in modern day you could spin something like that, which was a common belief and a common thing that happened for a long time. And people still believe that. You could cherry pick any one particular issue and really spin it up as a massive threat to society.
Yeah, I don't think it's working as well as it did but then I go over to Mastodon and I see everybody panicking and I realize that's still pretty effective.
I just – I can't believe that your average American or your average European or Asian or whatever, right? Like any country with a public education system of any salt whatsoever – would really fall for the kind of bulls**t that's obvious bulls**t that you see generated by AI.
Hey, man, we're saving democracy. So let's talk about OpenAI's CEO, Mr. Altman. He has, quote-unquote, left – champion mode all the time – the safety committee, which was created back in May. When that happened, we came on the show and we were like, how does the CEO sit on the safety board that's also supposed to be a check on the practices of the company he's the CEO of?
How does that work? So now it's going to be chaired by Zico Kolter, the director of machine learning at Carnegie Mellon. And then we have a bunch of other members, like the Quora CEO is on there, a retired U.S. Army general is on there. You'll love to know this: former NSA chief Paul Nakasone is on there. Nakasone, I think, is his last name. N-A-K-A-S-O-N-E. And a former Sony general counsel.
At least the sport has balls.
A former Sony lawyer is also on the board. So there you go. That's the safety board.
So wait, we've got academics, spies, generals, and a lawyer.
Who's the academic? Oh, right. Carnegie Mellon. So the director. Okay. But you've got retired general, former NSA chief, and a Sony lawyer also on the board. Oh, okay. You know what's funny? Do you think any of them is under the age of 40? I would eat my hat if one of them was. Do you think any of them has ever spun up an open source LLM on their own local workstation? I mean, it's possible.
Maybe the NSA chief has. Maybe. Wouldn't it be something if the people that made these decisions understood the tooling? No, you can't have that. Actually used it? You cannot have that. Yeah. So just as a recap, in the last week, since Coder Radio last got together, they have some kind of reasoning model they're talking about. It's, you know, using reasoning.
They're trying to raise a round of funding at $150 billion valuation. They think they're worth $150 billion. They're considering restructuring once again and getting rid of the nonprofit altogether. And now their just announced safety committee has been reformulated once again. And it will have the power to delay models. Will it? So that's where we're at right now. Will it? That's what they say.
That's what they say. You know, I don't know. I think they should. If I were them, I would delay the next one. It'd be brilliant, Mike, because it's not ready. It's not going to be that good. So delay it and give them time to make it better. And what is the message when the safety committee says this is too dangerous? A.K.A. it's good. It does stuff.
That looks like the most silverback gorilla marketing tactic possible. I love it.
I would 100% think that's what they're doing. I really do. I think they pretend all this safety stuff, not only is it to get their moat, but it also serves as reverse psychology marketing to tell you what we're working on is so advanced that we have to be humble about it, that we have to be careful about it, and we have to take safety so seriously.
Now, are their hands in a triangle?
Yes. Of course. Yes. Yeah, yeah. I think that's what's going on.
You know, I just, okay, sure. That's my new motto. We're going to print up new robes. Let's just say, okay, sure.
All right.
Attack Coder Radio.
That's just the way it is. Microsoft has sat down. They met with their very important security vendors, including CrowdStrike. You know, it's a big club, and you're not in it. We talked about this recently. And they've come away with some decisions about how they're going to improve security and prevent failures on Windows.
Installing Debian on all your machines. Congratulations.
So really, access to the Windows kernel has been a hot topic since the whole CrowdStrike thing took out 8.5 million. Oh, it's interesting. The number's down now. 8.5 million Windows PCs. So Microsoft released a blog post and they talked about how the little powwow went. And, well, this is their language. They, quote, looked at longer-term steps.
And those longer-term steps include developing new security capabilities that let stuff run outside the kernel but still get kernel-like access. They say, quote, both our customers and our ecosystem partners have called on Microsoft to provide additional security capabilities outside the kernel mode, which, along with SDP, can be used to create highly available security solutions.
So in other words, if I were to interpret this, I think they're going to create something kind of like Linux's eBPF solution, where there's a little micro-VM in which very basic, simple code can get executed inside the kernel. And if it crashes, just the VM goes out, not the whole kernel.
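For context, here's a minimal sketch of the Linux-side analogy Chris is drawing: with eBPF, a tiny program is verified and run in a sandboxed VM inside the kernel, so a bad program gets rejected or unloaded without taking the kernel down. This sketch uses the bcc toolkit from Python; the probe and the message are just illustrative, and whatever Microsoft actually ships may look nothing like this.

```python
# A minimal eBPF sketch via the bcc toolkit (requires Linux, bcc installed, and root).
# The small C program below is compiled, verified, and loaded into the kernel's
# sandboxed BPF VM; if it fails verification it never runs, and it cannot crash the kernel.
from bcc import BPF

program = r"""
int trace_exec(void *ctx) {
    // runs inside the kernel each time a process calls execve
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=program)                                   # compile + verify + load into the kernel
b.attach_kprobe(event=b.get_syscall_fnname("execve"),   # hook the execve syscall entry point
                fn_name="trace_exec")
print("Tracing execve calls... Ctrl-C to stop")
b.trace_print()                                         # stream messages from the kernel trace pipe
```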
So basically we're going to comply with the EU. If we didn't have to do this, we wouldn't do it at all. We would just lock the shit down. But we're going to make it so bad that good luck.
You know, so The Verge writes, while Microsoft isn't directly saying it's going to close off access to the Windows kernel, it's clearly at the early stages of designing a security platform that can eventually move CrowdStrike and others out of the kernel. You know, so what it means is for current Windows users... You get nothing. Nothing's better. Nothing changes after this.
They came together. They had their powwow. And literally nothing is going to be improved for current Windows users. For future Windows users, Microsoft's going to have, you know, some kind of little micro-VM or something. It's just a guess of mine. They're not actually doing anything to address the fact that Windows boot is extremely brittle.
And all they're doing is super-engineering for one particular failure scenario, not taking care of the bigger problem.
So the next failure, it might not be this particular issue, but the next failure will just take Windows out during the boot process, just like the CrowdStrike problem did, and you're all going to have to go out there and touch each individual machine because they've solved nothing about that actual problem. There's no improvement in how Windows could maybe...
You know, boot from a previous shadow copy, detect that it's blue-screened two or three times in a row, disable stuff and go into safe mode. Nothing like that.
Yes, but they have solved the problem. If you define the problem as the CrowdStrike problem. Yes. Right. If you throw CrowdStrike out of the kernel, you have therefore solved that problem. Four score and seven boosts to go.
Oh, the podcasting 2.0 consultant is our baller booster this week. Alex Gates comes in with 50,000 sats. So he writes, and I heard this from a couple other people, Pixel Buds Pro have conversation detection, and it's fantastic. I use them every day with GrapheneOS. I have Pixel Buds Pro. I think there's only Pixel Buds Pro.
I have the first ones, and they do noise canceling, but I have never had them detect a conversation and unsilence. And I went into the Pixel Buds Pro app, and I see no setting for that, and yet multiple people have told me this is a feature of the product of which I already own, and yet I cannot make it work. I am considering... I really... I don't know.
You know, this happens every time. I had this moment today that I'm still kind of, still sort of processing. I looked at my photo library and I realized, since I've switched to Android, I kind of stopped taking pictures. And, you know, that really hits when I look at, like, all these pictures of my kids and all these pictures I've taken with the iPhone.
And then I get to the Pixel and it's like every three or four weeks there's a picture or two in there, where there used to be one, like, from almost every day. And I then remembered that's the same exact thing that happened last time I switched to Android, because the goddamn camera app is so laggy compared to the iPhone.
It feels like the iPhone is a hardware camera and the Pixel feels like a crappy third-party software app that isn't even using, like, you know, hardware-accelerated features. And it's slow. Editing photos is slow and the phone gets hot. And then, you know, I've implemented my own backup solution. So then, like, every photo takes up disk space.
I don't know.
Moving on, though. He says, I use Podman every day inside WSL2, and it works great. I heard that from a couple other folks, Mike. I know you were having some strugs, though, with WSL as the week went on.
Yeah, I was having a few issues with RuboCop, specifically in RubyMine. I mean, this is, like, very specific, right? Yeah. I actually did reach out to JetBrains, and they're kind of awesome in their support. Oh, good. I found most of the problem, but then something else cropped up. So I'm checking it out. It seems to be a very, very specific issue
based on permissions access between file systems. And so far in my usage of WSL, this has been kind of its Achilles heel: when you have to cross the boundaries, which you probably want to, right? Like I'm using the Windows RubyMine to work in Ruby on the Linux file system. And that's the right way to do it.
So I will say this issue is such a dumb issue that it's very specific to the way RubyMine implements RuboCop suggestions. RuboCop, by the way, is a linter similar to, I don't know, like Flake8 or something like that in Python, where it tells you, your code is right, but, you know, geez, you should be using this syntax. It's more modern instead of your old-ass way of doing it.
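As a rough illustration of what Mike is describing, here's a hypothetical Python analogue, since Flake8 is the comparison he reaches for: the file runs fine, but a style linter flags the non-idiomatic bits. The function and data are made up for illustration; the rule codes shown are Flake8/pycodestyle's, and the real issue in question is RuboCop and RubyMine, not this snippet.

```python
# lint_demo.py - runs fine, but a style linter such as Flake8 would complain.
def total(items = None):      # E251: unexpected spaces around keyword / parameter equals
    if items == None:         # E711: comparison to None should be "if items is None:"
        items = []
    return sum(items)

print(total([1, 2, 3]))       # prints 6; the linter objects to style, not correctness
```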
Yeah, like if you open VS Code, I have no problems, which tells you kind of, again, one of the weaknesses of WSL is the tool's got to keep up. No one keeps up better with their own tooling than Microsoft.
That's a great quote right there. All right. So he also goes on to ask if we have ever heard about DuckDB or Splink. He says, I use them in large government record linkage projects. DuckDB is amazing. I don't know about you, Mike. I've heard of DuckDB only from the audience, and I've heard very good things. It's an open source column-oriented relational database. Relational. But I've never used it.
Never had any need for it. But it feels like one that maybe you should have in the quiver of open source projects that you may call upon one day. You never know.
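For anyone curious, here's a minimal sketch of what using DuckDB from Python looks like, loosely in the spirit of Alex's record-linkage use case. The table names, columns, and rows are made up for illustration; only the duckdb calls (connect, execute, fetchall) are the library's actual API.

```python
import duckdb

con = duckdb.connect()  # in-memory database; pass a filename instead to persist to disk

# Two hypothetical registry extracts (schema and data invented for this example)
con.execute("CREATE TABLE reg_a (id INTEGER, surname TEXT, birth_year INTEGER)")
con.execute("CREATE TABLE reg_b (id INTEGER, surname TEXT, birth_year INTEGER)")
con.execute("INSERT INTO reg_a VALUES (1, 'Smith', 1984), (2, 'Jones', 1991)")
con.execute("INSERT INTO reg_b VALUES (10, 'Smith', 1984), (11, 'Brown', 1975)")

# A naive blocking pass: candidate record pairs that agree on surname and birth year
pairs = con.execute("""
    SELECT a.id, b.id, a.surname, a.birth_year
    FROM reg_a a JOIN reg_b b
      ON a.surname = b.surname AND a.birth_year = b.birth_year
""").fetchall()

print(pairs)  # [(1, 10, 'Smith', 1984)]
```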
I just like the name.
Yeah, that is a good one. That is a good one. Thank you, Alex. It is nice to hear from you, and thank you for being our baller. Tomato comes in with 20,425 sats. I'm catching up on some podcasts I missed over the summer. I just want to send some value your way. Oh, thank you. That was a... Yes.
That's amazing. I've got the same combination on my luggage.
Spaceballs boost. Traveling with electronics. I'm able to get away with it a lot, actually, including bringing a breadboard project and SDR radio gear with me. My secret is keeping my business card on me and identifying myself as an electronics engineer. Ah. So you probably also have to say it with some confidence. Oh, yeah. Here's my business card. I'm an electronics engineer.
Am I the first one to boost in for Smalltalk as the show's official language? Smalltalk. He says talk small and carry a big class library.
I am going to steal that phrase, sir. That is good. Also, I just want to say Smalltalk does have a successor language. You may have heard of it. I want you to be objective about this and see the truth.
Oh.
Oh.
Well, it's going to take a little bit more of a boost than that to get Smalltalk the official language.
See, the R data people are making so much money. Yeah. They really just, I mean, honestly.
It's true. It's a big industry. But, you know, Smalltalk would be pretty special. That'd be fun if we could wear that badge.
You know what? I will install GNUstep. Although it's still Objective-C, isn't it? Never mind. I forget the name of the Smalltalk one. Is it Pharo? Someone will correct us in the feedback. But there is a Smalltalk thing you can still run pretty easily on Mac and Linux. I'm sure it runs on Windows too. And I will do a stupid Jar Jar thing in Smalltalk if Smalltalk can win.
Cultivator comes in with 2,000 sats. Florida trip. This ought to cover the tax on a cup of coffee. But maybe Bitcoin will rip one day. Fingers crossed. You know, I'd love to get out there. I want to go to Jupiter, Florida. Right. I mean, that's the namesake. I want to do a show in Jupiter, Florida. That's probably that's probably pretty far from you, isn't it?
Yeah. Just a reminder. Florida is as big as most of the United Kingdom.
It's long. Yeah. So long. God. And I hate that about California. You know, so you've got like that same thing going on. It takes forever.
Everybody thinks Florida is small because of the way it's usually rendered on U.S. maps. It is not small.
I just thought maybe once you took out all the driving around water, it's not that big. The actual usable land.
Well, if you fly on a Super Saiyan alligator, it's much faster.
Ah. The immunologist comes in with a Jar Jar Boost, 5,000 sats. Yousa so boost. This is a plus one for R. I'm still using my iPhone 8 in 2024. I get battery changes every two to three years. iPhone 8?
There it is. Battery changes every two to three years. I was wondering how he did that. Okay.
But you're not getting OS updates, right? So, like, how are you even installing apps anymore? Because, you know, they're super brutal about that. He says, maybe because I don't work in tech and I don't do a lot of photos, but this iPhone has everything I need. USB-C would be great, though, but not worth the tradeoff of a phone not fitting comfortably into my pocket. Yes. There is that. I know.
I look at the new phones and I'm like, they're just too damn big. Damn it. Give me something the size of the SE. I'll trade battery and put everything else in there. That's all I'm asking for. He says also he really enjoyed the R song. Bud comes in with a Jar Jar Boost as well. 5,000 sats. Yousa boost. I've been using my Pixel Buds Pro since they came out.
They've had conversation detection for a bit. It lets you tap on the Bud once to cancel it if it's triggered by accident. I don't know. I can turn it on and I can turn it off, but I have never had it automatically detect a conversation and turn itself off. It's the most frustrating thing. I would very much appreciate that feature. And then one last boost to round us all out.
Jen from Matik comes in with 2,000 sats. About the subject of a phone listening to your conversations, I can share a really good breakdown of how it really works once and for all, so people stop believing misinformation. It's a French YouTuber. I can put some subtitles in there in English so you can enjoy it. Are all our phones listening devices? Spoiler alert:
No, it's correlation through metadata and more. And he links us to a YouTube video, which I'll put in the show notes.
I feel like that's what we said, though.
Yeah.
Yeah.
Yeah. It's actually the fact that they can figure you out without having to listen to you, which I think is creepier. It'd be actually better if they were just listening to you, transmitting that up to some server and then, you know. Yeah, just like hard searching on the words you said. I guess deriving text and intent. Right. Yeah.
That would be more like direct and gross where this instead is just a network of monitoring you where they've actually figured you out over the years.
Can I take a creepy diversion? Yeah. So my boy has decided to watch Minecraft videos on YouTube in my office, right? My home office. Sure. I've reluctantly let him. Google has figured out that likely, if I open YouTube on my laptop, it's me. So it surfaces all, like, you know, the political stuff I watch, or I'm a big history buff, right? All that stuff. Some of the, you know, the gaming stuff.
If it's in here, which I'm in my home office now, it's basically exclusively showing like kiddie YouTube stuff or kiddie, excuse me, kiddie Minecraft stuff. Fascinating. And it must be fingerprinting the device and saying, well, on this device, they watch this.
Yeah. So. You know, I have gotten that sense myself. First of all, I think I see a certain set of suggestions on the TV versus what I see on the desktop. Yeah, it's the television in the home office. And then I've had really bad insomnia like this entire year. Tell me about it.
I have been watching a lot of long-form, kind of relaxing content at night, you know, and after it gets to be about midnight or so, when I open up the YouTube app, all of my suggestions shift and it's stuff that's multi-hours long. And it's, it's like a couple of themes. And one of them is like old Art Bell shows, because honestly, I just like listening to a professional.
And then the other one is long, boring, boring, pedantic analysis about Star Trek things for like four hours. And it just started surfacing all of those late in the night. And then during the day, I don't see those ever suggested at all. I feel like we're going to need some links on the latter there. Yeah.
Well, just search for Star Trek stuff late at night for about six months in a row, and there you go. Problem solved. All right, that wraps up the boosts. Thank you, everybody. We had 18 folks that streamed those sats, and those streamers stacked 24,586 sats altogether. And we had nine boosters who, along with the streamers, brought us to a total of 110,221 sats.
Not a banger episode for us. But we appreciate the messages and the support. Thank you, everybody. If you'd like to boost the show, go get a new podcast app at newpodcastapps.com, or really take the plunge and go get Fountain.fm and let it spin up a Nostr identity for you as well, so you can just start playing in that entire ecosystem when you want, and you can take that identity anywhere with you.
Shout out to all our boosters. We appreciate you. Boost! Okay, so as we wrap up, Early analysis is not looking so good for the iPhone 16, and I want to get your thoughts. Because I think it's actually a pretty good device. The iPhone 16, the base model, I think is one of the best base models they've had yet. The Pro and the Pro Max look really nice with a lot of RAM, really nice battery.
And the first weekend pre-orders seem to really have sucked. Down year over year, at about 37 million units, like one of their worst ever potentially. What do you think this is? OK, disclaimer, it could be that Apple just ordered a ton up front and so there's just a ton of inventory. Nope. But it's also more likely that the pre-orders are down, because this is all coming from supply chain analysis.
I think the pre-orders are down, right? I mean, we've seen historically, since we've been doing this show, that the biggest way to kind of juice iPhone sales is a new form factor.
Yeah.
A significant difference in case, not just size. So that's got to be it. Also, I would say that there's not a lot that's really new here. I'm on a 14 Pro Max Shabangabang, whatever they call it. And I'm not upgrading because I don't need to. Right. I'm a techie. I do a tech podcast. So why would I? Imagine the normies out there. I just – I don't see it.
It seems like it's a pretty it's a pretty iterative product. Even I'm two generations behind and it doesn't seem that different. Right.
So I think a lot of the new is the Apple Intelligence stuff, and that hasn't shipped. You know, if you get an iPhone 16 right now, there's no Apple Intelligence on there. So why not wait till the Apple Intelligence OS update ships and there's, you know, iOS 18.5 or 18.1 or whatever it's going to be – probably 18.1 – baked into the image on the phone, and just buy that one.
And you don't have to preorder. You can just get it. So I'm not too surprised. I think if you look at the probably whole year release cycle, I bet you this is going to be a pretty good seller. But people are feeling tight right now. You know, I saw a comment on 9to5Mac, I think. And it was, I'm still using the iPhone 12 mini. I'd sure enjoy a new iPhone, pro or not, but it's just so much money.
And since I'm 41 and I don't really care much about unique emojis, and I'm not really generating any imagery without making it myself, and I'm plenty capable of writing and editing my own personal or business emails myself, I'd rather just have other things that the $1,300 gets me, when my iPhone still does what I need it to do.
Like food and electricity.
I guess. Yeah, it's like everything besides a phone that's working for you, right?
Yeah, it just, it doesn't. Now, I'm hoping that the next iPhone has something. I mean, by that time, my battery will probably start being terrible.
Well, here's my question to the audience. First of all, what does it take for you to get a new phone? Is it just simply your current phone gives out, can't replace the battery, you break it? Because here's my theory. iPhone 17 release window, which I don't even think they're going to call it the iPhone 17. But I think next year, because of this problem...
They're going to introduce the foldable iPhone. You know they've been working on one. They might not call it an iPhone 17. They might just start the foldable generation off with its own number series. But you're going to see the iPhone foldable announced.
And I'm going to go out on a limb and say, you know, they've been working on it for five years and it's probably going to be the best foldable device out there. Would you get one of those?
I would totally get a foldable. Yeah, because then it becomes like a small reading tablet too.
So tell me what you think, audience. And if you've had a foldable, I'd also like to hear from you because maybe you've got experience that would tell us otherwise. But I think that would juice sales. The problem is it's going to be like $1,500. I was going to say, is that two grand? And what is it going to do? Run iPad apps?
Anybody that has any insights on a foldable iPhone and how it would work, boost it and tell us that as well. Because there are no iPad apps. Oh! All right, Mr. Dominic, is there anywhere you want to send the good people throughout the week?
Go check out alice.dev and check out Linux Unplugged. I like this week's episode.
Oh, yeah, thanks. LinuxUnplugged.com. You can find some Wes Payne over there, too. Go get some Wes Payne. If you create yourself a Nostr identity and you're looking for somebody to follow, you can follow me over at chrislas.com. I am on the Weapon X as well. Mostly don't tweet much, but I do respond to replies. And I post about live shows.
ChrisLAS over on Weapon X. And there's also the Coder Radio show, I suppose, at Coder Radio Show. The way to really do it is join our chat room. That's where it's at: coder.show slash matrix. There's people going in there right now during the live show. They're hanging out, suggesting, helping us title it. And we do this here show live.
You can find the live time and date at jupiterbroadcasting.com slash calendar. I think we're going to need to – no, I'll be doing it next Tuesday. It'll be regular. I figured it all out. So we'll be live at our regular time, noon Pacific, 3 p.m. Eastern, Tuesdays, jblive.tv. All right, that's it. Links to everything we talked about at coder.show slash 588. Thanks for joining us.
We'll see you right back here next week.