# 353: Don’t Be Evil Unless the Government Asks Nicely

Duration: 100 minutes
Speakers: A, Justin, Matt, Jonathan
Date: 2026-05-13

## Transcript

[00:07] A: Welcome to The Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure. [00:14] Justin: We are your hosts, Justin, Jonathan, Ryan, and Matt. [00:18] A: Before we get into this week's news, we want to take a minute to tell you about We Are Developers World Congress, which is finally making its way to North America this September. If you've spent any time in the European tech scene, you probably know the team behind it. They've been running World Congress in Berlin for over a decade, and it's a big deal over there, pulling in more than 15,000 developers every year. Our friend Coté from Software Defined Talk is actually speaking at the Berlin event this July, and from what we've seen, these are the people who know how to put on a good developer conference. This September 23rd through 25th, they're bringing it stateside to San Jose. Organizers are expecting more than 10,000 developers with over 500 speakers across 18 different content tracks, covering the entire stack, including cloud, DevOps, AI, security, software architecture, data engineering, frontend, and developer experience. If you've got a team, everyone's going to find a full schedule. It's not just sit-and-listen sessions. There are keynotes, workshops, masterclasses, and hands-on labs. The kind of stuff you can take back home and work on on Monday. There's an impressive list of speakers, including names from Datadog, Honeycomb, Sentry, Google, LinkedIn, Stack Overflow, Netflix, Microsoft, and Stripe, plus Kelsey Hightower, Olivier Pomel, Christine Yen, Scott Hanselman, and Angie Jones. Head over to wearedevelopers.us to grab your ticket and use code DEVPOD26 for 15% off. That stacks with their group rates if you're bringing 4 or more people, and honestly, at that price, you should probably bring the whole team. [01:50] Matt: Episode 353, recorded May 5th, 2026: Don't be evil unless the government asks you nicely. Good evening, Ryan and Matt. How you guys doing? [02:00] Justin: Doing well. [02:01] Jonathan: Good. How are you? [02:03] Matt: I just got back from a lovely weekend away and came back to record with you guys. So you're welcome. [02:08] Justin: Thank you. Hope you're rested and had fun. [02:12] Matt: Yeah, rested, probably not. I mean, just ran around a city for 4 days. [02:16] Justin: Yes. [02:16] Matt: Did a bunch of things and ate some good food and saw some shows, and it was a nice time. And, uh, it was close enough to Matt that we probably should have got together, but, uh, we didn't coordinate far enough in advance that that would've happened. And there was a one-year birthday too, so, yeah, it would've been tough. [02:30] Jonathan: Family weekend. [02:32] Matt: Yeah. But, uh, my wife wants us to go back for like weeks and like get an Airbnb. And so if we do that, we'll definitely hook up when we do that. But, uh, anyways, well, uh, it is a busy week once again in the cloud. And, uh, first up is once again earnings. So, uh, they were blessed enough to all announce earnings on the same day, other than Oracle, which I don't know when they announce earnings. Just happens. [02:58] Justin: Aren't they the ones that do like 6 months off anyway? [03:01] Matt: Yeah. Yeah. Like they're on a weird calendar year. They have all kinds of stuff.
But anyways, they all report the same day, so I don't really have an order, uh, I'm just gonna go through these. So Microsoft posted, uh, Q3 2026 revenue of $82.89 billion, up 18% year over year, with Azure cloud services growing 40%, slightly ahead of analyst expectations of the 38 to 39% range. Capital expenditures came in at $31.9 billion, about $3 billion below what the analyst consensus thought it would be, which contributed to the stock dipping 2% despite the earnings beat, reflecting investor sensitivity around AI infrastructure spending levels. They're not spending enough, it's bad news. They're spending too much, it's bad news. Like, I just, I can't help them. Microsoft said annualized AI revenue now stands at $37 billion, up 123% year over year, spanning Azure-hosted AI services and Microsoft's own AI tools, though the metric excludes some infrastructure workloads, which is worth noting when comparing figures across the different quarters. 365 Copilot commercial seat count grew from 15 million to over 20 million by end of March, indicating continued enterprise adoption of AI productivity add-ons at a pace worth tracking for cloud practitioners evaluating Microsoft's enterprise AI traction. Gross margin narrowed a bit to 67.6%, the lowest since 2022, as data center depreciation costs are increasing. Because of all those hot, hungry AI chips. Yeah, this is— [04:17] Justin: It's interesting that they're spending less, because I figured that they'd— everyone would be just spending more than they originally were predicting, right? [04:23] Matt: I don't know if it's an issue of they didn't want to spend the $3 billion. I think it's that there wasn't supply available to buy, or delivered in time for the quarter. [04:31] Jonathan: That's what I was thinking. They're, they're trying to do it, they just can't. There's not enough, you know, RAM in the world. [04:38] Matt: I think it's the RAM problem. I mean, I just saw today Apple stopped selling the top memory options on, um, one of their desktop configurations. So like, you know, you used to be able to get 512 and then you couldn't, you couldn't get that anymore. You can only get 256 now. The biggest you can get for a, what is the thing called? It's not the Mini, it's the, the Studio. Thank you. Yeah. Stupid naming. Uh, the Studio basically, uh, now maxes out at 96 gigs, so they can't get memory for them, which is just crazy to me. So hopefully that comes back later. Because 96 seems low for a Studio, but that's just me. [05:14] Jonathan: But there was, um, Ubiquiti. I saw that whatever their latest, you know, Dream Machine is has a memory upcharge. Like, it's, it's here's your item, and then on the lines where taxes and shipping go, it's like, memory surcharge. [05:30] Matt: So it just replaced where it'd say tariff with memory upcharge. [05:34] Jonathan: Yeah. [05:35] Matt: Nice. Well, that's good. Well, congratulations, Microsoft. I'm sorry your stock didn't appreciate quite as much as, uh, it should have, but that is the way of analysts and what they do in their business. I am looking at their stock, uh, in the last week, since earnings was last week. They were at $424 on average, and now they're averaging around $410, so about a $12 to $13 drop.
Uh, they dropped more than that, then came back about half of what they dropped initially. On earnings day they were at $423, then they dropped the next morning to $400, and now they've come back to about $410 to $413. So anyways, uh, you know, not so great for them. Amazon AWS revenue reached $37.59 billion in Q1 2026, growing 28% year over year, which is its fastest growth rate in over 3 years. As we've been tracking it, that rate had been slowly going down, which just shows you how much spend is happening at Amazon. It came in above analyst expectations of 26% growth. Amazon's capital expenditures hit $44.2 billion in Q1, along with a full-year projection of $200 billion, primarily driven, of course, by AI infrastructure. Free cash flow dropped 95% year over year to $1.2 billion over the trailing 12 months, a direct consequence of AI investment levels, raising questions about when that spending translates to direct returns. Amazon has also formalized AI partnerships with OpenAI, Anthropic, and Meta, which signals continued infrastructure demand growth and suggests AWS capacity expansion will need to accelerate further to continue to support these relationships. Q2 revenue guidance of $194 to $209 billion came in well above analyst estimates of $188.9 billion. The wide operating income range of $20 to $24 billion reflects uncertainties likely tied to tariff impacts and variable AI spending timelines. [07:14] Justin: It's crazy to me that Amazon's entire AWS revenue is Microsoft's AI revenue, right? Like, whoa. But yeah, that's what you get with Office 365 and its reach. [07:28] Jonathan: I mean, the free cash flow for them is interesting. You know, I know they're investing, but that's a massive drop in cash flow year over year, down to $1.2 billion. I mean, still a ton of money to have in cash. [07:43] Matt: I mean, I imagine it's a combination of them making these big investments into Anthropic and others, as well as the AI capital investments they're trying to make. So it's a combination of both of those. But yeah, I'm surprised they let it go down to $1.2 billion. I mean, a 95% drop, no matter if it's the right thing to do or not the right thing to do, it had to be somewhat shocking to Wall Street a little bit on that one. But let's look at the tape, see what the stock did after the earnings here. [08:06] Jonathan: So that means their original cash flow, the previous year, was $24 billion, if I can quickly have Claude do math for me. [08:15] Justin: Yep. [08:16] Matt: Uh, so basically at earnings last week they were $263 a share, and today they are at $273. So they've gone up, and they have trended up since this announcement. So even though free cash flow was down, uh, I think it's mostly based on the fact that AWS grew 28%; the growth rate had been a drag on their stock for a bit when it wasn't going up and was only at 20% or 23%. Uh, so beating the consensus is a big deal, and that's my guess why their stock, uh, improved dramatically. And then finally, to round it out, Google Cloud posted $20.2 billion in Q1 2026 revenue, a 63% year-over-year increase, with enterprise AI solutions cited as the primary growth driver for the first time. Google Cloud now carries a $460 billion backlog, signaling sustained demand well into future quarters.
Sundar Pichai noted that Alphabet is compute constrained in the near term, stating cloud revenue would have been higher if supply could have met demand. It's a notable signal for cloud customers who may be experiencing capacity limitations on GCP, which we've been experiencing since before AI. Alphabet raised its 2026 capital expenditure guidance to $180 to $190 billion, with the CFO indicating 2027 CapEx will increase further. The $35.7 billion spent in Q1 alone on servers, data centers, and infrastructure reflects the scale of investment required to support AI workloads. Gemini Enterprise paid monthly active users grew 40% quarter over quarter, suggesting enterprise adoption of AI tooling on Google's platform is also accelerating at a meaningful pace. And Waymo, uh, surpassed 500,000 fully autonomous rides per week and is expanding beyond its traditional US cities, while a recent $16 billion fundraising round valued it at $126 billion, which is important because Alphabet owns the majority of the stock. [09:50] Justin: Yeah, it's crazy numbers for, for the data centers, you know, which we knew. And it's, it is funny, because, you know, is it part of the bubble? Is it gonna, is it gonna pop and then it's not gonna be here? I go through these waves of like, it's all gonna burn tomorrow, or no, this is just our new normal now. But it's crazy. I don't know. [10:14] Matt: Yeah, uh, Alphabet releases Sundar Pichai's remarks separately. So I, um, you know, we use CNBC to track these stocks so that we have kind of consistent reporting on these. But basically there were some interesting things in Sundar's, uh, notes. So basically Google Cloud revenue hitting $20 billion, which is 63% growth year over year. Backlog, you know, as mentioned, but the Gen AI model-based products growing nearly 800% year over year. So that's pretty huge. Yeah, they talked about the new TPU for AI inference with 80% better performance per dollar than the prior generation, which is a big deal. A bunch of that CapEx is going to go to TPUs, I'm sure. And then they also talked about some of the things they announced last week at Next in his comments. But overall, Google on the stock did very well. The analysts were very happy with Google. So happy, in fact, that their stock, once I find Alphabet, it's the weird one in my thing, GOOG. All right, there it is. Uh, basically on earnings day, they were at $347.68. The next day they jumped to $369, a $21 increase overnight. And now they continue to go up, and today they closed at $383. So their stock has basically gone up about $35 since earnings, uh, based on all of this positive news. So congratulations, Google. You won, you won the earnings. All right, let's move on to other exciting topics. We have a Cable Corner today. And so we, uh, basically as everyone knows, we love cables, undersea cables, that is. And so anytime we find a good undersea cable story, we like to talk about it. And so, uh, there's two articles, basically, about, um, cutting cables, though, which is not the best thing for the cables that we love. So first up, uh, a crucial Taiwan undersea cable was severed by an old shipwreck, and so basically Taiwan had to go back to backup microwave communications. Dongyan Island lost its undersea cable connection after an old shipwreck shifted during bad weather, prompting activation of backup microwave communications for the island's 1,500 residents.
The incident reinforces a known reality: physical undersea cables remain the primary backbone for reliable, high-bandwidth connectivity, while wireless alternatives like microwave links and LEO satellites serve only as degraded fallbacks. So, uh, Taiwan apparently monitors 24 undersea cable links around the main island and has blacklisted 96 vessels suspected of connections to China, showing how nations are treating cable infrastructure as a critical security asset rather than a purely commercial one. So interesting. [12:33] Justin: Yeah. [12:34] Matt: And leading into that is that China apparently tested a deep-sea electrohydrostatic actuator that can cut undersea cables at a depth of 3,500 meters. So, shipwreck or China? You answer the question. Yeah. They apparently successfully tested a deep-sea electrohydrostatic actuator at a depth of 3,500 meters, or roughly 11,500 feet. This represents a notable extension of previous capabilities, which topped out at around 2,000 feet. The device combines hydraulics, an electric motor, and a control unit into a single compact system, eliminating the need for external oil piping and making it more practical for deep-sea deployments from research vessels. Practical efficiency gains are measurable: a 2022 pipeline cut took 5 hours for a single 18-inch pipe, while a 2023 remotely operated vehicle could cut 38-inch pipes in 20 minutes, illustrating rapid operational improvements. Undersea fiber optic cables carry the majority of global internet traffic and financial data, meaning any credible threat to this infrastructure has direct implications for cloud connectivity. Uh, so yeah, so bad. So China is, uh, definitely preparing for if they ever go to war with us or anybody else, and they're going to cut all the cables. That's what I hear. [13:33] Justin: I mean, there's the jokes about the Great Firewall of China, right? This is, this is hardcore. [13:38] Matt: It's a, it's a physical firewall at this point. [13:40] Jonathan: Yeah. I mean, the best way to get any real connectivity is always layer 1, you know, check to see if the cable is there. It's, I feel like, a lot of the debugging that I always end up doing, at least at home and on side projects. So here, you know, layer 1, physical, is always gonna be important. It's always gonna be the fastest and most reliable. Then we have the other stuff, you know, but it's latency and other issues that are gonna arise further up. [14:07] Justin: Yeah. [14:09] Matt: Well, let's hope, uh, no cable cutting happens anytime soon, especially with the Iran war happening. Uh, and let's, uh, keep those cables flowing, and, uh, keep the shipwrecks away, apparently, as well. Linux, uh, 7 is now available, and is available to you on 7 distributions. This is not a milestone release; similar to when Torvalds jumped from 3.x to 4.x in 2015, the version bump just avoids unwieldy version strings. The biggest thing for 7.0 is that Rust support is now officially stable in the kernel after 5 years of incremental work, with native build tooling supporting x86-64, ARM, and RISC-V architectures, which has direct implications for system security and memory safety. The revamped scheduler introduces lazy preemption by default and adaptive scheduling domains, which should improve throughput for containerized cloud workloads and reduce latency on hybrid CPU architectures like Intel Alder Lake.
AI tooling is now a recognized part of the Linux development workflow, with Torvalds and stable kernel maintainer Greg Kroah-Hartman both noting a marked improvement in the quality of AI-generated bug reports reaching the kernel team directly. Cloud and enterprise users can test 7.0 today through rolling release distros like Arch Linux and openSUSE Tumbleweed, with Ubuntu 26.04 LTS and Fedora 44 expected to ship it within a few weeks. So you can now get to Linux kernel 7. [15:17] Justin: Fancy. [15:18] Matt: Yeah. [15:18] Justin: It's always neat with Rust. Yeah. [15:20] Matt: The Rust is a big thing, because now you can get out of C compiled binaries in the core parts of the kernel. So this should be a huge improvement to availability, reliability, and potentially security as well, as long as it was handled well. I think it's interesting; I wouldn't have seen Torvalds as a guy who loves AI. I mean, I don't know him personally, I've never talked to him, but just the few things I see him write on the, on the news, you know, that comes to my attention from the, the newsgroups and stuff like that. He seems like an old curmudgeon who would hate AI, but apparently not. [15:49] Justin: Did he really say he loved AI? It's like he just cared that the bugs that he was getting are better written, which is, you know, right. [15:55] Matt: Well, I mean, that's something. [15:57] Jonathan: Yeah. [15:57] Matt: I mean, he's, he's crediting AI for writing the better bug reports. So I mean, I guess that's a win. [16:02] Justin: Yeah. [16:03] Jonathan: Just proves that humans can't actually do anything. We can't even write our own bugs very well. [16:07] Matt: Oh, especially kernel bugs. [16:08] Justin: Like, oh, I've looked, I've read a few, and it's, you know, I don't understand anything. But yeah, it's like, I don't— [16:15] Matt: There's a, there's a defect in the memory alloc 4775734652 register, and you're like, uh, you've lost me. [16:22] Jonathan: Yeah. [16:23] Justin: Like, I know kind of what that means, but I don't know how you fix that or— [16:27] Matt: Yeah. Or, or what to do with this information. Yeah. I think I conceptually understand what we're talking about, but I don't understand it, really. Yeah. So in a story that's kind of scary for global warming, just 11 data center campuses in the US are linked to natural gas projects permitted to emit up to 129 million metric tons of greenhouse gases per year, which exceeds the annual emissions of countries like Morocco or Norway, even at half capacity. So that's crazy. Behind-the-meter power, where data centers generate their own electricity rather than drawing from the grid, has grown from 4 gigawatts in early 2024 to nearly 100 gigawatts in the US development pipeline by early 2026, driven largely by grid connection delays and utility cost concerns. Unlike traditional grid-connected power plants that cycle down based on demand, data center power plants run at near constant load, meaning actual emissions are likely to be much closer to permitted maximums than the industry-standard two-thirds reduction estimate customers often cite. Major AI companies including Meta, Microsoft, OpenAI, and xAI have made public carbon reduction commitments, but the scale of these gas projects could offset years of stated emissions progress, with Meta's Ohio projects alone potentially erasing over 10% of its claimed 4-year emissions reduction. Air permits do not guarantee construction.
Permit shortages are a real constraint, and several high-profile providers like Fermi face leadership and financial instability, so the full emissions scenario may not materialize. But the trend toward fossil-fuel-backed AI infrastructure raises long-term questions about cloud providers' sustainability commitments. Yeah, that could be bad. [17:49] Jonathan: Yeah. [17:49] Matt: Yeah. That's a lot of metric tons of greenhouse gases. [17:52] Justin: I mean, my, my AI, or sorry, my sci-fi-fueled, you know, narrative in my head is like, oh, this is, this is how the world ends. [18:00] Jonathan: So cool. [18:00] Justin: Like, huge advancement in AI. [18:03] Matt: Yeah, my, uh, my youngest son actually is, is very anti-AI, cuz it destroys all the water. That's what he tells me. [18:09] Jonathan: Yeah. [18:10] Matt: He's like, it's just— and I'm like, well, yes, water was definitely a thing in many data centers, but most data centers now recycle water. [18:16] Justin: Uh-huh. [18:17] Jonathan: Most of the new data centers recycle water. [18:19] Matt: The new ones. Yeah. The old ones don't. Yeah. But the, you know, the new ones at least, and the new ones I assume are what are being built to run most of these AI workloads, cuz of the power density requirements. I assume they're using recycled water plants. Um, so, you know, he can now come to me with this argument instead: all the CO2 for the AI from natural gas generators. And I know, like, I think it was xAI's data center somewhere in the South, it has, like, apparently 12 huge generator trucks just spewing, uh, emissions, and the neighbors all hate it. Uh, and basically, like, they're only approved to have 6, but they're running more, and you have a whole bunch of community concerns around data centers in the world, uh, as a whole. But this is a good reason. This one I can understand. The light pollution one is also one I very much understand, because the light pollution thing is ridiculous. Light— [19:07] Justin: You never think about that, you know, data centers being heavy on light, but it's, it's just giant warehouse space, and a lot of it, right? [19:14] Matt: So, well, the, the— [19:15] Jonathan: But where's the light? [19:16] Matt: The problem is, it's the light outside, because they have perimeter security they have to maintain, and to have perimeter security, you have to have lighting, and they're all using really bright LED lights. So there's an article about some, some poor couple, I think in Virginia, who lives like down the street from a data center, an Amazon data center, and it basically lights up their entire, their entire yard. [19:35] Justin: Yikes. [19:36] Matt: That's how much light comes out of that place. Uh, and my initial thought was like, well, why wouldn't they just turn the lights off when they don't need them? Like, no, you need the light for security. [19:44] Jonathan: It's, uh, it's how you have to get all those SOC and ISO audits done. You have to prove you have security. [19:49] Matt: And they also talk about the constant buzzing from all the AC units. And so it's just like constant noise. [19:54] Jonathan: The buzzing and the noise is, I've heard, a big issue for a lot of people, you know, because it's just there, it's always on, and it drives people a little bit crazy, from what I've been told. [20:06] Matt: That's what I understand too.
So in my town where I live, they are zoning, uh, this area. And you know, there's no plan to build a data center there, because there's no power in our area, so apparently they'd use natural gas, uh, generators, which I hadn't thought about. But basically they're like, well, technically it's a mixed-use zone, and so one of the uses could be data centers. And the community's just losing their minds about it. Like protests, and going to city council meetings, and like, we can't have data centers in our backyard. And I was like, I don't want them here either. But, uh, yeah. [20:37] Jonathan: But my next question is, how many of the people that are saying all that also, at the same point, say, I wanna use AI, or I use a SaaS application? You know, you're using it, you just don't want it next to you. Yeah. [20:51] Justin: I mean, these days you don't really have a choice but to use AI. Like Google Search, you're using AI. So many products, it's, it's built-in AI. [20:58] Matt: It's built into everything. Yeah. [20:59] Justin: Yeah. [20:59] Jonathan: So, right. [21:00] Matt: That's why I'm, I'm a big fan of on-device AI models. Like, you know, hoping Apple, they should roll out some SLMs onto the iPhone that you can use for some, you know, basic use cases and things. There's a lot of stuff that doesn't require a lot of AI compute power but can benefit from it. Well, if you are here on the show, you should raise your hand if you've been hurt by GitHub in the last month. I definitely have, and my hand is raised, and so are my co-hosts'. [21:25] Jonathan: Yeah, yeah. [21:26] Matt: You might have noticed that GitHub, uh, has had some bad availability, and some reports are saying that in the month of April their availability was potentially as low as 85%, depending on how you calculate it. Of course, GitHub doesn't say it was that bad. But, uh, you know, there's been a trend, uh, that, you know, things have been bad for GitHub, and, you know, originally people were saying, well, it's because of, you know, the Microsoft transition, or all the AI features they're building, or it's vibe coding causing all these problems at GitHub. And then someone did the math and was like, actually, no, their availability started suffering from the moment Microsoft bought them. So, just saying. [22:02] Jonathan: Well, then they also did the big push and said they had to move into Azure this year. [22:07] Matt: Yeah, and then the process. [22:07] Jonathan: Can't imagine that's helping them. [22:09] Justin: The timing seems awfully suspect. [22:11] Matt: Well, I'm sure they're gonna spin that as, like, well, by moving to Azure, we're gonna improve stability, 'cause the data centers that we're in right now, which are the GitHub data centers, are aging 'cause we haven't invested in them, to force this decision, I'm sure. [22:21] Justin: Probably. [22:21] Matt: But basically it's forced GitHub to write a blog post about their availability. And so GitHub's CTO published a transparency post acknowledging two recent incidents and outlining a scaling plan that has grown from a 10x capacity target in October 2025 to a 30x target by February 2026, driven by rapid growth in agentic development workflows since late 2025.
So basically they're saying a lot of their scaling problems are because the growth capacity they need is 30x what they thought, because of the amount of AI-generated code they're getting, which, okay, yeah, I can understand that. There was a merge queue incident causing incorrect merge commits for squashed merges in groups of more than one pull request, affecting 658 repositories and 2,092 pull requests, with no data loss but incorrect default branch states that could not all be repaired automatically. And on April 27th, an incident involved an Elasticsearch cluster becoming overloaded, likely from a botnet attack, which disrupted search-backed UI experiences across pull requests, issues, and projects. GitHub acknowledged that this system had not yet been fully isolated as part of their reliability prioritization. And this one I knew about, because literally I had a message, every time I went to a PR for days, that my PR may not be complete due to a search issue. And I was like, well, that's not good. GitHub says they're addressing scaling challenges through several technical approaches, including moving webhooks out of MySQL, redesigning session caching, migrating performance-sensitive code from the Ruby monolith to Go, isolating critical services like Git and Actions, and pursuing a multi-cloud strategy beyond their current Azure migration. GitHub is updating its status page to include availability metrics and has committed to reporting both large and small incidents, responding to developer feedback about needing better transparency during disruptions. This also resulted in an article — thank you, Matt — uh, basically from Mitchell Hashimoto, who, you know, started HashiCorp, basically. Uh, he retired from HashiCorp, you know, and they got bought by IBM, and he's basically been working on a product called Ghostty, a terminal emulator. And he basically said, I'm out of GitHub, I'm moving, I can't. It's been too unreliable and too unavailable for me to continue to remain on GitHub. And so he is moving his workloads elsewhere. Wow. [24:22] Jonathan: I mean, I've definitely been bit by some of these, especially the search one. That was, like, multiple days, and you couldn't find anything. You couldn't just load up pull requests, because anytime you pressed the pull request button, it was technically a search, 'cause it's like, is status open or not? So every feature was just hung for a couple days. [24:43] Justin: I wish I would've thought to blame a botnet attack for all of my Elasticsearch cluster problems. [24:49] Matt: I mean, you were basically being DDoSed by the, by the publishers. [24:54] Jonathan: You built your own DDoSing. Yeah. Yeah. [24:57] Justin: It is interesting, cuz, you know, I guess it's a whole bunch of searches. It sounds like there's no circuit breaker there between the UI layer and the search backend. That sucks. Not an easy recovery either. [25:09] Matt: Where does he, I mean, did he say in the article where he's gonna go? Is it GitLab, or, like, back to SourceForge? [25:16] Justin: Oh, he says multiple. He's not specific about where he's going. [25:23] Matt: Okay, so he's gonna, he's gonna spread his risk, basically. I don't— has anyone used Ghostty?
I, I know when he announced it, I was like, uh, I'm sort of interested, but I use Hyper as my terminal most days and I'm pretty happy with it, so I don't have a lot of desire to try a new terminal. [25:37] Justin: Yeah, I mean, I've been using iTerm2 for a decade and I'm not gonna change. So. [25:45] Matt: I mean, I do like Hyper, because I could write some really simple JavaScript plugins on top of it, which were pretty nice. And so that was how I had found it. But, uh, I don't know how well it's getting supported these days. I feel like it doesn't get a lot of updates, but it's a terminal. How many updates does it need? Right. So, uh, okay. So yeah, hopefully GitHub fixes their issues and, uh, you know, can improve things dramatically, cuz it's going pretty bad for them right now. And, uh, they're being mocked pretty mercilessly on Twitter and other social media networks about how bad their availability has been. It's maybe 85%; if that number's true, factoring in all these things, that's bad. [26:18] Justin: I imagine that's like the worst way you could calculate it, 'cause it sounds a little bit crazy. [26:24] Matt: I mean, I think someone was counting all of the last Azure outage, even though, like, most people were not impacted; you know, at some point there was like a very small handful of people who were impacted long-term. Yeah, it's that kind of thing. I mean, Anthropic's uptime is not great either. [26:37] Justin: No, it sure isn't. [26:39] Matt: So, you know, there's definitely, uh, some challenges. I think Claude AI says their uptime was 98.73% in the last 30— or over the last 90 days. So I think if you shorten that down, it's probably pretty bad too. But if you scroll through the incidents on their status page, it's like, there's a lot of issues on these platforms, and it's a scaling challenge for any of them, so I get why it's an issue. [26:59] Jonathan: I just love how everyone uses Statuspage, and you can always tell as soon as you load their status page, you're like, okay, this is Statuspage again. Like, you don't even have to think about it. [27:10] Justin: You know, it's such a, like, I never wanna build a status page ever again. And you know, like, I love that there's just this thing. It's so plug and play. [27:19] Matt: Yeah. I mean, it's a toil that I never wanna do. I mean, there are some cheap ones out there. There's definitely Atlassian's, which is statuspage.io, and then there's a couple others. [27:29] Jonathan: Yeah. [27:29] Matt: And they aren't all that expensive. Some of them are more expensive than others, depending on what features you want, but if you just want a simple status page, you can get it for like $30 a month. It's not bad. [27:36] Jonathan: Yeah. [27:37] Matt: But, you know, when I first started at a company, uh, we had a hand-built status page, and I was on call for incident command, and they were like, here's how you update the status page. You take a jump box into the data center, you go to this specific jump box that has access to the specific SQL node, but then you have to update this table by hand. And then once you do that, you have to go run a Jenkins job that then does a compilation of the static page, then publishes it to the website, all from this. And I was like, at 3:00 AM, I'm not doing that, because my brain cannot function to think through that process.
Uh, and so I forced us to change to a different tool. [28:17] Justin: Yeah. And thank you. I thank you for that. [28:20] Matt: You're welcome. [28:22] Justin: When they explained that whole process, it wasn't with any kind of tone that was like, we're sorry, or, this isn't a good idea, but we just haven't got to it. It was like, this is normal, perfectly normal, everything's fine. [28:33] Matt: He was like, I don't like this. Yeah. [28:38] Jonathan: I always tell people things should be simple enough and well-documented enough that at 3:00 AM, when I'm drunk, I should be able to figure out how to fix it quickly. [28:49] Matt: Mm-hmm. [28:49] Jonathan: And that. [28:50] Matt: Is not that. [28:52] Justin: I've actually written that into the sort of documentation that I share with my team. [28:56] Matt: That's exactly it, drunk at 3 AM. Yeah. There's a test. I can tell you, I can tell you that if this requires more than two brain cells at 2 in the morning to figure out, yeah, it's not going to go well. [29:08] Justin: I'm not your, I'm not your guy. [29:09] Matt: Yeah. [29:10] Justin: No. [29:11] Matt: Uh, so, you know, we're in this era of, um, AI finding bugs, and lots of really bad vulnerabilities going on. And this Linux issue here, CVE-2026-31431, Docker copy fail, is probably the worst one I've seen yet. It's a local privilege escalation vulnerability affecting virtually all Linux distributions, and I mean all, allowing unprivileged users to gain root access with a single Python script that requires no modification across distros. The exploit is particularly relevant to cloud environments, because it can be used to break out of Kubernetes containers, compromising multi-tenant systems, and to inject malicious code through CI/CD pipelines. The kernel patch exists across multiple versions, including 6.12.85, 6.6.137, and 5.15.204, and hopefully in Linux 7, but most Linux distributions had not incorporated those fixes at the time the exploit code was publicly released, leaving a substantial window of exposure. Confirmed vulnerable distributions included Ubuntu 22.04 LTS, Amazon Linux 2023, SUSE 15.6, and Debian 12, meaning cloud workloads running on major providers are directly at risk until patches are applied. The 5-week gap between private disclosure and public exploit release, combined with slow distribution-level patching, highlights an ongoing coordination challenge in the Linux security ecosystem that cloud operators need to account for in their patch management process. So yeah, uh, you do need to patch this as quickly as possible. Uh, it is bad. And, uh, you know, I feel like we're in this world where we're dealing with a lot of really bad vulnerabilities and issues, and I'm hoping that this is because of the age of AI, and, you know, researchers having new tools are able to find things easier. And this will be a bad year, and then we'll be more secure from then on. That's my hope. [30:44] Justin: I mean, the article specifically mentioned that a security researcher found this with an AI tool, so that's absolutely true, right? AI is helping surface some of these things. But yeah, this one's scary, just because anything like GitHub Actions or Jenkins or any publicly hosted sort of execution engine is very vulnerable. [31:01] Jonathan: So why is your Jenkins server publicly executable, Ryan? How do you allow that? [31:06] Matt: I, it's not, right?
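(A quick aside for anyone scripting a response to this: below is a minimal sketch of the kind of kernel-version check you could run across a fleet while waiting on distro packages. The patched versions, 6.12.85, 6.6.137, and 5.15.204, are the ones quoted above; the mapping logic and the 7.0 assumption are illustrative, and distro kernels often backport fixes without changing the upstream version string, so treat your distribution's security tracker as the source of truth.)

```python
import platform
import re

# Minimum kernel versions carrying the fix, per the versions quoted in the episode.
# Illustrative only: distros backport patches without bumping these numbers,
# so confirm against your distribution's security advisories.
PATCHED = {
    (6, 12): (6, 12, 85),
    (6, 6): (6, 6, 137),
    (5, 15): (5, 15, 204),
}

def parse_release(release: str):
    """Pull (major, minor, patch) out of a string like '6.6.137-1-generic'."""
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    return tuple(int(part) for part in match.groups()) if match else None

def looks_patched(release: str) -> bool:
    version = parse_release(release)
    if version is None:
        return False  # unparseable release string: investigate by hand
    minimum = PATCHED.get(version[:2])
    if minimum is None:
        # Series not in the table: assume only 7.0+ shipped with the fix.
        return version >= (7, 0, 0)
    return version >= minimum

if __name__ == "__main__":
    release = platform.release()
    status = "probably patched" if looks_patched(release) else "PATCH NOW"
    print(f"{release}: {status}")
```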
[31:07] Justin: It's more like, uh, like if you have like CircleCI, or, or some of these others. [31:11] Matt: Yeah, I know. [31:12] Jonathan: Just making fun of you here. [31:13] Justin: Yeah, yeah, yeah. There is no GitHub or Jenkins cloud, right? Like, even CloudBees isn't foolish enough to do that. [31:20] Matt: I don't think so. Yeah. It would be a really bad choice on their part if they did that. [31:24] Justin: It really would be. [31:25] Matt: I'm, I'm going to go look right now, though. [31:27] Jonathan: DevOps for the cloud. There's that. [31:31] Justin: Yeah. For all we know, some of these terrible tools use Jenkins underneath, but. [31:35] Matt: So it looks like their CloudBees unified platform, uh, is a, is a public, is a cloud version, but it doesn't look anything like the Jenkins that you know and love. So I'm hoping that it's something that's a rewrite. [31:47] Jonathan: I don't think the word love is correct, but I'll, I'll let you have it. [31:52] Matt: All right. AI is Going Great, or, How ML Makes Money, this week. GitHub is going to be making a lot of money, because they're going to start charging Copilot users based on their actual AI usage. They're shifting to this usage-based billing model starting June 1st, so you have no time to fix this problem. It replaces the current flat premium-request model with AI credits that map one-to-one to monthly subscription costs, with overages billed by token consumption across input, output, and cache tokens. The pricing variation is substantial depending on model choice, with OpenAI GPT output tokens ranging from $4.50 to $30 per million tokens, meaning a developer using GPT-5.5 for a dumb, quick task could see a meaningfully higher cost than when using lighter models for simple completions. Basic features like code completion and next edit suggestions remain outside the credit system entirely, but Copilot code reviews will now consume GitHub Actions minutes, adding another cost dimension for teams running automated review workflows. This shift reflects a broader cloud infrastructure reality: multi-hour autonomous coding sessions consume substantially more compute than a single chat query, and flat-rate pricing becomes difficult to sustain as agentic AI workloads grow in frequency and complexity. For real-world teams, the practical implication is that AI spending will now require the same cost governance as other cloud services, with model selection and session length becoming factors in budget planning rather than just feature preferences. And I'm sure you'll hear all about this in a few weeks, uh, in June at FinOps X. Yes. Yes. Everybody's gonna have FinOps visibility for AI workloads, 'cause this is probably the biggest gap in most of the platforms we're seeing: cost visibility is very problematic, and, like, what people use on them, et cetera, is a big issue. [33:21] Justin: And it's so extreme, right? Like, it's not like EC2 optimization, which we chased down for years, right? Because that was optimizing, you know, pennies, but over long-term things it could add up to real savings. This is like, real quick, you can have a bill. And I've been using Copilot largely because the premium request model allows much more freedom. And so this is gonna make me switch to another one, right? Like, depending on, you know, how it averages out. So I hope they actually build that appropriately. But I hate running out of quota.
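(To make the overage math concrete, here is a minimal sketch of how per-token billing adds up. Only the $4.50 to $30 per-million output-token spread comes from the reporting above; the model names, input and cache rates, the $20 credit allowance, and the session sizes are hypothetical placeholders for illustration.)

```python
# Hypothetical per-million-token rates. The $4.50 and $30 output rates match the
# spread quoted in the story; everything else here is a made-up placeholder.
RATES = {
    "light-model":    {"input": 0.50, "cached_input": 0.05, "output": 4.50},
    "frontier-model": {"input": 3.00, "cached_input": 0.30, "output": 30.00},
}

INCLUDED_CREDITS = 20.00  # assume credits equal to a $20/month subscription

def session_cost(model: str, input_toks: int, cached_toks: int, output_toks: int) -> float:
    """Dollar cost of one session: tokens times rate, per million tokens."""
    rate = RATES[model]
    return (input_toks * rate["input"]
            + cached_toks * rate["cached_input"]
            + output_toks * rate["output"]) / 1_000_000

def monthly_overage(sessions) -> float:
    """Spend beyond the included credits; this is what shows up on the bill."""
    spent = sum(session_cost(*s) for s in sessions)
    return max(0.0, spent - INCLUDED_CREDITS)

# One long agentic session on a frontier model dwarfs everyday completions.
sessions = [
    ("frontier-model", 40_000_000, 15_000_000, 2_000_000),  # multi-hour autonomous run
    ("light-model", 5_000_000, 1_000_000, 500_000),         # simple completions
]
print(f"Overage this month: ${monthly_overage(sessions):,.2f}")  # -> $169.30
```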
[33:56] Matt: And not only that, but you have to work on governance models. You're like, who's going to be in charge of the quota management? You know, who, who has approvals? Is it a manager approval thing? Like, it's— yeah, it's becoming a very complex problem very quickly, and, uh, I'm in the thick of it right now as well. It's nice, though, not to be the, uh, the person who's in charge of spending all the money, because, like, when I owned cloud, the CFO just yelled at me. Now I'm like, I don't own this. This is not me. And I've been very clear from day one, like, I didn't make the mistakes of cloud. Like, no, no, this specific person spent this money. [34:27] Justin: Mm-hmm. [34:28] Matt: Go talk to their boss. [34:29] Justin: Yeah. [34:30] Matt: It's much more pleasurable than in the cloud, where it's all a bit opaque. Like, oh, well, this user spun up an API, and then this thing that he spun up costs a lot of money. Uh, now it's like, no, this is a direct thing that person did. So it's a little bit easier too. [34:41] Justin: A little easier, but it's not always easy. Like, Bedrock just added the visibility per user, and you still don't get it at a lot of the other sort of model providers. [34:50] Matt: Oh, even Vertex still, there's a big gap there as well. Yeah. [34:52] Justin: Huge gap. [34:54] Matt: Oh, sorry. Vertex is no more. Agent platform. [34:57] Justin: Oh yeah. [34:57] Jonathan: God, Justin. [34:57] Justin: Gemini agent platform. [34:58] Jonathan: Can't you change your nomenclature overnight? [35:01] Matt: I know. It's funny. It took me forever to start calling it Vertex and not SageMaker. [35:04] Justin: The same name. [35:05] Matt: SageMaker 2.0. Yeah. Yeah. [35:09] Justin: The same name as every other Gemini product. [35:13] Jonathan: They've got everyone hooked now. All these companies have everyone hooked, and people are wanting to use it. How many of our listeners, I'm sure, have been told you have to use AI for X percentage of your job, et cetera, et cetera. And it's been so subsidized, and in this case, it's not gonna be subsidized anymore. You're gonna start to get bills. You're gonna have to teach, you know, your finance team, or just someone random: hey, while you're using this for this, don't use Opus for everything; go use Sonnet, you know, for these things, or go use Haiku for these simple tasks. It's gonna be a whole other learning adventure for your developers too, particularly here, because I'm sure everyone's developers have been told, AI, AI, AI, you have to be using it, you're not using it, how are you getting anywhere? Now you're actually going to start paying that bill. I think you're going to see some sort of decline, because, like Ryan said, and I know I do, I abuse that premium tier as much as I can right now. And soon I will not. [36:13] Justin: And there's so many, like, cases where you don't really have full control over the, you know, the tokens going in or coming out, right? Like doing a code review on a code base: do a code review just on this portion of my code base. [36:27] Matt: Sure. [36:27] Justin: But is that going to be as meaningful as something that can trace calls all the way through and realize that calling it this way is, uh, gonna have an impact on a module elsewhere? So it's, it's kind of annoying. [36:40] Matt: Well, and, and it's expensive too.
Like, I, I turned on Claude code reviews for my personal project, and I turned it off within a day, 'cause it burned credits on, like, every code review. I mean, I think we talked about it on the show when they first announced it, like, every code review's gonna cost like $30. Now, the code reviews are super thorough, and if I was an enterprise, I'd probably be interested in this, but for my personal development, I was not interested in paying for what I was getting. [37:00] Jonathan: Right. [37:00] Matt: Uh, but it's, it's weird, cuz, like, on the Cloud Pod, we have Claude Code for pull requests as well, but it's the old version, and the old version is, is good enough for what I need for the Cloud Pod. I'm like, how do I get that old version on my other project? I don't, I can't figure out how to do that. Mm-hmm. So, so I've actually been playing with other tools like CodeRabbit, which, you know, it's like $30 a month for CodeRabbit. I'm like, well, that's worth it to me. You know, that's just as good as what I see with Claude Code, and it does some very similar things. Uh, but again, it's the full system context that they don't all have, which is what you really need. It is, it is funny to me how many dumb bugs AI makes. [37:34] Justin: Yes. [37:34] Matt: That AI then catches itself making. Mm-hmm. Even as the same model. You're like, huh, that's so weird. [37:39] Justin: Okay. [37:39] Jonathan: It goes back to our prior conversation too, which is like, okay, we're finding a lot of bugs because we're leveraging AI. Now, what happens when we're not? And we're producing so much more code. If you look at that GitHub chart that they had of, like, the number of code lines committed and everything else, we're producing more code than ever before. And if people continue at that pace, but we're not reviewing it, we're not getting that second set of, I'm going to say, quote-unquote, eyes on it, even if it's another AI bot reviewing it, are we just going to be adding more bugs, or is AI going to be producing better code? Based on what I've seen, no, you know, and you'll kind of see that drop off. So I'm curious to see where on the spectrum everything falls over the next couple of years. [38:24] Justin: I assume if costs go up, they'll kind of drop proportionally to one another, just 'cause you'll, you'll generate less code and therefore you'll review less code. But we'll see. [38:33] A: Yeah. [38:33] Jonathan: Depends if you have your security department budget to go use the expensive AI tools. [38:38] Justin: Yeah. That's, yeah. I mean, it's true. I mean, when I look at my, my GitHub profile, like, year over year since AI's release, the, you know, the amount of code lines that I've contributed is, you know, hundreds of thousands more year over year than previous. And I'm not a developer, right? Uh, this is sort of tangential to my day job, so it's kind of crazy how much it empowers. But it's expensive, and as we talked about earlier, it's killing the environment. So it's like, we gotta balance all that and figure out what it is. [39:09] Matt: Yep. But until then, we're gonna all keep losing our jobs to AI, allegedly. Right. And not to the slowing economy due to the Iran war, tariffs, and everything else; you know, again, we're gonna continue to blame AI. [39:20] Justin: Well, yeah, and we'll use AI and then we'll forget how to do our jobs, which is fine. And then, so that's, you know, it's all downhill. [39:28] Matt: Yeah.
[39:32] Jonathan: There are a lot of cloud cost management tools out there, but only Archera provides insured commitments. It sounds fancy, but it's really simple. Archera gives you the cost savings of a 1- or 3-year AWS savings plan with a commitment as short as 30 days. If you do not use all the cloud resources you've committed to, Archera will literally cover the difference. Other cost management tools may say they offer insured commitments, but remember to ask, "Will you actually give me my rebate?" Archera will. Check out thecloudpod.net/archera to schedule a demo today. [40:14] Matt: One of the core issues that we haven't talked about that's depressing about AI is, of course, identity, and AI agent identity. And so there was a good article here from Snowflake, and, you know, they have a solution, but I think it's more interesting to talk about AI governance in general. The core issue Snowflake raises is that AI agents lack persistent, verifiable identities, meaning when an agent queries data, initiates a transaction, or produces a derived insight, there's often no audit trail linking the action to a defined authorization or scope. Snowflake argues governance must be embedded at agent creation, not added later, with explicit permissions, expiration windows, and scoped access that does not simply inherit from the invoking user's credentials. A notable technical concern is the derived insight problem, where an agent authorized to access HR data and financial data separately may not be authorized to combine them, and currently, uh, access controls on source data alone do not address this boundary. Snowflake's internal go-to-market AI assistant serves as a practical reference point, using role-based access, certified queries, and defined scope at creation to support over 6,000 employees answering 35,000 questions per week, with full auditability. For enterprises in regulated industries like financial services or healthcare, the absence of agent identity infrastructure creates concrete compliance exposure. So everyone's trying to solve this. Okta has a solution for this, you know, and I see some of the other players, like Delinea, trying to do things in this space as well. So there's lots of people trying to solve this, and there's really no right answer that seems perfect for all use cases. So, because it's— [41:33] Justin: You can't implement a solution without, um, having a specific set of technologies that you're combining together as a platform, or, like, a big purpose-built tool that you're spending all the money on to leverage. It's really tricky to do. Like, how do you gate all the agents, from, you know, someone running Claude on their desktop, to the application that's running in the cloud, to, you know, the chatbot that's on a website? It's sort of tricky to gate all of those things and put protections on all of those things. And, you know, I'm waiting for the next, you know, like, MCP or agent-to-agent protocol moment, where it takes off so that it can all be a common element, and you can leverage centralized tooling to sort of govern identities for AI agents in multiple places. Like, this solution works great for Snowflake. Gemini has released agent identity that works great on, you know, Gemini Enterprise, and there's one within Azure. But do I— how do I manage? Can I manage those the same way together? You know, not really. That's tricky. [42:39] Matt: Well, we'll keep an eye on this one, or when Ryan solves this problem for all of us, uh, let us know what it is.
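(As a concrete sketch of what Snowflake is arguing for, here is a toy agent identity record with permissions scoped at creation, an expiration window, and an audit trail. The field names and scope strings are hypothetical, not Snowflake's or anyone else's actual API; it just illustrates why a combined HR-plus-finance query needs its own grant.)

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Toy model of a persistent, verifiable agent identity."""
    agent_id: str
    scopes: frozenset            # granted at creation, not inherited from the invoking user
    expires_at: datetime         # explicit expiration window
    audit_log: list = field(default_factory=list)

    def authorize(self, required: set) -> bool:
        """Allow the action only if the identity is unexpired and every scope was granted."""
        now = datetime.now(timezone.utc)
        allowed = now < self.expires_at and required <= self.scopes
        self.audit_log.append((now.isoformat(), sorted(required), allowed))
        return allowed

agent = AgentIdentity(
    agent_id="gtm-assistant-017",
    scopes=frozenset({"hr.read"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
)

print(agent.authorize({"hr.read"}))                  # True: within granted scope
# The "derived insight" problem: combining HR and finance data is its own
# authorization boundary, even if each dataset is accessible separately.
print(agent.authorize({"hr.read", "finance.read"}))  # False: combined scope never granted
print(agent.audit_log)                               # every decision leaves a trail
```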
We'll, uh, we'll come back and talk about it some more. [42:47] Justin: Yeah. [42:48] Matt: For those of you who are big Claude Code users, uh, if you follow any of the Reddits, uh, for Claude Code, you'll know that there's a lot of people who always complain about, basically, Claude Code getting dumb over time. And this is typically being caused by the massive amount of change Anthropic is making to Claude Code. And so they, they basically, um, have kind of killed their own credibility. And in fact, they've seen Codex downloads increase like 4x in the last 2 weeks, just due to some of the Claude Code things. And so it's interesting, because I'm one of them. Yep. Anthropic confirmed 3 product-level issues degraded Claude Code performance over 7 weeks starting March 4th, including a reasoning effort downgrade from high to medium, a bug discarding reasoning history mid-session, and a system prompt capping responses at 25 words between tool calls. That's an issue. The issues were fixed as of April 20th and Anthropic published a postmortem, but the 7-week gap between the first issue shipping and any official public acknowledgment led to significant user backlash, subscription cancellations, and speculation across GitHub, Reddit, and X. A notable analysis by an AMD senior director of AI examined 6,852 Claude Code session files and 234,760 tool calls, finding Claude shifted from a context-gathering approach to a faster edit-first style that increased error rates on complex engineering tasks. And this is a real risk for teams building workflows on top of AI coding tools: undocumented behavioral changes cascade into downstream systems, derailing delivery commitments and eroding trust before any official acknowledgment arrives. And to this day, I don't update Claude Code automatically. Like, I wait and go read what people are saying about it before I do any of those upgrades, just because it has had an impact and has had problems in the past. So we are hoping to see, you know, Claude and Anthropic basically become more transparent about these issues, and hopefully address them quicker than waiting 7 weeks, because, yeah, it really hurt them in the public eye. [44:30] Justin: I also call BS; like, I've had issues much later than April 20th. And it always seems to come up right around the time when they're releasing a new model. [44:37] Matt: Yep. [44:38] Jonathan: No, that's their whole infrastructure crashing behind the scenes. [44:42] Justin: Well, but it's, it's them tuning it. [44:44] Matt: Yeah. [44:45] Justin: To deal with the load, for sure. And so, yeah, there are things where you, you're requesting a certain level of reasoning, and they're like, no, I'm gonna shift it two tiers or two layers down. There's nothing you can do about it. It's completely behind the scenes, except for you to get this answer that doesn't make any sense, with all kinds of hallucinations. [45:01] Matt: I mean, I think part of it is, you know, you had to spin up these new models, you know, demand's gonna be really high, so you start kind of chipping away capacity from your other ones to reallocate them, 'cause there's only so much finite capacity out there. But I, I think it's interesting they haven't released a new Haiku model since October of last year. They've released a new Sonnet and a new Opus. You know, and basically— [45:20] Justin: Two Opuses since then, right? [45:22] Matt: Yeah, two Opuses. That's true.
And I think it's just because they're like, well, if we give out another Haiku model, then it's even more capacity we have to deal with. And so I imagine it's a challenge of, you know, how do you scale something this large at this frequency and not piss people off in the process? But I think transparency goes a long way in this, and it would help if they were more transparent. Of course, they're not going to tell you, like, hey, we're going to release a new model next week, so we're pulling capacity. But, you know, other things, like acknowledging bugs, would be a big part of it. And so that would be helpful. [45:51] Justin: Yeah, this is interesting, cuz, like, as the world took on more SaaS, you did get a lot of that transparency, and you got, you know, companies like Amazon talking about how they deal with big spikes in traffic, and, and a lot of transparency. And it seems like we're going away from that. And I, I wonder if that's just, I think, cuz it's not really a SaaS business. [46:10] Matt: I mean, Anthropic did have a, a public postmortem about this, but again, it was like 3 weeks later. Yeah, it was like 3 weeks afterwards, and it didn't really say much. And it's gaslighting me. [46:19] Justin: Like, it's not true. [46:19] Jonathan: Yeah. [46:20] Justin: Right. Yeah. Like, there's no way you can tell me that, oh no, we fixed it all on the 20th. Like, no, no, you didn't. [46:27] Jonathan: I feel like a lot of the large companies and the massive companies do public RCAs and postmortems in that way. A lot of the medium to small companies don't, because I've had vendors of mine that just go down, and you're like, what happened? I'm like, well, I need an RCA, because you went down for the third time in two weeks. And they're like, it was a bug, we fixed it. I'm like, that's not a postmortem. Like, give me some real information. They're like, no, we don't do that. I'm like, okay. [46:57] Justin: I mean, I guess Anthropic's not that big, but it's kind of crazy for— [47:01] Matt: I mean, they're bigger than you think they are. They have 5,000 employees, allegedly. Okay. Yeah. [47:07] Justin: Okay. [47:07] Jonathan: I was thinking of, like, you know, the Cloudflares, you know, that level. But, you know, I guess Anthropic could be at that. [47:14] Matt: I mean, even Amazon has kind of gone away from the really good postmortems they used to do. I feel like the, the recent ones have been, you know, light on details, and, and a little bit more, you know, they blame the user. [47:25] Jonathan: Mm-hmm. [47:25] Matt: Which was something they would not have done in the past. So I don't know. Uh, well, the divorce, uh, is official, official, I think, finally. OpenAI and Microsoft have amended their partnership agreement to make Microsoft's license to OpenAI's IP and models non-exclusive, allowing OpenAI to offer its models through major cloud providers beyond Azure. Azure retains the designation of primary cloud partner through 2032, but that status is conditional on Microsoft's ability to continue honoring the arrangement, which introduces some ambiguity worth watching. The revenue share structure has changed notably: OpenAI will continue paying Microsoft 20% of revenue, but that obligation is now capped at an unspecified amount and only guaranteed through 2030 rather than running indefinitely.
And the removal of the AGI clause is a meaningful structural change, as the revenue share is now independent of OpenAI's technology progress, eliminating the previously contentious trigger that could have ended exclusivity based on a hard-to-define benchmark. For developers and businesses, this opens the door to accessing OpenAI models through providers like AWS or Google Cloud, which we'll talk about shortly, which could affect pricing, latency options, and procurement decisions depending on where the workloads already live. [48:24] Jonathan: I feel like whoever wrote this contract either did so long ago that the concepts that we're running into didn't exist, or it's just a really bad job negotiating it. Like, contracts should have details and metrics, very defined things. [48:41] Matt: But maybe it wasn't possible back then. I mean, how do you even have this conversation? I'm like, okay, Ryan, I have a new technology that you have never seen before that I think is going to revolutionize the world, and I'm going to need hundreds of millions of dollars of compute capacity from you that I cannot pay for. Would you invest? [49:03] Justin: No, no, exactly. [49:05] Matt: Yeah. So, so Microsoft held all the cards, and so they were able to, you know, Microsoft was willing to try this experiment with the, with the rights and revenue share things. Now this is all pre-ChatGPT. This is all back when they were doing GPT-1 and GPT-2 and no one saw those products. And you know, they were going out trying to find this. And so then finally they get ChatGPT, it becomes the killer app that unlocks the potential of all of this stuff. And now all of a sudden, you know, they're making money hand over fist and all of a sudden the deal doesn't look so great. And now, but now I also, you don't, you can't meet my, you can't meet my demands either, Microsoft. So now we have a different problem. Yeah. Not only has the business changed and we now know it's profitable and it's something, and we know we have something and you've screwed us over, but now you can't actually produce what we need. That's the problem. That's the change. So all the leverage shifted and that's why they're able to get this done. [49:52] Justin: Yep. And that's why it's been slow, right? They had to unwind it. [49:57] Matt: Right. [49:58] Jonathan: But like, and not like, I feel like there's so much ambiguity in there. Oh, just me. [50:04] Matt: I mean, the AGI thing was super ambiguous. Like, at least they got rid of that cuz it was like, what's AGI? I'm like, I don't know. [50:09] Jonathan: I don't know. [50:11] Matt: We'll know when we see it. Oh, good. Okay. [50:12] Justin: That's all. [50:13] Matt: That's, I love that in legalese. We'll know it when we see it. [50:18] Jonathan: Yeah. [50:18] Justin: When they nuke, when the AI gets smart enough to nuke us from orbit, but then we won't really be worried about that clause. Yeah. [50:23] Matt: We won't care anymore. Yeah, I think we, we talked briefly about Microsoft, uh, Meta releasing Muse Spark, uh, which is their new proprietary cloud-only LLM built from scratch with new infrastructure and architecture. And when we talked about it, uh, we didn't really know what was the future of Llama, uh, but apparently now Meta has confirmed Llama is dead. Muse Spark offers no downloadable weights, no self-hosted capability, and is currently limited to private API preview access.
Existing Llama models will remain available on major cloud providers but are expected to receive only incremental maintenance updates, with no continued frontier-level investment. This leaves behind a substantial user base, as Meta reported 1.2 billion Llama downloads before the pivot. There is no migration path from Llama to Muse due to fundamentally different deployment models, and switching to alternative providers requires rewriting vendor-specific APIs. So yay. [51:06] Justin: Yeah. [51:06] Matt: Developers looking to stay in the open ecosystem have 3 practical options: continue using existing Llama models, knowing that they will fall behind frontier competitors, switch to alternatives like Mistral, DeepSeek, or Alibaba's Qwen, or migrate to proprietary APIs from OpenAI, Google, Anthropic, or Meta. [51:21] Justin: Yeah, this, I, Llama seemed to fill a large gap, right? Like it was, and so, I mean, Qwen I, I see a lot of, but then other, you know, I don't see Mistral very much. And so like, it's kind of, kind of nuts for, for local stuff. And it kind of, you know, if you don't want to pay huge amounts of money and you want something that's a little bit more open source, it sucks if there's not a real, real option that really can replicate what you're experiencing with that, like a commercial grade one. [51:47] Matt: Yeah. I haven't really heard much of Mistral at all. Uh, Cohere is kind of out there too, uh, as another option potentially. But DeepSeek and Alibaba and Kimi, all Chinese. [51:56] Justin: Mm-hmm. [51:57] Matt: All highly popular and very successful and way cheaper than any of the OpenAI APIs. So, you know, it is, it's one of these where I'm sure Meta felt they were under pressure that they weren't monetizing Llama. And so because they weren't monetizing it, they were getting punished on their stock price. But the fact of the matter is if they built a better ecosystem around Llama that people could take advantage of to customize and tune Llama into different things, they could have built a, probably a pretty successful business around their open weight ecosystem. But Zuckerberg's just not that guy. It's not his style. And so I just, I, I think it was always sort of a weird choice that they went open first. And you know, I hope maybe someday they'll come back and rethink this, but I doubt it. [52:38] Justin: And without any of the traditional stuff, right? Like, you know, there's plenty of companies like Reddit that they've made money on the open source models of things, right? But, so nope. Yep. [52:49] Matt: So rest in peace, Llama. We barely knew you. You know, hopefully Gemma 4 and some of the others, like, you know, those models get, continue to get some love from Microsoft, you know, from Google, et cetera. So I would love to see OpenAI also release, I think actually maybe they have an open source model. Do they? I think. Really? Uh, I don't know if it's open source, open code, you know, I don't know how they define these is a little weird. [53:12] Jonathan: They have an open model by OpenAI, the GPT OSS model. I have seen that around. Yeah. [53:17] Matt: The O4. [53:18] Justin: Yeah. [53:18] Matt: Some of those. Yeah. The OSS 20B GPT stuff. Yeah. So they, they exist, but again, like, I don't know how open those are versus Llama was open. Like, I don't think I can take, I don't think I can take a GPT OSS model and go create a new model out of it. Like you could with Llama.
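To make the open-weights distinction concrete: the thing Llama gave you, and API-only models don't, is a checkpoint you can download, fine-tune, and serve yourself. A minimal sketch using Hugging Face transformers, assuming you've accepted the model's license on the Hub and have the hardware for it; the model ID is just an example of an open-weight checkpoint.

```python
# Minimal sketch: run an open-weight model locally with Hugging Face
# transformers. The model ID is an example; gated checkpoints require
# accepting the license on huggingface.co first.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # example open-weight checkpoint
    device_map="auto",  # spread across available GPUs, or fall back to CPU
)

messages = [{"role": "user", "content": "In one paragraph, what are open weights?"}]
result = generator(messages, max_new_tokens=200)

# For chat-style input the pipeline returns the whole conversation;
# the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```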
So, and then speaking of OpenAI, they released GPT-5.5 Instant as the new default model for all ChatGPT users, replacing GPT-5.3 Instant, and it is also available in the API as chat.latest. Paid users retain access to GPT-5.3 Instant for 3 months before it's retired. The hallucination reduction numbers are worth noting, with GPT-5.5 Instant producing 52.5% fewer hallucinated claims than GPT-5.3 Instant on high-stakes prompts in medicine, law, and finance, and reducing inaccurate claims by 37.3% on conversations flagged for factual errors. The model includes improvements in visual reasoning, math, STEM questions, and smarter decisions about when to invoke web searches, making it more capable across the kind of tasks everyday users actually run into. Personalization gets a notable upgrade with faster retrieval from past chats, uploaded files, and connected Gmail using a new memory source feature that shows users exactly what context shaped the response and lets them delete or correct it. For developers and businesses, the API availability as chat.latest means these factuality and personalization improvements roll in automatically, though teams relying on consistent behavior may want to pin to a specific model version given the default is now changing, although it'll go away in 3 months. So pin with care. [54:38] Justin: Yeah, that's difficult to adjust. I don't know. Most businesses I've worked at, 3 months isn't a whole lot of time to pivot to something new. No. I don't know what kind of changes it would take to go from 5.3 to 5.5. [54:50] Matt: So I don't know. [54:50] Jonathan: I almost wonder if it's like, if they're trying to make it just like, you know, here's, oh, I can't think of a package. You know, whatever open source latest package, you know, 1.1 to 1.2 to 1.3, just a standard, you know, software development package, you know, library that you're upgrading. And 99% of the time you just upgrade, you know, whatever the default is. As long as your application code seems to work, you don't care. I wonder if that's where OpenAI is trying to make it by just changing the default over to this, being like you're just always using latest. You're— was it SCA or SAST, whatever one, your dependency analysis just automatically updates it to the latest version of, of, of it as it goes. [55:30] Justin: Yeah, I mean, that part makes sense. Um, but it's like, if you, I know that I can't remember which one, if it was 4.0 to 4.1, but it was one of 'em where it was like, it changed the entire interaction and people were really upset. Like my, my, my friend that I've been chatting to for 6 months just went away, you know, became lobotomized. [55:50] Jonathan: I thought that was like through, I don't remember which versions, right? [55:54] Justin: But it's one of those things that companies who are using these for their interaction for customer service or for, for that, you know, I can see the need to want to sort of control that a little bit more, but I don't know what it takes to sort of update all your prompts to like, be less of a jerk, be, you know, be less friendly. That, you know, tuning all that stuff when you're embedding that into your own products.
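The "pin with care" advice is easy to act on. A minimal sketch, assuming the OpenAI Python SDK: pin a dated snapshot rather than a moving alias like the chat.latest one from this story, and keep a tiny smoke-test suite so repinning to a newer model is a deliberate, tested change. The snapshot ID below is an example, and the test cases are obviously stand-ins for real ones drawn from your product's prompts.

```python
# Minimal sketch: pin a dated snapshot instead of a moving alias, and run a
# small regression suite before repinning. Model ID and tests are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PINNED_MODEL = "gpt-4o-2024-08-06"  # dated snapshot, not a "latest" alias

def ask(prompt: str, model: str = PINNED_MODEL) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Crude smoke tests: run these against a candidate model before switching.
CASES = [
    ("What is 2 + 2? Answer with just the number.", "4"),
    ("Reply with exactly the word PONG and nothing else.", "PONG"),
]

for prompt, expected in CASES:
    answer = ask(prompt)
    assert expected in answer, f"regression on {prompt!r}: got {answer!r}"
print("candidate model passed smoke tests")
```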
[56:17] Jonathan: Yeah, I was gonna say, I think it's less of that, but just QA testing it, you know, and making sure that you've worked out and like used it and then like adjusted the prompts accordingly, because a lot of companies, you know, from doing, you know, security questionnaires and whatnot that touch that world of AI, they go, how are you testing it? How are you validating the results are accurate? How are you working through the full system of it? And that's what takes the time. [56:43] Justin: Yeah, everyone's making up answers there because there's not a real good way. Yeah, we asked the same question multiple times and it sort of said the same thing, so we went with it, you know. Like, well, let's move on to Amazon as we're already an hour into this. [56:59] Matt: Jesus. There's not a ton of articles here, but, uh, last week we talked about Quick and we were very confused, and it just happened we had gotten some Quick articles on Monday, and then Tuesday after recording cutoff is really when all the news really dropped on this. So basically they took a product called Quick that they used to have and they killed that product and they've created a new product called Quick. So what we were, we're very confused about was that there was still part of QuickSight in it, which is the authentication layer, which is still the case, uh, which I think is basically Amazon's public Cognito instances through QuickSight. I think that's how they basically— [57:29] Justin: it's the no AWS account signup model. Yeah. [57:33] Matt: Uh, and so basically Amazon Quick is an AI assistant that connects you to your apps, tools, and data to answer questions and take actions on your behalf, including scheduling meetings, sending emails, and following up on tasks with role-specific workflows for sales, marketing, finance, and operations. The new free plan lets users sign up in minutes using personal email or existing Google, Apple, GitHub, or Amazon credentials with no AWS account required, lowering the barrier to entry compared to most AWS services. The personal knowledge graph feature is notable because it learns individual user priorities and preferences over time, grounding responses on real business data rather than generic AI outputs. Pricing tiers include Free, Plus, Professional, and Enterprise plans, with higher tiers adding agentic and business intelligence capabilities, enterprise governance, and unlimited user support. Pricing details are available to you over at aws.amazon.com/quick/pricing. The no AWS account signup, which is the QuickSight thing, positions Quick as a standalone SaaS product rather than a traditional AWS service, which is a meaningful shift in how AWS is packaging and distributing AI tooling for most business users. The pricing on this is: the Free tier is $0, which allows you to chat about any topic or task, research anything in depth, automate repetitive stuff, turn ideas into real apps, turn conversations into deliverables, and connect the tools to Slack, Microsoft, Google Workspace, QuickBooks, and more. For $20 a month, uh, on an annual contract, or $25 per month billed monthly, you can get, uh, everything in Free plus Quick on your desktop, which is a proactive AI across email, messaging, and local files; shared spaces for your team with knowledge agents and automations that compound across people; Quick where you work, uh, browsers and Microsoft 365 extensions; and scale when you're ready with user management, centralized billing, and up to 300 users.
And then the Professional level is $20 per user per month plus a $250 infrastructure fee per organization: grow without limits, enterprise governance, RBAC, SSO, data sovereignty, and admin controls, dashboards and data visualizations that surface what matters, automate complex processes, support when you need it, and 25GB pooled storage per user. And then for $40 per user per month, you get, uh, additional— all of that plus author dashboards your way, certify and publish assets, and 50GB pooled storage per user. [59:30] Justin: I was just doing a query on Nova about, you know, what is the difference between Amazon Nova and Quick, just because I wanted to get it. And it, uh, it failed like you'd expect. Uh, so yeah, it talks about like the, the, uh, the Nova launcher on Android phones. I'm like, um, I don't think— [59:49] Matt: well, ironically, Amazon Quick is powered by Claude, because if you ask Quick, it says, I am powered by Claude, made by Anthropic. So it does not use Nova. [59:58] Justin: What? I wonder, like, that has to be some sort of limitation they built into Nova accidentally, right? [60:03] Matt: Because I, I think Nova needs a major update. So yeah, so there's the Amazon Quick for Desktop application as part of the paid tiers; right now it's available for free, it's in preview. So play with it now to see if you want it later. They do support documents and visual creations in chat. So it can create Word documents, PDFs, PowerPoint presentations, Excel spreadsheets. I have not tried any of those features yet, but I am sort of intrigued to try them. And it also integrates into Google Workspace, Zoom, Airtable, and many other SaaS applications through their connectors, uh, that are out there. So there you go. That's, uh, what Quick is. So now we understand it. So it's a desktop app. [60:37] Justin: It's, it's a desktop app. [60:39] Matt: It's basically what I said last week, I just was confused with the QuickSight part of it. I was like, I don't know exactly how it ties into QuickSight, but it's a desktop app. It looks like ChatGPT. And, uh, since I installed it last week, I launched it for the first time today, and it had, uh, basically 30 releases since the last time I opened this product. [60:56] Jonathan: Wow. [60:56] Matt: It's, uh, rapidly being evolved as we speak. [60:59] Jonathan: So they have their CI/CD workflow down properly. [61:02] Matt: Yeah, right. Yep. Amazon announced a couple of new things. Connect, of course, is their call center software, but first up, uh, now generally available is an AI-driven supply chain planning tool combining demand forecasting, constraint-aware supply planning, and automated exception triage into a single solution targeting retail, CPG, automotive, and industrial manufacturing sectors. The service positions itself as an overlay on existing systems rather than a replacement, which lowers the adoption barrier for enterprises that have already invested heavily in ERP or legacy supply chain infrastructure. That's nice, I guess, if you're into that. Uh, but the one that's more interesting to me is Amazon Connect Talent, which extends the existing contact center platform into the hiring space using AI agents to conduct structured voice interviews and score candidates consistently, which reduces recruiter workload during high-volume hiring periods.
The system draws on Amazon's internal hiring practices to power adaptive questioning and science-backed assessments, aiming to bring more consistency to candidate evaluation compared to traditional recruiter-led screening calls. Preview capabilities include ATS integrations, a mobile-first candidate portal, and the ability to evaluate hundreds of candidates simultaneously, making it relevant for organizations that experience seasonal or surge-based hiring needs like retail, logistics, or call centers. Currently available only in US East (N. Virginia) and US West with no public pricing announced yet for the preview period, so organizations interested in cost modeling will need to request access through the Amazon Connect Talent page to get details. One product consideration definitely is the regulatory and bias risk landscape around AI-led hiring tools, and the fact that if you make me do an AI hiring tool, I probably will not continue on the interview process. This sounds terrible. [62:32] Jonathan: I've done a— [62:33] Matt: it's already so bad. [62:34] Jonathan: I've done a few of these. [62:36] Justin: Yeah, have you done the AI interview? Because I've just done— I've seen a lot of like AI evaluation of like resumes and seeing the output of that, and they're not good. No. [62:45] Jonathan: Okay. So I've done in the past, I actually had a client I used to work with that did, you know, it's one of those things I sort of understood what they did in 2018, and now I fully understand what a multimodal model is, which I did not fully understand back then, where they did kind of games and their target was more like UPS drivers, FedEx drivers. You know, you're looking at like pools of people. And then you're, it was basically, you were scored based on how you did in those games compared with other people at that same position. So you had to have like a certain number of people at the positions and whatnot. But I've done a few of these recently as, you know, I've kind of played with, you know, one of those things you always do in life is you should always be looking at new jobs and everything else. But, you know, as I looked at this, it's interesting to do one. I did one voice call mainly outta curiosity of how not fun it is. And it's like, tell me your name, and you tell it, and then there's just like this pause, like the real-time-ness of it isn't quite there yet. So like, it's, it's almost like awkward still because it's not in real time and it's laggy and it's clunky, you know. So maybe Connect now has it because that was probably about 6 months ago I did that, but it was still, it was an interesting thing to do. I, I think if you ever actually make me do it as like an initial phone call, I might just say, thank you, I'll try again later. [64:06] Justin: Am I interviewing with someone in space? What's going on? [64:11] Matt: Yeah. All right, next up is OpenAI models, Codex and managed agents have come to AWS. They're basically expanding their partnership to bring OpenAI models, including GPT-4.5, to Amazon Bedrock in limited preview, giving enterprises a path to use OpenAI capabilities within existing AWS security controls, IAM identity systems, and procurement workflows. Codex, OpenAI's coding agent used by over 4 million people weekly, can now be configured to run with Amazon Bedrock as the model provider, meaning usage counts towards AWS cloud spending commitments and customer data stays within Bedrock infrastructure. Initial integrations include Codex CLI, the desktop app, and VS Code extension.
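As a point of reference for what "OpenAI capabilities inside your IAM boundary" looks like in practice, here is a minimal sketch of calling a Bedrock-hosted model through boto3's Converse API. The model ID below is a hypothetical placeholder; actual IDs for the OpenAI models in this limited preview weren't given in the announcement.

```python
# Minimal sketch: invoke a Bedrock-hosted model via the Converse API, so the
# call is authorized by IAM and usage lands on your AWS bill. The model ID is
# a hypothetical placeholder for the OpenAI preview models.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "openai.gpt-4.5-preview"  # placeholder, not a published ID

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user", "content": [{"text": "Review this function for bugs: ..."}]},
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```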
Amazon Bedrock Managed Agents powered by OpenAI is a new offering that handles orchestration, tool use, and governance for multi-step agentic workflows, reducing the infrastructure work required to move agents from prototype to production. All 3 capabilities launched today in limited preview, so availability is limited, not yet general, and pricing details have not yet been publicly disclosed beyond the note that Codex usage can apply towards existing AWS cloud commitments. So if you have a burndown you need to do on your, uh, cloud commitments, this might be a great way to do that. [65:14] Justin: And it won't take long. [65:15] Matt: It won't take long, I'm sure. [65:18] Justin: Yeah, I mean, I'm starting to like this model more and more just because it's, you know, it's something that a lot of enterprises already have, which is a cloud ecosystem, and then especially with Amazon and Bedrock, them releasing the sort of visualization of the IAM identities behind some of the usage on Bedrock is super powerful. So that's, I kind of like it. So this one sounds like it's a little bit more full-featured than what I've seen on like similar things from Vertex AI with managed agents and, and be able to orchestrate multiple like Codex things. So it's kind of neat. Yep. [65:51] Matt: Uh, AWS is adding a visual configuration editor for the CloudWatch agent directly in the EC2 console, letting users set up metrics, log sources, and deployment targets without manually editing JSON configuration files. The feature supports tag-based policies for automated fleet-wide management, meaning new instances launched by Auto Scaling automatically receive the correct monitoring configuration without manual intervention. From the instance detail page, operators can view agent status, update configurations, and troubleshoot agent health in one place, consolidating observability management that previously required separate tooling or CLI work. The visual editor is available in all AWS commercial regions at no additional cost for the management experience itself, though standard CloudWatch pricing still applies for the metrics and logs the agent collects. And having had to troubleshoot CLI-level CloudWatch stuff many times, thank God. Thank you. Yeah. I mean, I store my configuration once I get it right into Parameter Store just so I don't ever have to do that wizard on the client ever again 'cause it's so painful. But you know, being able to quickly see JSON configurations for log files would be great in a GUI. So thank you for that. I have not played with this yet. I meant to do that before the show, but definitely this potentially is a huge quality of life improvement for me. [66:59] Justin: Yeah. Especially if you're doing like custom log location and want to tweak it. [67:03] Jonathan: Mm-hmm. [67:04] Matt: If I didn't run so much containerized workloads, I probably would care a lot more cuz at least container logs are always, have always been centralized. Yeah. So yeah, it's really the stuff that's not in the container that I would need this for. [67:15] Justin: For the things hosting the containers, right? Like that's— [67:17] Matt: Yeah, exactly. [67:18] Jonathan: The ECS logs now, you know, typically go to CloudWatch Logs. So that CLI-based setup, I can still see it in my head, and I don't think I've done it in about 5 or 7 years, and I'm very happy they made this. [67:33] Justin: Yeah. That was truly, truly terrible.
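Matt's Parameter Store trick is worth spelling out, since it sidesteps both the old wizard and hand-editing JSON on every box. A minimal sketch, assuming boto3; the parameter name, log path, and log group are placeholders. Once stored, any instance can pull the config with `amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:AmazonCloudWatch-myapp`.

```python
# Minimal sketch of the Parameter Store approach: write a known-good
# CloudWatch agent config once, then have every instance fetch it by name.
# Parameter name, file path, and log group are placeholders.
import json

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/myapp/app.log",
                        "log_group_name": "/myapp/application",
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    }
}

ssm.put_parameter(
    Name="AmazonCloudWatch-myapp",
    Type="String",
    Value=json.dumps(agent_config),
    Overwrite=True,
)
```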
[67:37] Jonathan: Well, then you also had the multiple agents for a while, 'cause you had the CloudWatch agent, you had the SSM agent. You, I know they merged some of the— [67:44] Justin: I think there were 2 CloudWatch agents. I think there were 2 CloudWatch agents just by itself. Yeah. [67:48] Jonathan: Yeah. And then you had the SSM agent and I know they merged them all, I think at one point. Yeah. [67:54] Matt: They did merge them all finally. No, they, they did successfully do that. Yeah, they did it. [67:57] Jonathan: Okay. [67:58] Matt: But it was painful. [67:59] Jonathan: It was just painful. Everything about it was painful. [68:02] Matt: All right. Uh, I'm going live real time here. So I, I have not installed the CloudWatch agent on either of my machines. [68:12] Jonathan: So this real-time feedback's gonna be more interesting. [68:15] Matt: Yeah, yeah, I'm sorry. Well, uh, I'll be back next week with a follow-up. [68:19] Jonathan: Justin does a thing. [68:20] Matt: Yeah. Uh, all right, Amazon is apparently trying to turn its massive shipping operation into another AWS. Amazon Supply Chain Services, or ASCS, opens Amazon's fulfillment network to outside businesses across automotive, healthcare, electronics, apparel, and food industries, directly competing with DHL, UPS, and FedEx. Companies can store inventory in Amazon fulfillment centers globally and access its fleet of trucks, aircraft, and delivery vehicles. The service expands on the Supply Chain by Amazon offering launched in 2023, which initially focused on shipping products directly from factories. ASCS broadens this to include freight, distribution, fulfillment, and parcel shipping for businesses of all sizes. Early adopters included Procter & Gamble, 3M, Lands' End, and American Eagle Outfitters, suggesting the service is targeting established enterprises rather than just small sellers. Pricing details have not been publicly disclosed at launch. The parallel is worth noting for cloud practitioners: Amazon built internal infrastructure at scale, then monetized it as a third-party service. The same model was used when Amazon opened its web infrastructure to outside customers in 2006. ASCS follows that same pattern with physical logistics. So yeah, this could be really cool. I mean, they've always had some capabilities around this. Like if you sell on the Amazon Marketplace, uh, you can, you know, ship your product to the Amazon warehouse and they'll take care of fulfillment for you. Uh, but this is basically saying, look, you don't have to have any of your stuff going through the Amazon website. We'll just sell you the logistics network directly. And so if you want to ship your packages, I'm sure all of the tools like Shippo and others will add Amazon Supply Chain Services as one of those options. And if it's cheaper to ship through Amazon than it is to ship through DHL or UPS or FedEx, it'll tell you that and you can make that choice. [69:50] Jonathan: Didn't Toys R Us move to Fulfilled by Amazon in like '99 or something really early? [69:57] Matt: So they originally had a partnership where the Toys R Us went to amazon.com and that was a, that was a bad choice, because that basically moved all those Toys R Us customers directly to Amazon customers, um, and led to part of the deterioration of the Toys R Us brand. So that was not a great move early on. [70:14] Jonathan: No. [70:15] Matt: But that was, that was early dot-com and no one knew it was, everyone thought it was a fad.
So, but yeah, I mean like there have definitely been things like that, but it's interesting to me too because a lot of Amazon's fulfillment still comes through UPS and FedEx. So a lot of the last mile delivery is USPS or FedEx or these things. And so are they gonna, if you're using ASCS, your stuff still gets delivered by UPS anyways in some circumstances. So like in some ways, you know, does UPS and FedEx, are they a partner or are they a competitor? Kind of both. It's kind of both in some ways of this. So curious to see how this shakes out over the next year really, probably before we really see big impacts of it. But definitely on the news, FedEx and UPS stock were down. [70:55] Jonathan: Well, isn't there also, I remember there was a negotiation like a couple months ago with USPS and Amazon trying to finalize their, you know, multi-billion dollar deal too for that. So you're kind of looping in and taking all these different shipping vendors sort of along with you, but also tearing them down as they go. It's gonna be interesting to see where all this falls. [71:16] Matt: Yep. Very curious. I'm really curious about the pricing of it. That's gonna be the biggest part of it. [71:21] Jonathan: I don't think you'll ever really find out. [71:23] Matt: Like, can I, can I make, you know, Cloud Pod t-shirts and send, you know, basically send them to the warehouse and sell them through our website and then just have them get shipped by Amazon? That'd be awesome. I would love it. I don't wanna sell the t-shirts through the, what, through Amazon Web Services or the Amazon website. That's silly. [71:37] Justin: But I mean, like, there's like 1,000 TikToks, you know, about side hustles and how people do dropshipping. [71:43] Matt: Yeah. [71:44] Jonathan: Yeah. [71:45] Justin: Now you have AI handle the front end stuff. Like there, you really don't like have to touch anything, right? Yeah. [71:52] Matt: I mean, potentially this could be really cool. So, and then yeah, we'll see how it works out. But next up, AWS is launching AgentCore optimization in preview, adding automated recommendations, batch evaluation, and A/B testing to close the observe-evaluate-improve loop for AI agents running on Amazon Bedrock AgentCore. Previously, developers had to manually read traces and guess at prompt fixes without systematic data-backed evidence. The recommendations feature analyzes production traces from CloudWatch log groups and proposes changes to system prompts or tool descriptions based on the specified evaluator without touching underlying tool implementations. So this is a, this is a good handy feature. [72:25] Justin: So yeah, this is what we were just talking about with the ChatGPT model. So this is, yep. That's pretty sweet. [72:31] Matt: Yep. [72:31] Jonathan: It's like, if we had the foresight to read all the show notes ahead, we could have linked those two together a little bit better. [72:37] Justin: I mean, I read it. [72:38] Matt: We read them. I knew. [72:40] Jonathan: I forgot. I'm not gonna lie. [72:43] Matt: For the dozens of us who are very excited: Ruby 4.0 is now available on Lambda. [72:47] Justin: I'm not even sure there's dozens of you anymore. I don't know. [72:51] Matt: I don't, I can tell you that I haven't even written anything in Ruby 4.0, so I have no idea if this is good or bad. I have unfortunately moved on to Python and to Go for most of the things I code these days, but, um, and TypeScript a little bit as well. Unfortunately, JavaScript. Yeah, but, uh, it exists.
[73:08] Justin: It's a thing and it's frontend. You have to. [73:11] Matt: Yeah, you have to do it sometimes. So, uh, but anyways, uh, yeah, so this is great if you are into Ruby. Uh, I, if I wanted to punish myself with a dead language, I would go be really excited about this, but, uh, I'm happy at least it's available if I ever need it. [73:25] Justin: And full disclosure, Justin did try to kill this story. So, but I— [73:29] Matt: Yeah, you said I had to keep it. [73:29] Justin: I had to keep it in just to make fun of him. [73:32] Matt: Which is fine. And Peter's not here anymore, so. [73:34] Justin: Right. Yeah. [73:35] Jonathan: You know? [73:36] Matt: Yeah. AWS IAM is now providing higher maximum quotas for roles, role trust policies, instance profiles, managed policies, and identity providers. Some of these are increasing from 5,000 to 10,000 per account, or OpenID Connect providers from 100 to 700 per account. The role trust policy length increased from 4,096 to 8,192 characters, particularly useful for organizations with complex cross-account or federated access patterns. These increases are not automatic maximums but adjustable limits, meaning customers still need to request increases via the Service Quotas console. Boo. There's no additional cost associated with these quota increases as IAM itself remains free. I mean, the only one I actually understand is the role trust policy length, because again, the cross-account and federated access makes a lot of sense to me that that's much more complicated. But anything other than that, I'd hope you have automation for, 'cause 5,000 to 10,000 instance profiles, bleh. Yeah. Like that would suck. [74:25] Justin: This is the agent identity problem, right? That's, I think that's, they're getting ahead of it, especially the, the OIDC provider limit, I think is, you know, you're gonna have a whole bunch of agent apps that are handling that OIDC flow or, or authenticating into Amazon using OIDC. So this is gonna be something that you'll see more of, and hitting limits is gonna be probably pretty common given just how much spread there is with agent identities and how we don't even really know how to assign an agent identity to a workload. [74:56] Matt: We don't? It's just me. Yeah, right? Till it, till it's not. Until it's not. Well, that's good. If you want a terrible way to run AI agents in Amazon, you can now run them on WorkSpaces, as they now support AI agents operating virtual desktops in public preview. Agents interact with legacy desktop applications through mouse clicks, keyboard input, and screenshots without requiring any API integration or application modernization. This feature addresses a real enterprise problem. According to a 2024 Gartner report, 75% of organizations run legacy apps without modern APIs, meaning AI agents previously had no practical way to automate workflows in those environments. So I do know that there is a Claude Code plugin, or, well, you can run Claude Code on, uh, Amazon Lighthouse, what is it? Lightsail? It's that cheap thing. It's Lightsail, I think. But, uh, I feel like this is maybe them planting some seeds that we might get an OpenClaw, uh, implementation at re:Invent. [75:50] Justin: That is interesting. I was immediately thinking about like, you know, the old Mechanical Turk type things where this is just all this is going to be used for is sending me spam emails and texts and terribleness. But it is interesting to, like, have AI virtual desktops, sort of, on Amazon WorkSpaces.
'Cause you know, I don't want to use Amazon WorkSpaces, but you know, an agent doesn't have any choice. So, and they can't complain, right? 'Cause they're too happy. [76:16] Matt: And can I make it run in a Linux one too? That way they have to suffer the outdated Linux packages. No, you're gonna run in that Linux workspace and you're gonna like it. Yeah. [76:25] Jonathan: I'm just picturing somebody spawning every agent into their own workspace and having hundreds of workspaces scale up and down every second because each agent gets its own workspace. It sounds painful. [76:39] Matt: It's definitely an interesting choice. I, again, I assume this is an OpenClaw thing, you know, you need this to basically— interesting, it authenticates through IAM with full audit trails via CloudTrail, which of course you need. It does the implementation using the MCP standard, so the feature works with popular agent frameworks like LangChain, CrewAI, and Strands to manage the MCP endpoints exposed to the WorkSpaces stack. So definitely intriguing opportunities, uh, which you might be able to do with this. So we'll see. AWS WAF now includes an AI traffic analysis dashboard that tracks over 650 unique bots and agents, giving organizations visibility into which AI companies are accessing their content, what those bots are doing, and which endpoints they target most frequently. Thank you. Because you had bot— you told me that it was an AI bot, but you didn't tell me what they were doing. So I just had this big number. I'm like, I have no idea what that's doing to my site right now. [77:25] Justin: Yeah. [77:26] Matt: And then you had to go look at IIS logs or Apache logs and then you're just having a bad day and no one's happy and I'm cranky. And so thank you. [77:34] Justin: Thank you for finally doing this. And he's the executive. He needs it in picture form. [77:37] Matt: Yeah. Yeah. Which it is; the WAF dashboard is very pictory. [77:40] Justin: It is very pictory. But I mean, comparatively to, you know, when I started using it, like where you didn't have any logs, you were just like, it's, it's working. I promise. [77:49] Matt: Right. Uh, I also found lots of fun ways to make WAF really expensive. Oh, I bet there's all kinds of like really complicated rules you can turn on that. Like, oh, so why did my bill go up so much this month? Oh no. Yeah. [78:00] Justin: And they have different weights, right? [78:01] Matt: So depending on which one you use, so it's, it can be, and which one's being triggered and which one's being hit first and order of operations matters and like all kinds of things. So, yep. [78:09] Justin: I also found out the hard way. [78:12] Jonathan: I'm still stuck on like WAF 1.0 and I'm like, you guys are on the 2.0, which has been out for so many years. [78:18] Matt: Yes. 2.0 is, well, I think it's even like 2.5 now, so it's, I'm sure it is. [78:23] Jonathan: It's just been that long since I've really like gotten into that detail. I mean, I use it for side projects, things like that. It just runs, but I never really touch it. [78:31] Justin: Yeah. [78:31] Matt: I just use it to protect the Cloud Pod website cuz it gets a lot of, because it has to be open to the world cuz of course podcast listeners are global. They're everywhere. [78:38] Justin: Yeah. [78:38] Matt: And so, uh, you know, you had to leave Russia open and China open and all these places.
And so there's a lot of script kiddies who like to hit the site all the time, and, uh, so yeah, I protect it with WAF and then also a firewall on WordPress and all kinds of craziness just to make sure it has to be defense in depth. Yo. Yep, exactly. [78:54] Jonathan: Yep. [78:54] Matt: Because that is the only way to secure that thing. [78:56] Jonathan: So it's WordPress. It's never secure. [78:59] Matt: It's never truly secure. I know. All right. Google, uh, has signed a classified deal with the US Department of Defense allowing use of its AI models for any lawful government purpose. If you remember right, this is what Anthropic got in trouble for. So apparently the "don't be evil" thing, uh, Google is no longer applying to military use cases. So the agreement includes non-binding language stating Google AI should not be used for domestic mass surveillance or autonomous weapons without human oversight. But the contract explicitly states Google has no right to veto or control lawful government operational decisions. So we told you not to, but if you do it, I can't stop you. The deal also requires Google to assist in adjusting its AI safety settings and filters at the government's request, which raises questions about how its standard model guardrails will be maintained across commercial and government deployments. And for GCP Enterprise customers, this is framed as an amendment to an existing government agreement rather than a new standalone contract, suggesting Google is expanding its existing cloud and AI footprint within federal agencies. [79:50] Justin: Yeah, the AI safety settings is, is the part that really bothers me because it's, it's got to be, you know, the government saying don't, don't provide details about this thing that we're doing, or, you know, if someone asks about, you know, finding out our, our dark ops, you know, something, something, something, um, send us an email. And it's, yeah, I just, just kind of gross. Like, I use the tool, like, I get that, and I know why people don't like that, but, you know, this amount of interaction makes me feel real gross about it. [80:22] Matt: You can now generate files in Gemini, uh, which was something I thought you could always do, but apparently that was because I use Gemini Enterprise. Uh, but Gemini itself can now generate downloadable files directly from chat prompts, supporting a broad range of formats including PDF, DOC, Excel, CSV, Google Docs, blah, blah, blah. The feature is available to all Gemini app users globally at no additional cost beyond existing Gemini access, with outputs downloadable to local devices or exportable directly to Google Drive. Thank you. [80:48] Justin: Yeah, really handy. No longer have to copy paste everything. [80:52] Matt: Yep. Introducing Agent Gateway and an ISV ecosystem for security and governance. This provides a programmable data plane that sits in the request path for all agent traffic, covering user-to-agent, agent-to-agent, and agent-to-tool interactions, including MCP calls. Google announced a partner ecosystem of 14 security vendors integrated with Agent Gateway, covering identity governance from Okta, Ping, SailPoint, and Silverfort, DLP solutions from Symantec and Netskope, and runtime AI protection from Palo Alto Prisma, Cisco AI Defense, CrowdStrike, Zscaler, Check Point, F5, Exabeam, and Thales.
A key design principle across most integrations is that security controls inject into the existing request path without requiring application code changes, which lowers the barrier for enterprises to add governance to existing agent workflows. Identity-focused integrations address a specific challenge with non-human identities, where tools like Silverfort automatically discover agents, map them to human owners, and flag overprivileged or stale credentials at runtime rather than relying on static credentials. Pricing details were not disclosed in the announcement. Availability varies by partner, with some integrations like Imperva for Google Cloud noted as currently in preview. Organizations interested in a specific integration should contact the Agent Gateway partnership team directly. [81:53] Justin: This is one of the things I really focused on when I was at Google Next, just because it's— I think we're going to see this pattern grow because I can't imagine anything else is gonna work, right? Like I said before, it's really difficult to control where your agents are being executed from. And every solution up until now has really been, well, you have to modify the application code so that before your prompt gets analyzed, you send a request out to the service. And so now, now being able to sort of plug this in and have the visibility, it's something, it's not foolproof and you still have to like work with the rest of your business to sort of make sure that these things have their proper guardrails. But, but I'm happy to see tools like this and I want to play around. I've asked for demos. [82:37] Matt: I would like to get your demos too. [82:40] Justin: Well, most of them would probably be behind non-disclosure, so eventually. [82:45] Matt: Okay, fair. All right. Uh, Google Cloud is running a series of hands-on developer workshops across North America focused on building agentic AI applications, targeting platform engineers, security engineers, and data practitioners who want practical production experience rather than theoretical overviews. These are available all over the place: Sunnyvale, New York, Seattle, Austin, Texas, Toronto, Chicago, et cetera. So if you're interested in this, definitely check out this, uh, training, as it's free, and Google's free training is typically pretty decent. [83:16] Justin: So, very good. Yeah, definitely. They do such a good job at offering training. [83:20] Matt: Yes, they do. Which is a great transition to Azure, who's also offering free training. I don't know if it's good or not, but it's free. Uh, with the Microsoft Azure Infra Summit 2026, uh, it's a free virtual event running from May 19th to 21st, starting at 8 AM Pacific each day, targeting IT pros, platform engineers, SREs, and infrastructure teams with 300 to 400 level technical content, allegedly. The 3-day agenda is organized around build, operate, and optimize pillars, covering topics like AKS operations, IaC, storage, networking, backup, and DR. Now, no AI here, so go to the Google one for AI. If you care about cloud things and not AI, go to the Azure one. How's that? [83:55] Justin: This would be refreshing to actually go to. I'm kind of thinking about it. You're like, ah, to deal with servers and just to optimize that kind of thing. That'd be pretty sweet. [84:05] Jonathan: I mean, I guarantee you they're going to talk about AI, especially when they hit the SRE stuff and things like that. If it's 300, 400 level, there's no way they're not.
But as soon as everything says "no marketing slides," I'm like, oh God, there's definitely going to be a sales pitch. Like, it immediately causes me to have the opposite reaction. [84:24] Matt: Uh, next up for Microsoft, uh, in public preview, memory in Foundry Agent Services. Uh, so basically they're getting memory just like everyone else is. The memory feature integrates natively with Microsoft Agent Framework and LangGraph, meaning teams already building on those frameworks can adopt persistent memory without significant architectural changes. I, I feel so weird that all these companies are just getting memory. I'm like, it's been in Claude, it's been in OpenAI ChatGPT for a while. Same, uh, apparently only on the desktop side and in the user consumer space, not in the enterprise tools. So, okay. [84:55] Justin: It is kind of nuts, right? Like, yeah, it seems, and it's, there's just, there's tools being, uh, launched so you can put it into your app, I think is mostly the newness, but, you know, who knows what the people that were doing this before were cobbling together, but yeah, I mean, I think you're right. [85:11] Jonathan: I think it's building it into every app. So therefore you can kind of have your memory, which I think is also, you know, some of that grounding and what other people have done. So they use a standard memory and they leverage that for multiple— for all the sessions. So every session starts with the same memory. Yeah. [85:28] Matt: Microsoft Agent Framework has reached version 1.0 for both .NET and Python, bringing stable APIs and a long-term support commitment, which gives enterprise developers a reliable foundation for building production AI agent applications. The framework supports multi-agent orchestration and multi-provider model support, meaning developers can coordinate multiple AI agents and swap between different AI models without being locked into a provider. I mean, it feels like things are changing so fast right now that standardizing and long-term support feels sort of weird. Yeah. But I appreciate that they're trying something. [85:58] Justin: I mean, it's always been in name only anyway, right? So like, yeah. [86:03] Matt: I mean, but I mean, there is a, there is a belief that if you do 1.0, that you at least have to keep supporting it. You might have a 1.1 or 1.2 that's, you know, much better, but you can still, you know, force a lot of people through this path, I think is how I would see it. [86:16] Justin: Yeah. [86:17] Jonathan: Tell HashiCorp that with Terraform. Didn't it take like 10 years for them to get to that? [86:20] Matt: Well, they're all about IBM now, so all bets are, all bets are off. [86:23] Jonathan: All bets are off. [86:23] Matt: Sorry. [86:24] Jonathan: Tell, uh, tell back-in-the-day Terraform and HashiCorp. [86:27] Matt: How about that? Uh, next up is Microsoft is open sourcing their integrated HSM. Basically it's embedded in every new Azure server, designed to meet FIPS 140-3 Level 3 certification. It encrypts keys with hardened hardware at all times, meaning keys never appear in host or guest memory ever during cryptographic operations. So Microsoft announced at the OCP EMEA Summit that the HSM firmware, driver, and software stack will be open sourced on GitHub, uh, with an OCP workgroup launched to guide ongoing development.
The integrated HSM complements existing services like Azure Key Vault and Azure Managed HSM by adding server-local cryptographic protection, addressing the shared blast radius and network latency limitations of centralized HSM models. So I mean, this is not something you're gonna typically do anything with. So I don't know who the people who are gonna be contributing to this open source wise are, but I'm glad you did it, I guess. [87:20] Justin: Yeah, I guess it's open just so that people can test it. Like, it seems like— [87:24] Jonathan: yeah, I feel like it's trying to build that level of trust and everything else there. Like, look, we trust our software, here it is, you go look at it, to validate that we are doing it right. But the part that I find interesting, it's in the Azure v7 virtual machines, and getting capacity is going to be its own beast. Just to get that, you know, moving, moving all your virtual machines up a tier and then everything else, that's going to take some time and effort. So while it's there in v7, I think there was one region I was trying to get v6 in, and I couldn't get v6 to keep, uh, scaling. [88:00] Matt: Yeah. [88:02] Justin: I wonder if this is like the Azure equivalent of Nitro. [88:06] Jonathan: Yeah. [88:06] Matt: Kind of sounds like it. [88:07] Justin: Yeah. [88:08] Jonathan: I think it is, but like it's a piece of Nitro, I feel like. [88:12] Justin: Right. [88:12] Matt: Yeah. [88:13] Jonathan: Yeah. [88:13] Justin: One component of Nitro. [88:14] Matt: I mean, I guess from a trust perspective, it allows companies to evaluate it and make sure they're comfortable and maybe people will, maybe people will actually provide stuff to it, but it is, it's still just weird to me. So I don't know. [88:26] Justin: Nitro Enclaves specifically. [88:28] Jonathan: Sorry. [88:30] Matt: Uh, Microsoft's internal Project Lobster team is building CloudPilot, an OpenClaw-based cloud desktop environment that functions as a 24/7 autonomous personal assistant within Microsoft 365, growing from 100 to over 3,000 daily internal users in a single week as of May 1st. This is designed around a multi-agent architecture including a Chief of Staff agent, Executive Assistant agent, and Specialist agents, each with their own Entra ID, Exchange mailbox, and Teams presence for governance and identity isolation within Microsoft Graph. Security remains a central challenge, as Microsoft's own Defender team explicitly states OpenClaw should not run on standard enterprise workstations due to risks including persistent credentials, untrusted input injection, and vulnerability to prompt injection attacks turning into action injection attacks. The project differs from existing Copilot offerings like Copilot Tasks and Copilot Cowork in that it targets full-life context for knowledge workers, handling tasks like DoorDash orders or rescheduling personal calls without requiring constant user prompting. Microsoft VP Scott Hanselman has built a Windows node for OpenClaw that may surface at Microsoft Build in June, so there may be some near-term developer-facing announcements around Windows as an enterprise-ready agentic runtime environment coming soon. No pricing or GA timeline has been disclosed. [89:34] Justin: So, so this is either going to be amazing and exactly what everyone wants, which is, you know, a desktop app that does all the cool stuff, but it's backed by, you know, Entra and all the security stuff that your IT org is already running.
Or it's gonna be so nerfed and not able to do anything because it's backed by Entra, which your IT is managing and they don't give it any permissions. [89:57] Matt: I think we're gonna see a lot of OpenClaw across all the enterprise tools, so get ready, Ryan. Yep. Secure the enterprise. [90:06] Justin: Super excited. [90:08] Matt: Are you? [90:08] Justin: No, no, I'm not. [90:10] Matt: I'm sorry. I didn't pick up the excitement in that. [90:13] Justin: Yeah. Sure. [90:14] Matt: Well, it will, we'll see how things, uh, continue to evolve here, but, uh, it definitely feels like a lot more automation and agentic is coming across the board for everybody. [90:24] Justin: I mean, it's, it's something I want. It's, you know, the, the functionality is definitely something that we need to provide. [90:29] Matt: Mm-hmm. [90:30] Justin: Because it's a huge enabler, but it's also, we don't need to throw away all of our security, you know, controls with the, with the bathwater to mix metaphors or half a metaphor. Agreed. [90:42] Matt: And then finally, our last Azure story. Microsoft Foundry Model Router consolidates multi-model dispatch into a single endpoint that routes across up to 18 underlying LLMs, shifting the routing logic from application code to the platform layer. This matters for cloud architects who currently manage bespoke routing logic across model fleets. The model subset feature is the most governance-relevant control, letting teams define which vendors and regions their prompts can touch, set an effective context window ceiling, and bound worst-case per-call cost. New models added in future router versions are not auto-included, which is a deliberate compliance guardrail worth noting. [91:13] Justin: It's kind of nuts. I, I guess I was sort of making the assumption that the previous, you know, LLM router was a central endpoint, but it seems like you had to have a lot more logic at the app layer to use it. [91:25] Jonathan: Yeah, I think there were a few different pieces you had to tie together to make it work, and this is just giving you a single place. Mm-hmm. [91:32] Matt: Yeah, I think so. And that's it, guys. It was a long road to get here, and it was a long show, but a good conversation. Yeah. All right, gentlemen, we'll see you next week here in the cloud. [91:46] Justin: All right. Bye, everybody. [91:48] Matt: See you. [91:50] Justin: Another week of cloud news wrapped up. [91:53] Matt: Bolt will collect the news. [91:55] Justin: Justin will get the notes. Jonathan will write some code. Ryan will watch the perimeter and Matt will reluctantly watch Azure. Till next week for AI, Amazon, Google Cloud, and Azure. And hey, maybe even Oracle, who knows? Check out thecloudpod.net for our newsletter. Join our Slack, message us on socials, or leave a review. [92:21] Matt: Well, I have an after show today. [92:24] Justin: Do you think anyone's still listening? [92:25] Matt: Like, I mean, maybe. Marathon, marathon, marathon episode. Yeah. But, uh, basically, uh, you know, something happened. Uh, Tim Cook, uh, currently the CEO of Apple, has announced that he is transitioning to board member only, uh, as of September 1st. And he's being replaced by John Ternus, uh, who, for those of you who know, Ternus comes from a hardware background, which may mean a significant or continued increase in emphasis on Apple Silicon and device-level computing. Uh, but he basically has run the hardware division for many years now and did lead the Apple Silicon transition.
So Tim Cook, of course, replaced, uh, Steve Jobs, the late Steve Jobs, after he, uh, passed, uh, well, actually before he passed, uh, through his final stages of his life. And then he basically took what Steve Jobs had built with the iPhone and iPad and turned it into the behemoth that is now Apple. You know, and people have sort of hit or miss opinions of Tim Cook's tenure at Apple. I have relatively positive feelings about it other than some of the things he's done recently with the political side of things, trying to make sure that he doesn't get tariffs on his iPhones. He's done a lot of sucking up, which I think, uh, is not a great look for him. But, uh, you know, he'll be— he'll still remain on the board to hopefully keep doing those things and keep John Ternus clean. So, uh, that'd be nice, right? Yeah, the biggest thing that's interesting to me about this, and you know, we don't typically talk about Apple too much unless they're doing something in AI that's interesting in our space, or, you know, we're getting a new Mac version on Amazon. But, uh, you know, I, I'm kind of excited about this. Honestly, having a hardware person kind of come in who's more technical. I mean, Tim Cook is a very smart guy, but he's a logistics dude, like logistics all day long. He can take any product and make China sing and basically build that product at massive scale. That's what he's good at. That's what he's always been good at. And that's where he really has always helped scale Apple's business the way he did. You know, he's also helped transition them into more of a services business. So you have like Apple TV Plus and you've got other subscription services that didn't used to exist. That's been part of Tim Cook's kind of tenure there. But bringing in a hardware person, it sort of reminds me a little bit of bringing in someone like Satya Nadella at Microsoft, who, you know, abandoned a lot of things that Ballmer did that, you know, were sort of— I don't, I wouldn't call Tim Cook a Ballmer. I think he's better than a Ballmer. He's not just a pure sales guy. He does operationally understand how the business works, where I don't know if Ballmer ever did. But I think bringing a technical person back into the role at this time when, you know, Apple isn't doing much in AI, they're not, you know, they haven't really released a lot of new features. The Apple Vision Pro has not been well received or highly adopted, mostly because it costs a small fortune. And so I think it might be a really cool time for Apple and maybe they get their groove back. I don't know. [95:01] Justin: I agree. I think they, you know, like the only thing Tim Cook didn't do is invent the next iPod. Right? Like something that was splashy and big that, you know, that we kind of got used to with Apple 'cause we're spoiled. And, and, but yeah, it was more, more operationally focused. And so yeah, it's sort of a hardware guy gives you sort of that renewed faith that maybe, you know, maybe the new Apple car or whatever, you know, like is gonna come out and be this like amazing. [95:25] Matt: I always thought the car was silly. [95:26] Justin: I know, I know. I'm just using it as an example. [95:28] Jonathan: I feel like the, the MacBook Neo that came out was like, what is the big deal? And from what I was reading, like, they can't keep 'em on shelves at stores because it's a lower price point. So it's enabling a lot more people to get into the Apple ecosystem.
[95:44] Matt: Well, I mean, the MacBook Neo undercut the Surface and it's very similarly priced to a Chromebook. So it's suddenly a very viable entrant in the lower-end market, and it's a pretty serviceable machine 'cause Apple Silicon runs well. Most other low-end laptops of that size, I mean, the Surface was ARM, but a lot of other Windows computers are really neutered Intel chips. So, you know, it's a very powerful machine, and those others are difficult to use 'cause they're slow.
[96:14] Jonathan: Yeah.
[96:15] Matt: So while the MacBook Neo is not a computer I would ever buy, I could see it for a college kid who's not going into computer science and just needs a workhorse for college. It's a great system, and it starts at $600 for the base entry config. So again, it makes sense to me. It's just a matter of what makes sense for your use case. But, you know, low cost is not a bad thing.
[96:39] Justin: No, it definitely isn't. And that's largely why you see a lot of Windows machines in corporate environments: that cost.
[96:50] Jonathan: Yeah.
[96:50] Matt: I mean, my one beef is I think 512 gigs for disk is pretty tight.
[96:56] Jonathan: The OS is, I swear, like 100 gigabytes between upgrades. Yeah.
[97:00] Matt: I mean, I haven't had a Mac with less than 2 terabytes in years, so.
[97:05] Jonathan: Ooh, now I wanna check what mine is. I think mine's one of the lower-end ones.
[97:10] Justin: I got a lower-end storage one too, just 'cause I tend to use a lot of external storage for stuff.
[97:16] Matt: I mean, I do too. I synchronize all this stuff with my Synology and I've got all that too. But, you know, I could fit on 1TB; I just don't want the stress. Oh yeah. Especially now with all the models I download and run locally to play with, I definitely use a lot more disk space than I did a year or two ago, and I would've had a problem. But I'm a big RAM guy too. I typically buy at least 32 or 64 gigs of RAM on my laptops.
[97:39] Justin: Yeah.
[97:39] Matt: So, you know, this one is definitely not my config. But again, the 512 is probably the size I would go with, which is $100 more. That's probably the one thing I would tell someone who's looking at buying something like this: get the 512, please, just trust me. And I imagine part of that disk pressure is that right now macOS is still carrying support for x86. If they start cutting that down to just support ARM, which I think they'll finally be able to do this year with the Intel Mac Pro falling out of support with the next version of macOS, maybe they can reclaim a bunch of that space on the disk. 'Cause that'd be my one concern: the operating system is pretty big on the drive.
[98:16] Jonathan: I thought they already did cut Intel support. Maybe I'm wrong.
[98:20] Matt: I mean, you can still install, um, what do they call that? Rosetta. And Rosetta will emulate the Intel stuff, so that takes up quite a bit of space in there. And there are still apps that require Rosetta, unfortunately.
[98:36] Jonathan: I'm sorry, I was thinking the other way around.
[98:38] Matt: They have about 18 months before they need to drop it. I think this version of macOS still supports Intel Macs, and it's the last version of macOS that will, so.
[98:47] Jonathan: All right. I know I had a friend that had a really old Mac that was definitely Intel.
[98:52] Matt: Well, they've been dropping support. So technically that Mac probably would have run the current operating system, but I think the last Intel version of the Mac Pro is the only Intel Mac that's still certified for the operating system at this point.
[99:05] Justin: And so the hard— yeah, they won't release newer OS versions for older hardware, because it can't run them. And so your old Mac laptop is topped out at whatever it's on. Yeah.
[99:16] Matt: So, well, good. I'm excited. We'll keep an eye on it. Maybe he'll turn out to be a big cloud guy. Maybe he'll get Macs into Google Cloud and Azure. And maybe he'll fix—
[99:27] Justin: I don't want them to.
[99:28] Matt: And maybe he'll fix the pricing problem, because the pricing problem is horrendous.
[99:32] Justin: Yeah.
[99:32] Matt: Not with today's RAM and CPU prices, but— oh, you know, wait, well, really, the problem I have with the Mac Mini on Amazon is that you have to pay for a full day.
[99:41] Justin: Yeah.
[99:42] Matt: That's the thing. If they would fix that problem, I'd be super happy.
[99:47] Justin: It makes you feel like there's someone logging into it to wipe it.
[99:50] Matt: Right.
[99:51] Jonathan: Mm-hmm.
[99:52] Matt: Right. I mean, it's probably that level of load. Yeah. I mean, who knows? All right, gentlemen. Well, have a good one. See you later.
[99:59] A: All right.
[100:00] Matt: Bye. Bye.
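A footnote on that "pay for a day" gripe: EC2 Mac instances run on Dedicated Hosts, and AWS enforces a 24-hour minimum allocation period that follows from Apple's macOS licensing terms. Here's a minimal boto3 sketch of what that allocation looks like, with the region, availability zone, and mac2.metal host type as placeholder assumptions.

```python
# Minimal sketch: allocating an EC2 Mac Dedicated Host with boto3.
# Region, AZ, and host type below are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocating the host starts the billing clock, with a 24-hour minimum.
resp = ec2.allocate_hosts(
    InstanceType="mac2.metal",      # Apple silicon Mac mini host type
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Quantity=1,
)
host_id = resp["HostIds"][0]
print("allocated dedicated host:", host_id)

# Releasing early does not shorten the minimum charge, which is the
# "you have to pay for a full day" complaint from the episode.
# ec2.release_hosts(HostIds=[host_id])
```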