# 348: Compliance Theater Now Available as a Subscription

Duration: 70 minutes
Speakers: Justin, Matt
Date: 2026-04-02

## Chapters

1. [00:00] This podcast is hosted by Justin, Jonathan, Ryan and Matthew. Compliance Theater is now available as a subscription. We talk weekly about all things AWS, GCP and Azure. Episode 348, recorded for March 24, 2026: Compliance Theater. Matt in his forest background that he likes to disappear into sometimes during the show.
2. [01:31] Stryker attack apparently disrupted their Microsoft corporate environment on March 11. The Stryker attack that occurred on March 11 apparently disrupted their internal Microsoft corporate environment. As a standing recommendation, seriously consider dividing your access into two different accounts, one for user access and one for admin access. The amount of threat activity happening on the Internet right now is just crazy.
3. [05:09] FedRAMP authorized Microsoft's Government Community Cloud High despite insufficient security documentation. FedRAMP authorized Microsoft's Government Community Cloud High despite internal reviewers finding insufficient security documentation. The GCC High offering is specifically designed to handle some of the US Government's most sensitive data. It raises questions about the integrity of the federal cloud authorization process.
4. [06:50] Anonymous report claims Delve is a fraud that fabricates audit reports. Delve reportedly generates identical audit reports across all clients, meaning the auditor independence required by the AICPA and ISO standards is violated. Companies use Delve for all kinds of things, including HIPAA and GDPR compliance. The problem is, nobody wants to be audited.
5. [14:12] Substack claims Delve provides pre-filled board meeting minutes with fake evidence. The substack tried to make an issue out of the fact that Delve provides pre-filled board meeting minutes, policies and forms, implying that in some way this leads to providing fake evidence. And the final point was that Delve is not a manual platform with no automation. These five points don't have any legs unless you show evidence.
6. [18:50] Supply chain attack on LiteLLM. This dropped this morning: LiteLLM was found to contain a malicious .pth file that executes automatically on every Python process startup, with no corresponding release on the official GitHub repository. This incident highlights the risk of supply chain attacks through transitive dependencies, where a package you never directly installed can introduce malicious code. This makes dependency auditing and package integrity verification important.
7. [23:04] Google is open sourcing the GKE Cluster Autoscaler. KubeCon EU just happened and we have some updates here. Google is open sourcing the GKE Cluster Autoscaler. llm-d is a Kubernetes-native distributed inference framework built with Red Hat. I love the idea of having individual workloads on a cluster be able to delegate to managed and unmanaged.
8. [25:59] AKS networking gets several notable updates, including Azure RDMA NIC support. A new agentic container networking interface lets operators run natural-language diagnostic queries against live telemetry. Blue-green agent pool upgrade and agent pool rollback are now available in AKS.
9. [27:48] Snowflake announces Project Snow Work in Research Preview, an agentic AI platform. Snowflake is announcing Project Snow Work in Research Preview, an agentic AI platform targeting business users in finance, sales, marketing and operations. Project Snow Work ships with pre-built persona profiles for specific business functions. Targeted access is currently limited to a select group of customers.
10. [29:36] OpenAI is acquiring Astral, the company behind three widely adopted Python developer tools: uv for dependency and environment management, Ruff for linting and formatting, and ty for type safety enforcement. The goal is to move Codex towards participating in complete development workflows.
11. [32:15] Anthropic is releasing Claude Code channels for Telegram and Discord. Anthropic is releasing Claude Code channels in version 2.1.8, which allows developers to connect their Claude Code sessions to Telegram and Discord bots. This shifts from a synchronous chat model to an asynchronous persistent agent that will notify users when tasks complete.
12. [34:54] Anthropic has launched computer use capabilities in Claude Cowork and Claude Code. Anthropic launches computer use capabilities in Claude Cowork and Claude Code, now in Research Preview for Pro and Max subscribers on macOS. Claude can now directly control your browser, mouse, keyboard and screen to complete tasks where no direct connector exists.
13. [37:46] Auto mode offers a middle ground between conservative permission prompts and the risky dangerously-skip-permissions flag. Open Claw has launched Auto mode for Claude Code in Research Preview for Team plan users. It offers a middle ground between the default conservative permission prompts and the risky dangerously-skip-permissions flag. Auto mode adds some overhead to token consumption costs and latency per tool call, but I think it's well worth it.
14. [40:31] Amazon Bedrock AgentCore now includes an Invoke Agent Runtime command. The API lets developers execute shell commands directly inside a running agent session. It's available across 14 AWS regions, including all major US, European and Asia Pacific regions. I cannot wait for the first remote shell execution vulnerability to be created.
15. [42:03] Amazon Inspector now supports agentless EC2 scanning for a broader range of software, including WordPress, Apache HTTP Server, Python packages, RubyGems, and Windows operating system vulnerabilities. New Windows KB (Knowledge Base) findings consolidate multiple CVEs addressed by a single Microsoft patch into one finding.
16. [46:03] AWS turns 20 this month, which makes sense, because S3 just turned 20 last month. It grew from $0.10 per compute hour in 2006 to nearly $129 billion in annual revenue today. Jassy has said he thinks it could reach $600 billion in annual revenue by 2036.
17. [47:36] AWS MCP Server in Preview now automatically publishes metrics to CloudWatch under the AWS MCP namespace at no additional cost. Still available only in us-east-1 in Preview, so keep that in mind. This update provides a practical observability layer for your MCP servers.
18. [48:52] GCP Cloud SQL read pools are now generally available for Enterprise Plus edition. They let you provision up to 20 read replicas behind a single load-balanced endpoint for MySQL and PostgreSQL. Auto scaling helps avoid over-provisioning by scaling in during low-traffic periods.
19. [51:17] Google has unveiled an AI-native design canvas that converts natural language descriptions into UIs. Google Labs has evolved Stitch into an AI-native design canvas. It converts natural language descriptions into high-fidelity UI designs. Stitch connects to developer workflows through an MCP server and SDK. Pricing details are not specified in the announcement, so it's free for now.
20. [53:48] Microsoft is extending Nvidia Vera Rubin platform support to Azure Local. A new physical AI toolchain, available via a public GitHub repo, integrates Nvidia's physical AI data factory with Azure services. Skynet is very excited about these announcements.
21. [56:34] Microsoft has paused automatic deployment of the Microsoft 365 Copilot app to desktop users. The opt-out default model increased IT workload by forcing organizations to set policies on Microsoft's timeline rather than their own. The pause has no specified end date, and existing installations remain unaffected.
22. [58:12] Microsoft announced a savings plan for databases at SQLCon 2026: up to 35% savings versus pay-as-you-go pricing on a one-year hourly spend commitment, automatically applied across eligible Azure database services. GitHub Copilot is now generally available in SQL Server Management Studio 22.
23. [63:36] Microsoft is releasing the Azure Skills plugin, available at some website. The skills layer is the core differentiator here, encoding decision trees and sequencing logic. The plugin is designed to be portable across agent hosts, including GitHub Copilot and Visual Studio Copilot.
24. [65:51] Azure DevOps remote MCP server gives AI agents a hosted, authenticated connection to Azure DevOps data. The Azure DevOps remote MCP server is in preview as of March 17th. The server gives AI agents a hosted, authenticated connection to Azure DevOps data. Endpoint authentication runs entirely through Microsoft Entra. Only Entra-backed DevOps organizations are supported.
25. [67:56] Java 26 ships 10 JDK enhancements, including AI integration. I don't want to know what the AI integration is, because I'm sure it would scare me.
26. [69:01] Oracle announces a bundle of agentic AI capabilities for Oracle AI Database. Highlight additions include the Autonomous AI Vector Database in Limited Availability. The security angle is notable here, with Oracle Deep Data Security and Private AI service containers.
27. [70:19] Well, that's it for another fantastic week here in the Cloud, guys. Head over to our website at thecloudpod.net where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. We'll see you next week for, I'm sure, more AI news.

## Transcript

[00:00] Justin: Foreign.
[00:08] Matt: Where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure.
[00:14] Justin: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:18] Matt: Episode 348, recorded for March 24, 2026: Compliance Theater, now available as a subscription. Good evening, Ryan and Matt. How you doing?
[00:28] Matt: Hello.
[00:30] Matt: Good.
[00:30] Matt: How are you?
[00:31] Matt: I notice it's a lot brighter today, because it's post daylight savings time and, like, we're actually recording in daylight.
[00:37] Justin: Yeah.
[00:38] Matt: Speak for yourself.
[00:39] Matt: I guess it's the one benefit of losing that sleep.
[00:42] Justin: Well.
[00:42] Matt: Yeah. Not for Matt. Sorry. For Ryan and I in our lovely, brightly lit rooms. Matt in his forest background, a virtual background that he likes to disappear into sometimes during the show.
[00:52] Justin: It looks sunny where Matt is.
[00:53] Matt: Yeah, it looks like it's virtual. Yeah.
[00:55] Matt: Yeah, I got some lights over my head. Does that count?
[00:59] Matt: I mean, this is really great podcast content, but the fact that you have this forest I always love, because when you get up and you walk to go deal with kids or whatever, you disappear into the forest. It's kind of amazing.
[01:08] Matt: That's actually why I did it. Someone I work with at my day job and I were talking, and I got up to just walk around, and he goes, it looks like you're walking out on the forest path. And since then I've just kept it, because it's so perfect.
[01:22] Justin: Yeah.
[01:22] Matt: And just because I feel bad for our show, our listeners, I'm gonna put a screenshot of you so people can see you in the forest. Great capture, too. You're welcome. All right, let's get into some general news. We talked about it two weeks ago, and then I forgot as a bad podcast host that we were gonna mention it: the Stryker attack that occurred on March 11 apparently disrupted their internal Microsoft corporate environment, affecting order processing, manufacturing and shipping, but not their connected medical devices or cloud-hosted products. The attack vector was specific to Stryker's Microsoft environment, which meant products running on AWS and Google's cloud platform were architecturally isolated and unaffected. Stryker specifically stated that this was not a ransomware or malware attack. And government agencies including CISA, the FBI and the White House National Cyber Director were engaged, with domain seizures linked to the threat actors already executed. And basically they didn't say it in this blog post, but the reports are that someone phished one of their administrators who had full Entra admin access, and he then initiated a remote wipe on every Windows device in their entire corporate Entra system. So if you're letting your admins run around with full admin privileges to things like Entra.
As a standing recommendation, I would seriously suggest that you consider dividing your access into two different accounts, one for user access and one for admin access. Because that's embarrassing and real ugly. And I have, you know, HugOps for the Stryker team. Man, that's. Yeah. I can only imagine. I couldn't imagine having to rebuild my entire Windows estate at a company the size of Stryker in the middle of trying to do business and everything else, while you're providing medical devices. And I have to wonder, do you think this was a targeted attack by Iran because of the name Stryker, and they connected Stryker to Stryker missiles? Because when I first heard about a company called Stryker, I thought Stryker missiles. So if I can make that dumb connection, I'm sure people in Iran can make that connection too. Do you think they actually knew they were hitting a medical company, or do you think they thought they were hitting a military target?
[03:21] Justin: Oh, I never really thought about that. That's interesting. I just thought it was just, you know, a convenient target. But you're right. I guess maybe there's a difference.
[03:31] Matt: I don't know why else you would target this company of all the companies in America that you could target. Like, the company that no one's ever heard of is not the target that I'm going to go after if I'm Iran, or if I'm a threat actor trying to think, how am I going to do damage to another country? It just seems a little weird to me. But in general, I think, you know, watching just The Cloud Pod, we have a firewall and things that protect it. The amount of threat activity that's happening on the Internet right now is just crazy. And I don't have access to all the tools that Ryan does, because he's in security and I'm not allowed to touch those tools. But just what I see on The Cloud Pod is kind of terrifying.
Like, the amount of volume that's gone up in the last month, and just
[04:08] Justin: reading the feeds, it's the same. Like, it's crazy.
[04:11] Matt: Yeah. Hopefully Stryker gets. I think they're mostly back up now. It's been a couple weeks. I think they got their Windows estate mostly back up for laptops, and I know they've been recovering services, at least from what I've heard. But good luck to them, and I hope they separate those admin permissions in the future. And also, just because you're in IT doesn't mean you can't be phished.
[04:30] Matt: Remember, it probably means you're more likely to get phished, because you're targeted more because you have that permission set. Yep.
[04:38] Matt: I mean, security people too, because they also typically have a lot of the permissions.
[04:42] Matt: We just give Ryan permission to everything. It's fine. What could possibly go wrong?
[04:45] Matt: I mean, my favorite is always, like, when you get the vulnerability network scanner and they're like, hey, we want you to put that in the middle of the network, and we want you to open it to allow talking to all devices on all ports. Like, what could go wrong?
[04:55] Matt: Yeah, don't worry that the vulnerability scanning tool probably hasn't been patched itself.
[05:02] Matt: I mean, how dare you talk about Qualys that way. All right, an interesting article from Ars Technica. The headline is a little bit inflammatory, but I approve: cyber experts call Microsoft Cloud "a pile of shit". Approved it anyways. Apparently FedRAMP authorized Microsoft's Government Community Cloud High despite internal reviewers finding insufficient security documentation, issuing an unusual buyer-beware notice to agencies considering the product, raising questions about the integrity of the federal cloud authorization process when commercial pressures intersect with security evaluations.
The GCC High offering is specifically designed to handle some of the US Government's most sensitive data, making the documentation gaps particularly consequential, given that Microsoft had already been linked to two significant prior breaches involving Russian and Chinese state actors. The core technical concern was Microsoft's inability to adequately document how data is protected as it moves between servers in their cloud infrastructure, leaving reviewers unable to assess the system's overall security posture with any confidence. So "it's so secure that even the government chose it" is not, I guess, a good message when they say it was a pile of shit.
[06:06] Justin: Yeah, I mean, that's. If you can't adequately explain how basic things like encryption and security controls are handled in your environment, that's not good.
[06:15] Matt: Right.
[06:15] Justin: Because while it's not completely indicative of a security problem, it's highly suspect. Right. It just means you don't even have the basic evidence to describe how you're doing these security things, which is
[06:28] Matt: not great. But we just encrypt everywhere and we solve the problem, right? Why do we need more evidence than that?
[06:34] Justin: Then be able to describe that to
[06:36] Matt: someone who asks. Like, that's not that hard.
[06:40] Matt: It's encrypted.
[06:41] Justin: That's the part that's crazy.
[06:41] Matt: It's fine.
[06:42] Matt: Yeah.
[06:43] Justin: Oh, what algorithms are we using? Encryption. Nah, it's good. FedRAMP is very specific about what they allow.
[06:50] Matt: I mean, it's a good segue to our next story, though, because if we're talking about compliance as a service, or fake compliance as a service, maybe Microsoft was a customer of this story's subject, Delve, about whom an anonymous article was posted on Substack that is very long. Very.
I mean, I tried to print it off to give it to the summarizer, and it was like 256 pages of data that they collected about why they think Delve is a fraud. And basically they said Delve is a compliance automation platform which fabricates audit evidence, including board meeting records and test results, then uses Indian certification mills operating through U.S. shell entities to rubber-stamp reports rather than conduct independent verification. The core technical concern is that Delve reportedly generates identical audit reports across all clients, meaning the auditor independence required by the AICPA and ISO standards is structurally violated. Delve itself is actually acting as both the platform and the auditor, which, if you do anything in auditing, you know is a no-no.
[07:46] Justin: Yep.
[07:48] Matt: Companies use Delve for all kinds of things, including HIPAA and GDPR compliance, as well as ISO and SOC, probably the most popular. And they were a Y Combinator graduate a few years ago. But yeah, that's a. So I shared this with our security team and Ryan, and then I read the first two pages and I was like, oh, this is bad. And then I saw Delve had responded, and we'll talk about that in a second. But then I was like, I should probably read the full report before I go read the Delve response. And again, it's 200-some-odd pages, and I read through almost all of it, until I got to the really detailed screenshots, and I was pretty much convinced that I would never do business with Delve ever. Yeah. After reading through this. So if what Delve claims here in a second is true, then, you know, well done. But there's nothing about this that makes me think it isn't 100% accurate. Also, I came up with like 20 better ways to argue this away than I think Delve does here in a second. Any thoughts? Because some of the report is just so awful.
[08:51] Justin: Like, you know, it really is just sort of phoning it in, and I'm not a big fan of checkbox security, having that around just for compliance purposes. But this is really misrepresentation. Like, you look at things and it's certified by Delve; it's not certified by these other companies. And the specifics of the evidence that they listed in the report are, like, crazy. This is not cool. It's just generated. It's not even real in the slightest.
[09:26] Matt: It's bad.
[09:27] Matt: And the problem is, you know, nobody wants to be audited, especially if you have hundreds of customers. You can't, as a small business. You know, the SOC and the ISO are supposed to say to people, look, we're doing what we say we're doing, and here's how we're securing it; we're meeting these qualifications. Because otherwise a small business with, you know, 500 customers can't have them all audit it on a daily basis. You would tie up their security compliance team full time; if not, they would just go bankrupt doing these audits by third parties. So this is supposed to help companies, and I just worry that we're pretty much losing this. And I've seen it with other companies, not this bad, but where people are saying, your audits aren't good. So, cool, you have a SOC, but I still need you to provide me all this because I don't trust your SOC. And if we're going to lose that value in the SOC and the ISO, then we're going to have a big problem as an industry, because audits are going to run rampant.
[10:30] Matt: Yeah, I mean, I already know why I do that now when it comes up, and they're always big companies who are important, and you want to respect their needs and do what they need you to do. But there's also a big chunk of customers who are like, do you have a SOC 2?
And they're like, yes. I'm like, cool, thanks, I'll buy. But yeah, if everyone goes to "no SOC or no ISO is valid or trustable until I verify it myself," that is a bad precedent.
[10:57] Justin: And Delve, like, the idea of AI-generated evidence and stuff like that is such a real use case that, if it was an actual solution, it would be amazing. Right? Because the generation of that evidence is such a heavy-handed part of audits. But when you're doing sort of the fake AI, which is either sending stuff to the "machine learning" that's just an Indian factory somewhere, or just copy-pasting stuff, it doesn't really address that problem at all. And that kind of ruins it.
[11:36] Matt: So of course Delve had to respond to this, as you would need to, definitely, from a crisis perspective. And they had five points that they made. And after I read all the evidence, I was prepared for how I would potentially spin this in the Justin spin zone. And none of what I would've said is in here at all. So I was impressed by that already. I'm like, wow, you had other ways. Okay, cool. So the first one was: Delve does not conduct audits or issue fake SOC reports. It says the substack inaccurately claims Delve fakes compliance reports; this is not true. And then there's a bunch of words that don't say anything about anything. And then they say, you know, the substack says they rely on Indian certificate mills; this too is not accurate. Okay. Then show me the credentials of your third-party audit firms. If they're not what the substack says they are, if it's not true, show me the evidence. Third: standardization is inherent in compliance frameworks. The substack is actually saying Delve uses templates across the majority of reports.
This is misleading, they say: most modern compliance platforms allow clients to adopt a fixed control set based on widely accepted standards. I mean, yes, I have bought ISO and SOC certifications, and they sometimes have templates to help you get started. So I don't have a major problem with the templating thing, other than the part where, on the website, you basically get put onto the trust site as having all these things because you have the template. That's the part where I was like, yeah, no, not quite. And I kind of argue that the
[13:06] Justin: use of pre-filled templates is common. Like, templates, yes. Pre-filled and directly copied templates between companies, no.
[13:15] Matt: I've seen a couple that are, you know, here's our general one, put your name at the top and go from there. But these are 10, 20, 30-person companies, not big companies. So you really have to. I've seen more than I want to admit, working with enough small businesses out there: hey, we have to get this done, a customer requires it, let me go buy insert-GRC-tool-here. And it comes with, you know, ISO for dummies, which has all the standard security policy, infosec policy, vulnerability management, et cetera, et cetera. It comes with all those.
[13:53] Matt: Yeah, but you still go edit them.
[13:55] Matt: You have to adjust it.
[13:56] Matt: Yeah, you're not filling out a form in the app that then automatically fills everything in and produces this as your evidence. You're given that template with the fields, and you're like, okay, I have to review the whole thing and decide what I want to keep and what I don't want to keep in my security program, or work out what the questions are, whatever that's going to be. So I don't know.
The next one was: the substack tried to make an issue out of the fact that Delve provides pre-filled board meeting minutes, policies and forms, implying that in some way this leads to providing fake evidence. I mean, why would you pre-fill board meeting minutes? The agenda for the board meeting, sure, I'm okay with that. But again, you're going to lead to a situation where customers are just going to rubber-stamp that stuff, and that's a problem. Even if that wasn't your intention, that's what the outcome is going to be, because you have no ability to do anything about it. And if your accredited auditor isn't noticing it, because they're paid not to notice it or whatever, then that's a problem. And then the final one was: Delve is not a manual platform with no automation. Basically they said they have 120 automated integrations and subservers, not just 14 as alleged in the substack. But what the substack alleges is that of the 14 that they used, they were all just forms that you had to fill out that didn't actually go gather any evidence from the system it's supposed to integrate with. It just basically says, oh, you use Linear, cool, answer these three questions. So this is the entire blog post of the response. And I keep looking back thinking they're going to respond with more than this, right? Like, this is just a quick down-and-dirty, we-gotta-get-something-out response. But it's been four days and they haven't updated this thing.
[15:32] Justin: So I'm curious, what is your. How would you spin this?
[15:37] Matt: So I would say something along the lines of: the data that went out in the breach is basically a draft folder. For every new customer, we create a draft set of folders, and in the application we don't actually use the templates that are in the draft folder; this just happens to be an automation mistake, or something like that. And then I would basically show, here's a customer, redacted, with the template that we originally started with, and here's our final SOC report, and you can clearly see that they are not the same document. That's how I would argue it. Because basically part of the evidence used is that Delve was breached, and in that breach the attackers got access to over 180 customers' templates and, basically, the document store of those 180 customers. And so that's where they started doing some of the analysis of, like, this isn't really real. Plus, whoever authored this said he was a customer and has tried to use the product and had issues. Also, one of the things Delve has allegedly said is that this is a competitor just trying to smear them. I'm like, a 126-page smear job is a pretty aggressive amount of effort to go into a smear job.
[16:47] Justin: I feel like I've been angry, but I've never been 126-pages angry.
[16:51] Matt: Yeah. And then again, with the whole automation thing: okay, cool. Show me that one of the 14 they said is not an actual automation really is one. Show me a video of it actually doing automation. Show, don't tell. That's my big thing about their response: you just told me a bunch of things, but your credibility has already been challenged and decimated in the original substack post.
So these five points you make don't have any legs unless you show evidence. That's my problem with it.
[17:20] Justin: Yeah. Crazy.
[17:22] Matt: Again, the response is, trust us. But why should we trust you?
[17:27] Matt: Why should I trust this company? I've only ever seen their billboards on the way to the airport in San Francisco. You know, a company that clearly spends a lot of money on marketing.
[17:35] Justin: Yeah.
[17:36] Matt: So, yeah, I don't know. I definitely would question: if you have vendors who are allegedly Delve customers, whose SOC compliance you've accepted before, you should probably double-click into those right now, because I'm not as confident in them at this moment. You know, it's interesting, some of the biggest startups right now are allegedly their customers. Although one of them did come out and respond, saying, hey, we used Delve originally, but we moved off Delve a while ago; we have a completely different company doing our audits. They basically said right away, we're not using them. They didn't badmouth Delve, but they basically said we're not using Delve anymore. Which I guess is fine.
[18:13] Justin: Yeah, I mean, it makes sense, right? If you are a customer of Delve and people are now going to look at it that way, you'd want to get yourself out of the way of that and be like, no, no, no.
[18:24] Matt: I mean, especially if you are growing, you should change auditors every X number of years, so it's another person looking at it with a different lens and everything else.
[18:33] Matt: So they've seen different things. It's just like doing any type of thing like that: it's only as good as the things that you've experienced or your imagination.
And so if you haven't experienced this hack, then, yeah, bringing in someone else who's been in this scenario is always a good play. So we're talking about this story too, but I first want to take you guys on a quick journey with me here. Supply chain attack in LiteLLM. This dropped this morning, and I was curious about it. I'd heard of LiteLLM; I luckily had not downloaded any of it, but I happened to be on their website, and I'd like you both to go to LiteLLM's website. When you get there, scroll straight on down to the very bottom of the page, and I'd like to point out that underneath their brand and logo, which is a little train, it says LiteLLM, and it says SOC 2 Type 1 and ISO 27001. And who provides that to them? Wow, that's fantastic. So sleek. What does it say? [19:34] Justin: Secured by. [19:36] Matt: Secured by Delve. [19:37] Matt: I just also like how the ISO doesn't say the year. I don't know, it just bothers me. [19:42] Matt: And I also like how neither of those links to anything. [19:45] Justin: Yeah, that's what I was just clicking [19:46] Matt: through going, so, yeah. That's just fortuitous timing. [19:53] Justin: Timely. [19:53] Matt: Yeah, yeah, it was very timely. Like, oh, that's awkward. And thanks to someone on Twitter who pointed out, like, oh look, Delve is their auditor. Great. But basically, LiteLLM versions 1.8.2.7 and 1.8.2.8 on PyPI were found to contain a malicious .pth file that executes automatically on every Python process startup, with no corresponding release on the official GitHub repository, indicating the PyPI account was likely compromised.
The malware follows a three-stage attack pattern: it collects SSH keys, cloud-credential .env files and Kubernetes configs, encrypting and exfiltrating them to a domain unrelated to legitimate LiteLLM infrastructure, then attempting persistent backdoor installation via systemd and privileged Kubernetes pod creation. The attack was discovered because a bug in the malware caused an exponential fork bomb through recursive .pth file triggering, which crashed the host machine and made the compromise visible rather than silent. Any developer or CI/CD pipeline that pulled LiteLLM as a transitive dependency after March 24, 2026, which is today, should treat all credentials on that machine as compromised and rotate SSH keys, cloud provider tokens, API keys and database passwords immediately. Wow. And this incident highlights the risk of supply chain attacks through transitive dependencies, where a package you never directly installed can introduce malicious code into your environment, making dependency auditing and package integrity verification important. Yeah, that's bad too. [21:08] Justin: That's bad. [21:10] Matt: So the fact that this isn't even part of their official release and it just got dumped into their PyPI repo, that's pretty scary. That means their entire PyPI account wasn't properly secured either. Again, something that you would think you'd find in an ISO 27001 or SOC 2 Type 2. Things that I would hope would be found, but apparently not. [21:29] Justin: Yeah, I mean, supply chain attacks are just going to get more and more prevalent, because it is sort of a big area and it's really difficult to secure. And, you know, like, great, you've got to validate and sign everything and then make sure that your customers can verify it. Just a little bit of a pain, but it needs to happen.
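For the curious, the .pth mechanism the attackers abused is a stock CPython feature, not an exploit: any line in a site-packages .pth file that begins with `import` is executed by the interpreter's site machinery at startup. A minimal, harmless sketch of the behavior, using a throwaway directory rather than a real site-packages directory:

```python
import os
import site
import tempfile

# Create a throwaway directory standing in for site-packages.
d = tempfile.mkdtemp()

# A line in a .pth file that begins with "import" is exec'd by the site
# machinery -- the same hook the malicious LiteLLM releases abused.
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ.setdefault('PTH_DEMO_RAN', '1')\n")

# site.addsitedir processes .pth files the same way interpreter startup
# does for real site-packages directories.
site.addsitedir(d)

print(os.environ.get("PTH_DEMO_RAN"))  # the .pth payload has already run
```

The point is simply that no `import litellm` is ever needed: if the package is installed anywhere on the interpreter's site path, its .pth payload runs in every Python process on the machine.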
[21:51] Matt: I mean, it's just so many packages get pulled in that people don't realize. Unless you have a dependency analysis tool, you're not going to know that these things are even there. Even the random stuff installed with Homebrew on my laptop: I was updating something, and I was like, oh, it needed this package. What is this package? Why is it on my laptop and updating? It was something I installed a couple weeks ago to test with and just never uninstalled, but it had a sub-dependency in Homebrew that had to update today. I was like, what is this? [22:26] Matt: Well, I mean, even GitHub's dependency analyzer will pop up sometimes, like, you have a vulnerability in this package. And I'm like, I don't remember installing that package. And then you look, and oh, it's a transitive dependency of this other thing. And, oh great, now I have to go find that. This LiteLLM thing did very much remind me, though, of the old Node.js NPM outage, when the guy who created the left-pad NPM package decided to rip it out and all these websites broke across the Internet. [22:52] Justin: Yeah. [22:53] Matt: And you're like, wow, that was really simple. Why did you do that? It very much reminds me of that for some reason. Same vibes. [23:01] Matt: All I can think of is the XKCD about that now. [23:04] Matt: Yes, that's a good XKCD. All right. KubeCon EU just happened and we have some updates here. We've pulled them together so we can talk about KubeCon as one batch, basically. Google, first of all: GKE Autopilot is no longer a cluster-level decision made at creation time. Standard clusters can now enable Autopilot compute classes on a per-workload basis, removing the need to create entirely new clusters when workload requirements change.
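As a hedged sketch of what that per-workload opt-in looks like (the `nodeSelector` key is GKE's compute-class selector; the Deployment itself and the class name are illustrative placeholders, so check GKE's compute-class docs before copying):

```yaml
# Illustrative Deployment: opting a single workload into an Autopilot
# compute class on an existing cluster, instead of creating a new cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "Balanced"  # requests the class
      containers:
      - name: app
        image: nginx:1.27
```

The interesting part is that everything else about the cluster stays put; only workloads carrying the selector get Autopilot-managed compute.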
Google is open sourcing the GKE Cluster Autoscaler, one of the core infrastructure provisioning components, with the goal of making it available to the broader Kubernetes community as a vendor-neutral tool. llm-d, a Kubernetes-native distributed inference framework built with Red Hat and Nvidia, has been accepted as a CNCF sandbox project, addressing inference-aware traffic management, multi-node replica orchestration and key-value cache offloading in a hardware-agnostic way. Google released an open source DRA driver for TPUs, coordinated alongside Nvidia donating their own DRA driver, establishing dynamic resource allocation as a shared standard for describing specialized hardware across Kubernetes workloads. And TPU support is coming to Ray version 2.55 with backing from both Google and Anyscale, which is pretty nice. [24:10] Justin: Yeah, I mean, super nice of them to open source that, because it does seem like a very powerful thing to use. I love the idea of having individual workloads on a cluster be able to sort of delegate to managed and unmanaged. It's kind of neat. [24:28] Matt: One of the cool things about llm-d, and I included a link to an article on just that as well, is that it has model-aware request routing through the llm-d endpoint picker, which considers key-value cache hit rates, in-flight requests and queue depth to direct traffic to the optimal backend. So if you're running multiple LLM models, potentially a fine-tuned SLM or a larger LLM or a RAG model, it'll basically determine, based on what you're asking, which one the request should go to. Which I think is super nice to have built right into Kubernetes. [24:56] Justin: So yeah, that's very cool. [25:00] Matt: That is nice. This is where I say Kubernetes is its own cloud. [25:04] Justin: It absolutely is. I mean, it's weird, because I'm trying to think through that. Like, I like the idea of having that.
I don't know if I like it built into Kubernetes, like you said. It's neat to have as an application, but then it's this weird shared dependency across all workloads on your cluster. [25:20] Matt: I mean, I don't know that we'd put it on a shared Kubernetes cluster, but a dedicated Kubernetes cluster for this use case I think is fine. And again, I would assume that each Kubernetes node would get this llm-d router, and then you'd basically use a network load balancer to connect to it, so you wouldn't necessarily have a dependency on one device. But I see your concern. Architecture is important. [25:43] Matt: So yeah, I mean, it's the equivalent of a model router. I think Azure has one. AWS released one. [25:50] Matt: I don't remember. I think there's one, but I use little of AWS's AI stuff right now, so I can't say for certain. Azure also was at KubeCon and announced a few things too, including the dynamic resource allocation mentioned in the previous article: Microsoft's DRANet now includes upstream support for Azure RDMA NICs, meaning GPU-to-NIC topology alignment is handled at the scheduler level rather than through manual config. AI Runway is a new open source project under the KAITO umbrella that provides a common Kubernetes API for inference workloads, with a web interface, Hugging Face model discovery, GPU memory-fit indicators and real-time cost estimates, competing with llm-d. AKS networking gets several notable updates, including Azure Kubernetes application networking for identity-aware mTLS and traffic telemetry without a full service mesh, WireGuard encryption at the node level via Cilium, and pod CIDR expansion that lets clusters grow IP ranges in place rather than requiring a full rebuild. Pricing for Advanced Container Networking Services features like Cilium mutual TLS is not specified, yay.
On the observability side, AKS now surfaces GPU utilization directly in Managed Prometheus and Grafana, closing a monitoring gap that previously required manual exporter configuration. A new agentic container networking interface also lets operators run natural-language diagnostic queries against live telemetry, reducing time to identify network issues. And finally, blue-green agent pool upgrades and agent pool rollback are now available in AKS, letting you provision a parallel node pool with the new configuration, validate it, and revert to the previous Kubernetes version and node image if problems appear. So, all good updates from KubeCon EU. [27:18] Justin: Yeah, and if you've ever debugged an issue on Kubernetes, you know that there's logs everywhere that you have to go review and correlate across each other. So having an agent that can go and look across all those places to diagnose issues is fantastic. I've been using that more and more lately. [27:35] Matt: Yeah, and the blue-green deployments for agent pools, I thought that was already there. I feel like that's the way I've always done my upgrades, but I'm surprised it wasn't in AKS. [27:48] Matt: Let's move on to AI is How Machine Learning Makes Money. First up, Snowflake is announcing Project Snow Work in research preview, an agentic AI platform targeting business users in finance, sales, marketing and operations who need to complete multi-step data workflows without writing code or relying on technical teams. The platform differentiates itself from general AI assistants by grounding outputs in an organization's existing Snowflake data and automatically enforcing existing RBAC and governance policies. So basically, Snowflake believes in bringing the AI to the data, not the data to the AI. Which makes sense. Project Snow Work ships with pre-built persona profiles for specific business functions.
So a finance user gets workflows tuned to FP&A KPIs and close narratives, while a sales user gets pipeline-risk summaries, rather than a one-size-fits-all interface. Highlighted use cases include compressing financial-close storytelling from days to a single workflow and replacing manual pipeline roll-ups with automated executive briefs, to give listeners a concrete sense of the time savings being targeted. Access is currently limited to a select group of customers in a collaborative research preview, but you could probably get access to this pretty quickly as well. [28:47] Justin: Yeah, I mean, I do like the idea of bringing the AI to the data rather than the data to the AI, which is a common problem, especially in enterprise platforms. I worry a little bit that the RBAC and authorization in Snowflake is very complex, and I wonder if people are actually going through and defining those in a way that gives proper segmentation. But I guess they have access to the data today; they just have to know how to query it. Having natural language is the same security issue, I suppose. [29:22] Matt: Let me answer that question for you. They're not. [29:26] Matt: Maybe they're going to use Delve for the security of it. [29:30] Matt: Yeah, I thought about going that way with my comment, but I decided it was too soon. But you just went for it. [29:36] Matt: OpenAI is acquiring Astral, the company behind three widely adopted Python developer tools: uv for dependency and environment management, Ruff for linting and formatting, and ty for type-safety enforcement. The Astral team will join the Codex team after the deal closes, pending regulatory approvals. Codex has reached over 2 million weekly active users, with 3x user growth and a 5x usage increase since the start of 2025. And this acquisition appears aimed at deepening Codex's ability to operate across the full Python development lifecycle, rather than just generating code snippets.
The goal is to move Codex towards participating in complete development workflows, including planning changes, modifying code bases, running tools, verifying results and maintaining software over time. Which, I don't know why they needed to buy the company to do all this. I mean, it was open source already. I use uv quite a bit; I've switched over from virtual environments. And earlier, I think, Matt, you were asking which was better, and I looked it up for you. So if you use venv and pip, it's slow, where uv is written in Rust, so it's much faster. And venv management, or virtual-environment management, is manual, where uv is automatic and transparent, so it takes care of all that for you. Dependency resolution is much faster, and you don't need all the additional tooling that's necessary with pip and pip-tools, etc. So it's much cleaner. If you haven't tried it out, I do recommend it, as it's quite nice. I think you can also have it live in parallel to virtual environments if you really want to. [30:54] Matt: I know what I'm doing tomorrow. [30:55] Justin: Yeah. Try it out as well. [30:57] Matt: It's super fast. It's very nice, and there's things about virtual environments that are a little bit kludgy, like activating a venv is kind of annoying, and uv just. [31:07] Justin: Oh yeah. [31:08] Matt: uv is super quick being in the [31:10] Justin: right interpreter while you activate, like it's. [31:11] Matt: Yeah, yeah. So the uv way is just much nicer. [31:15] Justin: I mean, that sells it for me. I don't know, I feel like this might be an acqui-hire. [31:21] Matt: That's what it feels like. Which is why I worry that they're going to kill these products, because I use uv and Ruff. I don't think I've used ty, because I haven't used types as much as I want to. But this is my. You're one of those developers, like, ah, [31:38] Justin: who cares if it's a string.
[31:39] Matt: I mean, I care. And I do have some linters that do type stuff, but in my home projects, where most of my Python is, I don't give a crap, so I don't really use it there. But if it's something for work, I would definitely use it. [31:53] Justin: I mean, on the opposite, I care more at home for stuff like that, because I don't want to debug stuff on my own time. [31:58] Matt: You're like, I'm building everything proper at home, and at work, no, we're just yoloing. Let's go, guys, we've got to get this shipped. [32:07] Justin: Oh, I test in production for my home stuff all the time, but, you know, debugging functions? No. [32:12] Matt: God, no. Anthropic is releasing Open, or sorry, Claude Code channels in version 2.1.8, enabling developers to connect their Claude Code sessions to Telegram and Discord bots, shifting from a synchronous chat model to an asynchronous persistent agent that can work autonomously and notify users when tasks complete. It's built on the MCP protocol, which acts as a standardized bridge between Claude Code and external messaging platforms. Setup uses the Bun JavaScript runtime to run a polling service that injects incoming messages as session events, allowing Claude to execute code, run tests, and reply back through the messaging app. Practically, this eliminates the need for developers to maintain dedicated hardware, like a Mac Mini running open source agent frameworks 24/7, since Claude Code itself now handles session persistence when running in a background terminal or on a VPS. The plugin architecture is open, with official Telegram and Discord connectors hosted on GitHub under Anthropic's repos, and the community can build additional connectors for platforms like Slack or WhatsApp without waiting for Anthropic to ship them.
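Mechanically, the connector pattern described here is a small polling loop. This is a hedged, transport-agnostic sketch, not Anthropic's plugin code: `fetch_updates` and `handle` are placeholder names standing in for the messaging platform's API (e.g. Telegram's getUpdates) and the agent session.

```python
from typing import Callable

# Minimal polling bridge: pull new messages from a chat platform, hand the
# text to an agent session, and advance a cursor so nothing is re-read.
def poll_once(
    fetch_updates: Callable[[int], list],  # messaging API, queried by offset
    handle: Callable[[str], None],         # inject text into the session
    offset: int = 0,
) -> int:
    for update in fetch_updates(offset):
        handle(update["text"])
        offset = max(offset, update["id"] + 1)  # acknowledge the message
    return offset

# Tiny in-memory demo standing in for a real messaging backend.
inbox = [{"id": 1, "text": "run the tests"}, {"id": 2, "text": "status?"}]
seen = []
cursor = poll_once(lambda o: [u for u in inbox if u["id"] >= o], seen.append)
print(cursor, seen)  # the cursor has moved past both messages
```

A real connector would loop this with a sleep (or long-poll), and replies would flow the other way through the same transport; the offset bookkeeping is what makes the bridge resumable after a restart.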
Oh, it does remain tied to the commercial subscriptions, Pro, Max and Enterprise, while the MCP is open. So I assume it probably won't come to the free tier, but maybe to your paid plan soon. I tried to use this and it didn't work for me, but I think I didn't have enough time to test it, and I had too many Claude sessions going; I think I needed to kill all of them and update properly to the 2.1.8 version. But I am kind of curious to play with it a little bit more. [33:30] Justin: I'm curious to play with it too, but it's just not how I work. Like, I don't need to be somewhere and then message my robot at home. That's not a thing. Maybe it'll be a thing one day when I figure out a use case. But why? What's the difference between that and using Dispatch? It's sort of this weird thing that I don't quite understand. [33:48] Matt: I like the idea of it, because there's times where I expect Claude to do a bunch of work while I'm sleeping or while I'm away, and then it gets a dumb question, like, hey, I need to read this document, will you approve me? And I'm like, all this work I thought was going to happen while I walked away didn't happen, because it prompted me for this thing. So if I was walking around and Telegram said, hey, do you mind if Claude does this? I'd be like, oh yeah, go ahead. That's why I think it might be nice, which is why I wanted to try it. But I kind of agree with you also. Like, I initially was interested in trying out OpenClaw and really playing with that, and then all the security stuff came out and all the issues, and I was just like, no, I can't trust that with my data. Not that my data is that sensitive, but I'm not that brazen at this point. [34:30] Justin: Yeah, but Claude Code still has to have a running session for this to work. So I don't know if it really replaces OpenClaw.
[34:38] Matt: Well, your OpenClaw is just a running Claude session in many ways. I mean, before the loop existed, it was basically just rerunning a Claude session from whatever came to it in the chat. So, yeah, I agree with you: it's still a session, but it's slightly different. Anthropic kept giving us gifts this week, though, with the next one being that they've launched computer-use capabilities in Claude Cowork and Claude Code, now in research preview for Pro and Max subscribers on macOS. Claude can now directly control your browser, mouse, keyboard and screen to complete tasks where no direct connector exists, with no setup required. The feature follows a tool-priority hierarchy, reaching for service connectors like Slack or Google Calendar first, then falling back to direct computer control. Claude requests explicit permission before accessing new applications and can be stopped at any point. Anthropic has built in prompt-injection safeguards by scanning model activations during computer-use sessions, and they acknowledge the capability is still early and recommend users avoid sensitive data at this time. Dispatch, released alongside this update, enables a continuous conversation thread between mobile and desktop, letting users assign tasks from their phone and pick up completed work on their computer. Use cases include automated morning email checks, scheduled metric pulls, and triggered Claude Code sessions for pull requests. The combination of Dispatch and computer use means Claude can execute multi-step workflows on the desktop while the user is away, such as making IDE changes, running tests and submitting a PR. [35:51] Justin: I didn't know this was macOS only, which is kind of a bummer, because I was going to actually put it on my Linux server so I could get compute that wasn't my laptop. [36:00] Matt: I still don't know if I trust Claude to take over my entire computer.
I barely trust it with my web browser traffic, just doing what it does. But yeah, I heavily use that feature. [36:11] Justin: I will say that I had mixed results with the Cowork controlling and working in the browser. It would go through these weird iterations where it would do something that I'd see in the browser, and then say, oh, this didn't work, or I didn't see it, and then it would give me a screenshot of the thing it said it didn't see. So there's weird stuff that can happen. But I do like that they tried to make it really obvious what Claude is doing in your browser, so you know what's going on. [36:40] Matt: Yeah, the debug thing pops up there too, so you'd see it. [36:43] Matt: Yeah. [36:44] Justin: And it colors the tabs and does some neat stuff based on which of your conversations it's working with. Like, in a single browser you can have multiple conversations, and the different color tabs show you which ones they're working with, which is kind of cool. [36:56] Matt: It also requires a lot of permission checks. Like, do you want to browse this website? I mean, the one I asked you to go to first? Yes, please go to the website. Do you want me to take a screenshot of it so I can see what's on the screen? Yes, please. Jesus. Yeah, it's a little needy. I'm hoping that gets lifted a little bit as it gets more out of beta and the Chrome stuff works a little better. I've also played with the Chrome MCP that gets access to the dev tools. That's actually a good one too, because that's one of my troubleshooting tips when I'm dealing with front end, which I try not to as much as possible: take advantage of dev tools. And if you don't, just copying your console and dumping it into Claude is a way to burn context really fast. So I've done that a few times. [37:33] Matt: I'm not gonna lie.
[37:34] Matt: Yeah, I've done it many times, which is why I much appreciate this MCP from Chrome that's now built in. [37:39] Justin: So in Cowork, it does have access to your dev tools in the browser. [37:42] Matt: So I did. [37:43] Justin: I made it do that. [37:46] Matt: And then finally, the last one, which I'm actually probably the most excited about from the features released. One of the reasons why I don't like the idea of OpenClaw is that one of the things you have to do when you turn it on is run it with the dangerously-skip-permissions flag, which basically says Claude can do anything it wants to. You hear those horror stories about Claude, you know, running terraform destroy without approval, or deleting all my files. It's because they're using the dangerously-skip-permissions flag, because they were annoyed at answering yes, or at the prompts asking to always allow certain commands. So what they've done is they've launched auto mode for Claude Code, in research preview for Teams plan users, with Enterprise and API access coming soon. It works with both Claude Sonnet 4.6 and Opus 4.6, offering a middle ground between the default conservative permission prompts and the risky dangerously-skip-permissions flag. The core mechanism is a classifier that reviews each tool call before execution, automatically blocking potentially destructive actions, like mass file deletion, sensitive data exfiltration or malicious code execution, while letting safe actions proceed without interruption. This directly addresses a practical developer workflow problem: Claude Code's default run mode requires frequent human approvals that prevent truly unattended long-running tasks, and auto mode allows owners to kick off extended jobs without babysitting the process.
Anthropic is transparent about the limitations, though, noting the classifier may still allow some risky actions when user intent is ambiguous, and may occasionally block benign ones, and they continue to recommend using it in isolated environments rather than treating it as a fully safe alternative. There's a small performance trade-off to be aware of, as auto mode adds some overhead to token consumption, costs and latency per tool call, due to the classifier running before each action. But I think it's well worth it. Yeah. [39:16] Justin: Although I'd be curious to see how much token consumption it adds, because more and more, as I do more complicated workflows, I'm getting more and more sensitive to tokens. [39:29] Matt: Honestly, I still think it's worth it. The bad times, you know, as Justin said before: I'm like, okay, cool, I'm going to go finish this message and join my meeting at least a minute late, because it's fine, and probably get in trouble for that, because I want to finish what I'm doing. And then I come back at the end of the meeting, look at it, and I'm like, oh, you're still reading the first file, because you prompted me. [39:51] Justin: Because it asked you for a permission. Yeah, no, I do that same thing. [39:53] Matt: Even with the amount of permissions I've allowed: ls is fine, find is fine, all these commands are good, you can do them. Don't let it do remove, that's bad. But all these are good. So I'm hoping this solves a lot of my problems. [40:07] Matt: The nice thing is that when you do enable the mode, it becomes a Shift-Tab option, just like plan mode or auto-edit. So you can just Shift-Tab into auto mode, and you don't have to leave it on all the time, which is nice.
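As a toy illustration of the screen-before-execute flow being described: this is emphatically not Anthropic's classifier (which is a model, not a pattern list); the patterns and prefixes below are made-up stand-ins showing the three outcomes: allow, block, or fall back to a human prompt.

```python
import re

# Toy pre-execution screen for tool calls: block obviously destructive
# commands, wave through known-safe read-only ones, and ask otherwise.
DENY_PATTERNS = [
    r"\brm\s+-rf\b",            # mass file deletion
    r"\bterraform\s+destroy\b", # infrastructure teardown
    r"\bdrop\s+table\b",        # destructive SQL
    r"curl\s+.*\|\s*(ba)?sh",   # pipe-to-shell execution
]
ALLOW_PREFIXES = ("ls", "find", "cat", "grep", "git status")

def screen(command: str) -> str:
    lowered = command.strip().lower()
    if any(re.search(p, lowered) for p in DENY_PATTERNS):
        return "block"
    if lowered.startswith(ALLOW_PREFIXES):
        return "allow"
    return "ask"  # ambiguous intent: fall back to a human prompt

print(screen("ls -la src/"))        # allow
print(screen("rm -rf /tmp/build"))  # block
print(screen("npm publish"))        # ask
```

The "ask" bucket is the key design point: anything the screen cannot confidently classify still lands back on the human, which is the middle ground between prompting on everything and skipping permissions entirely.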
So again, if you know the use case you're trying to do, it'll kind of take care of it; it doesn't just automatically do it for everything, you have to be in the correct mode for it to happen. Oh, cool. And I can tell you this because I'm on the Teams subscription that allows me to test it. So, moving on to AWS. Amazon Bedrock AgentCore Runtime now includes an InvokeAgentRuntime command, an API that lets developers execute shell commands directly inside a running agent session, streaming the output in real time over HTTP/2 and returning exit codes to custom container logic. The practical benefit here is that AI agents frequently need to run deterministic operations, like tests, dependency installs or git commands, alongside LLM reasoning, and previously developers had to build all that process management themselves inside their container. Commands run in the same container filesystem and environment as the agent session and can execute concurrently with agent invocations without blocking, which simplifies agentic coding agents, CI/CD automation and similar workflows. It's available across 14 AWS regions, including all major US, European and Asia Pacific ones. And I cannot wait for the first remote shell execution vulnerability to be created by Amazon Bedrock AgentCore. Yeah, no doubt. [41:21] Justin: Although, you know, I do get the advantages of this, and in most of my use cases with, like, GitHub Copilot or Claude Code, it's running shell to do lots of things, especially executing tests. So for CI/CD-type workflows, you couldn't do anything without it. I'm really curious how teams were working around this, like people that were previously using AgentCore, because I bet that is ugly. Yeah, but it's going to be dangerous.
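The contract being described, run a command inside the session, stream its output as it arrives, and hand back the exit code, looks roughly like this as a local analogue. Nothing here is Bedrock's actual SDK; the AWS API delivers the same shape over HTTP/2 into the agent's container.

```python
import subprocess
import sys

# Local analogue of the AgentCore pattern: execute a shell command,
# stream output line by line as it is produced, and return the exit code.
def run_streaming(command):
    proc = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    lines = []
    for line in proc.stdout:          # yields as the process writes output
        lines.append(line.rstrip("\n"))
    return lines, proc.wait()         # exit code signals success/failure

# Example: run a deterministic step (here, a trivial test stand-in).
out, code = run_streaming([sys.executable, "-c", "print('tests passed')"])
print(out, code)
```

The exit code is what lets the agent's deterministic steps (tests, installs, git commands) gate the LLM's next move, which is exactly why process management kept getting reinvented inside agent containers before this API existed.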
[41:49] Matt: This is definitely a feature someone asked for because of pain they were suffering building that automation. Again, it's toil that you don't necessarily need to build, [41:58] Justin: but hopefully that ecosystem is hardened. [42:02] Matt: Hopefully, yes. Amazon Inspector now supports agentless EC2 scanning for a broader range of software, including WordPress, Apache HTTP Server, Python packages and RubyGems, plus Windows operating system vulnerabilities, with no configuration required for existing customers. The new Windows KB (Knowledge Base) findings consolidate multiple CVEs addressed by a single Microsoft patch into one finding, surfacing the highest CVSS score, EPSS score and exploit availability, which reduces the noise and makes remediation much more straightforward. All existing CVE-based Windows OS findings will automatically transition to KB-based findings, meaning security teams will see fewer duplicate alerts and can map findings to specific Microsoft patches via the included KB article links. I'm actually shocked this was not already there, because the CVE is really just the generic way that you would find these, but typically they're always linked to a knowledge base article, which then typically links you to the patch. So I don't know how people got from the CVE to the patch without this before, other than maybe the CVE mentioning the KB articles. [43:00] Matt: The CVE will mention them, you know, or Google, or there's a. [43:04] Justin: Or, yeah, you have to enrich the data. I mean, vulnerability management's a pain, right? Getting all that data enriched and in front of the right person is a mess. So this looks kind of neat. I mean, I'm especially interested in the deduplication, because it is sort of frustrating when there's every single CVE listed, you know, since the beginning of time.
If it's detected on the OS, it's duplicated, but the remediation action is one command and it solves, like, six of these things at once. So it's neat. I like that. I am a little concerned about some of the broader range of software that it didn't include before, because I'm like, wait, WordPress and Apache HTTP Server? What? [43:44] Matt: Right. [43:44] Justin: You didn't do that before? [43:46] Matt: Like, I just assumed. And that's one of the things: one of Inspector's limitations has always been that it's very focused on the Linux server itself and less on the packages installed. [43:57] Justin: Yeah. [43:57] Matt: So the fact they're adding some more of the popular ones now is kind of long overdue. So, Amazon ECR pull-through cache now supports Chainguard as an upstream registry source, allowing customers to automatically sync Chainguard container images into ECR without building custom synchronization workflows. Which, if you're paying the bajillions of dollars that Chainguard costs, you probably do want to get them faster. Chainguard images are known for their minimal attack surface and security-focused builds, so pairing them with ECR's native image scanning and lifecycle policies gives teams a more integrated security posture for their container supply chain. The practical benefit here is just making it simple for your teams, and simple makes things more secure. Cached Chainguard images inherit standard ECR capabilities, including lifecycle policies for cost management and image scanning, which means customers get consistent governance across their own images and upstream Chainguard images.
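Setting up a rule like this is roughly a one-liner. This is a hedged sketch: the prefix, region, account ID and secret name are placeholders, and the upstream URL and exact flags should be checked against the ECR docs before copying.

```shell
# Create a pull-through cache rule mapping a local ECR prefix to the
# upstream registry; authenticated upstreams reference a Secrets Manager
# credential (placeholder account/region/secret below).
aws ecr create-pull-through-cache-rule \
  --ecr-repository-prefix chainguard \
  --upstream-registry-url cgr.dev \
  --credential-arn arn:aws:secretsmanager:us-east-1:111122223333:secret:ecr-pullthroughcache/chainguard

# Pulling through the prefix syncs the upstream image into ECR on first
# use; subsequent pulls are served and scanned locally.
docker pull 111122223333.dkr.ecr.us-east-1.amazonaws.com/chainguard/chainguard/python:latest
```

The point is that the production cluster only ever talks to ECR; the cache rule, not the workload, is what reaches out to the upstream registry.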
[44:43] Justin: Yeah, I mean, anytime you're using sort of a remote image from, you know, usually a paid-for repository, it's such a pain, because production environments typically don't have broad Internet access to go pull that down, and so you have to sort of pre-stage it in your internal repo and maintain it from there. But this automatically sort of plugs that gap with ECR, which is cool. Although, like, I still won't pay for Chainguard images. It's a bajillion dollars; I can't understand the pricing model on these things. [45:14] Matt: It's huge, it's massive. But it checks a box for, you know, your security team, right? That doesn't want to understand actually how containers work. Just use this one and you don't have to worry about it, you know, when I can install anything I want on it. So is it actually going to help? [45:32] Justin: Well, I mean, interestingly, yeah, you can still install what you want on it, but the Chainguard support model, like, if you use a Chainguard open source image, you know, like if you use a HashiCorp Vault Chainguard image, they're actually directly supporting the Vault binaries, and they sign off, they provide an SLA on hardening of that binary, which is kind of nutso, and I imagine that's why it's priced so expensive. [46:01] Matt: But. [46:02] Justin: Yep. [46:03] Matt: Well, AWS turns 20 this month, which makes sense because S3 just came out last month. And for those of you who know anything about Amazon, like we do, S3 is just slightly older, from before they knew they were calling it AWS, I guess. So S3 is the first service of AWS, but AWS didn't turn 20 until like a month later. I don't know. [46:20] Matt: No, SQS was, you know, I don't know. SQS was in beta before S3. S3 came out. [46:30] Matt: All right, thank you for that. [46:32] Matt: It's the only thing I learned from studying for an exam. Yes. [46:35] Matt: Why that's on the exam, who knows? I don't know.
[46:38] Justin: Yeah, yeah. [46:39] Matt: It grew from $0.10 per compute hour in 2006 to nearly 129 billion in annual revenue today, which would place it in the Fortune 500 top 40 as a standalone company. This article goes through a bunch of stuff, including reminiscences from Jeff Barr from the 15-year one. Jassy, of course, who was the founder of AWS, has said that he thinks it could reach 600 billion in annual revenue by 2036. So they're expecting to go from 129 billion to 600 billion. That's a big move in that direction overall, but happy birthday, Amazon. [47:09] Justin: It's interesting the article mentions that Bedrock is the fastest-growing service in AWS history, and I suppose it's, you know, the AI use case. But it's such a long history, and so many managed services. I'm surprised. [47:21] Matt: But it took so long to get AWS where it was. So now they have the customer base, so it's faster to grow than telling people to go use this thing called S3. [47:31] Matt: I mean, it's not growing because of Nova, it's definitely growing because of Anthropic. All right, the AWS MCP Server in preview now automatically publishes metrics to CloudWatch under the AWS MCP namespace at no additional cost, covering invocation counts, success rates, client errors, server errors and throttling for individual tools like the AWS API caller and agent SOP retriever. Agent SOPs are pre-built, tested workflows that guide AI assistants through complex multi-step AWS tasks. And the documentation search tool now uses semantic similarity, so agents can discover the right SOP through natural language queries rather than exact keyword matching. The CloudWatch integration addresses a previous gap where customers had no visibility into agent-driven changes, enabling teams to track usage patterns, identify permissions issues and configure alarms.
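As an example of the "configure alarms" part, here is a hedged sketch of an alarm on the new MCP metrics. The namespace, metric name and dimension here follow the episode's description ("AWS MCP namespace", "server errors", per-tool metrics) but may not match the service's exact published names; the actual CloudWatch call is left commented:

```python
# Hypothetical CloudWatch alarm parameters for MCP server errors. Names like
# "AWS/MCP" and "ServerErrors" are assumptions based on the show's description.
def mcp_error_alarm_params(tool_name="aws-api-caller", threshold=5):
    return {
        "AlarmName": f"mcp-{tool_name}-server-errors",
        "Namespace": "AWS/MCP",
        "MetricName": "ServerErrors",
        "Dimensions": [{"Name": "Tool", "Value": tool_name}],
        "Statistic": "Sum",
        "Period": 300,                  # evaluate in 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": threshold,         # alert past N server errors per window
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = mcp_error_alarm_params()
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)
```

The parameter shape matches CloudWatch's standard `put_metric_alarm` API, so once the real metric names are confirmed this is a one-line swap.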
Still available only in us-east-1 in preview, so keep that in mind. And for listeners building AI-assisted infrastructure automation, this update provides a practical observability layer for your MCP. So I appreciate that. [48:24] Justin: Why did everything go offline? Yeah, now you can. [48:27] Matt: Now you find out. Yeah. [48:31] Justin: Yeah. I still am very hesitant to run these things unchecked, but I imagine use over time will build confidence. [48:39] Matt: Just trust AI, Ryan. What could possibly go wrong? It might just hallucinate things on you. It's fine. [48:46] Justin: I mean, I hallucinate things and then push changes to production, so I don't know. [48:49] Matt: Right. [48:52] Matt: Moving on to GCP. Cloud SQL read pools are now generally available for Enterprise Plus edition, letting you provision up to 20 read replicas behind a single load-balanced endpoint for MySQL and Postgres, removing the need to manually manage multiple replicas or reconfigure applications when nodes are added or removed. Or you just use, you know, any of the other managed services like Spanner or, what's the Postgres Aurora competitor I'm forgetting the name of at this moment? [49:17] Matt: Not BigQuery. [49:18] Matt: It's not BigQuery. It's. [49:20] Justin: No, it's. It's something like Spanner. [49:23] Matt: Okay. AlloyDB. Sorry, yeah, obvious, like Spanner. Okay. Neither one of those is obvious. [49:28] Justin: No, no, it's not. Yeah, I thought it was a. Yeah, yeah. [49:31] Matt: So I mean, basically this is a feature that I imagine sits under AlloyDB, although AlloyDB does have a decoupling of compute and storage that this does not have. The new autoscaling feature for read pools dynamically adjusts node count based on CPU utilization or database connection thresholds, with users defining minimum and maximum node counts, so the pool scales within those bounds automatically.
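The bounded-autoscaling behavior just described can be modeled in a few lines. This is a toy decision function, not Google's actual algorithm; the thresholds are made-up defaults:

```python
# Toy model of read-pool autoscaling: node count moves up or down one replica
# per evaluation cycle based on CPU utilization, clamped to user-defined bounds.
def next_node_count(current, cpu_util, min_nodes=1, max_nodes=20,
                    scale_out_at=0.70, scale_in_at=0.30):
    """Return the node count after one evaluation cycle."""
    if cpu_util > scale_out_at:
        current += 1   # add a replica behind the single load-balanced endpoint
    elif cpu_util < scale_in_at:
        current -= 1   # drop a replica during low traffic to save cost
    return max(min_nodes, min(max_nodes, current))

# Busy pool grows, idle pool shrinks, bounds are never violated:
# next_node_count(3, 0.85)  -> 4
# next_node_count(1, 0.10)  -> 1   (floor holds)
# next_node_count(20, 0.95) -> 20  (ceiling holds)
```

The clamp to `min_nodes`/`max_nodes` is the piece that maps to the "users define minimum and maximum node counts" part of the announcement: scaling decisions can never push the pool outside the configured bounds.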
Pools with two or more nodes are backed by a four-nines availability SLA that covers maintenance downtime, and configuration changes like VM type or database flag updates are applied across all nodes with near-zero downtime. From a cost perspective, autoscaling helps avoid overprovisioning by scaling in during low-traffic periods, meaning you pay only for nodes actively in use. All available via the gcloud CLI, Terraform and the REST API. So if you need SQL scaling at large read volumes, this is for you. [50:16] Matt: The feature here I actually like is that it autoscales reads, because, I don't know, maybe I'm wrong, but nothing I've seen actually will do autoscaling on the reads for SQL and scale it out horizontally in that way. Like, even Aurora, if you're on the normal one, you build a read replica. You have to build each read replica and then either route or round-robin to those ones. So if it's actually going to do automatic adding and removing based on capacity needs, that's a pretty nice feature, because it can save you a lot of money, because most people don't scale up or down. Their databases are like, eh, it's a, you know, 8XL or whatever size it is, and you're stuck with it there, and no one ever looks at it until your CFO looks at the bill and has a heart attack one day. So at least here your reads start to get a little bit more dynamic. [51:02] Justin: Yeah, I didn't think about it that way, because yeah, you typically don't scale down. [51:06] Matt: Right. [51:06] Justin: If you need an additional read replica to deal with load, it's around forever. [51:10] Matt: Right. So yeah, and obviously you scale it for peak and you never actually remember to scale to the actual capacity that you need. [51:17] Matt: All right, for all the designers out there in the world, I am sorry to say AI has come for your job today.
So basically, Google Labs has evolved Stitch, which you can find at stitch.withgoogle.com, into an AI-native design canvas that converts natural language descriptions into high-fidelity UI designs, targeting both professional designers and non-designers who want to move from concept to prototype quickly. The updated tool introduces an infinite canvas, a design agent that retains full history across projects, and an agent manager for running multiple design directions in parallel, which addresses a common pain point of managing divergent design explorations. Design MD is a notable addition that lets users extract and export design systems as agent-friendly markdown files, making it easier to apply consistent design rules across projects or share them with other tools, rather than starting from scratch each time. So just dump that Design MD into your Claude Code, tell it where it exists, and it'll help make sure your UI matches the design spec. Stitch connects to developer workflows through an MCP server and SDK, with export options to AI Studio and Antigravity, positioning it as a handoff layer between design and development rather than a standalone tool. Pricing details are not specified in the announcement, so it's free for right now, so make sure you get using it right away. But I did help Ryan design a UI at the beginning of our show, and we were rolling through our show notes to show him how cool this was. And so he has a tool he's called Patromatic, which is about vulnerability management, a topic near and dear to his heart. And so I gave it a very simple prompt to say, hey, I need to create a dashboard for Patromatic. It's a vulnerability tool. I want to be able to use, you know, red, yellow, green to show problems, and give me a mockup. And it gave me a beautiful diagram, which we'll include here in the show notes from Stitch, which looks pretty darn good. And compared to my front-end design attempts, this is way better than what I would come up with.
[53:01] Justin: Yeah, like, I was developing something for my family internally and it looks like you would expect, you know, my first sort of UI. [53:11] Matt: AI UI. Got it. [53:13] Justin: Yeah, it's like, I can't wait to try this out, and it was really impressive how fast, you know, how little feedback you gave it to develop something that's basically what I want. And I could tweak it from there. [53:26] Matt: Yeah, which is cool. I mean, like, I can say this is multiple menus on the left, and it can develop each of those for you, different mockups, and even some of these UI things will actually help lead you to new features you hadn't thought of, potentially, as well, which is cool. So yeah, definitely. If you are a backend developer who wishes you could do more frontend and you don't have that skill set, this is a tool for you, for sure, so definitely check that out. All right, let's move on to Azure. So NVIDIA was at GTC. NVIDIA GTC is, of course, their big annual conference. We actually had a friend of the show who was there last week. He will join us next week to kind of give us his on-the-ground perspective and join us for the show, so we're looking forward to having him next week. But Microsoft did have an article here, basically about their new solutions for NVIDIA tools. That includes the Microsoft Foundry Agent Service and observability in the Foundry control plane, which are now generally available, giving enterprise teams a unified platform to build, deploy and monitor AI agents with end-to-end visibility into agent behavior across tools, data and workflows. Azure is the first hyperscale cloud to provide an NVIDIA Vera Rubin NVL72 system in its labs, with rollout planned to liquid-cooled data centers over the coming months.
Following deployments of hundreds of thousands of Grace Blackwell GPUs, in the next year this positions Azure as a target platform for inference-heavy and reasoning-based workloads at scale. NVIDIA Nemotron models are now available through Microsoft Foundry, and the Fireworks AI integration allows customers to fine-tune open-weight models into low-latency deployments that can be distributed to the edge. Microsoft is extending NVIDIA Vera Rubin platform support to Azure Local, allowing organizations in sovereign and regulated environments to run next-gen AI workloads while maintaining Azure-consistent governance through Azure Arc and Foundry Local. And a new physical AI toolchain, available via a public GitHub repo, integrates the NVIDIA physical AI data factory with Azure services, enabling developers to build robotics and physical AI workflows that connect physical assets, simulation environments and cloud training into repeatable enterprise pipelines. Skynet is very excited. [55:15] Justin: Yeah, I was just about to make the same joke. I mean, there are some neat announcements in here. It's hard since I don't use this ecosystem, but, you know, one of the big issues that we have is agent development and having visibility into those workflows, and how do you know it's performing as you want, and how do you know it's doing secure things, and all of that. So I like the visibility and framework. And yeah, who doesn't want to build an AI robot and have it, you know, take over your life, or take your life? [55:50] Matt: Well, you can have it run on your Azure Local stack that you have running, and it will really take over your life. [55:54] Matt: Yeah, I mean, I definitely wanted a closet heater to, you know, heat my house and run my robots locally. [56:00] Matt: It's liquid-cooled, though. [56:02] Matt: Not at my house it's not.
[56:05] Matt: No, I think that whole chipset is designed for liquid cooling only, which actually is an interesting thing. If they're going to do Azure Local with liquid cooling built in, that feels like a complicated project to take on. [56:21] Justin: Exactly. [56:22] Matt: Are they mailing you liquid? That feels bad. [56:25] Matt: Hopefully not. [56:26] Justin: They just include a funnel. [56:30] Matt: A gallon of water. It's fine. Don't worry about it. [56:32] Matt: Don't worry about it. It's fine. Microsoft has paused the automatic deployment of the Microsoft 365 Copilot app to desktop users, halting a rollout that had already slipped twice from its original October 2025 date. The pause has no specified end date, and existing installations remain unaffected. The core admin complaint was that the opt-out default model increased IT workload by forcing organizations to set policies on Microsoft's timeline rather than their own. Admins who want to proceed with deployments can still do so manually through the available methods. European Economic Area customers were already excluded from this rollout, likely reflecting ongoing regulatory considerations and the EU saying not on my watch. And this pause aligns with broader reported changes to Microsoft's approach to embedding Copilot across Windows 11 surfaces, such as a recalibration of how aggressively the system is pushed to end users. Yeah, don't force your IT people to do things. That's not good. They're already overworked and stressed. [57:21] Justin: You know, it's just so hard these days for enterprise IT firms, because you're trying to provide some sort of agentic platform tooling, because your users are demanding it, and you have to have all these data integrations, and all the permissions have to be evaluated, all those things. And then, if you haven't chosen, you know, the Microsoft Copilot thing, Microsoft just pushes this out to everyone.
So now people are sending all that data from the local Windows OS out to wherever Microsoft is parsing it, and not via the enterprise-picked option. So this is lame. I don't like this model of deployment. [57:58] Matt: I mean, it's kind of the default thing everyone in AI is doing. You start out with AI off by default, and then everyone turns it on by default. Then you're caught in the same problem where your users want it, but you're not ready, and so it's a bit of a pain. [58:10] Justin: Yep. [58:12] Matt: All right. Microsoft is announcing a savings plan for databases at SQLCon 2026, offering up to 35% savings versus pay-as-you-go pricing with a one-year hourly spend commitment automatically applied across eligible Azure database services, including Azure SQL. GitHub Copilot is now generally available in SQL Server Management Studio 22, bringing chat and T-SQL code assistance directly into SSMS for developers and DBAs. Thank God. Azure SQL Database Hyperscale gained new public preview features, including a SQL MCP server for connecting SQL data to AI agents, and SQL in Fabric reached general availability for several enterprise security features, including SQL auditing, customer-managed keys and dynamic data masking, with workspace-level Private Link in preview. Microsoft has introduced the Database Hub in Fabric, now in early access, providing a single management plane across Azure SQL, Cosmos DB, PostgreSQL, MySQL and Arc-enabled SQL Server. [59:03] Matt: There's a lot of things in this blog post. The biggest one for me is the savings plans for databases; it's just built in there now, you know, and it was great when, finally, AWS did it at re:Invent, I want to say. I could ask Boltbot live, but, you know, a little bit harder to do while I'm talking. And it really means that you can actually get those savings and you don't have to commit to, like, hey, I'm on Hyperscale or, you know, this specific one.
So, you know, it's a great feature for them to add. We talked about a few of these other ones, I think, in past podcasts too, like SQL and Copilot. The only way I'm ever writing SQL statements is having Copilot do it for me; like, I don't write SQL, so I weirdly like this feature, because I really hate writing SQL, and copy-pasting back and forth is terrible. So if Copilot can glean some insight into my database, and I can write to it in natural language and say, hey, I need this query, and generate that SQL, that's an A-plus feature in my head. [60:08] Justin: Yeah, I am a little concerned about the MCP server, you know, gaining a whole lot of UPDATE and DELETE FROM abilities, which seems kind of scary. But, you know, if you're willing to code your agent enough to do that, hopefully there are still SQL permissions in a way that would prevent it from doing anything really dangerous. [60:30] Matt: The Hyperscale cores, if you ever look at Hyperscale pricing, it's going up to 192 cores. Your CFO is not going to like you. I can tell you that right now without even looking closer at it. [60:42] Matt: You probably have enough revenue to justify it, I would hope. Azure SQL Database now supports versionless keys for transparent data encryption, meaning customers can point to a key in Azure Key Vault without pinning to a specific version, and the database will automatically use the latest key version as it rotates. This reduces operational overhead for teams managing customer-managed keys, eliminating the manual step of updating TDE configurations each time a key is rotated in Azure Key Vault or Managed HSM. The practical benefit is improved reliability around key rotation workflows, since missed version updates previously could cause access disruptions to encrypted databases, a real risk in regulated environments.
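The versioned-versus-versionless distinction comes down to the key reference the database stores. A minimal sketch, with hypothetical vault and key names (Key Vault key URIs do follow the `https://<vault>.vault.azure.net/keys/<name>[/<version>]` shape):

```python
# Versioned vs versionless Key Vault key references. A versioned URI pins a
# specific key version and breaks on rotation until the TDE config is updated;
# a versionless URI lets the database track the latest version automatically.
def key_uri(vault, key, version=None):
    base = f"https://{vault}.vault.azure.net/keys/{key}"
    return f"{base}/{version}" if version else base

# Hypothetical names for illustration:
pinned = key_uri("prod-kv", "tde-key", "0123abcd")  # stale after the next rotation
versionless = key_uri("prod-kv", "tde-key")         # always resolves to the latest version
```

With the versionless form, rotating the key in Key Vault is the whole workflow; there is no second step to forget, which is exactly the missed-update failure mode the announcement calls out.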
The feature is generally available and integrates with Azure Key Vault and Managed HSM setups, so customers already using bring-your-own-key TDE can adopt versionless references without rebuilding their entire encryption architecture. There's no additional cost for this, thank God, because this is the dumbest feature I've ever heard of in my entire life. Why? Why does it not just do it automatically? Why is there even a step where I had to know? [61:39] Matt: Yeah, if I remember correctly, you used to be like, hey, there's a new key there, and it'd be like, oh cool, let me re-encrypt with that. Versus now it just goes, oh cool, latest. Great, I'll just check it once a day. Which is interesting, because so many of the other Azure things automatically do that. So if you put an SSL cert into Key Vault and your app gateway is connected to it, it will automatically check every, like, four hours for a new certificate, and if there is one, it will reprovision it. But why didn't this do that? Like, I just don't understand why this wasn't there. Like, what technical reason was there? I want to know. I'm curious. What did you overcome? [62:20] Matt: I mean, I don't know, some product manager finally figured it out. You know, someone yelled at him, and he's like, that's a really good idea. Why do we do this? This is silly. If I rotate my key every five hours, that means I have to update it manually. [62:30] Justin: Or is it the traditional database admin: this is going to impact performance, because it's encrypting everything with the new key, or whatever. [62:39] Matt: That's why they gave you the 192 vCores, because that's the only way you can have this feature. [62:44] Matt: Right. [62:47] Justin: Better make that disk with high IOPS. [62:50] Matt: No, no, Ultra premium storage. Right, Ultra premium. You don't have to justify that. [62:56] Matt: But no, there are a few Google, you know, premium-type things. [63:01] Justin: Google has its own.
[63:02] Matt: Yeah, they're not as egregious as the Azure ones, but they do exist. [63:06] Justin: Yeah, they do exist. [63:07] Matt: And there's. [63:08] Justin: There's secret ones as well. Like, that's the fun part. [63:11] Matt: Ooh, there are secret ones. [63:12] Matt: I want to know more. [63:13] Justin: Yeah, like, your account team just tells you, you're like, ooh, you know, we got an enterprise feature for this, if you don't mind spending all the [63:20] Matt: money and signing a separate contract. Because we can't just embed it into the Google contract that you already have. Why would we do that? Yeah, that's good. [63:27] Justin: Why would you add it to my existing, you know, commitments? [63:30] Matt: Why wouldn't you just make this a consumption thing and I just pay for consumption? I don't know. You have to have your own contract anyways. Microsoft is releasing the Azure Skills plugin, available at a site you can find in the show notes, which bundles over 19 curated Azure workflow skills, the Azure MCP server with 200-plus tools across 40-plus Azure services, and the Foundry MCP server into a single install for AI coding agents. What could go wrong? The skills layer is the core differentiator here, encoding decision trees and sequencing logic for real Azure workflows rather than simple prompt snippets. Key skills include Azure Prepare for generating infra code, Azure Validate for preflight checks, Azure Deploy to orchestrate through the Azure Developer CLI, and Azure Diagnostics for troubleshooting using logs and KQL queries. The plugin is designed to be portable across agent hosts, including GitHub Copilot, Visual Studio Copilot, the Copilot CLI and Claude Code, with configuration handled automatically through an MCP JSON file and a GitHub plugin skills folder. Microsoft is explicit that this setup requires real credentials and real Azure resources, recommending least-privilege access, explicit tool approvals, and skills sourced only from trusted repos.
This positions the agent as a supervised collaborator rather than an autonomous actor. So this is cool. We've seen this from GCP as well, I think. I don't think Amazon has one yet, but basically the ability to, like, hey, you know, use a serverless KMS key potentially is what the Azure skill will tell me to do, I hope. And so, you know, you don't have to do that thing that you were talking about earlier. But it's nice to see this kind of intelligence being brought to agents, so they don't have to go constantly look up documentation and stuff via reading; they can just get it from the MCP server quickly. [65:02] Matt: I actually am most excited for the KQL feature, because writing KQL is like writing SQL, but harder. But also I'm terrible at both, so don't judge that one statement. But if I can just tell it to search the logs in a certain way. Because right now I just have this terrible workflow of: Claude, this is what I'm looking for, in KQL. Copy, paste. Take the screenshot, put it back over here, copy, paste, and iterate through this very slow cycle. So if I can have it understand KQL, so much better. [65:33] Matt: That's cool. I didn't know about the KQL thing. [65:35] Justin: I just hope the tool definition is such that it doesn't load every single tool into your context window. Like, you know, it actually defines when to use tools, because I've found that tools sourced from public sources are. Yeah, it's a little hit or miss with those. [65:51] Matt: And then finally, another MCP for your DevOps team. This is the Azure DevOps remote MCP server, which went into preview as of March 17th, followed by its integration into Microsoft Foundry. The server gives AI agents a hosted, authenticated connection to Azure DevOps data, including work items, pull requests, pipelines, repos and wikis, via a single URL endpoint. Authentication runs entirely through Microsoft Entra.
Entra permissions, meaning organizations apply their existing identity policies, conditional access rules and permission boundaries to agent access without building separate integrations. Notably, only Entra-backed Azure DevOps organizations are supported, leaving MSA-backed and on-premise deployments without the option for now. Two access control headers stand out for enterprises: X-MCP-Readonly restricts agents to read-only operations, and X-MCP-Toolsets lets teams scope which tool categories an agent can access. And again, it's all good to have an MCP for this. Does Azure DevOps have a gh-type tool, like the GitHub CLI, Matt? [66:49] Matt: I'm not sure. I was under. [66:51] Justin: Well, it's been a while, but I thought even back in the Team Foundation Server days it was Git-compatible after a certain version. There's a. [66:59] Matt: There is a DevOps CLI. I just googled it for you. [67:01] Matt: Yeah, yeah. I mean, I don't use DevOps in that way. We don't use that in my day job. [67:06] Matt: But this sounds very much like the GitHub one, which is why I was asking. Because this is basically GitHub, but the more expensive version of GitHub. [67:15] Justin: Yeah, I mean, these are neat, and I do like that they talk about the control headers and how they're managing permissions, which is, you know, something that's near and dear to my heart and something I'm having to discover and learn more about. So this looks like it's built in, and I continue to be really impressed with Entra authentication options. It is a really cool and powerful tool that makes several things about identity and access management really easy. [67:42] Matt: Yeah. But how complicated is it? [67:45] Justin: Very complicated to set up. Once it's set up, it works really [67:50] Matt: well [67:52] Justin: until you have to change it, and then you have to go through [67:53] Matt: the same, the whole process again.
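The two access-control headers discussed above can be sketched as a request-header builder. The header names and values here follow the episode's description (X-MCP-Readonly, X-MCP-Toolsets) and may not match the service's exact spelling; the token and toolset names are placeholders:

```python
# Hedged sketch of headers for the Azure DevOps remote MCP endpoint: an
# Entra-issued bearer token plus the two scoping headers described on the show.
def mcp_request_headers(token, readonly=True, toolsets=("work-items", "repos")):
    headers = {
        "Authorization": f"Bearer {token}",    # Entra-issued access token (placeholder)
        "X-MCP-Toolsets": ",".join(toolsets),  # scope which tool categories the agent sees
    }
    if readonly:
        headers["X-MCP-Readonly"] = "true"     # restrict the agent to read-only operations
    return headers

h = mcp_request_headers("example-token")
```

The nice property, as discussed, is that both controls sit in front of the agent at the transport layer, so the existing Entra policies plus two headers bound what any agent can do without per-agent integration work.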
All right, Oracle. Two stories for you guys this week. First up, Java 26 ships with 10 JDK enhancements, including AI integration. So they brought AI to Java, of course, and all is going to be lost. So I don't really have anything else to say about Java 26, other than I still remember Java 8. [68:12] Justin: Yeah. [68:12] Matt: And I mean, I guess I'm glad this exists if you're really on the Java side. [68:16] Justin: I think I'm on Java 9. [68:20] Matt: I'm not sure I want to. I'm on whatever Corretto's latest version is. One use case where I actually use Java, which I hate, and I'm going to refactor it someday in a fit of rage. Probably a Claude-style refactor into Rust or something less painful. [68:33] Matt: I was just thinking Java AI running on, like, you know, ATM machines. [68:38] Matt: Yeah. [68:40] Justin: I don't want to know what the AI integration to this is, because I'm sure it would scare me. [68:47] Matt: Yeah, I mean, they didn't give a lot of details about how it's going to integrate with LLMs, but I'm sort of curious as well. [68:53] Justin: Or maybe it's just the standard where they had to include AI, or it didn't ship. [68:59] Matt: Yeah, exactly. And then our final story for this week: Oracle is announcing a bundle of agentic AI capabilities for Oracle AI Database at its AI World tour in London, centered on keeping AI workloads closer to data rather than moving data to external AI systems. Highlighted additions include the Autonomous AI Vector Database in limited availability. The security angle is notable here, with Oracle Deep Data Security and private AI service containers positioned to address prompt injection and data leakage risks by enforcing least-privilege access to your database layer through the AI. And lots of other great AI features, including an MCP server. Because everyone's got an MCP server. You get an MCP server, I get an MCP server.
There are no pricing details in this, but I guess if you're in the Oracle world, you're going to get AI in the database, because Oracle put everything into the database, including columnar back in the day, and all the other NoSQL capabilities are all available in Oracle. You can do anything. [69:47] Matt: It does everything. [69:47] Matt: It slices, it dices, it does the laundry. It's all the things. [69:51] Justin: Yeah, I mean, I have the same concerns for this as I did the Databricks announcement, which is, with these low-code things, if you expect your finance team to go and build integrations with these things, you're basically giving them access to your database. So be careful with that permission sprawl. [70:09] Matt: I was just going to say, be careful with your, you know, piggy bank. [70:13] Justin: Well, that too. [70:14] Matt: But I guess if you're on Oracle Database, you're already used to it, so who cares? [70:19] Matt: Well, that's it for another fantastic week here in the cloud, guys, and we'll see you next week for, I'm sure, more AI news. [70:25] Justin: Yes, AI. [70:26] Matt: AI. [70:27] Justin: AI. Bye, everybody. [70:28] Matt: Bye, AI. [70:32] Matt: And that's all for this week in Cloud. Head over to our website at thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.