EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering
Guest:
Amine Besson, Tech Lead on Detection Engineering, Behemoth Cyberdefence
Topics:
- What is your best advice on detection engineering for organizations that don’t want to engineer anything in security?
- What is the state of the art when it comes to SOCs? Who is doing well? What on Earth is a fusion center?
- Why do classic “tiered SOCs” fall flat when dealing with modern threats?
- Let’s focus on a correct definition of detection as code. Can you provide yours? (A minimal illustrative sketch follows the resources list below.)
- Detection x response engineering - is there a thing called “response engineering”? Should there be?
- What are your lessons learned to fuse intel, detections, and hunting ops?
- What is this SIEMless yet SOARful detection architecture?
- What’s next with OpenTIDE 2.0?
Resources:
- Guide your SOC Leaders to More Engineering Wisdom for Detection (Part 9) and other parts linked there
- Hack.lu 2023: TIDeMEC: A Detection Engineering Platform Homegrown At The EC video
- OpenTIDE · GitLab
- OpenTIDE 1.0 Release blog
- SpecterOps blog series ‘On Detection’
- Does your SOC have NOC DNA? presentation
- Kill SOC Toil, Do SOC Eng blog (tame version)
- The original ASO paper (2021, still epic!)
- Behind the Scenes with Red Canary's Detection Engineering Team
- The DFIR Report – Real Intrusions by Real Attackers, The Truth Behind the Intrusion
- Site Reliability Engineering (SRE) | Google Cloud
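Since the episode leaves the definition of detection as code to the conversation itself, here is a minimal, hedged sketch of what the practice typically looks like: a detection rule kept as declarative data in version control, evaluated by ordinary code, and covered by unit tests so every change flows through review and CI. All fields, values, and names below are hypothetical illustrations, not something taken from the episode or from OpenTIDE.

```python
# Illustrative detection-as-code sketch (all fields hypothetical): the rule is
# declarative data, the evaluation logic and tests are ordinary code, so the
# whole detection lives in git and is exercised by CI like any other software.

RULE = {
    "id": "proc_encoded_powershell",                 # hypothetical identifier
    "title": "Encoded PowerShell command line",
    "severity": "high",
    "logsource": "windows_process_creation",
    # every field/value pair below must appear in an event for the rule to fire
    "detection": {"image": "powershell.exe", "flag": "-EncodedCommand"},
}

def matches(rule: dict, event: dict) -> bool:
    """Return True when the event satisfies all of the rule's detection fields."""
    return all(event.get(field) == value
               for field, value in rule["detection"].items())

def test_rule_fires_on_suspicious_event():
    event = {"image": "powershell.exe", "flag": "-EncodedCommand", "user": "bob"}
    assert matches(RULE, event)

def test_rule_ignores_benign_event():
    event = {"image": "powershell.exe", "flag": "-File", "user": "alice"}
    assert not matches(RULE, event)

if __name__ == "__main__":
    test_rule_fires_on_suspicious_event()
    test_rule_ignores_benign_event()
    print("detection rule tests passed")
```

In a real pipeline the rule would more likely be written in an established format such as Sigma and compiled to the SIEM's query language, with tests like these (plus peer review) gating every change to the detection corpus.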
236 episodes
All episodes
Cloud Security Podcast by Google

Guest: Anna Gressel , Partner at Paul, Weiss , one of the AI practice leads Episode co-host: Marina Kaganovich , Office of the CISO, Google Cloud Questions: Agentic AI and AI agents, with its promise of autonomous decision-making and learning capabilities, presents a unique set of risks across various domains. What are some of the key areas of concern for you? What frameworks are most relevant to the deployment of agentic AI, and where are the potential gaps? What are you seeing in terms of how regulatory frameworks may need to be adapted to address the unique challenges posed by agentic AI? How about legal aspects - does traditional tort law or product liability apply? How does the autonomous nature of agentic AI challenge established legal concepts of liability and responsibility? The other related topic is knowing what agents “think” on the inside. So what are the key legal considerations for managing transparency and explainability in agentic AI decision-making? Resources: Paul, Weiss Waking Up With AI ( Apple , Spotify ) Cloud CISO Perspectives: How Google secures AI Agents Securing the Future of Agentic AI: Governance, Cybersecurity, and Privacy Considerations…

Guest: Svetla Yankova, Founder and CEO, Citreno Topics: Why do so many organizations still collect logs yet don’t detect threats? In other words, why is our industry spending more money than ever on SIEM tooling and still not “winning” against Tier 1 ... or even Tier 5 adversaries? What are the hardest parts about getting the right context into a SOC analyst’s face when they’re triaging and investigating an alert? Is it integration? SOAR playbook development? Data enrichment? All of the above? What are the organizational problems that keep organizations from getting the full benefit of the security operations tools they’re buying? Top SIEM mistakes? Is it trying to migrate too fast? Is it accepting a too-slow migration? In other words, where are expectations tyrannical for customers? Have they changed much since 2015? Do you expect people to write their own detections? Detection engineering seems popular with elite clients and nobody else, what can we do? Do you think AI will change how we SOC (Tim: “SOC” is not a verb?) in the next 1-3-5 years? Do you think that AI SOC tech is repeating the mistakes SOAR vendors made 10 years ago? Are we making the same mistakes all over again? Are we making new mistakes? Resources: EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 EP231 Beyond the Buzzword: Practical Detection as Code in the Enterprise EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering “RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog Citreno, The Backstory “Parenting Teens With Love And Logic” book (as a management book) “Security Correlation Then and Now: A Sad Truth About SIEM” blog (the classic from 2019)…

Guest: Cristina Vintila , Product Security Engineering Manager, Google Cloud Topic: Could you share insights into how Product Security Engineering approaches at Google have evolved, particularly in response to emerging threats (like Log4j in 2021)? You mentioned applying SRE best practices in detection and response, and overall in securing the Google Cloud products. How does Google balance high reliability and operational excellence with the needs of detection and response (D&R)? How does Google decide which data sources and tools are most critical for effective D&R? How do we deal with high volumes of data? Resources: EP215 Threat Modeling at Google: From Basics to AI-powered Magic EP117 Can a Small Team Adopt an Engineering-Centric Approach to Cybersecurity? Podcast episodes on how Google does security EP17 Modern Threat Detection at Google EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil Google SRE book Google SRS book…

Guest: Sarah Aoun, Privacy Engineer, Google Topic: You have had a fascinating career since we [Tim] graduated from college together – you mentioned before we met that you’ve consulted with a literal world leader on his personal digital security footprint. Maybe tell us how you got into this field of helping organizations treat sensitive information securely and how that led to helping keep targeted individuals secure? You also work as a privacy engineer on Fuchsia, Google’s new operating system kernel. How did you go from human rights and privacy to that? What are the key privacy considerations when designing an operating system for “ambient computing”? How do you design privacy into something like that? More importantly, not only “how do you do it”, but how do you convince people that you did do it? When we talk about "higher risk" individuals, the definition can be broad. How can an average person or someone working in a seemingly less sensitive role better assess if they might be a higher-risk target? What are the subtle indicators? Thinking about the advice you give for personal security beyond passwords and multi-factor auth, how much of effective personal digital hygiene comes down to behavioral changes versus purely technical solutions? Given your deep understanding of both individual security needs and large-scale OS design, what's one thing you wish developers building cloud services or applications would fundamentally prioritize about user privacy? Resources: Google privacy controls Advanced protection program…

Guest: David French , Staff Adoption Engineer, Google Cloud Topic: Detection as code is one of those meme phrases I hear a lot, but I’m not sure everyone means the same thing when they say it. Could you tell us what you mean by it, and what upside it has for organizations in your model of it? What gets better for security teams and security outcomes when you start managing in a DAC world? What is primary, actual code or using SWE-style process for detection work? Not every SIEM has a good set of APIs for this, right? What’s a team to do in a world of no or low API support for this model? If we’re talking about as-code models, one of the important parts of regular software development is testing. How should teams think about testing their detection corpus? Where do we even start? Smoke tests? Unit tests? You talk about a rule schema–you might also think of it in code terms as a standard interface on the detection objects–how should organizations think about standardizing this, and why should they? If we’re into a world of detection rules as code and detections as code, can we also think about alert handling via code? This is like SOAR but with more of a software engineering approach, right? One more thing that stood out to me in your presentation was the call for sharing detection content. Is this between vendors, vendors and end users? Resources: Can We Have “Detection as Code”? Testing in Detection Engineering (Part 8) “So Good They Can't Ignore You: Why Skills Trump Passion in the Quest for Work You Love” book EP202 Beyond Tiered SOCs: Detection as Code and the Rise of Response Engineering EP181 Detection Engineering Deep Dive: From Career Paths to Scaling SOC Teams EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther Getting Started with Detection-as-Code and Google SecOps Detection Engineering Demystified: Building Custom Detections for GitHub Enterprise From soup to nuts: Building a Detection-as-Code pipeline David French - Medium Blog Detection Engineering Maturity Matrix…
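The “rule schema as a standard interface on the detection objects” question above is easier to picture with a concrete object. The sketch below is a hypothetical schema plus a CI-style validation check; it is not David French's or Google SecOps' actual schema, just an illustration of the idea.

```python
# Hypothetical detection-rule schema: a standard interface every rule in the
# repository must satisfy before CI lets it merge.
from dataclasses import dataclass, field

ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

@dataclass
class DetectionRule:
    rule_id: str                    # stable identifier, e.g. "proc_creation_0001"
    title: str
    severity: str
    logsource: str                  # which telemetry the rule runs against
    query: str                      # detection logic in the SIEM's query language
    tags: list[str] = field(default_factory=list)         # e.g. ATT&CK technique IDs
    references: list[str] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Return a list of schema violations; an empty list means the rule passes."""
        problems = []
        if not self.rule_id or not self.title:
            problems.append("rule_id and title are required")
        if self.severity not in ALLOWED_SEVERITIES:
            problems.append(f"unknown severity: {self.severity!r}")
        if not self.query.strip():
            problems.append("query must not be empty")
        return problems

rule = DetectionRule(
    rule_id="proc_creation_0001",
    title="Suspicious child process of a web server",
    severity="high",
    logsource="process_creation",
    query='parent_image endswith "httpd" and image endswith "/sh"',
    tags=["T1190"],
)
assert rule.validate() == []   # a CI job could run this check over every rule file
```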

Guest: Daniel Fabian , Principal Digital Arsonist, Google Topic: Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process? What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems? Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it? What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle? What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field? Resources: Video ( LinkedIn , YouTube ) Google's AI Red Team: the ethical hackers making AI safer EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons Lessons from AI Red Teaming – And How to Apply Them Proactively [RSA 2025]…

Guest: Alex Pinto, Associate Director of Threat Intelligence, Verizon Business, Lead of the Verizon Data Breach Report Topics: How would you define “a cloud breach”? Is that a real (and different) thing? Are cloud breaches just a result of leaked keys and creds? If customers are responsible for 99% of cloud security problems, is cloud breach really about a customer being breached? Are misconfigurations really responsible for so many cloud security breaches? How are we still failing at configuration? What parts of DBIR are not total “groundhog day”? Something about vuln exploitation vs credential abuse in today’s breaches–what’s driving the shifts we’re seeing? Are we at peak ransomware? Will ransomware be here in 20 years? Will we be here in 20 years talking about it? How is AI changing the breach report, other than putting in hilarious footnotes about how the report is for humans to read and is written by actual humans? Resources: Video ( LinkedIn , YouTube ) Verizon DBIR 2025 EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality EP112 Threat Horizons - How Google Does Threat Intelligence EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025…

EP228 SIEM in 2025: Still Hard? Reimagining Detection at Cloud Scale and with More Pipelines (27:09)
Guest Alan Braithwaite , Co-founder and CTO @ RunReveal Topics: SIEM is hard, and many vendors have discovered this over the years. You need to get storage, security and integration complexity just right. You also need to be better than incumbents. How would you approach this now? Decoupled SIEM vs SIEM/EDR/XDR combo. These point in the opposite directions, which side do you think will win? In a world where data volumes are exploding, especially in cloud environments, you're building a SIEM with ClickHouse as its backend, focusing on both parsed and raw logs. What's the core advantage of this approach, and how does it address the limitations of traditional SIEMs in handling scale? Cribl, Bindplane and “security pipeline vendors” are all the rage. Won’t it be logical to just include this into a modern SIEM? You're envisioning a 'Pipeline QL' that compiles to SQL , enabling 'detection in SQL.' This sounds like a significant shift, and perhaps not to the better? (Anton is horrified, for once) How does this approach affect detection engineering? With Sigma HQ support out-of-the-box, and the ability to convert SPL to Sigma, you're clearly aiming for interoperability. How crucial is this approach in your vision, and how do you see it benefiting the security community? What is SIEM in 2025 and beyond? What’s the endgame for security telemetry data? Is this truly SIEM 3.0, 4.0 or whatever-oh? Resources: EP197 SIEM (Decoupled or Not), and Security Data Lakes: A Google SecOps Perspective EP123 The Good, the Bad, and the Epic of Threat Detection at Scale with Panther EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures “20 Years of SIEM: Celebrating My Dubious Anniversary” blog “RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog tl;dr security newsletter Introducing a RunReveal Model Context Protocol Server! MCP: Building Your SecOps AI Ecosystem AI Runbooks for Google SecOps: Security Operations with Model Context Protocol…
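To make the “detection in SQL” idea above concrete, here is a hedged sketch: the detection is nothing more than a SQL statement over a parsed-log table, run through Python's standard DB-API. SQLite stands in for the backend so the example is self-contained; the table, columns, and threshold are invented, and this is not RunReveal's actual pipeline language.

```python
# Illustrative "detection in SQL": the detection logic is a plain SQL query over
# a log table. SQLite keeps the sketch self-contained; the same idea applies to
# a column-store backend such as ClickHouse.
import sqlite3

DETECTION_SQL = """
SELECT user, COUNT(*) AS failures
FROM auth_logs
WHERE outcome = 'failure'
  AND ts >= :since
GROUP BY user
HAVING COUNT(*) >= 5              -- threshold: possible brute force
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_logs (ts INTEGER, user TEXT, outcome TEXT)")
events = [(t, "alice", "failure") for t in range(100, 106)] + [(50, "bob", "success")]
conn.executemany("INSERT INTO auth_logs VALUES (?, ?, ?)", events)

for user, failures in conn.execute(DETECTION_SQL, {"since": 100}):
    print(f"ALERT: {failures} failed logins for {user}")   # fires for alice (6 failures)
```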

Guests: Eric Foster , CEO of Tenex.AI Venkata Koppaka , CTO of Tenex.AI Topics: Why is your AI-powered MDR special? Why start an MDR from scratch using AI? So why should users bet on an “AI-native” MDR instead of an MDR that has already got its act together and is now applying AI to an existing set of practices? What’s the current breakdown in labor between your human SOC analysts vs your AI SOC agents? How do you expect this to evolve and how will that change your unit economics? What tasks are humans uniquely good at today’s SOC? How do you expect that to change in the next 5 years? We hear concerns about SOC AI missing things –but we know humans miss things all the time too. So how do you manage buyer concerns about the AI agents missing things? Let’s talk about how you’re helping customers measure your efficacy overall. What metrics should organizations prioritize when evaluating MDR? Resources: Video EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 (quote from Eric in the title!) EP10 SIEM Modernization? Is That a Thing? Tenex.AI blog “RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check” blog The original ASO 10X SOC paper that started it all (2021) “Baby ASO: A Minimal Viable Transformation for Your SOC” blog “The Return of the Baby ASO: Why SOCs Still Suck?” blog " Learn Modern SOC and D&R Practices Using Autonomic Security Operations (ASO) Principles " blog…

Guest: Christine Sizemore , Cloud Security Architect, Google Cloud Topics: Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain? I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain? We like to say that history might not repeat itself but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains? We’ve talked a lot about technology and process–what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development? We are all hearing about agentic security – so can we just ask the AI to secure itself? Top 3 things to do to secure AI software supply chain for a typical org? Resources: Video “Securing AI Supply Chain: Like Software, Only Not” blog (and paper) “Securing the AI software supply chain” webcast EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments Protect AI issue database “Staying on top of AI Developments” “Office of the CISO 2024 Year in Review: AI Trust and Security” “Your Roadmap to Secure AI: A Recap” (2024) " RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check" (references our "data as code" presentation)…

EP225 Cross-promotion: The Cyber-Savvy Boardroom Podcast: EP2 Christian Karam on the Use of AI (24:46)
Hosts: David Homovich , Customer Advocacy Lead, Office of the CISO, Google Cloud Alicja Cade , Director, Office of the CISO, Google Cloud Guest: Christian Karam , Strategic Advisor and Investor Resources: EP2 Christian Karam on the Use of AI (as aired originally) The Cyber-Savvy Boardroom podcast site The Cyber-Savvy Boardroom podcast on Spotify The Cyber-Savvy Boardroom podcast on Apple Podcasts The Cyber-Savvy Boardroom podcast on YouTube Now hear this: A new podcast to help boards get cyber savvy (without the jargon) Board of Directors Insights Hub Guidance for Boards of Directors on How to Address AI Risk…

Guest: Diana Kelley , CSO at Protect AI Topics: Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right? What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it? How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we? In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance? How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy? What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks? Top differences between LLM/chatbot AI security vs AI agent security? Resources: “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers” “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever” Secure by Design for AI by Protect AI “Securing AI Supply Chain: Like Software, Only Not” OWASP Top 10 for Large Language Model Applications OWASP Top 10 for AI Agents (draft) MITRE ATLAS “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper ) LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes…

Guests: no guests, just us in the studio Topics: At RSA 2025, did we see solid, measurably better outcomes from AI use in security, or mostly just "sizzle" and good ideas with potential? Are the promises of an "AI SOC" repeating the mistakes seen with SOAR in previous years regarding fully automated security operations? Does "AI SOC" work according to RSA floor? How realistic is the vision expressed by some [yes, really!] that AI progress could lead to technical teams, including IT and security, shrinking dramatically or even to zero in a few years? Why do companies continue to rely on decades-old or “non-leading” security technologies, and what role does the concept of a "organizational change budget" play in this inertia? Is being "AI Native" fundamentally better for security technologies compared to adding AI capabilities to existing platforms, or is the jury still out? Got "an AI-native SIEM"? Be ready to explain how is yours better! Resources: EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines? EP119 RSA 2023 - What We Saw, What We Learned, and What We're Excited About EP70 Special - RSA 2022 Reflections - Securing the Past vs Securing the Future RSA (“RSAI”) Conference 2024 Powered by AI with AI on Top — AI Edition (Hey AI, Is This Enough AI?) [Anton’s RSA 2024 recap blog] New Paper: “Future of the SOC: Evolution or Optimization — Choose Your Path” (Paper 4 of 4.5) [talks about the change budget discussed]…

Guests: Kirstie Failey @ Google Threat Intelligence Group and Scott Runnels @ Mandiant Incident Response Topics: What is the hardest thing about turning distinct incident reports into a fun-to-read and useful report like M-Trends? How much are the lessons and recommendations skewed by the fact that they are all “post-IR” stories? Are “IR-derived” security lessons the best way to improve security? Isn’t this a bit like learning how to build safely from fires vs learning safety engineering? The report implies that F500 companies suffer from certain security issues despite their resources, does this automatically mean that smaller companies suffer from the same but more? "Dwell time" metrics sound obvious, but is there magic behind how this is done? Sometimes “dwell time going down” is not automatically the defender’s win, right? What is the expected minimum dwell time? If “it depends”, then what does it depend on? Impactful outliers vs general trends (“by the numbers”), what teaches us more about security? Why do we seem to repeat the mistakes so much in security? Do we think it is useful to give the same advice repeatedly if the data implies that it is correct advice but people clearly do not do it? Resources: M-Trends 2025 report Mandiant Attack Lifecycle EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality EP147 Special: 2024 Security Forecast Report…

Guests: No guests [Tim in Vegas and Anton remote] Topics: So, another Next is done. Beyond the usual Vegas chaos, what was the overarching security theme or vibe you [Tim] felt dominated the conference this year? Thinking back to Next '24, what felt genuinely different this year versus just the next iteration of last year's trends? Last year, we pondered the 'Cloud Island' vs. 'Cloud Peninsula'. Based on Next 2025, is cloud security becoming more integrated with general cyber security, or is it still its own distinct domain? What wider trends did you observe, perhaps from the expo floor buzz or partner announcements, that security folks should be aware of? What was the biggest surprise for you at Next 2025? Something you absolutely didn't see coming? Putting on your prediction hats (however reluctantly): based on Next 2025, what do you foresee as the major cloud security focus or challenge for the industry in the next 12 months? If a busy podcast listener listening could only take one key message or action item away from everything announced and discussed at Next 2025, what should it be? Resources: EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps…

Guests: Michael Cote , Cloud VRP Lead, Google Cloud Aadarsh Karumathil , Security Engineer, Google Cloud Topics: Vulnerability response at cloud-scale sounds very hard! How do you triage vulnerability reports and make sure we’re addressing the right ones in the underlying cloud infrastructure? How do you determine how much to pay for each vulnerability? What is the largest reward we paid? What was it for? What products get the most submissions? Is this driven by the actual product security or by trends and fashions like AI? What are the most likely rejection reasons? What makes for a very good - and exceptional? - vulnerability report? We hear we pay more for “exceptional” reports, what does it mean? In college Tim had a roommate who would take us out drinking on his Google web app vulnerability rewards. Do we have something similar for people reporting vulnerabilities in our cloud infrastructure? Are people making real money off this? How do we actually uniquely identify vulnerabilities in the cloud? CVE does not work well, right? What are the expected risk reduction benefits from Cloud VRP? Resources: Cloud VRP site Cloud VPR launch blog CVR: The Mines of Kakadûm…

Guest: Steve Ledzian , APAC CTO, Mandiant at Google Cloud Topics: We've seen a shift in how boards engage with cybersecurity. From your perspective, what's the most significant misconception boards still hold about cyber risk, particularly in the Asia Pacific region, and how has that impacted their decision-making? Cybersecurity is rife with jargon. If you could eliminate or redefine one overused term, which would it be and why? How does this overloaded language specifically hinder effective communication and action in the region? The Mandiant Attack Lifecycle is a well-known model. How has your experience in the East Asia region challenged or refined this model? Are there unique attack patterns or actor behaviors that necessitate adjustments? Two years post-acquisition, what's been the most surprising or unexpected benefit of the Google-Mandiant combination? M-Trends data provides valuable insights, particularly regarding dwell time. Considering the Asia Pacific region, what are the most significant factors reducing dwell time, and how do these trends differ from global averages? Given your expertise in Asia Pacific, can you share an observation about a threat actor's behavior that is often overlooked in broader cybersecurity discussions? Looking ahead, what's the single biggest cybersecurity challenge you foresee for organizations in the Asia Pacific region over the next five years, and what proactive steps should they be taking now to prepare? Resources: EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive EP191 Why Aren't More Defenders Winning? Defender’s Advantage and How to Gain it!…

EP218 IAM in the Cloud & AI Era: Navigating Evolution, Challenges, and the Rise of ITDR/ISPM (30:10)
Guest: Henrique Teixeira , Senior VP of Strategy, Saviynt, ex-Gartner analyst Topics: How have you seen IAM evolve over the years, especially with the shift to the cloud, and now AI? What are some of the biggest challenges and opportunities these two shifts present? ITDR (Identity Threat Detection and Response) and ISPM (Identity Security Posture Management) are emerging areas in IAM. How do you see these fitting into the overall IAM landscape? Are they truly distinct categories or just extensions of existing IAM practices? Shouldn’t ITDR just be part of your Cloud DR or maybe even your SecOps tool of choice? It seems goofy to try to stand ITDR on its own when the impact of an identity compromise is entirely a function of what that identity can access or do, no? Regarding workload vs. human identity, could you elaborate on the unique security considerations for each? How does the rise of machine identities and APIs impact IAM approaches? We had a whole episode around machine identity that involved turtles–what have you seen in the machine identity space and how have you seen users mess it up? The cybersecurity world is full of acronyms. Any tips on how to create a memorable and impactful acronym? Resources: EP166 Workload Identity, Zero Trust and SPIFFE (Also Turtles!) EP182 ITDR: The Missing Piece in Your Security Puzzle or Yet Another Tool to Buy? EP127 Is IAM Really Fun and How to Stay Ahead of the Curve in Cloud IAM? EP94 Meet Cloud Security Acronyms with Anna Belak EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler EP199 Your Cloud IAM Top Pet Peeves (and How to Fix Them) EP188 Beyond the Buzzwords: Identity's True Role in Cloud and SaaS Security “Playing to Win: How Strategy Really Works” book “Open” book…

Guest: Alex Polyakov , CEO at Adversa AI Topics: Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client? Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now? What trips most clients, classic security mistakes in AI systems or AI-specific mistakes? Are there truly new mistakes in AI systems or are they old mistakes in new clothing? I know it is not your job to fix it, but much of this is unfixable, right? Is it a good idea to use AI to secure AI? Resources: EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi Adversa AI blog Oops! 5 serious gen AI security mistakes to avoid Generative AI Fast Followership: Avoid These First Adopter Security Missteps…

Guest: James Campbell , CEO, Cado Security Chris Doman , CTO, Cado Security Topics: Cloud Detection and Response (CDR) vs Cloud Investigation and Response Automation( CIRA ) ... what’s the story here? There is an “R” in CDR, right? Can’t my (modern) SIEM/SOAR do that? What about this becoming a part of modern SIEM/SOAR in the future? What gets better when you deploy a CIRA (a) and your CIRA in particular (b)? Ephemerality and security, what are the fun overlaps? Does “E” help “S” or hurts it? What about compliance? Ephemeral compliance sounds iffy… Cloud investigations, what is special about them? How does CSPM intersect with this? Is CIRA part of CNAPP? A secret question, need to listen for it! Resources: EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud EP67 Cyber Defense Matrix and Does Cloud Security Have to DIE to Win? EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics Cloud security incidents (Rami McCarthy) Cado resources…

Guest: Meador Inge , Security Engineer, Google Cloud Topics: Can you walk us through Google's typical threat modeling process? What are the key steps involved? Threat modeling can be applied to various areas. Where does Google utilize it the most? How do we apply this to huge and complex systems? How does Google keep its threat models updated? What triggers a reassessment? How does Google operationalize threat modeling information to prioritize security work and resource allocation? How does it influence your security posture? What are the biggest challenges Google faces in scaling and improving its threat modeling practices? Any stories where we got this wrong? How can LLMs like Gemini improve Google's threat modeling activities? Can you share examples of basic and more sophisticated techniques? What advice would you give to organizations just starting with threat modeling? Resources: EP12 Threat Models and Cloud Security EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw EP200 Zero Touch Prod, Security Rings, and Foundational Services: How Google Does Workload Security EP140 System Hardening at Google Scale: New Challenges, New Solutions Threat Modeling manifesto EP176 Google on Google Cloud: How Google Secures Its Own Cloud Use Awesome Threat Modeling Adam Shostack “Threat Modeling: Designing for Security” book Ross Anderson “Security Engineering” book ”How to Solve It” book…

Guest: Archana Ramamoorthy , Senior Director of Product Management, Google Cloud Topics: You are responsible for building systems that need to comply with laws that are often mutually contradictory. It seems technically impossible to do, how do you do this? Google is not alone in being a global company with local customers and local requirements. How are we building systems that provide local compliance with global consistency in their use for customers who are similar in scale to us? Originally, Google had global systems synchronized around the entire planet–planet scale supercompute–with atomic clocks. How did we get to regionalized approach from there? Engineering takes a long time. How do we bring enough agility to product definition and engineering design to give our users robust foundations in our systems that also let us keep up with changing and diverging regulatory goals? What are some of the biggest challenges you face working in the trusted cloud space? Is there something you would like to share about being a woman leader in technology? How did you overcome the related challenges? Resources: Video “Compliance Without Compromise” by Jeanette Manfra (2020, still very relevant!) “Good to Great” book “Appreciative Leadership” book…

Guest: Yigael Berger , Head of AI, Sweet Security Topic: Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains? I know you use LLMs for anomaly detection. Explain how that “trick” works? What is it good for? How effective do you think it will be? Can you compare this to other anomaly detection methods? Also, won’t this be costly - how do you manage to keep inference costs under control at scale? SOC teams often grapple with the tradeoff between “seeing everything” so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge? We hear from folks who developed an automated approach to handle a reviews queue previously handled by people. Inevitably even if precision and recall can be shown to be superior, executive or customer backlash comes hard with a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it? What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security? So from your perspective, LLMs are going to tip the scale in whose favor - cybercriminals or defenders? Resource: EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud EP194 Deep Dive into ADR - Application Detection and Response EP135 AI and Security: The Good, the Bad, and the Magical Andrej Karpathy series on how LLMs work Sweet Security blog…
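For the “compare this to other anomaly detection methods” question above, the classic point of reference is a simple statistical baseline. The sketch below is a generic z-score detector over hourly event counts, included only for comparison; it is not Sweet Security's LLM-based approach.

```python
# Generic statistical baseline: flag any hour whose event count deviates from
# the historical mean by more than `threshold` standard deviations.
from statistics import mean, stdev

def zscore_anomalies(hourly_counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of hours whose count is a statistical outlier."""
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    if sigma == 0:
        return []
    return [i for i, count in enumerate(hourly_counts)
            if abs(count - mu) / sigma > threshold]

# 23 quiet hours and one burst of activity in hour 12
counts = [40, 42, 38, 41, 39, 40, 43, 37, 41, 40, 42, 39,
          400, 38, 40, 41, 39, 42, 40, 38, 41, 40, 39, 42]
print(zscore_anomalies(counts))   # -> [12]
```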

Guest: Dave Hannigan , CISO at Nu Bank Topics: Tell us about the challenges you're facing as CISO at NuBank and how are they different from your past life at Spotify? You're a big cloud based operation - what are the key challenges you're tracking in your cloud environments? What lessons do you wish you knew back in your previous CISO run [at Spotify]? What metrics do your team report for you to understand the security posture of your cloud environments? How do you know “your” cloud use is as secure as you want it to be? You're a former Googler, and I'm sure that's not why, so why did you choose to go with Google SecOps for your organization? Resources: “Moving shields into position: How you can organize security to boost digital transformation” blog and the paper . “For a successful cloud transformation, change your culture first” blog “Is your digital transformation secure? How to tell if your team is on the right path” ’ blog EP201 Every CTO Should Be a CSTO (Or Else!) - Transformation Lessons from The Hoff EP104 CISO Walks Into the Cloud: And The Magic Starts to Happen! EP141 Cloud Security Coast to Coast: From 2015 to 2023, What's Changed and What's the Same? EP209 vCISO in the Cloud: Navigating the New Security Landscape (and Don’t Forget Resilience!) “Thinking Fast and Slow” book “Turn the Ship Around” book…

Guest: Kimberly Goody , Head of Intel Analysis and Production, Google Cloud Topics: Google's Threat Intelligence Group (GTIG) has a unique position, accessing both underground forum data and incident response information. How does this dual perspective enhance your ability to identify and attribute cybercriminal campaigns? Attributing cyberattacks with high confidence is important. Can you walk us through the process GTIG uses to connect an incident to specific threat actors, given the complexities of the threat landscape and the challenges of linking tools and actors? There is a difficulty of correlating publicly known tool names with the aliases used by threat actors in underground forums. How does GTIG overcome this challenge to track the evolution and usage of malware and other tools? Can you give a specific example of how this "decoding" process works? How does GTIG collaborate with other teams within Google, such as incident response or product security, to share threat intelligence and improve Google's overall security posture? How does this work make Google more secure? What does Google (and specifically GTIG) do differently than other organizations focused on collecting and analyzing threat-intelligence? Is there AI involved? Resources: “Cybercrime: A Multifaceted National Security Threat” report EP112 Threat Horizons - How Google Does Threat Intelligence EP175 Meet Crystal Lister: From Public Sector to Google Cloud Security and Threat Horizons EP178 Meet Brandon Wood: The Human Side of Threat Intelligence: From Bad IP to Trafficking Busts “Wild Swans: Three Daughters of China” book How Google Does It: Making threat detection high-quality, scalable, and modern How Google Does It: Finding, tracking, and fixing vulnerabilities “From Credit Cards to Crypto: The Evolution of Cybercrime” video…

Guest: Or Brokman , Strategic Google Cloud Engineer, Security and Compliance, Google Cloud Topics: Can you tell us about one particular cloud consulting engagement that really sticks out in your memory? Maybe a time when you lifted the hood, so to speak, and were absolutely floored by what you found – good or bad! In your experience, what's that one thing – that common mistake – that just keeps popping up? That thing that makes you say 'Oh no, not this again!' 'Tools over process' mistake is one of the 'oldies.' What do you still think drives people to it, and how to fix it? If you could give just one piece of cloud security advice to every company out there, regardless of their size or industry, what would it be? Resources: Video ( YouTube ) “Threat Modeling: Designing for Security” by Adam Shostack EP16 Modern Data Security Approaches: Is Cloud More Secure? EP142 Cloud Security Podcast Ask Me Anything #AMA 2023 “For a successful cloud transformation, change your culture first” (OOT vs TOO blog) https://www.linkedin.com/in/stephrwong/ New Paper: “Autonomic Security Operations — 10X Transformation of the Security Operations Center” (2021)…

EP209 vCISO in the Cloud: Navigating the New Security Landscape (and Don’t Forget Resilience!) (29:06)
Guests: Beth Cartier , former CISO, vCISO, founder of Initiative Security Guest host of the CISO mini-series: Marina Kaganovich , Executive Trust Lead, Office of the CISO @ Google Cloud Topics: How is that vCISO’ing going? What is special about vCISO and cloud? Is it easier or harder? AI, cyber, resilience - all are hot topics these days. In the context of cloud security, how are you seeing organizations realistically address these trends? Are they being managed effectively (finally?) or is security always playing catch up? Recent events reminded us that cybersecurity may sometimes interfere with resilience. How have you looked to build resilience into your security program? The topic is perhaps 30+ years old, but security needs to have a seat at the table, and often still doesn’t - why do you think this is the case? What approaches or tips have you found to work well in elevating security within organizations? Any tips for how cyber professionals can stay up to date to keep up with the current threat landscape vs the threats that are around the corner? Resources: EP208 The Modern CISO: Balancing Risk, Innovation, and Business Strategy (And Where is Cloud?) EP189 How Google Does Security Programs at Scale: CISO Insights EP129 How CISO Cloud Dreams and Realities Collide EP104 CISO Walks Into the Cloud: And The Magic Starts to Happen! EP93 CISO Walks Into the Cloud: Frustrations, Successes, Lessons ... And Is My Data Secure?…

EP208 The Modern CISO: Balancing Risk, Innovation, and Business Strategy (And Where is Cloud?) (31:19)
Guest host: Marina Kaganovich , Executive Trust Lead, Office of the CISO @ Google Cloud Guest: John Rogers , CISO @ MSCI Topics: Can you briefly walk us through your CISO career path? What are some of the key (cloud or otherwise) trends that CISOs should be keeping an eye on? What is the time frame for them? What are the biggest cloud security challenges CISOs are facing today, and how are those evolving? Given the rapid change of pace in emerging tech, such as what we’ve seen in the last year or so with gen AI, how do you balance the need to address short-term or imminent issues vs those that are long-term or emergent risks? What advice do you have for how CISOs can communicate the importance of anticipating threats to their boards and executives? So, how to be a forward looking and strategic yet not veer into dreaming, paranoia and imaginary risks? How to be futuristic yet realistic? The CISO role as an official title is a relatively new one, what steps have you taken to build credibility and position yourself for having a seat at the table? Resources: ATT&CK Framework EP189 How Google Does Security Programs at Scale: CISO Insights EP129 How CISO Cloud Dreams and Realities Collide EP104 CISO Walks Into the Cloud: And The Magic Starts to Happen! EP93 CISO Walks Into the Cloud: Frustrations, Successes, Lessons ... And Is My Data Secure?…

Guest: Bob Blakley , Co-founder and Chief Product Officer of Mimic Topics: Tell us about the ransomware problem - isn't this a bit of old news? Circa 2015, right? What makes ransomware a unique security problem? What's different about ransomware versus other kinds of malware? What do you make of the “RansomOps” take (aka “ransomware is not malware”)? Are there new ways to solve it? Is this really a problem that a startup is positioned to solve? Aren’t large infrastructure owners better positioned for this? In fact, why haven't existing solutions solved this? Is this really a symptom of a bigger problem? What is that problem? What made you personally want to get into this space, other than the potential upside of solving the problem? Resources: EP206 Paying the Price: Ransomware's Rising Stakes in the Cloud EP89 Can We Escape Ransomware by Migrating to the Cloud? EP45 VirusTotal Insights on Ransomware Business and Technology EP204 Beyond PCAST: Phil Venables on the Future of Resilience and Leading Indicators EP7 No One Expects the Malware Inquisition Anderson Report (July 1972) “The Innovator Dilemma” book “Odyssey” book (yes, really) Crowdstrike External Technical Root Cause Analysis — Channel File 291 (yes, that one)…

Guest: Allan Liska , CSIRT at Recorded Future, now part of Mastercard Topics: Ransomware has become a pervasive threat. Could you provide us with a brief overview of the current ransomware landscape? It's often said that ransomware is driven by pure profit. Can you remind us of the business model of ransomware gangs, including how they operate, their organizational structures, and their financial motivations? Ransomware gangs are becoming increasingly aggressive in their extortion tactics. Can you shed some light on these new tactics, such as data leaks, DDoS attacks, and threats to contact victims' customers or partners? What specific challenges and considerations arise when dealing with ransomware in cloud environments, and how can organizations adapt their security strategies to mitigate these risks? What are the key factors to consider when deciding whether or not to pay the ransom? What is the single most important piece of advice you would give to organizations looking to bolster their defenses against ransomware? Resources: Video ( LinkedIn , YouTube ) 2024 Data Breach Investigations Report EP89 Can We Escape Ransomware by Migrating to the Cloud? EP45 VirusTotal Insights on Ransomware Business and Technology EP29 Future of EDR: Is It Reason-able to Suggest XDR? EP204 Beyond PCAST: Phil Venables on the Future of Resilience and Leading Indicators…

Guest: Andrew Kopcienski , Principal Intelligence Analyst, Google Threat Intelligence Group Questions: You have this new Cybersecurity Forecast 2025 report , what’s up with that? We are getting a bit annoyed about the fear-mongering on “oh, but attackers will use AI.” You are a threat analyst, realistically, how afraid are you of this? The report discusses the threat of compromised identities in hybrid environments (aka “no matter what you do, and where, you are hacked via AD”). What steps can organizations take to mitigate the risk of a single compromised identity leading to a significant security breach? Is this expected to continue? Is zero-day actually growing? The report seems to imply that, but aren’t “oh-days” getting more expensive every day? Many organizations still lag with detection, in your expertise, what approaches to detection actually work today? It is OK to say ”hire Managed Defense ”, BTW :-) We read the risk posed by the "Big Four" sections and they (to us) read like “hackers hack” and “APTs APT.” What is genuinely new and interesting here? Resources: Cybersecurity Forecast 2025 report Google Cloud Cybersecurity Forecast 2025 webinar EP147 Special: 2024 Security Forecast Report EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side EP153 Kevin Mandia on Cloud Breaches: New Threat Actors, Old Mistakes, and Lessons for All Staying a Step Ahead: Mitigating the DPRK IT Worker Threat…

Guest: Phil Venables , Vice President, Chief Information Security Officer (CISO) @ Google Cloud Topics Why is our industry suddenly obsessed with resilience? Is this ransomware’s doing? How did the PCAST report come to be? Can you share the backstory and how it was created? The PCAST report emphasizes the importance of leading indicators for security and resilience. How can organizations effectively shift their focus from lagging indicators to these leading indicators? The report also emphasizes the importance of "Cyber-Physical Modularity" - this sounds mysterious to us, and probably our listeners! What is it and how does this concept contribute to enhancing the resilience of critical infrastructure? The report advocates for regular and rigorous stress testing. How can organizations effectively implement such stress testing to identify vulnerabilities and improve their resilience? In your opinion, what are the most critical takeaways from our PCAST-related paper for organizations looking to improve their security and resilience posture today? What are some of the challenges organizations might face when implementing the PCAST recommendations, and how can they overcome these challenges? Do organizations get resilience benefits “for free” by using Google Cloud? Resources: 10 ways to make cyber-physical systems more resilient “Cyber-Physical Resilience and the Cloud: Putting the White House PCAST report into practice” report Megatrends drive cloud adoption—and improve security for all EP163 Cloud Security Megatrends: Myths, Realities, Contentious Debates and Of Course AI Advising The President On Cyber-Physical Resilience - Philip Venables (at PSW) EP201 Every CTO Should Be a CSTO (Or Else!) - Transformation Lessons from The Hoff EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side…

Guest: Rich Mogull , SVP of Cloud Security at Firemon and CEO at Securosis Topics: Let’s talk about cloud security shared responsibility. How to separate the blame? Is there a good framework for apportioning blame? You've introduced the Cloud Shared Irresponsibilities Model , stating cloud providers will be considered partially responsible for breaches even if due to customer misconfigurations. How do you see this impacting the relationship between cloud providers and their customers? Will it lead to more collaboration or more friction? We both know the Jay Heiser 2015 classic “cloud is secure, but you not using it securely.” In your view, what does “use cloud securely” mean for various organizations today? Here is a very painful question: how to decide what cloud security should be free with cloud and what security can be paid? You dealt with cloud security for a long time, what is your #1 lesson so far on how to make the cloud more secure or use the cloud more securely? What is the best way to learn how to cloud? What is this CloudSLAW thing? Resources: EP201 Every CTO Should Be a CSTO (Or Else!) - Transformation Lessons from The Hoff The Cloud Shared Irresponsibilities Model 2002 Trustworthy computing memo Use Cloud Securely? What Does This Even Mean?! EP145 Cloud Security: Shared Responsibility, Shared Fate, Shared Faith? No Snow, No Flakes: Pondering Cloud Security Shared Responsibility, Again! Cloud Security Lab a Week (S.L.A.W) Megatrends drive cloud adoption—and improve security for all Shared fate main page Defining the Journey—the Four Cloud Adoption Patterns Celebrating 200 Episodes of Cloud Security Podcast by Google and Thanks for all the Listens!…

Guest: Chris Hoff , Chief Secure Technology Officer at Last Pass Topics: I learned that you have a really cool title that feels very “now” - Chief Secure Technology Officer? What’s the story here? Weirdly, I now feel that every CTO better be a CSTO or quit their job :-) After, ahem, not-so-recent events you had a chance to rebuild a lot of your stack, and in the process improve security. Can you share how it went, and what security capabilities are now built in? How much of a culture change did that require? Was it purely a technological transformation or you had to change what people do and how they do it? Would you recommend this to others (not the “recent events experience”, but the rebuild approach)? What benefits come from doing this before an incident occurs? Are there any? How are you handling telemetry collection and observability for security in the new stack? I am curious how this was modernized Cloud is simple , yet also complex, I think you called it “simplex.” How does this concept work? Resources: Video ( LinkedIn , YouTube ) EP189 How Google Does Security Programs at Scale: CISO Insights EP104 CISO Walks Into the Cloud: And The Magic Starts to Happen! EP80 CISO Walks Into the Cloud: Frustrations, Successes, Lessons ... And Does the Risk Change? EP93 CISO Walks Into the Cloud: Frustrations, Successes, Lessons ... And Is My Data Secure?…
EP200 Zero Touch Prod, Security Rings, and Foundational Services: How Google Does Workload Security 27:38
Guest: Michael Czapinski , Security & Reliability Enthusiast, Google Topics: “How Google protects its production services” paper covers how Google's infrastructure balances several crucial aspects, including security, reliability, development speed, and maintainability. How do you prioritize these competing demands in a real-world setting? What attack vectors do you consider most critical in the production environment, and how has Google’s defenses against these vectors improved over time? Can you elaborate on the concept of Foundational services and their significance in Google's security posture? How does your security approach adapt to this vast spectrum of sensitivity and purpose of our servers and services, actually? How do you implement this principle of zero touch prod for both human and service accounts within our complex infrastructure? Can you talk us through the broader approach you take through Workload Security Rings and how this helps? Resources: “How Google protects its production services” paper (deep!) SLSA framework EP189 How Google Does Security Programs at Scale: CISO Insights EP109 How Google Does Vulnerability Management: The Not So Secret Secrets! EP176 Google on Google Cloud: How Google Secures Its Own Cloud Use EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil SREcon presentation on zero touch prod. The SRS book (free access)…
Guests: Michele Chubirka, Staff Cloud Security Advocate, Google Cloud; Sita Lakshmi Sangameswaran, Senior Developer Relations Engineer, Google Cloud Topics: What is your reaction to “in the cloud you are one IAM mistake away from a breach”? Do you like it or do you hate it? Or do you "it depends" it? :-) Everyone's talking about how "identity is the new perimeter" in the cloud. Can you break that down in simple terms? A lot of people say “in the cloud, you must do IAM ‘right’”. What do you think that means? What is the first or the main idea that comes to your mind when you hear it? What’s this stuff about least-privilege and separation-of-duties being less relevant? Why do they matter in a cloud that changes rapidly? What are your IAM Top Pet Peeves? Resources: Video (LinkedIn, YouTube) EP127 Is IAM Really Fun and How to Stay Ahead of the Curve in Cloud IAM? EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler IAM: There and back again using resource hierarchies IAM so lost: A guide to identity in Google Cloud I Hate IAM: but I need it desperately EP33 Cloud Migrations: Security Perspectives from The Field EP176 Google on Google Cloud: How Google Secures Its Own Cloud Use EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant EP188 Beyond the Buzzwords: Identity's True Role in Cloud and SaaS Security “Identity Crisis: The Biggest Prize in Security” paper “Learn to love IAM: The most important step in securing your cloud infrastructure“ Next presentation…
Guest: Ante Gojsalic, Co-Founder & CTO at SplxAI Topics: What are some of the unique challenges in securing GenAI applications compared to traditional apps? What current attack surfaces are most concerning for GenAI apps, and how do you see these evolving in the future? Do you have your very own list of top 5 GenAI threats? Everybody seems to! What are the most common security mistakes you see clients make with GenAI? Can you explain the main goals when trying to add automation to pentesting for next-gen GenAI apps? What are your AI testing lessons from clients so far? Resources: EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side EP135 AI and Security: The Good, the Bad, and the Magical EP185 SAIF-powered Collaboration to Secure AI: CoSAI and Why It Matters to You SAIF.google Next SAIF presentation with top 5 AI security issues Our Security of AI Papers and Blogs Explained…
Guest: Travis Lanham, Uber Tech Lead (UTL) for Security Operations Engineering, Google Cloud Topics: There’s been a ton of discussion in the wake of the three SIEM weeks about the future of SIEM-like products. We saw a lot of takes on how this augurs the future of disassembled or decoupled SIEMs. Can you explain what these disassembled SIEMs are all about? What are the expected upsides of detaching your SIEM interface and security capabilities from your data backend? Tell us about the early days of SecOps (née Chronicle) and why we didn’t go with this approach? What are the upsides of a tightly coupled datastore + security experience for a SIEM? Are there more risks or negatives of the decoupled/decentralized approach? Complexity and the need to assemble “at home” are on the list, right? One of the 50 things Google knew to be true back in the day was that product innovation comes from technical innovation, what’s the technical innovation driving decoupled SIEMs? So what about those security data lakes? Any insights? Resources: EP139 What is Chronicle? Beyond XDR and into the Next Generation of Security Operations EP190 Unraveling the Security Data Fabric: Need, Benefits, and Futures EP184 One Week SIEM Migration: Fact or Fiction? Hacking Google video series Decoupled SIEM: Brilliant or …. Not :-) UNC5537 Targets Snowflake Customer Instances for Data Theft and Extortion So, Why Did I Join Chronicle Security? (2019)…
Guest: Vijay Ganti, Director of Product Management, Google Cloud Security Topics: What have been the biggest pain points for organizations trying to use threat intelligence (TI)? Why has it been so difficult to convert threat knowledge into effective security measures in the past? In the realm of AI, there's often hype (and people who assume “it’s all hype”). What's genuinely different about AI now, particularly in the context of threat intelligence? Can you explain the concept of "AI-driven operationalization" in Google TI? How does it work in practice? What's the balance between human expertise and AI in the TI process? Are there specific areas where you see the balance between human and AI involvement shifting in a few years? Google Threat Intelligence aims to be different. Why are we better from client PoV? Resources: Google Threat Intel website “The Future of the Brain” book by Gary Marcus et al Detection engineering blog (Part 9) and the series Detection engineering blogs by David French The pyramid of pain blog, the classic “Scaling Up Malware Analysis with Gemini 1.5 Flash” and “From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis” blogs on Gemini for security…
Cross-over hosts: Kaslin Fields, co-host at Kubernetes Podcast; Abdel Sghiouar, co-host at Kubernetes Podcast Guest: Michele Chubirka, Cloud Security Advocate, Google Cloud Topics: How would you approach answering the question ”what is more secure, container or a virtual machine (VM)?” Could you elaborate on the real-world implications of this for security, and perhaps provide some examples of when one might be a more suitable choice than the other? While containers boast a smaller attack surface (what about the orchestrator though?), VMs present a full operating system. How should organizations weigh these factors against each other? The speed of patching and updates is a clear advantage of containers. How significant is this in the context of today's rapidly evolving threat landscape? Are there any strategies organizations can employ to mitigate the slower update cycles associated with VMs? Both containers and VMs can be susceptible to misconfigurations, but container orchestration systems introduce another layer of complexity. How can organizations address this complexity and minimize the risk of misconfigurations leading to security vulnerabilities? What about combining containers and VMs? Can you provide some concrete examples of how this might be implemented? What benefits can organizations expect from such an approach, and what challenges might they face? How do you envision the security landscape for containers and VMs evolving in the coming years? Are there any emerging trends or technologies that could significantly impact the way we approach security for these two technologies? Resources: Container Security, with Michele Chubirka (the same episode - with extras! - at our peer podcast, “Kubernetes Podcast from Google”) EP105 Security Architect View: Cloud Migration Successes, Failures and Lessons EP54 Container Security: The Past or The Future? DORA 2024 report Container Security: It’s All About the Supply Chain - Michele Chubirka Software composition analysis (SCA) DevSecOps Decisioning Principles Kubernetes CIS Benchmark Cloud-Native Consumption Principles State of WebAssembly outside the Browser - Abdel Sghiouar Why Perfect Compliance Is the Enemy of Good Kubernetes Security - Michele Chubirka - KubeCon NA 2024…
Guest: Daniel Shechter , Co-Founder and CEO at Miggo Security Topics: Why do we need Application Detection and Response (ADR)? BTW, how do you define it? Isn’t ADR a subset of CDR (for cloud)? What is the key difference that sets ADR apart from traditional EDR and CDR tools? Why can’t I just send my application data - or eBPF traces - to my SIEM and achieve the goals of ADR that way? We had RASP and it failed due to instrumentation complexities. How does an ADR solution address these challenges and make it easier for security teams to adopt and implement? What are the key inputs into an ADR tool? Can you explain how your ADR correlates cloud, container, and application contexts to provide a better view of threats? Could you share real-world examples of types of badness solved for users? How would ADR work with other application security technologies like DAST/SAST, WAF and ASPM? What are your thoughts on the evolution of ADR? Resources: EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud EP143 Cloud Security Remediation: The Biggest Headache? Miggo research re: vulnerability ALBeast “WhatDR or What Detection Domain Needs Its Own Tools?” blog “Making Sense of the Application Security Product Market” blog “Effective Vulnerability Management: Managing Risk in the Vulnerable Digital Ecosystem“ book…
Guests: Taylor Lehmann, Director at Office of the CISO, Google Cloud; Luis Urena, Cloud Security Architect, Google Cloud Topics: There is a common scenario where security teams are brought in after a cloud environment is already established. From your experience, how does this late involvement typically impact the organization's security posture and what are the immediate risks they face? Upon hearing this, many experts suggest that “burn the environment with fire” or “nuke it from orbit” are the only feasible approaches. What is your take on that suggestion? On the opposite side, what if business demands you don't touch anything but “make it secure” regardless? Could you walk us through some of the first critical steps you do after “inheriting a cloud” and why they are prioritized in this way? Why not just say “add MFA everywhere”? What may or will blow up? We also say “address overly permissive users and roles” and this sounds valuable, but also tricky. How do we go about it? What are the chances that the environment is in fact compromised already? When is Compromise Assessment the right call, it does cost money, right? How do you balance your team’s current priorities when you’ve just adopted an insecure cloud environment? How do you make tradeoffs among your existing stack and this new one? Resources: “Confetti cannons or fire extinguishers? Here’s how to secure cloud surprises” EP179 Teamwork Under Stress: Expedition Behavior in Cybersecurity Incident Response IAM Recommender “TM" book by Adam Shostack “Checklist Manifesto” book “Moving shields into position: How you can organize security to boost digital transformation” (with a new paper!)…
Guest: Nelly Porter , Director of PM, Cloud Security at Google Cloud Topics: Share your story and how you ended here doing confidential AI at Google? What problem does confidential compute + AI solve and for what clients? What are some specific real-world applications or use cases where you see the combination of AI and confidential computing making the most significant impact? What about AI in confidential vs AI on prem? Should those people just do on-prem AI instead? Which parts of the AI lifecycle need to be run in Confidential AI: Training? Data curation? Operational workloads? What are the performance (and thus cost) implications of running AI workloads in a confidential computing environment? Are there new risks that arise out of confidential AI? Resources: Video EP48 Confidentially Speaking 2: Cloudful of Secrets EP1 Confidentially Speaking “To securely build AI on Google Cloud, follow these best practices“ blog ( paper )…
Guest: Dan Nutting , Manager - Cyber Defense, Google Cloud Topics: What is the Defender’s Advantage and why did Mandiant decide to put this out there? This is the second edition. What is different about DA-II? Why do so few defenders actually realize their Defender’s Advantage? The book talks about the importance of being "intelligence-led" in cyber defense. Can you elaborate on what this means and how organizations can practically implement this approach? Detection engineering is presented as a continuous cycle of adaptation. How can organizations ensure their detection capabilities remain effective and avoid fatigue in their SOC? Many organizations don’t seem to want to make detections at all, what do we tell them? What is this thing called “Mission Control”- it sounds really cool, can you explain it? Resources: Defender’s Advantage book The Defender's Advantage: Using Artificial Intelligence in Cyber Defense supplemental paper “Threat-informed Defense Is Hard, So We Are Still Not Doing It!” blog Mandiant blog…
Guest: Crystal Lister , Technical Program Manager, Google Cloud Security Topics: Your background can be sheepishly called “public sector”, what’s your experience been transitioning from public to private? How did you end up here doing what you are doing? We imagine you learned a lot from what you just described – how’s that impacted your work at Google? How have you seen risk management practices and outcomes differ? You now lead Google Threat Horizons reports , do you have a vision for this? How does your past work inform it? Given the prevalence of ransomware attacks, many organizations are focused on external threats. In your experience, does the risk of insider threats still hold significant weight? What type of company needs a dedicated and separate insider threat program? Resources: Video on YouTube Google Cybersecurity Action Team Threat Horizons Report #9 Is Out! Google Cybersecurity Action Team site for previous Threat Horizons Reports EP112 Threat Horizons - How Google Does Threat Intelligence Psychology of Intelligence Analysis by Richards J. Heuer The Coming Wave by Mustafa Suleyman Visualizing Google Cloud: 101 Illustrated References for Cloud Engineers and Architects…
Guest: Angelika Rohrer, Sr. Technical Program Manager, Cyber Security Response at Alphabet Topics: Incident response (IR) is by definition “reactive”, but ultimately incident prep determines your IR success. What are the broad areas where one needs to prepare? You have created a new framework for measuring how ready you are for an incident, what is the approach you took to create it? Can you elaborate on the core principles behind the Continuous Improvement (CI) Framework for incident response? Why is continuous improvement crucial for effective incident response, especially in cloud environments? Can’t you just make a playbook and use it? How to overcome the desire to focus on the easy metrics and go to more valuable ones? What do you think Google does best in this area? Can you share examples of how the CI Framework could have helped prevent or mitigate a real-world cloud security incident? How can other organizations practically implement the CI Framework to enhance their incident response capabilities after they read the paper? Resources: “How do you know you are ‘Ready to Respond’?” paper EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster?…
Guest: Shan Rao, Group Product Manager, Google Topics: What are the unique challenges when securing AI for cloud environments, compared to traditional IT systems? Your talk covers 5 risks, why did you pick these five? What are the five, and are these the worst? Some of the mitigation seems the same for all risks. What are the popular SAIF mitigations that cover more of the risks? Can we move quickly and securely with AI? How? What future trends and developments do you foresee in the field of securing AI for cloud environments, and how can organizations prepare for them? Do you think in 2-3 years AI security will be a separate domain or a part of … application security? Data security? Cloud security? Resources: Video (LinkedIn, YouTube) [live audio is not great in these] “A cybersecurity expert's guide to securing AI products with Google SAIF“ presentation SAIF Site “To securely build AI on Google Cloud, follow these best practices” (paper) “Secure AI Framework (SAIF): A Conceptual Framework for Secure AI Systems” resources Corey Quinn on X (long story why this is here… listen to the episode)…
Guests: None Topics: What have we seen at RSA 2024? Which buzzwords are rising (AI! AI! AI!) and which ones are falling (hi XDR)? Is this really all about AI? Is this all marketing? Security platforms or focused tools, who is winning at RSA? Anything fun going on with SecOps? Is cloud security still largely about CSPM? Any interesting presentations spotted? Resources: EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (RSA 2024 episode 1 of 2) “From Assistant to Analyst: The Power of Gemini 1.5 Pro for Malware Analysis” blog “Decoupled SIEM: Brilliant or Stupid?” blog “Introducing Google Security Operations: Intel-driven, AI-powered SecOps” blog “Advancing the art of AI-driven security with Google Cloud” blog…
EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side 27:03
Guest: Elie Bursztein , Google DeepMind Cybersecurity Research Lead, Google Topics: Given your experience, how afraid or nervous are you about the use of GenAI by the criminals (PoisonGPT, WormGPT and such)? What can a top-tier state-sponsored threat actor do better with LLM? Are there “extra scary” examples, real or hypothetical? Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really? Why do you think that AI favors the defenders? Is this a long term or a short term view? What about vulnerability discovery? Some people are freaking out that LLM will discover new zero days, is this a real risk? Resources: “How Large Language Models Are Reshaping the Cybersecurity Landscape” RSA 2024 presentation by Elie (May 6 at 9:40AM) “Lessons Learned from Developing Secure AI Workflows” RSA 2024 presentation by Elie (May 8, 2:25PM) EP50 The Epic Battle: Machine Learning vs Millions of Malicious Documents EP40 2021: Phishing is Solved? EP135 AI and Security: The Good, the Bad, and the Magical EP170 Redefining Security Operations: Practical Applications of GenAI in the SOC EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It PyRIT LLM red-teaming tool Accelerating incident response using generative AI Threat Actors are Interested in Generative AI, but Use Remains Limited OpenAI’s Approach to Frontier Risk…
Guest: Payal Chakravarty, Director of Product Management, Google SecOps, Google Cloud Topics: What are the different use cases for GenAI in security operations and how can organizations prioritize them for maximum impact to their organization? We’ve heard a lot of worries from people that GenAI will replace junior team members. How do you see GenAI enabling more people to be part of the security mission? What are the challenges and risks associated with using GenAI in security operations? We’ve been down the road of automation for SOCs before (UEBA and SOAR both claimed it) and AI looks a lot like those, but with way more matrix math. What are we going to get right this time that we didn’t quite live up to last time(s) around? Imagine a SOC or a D&R team of 2029. What AI-based magic is routine at this time? What new things are done by AI? What do humans do? Resources: Live video (LinkedIn, YouTube) [live audio is not great in these] Practical use cases for AI in security operations, Cloud Next 2024 session by Payal EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps 15 must-attend security sessions at Next '24…
Guests: no guests ( just us !) Topics: What are some of the fun security-related launches from Next 2024 (sorry for our brief “marketing hat” moment!)? Any fun security vendors we spotted “in the clouds”? OK, what are our favorite sessions? Our own, right? Anything else we had time to go to? What are the new security ideas inspired by the event (you really want to listen to this part! Because “freatures”...) Any tricky questions at the end? Resources: Live video ( LinkedIn , YouTube ) [live audio is not great in these] 15 must-attend security sessions at Next '24 Cloud CISO Perspectives: 20 major security announcements from Next ‘24 EP137 Next 2023 Special: Conference Recap - AI, Cloud, Security, Magical Hallway Conversations (last year!) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP90 Next Special - Google Cybersecurity Action Team: One Year Later! A cybersecurity expert's guide to securing AI products with Google SAIF Next 2024 session How AI can transform your approach to security Next 2024 session…
Guests: Umesh Shankar , Distinguished Engineer, Chief Technologist for Google Cloud Security Scott Coull , Head of Data Science Research, Google Cloud Security Topics: What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM? What can “security trained LLM” do better vs regular LLM? Does making it better at security make it worse at other things that we care about? What can a security team do with it today? What are the “starter use cases” for SecLM? What has been the feedback so far in terms of impact - both from practitioners but also from team leaders? Are we seeing the limits of LLMs for our use cases? Is the “LLM is not magic” finally dawning? Resources: “How to tackle security tasks and workflows with generative AI” (Google Cloud Next 2024 session on SecLM) EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It? EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models Supercharging security with generative AI Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma? Considerations for Evaluating Large Language Models for Cybersecurity Tasks Introducing Google’s Secure AI Framework Deep Learning Security and Privacy Workshop Security Architectures for Generative AI Systems ACM Workshop on Artificial Intelligence and Security Conference on Applied Machine Learning in Information Security…
Speaker: Maria Riaz, Cloud Counter-Abuse Engineering Lead, Google Cloud Topics: What is “counter abuse”? Is this the same as security? What does counter-abuse look like for GCP? What are the popular abuse types we face? Do people use stolen cards to get accounts to then violate the terms with? How do we deal with this, generally? Beyond core technical skills, what are some of the relevant competencies for working in this space that would appeal to a diverse set of audiences? You have worked in academia and industry. What similarities or differences have you observed? Resources / reading: Video EP165 Your Cloud Is Not a Pet - Decoding 'Shifting Left' for Cloud Security EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud “Art of War” by Sun Tzu “Dare to Lead” by Brene Brown "Multipliers" by Liz Wiseman…
Guests: Evan Gilman, co-founder and CEO of Spirl; Eli Nesterov, co-founder and CTO of Spirl Topics: Today we have IAM, zero trust and security made easy. With that intro, could you give us the 30 second version of what a workload identity is and why people need them? What’s so spiffy about SPIFFE anyway? What’s different between this and micro segmentation of your network–why is one better or worse? You call your book “solving the bottom turtle”, could you tell us what that means? What are the challenges you’re seeing large organizations run into when adopting this approach at scale? Of all the things a CISO could prioritize, why should this one get added to the list? What makes this, which is so core to our internal security model–ripe for the outside world? How do people do it now, and what gets thrown away when you deploy SPIFFE? Are there alternatives? SPIFFE is interesting, yet can a startup really “solve for the bottom turtle”? Resources: SPIFFE and Spirl “Solving the Bottom Turtle” book [PDF, free] “Surely You're Joking, Mr. Feynman!” book [also, one of Anton’s faves for years!] “Zero Trust Networks” book Workload Identity Federation in GCP…
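For context on the episode’s opening question: a SPIFFE workload identity is a URI of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;, usually delivered to the workload as a short-lived SVID. The sketch below shows only the naming scheme, with a made-up trust domain and allow-list; real deployments obtain and verify SVIDs through the SPIFFE Workload API rather than string-matching identifiers.

```python
# Sketch of the SPIFFE identity naming scheme: a workload identity is a URI of
# the form spiffe://<trust-domain>/<workload-path>, typically carried in an
# X.509 SVID. Real systems fetch and verify SVIDs via the SPIFFE Workload API;
# this toy check only illustrates the shape of the ID. The trust domain and
# paths below are made up for illustration.
from urllib.parse import urlparse

TRUSTED_DOMAIN = "prod.example.org"                           # hypothetical trust domain
ALLOWED_CALLERS = {"/payments/frontend", "/payments/ledger"}  # hypothetical allow-list


def is_authorized(spiffe_id: str) -> bool:
    """Accept the caller only if its SPIFFE ID is in our trust domain and allow-list."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        return False
    return parsed.netloc == TRUSTED_DOMAIN and parsed.path in ALLOWED_CALLERS


print(is_authorized("spiffe://prod.example.org/payments/frontend"))  # True
print(is_authorized("spiffe://evil.example.net/payments/frontend"))  # False
```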
Guest: Ahmad Robinson , Cloud Security Architect, Google Cloud Topics: You’ve done a BlackHat webinar where you discuss a Pets vs Cattle mentality when it comes to cloud operations. Can you explain this mentality and how it applies to security? What in your past led you to these insights? Tell us more about your background and your journey to Google. How did that background contribute to your team? One term that often comes up on the show and with our customers is 'shifting left.' Could you explain what 'shifting left' means in the context of cloud security? What’s hard about shift left, and where do orgs get stuck too far right? A lot of “cloud people” talk about IaC and PaC but the terms and the concepts are occasionally confusing to those new to cloud. Can you briefly explain Policy as Code and its security implications? Does PaC help or hurt security? Resources: “No Pets Allowed - Mastering The Basics Of Cloud Infrastructure” webinar EP33 Cloud Migrations: Security Perspectives from The Field EP126 What is Policy as Code and How Can It Help You Secure Your Cloud Environment? EP138 Terraform for Security Teams: How to Use IaC to Secure the Cloud…
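As a rough illustration of the policy-as-code idea discussed in this episode: policies are written as ordinary, versioned code and evaluated against infrastructure definitions before anything is deployed. The sketch below uses a simplified, made-up resource structure (not any provider's actual schema); production pipelines more commonly use OPA/Rego, Sentinel, or cloud-native organization policies.

```python
# Toy "policy as code" check: the policy is ordinary code kept in version
# control and run against infrastructure definitions before deployment.
# The resource structure below is a simplified, made-up stand-in for an
# IaC plan, not any provider's real schema.
from typing import Iterable


def find_public_buckets(resources: Iterable[dict]) -> list[str]:
    """Flag storage buckets that grant access to allUsers (publicly readable)."""
    violations = []
    for res in resources:
        if res.get("type") != "storage_bucket":
            continue
        if "allUsers" in res.get("iam_members", []):
            violations.append(res.get("name", "<unnamed>"))
    return violations


plan = [  # hypothetical example data
    {"type": "storage_bucket", "name": "public-assets", "iam_members": ["allUsers"]},
    {"type": "storage_bucket", "name": "billing-exports", "iam_members": ["group:finance@example.com"]},
]
for name in find_public_buckets(plan):
    print(f"POLICY VIOLATION: bucket {name} is publicly readable")
```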
EP164 Quantum Computing: Understanding the (very serious) Threat and Post-Quantum Cryptography 31:23
Guest: Jennifer Fernick, Senior Staff Security Engineer and UTL, Google Topics: Since one of us (!) doesn't have a PhD in quantum mechanics, could you explain what a quantum computer is and how do we know they are on a credible path towards being real threats to cryptography? How soon do we need to worry about this one? We’ve heard that quantum computers are more of a threat to asymmetric/public key crypto than symmetric crypto. First off, why? And second, what does this difference mean for defenders? Why (how) are we sure this is coming? Are we mitigating a threat that is perennially 10 years ahead and then vanishes due to some other broad technology change? What is a post-quantum algorithm anyway? If we’re baking new key exchange crypto into our systems, how confident are we that we are going to be resistant to both quantum and traditional cryptanalysis? Why does NIST think it's time to be doing the PQC thing now? Where is the rest of the industry on this evolution? How can a person tell the difference here between reality and snakeoil? I think Anton and I both responded to your initial email with a heavy dose of skepticism, and probably more skepticism than it deserved, so you get the rare on-air apology from both of us! Resources: Securing tomorrow today: Why Google now protects its internal communications from quantum threats How Google is preparing for a post-quantum world NIST PQC standards PQ Crypto conferences “Quantum Computation & Quantum Information” by Nielsen & Chuang book “Quantum Computing Since Democritus” by Scott Aaronson book EP154 Mike Schiffman: from Blueboxing to LLMs via Network Security at Google…
Guest: Phil Venables, Vice President, Chief Information Security Officer (CISO) @ Google Cloud Topics: You had this epic 8 megatrends idea in 2021, where are we now with them? We now have 9 of them , what made you add this particular one (AI)? A lot of CISOs fear runaway AI. Hence good governance is key! What is your secret of success for AI governance? What questions are CISOs asking you about AI? What questions about AI should they be asking that they are not asking? Which one of the megatrends is the most contentious based on your presenting them worldwide? Is cloud really making the world of IT simpler (megatrend #6)? Do most enterprise cloud users appreciate the software-defined nature of cloud (megatrend #5) or do they continue to fight it? Which megatrend is manifesting the most strongly in your experience? Resources: Megatrends drive cloud adoption—and improve security for all and infographic “Keynote | The Latest Cloud Security Megatrend: AI for Security” “Lessons from the future: Why shared fate shows us a better cloud roadmap” blog and shared fate page SAIF page “Spotlighting ‘shadow AI’: How to protect against risky AI practices” blog EP135 AI and Security: The Good, the Bad, and the Magical EP47 Megatrends, Macro-changes, Microservices, Oh My! Changes in 2022 and Beyond in Cloud Security Secure by Design by CISA…
Guest: Kat Traxler , Security Researcher, TrustOnCloud Topics: What is your reaction to “in the cloud you are one IAM mistake away from a breach”? Do you like it or do you hate it? A lot of people say “in the cloud, you must do IAM ‘right’”. What do you think that means? What is the first or the main idea that comes to your mind when you hear it? How have you seen the CSPs take different approaches to IAM? What does it mean for the cloud users? Why do people still screw up IAM in the cloud so badly after years of trying? Deeper, why do people still screw up resource hierarchy and resource management? Are the identity sins of cloud IAM users truly the sins of the creators? How did the "big 3" get it wrong and how does that continue to manifest today? Your best cloud IAM advice is “assign roles at the lowest resource-level possible”, please explain this one? Where is the magic? Resources: Video ( Linkedin , YouTube ) Kat blog “Diving Deeply into IAM Policy Evaluation” blog “Complexity: a Guided Tour” book EP141 Cloud Security Coast to Coast: From 2015 to 2023, What's Changed and What's the Same? EP129 How CISO Cloud Dreams and Realities Collide…
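To make the “assign roles at the lowest resource-level possible” advice from this episode concrete, here is a small sketch that scans a flat, made-up export of role bindings and flags broad roles granted at project scope or above; the binding format, example members, and level names are hypothetical, not the actual GCP IAM policy format.

```python
# Illustration of "assign roles at the lowest resource level possible": given a
# flat export of role bindings (the structure is made up for this sketch), flag
# broad roles granted at or above project scope, which are usually better
# replaced by a narrower role on the specific resource.
BROAD_ROLES = {"roles/owner", "roles/editor"}          # widely scoped, high risk
HIGH_LEVELS = {"organization", "folder", "project"}    # prefer granting lower

bindings = [  # hypothetical example data
    {"level": "project", "resource": "projects/app-prod",
     "role": "roles/editor", "member": "user:dev@example.com"},
    {"level": "bucket", "resource": "buckets/app-prod-logs",
     "role": "roles/storage.objectViewer",
     "member": "serviceAccount:reader@app-prod.iam.gserviceaccount.com"},
]

for b in bindings:
    if b["level"] in HIGH_LEVELS and b["role"] in BROAD_ROLES:
        print(f"Review: {b['member']} has {b['role']} on {b['resource']}; "
              f"consider a narrower role on the specific resource instead.")
```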
EP161 Cloud Compliance: A Lawyer - Turned Technologist! - Perspective on Navigating the Cloud 27:38
Guest: Victoria Geronimo, Cloud Security Architect, Google Cloud Topics: You work with technical folks at the intersection of compliance, security, and cloud. So what do you do, and where do you find the biggest challenges in communicating across those boundaries? How does cloud make compliance easier? Does it ever make compliance harder? What is your best advice to organizations that approach cloud compliance as they did for the 1990s data centers and classic IT? What has been the most surprising compliance challenge you’ve helped teams debug in your time here? You also work on standards development. Can you tell us about how you got into that and what’s been surprising in that for you? We often say on this show that an organization’s ability to threat model is only as good as their team’s perspectives are diverse: how has your background shaped your work here? Resources: Video (YouTube) EP14 Making Compliance Cloud-native EP25 Beyond Compliance: Cloud Security in Europe Fordham University Law and Technology site IAPP site…
Guest: Josh Liburdi, Staff Security Engineer, Brex Topics: What is this “security data fabric”? Can you explain the technology? Is there a market for this? Is this the same as security data pipelines? Why is this really needed? Won’t your SIEM vendor do it? Who should adopt it? Or, as Tim says, what gets better once you deploy it? Is reducing cost a big part of the security data fabric story? Does the data quality improve with the use of security data fabric tooling? For organizations considering a security data fabric solution, what key factors should they prioritize in their evaluation and selection process? What is the connection between this and federated security data search? What is the likely future for this technology? Resources: BSidesSF 2024 - Reinventing ETL for Detection and Response Teams (Josh Liburdi) “How to Build Your Own Security Data Pipeline (and why you shouldn’t!)” blog “Decoupled SIEM: Brilliant or Stupid?” blog “Security Correlation Then and Now: A Sad Truth About SIEM” blog (my #1 popular post BTW) “Log Centralization: The End Is Nigh?” blog “20 Years of SIEM: Celebrating My Dubious Anniversary” blog “Navigating the data current: Exploring Cribl.Cloud analytics and customer insights” report OCSF…
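For readers wondering what a security data pipeline (or “fabric”) actually does with events: a core job is normalizing raw, vendor-specific records into a common shape, and filtering or enriching them, before they reach the SIEM or data lake. A minimal sketch with made-up field names follows (the fields are illustrative only, not OCSF or any vendor's schema).

```python
# Sketch of the core job of a security data pipeline: take raw, vendor-specific
# events and normalize them into a common shape before they reach the SIEM or
# data lake. Field names here are illustrative only, not OCSF or any vendor's
# actual schema.
from datetime import datetime, timezone


def normalize_ssh_login(raw: dict) -> dict:
    """Map one made-up raw SSH log record into a simplified common event."""
    return {
        "time": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "category": "authentication",
        "action": "ssh_login",
        "outcome": "success" if raw.get("result") == "ok" else "failure",
        "user": raw.get("user", "unknown"),
        "src_ip": raw.get("client_addr"),
        "host": raw.get("hostname"),
    }


raw_event = {"ts": 1717430400, "result": "fail", "user": "root",
             "client_addr": "203.0.113.7", "hostname": "bastion-1"}
print(normalize_ssh_login(raw_event))
```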
Guest: Royal Hansen, CISO, Alphabet Topics: What were you thinking before you took that “Google CISO” job? Google's infrastructure is vast and complex, yet also modern. How does this influence the design and implementation of your security programs compared to other organizations? Are there any specific challenges or advantages that arise from operating at such a massive scale? What has been most surprising about Google’s internal security culture that you wish you could export to the world at large? What have you learned about scaling teams in the Google context? How do you design effective metrics for your teams and programs? So, yes, AI. Every organization is trying to weigh the risks and benefits of generative AI–do you have advice for the world at large based on how we’ve done this here? Resources: EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil CISA Secure by Design EP20 Security Operations, Reliability, and Securing Google with Heather Adkins EP91 “Hacking Google”, Op Aurora and Insider Threat at Google “Delivering Security at Scale: From Artisanal to Industrial” SRE book: Chapter 5: Toil Elimination SRS book: Security as an Emergent Property What are Security Invariants? EP185 SAIF-powered Collaboration to Secure AI: CoSAI and Why It Matters to You “Against the Gods: The Remarkable Story of Risk” book…
Guest: Dor Fledel, Founder and CEO of Spera Security, now Sr Director of Product Management at Okta Topics: We say “identity is the new perimeter,” but I think there’s a lot of nuance to it. Why and how does it matter specifically in cloud and SaaS security? How do you do IAM right in the cloud? Help us with the acronym soup: ITDR, CIEM, and also ISPM (ITSPM?), why are new products needed? What were the most important challenges you found users were struggling with when it comes to identity management? What advice do you have for organizations with considerable identity management debt? How should they start paying that down and get to a better place? Also: what is “identity management debt”? Can you answer this from both a technical and organizational change management perspective? It’s one thing to monitor how user identities, service accounts and API keys are used, it’s another to monitor how they’re set up. When you were designing your startup, how did you pick which side of that coin to focus on first? What’s your advice for other founders thinking about the journey from zero to 1 and the journey from independent to acquisition? Resources: EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler EP127 Is IAM Really Fun and How to Stay Ahead of the Curve in Cloud IAM? EP166 Workload Identity, Zero Trust and SPIFFE (Also Turtles!) EP182 ITDR: The Missing Piece in Your Security Puzzle or Yet Another Tool to Buy? “Secrets of power negotiating“ book…
Guest: Nicole Beckwith , Sr. Security Engineering Manager, Threat Operations @ Kroger Topics: What are the most important qualities of a successful SOC leader today? What is your approach to building and maintaining a high-functioning SOC team? How do you approach burnout in a SOC team? What are some of the biggest challenges facing SOC teams today? Can you share some specific examples of how you have built and - probably more importantly! - maintained a high-functioning SOC team? What are your thoughts on the current state of SIEM technology? Still a core of SOC or not? What advice would you give to someone who inherited a SOC? What should his/her 7/30/90 day plan include? Resources: EP180 SOC Crossroads: Optimization vs Transformation - Two Paths for Security Operations Center EP181 Detection Engineering Deep Dive: From Career Paths to Scaling SOC Teams EP58 SOC is Not Dead: How to Grow and Develop Your SOC for Cloud and Beyond EP64 Security Operations Center: The People Side and How to Do it Right EP73 Your SOC Is Dead? Evolve to Output-driven Detect and Respond! EP26 SOC in a Large, Complex and Evolving Organization “The first 90 days” book…
EP186 Cloud Security Tools: Trust the Cloud Provider or Go Third-Party? An Epic Debate, Anton vs Tim 27:18
Guests: A debate between Tim and Anton, no guests Debate positions: You must buy the majority of cloud security tools from a cloud provider, here is why. You must buy the majority of cloud security tools from a 3rd party security vendor, here is why. Resources: EP74 Who Will Solve Cloud Security: A View from Google Investment Side EP22 Securing Multi-Cloud from a CISO Perspective, Part 3 EP176 Google on Google Cloud: How Google Secures Its Own Cloud Use “The cloud trust paradox: To trust cloud computing more, you need the ability to trust it less” blog “Snow Crash” book VMTD…
Guest: David LaBianca , Senior Engineering Director, Google Topics: The universe of AI risks is broad and deep. We’ve made a lot of headway with our SAIF framework: can you give us a) a 90 second tour of SAIF and b) share how it’s gotten so much traction and c) talk about where we go next with it? The Coalition for Secure AI (CoSAI) is a collaborative effort to address AI security challenges. What are Google's specific goals and expectations for CoSAI, and how will its success be measured in the long term? Something we love about CoSAI is that we involved some unexpected folks, notably Microsoft and OpenAI. How did that come about? How do we plan to work with existing organizations, such as Frontier Model Forum (FMF) and Open Source Security Foundation (OpenSSF) ? Does this also complement emerging AI security standards? AI is moving quickly. How do we intend to keep up with the pace of change when it comes to emerging threat techniques and actors in the landscape? What do we expect to see out of CoSAI work and when? What should people be looking forward to and what are you most looking forward to releasing from the group? We have proposed projects for CoSAI, including developing a defender's framework and addressing software supply chain security for AI systems. How can others use them? In other words, if I am a mid-sized bank CISO, do I care? How do I benefit from it? An off-the-cuff question, how to do AI governance well? Resources: CoSAI site, CoSAI 3 projects SAIF main site Gen AI governance: 10 tips to level up your AI program “Securing AI: Similar or Different?” paper Our Security of AI Papers and Blogs Explained…
Guest: Manan Doshi , Senior Security Engineer @ Etsy Questions: In your experience, what are the biggest challenges organizations face when migrating to a new SIEM platform? How did you solve them? Many SIEM projects have problems, but a decent chunk of these problems are not about the tool being broken. How did you decide to migrate? When is it time to go? Specifically, how to avoid constant change from product to product, each time blaming the tool for what are essentially process failures? How did you handle detection content during migration? Was AI involved? How did you test for this: “Which platform will best enable our engineering team to build what we need?” Tell us more about the Detection as Code pipeline you use? “Completed SIEM migration in a single week!” Is this for real? Resources: Google Cloud Security Summit (August 20, 2024) and “Etsy and the art of SIEM Migration” presentation “Ancillary Justice” book StreamAlert SIEM migration blog ( spicy version / vanilla version / long detailed version ) Can We Have “Detection as Code”? Google SecOps EP117 Can a Small Team Adopt an Engineering-Centric Approach to Cybersecurity?…
Guests: Jaffa Edwards, Senior Security Manager @ Google Cloud; Lyka Segura, Cloud Security Engineer @ Google Cloud Topics: Security transformation is hard; do you have any secret tricks or methods that actually make it happen? Can you share a story about a time when you helped a customer transform their cloud security posture? Not just improve, but actually transform! What is your process for understanding their needs and developing a security solution that is tailored to them? What to do if a customer does not want to share what is necessary or does not know themselves? What are some of the most common security mistakes that you see organizations make when they move to the cloud? What about the customers who insist on practicing in the cloud the same way they did on-premise? What do you tell the organizations that insist that “cloud is just somebody else’s computer” and they insist on doing security the old-fashioned way? What advice would you give to organizations that are just starting out on their cloud security journey? What are the first three cloud security steps you recommend that work for a cloud environment they inherited? References: EP86 How to Apply Lessons from Virtualization Transition to Make Cloud Transformation Better For a successful cloud transformation, change your culture first Building security guardrails for developers with Google Cloud Google Cloud Consulting…
Guest: Adam Bateman , Co-founder and CEO, Push Security Topics: What is Identity Threat Detection and Response ( ITDR )? How do you define it? What gets better at a client organization once ITDR is deployed? Do we also need “ISPM” (parallel to CDR/CSPM), and what about CIEM? Workload identity ITDR vs human identity ITDR? Do we need both? Are these the same? What are the alternatives to using ITDR? Can’t SIEM/UEBA help - perhaps with browser logs? What are some of the common types of identity-based threats that ITDR can help detect? What advice would you give to organizations that are considering implementing ITDR? Resources: ITDR Definition ITDR blog by Push / solve problem…
Guest: Zack Allen , Senior Director of Detection & Research @ Datadog, creator of Detection Engineering Weekly Topics: What are the biggest challenges facing detection engineers today? What do you tell people who want to consume detections and not engineer them? What advice would you give to someone who is interested in becoming a detection engineer at her organization? So, what IS a detection engineer? Do you need software skills to be one? How much breadth and depth do you need? What should a SOC leader whose team totally lacks such skills do? You created Detection Engineering Weekly . What motivated you to start this publication, and what are your goals for it? What are the learnings so far? You work for a vendor, so how should customers think of vendor-made vs customer-made detections and their balance? What goes into a backlog for detections and how do you inform it? Resources: Video ( LinkedIn , YouTube ) Zacks’s newsletter: https://detectionengineering.net EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP117 Can a Small Team Adopt an Engineering-Centric Approach to Cybersecurity? The SRE book “Detection Spectrum” blog “Delivering Security at Scale: From Artisanal to Industrial” blog (and this too ) “Detection Engineering is Painful — and It Shouldn’t Be (Part 1)” blog series “Detection as Code? No, Detection as COOKING!” blog “Practical Threat Detection Engineering: A hands-on guide to planning, developing, and validating detection capabilities” book SpecterOps blog…
EP180 SOC Crossroads: Optimization vs Transformation - Two Paths for Security Operations Center 28:09
Guests: Mitchell Rudoll , Specialist Master, Deloitte Alex Glowacki , Senior Consultant, Deloitte Topics: The paper outlines two paths for SOCs: optimization or transformation . Can you elaborate on the key differences between these two approaches and the factors that should influence an organization's decision on which path to pursue? The paper also mentions that alert overload is still a major challenge for SOCs. What are some of the practices that work in 2024 for reducing alert fatigue and improving the signal-to-noise ratio in security signals? You also discuss the importance of automation for SOCs. What are some of the key areas where automation can be most beneficial, and what are some of the challenges of implementing automation in SOCs? Automation is often easier said than done… What specific skills and knowledge will be most important for SOC analysts in the future that people didn’t think of 5-10 years ago? Looking ahead, what are your predictions for the future of SOCs? What emerging technologies do you see having the biggest impact on how SOCs operate? Resources: “Future of the SOC: Evolution or Optimization —Choose Your Path” paper and highlights blog “Meet the Ghost of SecOps Future” video based on the paper EP58 SOC is Not Dead: How to Grow and Develop Your SOC for Cloud and Beyond The original Autonomic Security Operations (ASO) paper (2021) “New Paper: “Future of the SOC: Forces shaping modern security operations” (Paper 1 of 4)” “New Paper: “Future of the SOC: SOC People — Skills, Not Tiers” (Paper 2 of 4)” “New Paper: “Future Of The SOC: Process Consistency and Creativity: a Delicate Balance” (Paper 3 of 4)”…
Guests: Robin Shostack , Security Program Manager, Google Jibran Ilyas , Managing Director Incident Response, Mandiant, Google Cloud Topics: You talk about “teamwork under adverse conditions” to describe expedition behavior (EB). Could you tell us what it means? You have been involved in response to many high profile incidents, one of the ones we can talk about publicly is one of the biggest healthcare breaches at this time. Could you share how Expedition Behavior played a role in our response? Apart from during incident response which is almost definitionally an adverse condition, how else can security teams apply this knowledge? If teams are going to embrace an expeditionary behavior mindset, how do they learn it? It’s probably not feasible to ship every SOC team member off to the Okavango Delta for a NOLS course . Short of that, how do we foster EB in a new team? How do we create it in an existing team or an under-performing team? Resources: EP174 How to Measure and Improve Your Cloud Incident Response Readiness: A New Framework EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP98 How to Cloud IR or Why Attackers Become Cloud Native Faster? “Take a few of these: Cybersecurity lessons for 21st century healthcare professionals” blog Getting More by Stuart Diamond book Who Moved My Cheese by Spencer Johnson book…
EP178 Meet Brandon Wood: The Human Side of Threat Intelligence: From Bad IP to Trafficking Busts 32:09
Guest: Brandon Wood, Product Manager for Google Threat Intelligence Topics: Threat intelligence is one of those terms that means different things to everyone–can you tell us what this term has meant in the different contexts of your career? What do you tell people who assume that “TI = lists of bad IPs”? We heard while prepping for this show that you were involved in breaking up a human trafficking ring: tell us about that! In Anton’s experience, a lot of cyber TI is stuck in “1. Get more TI 2. ??? 3. Profit!” How do you move past that? One aspect of threat intelligence that’s always struck me as goofy is the idea that we can “monitor the dark web” and provide something useful. Can you change my mind on this one? You told us your story of getting into sales, and you recently did a successful rotation into the role of Product Manager; can you tell us about what motivated you to do this and what the experience was like? Are there other parts of your background that inform the work you’re doing and how you see yourself at Google? How does that impact our go-to-market for threat intelligence, and what’re we up to when it comes to keeping the Internet and broader world safe? Resources: Video EP175 Meet Crystal Lister: From Public Sector to Google Cloud Security and Threat Horizons EP128 Building Enterprise Threat Intelligence: The Who, What, Where, and Why EP112 Threat Horizons - How Google Does Threat Intelligence Introducing Google Threat Intelligence: Actionable threat intelligence at Google scale A Requirements-Driven Approach to Cyber Threat Intelligence…
Guests: Omar ElAhdan , Principal Consultant, Mandiant, Google Cloud Will Silverstone , Senior Consultant, Mandiant, Google Cloud Topics: Most organizations you see use both cloud and on-premise environments. What are the most common challenges organizations face in securing their hybrid cloud environments? You do IR so in your experience, what are top 5 mistakes organizations make that lead to cloud incidents? How and why do organizations get the attack surface wrong? Are there pillars of attack surface? We talk a lot about how IAM matters in the cloud. Is that true that AD is what gets you in many cases even for other clouds? What is your best cloud incident preparedness advice for organizations that are new to cloud and still use on-prem as well? Resources: Next 2024 LIVE Video of this episode / LinkedIn version (sorry for the audio quality!) “Lessons Learned from Cloud Compromise” podcast at The Defender’s Advantage “Cloud compromises: Lessons learned from Mandiant investigations” in 2023 from Next 2024 EP174 How to Measure and Improve Your Cloud Incident Response Readiness: A New Framework EP103 Security Incident Response and Public Cloud - Exploring with Mandiant EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler…
Guest: Seth Vargo , Principal Software Engineer responsible for Google's use of the public cloud, Google Topics: Google uses the public cloud, no way, right? Which one? Oh, yeah, I guess this is obvious: GCP, right? Where are we like other clients of GCP? Where are we not like other cloud users? Do we have any unique cloud security technology that we use that others may benefit from? How does our cloud usage inform our cloud security products? So is our cloud use profile similar to cloud natives or traditional companies? What are some of the most interesting cloud security practices and controls that we use that are usable by others? How do we make them work at scale? Resources: EP12 Threat Models and Cloud Security (previous episode with Seth) EP66 Is This Binary Legit? How Google Uses Binary Authorization and Code Provenance EP75 How We Scale Detection and Response at Google: Automation, Metrics, Toil EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics IAM Deny Seth Vargo blog “Attention Is All You Need” paper (yes, that one)…