Content provided by Evan Kirstel. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Evan Kirstel or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://fa.player.fm/legal

From Shadow AI to Safe Adoption: Guardrails for Enterprise AI

20:42
 

Manage episode 510063608 series 3499431

Interested in being a guest? Email us at [email protected]

Your chatbot just recommended a competitor. Your interns pasted sensitive research into a public model. And your developers fed entire mobile codebases to a chatbot for “optimization.” We’ve seen it all—and we’re turning those hard lessons into a practical, repeatable playbook for safe AI at scale.
We sit down with Rick Caccia, CEO and co-founder of WitnessAI, to unpack how large organizations move from "Doctor No" to "Doctor Yes." Rick explains what a confidence layer for enterprise AI looks like: full observability across employee usage, third-party apps, internal models, customer-facing chatbots, and increasingly powerful agents. We discuss why legacy DLP can't keep up with conversational risk, how intention-based controls catch unsafe goals in real time, and why brand safety belongs right alongside security and compliance. You'll hear real stories: flipping 150,000 employees from blocked to safely enabled in days, stopping inadvertent PCI exposure in support workflows, and preventing chatbots from steering customers to competitors.
We also get tactical about regulation and readiness. Yes, the EU AI Act matters—but so do familiar frameworks like PCI DSS and HIPAA that AI usage quietly reactivates. Rick shares a phased roadmap: start with visibility, normalize identity across divisions, roll out targeted policies slowly, and add guardrails that constrain agents before they act. We cover the attacker–defender gap as AI lowers cost and increases speed for adversaries, plus the emerging blind spots leaders should watch as agentic capabilities become default in operating systems and business apps.
If you’re a CISO, CIO, or builder trying to enable AI without losing control, this conversation offers concrete steps, fresh mental models, and a path to say yes with confidence.
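To make the intention-based-controls idea discussed above concrete, here is a minimal toy sketch of a guardrail that classifies a prompt's goal before it reaches a model, rather than pattern-matching data the way traditional DLP does. All names and rules are illustrative assumptions for this example, not WitnessAI's actual API or method:

```python
# Toy sketch of an intention-based guardrail: decide whether a prompt's
# *goal* is safe before forwarding it to an AI model. In practice the
# classifier would be an ML model; keyword rules stand in here purely
# for illustration. All identifiers are hypothetical.

UNSAFE_INTENTS = {
    "share_customer_pii",       # e.g. pasting card numbers into a chatbot
    "exfiltrate_source_code",   # e.g. feeding a codebase in for "optimization"
}

def classify_intent(prompt: str) -> str:
    """Stand-in for an intent classifier; returns a coarse intent label."""
    lowered = prompt.lower()
    if "card number" in lowered or "ssn" in lowered:
        return "share_customer_pii"
    if "optimize this codebase" in lowered:
        return "exfiltrate_source_code"
    return "benign"

def guardrail(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a prompt before it reaches the model."""
    intent = classify_intent(prompt)
    if intent in UNSAFE_INTENTS:
        return False, f"blocked: unsafe intent '{intent}'"
    return True, "allowed"
```

The point of the design is the one Rick makes: the check runs on the user's apparent goal, so a benign question passes while a risky request is blocked regardless of how the sensitive data is phrased.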

Support the show

More at https://linktr.ee/EvanKirstel


Chapters

1. Setting The Stakes: Safe Enterprise AI (00:00:00)

2. Witness AI’s Mission: A Confidence Layer (00:00:53)

3. Defining “Safe” For Employees, Apps, Agents (00:01:18)

4. From Doctor No To Doctor Yes (00:03:12)

5. Shadow AI And The 5,000-App Surprise (00:04:20)

6. Why AI Risk Differs From Traditional Security (00:06:46)

7. Intention-Based Controls Over DLP (00:08:27)

8. Regulations: New Acts And Old Rules (00:09:19)

9. Attackers’ Advantage And New Blind Spots (00:10:40)

10. Real-World Wins: Enabling 150k Safely (00:12:07)

11. B2C Bots And Brand-Safe Guardrails (00:13:46)

12. Playbook: Observe First, Then Control (00:15:16)

13. Closing And Listener CTA (00:18:35)

520 episodes
