
From Data to Performance: Understanding and Improving Your AI Model

26:42
 
Content provided by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

Modern data analytic methods and tools—including artificial intelligence (AI) and machine learning (ML) classifiers—are revolutionizing prediction capabilities and automation through their capacity to analyze and classify data. To produce such results, these methods depend on correlations. However, an overreliance on correlations can lead to prediction bias and reduced confidence in AI outputs.
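
To make that risk concrete, here is a minimal, hypothetical sketch (not from the episode) using NumPy and scikit-learn: a classifier is trained alongside a feature that is only spuriously correlated with the label, looks accurate during training, and degrades once that correlation weakens. The feature names and data are invented for illustration.

```python
# Hypothetical illustration of overreliance on correlations; the features
# and data here are invented for this sketch, not taken from the episode.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_agreement):
    """Labels depend on x1; x2 merely agrees with the label some fraction of the time."""
    y = rng.integers(0, 2, n)
    x1 = y + rng.normal(0.0, 1.0, n)              # genuinely predictive signal
    agree = rng.random(n) < spurious_agreement    # how often x2 matches y
    x2 = np.where(agree, y, 1 - y).astype(float)  # spuriously correlated feature
    return np.column_stack([x1, x2]), y

# The spurious correlation holds in training (95% agreement) but not in deployment (50%).
X_train, y_train = make_data(5_000, spurious_agreement=0.95)
X_test, y_test = make_data(5_000, spurious_agreement=0.50)

clf = LogisticRegression().fit(X_train, y_train)
print(f"training accuracy:   {clf.score(X_train, y_train):.2f}")  # looks strong
print(f"deployment accuracy: {clf.score(X_test, y_test):.2f}")    # noticeably worse
```

During training the model leans on x2 because it is almost perfectly aligned with the label; when that alignment disappears, accuracy falls even though the true labeling process never changed.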

Drift in data and concept, evolving edge cases, and emerging phenomena can undermine the correlations that AI classifiers rely on. As the U.S. government increases its use of AI classifiers and predictors, these issues multiply, and users may grow to distrust results. To address erroneous correlations and predictions, we need new methods for ongoing testing and evaluation of AI and ML accuracy. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Nicholas Testa, a senior data scientist in the SEI's Software Solutions Division (SSD), and Crisanne Nolan, an Agile transformation engineer, also in SSD, sit down with Linda Parker Gates, Principal Investigator for this research and initiative lead for Software Acquisition Pathways at the SEI, to discuss the AI Robustness (AIR) tool, which allows users to gauge AI and ML classifier performance with data-based confidence.
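
The ongoing testing and evaluation the episode calls for can be approximated with standard drift checks. The sketch below is a hypothetical example (it is not the AIR tool) that uses SciPy's two-sample Kolmogorov-Smirnov test to flag incoming batches whose feature distribution no longer matches the data the classifier was trained on; the thresholds, batch sizes, and distributions are invented for illustration.

```python
# Hypothetical drift monitor; the threshold, batch sizes, and distributions
# are invented for illustration and are not taken from the AIR tool.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 10_000)   # feature values seen at training time

def batch_has_drifted(batch, alpha=0.01):
    """Flag a batch whose feature distribution differs from the reference."""
    result = ks_2samp(reference, batch)
    return result.pvalue < alpha

stable_batch = rng.normal(0.0, 1.0, 500)   # same distribution as training
shifted_batch = rng.normal(0.8, 1.3, 500)  # mean and spread have drifted

print("stable batch drifted? ", batch_has_drifted(stable_batch))   # expect False
print("shifted batch drifted?", batch_has_drifted(shifted_batch))  # expect True
```

A flagged batch does not by itself prove the classifier is wrong, but it is a cheap signal that the correlations learned at training time may no longer hold and that accuracy should be re-measured on freshly labeled data.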

423 episodes
