AWS Trainium and Inferentia // Kamran Khan and Matthew McClean // #238

45:22
 

Content provided by Demetrios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Demetrios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://fa.player.fm/legal

Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/

Matthew McClean is a Machine Learning Technology Leader at Amazon Web Services (AWS). He leads the customer engineering teams at Annapurna ML, helping customers adopt AWS Trainium and AWS Inferentia for their generative AI workloads.

Kamran Khan is a Senior Technical Business Development Manager for AWS Inferentia/Trainium at AWS. He has over a decade of experience helping customers deploy and optimize deep learning training and inference workloads using AWS Inferentia and AWS Trainium.

AWS Trainium and Inferentia // MLOps podcast #238 with Kamran Khan, BD, Annapurna ML, and Matthew McClean, Annapurna Labs Solution Architecture Lead at AWS.

Huge thank you to AWS for sponsoring this episode. AWS - https://aws.amazon.com/

// Abstract

Unlock unparalleled performance and cost savings with AWS Trainium and Inferentia! These powerful AI accelerators offer MLOps community members enhanced availability, compute elasticity, and energy efficiency. They integrate seamlessly with PyTorch, JAX, and Hugging Face, with robust support from industry leaders like W&B, Anyscale, and Outerbounds. Because they are fully compatible with AWS services like Amazon SageMaker, getting started has never been easier. Elevate your AI game with AWS Trainium and Inferentia!
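As a rough sketch of the PyTorch integration mentioned above (not taken from the episode): the example below compiles a small placeholder model for NeuronCores using the AWS Neuron SDK's torch-neuronx package. The model, shapes, and instance choice are illustrative assumptions; it assumes torch and torch-neuronx are installed on a Neuron-capable instance such as Inf2 or Trn1.

import torch
import torch_neuronx  # AWS Neuron SDK integration for PyTorch (assumed installed)

# Placeholder model: any traceable PyTorch module works; a tiny MLP stands in here.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)

# Ahead-of-time compile the model for NeuronCores; run this on an Inf2/Trn1 instance.
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled module keeps the original call signature.
with torch.no_grad():
    logits = neuron_model(example_input)
print(logits.shape)  # expected: torch.Size([1, 10])

Hugging Face's optimum-neuron library offers a similarly thin path for Transformers models.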

// Bio

Kamran Khan

Helping developers and users achieve their AI performance and cost goals for almost 2 decades.

Matthew McClean

Leads the Annapurna Labs Solution Architecture and Prototyping teams, helping customers train and deploy their generative AI models with AWS Trainium and AWS Inferentia.

// MLOps Jobs board

jobs.mlops.community

// MLOps Swag/Merch

https://mlops-community.myshopify.com/

// Related Links

AWS Trainium: https://aws.amazon.com/machine-learning/trainium/

AWS Inferentia: https://aws.amazon.com/machine-learning/inferentia/

--------------- ✌️Connect With Us ✌️ -------------

Join our Slack community: https://go.mlops.community/slack

Follow us on Twitter: @mlopscommunity

Sign up for the next meetup: https://go.mlops.community/register

Catch all episodes, blogs, newsletters, and more: https://mlops.community/

Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/

Connect with Kamran on LinkedIn: https://www.linkedin.com/in/kamranjk/

Connect with Matt on LinkedIn: https://www.linkedin.com/in/matthewmcclean/

Timestamps:

[00:00] Matt's & Kamran's preferred coffee

[00:53] Takeaways

[01:57] Please like, share, leave a review, and subscribe to our MLOps channels!

[02:22] AWS Trainium and Inferentia rundown

[06:04] Inferentia vs GPUs: Comparison

[11:20] Using Neuron for ML

[15:54] Should Trainium and Inferentia go together?

[18:15] ML Workflow Integration Overview

[23:10] The EC2 instance

[24:55] Bedrock vs SageMaker

[31:16] Shifting mindset toward open source in enterprise

[35:50] Fine-tuning open-source models, reducing costs significantly

[39:43] Model deployment cost can be reduced innovatively

[43:49] Benefits of using Inferentia and Trainium

[45:03] Wrap up
