
Content provided by Real Python. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Real Python or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described at https://fa.player.fm/legal

Moving NLP Forward With Transformer Models and Attention

50:47
 

Manage episode 337660588 series 2637014

What’s the big breakthrough for Natural Language Processing (NLP) that has dramatically advanced machine learning into deep learning? What makes these transformer models unique, and what defines “attention?” This week on the show, Jodie Burchell, developer advocate for data science at JetBrains, continues our talk about how machine learning (ML) models understand and generate text.

This episode is a continuation of the conversation in episode #119. Jodie builds on the concepts of bag-of-words, word2vec, and simple embedding models. We talk about the breakthrough mechanism called “attention,” which allows for parallelization in building models.
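The attention mechanism discussed here can be sketched numerically: at its core, scaled dot-product self-attention turns each token into a softmax-weighted average of all tokens, computed with matrix multiplies. This toy NumPy version is only an illustration (the embeddings and dimensions are made up, not from the episode):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core operation of a transformer layer."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # how strongly each query "attends" to each key
    scores -= scores.max(axis=-1, keepdims=True)     # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights

# Toy example: three "token" embeddings of dimension 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, weights = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(weights.sum(axis=-1))  # each row sums to 1: a weighted average over all tokens
```

Because every token's output comes from one batched matrix product rather than a step-by-step recurrence, the whole sequence can be processed in parallel, which is the parallelization advantage mentioned above.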

We also discuss the two major transformer models, BERT and GPT-3. Jodie shares multiple resources to help you continue exploring modeling and NLP with Python.

Course Spotlight: Building a Neural Network & Making Predictions With Python AI

In this step-by-step course, you’ll build a neural network from scratch as an introduction to the world of artificial intelligence (AI) in Python. You’ll learn how to train your neural network and make predictions based on a given dataset.
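As a small taste of what training a network from scratch involves (this sketch is not course material, and the toy dataset and learning rate are invented for illustration), here is a single sigmoid neuron fitted with plain gradient descent:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D dataset: the label is 1 when x > 0.5.
data = [(0.1, 0.0), (0.3, 0.0), (0.7, 1.0), (0.9, 1.0)]

random.seed(0)
w, b, lr = random.random(), 0.0, 0.5

for _ in range(2000):                 # plain stochastic gradient descent
    for x, y in data:
        p = sigmoid(w * x + b)        # forward pass: prediction in (0, 1)
        err = p - y                   # gradient of cross-entropy loss w.r.t. the pre-activation
        w -= lr * err * x             # backward pass: nudge weight and bias
        b -= lr * err

print(sigmoid(w * 0.2 + b), sigmoid(w * 0.8 + b))  # low vs. high prediction
```

A full network stacks many such units in layers and backpropagates the same error signal through them; the linked course walks through that step by step.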

Topics:

  • 00:00:00 – Introduction
  • 00:02:20 – Where we left off with word2vec…
  • 00:03:35 – Example of losing context
  • 00:06:50 – Working at scale and adding attention
  • 00:12:34 – Multiple levels of training for the model
  • 00:14:10 – Attention is the basis for transformer models
  • 00:15:07 – BERT (Bidirectional Encoder Representations from Transformers)
  • 00:16:29 – GPT (Generative Pre-trained Transformer)
  • 00:19:08 – Video Course Spotlight
  • 00:20:08 – How far have we moved forward?
  • 00:20:41 – Access to GPT-2 via Hugging Face
  • 00:23:56 – How to access and use these models?
  • 00:30:42 – Cost of training GPT-3
  • 00:35:01 – Resources to practice and learn with BERT
  • 00:38:19 – GPT-3 and GitHub Copilot
  • 00:44:35 – DALL-E is a transformer
  • 00:46:13 – Help yourself to the show notes!
  • 00:49:19 – How can people follow your work?
  • 00:50:03 – Thanks and goodbye

Show Links:

Level up your Python skills with our expert-led courses:

Support the podcast & join our community of Pythonistas


272 episodes
