Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems
On this episode, we’re joined by Andrew Feldman, Founder and CEO of Cerebras Systems. Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.
We discuss:
- The advantages of using large chips for AI work.
- Cerebras Systems’ process for building chips optimized for AI.
- Why traditional GPUs aren’t the optimal machines for AI work.
- Why efficiently distributing computing resources is a significant challenge for AI work.
- How much faster Cerebras Systems’ machines are than other processors on the market.
- Reasons why some ML-specific chip companies fail and what Cerebras does differently.
- Unique challenges for chip makers and hardware companies.
- Cooling and heat-transfer techniques for Cerebras machines.
- How Cerebras approaches building chips that will fit the needs of customers for years to come.
- Why the strategic vision for what data to collect for ML needs more discussion.
Resources:
Andrew Feldman - https://www.linkedin.com/in/andrewdfeldman/
Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/
Cerebras Systems | Website - https://www.cerebras.net/
Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#OCR #DeepLearning #AI #Modeling #ML
126 episodes