Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management)
080 – How to Measure the Impact of Data Products…and Anything Else with Forecasting and Measurement Expert Doug Hubbard
Finding it hard to gauge the value of your data products to the business or to your end users? Do you struggle to understand the impact your data science, analytics, or product team is having on the people it serves?
Many times, the challenge comes down to figuring out WHAT to measure, and HOW. Clients, users, and customers often don’t even know what the right success or progress metrics are, let alone how to quantify them. Learning how to measure what might seem impossible is a highly valuable skill for leaders who want to track their progress with data—but it’s not all black and white. It’s not always about “more data,” and measurement is not about finding “the finite, right answer.” Analytical minds, get ready to embrace subjectivity and uncertainty in this episode!
In this insightful chat, Doug and I explore examples from his book, How to Measure Anything, and we discuss its applicability to the world of data and data products. From defining trust to identifying cognitive biases in qualitative research, Doug shares how he views the world in ways that we can actually measure. We also discuss the relationship between data and uncertainty, forecasting, and why people who are trying to measure something usually believe they have a lot less data than they really do.
Episode Description
- A discussion about measurement, defining “trust,” and why it is important to collect data in a systematic way. (01:35)
- Doug explores “concept, object and methods of measurement” - and why most people have more data than they realize when investigating questions. (09:29)
- Why asking the right questions is more important than “needing to be the expert” - and a look at cognitive biases. (16:46)
- The Dunning-Kruger effect and how it applies to the way people measure outcomes - and Brian discusses progress metrics vs. success metrics and the illusion of cognition. (25:13)
- How one of the challenges with machine learning also creates valuable skepticism - and the three criteria for experience to convert into learning. (35:35)
“Often things like trustworthiness or collaboration, or innovation—all the squishy stuff—sound hard to measure because they’re actually umbrella terms that bundle a bunch of different things together, and you have to unpack them to figure out what it is you’re talking about. The beginning of all scientific inquiry is to figure out what your terms mean; what question are you even asking?”- Doug Hubbard (@hdr_frm) (02:33)
“Another interesting phenomenon about measurement in general and uncertainty is that it’s in the cases where you have a lot of uncertainty that you don’t need many data points to greatly reduce it. [People] might assume that if [they] have a lot of uncertainty about something, [they are] going to need a lot of data to offset that uncertainty. Mathematically speaking, just the opposite is true. The more uncertainty you have, the bigger uncertainty reduction you get from the first observation. In other words, if you know almost nothing, almost anything will tell you something. That’s the way to think of it.”- Doug Hubbard (@hdr_frm) (07:05)
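Doug’s point that the first observations deliver the biggest uncertainty reduction is easy to sanity-check with a quick simulation. The sketch below (a minimal Python illustration, not code from the episode) demonstrates the “Rule of Five” from How to Measure Anything: with only five random samples, there is about a 93.75% chance that the true population median falls between the smallest and largest values in the sample.

```python
import random

# "Rule of Five" (How to Measure Anything): the true median lies outside
# the range of 5 random samples only if all 5 land on the same side of
# it, which happens with probability 2 * (1/2)**5 = 1/16 = 6.25%.
random.seed(42)

population = [random.gauss(100, 25) for _ in range(100_000)]
true_median = sorted(population)[len(population) // 2]

trials, hits = 10_000, 0
for _ in range(trials):
    sample = random.sample(population, 5)  # just five observations
    if min(sample) <= true_median <= max(sample):
        hits += 1

# Prints roughly 93-94%: five data points take you from total
# uncertainty about the median to a ~94% confidence interval.
print(f"Median inside the 5-sample range: {hits / trials:.1%} of trials")
```

The Gaussian population here is an arbitrary choice; the result holds for any continuous distribution, which is what makes the rule useful when you “know almost nothing.”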
“I think one of the big takeaways there that I want my audience to hear is that if we start thinking about when we’re building these solutions, particularly analytics and decision support applications, instead of thinking about it as we’re trying to give the perfect answer here, or the model needs to be as accurate as possible, changing the framing to be, ‘if we went from something like a wild-ass guess, to maybe my experience and my intuition, to some level of data, what we’re doing here is we’re chipping away at the uncertainty, right?’ We’re not trying to go from zero to 100. Zero to 20 may be a substantial improvement if we can just get rid of some of that uncertainty, because no solution will ever predict the future perfectly, so let’s just try to reduce some of that uncertainty.”- Brian T. O’Neill (@rhythmspice) (08:40)
“So, this is really important: [...] you have more data than you think, and you need less than you think. People just throw up their hands far too quickly when it comes to measurement problems. They just say, ‘Well, we don’t have enough data for that.’ Well, did you look? Tell me how much time you spent actually thinking about the problem, or did you just give up too soon? [...] Assume there is a way to measure it, and the constraint is that you just haven’t thought of it yet.”- Doug Hubbard (@hdr_frm) (15:37)
“I think people routinely believe they have a lot less data than they really do. They tend to believe that each situation is more unique than it really is [to the point] that you can’t extrapolate anything from prior observations. If that were really true, your experience means nothing.”- Doug Hubbard (@hdr_frm) (29:42)
“When you have a lot of uncertainty, that’s exactly when you don’t need a lot of data to reduce it significantly. That’s the general rule of thumb here. [...] If what we’re trying to improve upon is just the subjective judgment of the stakeholders, all the research today—and by the way, here’s another area where there’s tons of data—there’s literally hundreds of studies where naive statistical models are compared to human experts [...] and the consistent finding is that even naive statistical models outperform human experts in a surprising variety of fields.”- Doug Hubbard (@hdr_frm) (32:50)
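The studies Doug alludes to include the “improper linear models” literature (e.g., Robyn Dawes’ 1979 paper), where a model that simply adds up standardized cues with equal weights beats inconsistent expert judgment. The toy simulation below (an illustration with invented numbers, not data from the episode) shows why consistency alone gives the naive model an edge; note that statistics.correlation requires Python 3.10+.

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(0)

# Hypothetical setup: the outcome depends on three observable cues.
def make_case():
    cues = [random.gauss(0, 1) for _ in range(3)]
    outcome = 0.5 * cues[0] + 0.3 * cues[1] + 0.2 * cues[2] + random.gauss(0, 0.5)
    return cues, outcome

cases = [make_case() for _ in range(2_000)]

def expert_judgment(cues):
    # The "expert" knows which way each cue points but weights the cues
    # a little differently every time and adds judgment noise.
    weights = [w + random.gauss(0, 0.4) for w in (0.5, 0.3, 0.2)]
    return sum(w * c for w, c in zip(weights, cues)) + random.gauss(0, 0.8)

def naive_model(cues):
    # The "improper" model: equal weights, no fitting -- just add the cues.
    return sum(cues)

outcomes = [o for _, o in cases]
experts = [expert_judgment(c) for c, _ in cases]
models = [naive_model(c) for c, _ in cases]

# The consistent equal-weights model typically correlates far better
# with the outcome (~0.7) than the inconsistent expert does (~0.4).
print("expert r =", round(statistics.correlation(experts, outcomes), 3))
print("model  r =", round(statistics.correlation(models, outcomes), 3))
```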
- How to Measure Anything: https://www.amazon.com/gp/product/1118539273/
- Hubbard Decision Research: https://hubbardresearch.com