Turning Promising AI into Meaningful Customer Impact

Terry Miller, Vice President of AI & Machine Learning Engineering, Omada Health

I started out as a pre-med undergrad but ended up beginning my career in the industrial space, where I got an early look at how rapidly scaling compute was going to reshape data science. That realization pushed me to go back to school and earn a graduate degree in data science in 2016. From there, I began building and leading data science teams, which really sparked my passion for using technology to solve human problems.

Health and fitness have always been personal priorities for me, so when the opportunity came to join Omada Health as VP of AI & Machine Learning Engineering, I was excited not just to build another great AI team, but to help improve health outcomes on a large scale. It’s clear to me that our healthcare system often leaves people with chronic conditions to manage their health on their own. Contributing to a company that’s working to bend the curve of chronic disease feels both personal and deeply meaningful.

Across both industrial and healthcare settings, the hardest problem is still getting well-engineered AI systems to solve meaningful customer problems in a way that moves the needle for the business. There’s usually a big gap between three groups that all need to be in sync:

1) the business subject-matter experts who understand the domain,

2) the leaders who can boil things down into a single clear sentence about the problem that must be solved to achieve a specific outcome and

3) the technical teams who know how to turn AI into reliable, scalable systems. In practice, most companies still have weaknesses in all three areas, and that’s why, in well over 90 percent of organizations, AI never quite makes the leap from promising prototypes to sustained, large-scale business impact.

When people ask about “big” AI wins versus incremental ones, the first thing I usually say is that incremental optimization is actually the path forward in the vast majority of cases. Well-engineered data science, executed with discipline across an enterprise, tends to look like a series of small, measurable improvements that, at scale, quietly transform the business, whereas the search for a single, silver-bullet project almost always disappoints.

When evaluating initiatives, I focus on whether we’re building a truly well-engineered system, which for me means

1) having robust evaluation frameworks,

2) being disciplined about how models perform on new and unseen data and

3) putting real functional and operational monitoring around them so we can answer, “Does this thing in production actually behave the way we said it would in training?” That level of discipline is what allows you to move beyond hype, really see what’s happening in the wild, and confirm that an AI system is delivering meaningful business impact.
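The monitoring question above — "does this thing in production actually behave the way we said it would in training?" — can be sketched as a simple health check. This is a minimal illustration, not Omada's implementation; the function name, metric choice, and tolerance are hypothetical.

```python
# Minimal sketch of a functional-monitoring check: log a production metric
# (e.g. AUC on labeled feedback) and compare it to the value observed during
# training. The 0.05 tolerance is an arbitrary, hypothetical threshold.

def check_production_health(training_metric: float,
                            production_metric: float,
                            tolerance: float = 0.05) -> dict:
    """Does the system in production behave as it did in training?"""
    drift = training_metric - production_metric
    return {
        "drift": round(drift, 4),
        "healthy": drift <= tolerance,  # alert only on degradation
    }

# Example: model scored 0.91 AUC in training but 0.84 in production,
# so the check flags it as unhealthy (drift 0.07 exceeds tolerance 0.05).
report = check_production_health(0.91, 0.84)
print(report)
```

In practice a check like this would run continuously against live traffic, so degradation is caught in the wild rather than assumed away after launch.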

For me, the line is pretty clear: when a system needs to decide, in real time, what action to take based on context, that’s when LLMs and agent-style systems really shine. They’re incredibly powerful at looking across a range of possible actions, interpreting the current situation, and then choosing and executing the right next step, which is something traditional pipelines struggle to do flexibly.

Traditional machine learning is still fantastic for well-defined prediction or scoring problems, but the ability of modern transformer-based models to orchestrate decisions and workflows is what makes this wave of AI feel different, and why the current hype cycle, along with the capital expenditure on infrastructure behind it, has substance.
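The "look across possible actions, interpret the situation, choose and execute the next step" pattern described above can be sketched in a few lines. Here the action registry and the selection policy are hypothetical stand-ins; in a real agent system an LLM would perform the selection step.

```python
# Minimal sketch of agent-style action selection: a registry of possible
# actions plus a policy that picks one from context. All names (escalate,
# remind, risk_score, etc.) are illustrative, not a real product's schema.

from typing import Callable

ACTIONS: dict[str, Callable[[dict], str]] = {
    "escalate": lambda ctx: f"escalating case {ctx['case_id']} to a human",
    "remind":   lambda ctx: f"sending reminder to member {ctx['member_id']}",
    "wait":     lambda ctx: "no action needed",
}

def choose_action(context: dict) -> str:
    """Stand-in policy; an LLM-backed agent would make this choice."""
    if context.get("risk_score", 0) > 0.8:
        return "escalate"
    if context.get("days_inactive", 0) > 7:
        return "remind"
    return "wait"

def run_agent(context: dict) -> str:
    name = choose_action(context)   # interpret the current situation
    return ACTIONS[name](context)   # execute the chosen next step

print(run_agent({"case_id": 42, "risk_score": 0.9}))
```

The flexibility comes from the selection step: a traditional pipeline hard-codes the branching, whereas an agent can weigh context against an open-ended set of actions at run time.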

The honest answer is that it starts with the team. I’ve been fortunate to hire a group of absolute rock stars, which means we can do both disciplined execution and forward-looking innovation at the same time. We’ve structured the organization so that most of the team is focused on reliable, day-to-day delivery, while a smaller group is deliberately set up to live 18–24 months in the future, exploring what’s next. I even have one engineer whose primary job is to prototype the most promising new tools and technologies, so we effectively have a shelf of vetted, ready-to-use capabilities that the product team can pull down and turn into features without compromising the stability of what’s already in production.

For leaders who care most about customer impact, not just efficiency, there are really two key things to focus on. First, think about AI systems as if you were hiring a person: if you wanted a human to do this “thing,” what would their title be, and what would you put in the job description? In most cases, the real user requirements for the AI system are hidden in that job description.

Second, you absolutely need someone who is fluent in modern data science practices to be ultimately responsible for approving anything that goes into production; there’s no way around that. The relationship between how algorithms perform on unseen data, how clearly the problem is defined, and how well the system is engineered and monitored in the real world is table stakes if you want AI that genuinely engages and helps customers rather than just checking a box.
