
AI in Practice Part 2: Product Strategy

How do you think about integrating AI into your product roadmap?

By Michelle Ling

Investor, Tidemark
This is Part 2 of the AI in Practice Series.
Part 1: Proprietary Data & Compliance
Part 2: Product Strategy
Part 3: Organizational Change

Our goal is to move from hype-driven blog posts to pragmatic implementation guides. To do so, we put together two roundtable sessions with 25+ executives (CEOs, CTOs, CPOs, and AI leads) from 20+ companies across a variety of industries. The group represents a mix of private companies, with an aggregate valuation of $30B+ and more than $5B in funding raised, and several public companies.

We’ve compiled, anonymized, and written down the best insights from those working sessions. The takeaways are diverse and enlightening. We are entering a new age of software, and many previous best practices will have to be reconsidered.

In this essay, we will cover how to evaluate and prioritize AI features and products, make implementation decisions (such as open source vs. vendor models), and resource headcount on AI projects.

Product Roadmap

“Assume your previous processes are disrupted. Assume that’s gone. How do we now pivot and use that as an advantage to bolster the product roadmap?” - CTO at $2B+ Public Tech Company 

AI will render certain workflows (and roles) obsolete while dramatically amplifying others. Building AI into the roadmap is a risky process, for a handful of reasons:

  • There are high costs associated with LLM products, such as compute, talent, data inputs, training, inference, and monitoring
  • There is more uncertainty given the new and ever-changing market landscape and unknown limits of the technology
  • There are more ethical and legal ramifications than prior software products
  • Internal education and alignment need to be developed
  • Customer education is necessary to ensure adoption and make the ROI math work

While AI makes spinning up new products easier, it does not negate the importance of product market fit—nor does it make the process of finding that fit easier.

“My view is that the risks and uncertainties are much more in product market fit than in technology. We’ll figure the technology out. Partially because AI is a lot more expensive than traditional products—and because it’s a lot more uncertain than traditional products, and it’s got a whole lot of other drawbacks as well—it’s all the more important to make sure you’ve got product market fit before you overinvest.” - VP of Product at $100B+ Public Tech Company

There are a few key questions to answer before committing to an AI product strategy. Perhaps unsurprisingly, these are the questions we have always had to answer while building tech companies:

  • Does this solve a problem? 
  • Can you make money? 
  • Can you build a model that will solve this problem? (Do you need AI to solve this problem?)

It’s easy to get caught up in the hype, but it’s vital that the product roadmap stays focused on customer needs, differentiation, defensibility, and whether the benefits justify the costs.

 

Technical Architecture

Once you’ve made the difficult decisions about which AI products to include on your roadmap, there are a number of technical decisions that need to be made around implementation. We cover the evaluation processes behind open source vs. vendor models, fine-tuning models vs. prompt engineering, and when to build fast vs. build right.

Open-Source vs. Vendor Model

The general sentiment from the executives we spoke with was that an open-source model that works will eventually arrive, but existing models are not yet at the performance level required for business use cases (although a number of open-source model releases since we conducted these sessions have marked a watershed moment).

For many, it’s easier right now to have a vendor that manages updates for them, as opposed to having to match the rapid pace of innovation and upgrade every two weeks. However, many of these companies still closely monitor open-source model developments and will continue to actively test open-source models against vendor models.

When does open source make sense? One fact remains consistent regardless of which model you use: LLMs will make mistakes. The companies that have opted to use open source and build their own LLMs are those for whom mistakes carry a significant downside and industry-specific nuances matter. For certain business-critical processes, that risk can be untenable, and every percentage point closer to 100% accuracy counts.

For companies that build internally, controlling the outputs often means controlling the inputs (i.e., training data, weights, and underlying architecture). Some industries, like life sciences, have found that general models do not meet the bar for their particular use cases due to unique industry terminology. For them, vendor models perform significantly worse than internal, specialized models that are trained and tuned with full visibility into the model’s inputs and architecture.

“LLMs are extraordinarily good and bad” - VP of Product at $100B+ Public Tech Company

Overall, in the debate between open-source and closed models, most companies currently prefer closed vendor models. These companies prioritize finding vertical- or domain-specific expertise and marrying that expertise with the right tools; that expertise matters more to them than model-building skill sets.

Partnering with a foundation model provider is often the right choice, and the decision is usually straightforward. The choice to do otherwise is typically driven by a clear, compelling reason (data privacy concerns, specialized terminology, missing datasets). Most companies have accepted that, for now, they are not LLM companies. But 20 years ago, many companies outsourced their online operations too; we expect some of this work will eventually be brought in-house.

Fine-tuning vs. Prompting

Another common area of debate is how long the need for fine-tuning will persist as foundation models get better and better. One school of thought holds that while the models are increasingly advanced, some datasets are still missing: individualized interactions, gathered from years of customer activity, are harder to replicate with general models, and companies are more protective of those data points.

Others hold the view that there isn’t a big difference between fine-tuning and prompt engineering, and the difference will diminish even further in the long term. As the context window for foundation models grows, fine-tuning will become unnecessary for 90% of use cases. 

We believe the answer hinges on whether you have a strong public dataset (customer reviews, medical publications, Q&A forum discussions) or a substantial set of created IP (like historical user behavior, preferences, and language), and whether you fall into that 10% of use cases where general models may never be good enough.
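To make the prompting side of that trade-off concrete, here is a minimal sketch in Python of in-context learning, the scenario in which a growing context window stands in for fine-tuning. The `call_llm` wrapper and the clinical-shorthand examples are illustrative assumptions, not a reference to any particular vendor API:

    # A minimal sketch: domain knowledge rides along in the prompt instead
    # of being baked into the weights. `call_llm` is a hypothetical
    # stand-in for whichever vendor or open-source model you use.

    DOMAIN_EXAMPLES = [
        ("Pt presents w/ SOB and DOE.",
         "Patient presents with shortness of breath and dyspnea on exertion."),
        ("Hx of HTN, DM2.",
         "History of hypertension and type 2 diabetes."),
    ]

    def build_prompt(query: str) -> str:
        """Pack domain-specific examples directly into the prompt."""
        shots = "\n\n".join(
            f"Input: {src}\nOutput: {tgt}" for src, tgt in DOMAIN_EXAMPLES
        )
        return ("Expand clinical shorthand into plain English.\n\n"
                f"{shots}\n\nInput: {query}\nOutput:")

    # answer = call_llm(build_prompt("Pt c/o CP radiating to L arm."))

The larger the context window, the more of this domain material can simply ride along in the prompt before fine-tuning becomes the only option.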

What is important? Both groups agree that feedback loops should be created and implemented where the most value is to be found. As we will discuss in the UI/UX section of our next essay, giving your users an avenue to provide you with in-product feedback is key to improving your model over time. The combination of user feedback with a tuning engine that optimizes your input prompts over time will coach the model to produce more accurate and contextually relevant outputs. Systematic, continuous improvement is key to building sustainable advantages. 
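As one illustration of what such a tuning engine might look like, here is a minimal sketch in Python that routes traffic toward the prompt variant users rate highest. The prompt templates, the thumbs-up/down signal, and the epsilon-greedy selection rule are all illustrative assumptions, not a prescribed design:

    import random
    from collections import defaultdict

    # Hypothetical prompt variants competing for the same task.
    PROMPT_VARIANTS = {
        "terse": "Answer in one sentence: {question}",
        "stepwise": "Think step by step, then answer: {question}",
    }

    stats = defaultdict(lambda: {"up": 0, "down": 0})

    def record_feedback(variant: str, thumbs_up: bool) -> None:
        """Called from the in-product feedback widget."""
        stats[variant]["up" if thumbs_up else "down"] += 1

    def pick_variant(epsilon: float = 0.1) -> str:
        """Mostly exploit the best-rated variant; occasionally explore."""
        if random.random() < epsilon:
            return random.choice(list(PROMPT_VARIANTS))
        def approval(name: str) -> float:
            s = stats[name]
            total = s["up"] + s["down"]
            return s["up"] / total if total else 0.5  # neutral prior for unseen variants
        return max(PROMPT_VARIANTS, key=approval)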

Build Fast vs. Build Right

There has always been a tension between building fast and building right.

Areas to build fast: Don’t worry about finding the “right” vendor at this stage—find something that works and go for it. The learnings come from experimentation rather than from finding the perfect solution (which likely doesn’t exist). The best companies are engaging in many different alpha/beta tests at once—as well as leveraging their own internal processes—to figure out what works best for them. The same rule applies to costs: once you’ve found product market fit through this experimentation, then you can start to worry about expenses.

“We were just experimenting quickly and learning. Early on, there’s a lot of talk and speculation: What’s going to do this? We were talking a lot about the problem, and admiring the problem, instead of getting in there and trying something. Start experimenting. Start iterating, and then you’ll learn quite quickly. You might make some mistakes, but our personal philosophy is, it’s better to do that and learn. Then, if you have to correct some things, it’s fine. It’s moving so fast that it’s fairly easy to do.” - Co-Founder at $1B+ Private Tech Company

Areas to build right: One strategy we’ve seen to manage the balance between speed and safety is investment in architecture. By starting with a system that enables you to easily swap things out in the future as needed, you can be prepared for the potential explosion of choice in LLMs. Teams are often running sensitivity analyses around different models to determine which is best for any given use case at any given time. Multi-step, modular workflows also allow you to make adjustments in the middle of a workflow, which facilitates easy experimentation as you look to identify where in the workflow AI will make the biggest difference. 
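A minimal sketch of that swap-friendly architecture, assuming a hypothetical `TextModel` interface and placeholder providers (the names and `complete` signature are ours, not any particular SDK’s):

    from typing import Protocol

    class TextModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class VendorModel:
        def complete(self, prompt: str) -> str:
            # In practice: call the vendor's completion API here.
            return f"[vendor completion for: {prompt[:40]}]"

    class OpenSourceModel:
        def complete(self, prompt: str) -> str:
            # In practice: call a self-hosted open-source model here.
            return f"[self-hosted completion for: {prompt[:40]}]"

    def summarize_ticket(ticket: str, model: TextModel) -> str:
        """Each workflow step takes the model as a parameter, so the same
        step can be run against several models in a sensitivity analysis."""
        return model.complete(f"Summarize this support ticket:\n{ticket}")

    print(summarize_ticket("Login fails after password reset.", VendorModel()))

Because each step depends only on the interface, swapping providers (or testing two of them against each other mid-workflow) is a one-line change.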

As many companies race to market, it’s also important to distinguish between building something fast to play with and building something that is production-ready. This is especially important if you operate in an environment where compliance is critical.

These are a handful of the business decisions that need to be made around just the technical implementation. Another important area that will inform the success of your projects is headcount resourcing.

Headcount Resourcing 

Resourcing AI efforts alongside the existing product roadmap requires a careful balance between minimizing duplicated effort and ensuring maximum creative exploration. Many companies employ a two-pronged strategy: spinning up dedicated AI teams while also leveraging their existing workforce. We discuss how to best utilize each core function in the sections below.

Engineers

Prioritize getting the engineers already at your company trained on AI and aware of the additional tools at their disposal (embeddings, vector databases, clustering, GitHub Copilot, etc.). Engineers seem to have adopted LLMs faster than data scientists; this may be because engineers typically have a better understanding of the product and end customers, while data scientists tend to be further removed. Data scientists also tend to have more of a wait-and-see mentality, whereas engineers are more likely to jump right in.
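For instance, the embeddings-plus-vector-search pattern is small enough to learn in an afternoon. Here is a minimal sketch where `embed` is a hypothetical stand-in for a real embedding model (the toy implementation below just produces deterministic random vectors):

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder: a real system would call an embedding model here.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(8)
        return v / np.linalg.norm(v)

    docs = ["reset my password", "update billing info", "cancel my plan"]
    index = np.stack([embed(d) for d in docs])  # toy in-memory "vector database"

    def search(query: str, k: int = 2) -> list[str]:
        """Rank documents by cosine similarity to the query embedding."""
        sims = index @ embed(query)  # unit vectors, so dot product = cosine
        return [docs[i] for i in np.argsort(sims)[::-1][:k]]

    print(search("how do I change my password?"))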

Data Scientists

While it may feel like this democratization of AI through third-party APIs removes the need for NLP experts and data scientists, that sentiment will change quickly once you start to run into walls with those APIs. There is a lot a company can do with minimal data science resources and some creative hacking, but taking something to the next level (i.e., the customer-ready level) requires specialized talent. Their expertise is vital for gathering data, monitoring models, and implementing feedback to improve the model. Automated validation, or making sure the model performs as expected (key to most business products), is a different beast from automated content generation, and it requires true expertise.
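A minimal sketch of what automated validation can look like in practice, with illustrative test cases, a stubbed `classify` call standing in for the real model, and an arbitrary accuracy bar:

    TEST_CASES = [
        ("I was double-charged this month", "billing"),
        ("The app crashes on launch", "bug"),
        ("How do I export my data?", "how_to"),
    ]

    def classify(text: str) -> str:
        # Stand-in for a real model call that returns a category label.
        if "charged" in text:
            return "billing"
        if "crashes" in text:
            return "bug"
        return "how_to"

    def evaluate(threshold: float = 0.95) -> None:
        """Gate releases on expected behavior, not plausible-looking output."""
        correct = sum(classify(text) == expected for text, expected in TEST_CASES)
        accuracy = correct / len(TEST_CASES)
        print(f"accuracy: {accuracy:.0%}")
        assert accuracy >= threshold, "model regressed below the accuracy bar"

    evaluate()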

Product

Product managers should be pushed to think about everything that is possible with AI. This is important because the feasibility of features and the limitations of existing models must be factored into the product roadmap when planning for AI.

A common failure point is when product teams are less involved in AI product decisions and leave things to the data science team. Product management needs to work hand in hand with data scientists and ML teams to determine what is possible and what to prioritize. More validation (customer need, feasibility, priority level) needs to happen upfront, given the higher cost of failure.  

“I would argue that PMs need to be more involved in AI products. I think there needs to be more validation done upfront because they’re more expensive and the cost of failure is much higher. [...] I think you need more PM involvement with AI products than for traditional products” - VP of Product at $100B+ Public Tech Company

AI represents a new variable in the product strategy equation from a roadmap and resourcing perspective. Companies will need to adapt and make decisions about how best to incorporate this technology amidst pressure from customers, investors, and employees. However, while businesses have found that AI changes the game around product strategy, it ultimately doesn’t alter the fundamental rules around product development and finding product market fit. 

If you’re building AI products or have any thoughts on strategies related to incorporating AI into product roadmaps, reach out to us at knowledge@tidemarkcap.com. To learn more about how we think about AI as an opportunity and a threat, visit our essay here. If you’d like to get updates as we continue to dive into AI implementation strategies, sign up below.

AI in Practice Series
Part 1: Proprietary Data & Compliance
Part 2: Product Strategy
Part 3: Organizational Change

 
