The arrival of generative AI hasn't made product management easier. If anything, it's made the discipline more demanding. The surface area of what's possible has expanded dramatically, but the job of the PM — figuring out what's worth building and why — hasn't changed.
What has changed is the pressure to ship AI features before understanding whether they solve real problems. I've lived this tension firsthand while building AvaSense — an AI-driven contract and supplier governance platform. The impulse to integrate every new LLM capability is real. But integrating it is not the same as designing for it.
The PMs I've seen succeed in this environment share one trait: they're deeply curious about the *interaction model*, not just the capability. An LLM can extract structured data from a contract — but where in a procurement analyst's workflow does that extraction matter? What's the cost of a wrong extraction? Who reviews it, and what do they do when the model is wrong?
When we designed the GenAI agents for AvaSense, the answers to those questions shaped every product decision. We designed around a 'trust but verify' model — surfacing AI outputs with confidence signals and frictionless override mechanisms. The goal wasn't to replace analyst judgment. It was to redirect analysts' attention to the decisions only humans should make.
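To make that concrete, here's a minimal sketch of what a 'trust but verify' data model could look like. This is illustrative, not AvaSense's actual implementation — the type names, review states, and the confidence threshold are all hypothetical:

```ts
// Hypothetical sketch of a "trust but verify" extraction record.
// Names, states, and threshold are illustrative, not a real API.

type ReviewState = "auto_accepted" | "needs_review" | "overridden";

interface ExtractedField {
  name: string;          // e.g. "renewal_date", "liability_cap"
  value: string;         // the model's extracted value
  confidence: number;    // model confidence in [0, 1]
  state: ReviewState;
  analystValue?: string; // set when an analyst overrides the model
}

// Route low-confidence extractions to a human instead of silently accepting them.
const REVIEW_THRESHOLD = 0.85; // illustrative; tune per field and per risk level

function triage(field: ExtractedField): ExtractedField {
  return {
    ...field,
    state: field.confidence >= REVIEW_THRESHOLD ? "auto_accepted" : "needs_review",
  };
}

// A frictionless override: one call records the analyst's correction
// while preserving the model's original value.
function override(field: ExtractedField, analystValue: string): ExtractedField {
  return { ...field, analystValue, state: "overridden" };
}
```

The design choice worth noting in a model like this: an override keeps the model's value alongside the analyst's correction, so every human intervention doubles as labeled data for evaluating — and eventually improving — the extraction itself.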
Those questions don't change when the underlying technology shifts. They're the product fundamentals.
The best AI products aren't the ones that expose the most model capability. They're the ones that narrow the capability into a specific, trustworthy interaction that fits naturally into how someone already works. That's the job.
