There is a simple diagnostic I use when a founder tells me about their AI product.
I call it the Weekend Test.
If an AI builder could recreate most of what you have built in a weekend using public models and no proprietary inputs, then you do not have a defensible product yet. You have a demo. And that’s okay! Almost every AI product starts there.
You just don’t want to stop there.
AI has lowered the friction to build impressive things. You can generate persuasive copy, analyze contracts, summarize research, create outbound campaigns, and ship something that feels intelligent in a matter of days. But anyone can do that now. Call a model, write a decent prompt, and you're most of the way there.
But when you start thinking about the defensibility of your product, that's exactly the lens you have to use. If anyone can do it, what makes your product special?
The real question is not whether your AI works.
The question is whether your product still has an edge when everyone has access to the same intelligence.
That is where depth comes in.
Because here’s the truth: if your advantage is “the model is impressive,” that advantage has an expiration date.
Real defensibility comes from what you build around the model.
When I pressure test an AI product, I’m not asking, “Does it work?”
I’m asking, “Does it compound?”
Over time, I’ve started thinking about defensibility through a simple lens. I call it D.E.P.T.H.
The DEPTH Framework
D: Data You Own
The first layer is proprietary context. AI on its own is smart; AI grounded in your own context? That's the magic.
Are you building on private datasets? Are you structuring domain knowledge that others can't easily access? Are you capturing information through everyday usage that makes the product more useful over time?
If your system runs entirely on public information and a well written prompt, replication is pretty cheap 😬
The moment you start shaping unique data into something structured and reusable, replication becomes significantly harder.
Defensibility begins when context is not interchangeable.
E: Embedded in Workflows
Many AI tools live in isolation. They generate something useful (e.g., a draft email, a risk summary, a customer insight) and then the user manually moves that output somewhere else. Cool on its own. But that is convenience, not integration.
A defensible product doesn’t just produce output. It sits inside the actual workflow.
One example: think about an AI sales tool. Imagine it doesn't just generate copy. It also pulls deal context from the CRM, adapts messaging based on past reply rates, logs performance automatically, and adjusts future outreach based on what actually converts.
The core difference: it triggers downstream actions and influences what happens next.
If removing your product would not materially disrupt a workflow, then it is still optional. Optional tools are easy to replace.
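To make "embedded" concrete, here's a rough sketch of the difference in code. Everything here is hypothetical — the CRM and the generator are stand-in stubs, not a real integration — but the shape is the point: the tool reads upstream context and triggers downstream actions instead of just returning text.

```python
# Hypothetical sketch: an outreach step that lives INSIDE the workflow.
# FakeCRM is a stand-in for whatever system of record you integrate with.

class FakeCRM:
    def __init__(self):
        self.deals = {1: {"company": "Acme", "stage": "demo"}}
        self.activities = []   # what got logged automatically
        self.followups = []    # what got scheduled automatically

    def get_deal(self, deal_id):
        return self.deals[deal_id]

    def log_activity(self, deal_id, note):
        self.activities.append((deal_id, note))

    def schedule_followup(self, deal_id, days):
        self.followups.append((deal_id, days))

def run_outreach(crm, generate, deal_id):
    deal = crm.get_deal(deal_id)                      # pull context upstream
    draft = generate(deal["company"], deal["stage"])  # generate WITH context
    crm.log_activity(deal_id, draft)                  # write back automatically
    crm.schedule_followup(deal_id, days=3)            # trigger the next step
    return draft

crm = FakeCRM()
draft = run_outreach(
    crm, lambda company, stage: f"Hi {company}, following up after your {stage}.", 1
)
```

Remove `run_outreach` and the workflow breaks: nothing gets logged, nothing gets scheduled. That's the disruption test.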
P: Performance Memory
This is the layer most founders underestimate.
A demo produces outputs. A product remembers.
Does your system learn from past outcomes? Does it store historical decisions? Does it adjust based on what worked and what failed? Does it build a richer understanding of the user or the domain over time?
When performance history accumulates, replication becomes harder because that history cannot be recreated instantly. It has to be earned through usage.
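A minimal sketch of what that accumulation might look like, with made-up names (`OutreachMemory`, `best_angle`) rather than any real API: every outcome is recorded, and future decisions read from the history.

```python
# Illustrative "performance memory" layer: outcomes accumulate per segment,
# and the system picks future messaging based on what actually worked.
from collections import defaultdict

class OutreachMemory:
    def __init__(self):
        # (segment, messaging_angle) -> [sent, replied]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, segment, angle, replied):
        stats = self.history[(segment, angle)]
        stats[0] += 1
        stats[1] += int(replied)

    def reply_rate(self, segment, angle):
        sent, replied = self.history[(segment, angle)]
        return replied / sent if sent else 0.0

    def best_angle(self, segment, angles):
        # Pick the angle with the best observed reply rate for this segment.
        return max(angles, key=lambda a: self.reply_rate(segment, a))

memory = OutreachMemory()
memory.record("fintech", "roi", replied=True)
memory.record("fintech", "roi", replied=True)
memory.record("fintech", "social_proof", replied=False)
best = memory.best_angle("fintech", ["roi", "social_proof"])  # "roi" wins here
```

The code is trivial. The `history` dict after a year of real usage is not — that's the part a weekend clone can't copy.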
T: Thresholds and Guardrails
Reliable systems require thresholds and guardrails.
Not just smart outputs — but clear boundaries.
What happens when the model is uncertain:
- Does it flag low confidence instead of guessing?
- Does it escalate high-risk cases to a human?
- Does it learn from mistakes over time?
A real product is designed to be trusted, and trust comes from structure: defined thresholds, human oversight, and feedback loops that refine performance.
If your system can't define when it shouldn't act, keep iterating!
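The guardrail questions above can be sketched in a few lines. The threshold values and segment names here are invented for illustration — the point is that "don't act" is an explicit, testable branch, not an afterthought.

```python
# Illustrative guardrail: route low-confidence or high-risk outputs to a
# human instead of acting automatically. Thresholds are made-up values.
CONFIDENCE_FLOOR = 0.75
HIGH_RISK_SEGMENTS = {"enterprise", "regulated"}

def route(draft, confidence, segment):
    if confidence < CONFIDENCE_FLOOR:
        return ("flag", "low confidence -- needs review")
    if segment in HIGH_RISK_SEGMENTS:
        return ("escalate", "high-risk account -- human approval required")
    return ("send", draft)

route("Hi there...", 0.62, "smb")         # flagged for review
route("Hi there...", 0.91, "enterprise")  # escalated to a human
route("Hi there...", 0.91, "smb")         # sent automatically
```

Every flagged and escalated case is also a labeled example you can learn from — which is how the guardrail layer feeds the memory layer.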
H: Hard to Extract Value
Finally, ask yourself what a customer would lose if they left.
Would they lose historical performance data? Customized workflows? Structured logs and accumulated insights tied specifically to your system? Or would they simply lose access to a feature that could be recreated elsewhere?
If the value is easy to walk away from, it was never deeply embedded.
Let’s see it in action!
Imagine two AI sales tools.
The first generates outbound emails from a company URL and a short description. The writing is strong, the experience feels polished, and it delivers immediate value.
But underneath, it is essentially a prompt layered on top of a model 😬
The second tool operates differently. It stores historical outreach performance by segment. It pulls context from the CRM about deal stage and account history. It tracks reply rates and adjusts messaging based on outcomes. It flags drafts when confidence is low. It remembers which positioning resonated with which audience over time.
Both tools can generate email copy.
Only one accumulates intelligence.
That accumulated context is what makes replication difficult. Not because the model is different, but because the system around the model has depth.
At the end of the day, the strongest AI products aren’t defined by which model they use.
They’re defined by what they build around it.
They capture context.
They remember decisions.
They improve based on real outcomes.
They actually sit inside how work gets done instead of floating on the edges.
If the model gets better, your product gets better with it.
If everyone suddenly has access to the same powerful models, you’re still okay because what makes your product valuable isn’t just the model. It’s the system you built around it.
Once you start thinking this way, your roadmap shifts naturally. You stop obsessing over prompt tweaks and start asking better questions:
What is this accumulating over time?
What would a customer actually lose if they left?
Where does this become part of how work gets done?
That’s usually where defensibility starts.
Building with AI is easier than it’s ever been. Building something that can’t be rebuilt in a weekend just takes a little more intention and elbow grease.
Happy building! 🚀
PS: If you are building an AI-enabled product and want to pressure test whether it truly has depth, grab some time with me. I’d love to help.