Scaling Technology Ventures · Spring 2026

After Intelligence

Delight Didn't Die. It Adapted.

Delight Has a Shelf Life

Product managers are taught to delight their users. That's the job. Find the unmet need, close the gap, and ship the thing that feels inevitable, like it should have always been there. The problem is that delight has a shelf life nobody mentions. What once surprised users quickly becomes an expectation, then something users resent the absence of. Delight stops being the differentiator and becomes the price of entry.

The Stacked S-Curve Model predicts this. Each wave of consumer technology follows the same arc: early differentiation, the moment of delight; rapid adoption, as the market consumes it; performance convergence, the start of the next shift; then commoditization, by which point the center of delight has already moved on. The original point of differentiation becomes infrastructure. What began as a product feature becomes a platform assumption.

We've seen this cycle three times in the last fifteen years. Skeuomorphism gave way to flat design, then to motion and microinteractions. Apple trained us to expect beautiful technology. The gap between whatever phone you had in 2007 and the first iPhone was visceral, but by 2015, beautifully designed products were the cost of doing business. You couldn't ship something ugly or hard to use and expect users to forgive you.

As design was largely solved, users moved the goalposts. Delight came from products that knew you: your taste, your history, the thing you click at midnight on Tuesdays. Algorithms weren't just convenient; they were almost uncomfortably accurate. Netflix knew what you wanted to watch before you did. TikTok learned your specific flavor of late night in under a week. Products that didn't learn you started to feel like they weren't paying attention. Personalization closed the gap, until the next wave made it look shallow.

In November 2022, ChatGPT reached a million users in five days. For many, it was the first technology that didn't just know their patterns but understood them. What followed was a three-year sprint between GPT, Claude, Gemini, Llama, Mistral, and DeepSeek: every six months, a new model, a new benchmark, a new claim to the frontier. Today, intelligence is still a differentiator, but that moat is fracturing. Claude reasons differently from Gemini or DeepSeek. The distinctions are real and deliberate, but models are becoming task-specific in ways most users lack the expertise to navigate. The Stanford 2026 AI Index puts the performance gap between the leading models at 2.7%, close enough to signal that general intelligence isn't the race anymore. It's a commodity.

And these cycles aren't slowing down; they are compressing. Design took a decade. Personalization, just five years. Intelligence took three. Each wave leaves less runway for the ventures riding it, and less time to recognize when the ground has already shifted beneath them.

For scaling ventures, this compression is a strategic risk hiding in the current AI boom. Intelligence is the entry fee, and everyone has paid it. The next differentiator won't be a smarter model, but the system surrounding it that hasn't been shipped yet.

Every time a layer commoditizes, it becomes the floor the next differentiator builds from. Adaptivity is what personalization always pointed toward, but it had to wait for intelligence to make it possible. It starts with the infrastructure nobody shipped: persistent memory, workflows embedded with curated intelligence, and a system that leverages the right models and surfaces the right interventions before being prompted by a user.

Harvey AI counts half the AmLaw 100 as clients and reached $190M ARR in under three years: proof that this architecture is being built, and that the strategy is working.

For ventures looking to grow beyond early adopters, these capabilities define the new whole product. Without them, intelligence is a feature with no moat. With them, ventures can escape the commoditization trap that has flattened every wave before it.

An Unfinished Promise

Personalization pointed to something we couldn't quite reach. It modeled the stable you: your preferences, your history, your patterns. Netflix knows your taste. Spotify knows your skip rate. TikTok knows your 2 am scrolling texture. But there's a ceiling that comes with assuming users are always the same version of themselves. Apps have modeled a version of you, but they can't account for the you that shows up on any given Tuesday.

Intelligence provides the tools to finally close that gap. Models that can reason, summarize, generate, and synthesize across domains. But the tools arrived faster than the infrastructure to navigate them, which created new problems: fragmentation across models, surfaces, and interfaces. Every day, a new app. Every quarter, a new model. The release cadence has no relationship to the pace of adoption.

As frontier intelligence commoditizes, the center of delight will again shift to systems that can resolve that fragmentation, continuously hold context, introduce intelligence into workflows, and surface both the right answer and the right intervention at the right moment. Delight will come from how well the product reads and adapts to the human on the other side.

Curating Intelligence

Intelligence has arrived faster than the infrastructure to manage it. What has followed is fragmentation in two places: at the model layer and at the application layer.

At the model layer, most users started with one provider. ChatGPT got there first, and for most, that was the entire category. It wasn't loyalty. The gap between their pre- and post-ChatGPT lives was large enough that comparison felt unnecessary.

As new models arrived with real differences, not just in benchmarks but in routine behavior, choice stopped being passive. Claude was more consistent with long, nuanced writing. OpenAI's reasoning models, like o3, worked through complex problems step by step, surfacing tradeoffs before committing to an answer. DeepSeek offered comparable reasoning at a fraction of the cost. The differences were real and, for the first time, legible to consumers.

At the application layer, the opposite is true. Users don't choose; the apps do. Platforms arrive with preconfigured, embedded intelligence. Document editors choose a model optimized for writing. Project management tools choose one for task decomposition. The shopping assistant is optimized to surface what sells, not what you need. The user made none of these decisions, and most don't know they were made at all. The intelligence is present, but it wasn't chosen for you. It was chosen for the application. When what you need doesn't exist in the tech stack, you are forced to live between apps. The gap is invisible until the output is wrong.

This is curating intelligence: a silent, unacknowledged labor of determining which tool, which model, and which surface is right for what you are trying to do, at a price point you can actually sustain. But only if you know to ask the question. At scale, the economics sharpen the problem further. Frontier models charge per token. Running the right model for the right task at the right cost requires routing logic that most platforms don't provide and most users don't know how to build. The teams that figure it out absorb that optimization work themselves. The teams that don't pay for intelligence they aren't fully using, or settle for intelligence that isn't quite right.
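The routing logic described above can be sketched in a few lines. Everything here is a hypothetical illustration, not any platform's actual implementation: the model names, per-token prices, and task categories are invented for the example.

```python
# Hypothetical sketch of per-task model routing with a cost ceiling.
# Model names, prices, and task categories are illustrative, not real quotes.

MODELS = {
    # name: (tasks it is suited for, usd per 1k tokens)
    "frontier-reasoner": ({"analysis", "drafting"}, 0.060),
    "midtier-generalist": ({"drafting", "summarization"}, 0.010),
    "budget-reasoner": ({"analysis", "summarization"}, 0.002),
}

def route(task: str, est_tokens: int, budget_usd: float) -> str:
    """Pick the most capable model suited to the task that fits the budget."""
    candidates = [
        (price, name)
        for name, (tasks, price) in MODELS.items()
        if task in tasks and (est_tokens / 1000) * price <= budget_usd
    ]
    if not candidates:
        raise ValueError(f"no model fits task={task!r} within ${budget_usd}")
    # Use the most expensive affordable model as a rough proxy for capability;
    # shrinking the budget degrades gracefully to cheaper models.
    return max(candidates)[1]
```

With a generous budget, a 20,000-token analysis routes to the frontier model; cut the budget and the same call falls back to the budget reasoner. That fallback is exactly the optimization work the paragraph above says most platforms leave to their users.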

Harvey AI is an example of successful curation at the enterprise layer. A securities litigator and a DeepMind researcher co-founded a platform that fine-tunes models for law and ships comprehensive, pre-configured workflows for practicing lawyers. The B2B model absorbs the curation tax within the application, so professionals don't have to carry it. This is Crossing the Chasm at play: the early majority of legal professionals have little interest in evaluating models. They want the right one, already configured for their work. Harvey is that product.

The right intelligence, already chosen — not a model the user selected, but the one the system already knew to use. Fine-tuned for the domain, routed to the task, and cost-optimized before the professional arrives. This is the fourth principle, and the one the other three depend on.

Reading the Room

Every session starts from zero. This is the most pressing failure in the current experience. Intelligence doesn't remember you. You explain the project again. You restate the decision you made three weeks ago. You provide context that the system had last week but has since lost entirely. The intelligence is there, but the history is not. And without history, the system cannot tell the difference between a new user and a returning one, a resolved question and a recurring one, a preference and a pattern.

This is not a minor inconvenience, but a structural tax added to every interaction. Users compensate by carrying the context themselves. They copy summaries between tools and re-explain prior decisions, bridging what no single system is designed to hold. The tax is invisible, but it compounds. And it falls disproportionately on the people doing the most complex work, where managing context is their entire job.

Understanding that accumulates into foresight is the first principle of the next center of delight. Not a log of your history, but a read on you that deepens over time until the system no longer needs to be told what matters.

Harvey's Vault holds up to 100,000 documents per matter, and firms have built over 25,000 custom agents on the platform, codifying precedents, playbooks, and institutional standards. A lawyer returning to the platform doesn't have to re-explain a deal; the system already understands it. The context is continuous, the knowledge is encoded, and the permissioning is enterprise-grade. What Paul Weiss knows stays at Paul Weiss. The system doesn't recall history; it operates from it. That's memory with structured governance, and it's what makes trust possible at scale.
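The core of that memory-with-governance idea can be shown in a minimal sketch. This is not Harvey's architecture, which is not public; it is a toy data model illustrating the two properties the paragraph above names: context accumulates per matter, and reads never cross firm boundaries.

```python
# Hypothetical sketch: matter-scoped memory with firm-level isolation.
# Illustrative only; no real platform's data model is implied.
from collections import defaultdict

class MatterMemory:
    """Context accumulates per (firm, matter); reads never cross firms."""

    def __init__(self):
        # Every entry is keyed by firm AND matter, so isolation is
        # structural rather than a filter applied after the fact.
        self._store = defaultdict(list)

    def remember(self, firm: str, matter: str, entry: str) -> None:
        self._store[(firm, matter)].append(entry)

    def recall(self, firm: str, matter: str) -> list[str]:
        # A firm can only read keys scoped to itself: what one firm
        # knows stays with that firm, even for the same matter name.
        return list(self._store[(firm, matter)])
```

The design choice worth noting is that permissioning lives in the key, not in an access check bolted on later, which is the structural shape "what Paul Weiss knows stays at Paul Weiss" implies.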

One Step Away

The second failure is architectural. Most intelligence tools sit adjacent to work. A user navigates to the tool, explains the situation, and then asks a question. By the time the model has caught up, the work has already started without it.

Intelligence embedded throughout the work, not beside it, is the second principle. Users aren't looking for a tool to visit. They need a system already oriented to their task, one that captured intent at the moment of creation and has been working from it ever since.

Harvey delivers this to legal professionals through 500 pre-built workflow agents, each vetted by Harvey's own legal team. A lawyer using Harvey never has to explain what contract clause review is, what their firm cares about, or what a red flag looks like. That expertise is codified in the product. The lawyer arrives, and the system is already in position. Across Harvey's client base, this translated to more than 20 saved hours per professional each month. The system being in position is what allows professionals to focus entirely on the decisions only they can make.

This is the whole product argument made concrete. Enterprise AI can no longer survive with just a strong model and a chat interface. The whole product is memory plus workflow integration plus permissions plus structured output plus reusable agents plus analytics and monitoring. Ventures that ship only the model are launching incomplete products. Customers are filling those gaps themselves with nights, weekends, and inference bills.

The Unprompted Answer

Most intelligence tools answer questions. You ask, the system responds. But the most consequential professional moments aren't always questions. They are signals.

A project is green by every metric, but is about to go wrong. A legal brief is well-argued but is missing the one piece of evidence that matters most. These are not questions anyone thought to ask. They are things the system should already know to surface.

This is the third failure, and it's the easiest one to miss because the tools never falter. The system responds to every question it's asked and closes every loop it's given. What fails is the premise that the right question will always arrive. The system is waiting to be prompted, but no one knows what to ask next.

Adaptive intelligence closes that gap not by answering better, but by surfacing interventions before the ask forms.

Harvey's agents don't simply return results; they triage. The system processes a matter and decides what level of attention each finding requires. Standard language is parsed, unusual patterns are flagged, and material risk is escalated. The professional doesn't scan every output with the same intensity, because the system has already done that and only surfaces what requires a human in the loop.

Calibration is required from day one. A system that escalates everything is noise, and a system that escalates nothing is a liability. Harvey's Agent Builder gives firms the tools to encode themselves in the platform. They define what signals look like for their practice, clients, and risk tolerance. The system learns what's standard, and over time, no longer needs to be told. The accumulated context does the bulk of the work, while the professional fills in the gaps.
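The calibration tradeoff above can be made concrete with a small sketch. The risk scores and thresholds are invented for illustration; the point is only that the escalate/flag/ignore boundaries are firm-configured parameters, not properties of the model.

```python
# Hypothetical triage sketch: scores and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    clause: str
    risk: float  # 0.0 (boilerplate) .. 1.0 (material risk)

def triage(findings, flag_at=0.4, escalate_at=0.8):
    """Bucket findings so only material risk reaches a human first.

    flag_at / escalate_at are firm-configured: raise them and the system
    quiets down (risking misses), lower them and it escalates more
    (risking noise). Calibration, not scoring, is the product decision.
    """
    buckets = {"standard": [], "flagged": [], "escalated": []}
    for f in findings:
        if f.risk >= escalate_at:
            buckets["escalated"].append(f.clause)
        elif f.risk >= flag_at:
            buckets["flagged"].append(f.clause)
        else:
            buckets["standard"].append(f.clause)
    return buckets
```

A system that escalates everything sets `escalate_at` near zero; one that escalates nothing sets it near one. The useful system is the one whose thresholds encode a specific firm's risk tolerance.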

Inevitable, Again

Delight has always felt inevitable in retrospect. The iPhone, the algorithmic feed, the first time a model understood you. Each moment arrived, and we immediately felt like it should have always been there. That feeling is the signal that a new S-curve has started climbing, and the last one has begun its descent into infrastructure.

Four principles define the next center of delight, and they are already being built.

01 Memory that deepens into foresight
Understanding that accumulates over time until the system no longer needs to be told what matters. It already knows.

02 Intelligence inside the work, not beside it
A system already oriented to the task before you arrive, working from intent captured at the moment of creation.

03 Interventions before the ask forms
Not answering better. Knowing what to surface, and when. The shift from retrieval to triage.

04 The right intelligence, already chosen
Fine-tuned for the domain, routed to the task, cost-optimized before the professional arrives. The foundation the other three principles depend on.

Together, they describe a product that doesn't just perform; it adapts. The right model is already chosen. The context is already held. The intelligence is already inside the work. And the system already knows what to surface before anyone thought to ask. That's what personalization always promised and never quite delivered, because it was modeling a stable version of you. Adaptivity finally accounts for the you that shows up today.

For scaling tech ventures, the window is open but already compressing. Teams building their own context layers, routing their own models to arbitrage tokenomics, and encoding their own expertise into bespoke workflows are compensating for a product gap. It happens to be a large one. The ventures that close it won't just win deals; they will define what AI-native means for the next generation of enterprise software.

Harvey is one proof point. A domain-specific, context-continuous, intervention-aware system that counted half the AmLaw 100 as clients in under three years, reaching $190M ARR and an $11B valuation without an established consumer acquisition channel.

Delight didn't die. It adapted. And the ventures that adapt with it are the ones building the next inevitable thing.

Harvey AI: Background
Founded 2022
Headquarters San Francisco, CA
Employees ≈300
Total Raised ≈$1B
Latest Round Series G · March 2026
Amount Raised ≈$200M
Lead Investors GIC, Sequoia Capital
Business Model B2B SaaS
Value Proposition A legal intelligence platform built specifically for lawyers, not adapted from general-purpose AI models. 500 ready-to-run workflow agents. Context that follows a matter from open to close. Memory that respects firm permissions.
Target Customers Built for the world's top law firms with a path to serving mid-sized firms as the platform scales.
Customer Acquisition Sold directly to firms. Growth accelerates through relationships and partnerships.
References
  • Bommasani, Rishi, et al. The 2026 AI Index Report. Stanford Institute for Human-Centered Artificial Intelligence (HAI), Stanford University, 2026. aiindex.stanford.edu
  • Harvey AI. "Harvey Raises at $11 Billion Valuation to Scale Agents Across Law Firms and Enterprises." Harvey AI Press Release, 2026. harvey.ai
  • Harvey AI. Platform Overview. harvey.ai
  • OpenAI. "Introducing ChatGPT." OpenAI Blog, November 30, 2022. openai.com/blog/chatgpt