In 2022, Harvey launched an AI legal platform that automated contract analysis, research, and due diligence. Tasks that once consumed thousands of billable hours could now be handled by software. Within months, the company signed six-figure contracts with top law firms.

Three years later, Harvey reached $100 million in annual recurring revenue and a $5 billion valuation.

From the beginning, Harvey operated as an enterprise business. The company launched without a self-serve product or an entry-level pricing tier. Instead, it built a sales-led motion immediately, staffed by former Big Law attorneys who could engage general counsels on risk, compliance, and legal workflows.

This enterprise-first approach reflects a broader pattern among AI-native companies. Go-to-market execution increasingly starts with enterprise sales rather than progressing toward it over time.

The 7-to-2 Compression: When Enterprise Arrives on Day One

A decade ago, the SaaS growth path followed a predictable sequence. Companies launched with a self-serve product, relied on organic adoption, and introduced enterprise sales later. That progression gave teams time to validate demand, refine pricing, and build internal systems before moving upmarket.

Several well-known SaaS companies followed this model. Slack spent years focused on self-serve adoption before expanding into enterprise sales. Dropbox took even longer to develop a dedicated business motion. Moving from individual users to large customers was gradual and intentional.

AI-native companies operate on a compressed timeline. Enterprise customers engage much earlier, often before a self-serve motion has time to mature. As a result, milestones that once unfolded over many years now converge within the first two.

What used to take seven years now happens in two years or less.

This compression reflects a change in go-to-market sequencing. Enterprise sales often becomes the starting point rather than the destination. Pricing, contracting, and revenue operations take shape through direct customer conversations instead of incremental experimentation at the edges of a self-serve funnel.

In practice, many AI-native companies reach large contracts quickly. Public reporting and executive disclosures point to rapid enterprise adoption across legal, developer, and research-focused AI platforms, often within the first few years of operation.

Executive commentary across AI-native companies reflects this shift, particularly in how quickly enterprise demand materializes.

This enterprise-first model is becoming increasingly common. A Forbes analysis of leading AI companies found that the majority rely on sales-assisted go-to-market motions from the start rather than adding them later.

Enterprise buyers are responding just as quickly. According to Mayfield’s 2026 CXO survey, most enterprise technology leaders plan to increase AI spending, with more than half reallocating budget from legacy vendors to AI-native providers.

Why AI Products Force High-Touch Conversations

AI products introduce two characteristics that change how software is sold: unpredictable cost structures and deep integration requirements. Together, these factors push most enterprise AI deals toward sales-led motions, where value, risk, and implementation are defined through direct conversations rather than self-serve checkout.

Five factors consistently drive this shift.

1. Enterprise buyers negotiate before they buy.

AI infrastructure decisions carry long-term implications. Buyers evaluate not only functionality, but also risk exposure, contractual flexibility, and vendor maturity. As a result, purchases involve multi-year agreements, customized terms, and input from finance, legal, engineering, and executive stakeholders. These decisions require coordinated conversations rather than transactional checkout flows.

2. For agentic AI, performance depends on integration depth.

Unlike traditional SaaS products that deliver value immediately after login, agentic AI systems rely on business context. Their effectiveness depends on access to internal systems such as CRM data, operational metrics, historical transactions, and customer communication logs. The more deeply the system integrates, the more value it can deliver.

Because of this dependency, proofs of concept require detailed technical discussions before meaningful value can be demonstrated. Teams evaluate implementation timelines, data architecture, API access, and security requirements up front. These considerations shape whether a pilot can succeed at all.

3. Finance teams require cost predictability before approval.

Enterprise finance leaders face a distinct challenge when evaluating AI products. Usage-based pricing tied to tokens, API calls, or model inference volume introduces variability that can be difficult to forecast. Monthly spend can change dramatically as usage scales, even when seat counts and contract terms remain stable.

For finance teams, predictability is a prerequisite. Contracts often include predefined rate limits, volume commitments, and overage pricing before a pilot begins. Without these guardrails, cost surprises surface in invoices rather than conversations. That dynamic creates friction, delays expansion discussions, and undermines trust.

This requirement shapes how AI products are priced and sold to enterprises. Discussions about usage limits and billing mechanics happen early, as part of the buying process, rather than after adoption.
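In practice, these guardrails reduce to simple arithmetic. Here is a minimal sketch of a committed-volume contract with overage pricing and a negotiated rate limit; the function and parameter names are illustrative, not any vendor's API:

```python
def invoice_total(usage_units, committed_units, committed_fee,
                  overage_rate, rate_limit=None):
    """Compute a monthly invoice for a committed-volume contract.

    The customer pre-pays committed_fee for committed_units; anything
    above that bills at overage_rate per unit, and usage is capped at
    a negotiated rate_limit if one exists.
    """
    if rate_limit is not None:
        usage_units = min(usage_units, rate_limit)
    overage_units = max(0, usage_units - committed_units)
    return committed_fee + overage_units * overage_rate
```

With a 1M-token commitment at $5,000 and $0.004 per overage token, a month of 1.2M tokens bills $5,800. The point for finance is that the rate limit bounds worst-case spend before the pilot starts.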

Look at how AI companies design usage-based and outcome-based pricing for enterprise customers.

4. Cost to serve varies widely by customer and geography.

AI companies often lack clear visibility into per-customer costs early on. Infrastructure requirements change based on data residency laws, regional compliance standards, and customer-specific deployment needs.

As Harvey expanded globally, operating across more than sixty countries introduced material cost variability. In jurisdictions with strict data processing regulations, infrastructure had to be provisioned locally even when serving a small number of customers. These upfront compute investments shaped deal economics in ways that could not be modeled in advance.

As Harvey CEO Winston Weinberg explained:

“Germany and Australia have incredibly strict data processing laws. You cannot send financial data outside of those countries. We set up Azure or AWS instances in every country, but we might only use them to support three or four large clients. Our margins look strong on a token basis, but they decline once we account for the upfront infrastructure required across jurisdictions.”

As a result, pricing discussions became account-specific. The cost to serve differed by customer, region, and deployment model, making standardized pricing impractical.

5. Pricing emerges through sales conversations, not experiments.

Enterprise AI pricing evolves through negotiation rather than A/B testing. Each contract reflects customer-specific usage patterns, risk tolerance, and value drivers. High-value deals involve legal and finance teams, long approval cycles, and customized terms.

At Harvey, revenue mix shifted quickly as the customer base expanded beyond law firms into corporate buyers. Each segment required different pricing structures, contract terms, and value narratives. These learnings surfaced through direct sales conversations, not landing page experiments.

When infrastructure costs vary and integration depth determines value, custom pricing becomes a functional requirement. AI companies must design contracts that accommodate variability in usage, deployment, and outcomes while remaining predictable enough for enterprise buyers.

Explore different approaches to AI pricing models.

Where revenue operations break for AI companies

Enterprise demand often accelerates faster than internal systems can support. Marketing generates interest quickly. Sales closes complex deals that include multi-year terms, hybrid subscription and usage pricing, and custom volume commitments. Finance then needs to recognize revenue across both subscription and consumption models to remain compliant with ASC 606.
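The mechanics are simple in isolation: the subscription component is recognized ratably over the contract term, while consumption is recognized in the month the usage occurs. A minimal sketch of that split (a simplification of ASC 606 treatment, with illustrative names):

```python
def recognized_revenue(month, term_months, subscription_total,
                       usage_fees_by_month):
    """Recognized revenue for one month of a hybrid contract."""
    # Subscription: recognized ratably over the contract term.
    ratable = subscription_total / term_months
    # Consumption: recognized in the month the usage is incurred.
    consumed = usage_fees_by_month.get(month, 0.0)
    return ratable + consumed
```

The difficulty is not the formula but feeding it reliable inputs when subscriptions, usage meters, and contract amendments live in separate systems.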

At this point, many AI companies discover that their revenue stack was not designed for this level of complexity. Stripe, spreadsheets, and Salesforce are stitched together with manual workflows that do not scale.

Quotes don’t match invoices

Breakdowns often begin between sales and finance. Sales negotiates custom usage terms, tiered pricing, and hybrid subscription-plus-consumption models. Finance is responsible for billing accurately and recognizing revenue correctly across those structures.

When deals move from contract to billing, systems frequently cannot support the negotiated terms. Quotes do not map cleanly to SKUs. Billing configurations fall short. The result is repeated back-and-forth between teams as invoices are corrected after the fact.

As one finance leader described it:

“The quote we send isn’t necessarily what we’re actually selling to the customer. We invoice them, they request changes, we cancel the invoice, issue a credit note, and create a new one. That entire loop needs to disappear.”

What should be automated becomes manual. Finance teams spend time adjusting invoices instead of analyzing revenue. Each correction increases the risk of errors and delays downstream reporting.
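One lightweight guardrail is to validate negotiated line items against the billing catalog before an invoice is generated, so mismatches surface as exceptions rather than credit notes. A minimal sketch, with illustrative data shapes:

```python
def unmapped_lines(quote_lines, sku_catalog):
    """Return quote line items whose SKU is missing from the catalog.

    An empty result means the negotiated terms can be billed as
    quoted; anything else should block invoicing until the catalog
    is updated.
    """
    return [line for line in quote_lines if line["sku"] not in sku_catalog]
```

Catching the gap at quote time turns the cancel-credit-reissue loop into a single catalog update.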

SKU sprawl compounds the problem

As enterprise deals become more customized, product catalogs expand quickly. Usage tiers, add-ons, regional pricing, and bespoke contract terms introduce new SKUs with every deal. Over time, this creates operational drag.

One company reported issuing hundreds of invoices each month, with dozens requiring deep investigation. Some discrepancies took days to resolve. In several cases, customers were billed for products they never contracted for because usage mapped to the wrong SKU.

The consequences are material. Billing errors erode trust. Churn risk increases. Deals slow down as finance struggles to process contracts at sales velocity.

RevOps friction slows growth

As complexity grows, sales teams feel the impact directly. Reps spend more time navigating internal systems than closing new business. Finance teams struggle to answer basic questions about revenue with confidence. Even fundamental metrics such as current ARR require manual reconciliation.

At this stage, revenue recognition shifts from process to approximation. Forecasting becomes less reliable. Leadership loses visibility into how the business is actually performing.

Finance teams at AI companies face a distinct set of challenges as they move upmarket, including hybrid revenue recognition, unpredictable usage billing, and custom contract terms that legacy systems were never built to handle.

Learn how finance leaders build infrastructure to handle enterprise complexity.

Hybrid GTM creates duplicate realities

Many AI companies operate both self-serve and enterprise motions at the same time. When these motions rely on different systems, data fragmentation follows.

Self-serve teams often use tools like Stripe or Paddle for checkout. Enterprise sales relies on Salesforce with manual billing workflows. The same customer can exist in both systems with different subscriptions, payment terms, and no shared source of truth.

A common scenario illustrates the problem. An individual user signs up through self-serve. Months later, their company negotiates an enterprise contract. There is no reliable way to link those accounts. Teams manually reconstruct who belongs to which contract and which usage applies to which agreement.
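Absent shared identifiers, teams fall back on heuristics such as matching self-serve signups to enterprise accounts by corporate email domain. A minimal sketch of that heuristic (names are illustrative; real matching also needs exception handling for personal domains and subsidiaries):

```python
def link_by_domain(self_serve_users, enterprise_accounts):
    """Group self-serve users under the enterprise account whose
    contract covers their email domain. Returns {account: [emails]}."""
    linked = {}
    for user in self_serve_users:
        domain = user["email"].rsplit("@", 1)[-1].lower()
        account = enterprise_accounts.get(domain)
        if account:
            linked.setdefault(account, []).append(user["email"])
    return linked
```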

Customer success teams lack visibility into renewal contacts because product usage data is disconnected from billing data. Finance teams struggle to reconcile contracted revenue with recognized revenue spread across systems. When leadership asks for a consolidated ARR number, the answer requires days of manual work.

As one operator put it:

“We have 10+ disconnected tools. It works, but it’s messy. Every time we need to change pricing, it’s a cost of opportunity because of how complex the change is.”

Scaling from self-serve to enterprise without duplicating systems requires deliberate infrastructure decisions. Companies that succeed invest early in unified revenue operations that support both motions without fragmenting data or processes.

Look at how successful companies build unified revenue operations for hybrid go-to-market strategies.

The new reality

AI-native companies are reaching scale on timelines that would have been unthinkable a decade ago. Harvey reached $100 million in annual recurring revenue in thirty-six months. Anthropic did it in roughly twenty-four. The window between product launch and enterprise scale continues to compress.

The companies that navigate this compression successfully share a common trait. They assume operational complexity from the start. Rather than waiting for systems to break, they build infrastructure while closing their earliest enterprise deals.

Across AI-native companies, several infrastructure decisions consistently show up early.

They remove engineering bottlenecks from deal execution. Sales teams need the ability to generate complex quotes without waiting days for technical support. When enterprise deals include base seats, usage-based pricing, and region-specific deployment requirements, speed matters. Companies that can quote accurately and immediately protect momentum in competitive sales cycles. Learn how to build quote-to-cash processes that keep up with enterprise complexity.

They automate revenue recognition before compliance becomes a constraint. Enterprise AI contracts often combine subscriptions, usage charges, outcome-based components, and prepaid credits. Manual revenue calculations slow reporting and introduce risk under ASC 606. Teams that invest early in automation avoid retrofitting compliance after scale exposes gaps.

They maintain unified visibility across revenue data. When subscriptions live in one system, usage in another, and adjustments in spreadsheets, teams lose confidence in forecasting and reporting. AI-native companies that scale successfully consolidate billing, usage, and contract data early to preserve clarity as complexity increases.

They design for pricing change as a constant. Customer mix shifts quickly in AI markets. As companies expand across segments, regions, and deployment models, pricing evolves alongside infrastructure costs and value realization. Systems must support versioning, grandfathering, and migrations without disrupting existing contracts.
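In billing terms, designing for pricing change means modeling prices as versioned plans, with existing contracts pinned to the version they signed. A minimal sketch of that idea, with illustrative structures and names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlanVersion:
    version: int
    price_per_seat: float

def effective_plan(contract_version, versions, migrate=False):
    """Resolve the plan a contract bills against.

    Grandfathered contracts stay pinned to their signed version;
    an explicit migration moves them to the latest one.
    """
    if migrate:
        return max(versions, key=lambda v: v.version)
    return next(v for v in versions if v.version == contract_version)
```

Keeping version resolution explicit lets a price change ship without touching, or surprising, existing contracts.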

The pattern is consistent. Enterprise sales arrives early. Pricing evolves through negotiation. Revenue operations becomes a growth constraint if it is treated as a downstream concern.

The outcome is clear. AI-native companies that build for enterprise complexity from day one move faster with fewer reversals. Those that delay infrastructure decisions spend later stages untangling systems instead of compounding growth.

For teams navigating this shift, these resources offer deeper guidance:

For RevOps Leaders:

Complete guide to enterprise infrastructure and scaling hybrid GTM

For Finance Teams:

Managing hybrid revenue recognition and usage-based billing
