At Beelieve ’26, Gorgias CFO Kunal Agarwal gave a finance leader’s view of AI monetization: once your product is handling millions of AI interactions, pricing stops being a packaging exercise and becomes a system that manages margin, adoption, and customer risk all at once.
Gorgias’s AI Agent usage has grown 350% in the last 12 months, and Kunal opened by underscoring how that’s changed the stakes:
“We power over 300 million conversations for 17,000 customers. The reason I say that is not to brag — it’s more to talk about why it matters, which is the monetization. We have a ton of scale that happens, and so we need to get our monetization decisions right, or it can be very, very expensive.”
When every shopping interaction, model call, escalation, and agent workflow has a cost at that scale, every mistake compounds, and the old SaaS assumptions stop working. In the session, Kunal explored how he’s led the evolution of monetization at Gorgias from the ground up.

AI breaks the link between usage, cost, and value
In SaaS, usage was often a reasonable proxy for value. More seats, more tickets, and more volume usually meant more customer value and more revenue. Kunal shared how AI has made that logic less reliable.
“The world of SaaS was fantastic. For a finance person, it was great — near-zero marginal cost, really high gross margins. Life was good. And now AI has fundamentally broken that model, where every single interaction has real marginal costs associated with it. So the fundamentals of how I think about margins, how I think about cost incrementality — all of that changes.”
More than compressing margins, this shift creates a new monetization problem: how much do you charge, and for what? The usage-as-value proxy that worked in SaaS breaks down in AI.
“If I have an AI agent resolve a support interaction with you, that maybe costs you $5 [with] a human. That is very different than another AI interaction that enables a customer on your website to buy a $250 pair of shoes. Those are the same interactions. They have the same cost from an LLM perspective, but are delivering very different outcomes to the customer.”
The LLM cost is roughly the same, but the customer value is not. That’s the core tension in AI monetization: if you price only on interaction volume, you miss the value difference in outcomes. But if you try to price every outcome too precisely (especially before the market is ready), you create too much complexity for customers still learning how to adopt AI.

Gorgias chose outcome pricing, but not perfect value capture
Gorgias prices its help desk product on volume because that matches how customer support software is already bought. For its AI Agent, it introduced outcome-based pricing: customers are charged for fully resolved interactions that require no human intervention, not for raw usage.

“If we have an AI agent interaction that doesn’t resolve an interaction… it’s the same amount of cost for us, but we’re not going to charge the customer.”
It’s certainly a margin bet: Gorgias absorbs the cost of failed or escalated AI interactions. The customer pays only when the agent completes the job. But the more interesting decision was what they didn’t do.
“When we launched our AI agent, we thought we’d be really smart: If we’re helping someone sell a $250 shoe, that’s very different than automating a $5 support resolution — so we should charge a lot more.”
The team considered charging differently for support and sales use cases. A sales-assistant interaction that helps convert a $250 order clearly creates different value than an automated support resolution. But customers pushed back on the complexity.
“The biggest hurdle right now is just adopting AI. And what [our customers] felt was: ‘I’m already unsure how this is going to work, and now this sounds really complicated.’
“So we said, okay — we’re just going to charge a flat resolution price for every AI agent resolution. Are we leaving money on the table? Absolutely. But we’re optimizing for adoption.”
The tradeoff was practical: capture less value now to reduce adoption friction. For AI products, simplicity is a strong revenue strategy when the market is still building trust.
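As a rough illustration of the resolution-only model (the flat price and field names below are placeholders, not Gorgias’s actual rates or schema), the rule reduces to: count only interactions the agent fully resolved without human handoff, and bill a flat price for each.

```python
# Illustrative sketch of resolution-based billing.
FLAT_PRICE_PER_RESOLUTION = 0.90  # hypothetical flat rate per resolved interaction

def monthly_bill(interactions):
    """Charge only for interactions fully resolved without human handoff."""
    resolved = [i for i in interactions if i["resolved"] and not i["escalated"]]
    return len(resolved) * FLAT_PRICE_PER_RESOLUTION

interactions = [
    {"resolved": True,  "escalated": False},  # billable
    {"resolved": False, "escalated": True},   # vendor absorbs the cost
    {"resolved": True,  "escalated": False},  # billable
]
```

Failed or escalated interactions still cost the vendor an LLM call, but they simply never enter the bill — which is exactly the margin bet described above.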

Predictability matters more when customer usage is seasonal
A pure usage model sounds clean until you sell into e-commerce.
For Gorgias customers, Black Friday and Cyber Monday create large seasonal spikes. A merchant can move smoothly through the year, hit peak season, blow through usage, and receive a painful overage bill during the exact period when support volume and revenue pressure are highest.
Kunal used this context to underscore how monetization has to account for customers’ operating reality, not just vendors’ cost recovery.
“[Customers] want to be able to have some predictability in what they’re going to pay, and they also want it to align with the outcomes that you’re delivering back to them.”
That’s why Gorgias has encouraged more customers to select annual plans. Annualized usage gives merchants room to move through seasonal peaks without constantly changing packages or getting punished for holiday demand.
For finance leaders, the takeaway is that usage-based pricing still needs a smoothing mechanism when the customer’s business has predictable volatility.
Attribution becomes part of the pricing model
Support outcomes are relatively easy to prove. If the AI agent resolves a ticket without human intervention, the value is obvious. But sales outcomes are harder.
“Think about a customer that comes in Saturday, chats with our shopping assistant, looks at a product, comes back a couple more days, looks again, and then three more days later actually goes and buys it. Did we influence that? We think we did.”
That forced Gorgias to define attribution rules:
“We started with a seven-day attribution window… customers pushed back… we’ve kind of settled on three days.”
That detail matters, because AI monetization is about more than picking a metric: it’s about defining what counts, when it counts, and whether the customer believes the attribution logic. If they don’t trust the attribution window, they won’t trust the invoice.
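In code, the attribution rule is just a window check. This is a minimal sketch under assumed semantics (credit the assistant if the purchase lands within three days of its last touch); the function and variable names are illustrative, not Gorgias’s implementation.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a 3-day purchase-attribution window.
ATTRIBUTION_WINDOW = timedelta(days=3)

def is_attributed(last_touch, purchase_time, window=ATTRIBUTION_WINDOW):
    """True if the purchase falls within the window after the assistant's last touch."""
    return timedelta(0) <= purchase_time - last_touch <= window

last_touch = datetime(2025, 11, 1, 10, 0)
in_window = is_attributed(last_touch, datetime(2025, 11, 3, 9, 0))    # ~2 days later
out_of_window = is_attributed(last_touch, datetime(2025, 11, 8, 9, 0))  # ~7 days later
```

The interesting part is not the code but the parameter: moving `ATTRIBUTION_WINDOW` from seven days to three is the negotiation Kunal described.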
Monetization breaks if the first 90 days fail
One of the clearest operational lessons Kunal shared came from the user onboarding process at Gorgias.
They saw two early problems: customers were not fully turning the product on, and many churned after roughly 90 days. The company realized that leaving customers to configure the agent on their own created weak usage, weak value perception, and churn.
“An AI agent product is not like a SaaS product. It doesn’t come off the shelf working. It doesn’t just turn on and work at 100%. It has to be optimized. It has to be trained. It has to be guided.
“If you expect the customer to do that by themselves, it’s not going to happen. They’re not going to find value, and they’re going to say, ‘Why am I paying for this thing?’”
So, Gorgias invested heavily in implementation. At first, that meant white-glove human onboarding aimed at getting customers to a successful go-live and active engagement as quickly as possible. Later, the company built an AI agent to help train the AI agent.
“It may seem like on paper as a CFO, like, wow, this is really expensive to get this customer onboarded, but it’s worth it if you think about the difference between the proclivity to churn versus not.”
For finance and monetization teams, this reframes onboarding cost. In AI, implementation is both a customer success expense and the cost of getting the product to a monetizable state.
Usage is the clearest signal of value
Kunal also challenged the usual customer health hierarchy.
“Usage is a more important metric for us than NPS, CSAT. I know that’s maybe controversial… but when we looked at the data, customers above 70% usage were much less likely to churn than customers below that.”
That insight reshaped how Gorgias thinks about expansion, risk, and how they sell.
“Don’t oversell a product. The inclination is: the customer says they need this sky-high usage, so let me go sell that. What we found is it’s much better to sell a lower amount, have them really get value out of it, and then expand. The inverse is very costly — they like the product, but they don’t feel they’re getting value for what they’re paying.”
The hard-earned sales lesson is that overselling usage can depress perceived ROI. Expansion works better when the customer first feels they are fully consuming what they bought.
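The 70% threshold translates directly into a simple health check. A minimal sketch, assuming utilization is measured as consumed over purchased volume (the function name and the "at-risk"/"healthy" labels are illustrative):

```python
# Hypothetical churn-risk flag built on the 70% usage threshold described above.
USAGE_HEALTH_THRESHOLD = 0.70

def churn_signal(consumed, purchased):
    """Flag accounts consuming less than 70% of the usage they bought."""
    return "at-risk" if consumed / purchased < USAGE_HEALTH_THRESHOLD else "healthy"
```

Under this rule, an account sold 1,000 resolutions but consuming 500 is flagged well before renewal — which is why underselling and expanding beats overselling.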
The $4 interaction that changed how Gorgias tracks cost
The most concrete moment in the session came from what Kunal called “the $4 lesson”.
“We launched a feature… customers really liked it. And then I looked at the LLM costs for the week and said, ‘What’s happening here?’ There was this huge spike. When we dug into it, we realized this individual feature was costing over $4 per interaction.”
That discovery forced a deeper realization:
“The mistake we made was just looking at LLM costs. That’s like saying the cost of operating a restaurant is just the food — you’re ignoring the staff, the rent, everything else.”
Gorgias now tracks AI cost across three levels: LLM cost per interaction, fully loaded gross margin, and feature-level cost.
For AI monetization, gross margin has to include the full cost of delivering the agent interaction: model calls, infrastructure, support, implementation, orchestration, and whatever else is required to get the outcome.
Without that view, teams won’t know where they can discount, which features are expensive, or which workflows are quietly eroding margins.
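The restaurant analogy is easy to make concrete. A minimal sketch of the fully loaded view — every cost figure below is a made-up placeholder, not Gorgias data:

```python
# Illustrative fully loaded gross margin per interaction (all numbers hypothetical).
def fully_loaded_margin(price, llm, infra, support, implementation):
    """Margin once every delivery cost, not just the model call, is counted."""
    total_cost = llm + infra + support + implementation
    return (price - total_cost) / price

llm_only_margin = fully_loaded_margin(0.90, 0.15, 0, 0, 0)         # looks healthy
loaded_margin = fully_loaded_margin(0.90, 0.15, 0.10, 0.20, 0.05)  # the real picture
```

The LLM-only view is the “food cost” of the restaurant; the loaded view is what actually determines where discounting is safe.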
Model selection is now a pricing lever
The $4 lesson also changed how Gorgias thinks about agent architecture.
Not every task needs the most expensive model. Some workflows may require high-end reasoning. Others may only need a cheaper model to classify, summarize, route, or retrieve information. As Kunal put it:
“Not everything needs the Porsche of LLM models. It’s okay to have the Camry for some things.”
More than just an engineering optimization, model routing is now a monetization lever. If model selection happens blindly, the pricing model may look profitable on paper and fail in production. If model routing is deliberate, the company can preserve customer experience while controlling cost per outcome.
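In its simplest form, deliberate routing is a lookup from task type to model tier. A sketch under assumed names and prices (neither the model labels nor the per-call costs are real):

```python
# Sketch of cost-aware model routing; names and per-call prices are placeholders.
MODEL_COST = {
    "small": 0.002,  # the "Camry": classify, summarize, route, retrieve
    "large": 0.060,  # the "Porsche": multi-step reasoning on hard tickets
}

def pick_model(task):
    """Route cheap, well-bounded tasks to the small model by default."""
    cheap_tasks = {"classify", "summarize", "route", "retrieve"}
    return "small" if task in cheap_tasks else "large"
```

With a 30x cost gap between tiers, the routing table itself becomes a pricing input: it sets the cost per outcome that the flat resolution price has to cover.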
Pricing is now a continuous operating loop
Gorgias now reviews pricing on a continuous cadence.
“We evaluate our pricing every 90 days now. It doesn’t mean we change it every 90 days, but we evaluate it constantly… If you’re thinking about pricing as something you discuss once a year at your offsite, you’re slow to the game.”
That loop connects value definition (what outcomes matter), product behavior (what actually happens in usage), cost structure (what it takes to deliver it), and customer feedback (what they accept and trust), and feeds all that information back into pricing.
The shift underneath it all
Kunal’s talk made AI monetization concrete because it was grounded in the actual mechanics: 300 million conversations, one million AI interactions a month, 350% usage growth, a failed seven-day attribution window, a 70% usage health threshold, a costly first 90 days, and one feature that unexpectedly cost $4 per interaction.
The takeaway is that AI monetization only works when finance can see what’s happening inside the product: which interactions resolve, which escalate, which features cost too much, which customers are under-consuming, which attribution rules customers believe, and which model choices change margins.
