The price of your product is wrong.
It’s a harsh reality you’re better off accepting sooner rather than later.
Would you like to sponsor my newsletter? → Send an inquiry ←
Everyone gets pricing wrong. Even OpenAI can’t be spared, with all their Artificial Intelligence.
Case in point: Sam Altman recently posted about their $200/month ChatGPT Pro plan being priced incorrectly (it’s not profitable), commenting: “[But] I personally chose the price and thought we would make some money.”
HA! The infamous last words of every bad pricing decision. Am I right, or am I right??
Three modes of failure in pricing decisions
Mode 1: HiPPO (Highest Paid Person's Opinion) makes the pricing decisions (cough, Sam Altman, cough). After all, they need to feel powerful. Of course, they pray and hope to be right, and occasionally, they might be. But 99% of the time, they swing and miss. I don’t blame these poor HiPPOs - our corporate environment positions executives as all-knowing, future-seeing, always-right mythical beings who supposedly have all the answers. After all, they are just trying to play the part - send your condolences.
Mode 2: Over-reliance on qualitative data. Gasp! Am I really saying that being data-driven here can lead to failure? Yes, yes I am.
Asking people how much they’re willing to pay for your product isn’t a great predictor of what they’ll actually pay. This is because people are much quicker to spend theoretical dollars than real dollars. If a friendly person/survey asks them, “Hey, would you pay $100 for this product?” they don’t want to seem cheap! And maybe in certain situations they truly would… so “Sure,” they say, “I’d even pay $120.” But when the time comes to actually pay? Suddenly, it’s a very different story and even $80 is a non-starter.
One main reason for this is that everyone (me, you, and the potential customer) tends to ignore the cost incurred by the friction it takes to get to the value. The equation you need to remember is:
Price < (Perceived Value – Friction)
That last variable is crucial: The price point has to be lower than the perceived value… minus friction! The more friction that has to be overcome to get to the perceived value, the lower the price needs to be. Surveys, focus groups, and even early A/B tests generally assume perfect conditions: high perceived value and zero friction. That’s never the reality. So, when you roll out a price based on theoretical willingness-to-pay, expect reality to hit hard.
This isn’t to say you shouldn’t use data, but quantitative data > qualitative data in this context. That said, qualitative insights are valuable as input before launching tests to gather quantitative data.
Mode 3: Going to market with a singular price point, as if no further discussion is needed. I know all of us have been part of a ‘big launch’ with a carefully determined price point (maybe your company even shelled out $500,000 to get Simon-Kucher involved) and then BAM—major fail. Conversion rates are low, retention is in the toilet, and oops, no contingency planning. So, the next 6 months are consumed with rollback or ‘testing-out’ activities. Here is a hard truth: If you’re settled on just a singular price point, it’s almost guaranteed to be wrong (no matter how you came up with it).
So. If you can’t trust your CEO’s guess, you can’t rely on research, and even a $500K Simon-Kucher crystal ball doesn’t give you the right answer… what are you supposed to do?
P.S. Side note on Simon-Kucher: They are a huge and very (very) expensive firm that is quite popular among executives seeking "solid pricing recommendations." And listen, these folks are incredible at research—the best I’ve ever seen, hands down. But I’ve also witnessed countless instances where their recommendations nearly kill the companies that implement them.
Why?
A. They focus on short-term revenue, often at the expense of long-term growth loops. If your business relies on virality, beware.
B. They don’t stick around to operationalize or optimize the new price points, leaving teams unprepared to handle the fallout.
For a truly complete answer, you’ll need to understand your monetization model and you should take my Reforge monetization & pricing course. But for today, let me walk you through how to approach changing your prices—whether that’s updating existing plans or rolling out new offerings.
How to get the price right
Start off by accepting the truth that your new price will almost definitely be wrong. This is painful (especially if your CEO picked it), but the faster you can admit this, the faster you can get started on finding the optimal price point. Here’s what I recommend:
1. Pick metrics that measure impact on the entire growth ecosystem
It’s not just about how many people buy the plan. It’s about how customers retain, expand, and how much they cost to serve. Your metrics should include:
Conversion rate: How many people buy at each price point?
Churn rate: How many cancel or fail to renew?
Retention/Engagement rate: How many people engage and retain?
Expansion rate: How many people upgrade to a higher plan?
Cost impact: For products with high variable costs (looking at you, AI enthusiasts), how does usage at each price affect your margins?
Conversion is easy and quick to measure. The rest… not so much. Every single pricing rollout will require at least a 3- to 6-month (or longer) measurement window. This doesn’t mean you need to collect new observations the entire time, but you will need to monitor cohorts over that period to understand their behavior.
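To make the metrics above concrete, here’s a minimal sketch of computing conversion, churn, and expansion for one price cohort. The `User` fields and the per-user record shape are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class User:
    converted: bool   # bought at the tested price point
    renewed: bool     # renewed after the first billing cycle
    upgraded: bool    # moved to a higher-tier plan

def cohort_metrics(users: list[User]) -> dict[str, float]:
    """Compute conversion, churn, and expansion rates for one price cohort."""
    n = len(users)
    buyers = [u for u in users if u.converted]
    conversion = len(buyers) / n if n else 0.0
    # Churn and expansion are measured against buyers, not all visitors.
    churn = sum(1 for u in buyers if not u.renewed) / len(buyers) if buyers else 0.0
    expansion = sum(1 for u in buyers if u.upgraded) / len(buyers) if buyers else 0.0
    return {"conversion": conversion, "churn": churn, "expansion": expansion}

# Tiny example: 4 visitors, 2 buy, 1 of the buyers renews, 1 upgrades.
cohort = [
    User(True, True, True),
    User(True, False, False),
    User(False, False, False),
    User(False, False, False),
]
print(cohort_metrics(cohort))  # {'conversion': 0.5, 'churn': 0.5, 'expansion': 0.5}
```

Note that churn and expansion only become meaningful after the cohort has lived through at least one renewal cycle, which is exactly why the measurement window is so long.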
2. Run a price sensitivity test
One of the most powerful tests you should run *every few years* is a simple price sensitivity test. As the simplest option, you can test three price variations: your current price, plus a lower and a higher version. So if your price is $25, test $19 and $29. This will allow you to begin plotting a rudimentary elasticity curve of your conversion and churn rates that can help forecast additional price points.
Lower prices often attract more paying customers, but they tend to attract lower-intent users with higher churn rates. Higher prices may convert fewer people but attract more loyal customers who are willing to expand into higher-tier plans. So looking at a 3-year projected revenue outcome based on observed conversion, retention, and expansion rates will indicate which price point is the best for the business. You can of course incorporate your costs to understand the impact on cash.
This push-and-pull dynamic is why pricing isn’t just about maximizing conversions. It’s about balancing conversion, retention, expansion - all in the name of long term revenue.
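Here’s a back-of-the-envelope sketch of that 3-year projection. All the conversion and churn numbers below are made-up assumptions to show the mechanics, not data from any real test:

```python
def three_year_revenue(visitors: int, price: float, conversion: float,
                       monthly_churn: float) -> float:
    """Projected revenue from one acquisition cohort over 36 months."""
    subscribers = visitors * conversion
    total = 0.0
    for _ in range(36):
        total += subscribers * price
        subscribers *= (1 - monthly_churn)  # the cohort decays each month
    return total

# Hypothetical elasticity: the lower price converts more visitors,
# but attracts lower-intent users who churn harder.
scenarios = {
    "$19": three_year_revenue(10_000, 19, conversion=0.05, monthly_churn=0.06),
    "$25": three_year_revenue(10_000, 25, conversion=0.04, monthly_churn=0.04),
    "$29": three_year_revenue(10_000, 29, conversion=0.03, monthly_churn=0.03),
}
for price, revenue in scenarios.items():
    print(price, round(revenue))
```

With these particular made-up rates, the higher price points come out ahead over 3 years despite converting fewer people, which is the push-and-pull the article describes. Swap in your own observed rates (and subtract variable costs) before drawing conclusions.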
3. Test on new users first
Storytime!
When I first started price testing, I segmented traffic between test and control variations directly on the pricing page. To my dismay, conversion dropped whether we increased or decreased the price. How could that be? After a few sleepless nights spent questioning everything I knew about statistics and user behavior, I realized the pricing page included both new users and returning users.
New users provided clean data and showed the expected increase in conversion with a price decrease. However, returning users did not appreciate the price changes—especially since they’d seen other (control) price points before we started the test. They literally gave feedback that it felt like shopping for an airfare ticket, where prices fluctuated unpredictably based on the time of day. Yuck. This increased their cognitive load and they bounced off the page rather than trying to figure out why prices had changed.
Don’t make this mistake! To avoid this, test pricing changes on new users who have no exposure to your old pricing first. Keep existing users on the old price until the test is complete. After a few months, roll out the changes universally. At SurveyMonkey, we saw that users stopped checking pricing pages after about 3 months of usage, so whenever we rolled out pricing changes, we’d push them live for new users… but wait 3 months to snap them on for existing customers.
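The assignment rule from this story can be sketched in a few lines: existing users always see the old price, and only new users get deterministically bucketed into test variations. The bucket count, salt string, and function names are illustrative assumptions:

```python
import hashlib

OLD_PRICE = 25
TEST_PRICES = [19, 25, 29]  # control price sits among the variations

def price_for(user_id: str, is_existing_user: bool) -> int:
    """Return the price a user should see during the test."""
    if is_existing_user:
        return OLD_PRICE  # never expose existing users to test variations
    # Deterministic bucketing: the same user always sees the same price,
    # so returning visitors don't watch prices fluctuate "like airfare."
    digest = hashlib.sha256(f"price-test-v1:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(TEST_PRICES)
    return TEST_PRICES[bucket]

print(price_for("user-42", is_existing_user=True))   # always the old price
print(price_for("user-42", is_existing_user=False))  # stable test assignment
```

The hash-based assignment matters for exactly the reason in the story: randomizing per page view, rather than per user, is what makes prices look like they fluctuate by time of day.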
4. Isolate pricing changes
Whenever possible, avoid changing multiple things at once. If you’re testing prices, don’t simultaneously rename plans, redesign your pricing page, or adjust features. Mixing variables makes it harder to understand what’s driving results.
When I joined Dropbox, we had a big release that changed plan names, prices, and features all at once (at least the pricing page design didn’t change—although I’ve seen that bundled into a new pricing & packaging rollout too!). Needless to say, the release did not meet forecast expectations, and it was a pain in the *ss to figure out which of those changes caused the underperformance. Months of work, wasted…
Pricing is a sensitive matter, so try to separate pricing changes from plan names, feature allocations, and pricing page design. It won’t always be possible, but at the very least try to ‘pre-test’ pricing changes on their own first.
Let’s hear them objections…
‘But Elena, price testing is unethical.’
‘How can you sleep at night knowing that 2 customers can be paying different prices for the same product?’
Listen, I have plenty of reasons to lose sleep at night, but this isn’t one of them. Not A/B testing is still testing… just on 100% of the population, without any quantifiable results. And probably with the wrong price point. And likely lackluster business outcomes. And customer outcomes.
So pick your poison.
‘But Elena, what if our price tests go viral on reddit?’
One of the biggest fears around price testing is customer backlash. Teams worry about price discrepancies leaking and sparking outrage on Reddit or X.
Here’s the reality: no one cares about your pricing as much as you do. Especially if you are in B2B. For real. (There are a few exceptions to the rule—products with very strong network effects or community—and you are not likely to be one of them.)
Unless the changes are extreme, most customers won’t even notice. And if they do? Empower your support team to handle it. If someone sees two different prices, let support honor the lower price. This customer-first approach not only avoids complaints but can actually build loyalty.
And if your price test does go viral? Congratulations—you just got free publicity.
But seriously: Even on massive tests I’ve run at places like Dropbox and SurveyMonkey, we rolled out different prices … and never heard a peep!
Still don’t believe me? Just look at Netflix. The first time they raised prices, it caused a storm of publicity, even earning CNN coverage (they do have a very price-sensitive user base!).
But guess what? They’ve been doing it consistently since then, without any backlash. If there’s one thing we can learn from Netflix, it’s that ongoing price adjustments are far better than treating them as a once-in-a-blue-moon event.
‘But Elena, what do I do with the existing users on an old price!?’
If the tests work out and you’ve confirmed a new price point, the question becomes: what do you do with existing users who are paying more or less than your new price? Your engineering team will almost certainly pressure you here, since they’ll be upset that they need to maintain a bunch of legacy SKUs, which require ongoing maintenance and QA.
Well, you can keep legacy SKUs and users forever… Or you can clean up those SKUs!
I say bring everyone to your new price point. But you can definitely do it in a way that benefits the customer:
For users paying more: Upgrade them to a higher-tier plan without increasing their price.
For users paying less: Gradually step up their price—just a few percentage points at a time—so the change doesn’t feel sudden or dramatic. Hopefully you’ve loaded more value in the plan over the years anyway, and can use it to justify the increase.
And for anything you’re changing: Always give plenty of notice. And try to frame the change as a value enhancement, not just a price bump.
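The gradual step-up for users paying less can be sketched as a simple schedule: raise the price a few percent per billing cycle, capped at the new price. The 5% step size is an illustrative assumption:

```python
def step_up_schedule(current: float, target: float, step_pct: float = 5.0) -> list[float]:
    """Prices to charge each billing cycle until the user reaches the new price."""
    prices = []
    while current < target:
        # Raise by step_pct, but never overshoot the target price.
        current = min(round(current * (1 + step_pct / 100), 2), target)
        prices.append(current)
    return prices

# Stepping a legacy $20 user up to a new $25 price, 5% per cycle:
print(step_up_schedule(20.0, 25.0))  # [21.0, 22.05, 23.15, 24.31, 25.0]
```

Five gentle cycles instead of one 25% jump—each individual increase stays small enough that the change doesn’t feel sudden or dramatic.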
An important note on churn: If you move prices, you’ll generally see about 6% higher churn on the cohort where you increase prices. So, you’ll want to make sure your price increase is more than 6% if you want to come out ahead after people leave. But if maintaining paid subscribers is particularly important to you, just load ‘em up with discounts or offers for higher plans.
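You can sanity-check that break-even arithmetic directly. The subscriber count and price below are illustrative assumptions; the 6% extra churn is the article’s rule of thumb:

```python
def revenue_after_increase(subscribers: int, price: float,
                           increase_pct: float, extra_churn_pct: float) -> float:
    """Monthly revenue after raising the price and losing extra_churn_pct of the cohort."""
    remaining = subscribers * (1 - extra_churn_pct / 100)
    return remaining * price * (1 + increase_pct / 100)

base = 1_000 * 25.0  # $25,000/month today, from 1,000 subscribers at $25
print(revenue_after_increase(1_000, 25.0, 5, 6))   # 5% raise: $24,675 -- worse off
print(revenue_after_increase(1_000, 25.0, 10, 6))  # 10% raise: $25,850 -- ahead
```

Strictly, the break-even increase is a touch above 6% (you need (1 − 0.06) × (1 + x) > 1, i.e. x > ~6.4%), which is why a raise meaningfully larger than the churn hit is what actually puts you ahead.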
‘But Elena, my boss is just going to pick the price, anyway!’
Dealing with a HiPPO situation? You can strengthen the price testing process even more by creating a monetization council. To get buy-in and avoid siloed thinking, assemble a monetization council/group with representatives from: Finance, Product, Marketing, Sales, and, of course, the almighty CEO.
This group should meet as often as needed, depending on the amount of changes your monetization model is going through. This ensures alignment across all impacted teams, clearly designating a forum where decisions are being made. This council should be making decisions on not only prices, but feature allocations, plan names, value metrics, etc. With everyone at the table, you can make sure everyone agrees on what will be represented—before the test starts.
Here’s the thing: Yes, every CEO or exec has an opinion about pricing, whether you agree with it or not. Rather than dismissing their input, treat it as a hypothesis. Include their preferred price as one of your test variations, alongside other options informed by data.
You can position it like this: “That’s a great hypothesis! Let’s include it as one of the variations in our price test. This way, we can validate that it’s the optimal choice.”
This approach respects their intuition while building in a safety net. If they’re right? Fantastic. If not? You’ll have the data to pivot.
P.S. I’ll write more about this monetization council in a future newsletter.
Final Thought: What to do when you know you’ll be wrong
Even after rigorous testing, your price isn’t final. Market conditions, costs, competitors, and customer behavior will shift. Pricing is an ongoing experiment—your goal is to continuously optimize it.
Don’t let the HiPPO dictate your pricing. And don’t trust what potential customers say they’ll do, either. Instead, let your customers’ wallets, behavior, and retention curves tell you the truth. Always test, always iterate, and always assume you can do better.
Edited with the help of .