Why Your Financial Model Isn’t Working (And How to Fix It)
- Bob Wang
- Oct 18
- 5 min read
Lessons from rebuilding models for fintech and SaaS companies that looked great on paper—but failed to guide real decisions.
In the past 90 days, I’ve been brought in as a fractional CFO to rebuild financial models for three different companies: two in fintech and one in enterprise SaaS. Sometimes the CEO brought me in; other times, the board or a venture capital firm recommended me. Regardless of who made the call, the conversation always started the same way:
"Can you take a look at our model? Something doesn’t feel right."

Despite differences in industry, team size, and stage of growth, the core issue was the same across all three companies: their financial model wasn’t working.
To be clear, the models weren’t broken in Excel. In fact, they were technically impressive, some running to hundreds of rows of formulas. But they didn’t serve their purpose: helping leaders make confident, well-informed decisions.
In this article, I’ll break down the top three issues I found, explain the impact of each, and share what a good model should look like instead. If you’ve ever opened your financial model and wondered, "Why does this look right but feel wrong?" — this article is for you.
The Problem with Over-Engineering the Wrong Areas
Issue: Too much complexity in areas that don’t matter. Not enough nuance and metrics in areas that do.
I opened one model that had 15 different lines for office expenses (paper, coffee, internet, parking, snacks...). But their customer acquisition cost (CAC) was a single input cell with no breakdown. Another model had detailed depreciation schedules for every laptop purchased since 2017, but showed revenue as a flat 10% month-over-month growth assumption.
It’s the classic case of missing the forest for the trees.
Impact: Decision-makers got lost in the weeds. The builder of the model thought, "If I include more details, the model must be accurate." They hid in the comfort of minutiae.
But a highly detailed model is not automatically a high-quality model. Without linking the model to real-world drivers, all that detail becomes noise. Worse, it creates a false sense of confidence.
Stakeholders begin to distrust the numbers. Eventually, they stop using the model altogether.
What the Right Model Should Look Like:
Simple in the margins, sophisticated in the drivers.
Use the 80/20 rule: focus effort on the 20% of assumptions that drive 80% of the results.
Don’t over-optimize for accuracy in line items that don’t move the needle. Instead, invest time in understanding:
Customer acquisition cost (CAC) by channel
Customer lifetime value (LTV)
Onboarding timelines
Revenue ramp-up curves
Churn behavior
Pricing strategies
Build logic around these drivers and make them transparent, editable, and explainable.
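To make that concrete, here’s a rough sketch of driver-based logic, written in Python rather than a spreadsheet just to show the shape of it. The channels, spend figures, ARPU, margin, and churn rate are all placeholders I’ve invented, and the LTV line uses the simple margin-over-churn approximation:

```python
channels = {
    # channel: (monthly spend, new customers acquired per month) -- placeholder values
    "paid_search": (40_000, 80),
    "outbound": (25_000, 20),
    "partners": (10_000, 15),
}

arpu = 350            # average monthly revenue per customer (assumed)
gross_margin = 0.80   # assumed
monthly_churn = 0.03  # assumed

# CAC by channel: spend divided by customers acquired in that channel
cac_by_channel = {ch: spend / new for ch, (spend, new) in channels.items()}

# Simple LTV approximation: margin-adjusted ARPU over expected customer lifetime
ltv = arpu * gross_margin / monthly_churn

for ch, cac in cac_by_channel.items():
    print(f"{ch}: CAC = ${cac:,.0f}, LTV/CAC = {ltv / cac:.1f}x")
```

The point isn’t the tool. It’s that CAC, LTV, and the ratio between them trace back to named, editable drivers instead of a single hard-coded cell.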
If your model feels heavy but still doesn’t tell you anything useful—this is likely your issue.
No Easy Way to Run Scenarios
Issue: No toggle to explore "what-if" cases.
None of the models I reviewed had an easy way to toggle between different assumptions. Want to test how a 3-month onboarding delay impacts cash? You’d have to manually change 6 different cells and hope you remembered to change them all back.
Impact:
Teams operated with a single, brittle forecast.
There was no way to stress test against downside cases.
Hiring and capital planning were built off a base case that might never materialize.
Fundraising decks were built on unverified assumptions because no one had tested how sensitive the plan was to key drivers.
In one fintech case, the model assumed new customers would start transacting 30 days after signing. Reality? It was taking up to 160 days to fully ramp. The difference blew up their cash forecast.
What the Right Model Should Look Like:
A central assumption tab with all key inputs: conversion rates, ramp timelines, hiring speed, pricing, churn.
Clear scenario toggles (dropdowns or Boolean switches) that feed into the rest of the model:
Base Case
Upside Case
Downside Case
A visual dashboard showing output comparisons:
Revenue forecasts
Burn rate
Cash runway
Headcount needs
Ideally, views across different timeframes:
Monthly (next 12 months)
Quarterly (next 2 years)
Annual (3–5 year view for long-term planning or fundraising)
Building scenarios isn’t just a “nice to have”—it’s how you build confidence in decisions.
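If it helps to see the mechanics, here’s a minimal sketch of a scenario toggle, again in Python with invented numbers. It assumes a simple cohort model where switching scenarios changes only three inputs:

```python
# The three scenarios mirror the cases above; every number is a placeholder.
SCENARIOS = {
    "base":     {"new_per_month": 20, "ramp_months": 3, "monthly_churn": 0.03},
    "upside":   {"new_per_month": 30, "ramp_months": 2, "monthly_churn": 0.02},
    "downside": {"new_per_month": 10, "ramp_months": 6, "monthly_churn": 0.05},
}

ARPU = 1_000           # monthly revenue per fully ramped customer (assumed)
FIXED_COSTS = 250_000  # monthly operating costs (assumed)
STARTING_CASH = 3_000_000

def run_scenario(name: str, months: int = 12) -> float:
    a = SCENARIOS[name]
    cohorts = []  # one entry per monthly cohort: [remaining customers, age in months]
    cash = STARTING_CASH
    for _ in range(months):
        cohorts.append([a["new_per_month"], 0])
        revenue = 0.0
        for cohort in cohorts:
            size, age = cohort
            ramp = min(1.0, age / a["ramp_months"])  # partial revenue until fully ramped
            revenue += size * ARPU * ramp
            cohort[0] = size * (1 - a["monthly_churn"])  # apply churn
            cohort[1] = age + 1
        cash += revenue - FIXED_COSTS
    return cash

for name in SCENARIOS:
    print(f"{name}: cash after 12 months = ${run_scenario(name):,.0f}")
```

In a spreadsheet, the same idea is a dropdown on the assumptions tab that swaps in one column of inputs; everything downstream references that column, so nothing has to be changed by hand and changed back.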
Confusing Sales Targets with Board-Level Forecasts
Issue: Sales team goals were used as-is in the model.
In each of the companies I worked with, the revenue line in the model was a direct copy-paste from the sales pipeline. There was no adjustment for close rate, onboarding time, ramp-up speed, or churn.
The model assumed that if $1M was in the pipe this quarter, it would become $1M in revenue next quarter.
Impact:
Board projections were inflated.
Headcount plans were too aggressive.
Cash burn exceeded budget.
Founders had to go back to the board and say, "Actually... we’re behind."
The worst part? Finance lost credibility. Because the model didn't set expectations properly, even the accurate parts of the model were called into question.
What the Right Model Should Look Like:
Separate layers:
Sales targets: What the GTM team is aiming for
Probability-weighted forecast: Apply realistic conversion rates
Realization lag: Delay revenue recognition based on onboarding and customer ramp
Build two views:
Aspirational Forecast: For internal stretch planning
Board-Level Forecast: More conservative, used for hiring, cash, and investor expectations
In addition, finance should own the mapping between pipeline activity and revenue realization. That way, if a deal slips, the impact on cash and hiring is immediately clear.
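Here’s a rough sketch of how those layers fit together. The deals, stage probabilities, and three-month onboarding lag are all invented for illustration:

```python
from collections import defaultdict

# Stage-based close probabilities and the onboarding lag are assumptions for illustration.
CLOSE_PROBABILITY = {"qualified": 0.2, "proposal": 0.4, "contract": 0.8}
ONBOARDING_LAG_MONTHS = 3  # months between signing and first recognized revenue

pipeline = [
    # (deal value, stage, expected close month)
    (400_000, "contract", 1),
    (300_000, "proposal", 2),
    (300_000, "qualified", 2),
]

bookings = defaultdict(float)  # what the GTM team is aiming for
forecast = defaultdict(float)  # probability-weighted, lag-adjusted revenue for the board view

for value, stage, close_month in pipeline:
    bookings[close_month] += value
    forecast[close_month + ONBOARDING_LAG_MONTHS] += value * CLOSE_PROBABILITY[stage]

for month in sorted(set(bookings) | set(forecast)):
    print(f"Month {month}: bookings ${bookings[month]:,.0f}  "
          f"board forecast ${forecast[month]:,.0f}")
```

Bookings and board-level revenue now live on separate lines, and if a deal’s close month slips, the cash impact moves with it automatically.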
If your model treats bookings and revenue as the same thing, that’s a red flag.
Conclusion: Fixing a Broken Model Is More Than Just a Cleanup
A model that "looks right" but doesn't guide decisions is not working.
It doesn't matter how fancy the formulas are, or how pretty the formatting is. If your leadership team doesn't trust the numbers, or can’t use the model to make real trade-offs—then the model is failing its primary job.
"***** ... Financial model isn't working!!"
In each of the companies I worked with, the solution wasn’t to clean up the formatting. It was to reorient the model around the real business:
What drives revenue?
What are the bottlenecks in onboarding?
How long does it take a customer to ramp?
How does sales activity convert into dollars?
When the model reflects these truths, everything else becomes easier: fundraising, hiring, cash management, board reporting.
Financial model not working? Think it might need a second opinion?
I’ve rebuilt models for venture-backed and growth-stage companies to help them:
Forecast accurately
Plan hiring with confidence
Navigate board expectations
Fundraise based on reality, not wishful thinking
If your current model isn’t helping you make decisions, it might be time for a rebuild.



