Roughly Right is Better Than Precisely Wrong
How Hari is sensing global trends and giving investors and companies clarity.
Nine months ago, I started tracking something.
I’d been watching the AI employment conversation for a while. The consensus timeline for when job displacement would really bite was “3 to 5 years.” That number showed up in McKinsey reports, analyst decks, and dinner party conversations with roughly equal confidence and roughly equal rigor. It felt anchored to what people could comfortably absorb. And I kept noticing signals that suggested the actual timeline was shorter. Quite a bit shorter.
So I did something I’ve done my whole career, but had never bothered to formalize.
I traced the causal chain backward. What has to be true before the outcome can materialize? What are the preconditions, in sequence? And at each link in that sequence, what observable signals would tell you it’s already happening?
I’ve been thinking this way for thirty years. It’s the way I’ve assessed markets, evaluated companies, stress-tested strategies. But it always lived in my head, and in the hundreds of books sitting on my shelves. The AI employment question forced me to write it down.
What came out of that process was published last August as “The Last Normal Year.” The projection: AI-driven employment displacement would become measurable in 6 months, reaching a crescendo within 18 months. That put it roughly two years ahead of the consensus. Six months later, I published the follow-up, “AI Is Coming for Your Job After All.” Five of seven chain links had activated. The JOLTS job openings reading had crossed below the threshold I’d been watching. Challenger layoff data had exploded past 108,000 announcements in a single month. The projection was tracking.
I got some things wrong, too. I’d assumed the displacement would be linear. Steady, accumulating, one-directional. It wasn’t. Companies cut, regretted it, rehired selectively, cut smarter. The actual shape was oscillatory. That miss taught me something about the framework I was building. It needed a way to distinguish between systems that spiral and systems that settle toward a new equilibrium.
I built one.
And then I kept going. Over the past nine months, I’ve taken the thinking that lived in my head and turned it into a formal, repeatable methodology. I’ve stress-tested it across multiple domains. I’ve run it with a handful of venture capitalists, a few private equity firms, and several company founders. Their responses have been very positive. And the reason keeps coming back to the same thing.
Which I’ll get to. But first, I want to talk about the fog.
What leaders actually deal with
You have dashboards. You have analysts. You have Gartner subscriptions and board decks and quarterly reviews. You’re still making your most consequential decisions in a fog.
The dashboards tell you what happened. Lagging indicators, mostly, arriving weeks or months after the underlying dynamics already shifted. The analysts give you consensus estimates, which are by definition what everyone else already believes. And the decisions that actually matter, the ones about where to deploy capital and when to restructure and whether the assumptions in your thesis still hold, those need to be made before the data is obvious.
By the time you get confirmation, the market has already moved. Competitors too.
Most leaders know this. Most teams know this. They spend enormous amounts of money trying to cut the fog. And most of what they buy gives them a better-organized view of the past. Which is helpful, but it’s the wrong problem. What you actually need is directional accuracy about the future. Good enough to know when to move. Delivered fast enough that you still have options when the signal lands.
My friend Nate Chaffetz puts it simply: roughly right is really the only kind of right that matters.
He’s spot on. And “roughly right” is what the people I’ve worked with on this keep telling me they’ve been looking for. Two things are missing from most predictive intelligence, from most dashboards, from most strategy work: what action to take, and when to take it.
What “roughly right” looks like in practice
The methodology does four things.
It maps the causal chain. For any anticipated outcome, it identifies the sequential preconditions that must be true before that outcome can materialize. Each link is observable. Each has leading indicators you can track before the consensus catches up.
It estimates time delays. Ranges backed by historical analogues and current velocity data. Named assumptions you can update as evidence arrives. Not vibes. Not “3 to 5 years.”
It tests from multiple angles. A causal chain is a hypothesis. Hypotheses need to hold up when you push on them. Every projection gets checked against macro and micro perspectives, systems thinking, cross-domain convergence, and counter-indicators that might reshape or invalidate the chain. Identifying the chain and setting thresholds is an interesting way to frame a problem. The multi-perspective testing is what makes the answer worth having.
It produces decisions, not reports. Every projection generates specific threshold-triggered decision points. When indicator X crosses level Y, that’s the signal to execute response Z. And the cost of waiting is mapped explicitly: what each option costs now versus what it costs if you delay six months.
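To make the “indicator X crosses level Y, execute response Z” idea concrete, here is a minimal sketch of a threshold-trigger table in Python. The indicator names, levels, and responses are hypothetical placeholders, not Hari’s actual thresholds.

```python
# A minimal sketch of threshold-triggered decision points.
# Indicator names, levels, and responses below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Trigger:
    indicator: str      # the indicator we watch (X)
    threshold: float    # the level that matters (Y)
    direction: str      # "below" or "above"
    response: str       # the response to execute when tripped (Z)

TRIGGERS = [
    Trigger("jolts_openings_millions", 7.0, "below", "begin hiring-plan review"),
    Trigger("challenger_layoffs_monthly", 100_000, "above", "accelerate reskilling budget"),
]

def tripped(trigger: Trigger, reading: float) -> bool:
    """True when the latest reading has crossed the trigger's level."""
    if trigger.direction == "below":
        return reading < trigger.threshold
    return reading > trigger.threshold

def check(readings: dict) -> list[str]:
    """Map the latest readings to the responses whose thresholds tripped."""
    return [t.response for t in TRIGGERS
            if t.indicator in readings and tripped(t, readings[t.indicator])]
```

The point of the structure is that the decision is pre-committed: when the reading crosses the line, the response is already named, so the debate happens before the pressure arrives, not during it.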
I’ve validated this across AI employment displacement, US institutional stability, sector-level market analysis, and a growing number of company-specific engagements. The logic is the same underneath. Outcomes arrive through sequences of preconditions. Track the preconditions, and you see the outcome forming before it becomes consensus.
Now. Remember the thing I said I’d get to? The thing the VCs and founders and the PE firms kept responding to?
Fifteen minutes with the SaaSpocalypse
It was speed.
In the first week of February 2026, roughly $285 billion in software market value disappeared in a single session. Over the following weeks, the damage widened to nearly $2 trillion. Atlassian fell 35% after reporting the first enterprise seat-count decline in its history. Salesforce dropped 28% even as revenue kept growing. The iShares Software ETF fell 22% year-to-date. The steepest software selloff since the 2022 rate hike cycle.
The trigger: a new generation of agentic AI platforms showed that autonomous agents could perform the same end-to-end workflows that used to require teams of humans running SaaS tools. If one AI agent does the work of five employees, the need for five software licenses vanishes with them. The per-seat pricing model that built a $600 billion industry is suddenly exposed.
A little balance here: Klarna cut a large share of its workforce in favor of AI and moved too soon, producing the “regret rehire” oscillation I mentioned earlier. I believe they’ll make cuts again. Jack Dorsey at Block (parent of Square and Cash App) is leading this charge, laying off nearly half of a workforce that generates $2 million in profit per employee.
Every VC with SaaS in their portfolio, every PE firm that bought a software company at 12x revenue, every SaaS founder watching the crater form, they’re all inside the same question. When does this stabilize? Who survives?
The causal chain took about fifteen minutes to map. Here’s what it looks like.
Link 1 (already happened): AI capability reaches functional parity with specific SaaS task categories. Coding assistants generating 40-60% of routine code. Agents handling CRM entries, support tickets, document processing at error rates below human benchmarks.
Link 2 (the February catalyst): Agentic platforms demonstrate end-to-end workflow replacement. The market reaction was the recognition that capability had crossed a threshold. “AI can assist with tasks” became “AI can replace workflows.” That’s a different sentence. A very expensive one.
Link 3 (watch this now): Enterprise procurement shifts from per-seat SaaS to AI-native alternatives. The leading indicators: CFO spending surveys, enterprise AI procurement data, and the net-new customer acquisition rate at incumbent SaaS companies. Salesforce dropped 28% because new customer acquisition slowed. The market is pricing in what procurement data will confirm in two quarters.
Link 4 (next up): SaaS companies that can’t embed AI see churn accelerate. The threshold: net revenue retention drops below 100% for three or more consecutive quarters across a basket of mid-cap SaaS names. Atlassian’s seat-count decline is the first structural signal.
Link 5 (6-12 months out): Consolidation wave. Acqui-hires, fire sales, shutdowns. M&A deal flow in SaaS exceeding 2x the trailing three-year average.
Link 6 (12-24 months out): New equilibrium. Survivors are AI-embedded. Pricing shifts from per-seat to outcome-based or consumption-based models. The sector re-rates at new multiples.
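The six links above can be written down as a sequence with a status on each one. A sketch, with the link text paraphrasing this piece and the statuses reflecting its own assessment rather than live data:

```python
# The SaaS causal chain as a sequence of links, each with a status.
# Link text paraphrases the article; statuses are the article's own
# assessment, not live data.

CHAIN = [
    ("AI reaches task parity with SaaS categories", "activated"),
    ("Agentic platforms demo end-to-end workflow replacement", "activated"),
    ("Enterprise procurement shifts to AI-native alternatives", "watching"),
    ("Churn accelerates at non-AI SaaS (NRR < 100% for 3+ quarters)", "pending"),
    ("Consolidation wave (M&A > 2x trailing 3-yr average)", "pending"),
    ("New equilibrium: AI-embedded survivors, outcome-based pricing", "pending"),
]

def frontier(chain):
    """The first link not yet activated is where attention goes:
    everything downstream depends on it."""
    for i, (link, status) in enumerate(chain, start=1):
        if status != "activated":
            return i, link
    return None
```

Running `frontier(CHAIN)` points at Link 3, the procurement shift, which is exactly where the leading indicators listed above concentrate.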
One more question the framework forces you to answer: is this a loop that accelerates until something breaks, or one that oscillates toward a new settling point?
The evidence says it oscillates. Some SaaS categories are far more exposed than others. Infrastructure, data management, and security have high switching costs and less AI substitutability. CRM, project management, and support tools are directly in the path. Companies will overcorrect on the sell side, some SaaS names will rebound, and the settling point probably lands 30-45% below 2024 highs in aggregate, distributed very unevenly.
That took fifteen minutes. It’s a structural map of a $2 trillion market event. A full engagement adds the threshold dashboard, the decision-trigger table, and the strategic response map with time-dependent costing. But even at this level, it’s more structural clarity than most SaaS investors had on February 4th.
Who this is for
Four groups get massive benefit from this “roughly right” model:
Venture capitalists who want another lens on the investments they’re considering. The framework asks whether the market the company is entering is actually going where the pitch deck says. When the chain shows the thesis depends on a precondition that hasn’t activated, that changes how you size the bet.
Private equity firms asking whether their portfolio companies can hit their targets within the standard holding period. A causal chain showing the market shifting midway through a five-year hold is the difference between an exit and a write-down.
Company founders who need an ongoing stress test for their assumptions. Your business plan is a hypothesis about the future. The framework tracks whether the structural conditions it depends on are still trending in your direction, and how much lead time you have to adapt if they aren’t. “Lead time” often means “runway.”
Non-profits and mission-driven organizations working in rapidly shifting environments. Education, public health, community development, social services. These fields are being reshaped by the same forces hitting the private sector, and the organizations doing the work rarely have access to the tools that funded companies use for environmental scanning.
What this is (and what it isn’t)
This is not a dashboard tool or a connector to your data warehouse. Not a BI platform.
The traditional route to that kind of intelligence involves weeks of setup, teams of analysts, data connectors chasing sources, data that arrives on a delay, and models that are outdated by the time they reach the board. There’s a place for that infrastructure. That’s a different offering and a different problem.
What we’re doing is a rapidly rolling check against leading indicators. The signals that precede the outcomes everyone else waits to confirm. Because the methodology doesn’t depend on heavy data architecture, the turnaround is measured in hours. A full briefing and threshold dashboard for a specific question can be produced in a single working session—even half an hour before a board meeting or investor pitch. No procurement. No multi-month implementation. No six-figure contract just to get to the starting line.
The methodology consolidates three decades of experience, pattern recognition, and cross-domain reading into a structured process that produces consistent, testable, updatable results. It’s not a flashy AI product. It’s a way of thinking that finally has a formal structure around it.
And it works best as a recurring signal check. Monthly or quarterly, depending on how fast your domain is moving. Each refresh shows what changed, whether any thresholds shifted, and whether decision windows opened or closed. The more data you provide, the better the signal.
This is how “roughly right” builds over time. Not a single prediction, but systematic updates that keep you positioned correctly as the structure evolves.
Meet Hari
I’ve been building this for nine months. Three rounds of stress testing, a public prediction that tracked, and a growing set of engagements that have sharpened it considerably. Time to give it a name.
Hari.
Two references, both intentional.
In Asimov’s Foundation series, Hari Seldon created psychohistory, the science of predicting the broad sweep of civilizational change without predicting individual events. He couldn’t tell you which crisis would come. He could tell you a crisis was structurally inevitable, and roughly when. That’s the aspiration. Structural clarity, not omniscience.
Hari is also a Sanskrit name and epithet for Vishnu. The one who takes away darkness and illusion. The preserver. A remover of fog.
Both namesakes share the same conviction. The broad pattern is knowable even when the specifics aren’t. Structural forces produce tendencies you can track. And seeing clearly, roughly right, is enough to change your position before the wave arrives.
Intellectual honesty is a big deal to me. Every projection Hari produces gets logged, tracked, and publicly calibrated. What I projected, what actually happened, what I learned. The track record compounds over time. And it’s the track record, not the pitch, that earns trust.
If you’re making decisions in the fog and you’d like a structural map of what’s forming around you, I’d love to show you what Hari sees in your world.
Note: I used the word oscillate in this piece several times, and it’s helpful to know what it means. In short time frames there are effects, responses to effects, responses to those responses, etc. When there’s a bias toward AI being overhyped, for example, the “regret rehires” (bringing people back because AI wasn’t yet as capable as the company thought) can seem like a confirming signal that AI didn’t live up to the promise and won’t be a thing the way many thought it would. The sequence of events looks like: layoffs driven by AI → dips in performance and customer satisfaction → lost revenue → rehires to get performance back to expectations. Projections suggest that there are a few more links in that sequence and that capabilities improve, companies get smarter about workflows, and cuts happen again a few months later. Today’s news can seem definitive, but a lot of it is merely oscillation in a trendline that continues in a direction.
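A toy illustration of “oscillation in a trendline,” with entirely made-up numbers: a damped wobble (cuts, regret rehires, smarter cuts) riding on a steady downward trend. Over short windows the wobble can read as a reversal, even though the underlying trend never changes direction.

```python
# Toy model: a damped cosine wobble on top of a steady decline.
# All coefficients are illustrative, not fitted to any real data.

import math

def headcount_index(month: int) -> float:
    trend = 100 - 1.5 * month                              # steady decline
    wobble = 8 * math.exp(-0.08 * month) * math.cos(month / 2)  # fading bounces
    return trend + wobble
```

In this toy series, some month-over-month moves are positive (the “regret rehire” bounces), but the year-over-year change stays firmly negative, which is the distinction the note above is drawing.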