To most of us "free market" types, the idiocy of central planning is salient when analyzing countries, economies, and policies. Our intuition understandably weakens as we move toward lower levels of aggregation, like the firm. This may be especially true if you've read a lot of what Ronald Coase has written. To paraphrase Tyler Cowen: "I reject the Coasian view that firms are islands of central planning in the middle of a vibrant, efficient market."
In 'The Nature of the Firm', Coase conceptualizes firms as arising when transaction costs make it more efficient to coordinate activities internally than through market mechanisms. Instead of negotiating contracts for every task, companies simply assign work through hierarchies. And is there a better word to evoke memories of the Soviet Union than "hierarchies"? Your friend at Google who averages two and a half working hours a day only reinforces this belief. After all, why wouldn't these pockets of planning churn out classics like "process over outcomes" or endless bureaucratic drag?
Now, Coase is technically correct, but there is something unhelpful, even counterproductive, about the framing. It's like when people say "evolution does X or Y": evolution doesn't do anything; it's the mechanism we deduced from observing which species survived and which went extinct. The Coasian view has a similar problem: it makes firms sound like conscious choices to replace markets with central planning, which we instinctively associate with "bad incentives" or "bad epistemics."
But the firm that is visibly alive is the one that delivered value more efficiently than its competitors. Firm survival in competitive markets is evidence of superior price discovery: the ability to figure out which resource allocations create the most value relative to what competitors are doing.
Think about a simple software business. You create some software in your free time and sell it. For you to make money, you have to believe you have some edge - you're solving a problem in a way the market isn't currently doing, or you're reaching customers others aren't reaching.
As you're proven right and start making money, by definition, the market gives you more resources to allocate. Now you want to scale. You face a choice: should you build your own sales team or use external channel partners to sell your software? You believe you are an incredible salesperson, you want control over brand and reputation, and you can really succeed at doing that. So you decide to spend your resources doing it yourself, even though you could do it cheaper "using the market." If you are successful and grow to ten times the size in five years, the market has validated not only your business but also your decision-making and your ability to understand your own edge. We can't attribute this success neatly to any one decision, but it is evidence that on the things that mattered, you got it right.
The larger a firm gets, the more the market is not just validating its products, but its decision-making heuristics and ability to think about its own abilities - its meta-cognition. Your firm isn't centrally planned - it's the market continuously validating your capacity for self-awareness about where you can create value.
The Recursive Problem
Entrepreneurship typically starts with an intuitive conviction about an opportunity. Most startup "ideas" are bundles of interconnected beliefs—beliefs about the problem, customer behavior, incentives, and execution. But these beliefs don't come pre-organized in a testable format. The entrepreneur's challenge is figuring out which of these beliefs to test, in what order, and how, given resource constraints.
This creates a recursive optimization problem. You need to make decisions about how to make decisions. Should you research extensively before testing or learn through rapid iteration? Should you test beliefs individually or as integrated systems? How much analysis is worth doing before you just start building?
The recursion emerges because optimizing your decision-making process is itself a resource allocation problem that requires systematic thinking.
Four Level Framework
Level 1: Belief Segmentation - How do you break down your idea into coherent, testable pieces? The segmentation itself determines what you can learn.
Level 2: Belief Prioritization - Which pieces matter most? This depends on dependencies, impact if wrong, and your current confidence levels.
Level 3: Testing Architecture - Do you test pieces individually or as an integrated system?
Level 4: Information Gathering vs. Acting - For each test, do you research first or just execute?
Dimensionalization
Now you can dimensionalize each of these further to understand what's right for your business. I recommend reading (and using in your LLM conversations) this piece by my friend
For each level, you're solving a different problem, so the dimensions that matter change:
Level 1: Belief Segmentation - You're trying to create testable, coherent units. The dimensions that matter are those that help you carve up your idea space effectively: testability (can this be isolated?), specificity (is this actionable?), and causal independence (does this interact with other beliefs?).
Level 2: Belief Prioritization - You're trying to allocate limited resources to highest-impact beliefs. Now the dimensions that matter are about impact and risk: foundational impact (how many decisions depend on this?), confidence gap (how uncertain are you?), and irreversibility (how expensive is being wrong?).
Level 3: Testing Architecture - You're trying to design efficient learning processes. The relevant dimensions shift to system properties: decomposability (can beliefs be tested separately?), resource constraints (what's your bottleneck?), and signal clarity (do you need clean attribution?).
Level 4: Information Gathering vs. Acting - You're trying to choose the optimal learning method. The dimensions become about information quality and timing: information asymmetry (research vs. action learning), competitive timing (does delay hurt?), and failure costs (can you afford to be wrong?).
As Jordan notes, good dimensionalization is about finding the axes that map to reality (fidelity), that you can actually influence (leverage), and that don't overwhelm your cognitive capacity (complexity). Each level of the decision hierarchy requires different dimensional priorities based on what you're optimizing for at that stage.
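To make the prioritization step concrete, here is a minimal sketch in Python of how Level 2 might be operationalized. Everything here is illustrative: the belief names, the scoring formula, and the 0–1 scales are assumptions I'm introducing, not anything prescribed by the framework itself.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    name: str
    foundational_impact: float  # 0-1: how many downstream decisions depend on this
    confidence_gap: float       # 0-1: how uncertain you currently are
    irreversibility: float      # 0-1: how expensive being wrong would be

def priority(b: Belief) -> float:
    # Illustrative weighting: uncertain, foundational beliefs that are
    # costly to reverse get tested first. Any monotone combination of
    # the three dimensions would express the same idea.
    return b.foundational_impact * b.confidence_gap * (1 + b.irreversibility)

# Hypothetical belief bundle for the software business described above.
beliefs = [
    Belief("customers will pay for this", 0.9, 0.8, 0.3),
    Belief("channel partners can sell it", 0.4, 0.6, 0.2),
    Belief("we can build it in six months", 0.7, 0.3, 0.8),
]

for b in sorted(beliefs, key=priority, reverse=True):
    print(f"{b.name}: {priority(b):.2f}")
```

The point is not the particular formula but that making the dimensions explicit forces you to say why one belief outranks another, instead of letting a legible heuristic decide for you.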
Come on, Is This Really How People Make Business Decisions?
Isn't the solution just using intuition? Yes, and you should start there. But you need to know where to apply your intuition, where to trust it, and where it fits within this more complete model.
The issue is that we increasingly combine actual intuition with legible heuristics, and these heuristics mostly come from places like Y Combinator or other sources that mean well but cannot possibly specify all the relevant assumptions and caveats that underpin their advice.
Take the common advice to "iterate fast and test one assumption at a time." This heuristic makes sense when you've segmented your beliefs into decomposable pieces, when you have low dependencies that favor sequential testing, and when high signal clarity on one piece means more than a noisy signal on the whole system. But it doesn’t work when your beliefs are highly interdependent. For example, supply and demand on a marketplace are not independent. Sometimes you need to test the whole system to understand any part of it. The framework helps you identify which features of your situation make standard advice more or less applicable, rather than following heuristics blindly simply because they're legible and widely repeated.
But you might say - the recursive optimization problem is genuinely complex, and most people don't have the bandwidth to systematically dimensionalize their belief systems while also trying to build a company. I would have been more inclined to agree…before LLMs.
Before LLMs, the cost of applying rigorous analytical frameworks to business decisions was astronomical. You'd need either exceptional personal discipline or expensive consultants. Most founders rightfully chose to rely on intuition and standard heuristics, because the alternative was simply too costly.
LLMs collapse the cost of systematic reasoning. You can now literally have a conversation with a system that can help you dimensionalize your belief system, identify dependencies between your assumptions, and audit whether standard startup advice applies to your specific situation. The cognitive overhead that made this framework theoretical for most people has been dramatically reduced.