The Prompt Stack.
A four-stage method for using AI on investment decisions. It exists because asking a model "what do you think of this stock?" is the worst possible way to use it. The stack forces evidence before opinion and makes the final answer earn its keep.
Set the stance before the question.
Models default to being agreeable. Ask "what do you think of XYZ?" and you get a reasonable-sounding balanced summary that is, in practice, mildly bullish on almost anything. The first stage of the Prompt Stack is to set a stance the model has to argue from.
The stance I use most often is "cautious analyst, not a cheerleader." Sometimes I narrow it further — "value-oriented short-seller", "compliance officer reading this filing for the first time", "risk committee member who has to defend approving this trade." The role does most of the work the rest of the prompt is meant to do. Get it right and the model resists its own defaults.
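The stance stage can be sketched in code: the role goes in first, before any question, so the model has to argue from it. A minimal sketch — the stance wording comes from the text above, but the dictionary keys and the chat-style message shape are my own illustration, not a specific vendor's API:

```python
# Stance stage sketch: set the role before the question.
# Stance phrasings are from the article; keys and message format are illustrative.
STANCES = {
    "cautious": "You are a cautious analyst, not a cheerleader.",
    "short": "You are a value-oriented short-seller.",
    "compliance": "You are a compliance officer reading this filing for the first time.",
    "risk": "You are a risk committee member who has to defend approving this trade.",
}

def with_stance(stance_key: str, question: str) -> list[dict]:
    """Build a chat-style message list that sets the stance before the question."""
    return [
        {"role": "system", "content": STANCES[stance_key]},
        {"role": "user", "content": question},
    ]
```

The point of the structure is ordering: the stance is fixed before the question is ever asked, so "what do you think of XYZ?" never reaches the model unframed.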
Separate facts from filler.
Most AI output on a company looks like analysis but is actually narrative. The filter step makes the model split what it's saying into two columns: what's observable in the source material, and what's an inference, assumption, or stylistic flourish.
The instruction I use is plain: "List the observable facts you're drawing on. Then separately, list the inferences you're making from those facts. Mark anything that's neither — anything you're asserting without a basis — and remove it." It's the single highest-value step in the whole stack. Most of what gets removed is exactly the bit that sounded most confident.
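One way to make the split mechanical is to ask for labelled sections and parse them. A sketch, assuming the reply uses FACTS: and INFERENCES: headers with dash bullets — that output format is my assumption for illustration, not part of the method:

```python
# Filter stage sketch: request two labelled columns, then parse them apart.
FILTER_INSTRUCTION = (
    "Under a FACTS: header, list the observable facts you're drawing on. "
    "Then, under an INFERENCES: header, list the inferences you're making "
    "from those facts. Mark anything that's neither and remove it."
)

def split_facts_inferences(reply: str) -> tuple[list[str], list[str]]:
    """Split a reply into (facts, inferences), assuming dash bullets under each header."""
    facts: list[str] = []
    inferences: list[str] = []
    bucket = None
    for line in reply.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("FACTS"):
            bucket = facts
        elif stripped.upper().startswith("INFERENCES"):
            bucket = inferences
        elif stripped.startswith("-") and bucket is not None:
            bucket.append(stripped.lstrip("- ").strip())
    return facts, inferences
```

Anything the model can't place in either column is exactly the material the instruction tells it to remove.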
Make the downside explicit.
"What are the risks?" produces a checklist. That's not what this stage is for. This stage is for three specific outputs: what could go wrong on this timeframe, what would have to be true for the view to be invalidated, and what would I see in the next 90 days that would tell me I was wrong.
The third question is the most useful. A risk you can't observe in advance is just a worry; a risk with a tripwire is something you can act on. If the model can't give you observable disconfirming evidence, the model doesn't actually have a view — it's pattern-matching.
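The three questions can be issued as one fixed prompt, and the tripwire requirement checked mechanically. A sketch under my own framing — the prompt wording paraphrases the stage above, and the `tripwire` field name is hypothetical:

```python
# Downside stage sketch: three fixed questions, plus a check that every
# risk names something observable. Wording and field names are illustrative.
DOWNSIDE_PROMPT = (
    "Answer three things only:\n"
    "1. What could go wrong on this timeframe?\n"
    "2. What would have to be true for the view to be invalidated?\n"
    "3. What observable evidence in the next 90 days would show the view is wrong?\n"
    "For every risk, name a concrete tripwire that can be watched for."
)

def has_tripwire(risk: dict) -> bool:
    """A risk without an observable tripwire is a worry, not something you can act on."""
    return bool(risk.get("tripwire", "").strip())
```

Under this sketch, a risk like `{"risk": "margin compression", "tripwire": "gross margin below 40% at Q3"}` passes the check, while a bare `{"risk": "macro headwinds"}` does not — it's pattern-matching, not a view.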
One action. With a confidence level.
By this point the model has earned the right to a view. The verdict step is short on purpose: one practical action and a stated confidence level (low / medium / high) with one sentence of why. "Hold and reassess after Q3 results, medium confidence — the thesis depends on margin recovery that won't be visible before then." That's the shape.
Forcing a confidence label is what makes the stack honest. A model that has to commit to "low confidence" tends to be more useful than one that's been allowed to hide behind a balanced summary. And it gives you something to check against later: was the model right when it was confident, and wrong when it wasn't?
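Checking the model later only works if each verdict is captured in a fixed shape with the confidence label enforced. A minimal sketch — the three-level label is from the text; the record's field names are my assumption:

```python
# Verdict stage sketch: one action, one confidence label, one sentence of why,
# logged with a date so calibration can be checked later. Field names illustrative.
from dataclasses import dataclass, field
from datetime import date

ALLOWED_CONFIDENCE = {"low", "medium", "high"}

@dataclass
class Verdict:
    action: str
    confidence: str
    rationale: str
    logged: date = field(default_factory=date.today)

    def __post_init__(self) -> None:
        if self.confidence not in ALLOWED_CONFIDENCE:
            raise ValueError(f"confidence must be one of {sorted(ALLOWED_CONFIDENCE)}")
```

The example verdict above would be logged as `Verdict("Hold and reassess after Q3 results", "medium", "the thesis depends on margin recovery that won't be visible before then")`; a hedge like "fairly confident" is rejected outright, which is the point.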
Discipline, not magic.
The Prompt Stack isn't a clever trick to get better answers out of a model. It's a way of refusing to take the first answer the model gives you. Every stage is a checkpoint that the model — and you — have to pass before getting to the next one.
Most of what the stack does is slow you down at the points where you would otherwise jump to a conclusion. The slowdown is the value.
Use it on a real position this week.
The Lab Book has the full Prompt Stack as copy-paste templates — ten variations for different decisions, plus the checklists I use to decide whether to act on what comes back.