I build AI agents for my real portfolio. Here's what actually works.
A personal lab journal of AI workflows, prompts, and agents tested against real portfolio decisions.
Why I now split every serious AI question into stages before I trust the answer
How I use a four-stage prompting method — Role, Filter, Risk, Verdict — to get useful AI analysis instead of confident-sounding noise.
I asked AI to find the downside in a position I was already long. It found things I'd missed.
An adversarial prompt against my own portfolio. The model surfaced three risks I'd glossed over, and one I'd actively been wrong about.
Lab Report: I gave the same investment question to four AI tools. The results were instructive.
Not a ranking. A controlled test on a real question, with notes on what each tool actually produced and why most of it was useless.
Prompt Stack: The prompts I actually use when reviewing a position
Not a list of clever prompts to copy. A walkthrough of the actual sequence I run, and why the order matters more than the wording.
Stop Asking Lazy Questions.
The Prompt Stack is how I turn a general model into a narrower analyst. Instead of asking for instant brilliance, I break the task into stages, force evidence before opinion, and make the final output earn its keep.
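The staging idea can be sketched in code. This is a minimal illustration, not the author's actual prompts: the four stage names come from the method described above (Role, Filter, Risk, Verdict), but the template wording and the `ask` callback are hypothetical placeholders for whatever model call you use.

```python
# Hypothetical sketch of a staged "Prompt Stack" runner.
# Each stage's answer is fed into the next, so order matters:
# the model must produce evidence before it is allowed an opinion.

STAGES = [
    ("Role", "Act as a sceptical analyst reviewing this position: {input}"),
    ("Filter", "List only claims the evidence above supports:\n{input}"),
    ("Risk", "Given those claims, name the biggest downside risks:\n{input}"),
    ("Verdict", "Weigh the risks; give a one-line verdict with confidence:\n{input}"),
]

def run_stack(question: str, ask) -> list[tuple[str, str]]:
    """Run each stage in order, chaining the previous answer forward.

    `ask(prompt) -> str` is any model call; here it is a parameter so
    the pipeline itself stays independent of a particular tool.
    """
    transcript = []
    current = question
    for name, template in STAGES:
        prompt = template.format(input=current)
        current = ask(prompt)
        transcript.append((name, current))
    return transcript

# Usage with a stub model, just to show the chaining:
echo = lambda prompt: f"[answer to: {prompt[:40]}...]"
for stage, answer in run_stack("Is this position overvalued?", echo):
    print(stage, "->", answer)
```

The point of the structure is that each stage constrains the next: the Verdict stage only ever sees risk claims that survived the evidence Filter.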
Get the actual working material.
Prompt Stack templates, decision checklists, and notes from live experiments. Join the list for new lab reports and Lab Book access.
Ben Dixon documents AI experiments against his own portfolio. Real money, human analysis, sceptical use. Read more →