What AI changes in fundamental investing (and what it doesn’t)
How to use it without fooling yourself

Alistair Smallwood
Head of Applied AI
Insight

AI doesn’t change the modular nature of investing. If anything, it makes it even harder to pretend the job is one linear “research process”.
What it does change is the economics of baseline work.
As retrieval, summarisation and cross-referencing become increasingly automated, the benchmark for what “good” looks like rises. The market absorbs productivity gains quickly, and a lot of that work will become fully commoditised.
I don’t think this becomes a story about working fewer hours. I think it becomes a story about spending more of your hours on the higher-value judgment modules: framing the debate, spotting regime change, being explicit about variant perception, deciding what’s priced, and weighting scenarios honestly. Less time on mechanical work like searching, re-reading and reconstructing your investment case for the eighth time because you forgot why you were long...
If the benchmark is rising, the obvious question is how to use AI without fooling yourself.
The biggest mistake I see is expecting LLMs to be magic, alpha-generating insight machines.
Because of the “AI will take our jobs” hysteria, people expect superhuman output from a single vague prompt. I dread to think how often an LLM gets one line of context and is asked for a full investment view on stock X. It will usually produce something eloquent and “well sourced”, and that fluency gives it far more weight than it deserves.
That expectation is the genesis of the problem. LLMs are extremely good at sounding right. When you look a level deeper and the output is irrelevant or plainly wrong, trust breaks down. Then you hear the phrase I’ve heard far too often: “AI just isn’t good enough for us yet.” In my view, that diagnosis is usually wrong.
Two points here:
First, this is really about how we treat AI. You’d never blindly accept a colleague’s (or your boss’s) output without asking what it’s based on. An LLM or agent is no different. Treat it like external research: often knowledgeable, regularly useful, never gospel. It still needs a sceptical filtering layer before it’s allowed anywhere near a decision.
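To make that first point concrete, here is a deliberately minimal sketch in Python. Every name in it is invented for illustration, not anyone’s production system: the idea is simply that the model must cite the excerpt behind each claim, and anything it can’t ground goes to a human before it touches a decision.

# A toy sketch of a "sceptical filtering layer". Every name here is
# hypothetical -- this is not a real product or library API. The point
# is the shape: model output never reaches a decision unverified.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str               # what the model asserted
    source: Optional[str]   # the excerpt it says supports the assertion

def sceptical_filter(claims: list[Claim], context: str) -> list[Claim]:
    """Keep claims whose cited excerpt actually appears in the material
    we gave the model; send everything else back to a human."""
    accepted = []
    for claim in claims:
        if claim.source and claim.source in context:
            accepted.append(claim)
        else:
            print(f"Needs human review (no grounding found): {claim.text}")
    return accepted

Substring matching is obviously crude; what matters is the order of operations: generate, verify, then decide.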
Second, an LLM is most useful when it has as much context as you do: years of notes, models, company docs, and a glimpse of your mental model (sadly it’s more complex than just giving it access to your OneNote). I can already hear you asking:
“Al, how do we actually give it all that?” (that’s Al, short for Alistair, not AI; an increasingly common mix-up)
There’s something exciting coming from Primer soon. If you want an early look, drop me a message.