The Modular Analyst

Most Equity “Research” Is Table-Stakes Work. Edge Is Choosing What to Do Next.

Alistair Smallwood

Head of Applied AI

Insight


Intro

Until about 12 months ago, not once in my career had I stopped to segment or codify what fundamental equity investing really entails, apart from the odd times I tried to explain what my day-to-day looked like to my mum.

You just do the job. You get to know a few companies, then inherit a coverage universe, you follow them, you build instincts, you pick up a thousand small habits from the people around you, and you try to stay on top of the tape without losing your mind.

Thinking more seriously about AI has forced me to be more explicit about what the work actually is. Not because AI is going to replace it, but because it exposes how messy and modular the job really is and how much of it stays implicit until you try to describe it properly.

Fundamental equity investing isn’t a single workflow. It’s a library of modules at different levels of granularity. Some are hygiene modules: reading earnings calls, staying current on strategy, understanding what a company actually sells. Others are higher-judgment modules: framing the debate, deciding what matters, figuring out what’s already priced, and identifying what changes the path.

Doing the hygiene work well mostly gets you in line with the market. Edge comes from choosing and sequencing the few modules that narrow the outcome distribution the most in a specific situation, under time pressure, and then translating that into a view on what’s priced and how you size the position.

Most of the job is parity work (and why that’s fine)

A slightly uncomfortable truth is that a lot of what we call “research” is really just staying in the game: reading the earnings call properly, understanding what the company actually sells, keeping track of how management frames priorities quarter to quarter, updating a mental model of what drives revenue and what drives margins, knowing what peers just said, and generally avoiding being surprised by something everyone else already knows.

That work matters. It’s how you avoid being obviously wrong. It’s how you avoid falling behind consensus for stupid reasons. And it’s how you earn the right to have a view in the first place.

But doing all of that well doesn’t automatically create edge because most of it is available to anyone willing to put in the hours. The prudent assumption is that the market has put in the hours. If you and I both read the same transcript and both understand the product, we can still end up with the same base case the market is already pricing.

Where it gets interesting is when something changes and you have to decide what work is worth doing next.

The real skill is choosing what to do next

Every day you’re confronted with more potential work than you could ever finish: new information, new noise, new opinions, new price moves, ten different things that might matter, all arriving faster than you can properly digest them.

In that environment, the key skill is not raw intelligence or even raw effort. It’s knowing which module to run next in order to narrow your view on the range of outcomes as quickly as possible. And just as importantly, it’s knowing when not to run a module at all because it won’t change anything that matters.

Sometimes that means going back to first principles, building a simple model, and being explicit about your views and why. Sometimes it means checking whether the market reaction is telling you something about expectations. Sometimes it means a quick peer read-across to work out if this is idiosyncratic or sector-wide. And sometimes it means doing nothing: taking a note and waiting for a second piece of evidence.

The people who look like they have edge are often just better at this selection problem. They waste less time on work that makes them feel busy, and they spend more time on work that genuinely changes their understanding of what is likely to happen and what the market already believes.

The rest of this essay is an attempt to make that selection problem more explicit, and to explain why AI is unusually well suited to taking some of the parity modules off your plate, keeping you up to date across your universe, and leaving you with more time for the parts that are actually judgment.

A mistake I made that taught me what “module selection” really means

One of the best examples of this going wrong for me was shorting Raspberry Pi last year. I did the work you’d expect a diligent analyst to do: I understood the company, the product, the technology, and the market structure as best I could. I had high conviction on what I thought was the only question that mattered post-IPO: would they hit their first-year numbers?

But I missed the bigger point. We were in an AI stock bull market, and Raspberry Pi was plugged into it. Not because the company had suddenly become an AI powerhouse, but because the market was in a mood to pull anything adjacent to AI into the upside scenario set. That changes your return distribution whether you like it or not. In my head, the upside case was something like +30%, but with a tiny probability attached—maybe 5%, because it felt like a stretch. The downside looked like -40% with much higher probability. And the base case was a high-conviction -15% move on missing their first-year numbers.

The mistake was that my probability on the upside case was wildly wrong. In the regime we were in, the “AI narrative bid” wasn’t a 5% tail. It was a real scenario that could easily have been 30%+ probability. That shifts the probability-weighted outcome materially, even if my fundamental view doesn’t change at all.
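To make the shift concrete, here is the scenario arithmetic as a minimal sketch. The +30%/-15%/-40% returns and the 5% versus 30%+ upside weights come from the essay; the split between the base and downside probabilities is a hypothetical fill-in, since I didn’t give exact figures:

```python
# Probability-weighted expected return for the Raspberry Pi short.
# Returns are the stock's moves; a short earns the negative of the stock's move.
# The base/downside probability split below is an illustrative assumption.

def expected_return(scenarios):
    """scenarios: list of (probability, stock_return) pairs summing to 1."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * r for p, r in scenarios)

# Original mental model: the AI-narrative upside treated as a 5% tail.
original = [(0.05, 0.30), (0.60, -0.15), (0.35, -0.40)]

# Regime-aware model: the same fundamental view, but the upside
# scenario reweighted to 30% to reflect the AI bull-market overlay.
regime_aware = [(0.30, 0.30), (0.45, -0.15), (0.25, -0.40)]

for label, s in [("original", original), ("regime-aware", regime_aware)]:
    ev_stock = expected_return(s)
    print(f"{label}: stock EV {ev_stock:+.1%}, short EV {-ev_stock:+.1%}")
```

The fundamental view is identical in both cases; only the upside weight moves. Yet the short’s expected return falls from roughly +21% to under +8%, before sizing, borrow cost, or the pain of the path.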

I was right on the company and wrong on the market overlay (which is just wrong). And because I’d spent time on the wrong module, I tried to fix the trade by doing more of the same work: more calls, more industrial channel checks, more time on whether Raspberry Pi would genuinely penetrate industrial IoT. That might be the right module in other situations. It wasn’t the module that mattered here.

The module I should have run was the zoomed-out thematic one: what regime are we in, how is the market pricing AI adjacency, and what does that imply for the probability attached to the upside scenario? If I’d done that cleanly, I would have sized the short very differently—or avoided it altogether. Not because the bear case was wrong, but because the distribution had changed.

The lesson: edge isn’t just knowing more about the company. It’s choosing the right work at the right time. And sometimes the right work isn’t another piece of fundamental diligence—it’s stepping back and asking what kind of market you’re actually in, and whether your scenario weights reflect that reality.

So where does AI actually help?

If you buy the framing so far, AI’s role becomes clearer. It’s not going to do the job for you. It’s not a magic insight machine. The value is more boring than that, and more useful. AI makes the maintenance work faster. It helps you stay oriented across more names. It catches the obvious points you might have missed because you were busy elsewhere. That leaves you with more time for the judgment modules: market overlay, scenario weighting, deciding what’s priced.

The right framing isn’t “AI as analyst.” It’s “AI as accelerant for the stuff that shouldn’t require your full attention.” The goal isn’t to outsource thinking. It’s to buy back time for the thinking that matters.

In practice, this means differentiation shifts up a level. In the old world, edge often came from being exceptionally diligent, the person who could grind through more material than everyone else without dropping the thread. That still matters. But when baseline work becomes easier, edge moves toward being a better thinker: someone who can frame the debate, identify where variant perception sits, and be honest about what’s knowable versus what’s just narrative.

Closing thoughts

The main point is simple: fundamental investing isn’t a single workflow. It’s a library of modules, and most of them are the price of admission rather than a source of edge. Edge comes from orchestration, choosing which modules to run, sequencing them well, and weighting what you learn against what the market already believes.

AI doesn’t change that. If anything, it makes it more obvious. When baseline work becomes faster and cheaper, differentiation moves up a level, away from who can grind the hardest and toward who can think most clearly: the people who can frame the debate, identify where variant perception actually sits, and be honest about what’s knowable versus what’s just narrative.

This isn’t a story about working fewer hours. If anything, the game gets harder. As AI compresses the parity advantage, returns increasingly accrue to the smaller number of people who are genuinely good at the judgment layer. The grind advantage gets competed away; the thinking advantage remains and might even widen.

The real benefit is that you get to spend a higher share of your hours on the modules that actually change your understanding and your decisions, rather than on the mechanical work of searching, re-reading, and stitching context together. Which I’m sure you and your colleagues will be delighted to do less of.

I’ve deliberately left the prescriptive part of this underdeveloped. Partly because I’m still figuring it out myself, and partly because this Substack is an attempt to work through these questions in public: tracking what I’m trying, what’s working, and what isn’t. More to come.
