Longriver Gathering 2025

Last weekend, thirty-one friends joined me in Shenzhen for the Longriver Gathering, an event ‘by investors, for investors’ aimed at building community, sharing ideas and sharpening our investment practices. Run at cost, the true price of admission is an open mind and a willingness to reciprocate. Everyone presents and everyone engages.

This year was the largest Gathering yet, with friends from across China, Europe, the US and Africa, representing hobbyist investors, professionals and allocators (link).

Whereas last year felt like an existential conversation on China, the topics this year covered a broad range of companies and craft: base rates vs. moats; the importance of temperament; activism in Japan; value and value traps; regulatory-driven consolidation; investing with founders; efficiency vs. resilience; and whether long-term buy and hold investing is a realistic strategy in China.

Specific companies discussed included Appier; Argo Graphics; Bloom Energy; Centre Testing International; Datadog; Eclat; Evolution Gaming; Golden Throat; JBM Healthcare; Kweichow Moutai; M&A Research Institute; Paxman; Pinduoduo; Pop Mart; Puregold Price Club; Ripple; SEA Limited; Shenzhou International; and the St. Joe Company.

I was part of a panel on AI, where I introduced Richard Sutton’s 'The Bitter Lesson' (link) and explained that it will be challenging to fully leverage AI in fundamental investing because we cannot build feedback loops with clearly defined ‘good’ outcomes. You can see my slides and read my script below. It was a stretch, and I hope I did it justice!

The Longriver Gathering has become an annual tradition for investors in Asia and friends interested in our region. Please drop me a line if you’d like to learn more. Although the event has a limited capacity, I am always eager to include new friends and faces.


This presentation is inspired by an article from Ethan Mollick (link).

AI systems are built from algorithms trained on data and refined through iteration.

At their core, they define an objective, build a model, and improve that model through repeated cycles of training.
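That cycle can be sketched in a few lines of Python. The toy data and one-parameter model below are my own invention for illustration; real systems use far richer models, but the shape of the loop is the same: define an objective, measure it, and improve the model a little each cycle.

```python
# Toy (x, y) pairs, roughly y = 2x -- hypothetical data for illustration.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def loss(w):
    """Objective: mean squared error of the model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    """Gradient of the objective with respect to the parameter w."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0  # initial model
for step in range(200):      # repeated cycles of training
    w -= 0.05 * grad(w)      # nudge the model to reduce the objective

print(round(w, 2))  # ends close to 2, the slope implied by the data
```

The point is not the arithmetic but the structure: a clearly defined objective lets each iteration know whether it made things better or worse.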

In 2019, computer scientist Richard Sutton made a trenchant observation: models built on human domain knowledge often start strong but quickly plateau.

Again and again, the big breakthroughs came from general-purpose methods that scale with more data and more compute.

Sutton called this ‘the Bitter Lesson’, bitter because it implied that human knowledge - the expertise we so desperately strive to accumulate, on which we build our identity and our pride - is valuable but limited.

Human expertise doesn’t keep compounding the way computation does.

You could say that we’ve reached a local maximum, bound by our heuristics and priors.

Generalisable methods, without our baggage, can scale and climb further.

A bitter lesson indeed.

You can see the Bitter Lesson in action in systems like AlphaGo Zero, which learned Go through self-play without human instruction.

Or in large language models, which predict the next token of text after training on the internet’s corpus of knowledge.

The pattern generalises: wherever you can clearly define an objective and present vast data, scaled compute will deliver superior performance.

Think of business functions with clearly defined objectives like insurance pricing, customer service, medical diagnosis, or targeted advertising.

These tasks can now be performed at a level of accuracy, reliability and cost no human could match.

And as compute gets cheaper and more powerful, we will keep getting better and better results.

But there’s a little hiccough.

The Bitter Lesson doesn’t apply so well to business organisations.

Just think about how businesses actually operate.

An influential paper from the 1970s called it “the Garbage Can” model: you pour all the elements of decision-making into a can, stir them up and wait to be surprised by the outcome.

Decisions are made based on who’s in the room or speaks loudest.

Resources go underutilised or forgotten.

People forget why they’re doing what they’re doing but still do it anyway.

Companies are organised anarchies.

And from the perspective of AI, the biggest problem is that most businesses don’t have clear objectives.

So how can their ‘algorithm’ iterate and improve?

Let’s take that insight and turn it on ourselves.

Does the organised anarchy I just described sound like fundamental investing?

It should.

No matter how we dress it up with numbers, formulae, processes and buzzwords, this is an artisanal craft.

And it has to be because it is a game played with imperfect information and long, ambiguous feedback loops.

Contrast that to passive investing, which has a very clear objective.

It has a clear process.

It is efficient and low cost.

And it beats most active managers.

So how do we reconcile AI’s promise with the reality of what we do as investors?

I’ve thought about this and I see three paths.

The future is already here but it’s not evenly distributed.

Path 1: we use AI to stir the garbage can better.

Path 2: we train AI to do our jobs.

Path 3: we use AI to tighten our feedback loop.

The next speaker is going to talk about the first option, so I’ll leave that to him.

The difficulty with the second option is that it is still so hard to define our objective.

Make money?

Maximise my “edge”?

Maximise risk-adjusted returns within all these constraints?

It gets complex very quickly and that makes it harder for an algorithm to learn.

The third option is what I call ‘Superforecasters at Scale’, because AI makes it cheap and easy to apply the methods Philip Tetlock proposed in his book Superforecasting.

You take a problem and break it down into better-defined constituent parts.

You make precise forecasts and record them.

You update regularly with new information.

You learn and you try to do better next time.

The key is to make this exercise precise and explicit, rather than imprecise and implicit.
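The steps above can be sketched as a simple forecast log. The questions, probabilities and field names here are hypothetical illustrations of the Tetlock-style practice, not anything from his book: record explicit probabilities, update them as news arrives, then score the resolved forecasts with the Brier score (lower is better).

```python
def brier(prob, outcome):
    """Brier score for one binary forecast: (p - outcome)^2. Lower is better."""
    return (prob - outcome) ** 2

# Break a vague thesis into better-defined constituent questions.
log = [
    {"question": "Revenue grows >10% this FY", "forecasts": [0.6]},
    {"question": "New plant opens on schedule", "forecasts": [0.8]},
]

# Update regularly with new information.
log[0]["forecasts"].append(0.7)   # revised up after a strong quarter
log[1]["forecasts"].append(0.55)  # revised down after a construction delay

# When outcomes resolve, score the latest forecast and learn from the gap.
outcomes = [1, 0]  # question 0 came true, question 1 did not
scores = [brier(entry["forecasts"][-1], o) for entry, o in zip(log, outcomes)]
print([round(s, 3) for s in scores])
```

Keeping the full forecast history, not just the final number, is what makes the exercise explicit: you can go back and see exactly where your probabilities drifted and why.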

To be honest, I’m still thinking this through, and it may not be the best or even the only answer.

But I’m thinking hard because I don’t want to rest on my laurels and get stuck at a local maximum.

Now I’ll close with a more human thought.

The Bitter Lesson taught us that general models backed by data and compute beat those built around static domain knowledge.

Our instinct is to cling to what we know, but adaptability is the meta-edge.

You need to be actively open-minded!

Be willing to discard old heuristics the moment a better method emerges, even if it stings your pride.

I want to offer these ideas on how to become actively open-minded.

The most provocative might be to surround yourself with young people.

As anyone with children knows, they don’t have our priors or our baggage.

They will pick up that pot and bang it like a drum.

They will draw on the walls.

They will keep asking “Why? Why? Why?” until you’re blue in the face.

And they will also be the first to discover a new and better way of doing things.

So you see this picture and you might think how lucky Rob was to meet Charlie.

Without wanting to stoke his ego too much, I would actually argue that Charlie was very lucky to meet Rob.

Why else would a 99-year-old man go through the trouble and discomfort of entertaining strangers half his age if not for the hope that one of them could change his mind?

That attitude would make the lesson not so bitter after all.

Thank you!

Longriver, Investing
Graham Rhodes