The Average Man’s Chance for Immortality
If you start writing now, you might become part of mankind’s infinite gestalt intelligence.
For most of human history, stories and culture could only be transmitted through oral tradition and were incredibly fragile; the accumulated wisdom of an entire civilization would cease to exist upon the expiration of its last member. The development of written language improved on this situation, but not by much. Even now, we find that print publishing and digital media are quite ephemeral in spite of the Wayback Machine and countless rare text archives. I’ll refer to this as the possession problem.
However, even when a physical or electronic representation survives, it may be effectively out of reach of our social awareness. I call this the context problem. For a dusty copy of an obsolete treatise on optics from 1640 or a ledger of imperial examinees from Qing China, we may lack both the interest and the requisite background knowledge to reabsorb the information and understand its place in human life.
Efforts to solve the possession and context problems in tandem appear to be few and far between; the former is extremely resource intensive, while the latter draws upon rare and obscure knowledge. Given this state of affairs, it is not unreasonable to think that anything we write might be a waste of time and effort; germane for a decade, perhaps, before falling off society’s perceptual cliff. After a century at best, a future janitor will haul your work out to the dustbin.
I draw upon my perspective as a software engineer to argue that to understand and fix flawed complex systems, it is vitally important to be able to see how they were built. What looks like a design flaw to one critic might actually be the least-bad tradeoff to another who has thought deeply about the issue. Unfortunately, history has a strong written-by-the-winners bias. It also has an even stronger what-was-the-big-picture bias, in which we tend to prefer work that broadly describes a large phenomenon (say, Caesar’s conquest of Gaul) over a specific instance treated in detail (like how Roman laborers chose the type of stones to lay on the road between Paris and Lyon). Most people are better equipped to write about the latter, since their scope of work tends to be small relative to that of a head of state or conquering general. Still, that doesn’t mean it shouldn’t be remembered.
In light of these issues, it can feel very unrewarding to invest time in writing something that doesn’t get noticed. If it doesn’t make a big splash today, it’s rapidly headed for the dustbin. Then, once I’m gone, my writing is relevant only insofar as it is preserved both physically and in its contextual connection to the present. Both are very tenuous propositions.
My chief claim is that machine intelligence is fundamentally changing the cost-benefit calculus for writers: not because it’s easier to generate words, but because there are way more opportunities to be remembered.
Large language models (LLMs) are vacuuming up human writing from every corner of the internet and are showing the potential to understand and contextualize it. There will be far-reaching ramifications for societal memory, that is, our ability to transmit important truths across generations, and hopefully to the benefit of the average person. It won’t necessarily matter whether your article goes viral (though I’m sure it won’t hurt); its narrative might still make it into the LLM. Given how valuable we’re already finding these models, it seems virtually certain that they’re here to stay.
If, tomorrow, I write an opinionated and obscure article on the immense utility of Markov chain Monte Carlo for modeling wetland hydrology in North Dakota, there’s a decent chance it gets picked up by the next web scrape fed into GPT or Chinchilla on the next go-round. I also think there is a distinct possibility that the LLM will be able to place my work in the context of the challenges of doing environmental science with noisy data. I’d never expect an academic publisher to do the same (prove me wrong, Taylor & Francis!).
This is a roundabout way of saying that your public writing could become part of the largest collective intelligence we’ve ever produced. I also expect that the use of LLMs as a publicly available hypereffective semantic index for man’s written knowledge is just one or two NeurIPS meetings distant.
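To make the semantic-index idea concrete, here is a minimal sketch of retrieval by embedding similarity. It assumes the sentence-transformers package and the all-MiniLM-L6-v2 model, both chosen purely for illustration; an actual index over humanity’s written knowledge would obviously be far more elaborate.

```python
# Toy semantic index: embed a few passages, then retrieve the closest one
# for a natural-language query via cosine similarity.
# Assumes: pip install sentence-transformers (model name is illustrative).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "Markov chain Monte Carlo for modeling wetland hydrology in North Dakota.",
    "A ledger of imperial examinees from Qing China.",
    "How Roman laborers chose stones for the road between Paris and Lyon.",
]

# Embed every passage once; normalized vectors make dot product = cosine similarity.
passage_vecs = model.encode(passages, normalize_embeddings=True)

def search(query: str) -> str:
    """Return the passage most semantically similar to the query."""
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = passage_vecs @ query_vec
    return passages[int(np.argmax(scores))]

print(search("Bayesian methods for noisy environmental data"))
```

The point isn’t the thirty lines of Python; it’s that the retrieval step no longer requires a human librarian who happens to share your obscure interests.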
If that wasn’t compelling enough, here’s the icing on the cake: it doesn’t even matter if you’re a bad writer. Just ask the LLM to help clean it up.
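For what it’s worth, that last step is already just a few lines of code. Here’s a rough sketch assuming the openai Python SDK with an API key in your environment; the model name and prompt are placeholders, not a recommendation.

```python
# Sketch of "ask the LLM to clean it up" using the openai Python SDK.
# Assumes OPENAI_API_KEY is set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

draft = "i writed this esay fast, the grammer is bad but the idea's are good"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model will do
    messages=[
        {
            "role": "system",
            "content": "You are a copy editor. Fix grammar and spelling, but preserve the author's voice.",
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```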