This is part 2 of last week’s email.
Last Thursday, I met up with my friend Asad in London to talk more about AI and ChatGPT.
Asad works at an innovation agency and is on a quest to understand the big questions concerning tech.
As we sat at a rooftop bar overlooking the City of London, our conversation covered a range of topics: how interest rates will affect the subscription economy, tech ethics, existential risk and artificial intelligence.
I’m by no means an expert in these matters, merely a curious speculator.
There’s no use denying it. AI looks set to transform society just as profoundly as the previous technological revolutions: agricultural, industrial and digital.
There are concerns that AI will replace our jobs, leaving us twiddling our thumbs and relying on the government to provide some form of universal basic income to live on.
Yes, jobs will be replaced, but new jobs will be created, too. We just don’t know what they are yet and can’t foresee those industries. Take the digital revolution, for example. It has unbundled big media companies and given us the opportunity to start a one-person media company commentating on UK politics from our bedroom.
Asad’s main concern was with Artificial General Intelligence (AGI) and the existential risk it poses: the potential to wipe humanity off the face of the earth. (There is no accepted precise definition of AGI, but it is commonly understood as displaying roughly the same sort of general intelligence as humans.)
I asked him if he thought AGI was plausible within our lifetime, and he felt it was indeed possible.
The median predicted date for AGI on Metaculus, a well-regarded forecasting platform, is 2032. Google’s DeepMind CEO, Demis Hassabis, thinks we may have AGI ‘in the next few years’. Other forecasters are less optimistic and put it closer to 2059.
I speculate it’s not possible within my lifetime. When I started my psychology degree in 2010, neuroscience was still practically in its infancy. We barely understood what was going on inside the black box between our ears. There’s obviously been a lot of progress since, but even so, we’ve barely begun to scratch the surface. Producing a machine with our level of intelligence is a monumental task.
In my view, AGI will be good at solving logical problems. However, it will fail to solve logic-proof problems.
I’ll let Rory Sutherland explain what I mean. In his book Alchemy, Rory writes:
Here’s a simple (if expensive) lifestyle hack. If you would like everything in your kitchen to be dishwasher-proof, simply treat everything in your kitchen as though it was; after a year or so, anything that isn’t dishwasher-proof will have been either destroyed or rendered unusable. Bingo — everything you have left will now be dishwasher-proof! Think of it as a kind of kitchen-utensil Darwinism.
Similarly, if you expose every one of the world’s problems to ostensibly logical solutions, those that can easily be solved by logic will rapidly disappear, and all that will be left are the ones that are logic-proof — those where, for whatever reason, the logical answer does not work. Most political, business, foreign policy and, I strongly suspect, marital problems seem to be of this type.
Throughout history, humanity has continuously faced logic-proof questions that simultaneously divide and connect us.
Questions such as:
Who to marry?
Where to live?
How to get along with people you disagree with?
How to deal with regret?
How to succeed?
What is fulfilment?
What is happiness?
We’re made up of different combinations of personalities, goals, experiences, luck, and circumstances, which makes a formulaic answer to those questions hard to find. No matter how advanced and smart the world becomes, the best answer will always be the banal reply: “You’ve got to figure it out for yourself.”
These problems can’t be distilled down to a single equation. And that is partly why this newsletter exists: it is my attempt at answering these kinds of questions.
Getting back to the point, I doubt AGI will be able to solve these important but hard-to-answer questions. It would have to develop tacit knowledge to figure them out.
However, I understand Asad’s AGI concerns. It ties back to my Russian Roulette post. No matter how low the odds of achieving AGI are, we’re flirting heavily with the risk of ruin here. And when it comes to the risk of ruin, the benefits never outweigh the risks.
If AGI could delete all of humanity, we may want to reconsider whether or not we want to go there. In maths, no matter how big the number is, anything multiplied by zero is always going to equal zero.
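To make that multiplication-by-zero point concrete, here’s a minimal sketch of the ruin arithmetic (my own illustration with made-up numbers, not something from Asad or the forecasters):

```latex
% Expected value of a bet with upside U and probability p of total ruin (payoff 0):
%   E[V] = (1 - p) * U + p * 0
% Ruin is absorbing, so under repeated exposure what matters is survival:
%   Pr(survive n rounds) = (1 - p)^n, which tends to 0 as n grows.
% Russian roulette example: p = 1/6 per round gives (5/6)^20, roughly 2.6%,
% as the chance of surviving 20 rounds.
\[
  \mathbb{E}[V] = (1 - p)\,U + p \cdot 0, \qquad
  \Pr(\text{survive } n \text{ rounds}) = (1 - p)^n \xrightarrow{\;n \to \infty\;} 0
\]
```

However large U gets, no finite upside compensates for an absorbing zero: play long enough and the zero is all that’s left.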
Asad’s solution was to vote with his feet and steer clear of AI. I have great admiration and respect for his decision. I may not think it will work, but I respect it. He’s acting in accordance with his values.
I think the most pressing AI concern is the incoming tsunami of misinformation and disinformation.
The unintended consequences of AI have begun. Students are using ChatGPT to write essays. Deepfakes of ex-girlfriends have been created. Even the Ukrainian President Volodymyr Zelenskyy isn’t immune to a deepfake: here’s a video of him calling on his soldiers to lay down their weapons.
With just a few taps, it’s now effortless to create text, images, video and audio that are almost lifelike.
All of this is just the beginning. And it seems we are on course to become a Black Mirror episode.
In this new world, how will we parse what is real and unreal? Don’t get me wrong, misinformation and disinformation are not new problems. But the digital age has exacerbated them.
See, newspapers like The Times had to fill a paper only once a day. A news channel had to fill twenty hours of programming 365 days a year. Facts could be checked. Sources could be verified.
But digital media have to fill an infinite amount of space. The site that gets the most eyeballs on the internet wins. The economics of the internet created a twisted set of incentives that make traffic more important – and more profitable – than the truth.
Throw in ChatGPT drumming up whatever we want in an instant, and now we’re spraying gasoline on the dumpster fire of misinformation.
Authorities have suggested tighter regulation of misinformation, but I see this as a futile effort. In the digital realm, you cannot police the present, only the past.
Censorship is one hell of a slippery slope too. Stanford published a harmful language guide. In that guide, the word ‘brave’ was deemed harmful as it perpetuated stereotypes of the “noble courageous savage.” This level of censorship is very reminiscent of George Orwell’s 1984. If this continues, soon we won’t be able to say anything.
If neither regulation nor censorship works, how should we deal with the tsunami of bullshit?
Honestly, I don’t know. I’m not smart enough to come up with a solution. I know this all sounds like complaining without offering one, but I write this as a caution: think even more carefully about what you consume online.
It’s going to become the wild wild west out here.