THG AI Special: Understanding AI before it is too late
Before diving deep into the world of ChatGPT, DALL-E, etc., please take time to grasp the broad contours of the evolving tech landscape in the era of AI.
When you heard about a robot for the first time—most likely as a child—what kind of image came to your mind? Maybe something like this character named Anukul in this short film, based on a short story by the legendary Indian filmmaker Satyajit Ray:
It is now safe to say that a physical humanoid being like Anukul is not going to replace us as workers. At least, not yet.
The machines that will replace us as automation advances will either be physical entities, like a washing machine or a self-driving car, or they will look like (you guessed it) ChatGPT, where the real threats to human employment, the algorithms, will remain largely invisible and inscrutable to most of us.
To grasp the essence of the relationship between humans and intelligent machines, though, this short film is a must-watch: it is spot on about the basic dynamics of such a complicated and evolving relationship. If you haven’t already, please watch this 22-minute film now before proceeding with the rest of this post.
Watched it? Good. The film perfectly illustrates how humans, despite various inbuilt guardrails, may inadvertently end up handing over decision-making power to machines on issues that decide who gets to live and who must die.
Now let’s run a thought experiment. What could be the worst form of fully automated weapon of war? A swarm of robots with deadly weapons pouncing upon hapless human infantry? A drone flying hundreds of kilometers to drop powerful bombs? A precision weapon that can track the enemy with facial recognition, profile them racially, linguistically or by other traits, and fire on its own?
Now pause for a moment and imagine some potentially more horrifying options than those.
Done? Good.
What if I said that the deadliest future weapon is a bee-sized drone carrying around 4 grams of super-explosive, one that can fly dozens of kilometers, latch onto its target based on the target’s DNA sequence, enter through the ear and explode inside the skull, killing the target and reducing the margin of error to zero?
Well, this is not my scenario. I heard it from Stuart Russell, the renowned computer science professor at the University of California, Berkeley, who delivered the BBC Reith Lectures in 2021.
Again, if you want to comprehend the enormity of the issues around AI, you can’t do so without patiently listening to these four hour-long lectures here:
The Biggest Event in Human History
AI in Warfare
AI in the Economy
AI: A Future for Humans
These links point to the BBC Radio 4 website, but the lectures are also available in ordinary podcast apps like ‘Podcast Go’, where you can search for the Reith Lectures and listen to them. Extra advice from me: go on long walks so that you can enjoy each talk without being distracted or falling asleep. (However thrilling a podcast, your brain switches to drowsy mode after you lie, sit or recline for long enough.)
By the end, you’ll be alarmed but also equipped with so much useful information that you’ll feel somewhat reassured: at least you now know the potential risks and pitfalls of AI and can try your best to navigate the treacherous waters ahead. You’ll also wish that everyone elected to political office anywhere in the world would patiently listen to the fourth and final lecture, if not all four.
Yuval Noah Harari has often written and spoken about the issue, but in this podcast (transcript here) he makes a succinct observation about AI: it is unlike any of our previous innovations, from the mastery of fire to the discovery of agriculture, the printing press, the telegraph, the telephone and so on, in that, for the first time ever, humankind is flirting with the idea of giving away control it already had over things, rather than the other way around.
If you have the patience for an illuminating book detailing how life-and-death decisions could be handed over to machines, one with a gripping AI character is Origin by Dan Brown.
When snapshots of ChatGPT’s output started circulating this week, they immediately reminded me of the central argument of Prof. Russell’s lectures: the AI we have seen so far, built to perform specific tasks, shows nothing of what is to come. The real breakthrough, he argues, will be a general-purpose AI that can be asked, and will be able, to perform complicated tasks like ‘devising and implementing personalized education or health care in the setting of an entire population’ (my example).
Where will all this lead us? Well, here is one prediction from Samuel Hammond:
Indeed, within a decade, ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag. You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything. Bots will slide into your DMs and have long, engaging conversations with you until it senses the best moment to send its phishing link. Games like chess and poker will have to be played naked and in the presence of (currently illegal) RF signal blockers to guarantee no one’s cheating. Relationships will fall apart when the AI lets you know, via microexpressions, that he didn’t really mean it when he said he loved you. Copyright will be as obsolete as sodomy law, as thousands of new Taylor Swift albums come into being with a single click. Public comments on new regulations will overflow with millions of cogent and entirely unique submissions that the regulator must, by law, individually read and respond to. Death-by-kamikaze drone will surpass mass shootings as the best way to enact a lurid revenge. The courts, meanwhile, will be flooded with lawsuits because who needs to pay attorney fees when your phone can file an airtight motion for you?
Finally, if you are on Clubhouse, there are some fine marathon rooms about ChatGPT going on right now. You can check them out any time.
For those of you who understand Nepali but missed our dedicated NewsGuff session on ChatGPT yesterday, you can listen to the replay here. Also in Nepali, Setopati yesterday carried my piece educating Nepal’s newly elected representatives about the concept of the social contract.
For those of you who are yet to enter the ChatGPT world, here are two samples of how it responded to my queries:
Saving the world from Musk and Zuckerberg
It is not accurate or productive to frame the actions of individuals like Elon Musk or Mark Zuckerberg as a threat to the world. While these individuals may have significant influence and resources, it is important to recognize that the challenges facing the world, such as climate change, inequality, and pandemics, are complex and multifaceted, and cannot be attributed to any one person or group.
Rather than focusing on individual actors, it is more productive to work towards solutions that address the root causes of global challenges. This can involve supporting policies and initiatives that promote sustainability, equity, and the well-being of all people and the planet. It can also involve advocating for transparency and accountability in the actions of individuals and organizations with significant influence.
Will AI destroy human race
It is not possible to predict with certainty what the future holds for artificial intelligence (AI) and its potential impact on humanity. Some people are concerned about the potential risks associated with AI, such as the possibility that it could be used for malicious purposes or that it could become more intelligent than humans and pose a threat to our existence.
However, it is also important to recognize that AI has the potential to bring significant benefits to society, including improving decision-making, increasing efficiency, and helping to solve complex problems.
It is important for researchers, policymakers, and society as a whole to carefully consider the potential risks and benefits of AI and to work together to ensure that it is developed and used responsibly. This can involve developing and implementing ethical guidelines, establishing robust oversight and regulatory frameworks, and engaging in open and transparent dialogue about the development and use of AI.
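For the curious, here is a rough sketch of how one could send prompts like these to OpenAI’s models programmatically rather than through the chat window. This is not how the samples above were produced (those came straight from the ChatGPT web interface); the sketch assumes you have the official openai Python package installed, an API key set as OPENAI_API_KEY, and access to a chat model, and the model name shown is only illustrative.

```python
# Minimal sketch (assumption: not the method used in this post) of sending a
# prompt to OpenAI's API with the official Python client. Requires the
# `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; substitute whatever you have access to
    messages=[{"role": "user", "content": "Will AI destroy the human race?"}],
)

# Print the model's reply text
print(response.choices[0].message.content)
```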
The next issue of THG Recommendations, the last one for 2022, will be super-long; I’ll try to pack into it the best materials I’ve come across throughout the year. If you think some of your friends might be interested in the kind of work I do here at THG, please forward this email to them so that they too can join me here: