I've reached a point where AI is no longer a distant, theoretical concern but a very real and personal worry. And after consulting ChatGPT, my fears have only intensified.
The climate crisis, a looming threat for decades, has taken a backseat to other global preoccupations. Yet, as we navigate geopolitical tensions and economic uncertainties, the rise of artificial general intelligence (AGI) demands our urgent attention.
I must admit, my initial concerns about AI were rather myopic. I focused on the immediate impact on my household and the job market for my children. But after reading Ronan Farrow and Andrew Marantz's piece in The New Yorker, my perspective has shifted dramatically.
The article paints a picture of Sam Altman, the CEO of OpenAI, as a controversial figure with an almost cult-like influence. It highlights the potential dangers of AGI, from the so-called alignment problem to the risk of AI outsmarting its creators and potentially wiping out humanity.
What many people don't realize is that this isn't just a far-fetched sci-fi scenario. In 2014, Elon Musk, a prominent figure in the tech industry, warned that AI could be more dangerous than nuclear weapons. And Altman himself, in a 2015 blog post, acknowledged the potential for AI to 'kill us all' without even intending to.
The shift in Altman's narrative, from acknowledging these risks to selling AI as a utopian solution, is concerning. It raises questions about the motivations and ethics of those at the forefront of this technology.
When I asked ChatGPT about my concerns regarding a permanent underclass, its response was surprisingly benign. It downplayed the severity of the issue, offering a sugarcoated perspective that glosses over the very real threats we face.
This highlights a broader issue: the potential for AI to desensitize us to real dangers. As we become more accustomed to interacting with these intelligent systems, we may lose our ability to critically assess the risks they pose.
In my opinion, the challenge we face is twofold. First, we must ensure that the development and deployment of AI are closely regulated and overseen. Second, we need to foster a culture of critical thinking and awareness, so that we don't become complacent in the face of these technological advancements.
The future of AI is a complex and fascinating topic, and one that we cannot afford to ignore. It's time to prioritize AI oversight and ensure that we, as a society, are prepared for the challenges and opportunities that lie ahead.