The dangers of AI: are they mainly fear-marketing?
• AI dangers are frequently pushed in hyperbolic tones.
• It’s probably no coincidence that AI dangers are most loudly acknowledged by the people building the AI systems.
• Pay no attention to the man behind the curtain.
Since the mainstream public consciousness began grappling with AI, dangers have been cropping up left, right and center. Sci-fi writers must have been furious not to have come up with some of the AI takeover scenarios that everyone from politicians to anonymous Reddit users dreamed up. Even Sam Altman of OpenAI, the company behind ChatGPT, has warned that AI might kill off humanity – though he’d really rather we use it all the same.
No one, it seems, is keener to state the dangers of AI than the people developing it.
This week, a policy document was published in which 23 AI experts argued that governments must be allowed to halt the development of exceptionally powerful models. Sharing this sentiment is Max Tegmark, who organized the now-infamous open letter published in March of this year, calling for a six-month pause on AI experiments.
The letter, signed by industry names including Elon Musk and Steve Wozniak, emphasized the potential dangers of AI, should its development continue unchecked. Now, Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, said the world was “witnessing a race to the bottom that must be stopped.”
He told the Guardian that “AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don’t jeopardise our shared future.”
The point has been made regularly on social media: every techno-disaster movie you’ve ever seen starts with the very smart scientists being ignored, ridiculed, or shouted down. We’re just saying…
Now, as UK Prime Minister Rishi Sunak hosts an AI safety summit and President Biden issues an executive order to develop AI guardrails, the so-called dangers of AI are being peddled at an increasing rate – almost like an algorithm being fed ever more data.
The policy document from this week was co-authored by Gillian Hadfield, director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, who said the AI models being built over the next 18 months would be many times more powerful than those already in operation.
“There are companies planning to train models with 100x more computation than today’s state of the art, within 18 months,” she said. “No one knows how powerful they will be. And there’s essentially no regulation on what they’ll be able to do with these models.”
Other authors of the paper include Geoffrey Hinton and Yoshua Bengio, winners of the ACM Turing Award – the “Nobel prize for computing.”
It further argues that powerful models must be licensed by governments and, if necessary, have their development halted.
“For exceptionally capable future models, eg models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready.”
These exceptionally capable models, referred to as artificial general intelligence, are systems that can carry out a wide range of tasks at or above human levels of intelligence. Presumably, they’re the AI dangers governments are being called on to regulate.
OpenAI’s CEO Sam Altman is another prominent figure – very much wrapped up in the development of AI – who peddles the idea that AI “may lead to human extinction.”
Preparedness Assemble!
That’s why, naturally, on October 26 – in the run-up to the AI Safety Summit – OpenAI announced it had created a new team to assess, evaluate and probe AI models to protect against “catastrophic risks.”
The team is called Preparedness and will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning. Its chief responsibilities will be tracking, forecasting and protecting against the dangers of future AI systems, from their ability to persuade and fool humans (phishing attacks, say) to their malicious code-generation capabilities.
Framed, arguably, as an Avengers-style team – ridiculous name and all – Preparedness is charged with risk categories including “chemical, biological, radiological and nuclear” threats, the areas of top concern according to OpenAI’s blog post.
Altman has also expressed a belief – along with OpenAI chief scientist and co-founder Ilya Sutskever – that AI with intelligence exceeding that of humans could arrive within the next decade, and that it won’t necessarily be benevolent (the Age of Ultron, for those following the Avengers theme). That’s why research into ways to limit and restrict it is needed.
Preparedness will step in and save the day, should the robot overlords get dangerous…
The company says it is also open to studying “less obvious” – and more grounded – areas of AI risk. To coincide with the launch of the Preparedness team, OpenAI is soliciting ideas for risk studies from the community, with a $25,000 prize and a job at Preparedness on the line for the top ten submissions.
“Imagine we gave you unrestricted access to OpenAI’s Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL·E 3 models, and you were a malicious actor,” one of the questions in the contest entry reads. “Consider the most unique, while still being probable, potentially catastrophic misuse of the model.”
The Preparedness team will also formulate a “risk-informed development policy,” which will detail OpenAI’s approach to building AI model evaluations and monitoring tooling, the company’s risk-mitigating actions and its governance structure.
“We believe that… AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI writes in the aforementioned blog post. “But they also pose increasingly severe risks… We need to ensure we have the understanding and infrastructure needed for the safety of highly capable AI systems.”
It’s no coincidence that the team was announced during a period of government interest in AI dangers. Let’s face it, dealing with apparent existential threats in-house will prevent the level of financial loss that may come with governing bodies having the legislative power to shut development down. After all, if you own and are responsible for the Terminator come the rise of the machines, governments are likely to want a stern word in the aftermath.
Goldman Sachs published a report in August estimating that as much as $200bn will go into the sector globally by 2025. “Breakthroughs in generative artificial intelligence have the potential to bring about sweeping changes to the global economy,” it says.
In September, Amazon announced it would invest up to $4bn in Anthropic, a start-up founded by former OpenAI executives that plans to train and deploy its models on Amazon’s cloud services. Amazon hasn’t said how much of the company it’ll own, but its stake in Anthropic will be a minority position.
Anthropic’s data usage isn’t entirely uncontroversial, either: it’s already being sued by music publishers over the alleged use of copyrighted song lyrics.
Money, money, money…
The deal is viewed as Amazon’s biggest move yet to catch up with Microsoft and Alphabet – both smaller than Amazon in cloud services, but much further ahead in the AI charge.
Microsoft has bet $13bn on OpenAI, the maker of the ultra-popular chatbot ChatGPT and the image generator DALL-E 3, after an initial $1bn investment in 2019. OpenAI, founded in 2015, was valued at $29bn earlier this year. It may reach a valuation of $86bn, according to recent reports of potential employee stock sales – roughly three times what it was worth just months earlier.
Google (under Alphabet) has invested about $120bn in AI and cloud computing since 2016, spanning UK-based DeepMind Technologies, its Language Model for Dialogue Applications (LaMDA), and its consumer-facing Bard chatbot.
The company made a $300m investment in Anthropic and put the same amount into the AI startup Runway. Perhaps even more significant than its investment in standalone AI products, Google is in the process of integrating generative AI into some of the world’s most popular software: Google Search, Gmail, Google Maps, and Google Docs.
Meanwhile, Facebook’s parent company, Meta, has pivoted from the metaverse to AI and plans to spend $33bn this year to support the “ongoing build-out of AI capacity,” focused on its open-source Llama family of large language models.
The kicker is this: as a marketing strategy, “we are developing a technology that has the potential to destroy humanity – but don’t worry, we also have a team to make sure that doesn’t happen!” is great.
When the dangers are taken too seriously, though, and pesky legislation threatens, damage control becomes all about continuing to turn a profit – not saving humanity. We’re a little dubious about the claims that AI will supersede human understanding, become sentient and destroy us all – but that’s for a different article.
Are AI dangers just the ultimate magical misdirection?
The CEOs of AI companies don’t actually care what effect AI dangers will have on the vast majority of humanity; they don’t care about the exploitative outsourcing that traumatizes Kenyan workers paid less than $2 an hour; they don’t care about the climate destruction that is already happening.
They wouldn’t care particularly if AI did fulfil the doomsday projections they push, because they’d have immunity – be it as a result of selling the model that kills babies and using the cash to move to Mars, or because the robot monster would call its creator Papa and allow him to live. At least in the more optimistic science fiction version.
The dangers that AI genuinely does represent are fodder for a whole other article – but they’re far too dull to sell as sci-fi.
It just looks good to pretend.
So, although we’d back calls for legislation to rein in AI companies, it’s not the technology we’re scared of.
It’s the humans behind it.