OpenAI saving the world, for a price
OpenAI will launch a “foundry”, as announced in a blogpost on Friday last week. Titled “Planning for AGI”, the essence of the change is monetization. The foundry will provide a platform for serving OpenAI’s “latest models” — a place for businesses to build AIs that perform specific cognitive tasks. According to leaked documents, pricing starts at $250K a year and goes as high as $1.5 million a year.
This suggests a great deal of confidence on OpenAI’s part in its technology: to inspire such huge financial commitments, GPT-4 must be genuinely capable. At the moment, AI delivers a response and users have to evaluate the results and apply them themselves; soon, AIs will be directly responsible for completing tasks, with human input reduced to supervision.
OpenAI was founded as a nonprofit in 2015 by tech leaders including Elon Musk. In an introductory statement, the company said its goal was to “advance digital intelligence…to benefit humanity as a whole, unconstrained by financial return.” The promise was that code, research and patents would be “shared with the world”.
In 2019 the company set up a capped-profit arm, likely in a bid to stay ahead of tech rivals like Google and to introduce a competitive spirit. Microsoft was quick to invest $1 billion, and on GPT-3’s launch in 2020 the model was “exclusively” licensed to Microsoft. If OpenAI ever had any real instinct towards openness, it was getting harder to discern.
No longer undertaking AI research transparently, OpenAI has decided to focus on speed and profit in the rapidly developing race towards truly smart AI. Supposedly, the high cost of its foundry will ensure safety, or at least an element of control over the product’s uses, the $250K starting price suggesting that pure-play chancers are discouraged.
Nevertheless, the website tagline of “our mission is to ensure that artificial general intelligence…benefits all of humanity” now has an added caveat: “all of humanity” is defined as a part of humanity that can afford it.
OpenAI reportedly expects to make $200 million in 2023 — a big number to many, but nothing on the billions of dollars of investment it has received. Further, it’s an expensive game: training compute and model testing come with incredibly high costs. Valued at $29 billion, though, OpenAI is one of the most valuable US startups, and it isn’t as though its founders are strapped for cash.
OpenAI: the price of change
When companies invest in the foundry, they are signing themselves up for a new era in the workplace. Jobs may become fragmented into tasks to be automated, the human workforce smaller and competing with ever-evolving AI. As Emily Bender points out, “there’s too much effort trying to create autonomous machines, rather than trying to create machines that are useful tools for humans.”
Bender is coauthor of what’s known as the octopus paper (actually titled “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data”), which argues that language models trained only on the form of language cannot learn its meaning. It’s rare for experts to openly critique AI, because they are so often involved in it: to truly understand AI, one must understand the complexity involved in its creation, and if you understand it to that depth, you’re probably in the thick of it professionally.
The difference between this technological revolution and the industrial revolution of the 18th and 19th centuries is that the knowledge needed to adopt the new techniques is being strictly guarded, and the motives for guarding it are opaque. OpenAI frames the development of AI as a burden it will take on for the greater good: “because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever.”
At this stage, it’s hard to say how much the developments at OpenAI will change. All the same, we hope for a shift in the way we all talk, and think, about artificial intelligence. We’ve learnt to make “machines that can mindlessly generate text… but we haven’t learned how to stop imagining the mind behind it.” There is, as yet, no mind in our AIs — and should an AGI ever emerge, we’d hope it would think on behalf of all of us, not just those with a cool quarter of a million dollars to petition for enlightenment.