Open-source coders have generative AI now – and it could change everything
Just six months ago, on November 30th, 2022, OpenAI, backed by Microsoft, dropped a bomb on the tech world, in the form of ChatGPT. Since then, the tech industry has lost its collective mind and invested everything up to and including the family silver in generative AI – the new big prize, the new wundertool, the revolutionary technology that would change the world.
And there’s little doubt that it has, or that it will continue to do so. Every company under the sun has found some use for generative AI.
Google, outflanked by the OpenAI/Microsoft launch, burned its year’s supply of midnight oil to get its competitor, Bard, out to the world in something approaching good time. And a new technological arms race was on: to become the kings of generative AI.
The drag factors.
Except what also happened was that ChatGPT, GPT-4, Bard and others, ran into significant issues. Their lack of an objective truth model and the sheer size of their data libraries made them prone to convincing error. Open letters were written by the great and the self-aggrandizing, demanding a pause in the development of the technology. Italy raised legitimate concerns over data privacy. China had a puritanical hissy fit about generative AI trained on anything other than solidly socialist models.
And while companies all over the world, and at every scale, set about integrating large language model generative AI into their business practices, in March, Meta’s new LLaMA model was quietly leaked to the open-source community.
It’s probably worth a refresher course in what happens when the open-source community gets its hands on a new toy.
The short answer is “practically everything useful you think is developed by major tech giants.”
And now, a document that purports to be a leaked internal memo from Google is painting an alarming picture for the tech giants – and an extremely attractive one for companies and people who want generative AI to do specific things, and who don’t necessarily want to pay tech giant bucks to get it done.
The flavor of the memo is perhaps conveyed in an early line. “While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open-source. Plainly put, they are lapping us.”
The open-source army.
There’s a certain inexorable logic to this. You can lock a thousand coders and programmers in a basement at OpenAI or Google HQ and tell them to be creative or the puppy gets it. They’ll produce impressive things, to be sure.
But the open-source community is millions strong. And its members work independently and in teams to solve problems. To smooth out bugs. To build cute new things that nobody ever knew they needed. The open-source community is responsible for much of what makes the internet work. Hand that community a large language model, and it will outperform you every time, however many millions of dollars you pour into R&D. Bottom line: the open-source community is a stable quantum computer to your 16K 1980s IBM machine, and the floppy disk it rode in on.
And the open-source community is doing precisely what the open-source community does, on the basis of Meta’s LLaMA. Not Google’s Bard, and not OpenAI’s ChatGPT.
The memo continues, listing things that the big companies regard as “major open problems” – which the open-source community has already solved and put into people’s hands. Today. And pretty much for free, rather than behind a paywall designed to claw back a vast amount of research investment.
Tomorrow’s capabilities – today.
For instance, the memo highlights that the open-source crowd has already cracked puzzles like:
- “LLMs on a phone.
- Scalable personal AI.
- Responsible release: This one isn’t ‘solved’ so much as ‘obviated.’
- Multimodality: A current multimodal ScienceQA SOTA was trained in an hour.”
What’s more, the memo, which purports to be from a Google staffer, starkly points out that while the big models still hold a slight edge in terms of quality, that gap is closing with astonishing rapidity. Six weeks from now? Six months?
“Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10m and 540B. And they are doing so in weeks, not months. This has profound implications for us.”
Too true. Without quoting too freely from the document, it starkly predicts:
- “We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
- People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
- Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.”
The breakdown.
What does all this actually mean?
Essentially, it means smaller, more agile, more project-specific versions of generative AI that can, for instance, run on a handful of Threadrippers, rather than on the massively power-hungry hardware currently responsible for the likes of ChatGPT and Bard.
Equally essentially, it means free-to-access generative AI that you can quickly and easily personalize with the data you actually need it trained on, rather than all data everywhere, as has tended to be the way with the tech-giant models. Less extraneous data and less clunkiness, without the tech giant price tag.
If you want a buzz-phrase for the impact of the open-source community and its intensive play, it’s easy to find – it represents the potential democratization of generative AI.
It’s almost ironic that the tech giants didn’t especially see this development coming, because it’s not as though the open-source community doesn’t have a record when it comes to taking things and finding infinitely better, smoother, faster ways of getting them done.
In this instance in particular, analysts are citing the use of a cheap and easy fine-tuning method known as LoRA (low-rank adaptation), along with a couple of breakthroughs in efficient scaling – in particular, DeepMind’s Chinchilla work on compute-optimal training.
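LoRA’s core trick is simple enough to sketch in a few lines: instead of retraining a model’s full weight matrices, you freeze them and learn only a pair of small matrices whose product forms a low-rank correction – which is why fine-tuning suddenly became cheap. The NumPy sketch below is illustrative only; the sizes, names, and scaling constant are assumptions for the example, not values from any actual LLaMA checkpoint.

```python
import numpy as np

# Frozen pretrained weight: d_out x d_in (illustrative sizes, not LLaMA's).
d_out, d_in, rank = 1024, 1024, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)

# LoRA factors: only these two small matrices are trained.
# B starts at zero so the adapted model initially matches the base model.
A = (rng.standard_normal((rank, d_in)) * 0.01).astype(np.float32)
B = np.zeros((d_out, rank), dtype=np.float32)
alpha = 16.0  # scaling hyperparameter

def forward(x):
    # Base projection plus the low-rank correction (alpha / rank) * B @ A @ x.
    return W @ x + (alpha / rank) * (B @ (A @ x))

# The cost argument: training 2 * rank * d values instead of d * d.
full_params = W.size            # 1,048,576
lora_params = A.size + B.size   # 16,384 -- roughly 64x fewer here
print(full_params, lora_params)
```

The rank is a dial: lower it and the update gets cheaper but coarser, which is exactly the trade-off that let hobbyists fine-tune leaked LLaMA weights on consumer GPUs.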
Whether the admissions and acknowledgments in the memo turn out to actually be from a Google staffer or not, the open-source community’s work on generative AI models feels like a whole new breakthrough in making the technology available, and personalized, and target-specific.
And that might yet be how you build a technological revolution.
However the big players officially respond to the likes of OpenLLaMA (yes, there’s already an open-source recreation of Meta’s original) joining the market, one thing seems certain.
Things just got interesting.
Again.
This article was created with reference to the text of the “leaked memo” found on the Semianalysis website, with our thanks. The caveats regarding the memo’s contents on the Semianalysis page should be considered to also apply to this article.