What can generative AI do for business?
It was feared that the economics of generative AI and large language models (LLMs) trained on vast amounts of data gathered from the web and requiring thousands of GPUs could limit the rewards to just a few tech pioneers. But that scenario is changing. More firms are asking what generative AI can do for their business, and the specialist skills that were necessary at the beginning of the boom are being codified to help companies more widely.
AI systems capable of showing the right ads to the right people and enabling more powerful web searches are hugely valuable, which explains why tech giants such as Google, Meta and Microsoft have assigned multi-million dollar budgets to projects in these domains. But that activity barely scratches the surface in terms of potential use cases. And providers are showing that the rise of AI doesn’t just have to benefit massive tech firms with large in-house resources.
Democratizing AI access
Andrew Ng – a famous figure in the success of deep learning – is aware that customization requirements can present a hurdle when it comes to realizing AI’s full potential across the long tail of applications. In principle, vision systems for textile firms and food preparation companies – to give just a couple of use cases – could help millions of workers.
Ng’s vision is that simple-to-use platforms that take the heavy lifting out of building an AI model will empower vast numbers of businesses that have, until now, been unable to reap the benefits. Ng’s team has developed a platform dubbed LandingLens that ‘makes computer vision super easy’. And users can quickly educate the system to automate defect detection, improving product quality and dramatically reducing the need for time-consuming manual inspection.
Considering other streamlining options, generative AI and LLMs, to quote Google, represent the pinnacle of information retrieval technology. And while implementing these models may be child’s play for tech giants, how can other companies – for example, firms without large IT teams and working outside of the tech sector – leverage these huge search gains for their own businesses?
Considering what generative AI can do for business, it’s worth noting that LLMs enable powerful summarization features to complement enterprise search. And search queries can be not just conversational text, but also include images to provide so-called multi-modal capabilities.
Google has launched what it dubs Gen App Builder to make generative AI and LLM capabilities more widely available to developers. The product is in early access and promises to give users an out-of-the-box dev experience, accessed via the Google Cloud console.
How to build an AI-powered search engine for your business
Gen App Builder can create search engines based on web content by taking a series of URLs as input, or users can specify structured data – for example, files in JSON format or BigQuery sources. And if you want to search rapidly through hundreds of reports, the platform can handle that too, thanks to an unstructured data option that can be fed with PDFs.
Once created, these custom generative AI search engines can be integrated into applications using API calls and code snippets such as HTML widgets, which users can embed in their websites. What’s more, the Gen App Builder comes with analytics capabilities that provide a wide range of per-session search metrics.
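As a rough illustration of what such an API integration might look like, the sketch below assembles a JSON search request in Python. The endpoint path, project placeholder, and payload fields are illustrative stand-ins, not the product's documented interface – the real values come from the Google Cloud console for your own engine, along with an auth token that would be added in practice.

```python
import json
import urllib.request

# Hypothetical endpoint -- the actual URL and payload shape come from the
# Google Cloud console for your own Gen App Builder search engine.
ENDPOINT = ("https://discoveryengine.googleapis.com/v1/"
            "projects/PROJECT/servingConfigs/default:search")

def build_search_request(query: str, page_size: int = 10) -> urllib.request.Request:
    """Assemble a JSON search request; an auth header would be added in practice."""
    payload = json.dumps({"query": query, "pageSize": page_size}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("warranty policy for model X")
```

The response from such a call would then be rendered by the application – or, for a no-code route, the console-generated HTML widget can simply be embedded in a web page instead.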
The power of AI-based search for enterprise comes to light when you consider the differences between how traditional databases index information and the way that deep learning models file their data. Rather than tabulate various attributes, AI systems typically employ embeddings as their data structures. And this is a game changer.
Embedding space – mapping the meaning of content
“Once trained with specific content like text, images, tweets, or anything, AI creates a space called embedding space, which is essentially a map of the contents’ meaning,” explains Kaz Sato, a Developer Advocate at Google.
For example, you could represent a document that contained 20% financial information, 50% technical data, and 30% marketing content as a 3D coordinate (0.2, 0.5, 0.3) in embedding space. And it would then follow that a file with the coordinates (0.18, 0.49, 0.33) may be related based on the similarity of those two sets of numbers.
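That notion of relatedness is usually measured with cosine similarity – the cosine of the angle between two vectors, where 1.0 means the vectors point the same way. A minimal sketch, using the document coordinates above:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

doc_a = (0.2, 0.5, 0.3)    # 20% financial, 50% technical, 30% marketing
doc_b = (0.18, 0.49, 0.33)
print(cosine_similarity(doc_a, doc_b))  # ≈ 0.998 -- the two documents are closely related
```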
What’s more, real-life systems can represent a vast number of signals across many more coordinates. “In reality, embedding space may have hundreds or thousands of dimensions that can represent millions of different categories of the content,” said Sato.
One way of picturing deep learning is to imagine a process that maps high-dimension raw data onto coordinates, which provides meaning and captures the semantics of the inputs. For example, Google itself uses the approach to organize millions of websites, videos, and apps. And, once populated, the semantically related embeddings can be used to recommend related content.
Google’s embedding projector has various examples of how data can appear compressed into three dimensions. And it’s fascinating to see how sentences, despite being written in different languages, populate similar locations when they share the same meaning.
When users key their search queries into Google, the search engine giant converts that sentence into an embedding that allows a fast vector comparison of that intention with pre-transformed web pages. And the technology gives a performance boost over using keywords alone to match search queries with results.
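In miniature, that pipeline – embed the query, compare it against pre-computed page embeddings, return the closest matches – might look like the sketch below. The three-dimensional vectors and page names are invented for illustration; a production index would hold vectors with hundreds of dimensions and use an approximate nearest-neighbour structure rather than a full scan:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Made-up 3-D embeddings standing in for pre-transformed web pages.
pages = {
    "quarterly-results.html": (0.9, 0.1, 0.0),
    "api-reference.html":     (0.1, 0.8, 0.1),
    "press-release.html":     (0.2, 0.1, 0.7),
}

def search(query_embedding, index, top_k=2):
    """Rank pages by cosine similarity to the query embedding."""
    ranked = sorted(index, key=lambda page: cosine(query_embedding, index[page]),
                    reverse=True)
    return ranked[:top_k]

# A query about technical documentation, already converted to an embedding
print(search((0.05, 0.9, 0.05), pages))  # 'api-reference.html' ranks first
```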
Enterprise search 2.0
Similarly, companies can use embeddings to represent their products and other internal data, which again highlights what generative AI can do for business. Imagine applying the power of a Google search to company data and making that functionality available internally to staff.
This opens the door to far more than searching a product database, which may be out of date or contain errors. Instead, employees could also query multiple PDF catalogs, product images, and a wide variety of company content, all within a single enterprise search.
As mentioned, these tools bring summarization capabilities to enterprises, but grounded in company information. And this means that staff can trace answers back to a trusted source in the business data, compared with using ChatGPT, which is decidedly more hit-and-miss.
On TechHQ, we’ve written about how the power of generative AI and LLMs is improving legal front door operations. Firms can use these tools to build advanced chatbots ingested with company data to reply to common staff queries, freeing up legal teams, and other experts to handle new requests from the business.
And platforms such as Grammarly Go are bringing generative AI and LLMs to enterprises, SMEs, and professionals who work in teams. The recently updated app gives workers the ability not just to autocomplete documents, but also to include bespoke project terminology and definitions.
Picking up again on the power of semantic search, generative AI and LLMs can dramatically streamline large-scale document preparation. For example, imagine having to respond to an information request that requires removing sensitive details such as PII or unrelated company activity from hundreds of individual documents, spreadsheets, emails, instant messages, and PDFs.
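To make the idea concrete, a deliberately simple redaction pass might use labelled patterns like the hypothetical ones below. Production pipelines pair such rules with ML-based entity recognition and human review – regexes alone would miss plenty of PII:

```python
import re

# Illustrative patterns only -- a real system would cover many more PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com or call 555-867-5309."))
# → Contact [EMAIL REDACTED] or call [PHONE REDACTED].
```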
With the right AI system, enterprises can streamline their operations to the point that document preparation is just a few-click process. And there’s a growing number of customizable AI model providers that have solutions designed specifically around the needs of companies.
Cohere, for example, announced on July 26, 2023 an expanded collaboration with AWS to extend the availability of its foundational AI models to Amazon Bedrock – a fully managed service that makes pre-trained foundation models accessible via an API.
SambaNova Systems, based in Palo Alto, US, was founded to help business customers thrive in a new era of AI. And its platform is – according to the firm – specifically optimized for enterprises and government organizations. Also, customers retain ownership of models that have been adapted with their data, which points to the importance of AI security and keeping enterprise information in safe hands.
Many firms, worried about company operations being leaked, are telling employees not to use public chatbots at work. External moderators have access to prompts entered into free services such as OpenAI’s hugely popular ChatGPT, so they can check that public systems aren’t being manipulated by bad actors. And even if model responses are kept private, the text prompts alone could still reveal much about what businesses were up to if linked to a company email address.
Private LLMs extend what generative AI can do for business
Given concerns about inadvertently distributing business intelligence, it’s no surprise to see a raft of vendors offering private LLMs, fine-tuned on company data and gated for internal use only. The NVIDIA NeMo service is badged as being able to help enterprises combine LLMs with their proprietary data to improve chatbots, customer service operations, and other business functions.
And not only can companies better protect their data using custom services, they can also set guardrails so that AI applications don’t go rogue and offer advice beyond the boundaries of their training data.
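In spirit, a guardrail is a policy check that sits between the user and the model. The toy sketch below – with an invented topic list and a stand-in for the model call – shows the idea; real frameworks such as NeMo apply far richer policy rules than a keyword match:

```python
# A minimal guardrail sketch: before a private LLM answers, the prompt is
# checked against allowed topics; off-topic requests get a canned refusal.
ALLOWED_TOPICS = {"invoice", "warranty", "shipping", "returns"}

def guarded_answer(prompt: str, llm) -> str:
    """Only delegate to the model when the prompt touches an allowed topic."""
    words = set(prompt.lower().split())
    if words & ALLOWED_TOPICS:
        return llm(prompt)   # delegate to the fine-tuned private model
    return "Sorry, I can only help with billing and order questions."

# Stand-in for a call to a private LLM
fake_llm = lambda p: f"Answering: {p}"
print(guarded_answer("What is your returns policy?", fake_llm))
```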