- AI evolves according to users’ preferences.
- Safeguarding tools gain traction.
- Watermarking and deep-fake spotting.
Although it’s far too early to talk about AI’s maturity, it is becoming apparent that the providers of AI services are evolving their tools very quickly to make the technology both more personalized and more controllable. Users, too, are demanding new features, and companies are stepping up to provide them. What unites the two trends is a recognition that out-of-the-box large models need more features than those that satisfied early adopters.
As users bed the new technology into their workflows and companies providing AI respond to user and public feedback, the services and the people who use them are changing.
Hugging Face, the go-to choice for all things AI, has released a series of free tools for both creators and consumers of AI. Alarm over increasingly convincing audio and video deep fakes is answered by the organization’s CNN Deepfake Image Detection, which examines a media file’s metadata to check whether its source was human or algorithmic.
While a move in the right direction, the approach has an obvious weakness: metadata attached to media files can easily be removed or spoofed by anyone with a few basic tools, as the short sketch below shows. It won’t stop playful or malicious producers of AI-generated content, but it will deter the less canny among them from passing off their productions as the real deal.
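To see just how little effort removal takes, here is a minimal sketch using the Pillow imaging library (the filenames are placeholders). Re-saving only the pixel data silently discards EXIF and any other embedded provenance fields.

```python
# Strip all metadata from an image by re-saving only its pixel data.
# Requires: pip install Pillow
from PIL import Image

img = Image.open("suspect.jpg")        # original file, metadata intact
clean = Image.new(img.mode, img.size)  # fresh image with no metadata
clean.putdata(list(img.getdata()))     # copy the pixels, nothing else
clean.save("suspect_clean.jpg")        # provenance fields are gone
```

Any provenance scheme that lives only in metadata is defeated by those four lines, which is why embedding marks in the media itself, described next, matters.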
Creator protection from AI
From the same source, visual or audio content creators can embed digital ‘watermarks’ into their work. The method adds identifying data to the media itself, making its removal nontrivial. Creators of original content who don’t use AI at the point of creation have had a range of similar methods for some years, but the facility in the Hugging Face toolkit will offer some measure of reassurance.
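Hugging Face doesn’t publish the internals of its watermarking tool, so what follows is only a toy illustration of the general idea: hiding identifying bits inside the pixel values themselves, here in the least significant bit of each channel. Production watermarkers embed in frequency-domain coefficients precisely so the mark survives re-encoding; every function name and filename below is illustrative.

```python
# Toy least-significant-bit (LSB) watermark: hides an ID string in pixel data.
# Illustrative only; real tools embed in frequency coefficients so the mark
# survives compression and resizing.  Requires: pip install numpy Pillow
import numpy as np
from PIL import Image

def embed(path_in: str, path_out: str, tag: str) -> None:
    pixels = np.array(Image.open(path_in).convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(path_out, format="PNG")

def extract(path: str, n_chars: int) -> str:
    flat = np.array(Image.open(path).convert("RGB"), dtype=np.uint8).reshape(-1)
    bits = flat[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

embed("artwork.png", "artwork_marked.png", "(c) 2024 Jane Doe")
print(extract("artwork_marked.png", 17))  # -> "(c) 2024 Jane Doe"
```

Even this toy mark survives the metadata-stripping trick shown earlier, since it lives in the pixels; it would not survive JPEG re-compression, which is the gap frequency-domain methods close.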
Of course, proving originality (and, where applicable, copyright) is quite a distance from a creator successfully taking action over infringement of their moral rights. In the digital domain, relative anonymity and the pervasive power of large technology companies make legal recompense difficult to exact from transgressors.
The raw abilities of large models are of most use with a degree of gatekeeping or postprocessing. Users of AIs can spend a good deal of time learning their chosen ML tool’s specific vagaries, tuning their queries, and ensuring that the otherwise dumb AI produces the required results. At the other end of the human-AI interaction, later models extend their usefulness by postprocessing their output before presenting it. Detection and prevention of so-called hallucinations, for example, were among the first features added to large, public AIs as they moved out of early betas and further into the public domain.
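Vendors don’t disclose how their hallucination filters work, but the general shape is a second pass that checks a draft answer before the user sees it. The sketch below shows that pattern only; complete() is a hypothetical stand-in for any chat-completion API.

```python
# A minimal output 'gatekeeper': a second model pass verifies the first
# answer against the source text before it is shown to the user.

def complete(prompt: str) -> str:
    # Hypothetical stand-in: plug in a real chat-completion API here.
    raise NotImplementedError

def answer_with_check(question: str, source: str) -> str:
    draft = complete(
        f"Using only this source, answer the question.\n"
        f"Source: {source}\nQuestion: {question}"
    )
    verdict = complete(
        "Does the answer below contain any claim not supported by the "
        f"source? Reply YES or NO.\nSource: {source}\nAnswer: {draft}"
    )
    if verdict.strip().upper().startswith("YES"):
        return "I couldn't find a reliable answer in the provided source."
    return draft
```

The appeal of the design is that the second pass is cheap relative to the cost of shipping a confidently wrong answer.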
Making AI more usable to the everyday Joe is important for providers who wish to recoup their investments in the technology and offset the models’ high running costs.
Giving a ‘bot a ‘memory’ is the latest feature added to ChatGPT4. It raises the quality of the user’s experience by removing the need to re-state the AI’s desired output type and response style in every session, an irritation for many.
In the latest iteration of ChatGPT4, bots will now remember (after being told once) that, for instance, responses to queries about subject A should be expert-level, while those about subject B should be kept simple. An experienced horticulturalist should therefore get in-depth answers to queries about bedding plants, but more basic responses on the financial aspects of running a business. The sketch below shows the general pattern behind such a feature.
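OpenAI hasn’t published how ChatGPT’s memory is implemented, but the behavior described above can be reproduced with a simple pattern: persist each user’s stated preferences and prepend them to the system prompt on every request. Everything in this sketch, from the in-memory store to the complete() helper, is an assumption for illustration.

```python
# Illustrative per-user 'memory': remembered preferences are prepended to
# the system prompt on every call, so they persist across sessions.

memory: dict[str, list[str]] = {}  # user id -> remembered preferences

def complete(system: str, user: str) -> str:
    # Hypothetical stand-in: plug in a real chat-completion API here.
    raise NotImplementedError

def remember(user_id: str, fact: str) -> None:
    memory.setdefault(user_id, []).append(fact)

def ask(user_id: str, question: str) -> str:
    prefs = " ".join(memory.get(user_id, []))
    system = f"You are a helpful assistant. Known user preferences: {prefs}"
    return complete(system, question)

remember("u1", "Give expert-level answers on horticulture.")
remember("u1", "Keep answers about business finance simple.")
# ask("u1", "How should I overwinter my bedding plants?") now carries both.
```

A production version would persist the store and let users inspect and delete what has been remembered, which is exactly where the privacy questions below begin.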
Preference data
Like queries from identifiable individuals to a ‘traditional’ search engine, personalization comes at the price of data privacy. On the one hand, operators of large machine learning facilities will be able to give users a more seamless experience, one that’s attuned to their tastes and query history. On the other, data about an individual’s preferences, usage metrics, and areas of interest forms a richer dataset on the user, one that can be exposed to anyone with the dollars to spend.
In the middle of the AI hype cycle, it’s difficult to identify the trends in use that will define the future of the technology – part of the reason why semi-hysterical reactions to large models are still considered plausible enough to print. But one trend at this point does seem clear: software adjuncts that attune the input to and output from the models at the point of use.
As with all technological constructs, users will ultimately determine how the tech progresses. Companies offering AI (and most new technology) rarely set trends. It’s incumbent on us all to dictate our preferences for the way AI services evolve.