The search for AI truth with ChatGPT
Back in the relative dark ages of AI chatbots (all of half a year ago, in August 2022), we sat down with Don White of Satisfi Labs, which then had a fairly novel product: a conversational assistant built on a trained large language model, which could provide contextual information and enriched experiences at, for instance, large sporting or entertainment venues. Since those innocent times, AI has become everybody’s best friend, with the arrival of first ChatGPT, then GPT-4, then Bard, and, very soon, a host of others.
We got back in touch with Don after the multiple ChatGPT and generative AI launches, to talk about truth, justice, and the artificial way.
THQ:
So – ChatGPT. We know there are industry expectations of what it can do, and lots of companies are signing up to use it (or its wannabes). And we know there are entirely different public expectations of what it can do. Are any of them realistic, given the technology as it stands today?
The easy acceptance of the Holy Grail.
DW:
There’s a huge disconnect on the technology. The thing is, it is brilliant, and it is fascinating. But people don’t fully appreciate what its applications are.
I recently talked to a large business in the sports world, and they said, “Well, all we need to do is turn this on and we can basically eliminate a bunch of assets that we have, this is like the Holy Grail.”
Thing is, as we’re demonstrating it to clients, I can make it swear at you, with your brand right next to it on your website. That’s the thing – it’s brilliant, and fascinating, but it doesn’t protect your brand, and it wasn’t designed for that.
So there are great expectations. But there is a lack of understanding that this is pretty much a developer’s tool, as it stands today, not an out-of-the-box product that’s going to be on 100 websites, and someone that’s currently working there just tosses it on.
THQ:
Given that disconnect between what people expect it to do and what it can currently do, should we expect a market adjustment in the enthusiasm of companies and people?
DW:
Yeah. I think what’s been really fun is now that people are accepting the technology is here to stay, we’re getting into the real conversations, and ChatGPT is so well-built, it will even tell you what it doesn’t do or wasn’t designed to do. You can ask it really targeted questions, which means you can say to companies and brands, “Look, I know you’re interested in working with this, it will tell you how not to use it.” And usually, that is literally the way you want to use it. That’s where the disconnect kicks in.
THQ:
One of the big things that the likes of OpenAI are at significant pains to point out (almost to damp down the enthusiasm for ChatGPT and other generative AIs as this Holy Grail you mentioned) is that it can be entirely wrong, and wrong in a persuasive way. So unless you know a) that it is wrong, and b) in which particulars it’s wrong, it can lead you fairly far astray. How do we get a source of truth that fills that understanding gap?
What is truth? And how do you know?
DW:
That’s where I think the development comes in. Because most companies that I’ve evaluated are really focused more on some metric, like a conversion, a deflection, a funnel. Not many have really created these deep knowledge bases that have indexing and are able to pull content from various places. Some have, if you look into the knowledge management space – that’s going to be a hot space now for sure.
The biggest gap right now is in understanding how the brands get their data formatted in such a way that a source of truth can exist. I was recently talking to a city, because we built a product that can guide people through a city experience and things they want to do and so on. And their first point of view was “Well, ChatGPT just knows everything about my city.”
And I said, “Okay, well, where did it learn it from?” And they said, “The internet?”
The next question obviously has to be “How does it know what you would want to say?”
Let’s say I asked it for the best bars in Liverpool. With a query for “best bars in Liverpool,” it could literally look at the number of times bars were mentioned online from 2018 to 2021. It has some algorithm to decide, and there’s presumably a way to figure out how it came up with its results.
But are they the results you want to showcase right now? What about the hot new bar that opened last month? Is it going to include – or push forward – somewhere that has an amazing new buzz but which doesn’t yet have years of reviews to mine?
When you really dig into this, the question becomes how are you actually going to be prepared for this evolution?
The divergent mindsets.
Right now, we have businesses coming to us with one of two different mindsets, and the difference is where the great disconnect comes in between expectations and reality.
There are companies coming to us saying “I’m just going to plug this into my current setup.”
That’s an unrealistic mindset with which to approach this technology.
THQ:
Because right now, as you mentioned, it’s more or less a developer’s tool, rather than an out-of-the-box solution to x-problem or y-problem?
DW:
Right.
Then we have clients approaching us saying, “Okay, how do I get ready for this?”
That, by contrast, is a great mindset. The idea that you should be preparing to use it when your systems are in line, that makes sense – get ready, because it really is going to rock the world. But it’s not there yet if you want to protect your brand.
THQ:
The question’s not just “Can I use it now?” but also “Does it do what I want it to do now? And if not, how do I get to a place where it’ll do what I want it to do?”
Don’t poke the lion with a stick.
DW:
ChatGPT right now is like a beautiful wild animal, to an extent, but without certain guardrails, it can get out of hand. Lions are beautiful. But without protections and precautions, lions can be scary, right? So what you have to ask is: how could you know it’s telling the truth, unless you have something to measure it against?
With a lot of our developments, we know the right answer, so we can measure ChatGPT against it and show that we have validation. If you’re a company that doesn’t know the right answer, you literally could be publishing… well, lies is a strong word, but non-verified content. Non-verified content that looks as real and convincing as verified content would look – assuming you didn’t know any better.
The other day, we were testing with a stadium, and we asked a question about umbrellas – like, can you take umbrellas into the stadium?
It totally made it up. Like, the truth was the opposite of its answer. But it looked real. And then it added something creative. Like, “Yes, you can bring in an umbrella, but security may take it away from you” or something. It was kind of funny, but that’s the thing.
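The validation Don describes can be sketched in code. This is a minimal illustration, not Satisfi Labs’ actual method: the knowledge base, the question key, and the similarity threshold are all invented for the example, and a real system would use retrieval and embedding-based matching rather than simple string similarity.

```python
# Hypothetical sketch: gate a generated answer against a verified knowledge base,
# so the model's output is only published when it agrees with a known source of truth.
from difflib import SequenceMatcher

# Invented example entry, echoing the stadium/umbrella anecdote.
VERIFIED_KB = {
    "can i bring an umbrella into the stadium?":
        "No, umbrellas are not permitted inside the stadium.",
}

def validate_answer(question: str, generated: str, threshold: float = 0.6):
    """Return (answer, status): publish the generated answer only if it agrees
    with the verified source; otherwise fall back to the verified answer."""
    verified = VERIFIED_KB.get(question.lower())
    if verified is None:
        # No source of truth: flag for human review instead of publishing.
        return None, "no-verified-source"
    similarity = SequenceMatcher(None, generated.lower(), verified.lower()).ratio()
    if similarity >= threshold:
        return generated, "verified"
    # The model contradicted the source of truth: override with the verified answer.
    return verified, "overridden"

# The fabricated answer from the anecdote would be caught and overridden:
answer, status = validate_answer(
    "Can I bring an umbrella into the stadium?",
    "Yes, you can bring in an umbrella, but security may take it away.",
)
```

The design point is the fallback direction: when the generated text and the verified text disagree, the verified text wins, and questions with no verified answer go to a human rather than to the website.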
THQ:
If you get a funny response, you expect there to have been a human being behind the fun? So you’re led to expect human-verified responses.
DW:
And if you’re just plugging ChatGPT into your existing setup, what you’re essentially doing is trusting it with the reputation of your brand.
Which it’s not ready to handle yet.
In Part 2 of this article, we’ll delve deeper into the search for truth – and what it means in the age of generative AI.