ChatGPT bug exposes Redis vulnerability
When ChatGPT was first released in November 2022, there were concerns in some quarters that the advanced chatbot, which had been trained on text scraped from the internet, could be used to write malware. The threat model was that bad actors no longer needed advanced programming skills to write code capable of tricking victims into handing over personally identifiable information (PII). Instead, adversaries could simply prompt ChatGPT with suitable keywords and copy and paste the output, rather than having to puzzle out the programming from scratch. But it turns out that a ChatGPT bug made gathering PII easier still.
Not all cybersecurity experts share the same concerns about the dangers of ChatGPT being used by bad actors to write malware. Threat actors already distribute code and conduct cyberattacks in return for payment – an activity that’s dubbed Malware-as-a-Service (MaaS). And so, the additional cybersecurity risk of ChatGPT is debatable. But that’s not to say that OpenAI’s code is risk-free, as CVE-2023-28858 and CVE-2023-28859 highlight.
Earlier this month, ChatGPT users reported that details shown in their chat history bar weren’t their own. Generative AI is all about creating text and images from prompts, but that creativity shouldn’t spill over into subscriber data. The unusual behavior extended to displaying other subscribers’ names, email addresses, postal addresses, and even partial credit card numbers in placeholders on users’ account pages.
Not my number
Users upgrading from OpenAI’s free research preview of ChatGPT to the paid-for ChatGPT Plus version reported that validation code requests contained telephone numbers and email addresses that they didn’t recognize. The reason for this confusion? A programming error known as a race condition, in which the outcome depends on the timing of concurrent operations: rather than data being served in a logical, predictable order, processes compete for shared resources in an uncoordinated and unpredictable way.
Race conditions can cause programs to crash as code is fed with unexpected or incorrect results. But, depending on the error handling, apps may continue running and treat the erroneous output as genuine. And this appears to be the case for OpenAI’s implementation of its ChatGPT web UI.
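To make the idea concrete, here is a minimal Python sketch of a race condition (an illustration only, not OpenAI’s code). A hundred coroutines each try to increment a shared counter, but because each one yields control between reading and writing, most of the updates are lost:

```python
import asyncio

counter = 0

async def increment():
    global counter
    value = counter          # read the shared state
    await asyncio.sleep(0)   # yield control mid-update: the race window
    counter = value + 1      # write back a now-stale value

async def main():
    await asyncio.gather(*(increment() for _ in range(100)))
    print(counter)  # prints 1, not 100: 99 updates were silently lost

asyncio.run(main())
```

The program doesn’t crash; it simply produces a wrong answer that looks plausible, which is exactly what makes this class of bug so hard to spot.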
“We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history,” wrote OpenAI in a blog post explaining the ChatGPT outage that occurred on 20 March 2023. “It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.”
OpenAI’s tech team traced the race condition to its deployment of Redis – a popular open-source in-memory data store – which ChatGPT uses to cache user information. Redis allows developers to dramatically speed up database queries, API calls, and other common transactions between nodes. And it’s highly scalable. OpenAI uses Redis Cluster to distribute session details over multiple Redis instances, and talks to those instances from its Python server using the redis-py client library.
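A typical cache-aside lookup with redis-py’s asyncio client looks something like the sketch below. The connection details and the load_from_database helper are placeholders for illustration, not OpenAI’s actual code:

```python
import json
import redis.asyncio as redis  # redis-py's asyncio client

r = redis.Redis(host="localhost", port=6379)  # placeholder connection

async def load_from_database(user_id: str) -> dict:
    # Stand-in for the real (slow) database query.
    return {"user_id": user_id, "plan": "plus"}

async def get_session(user_id: str) -> dict:
    cached = await r.get(f"session:{user_id}")
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely
    session = await load_from_database(user_id)
    # Cache the result for five minutes so repeat lookups stay fast.
    await r.set(f"session:{user_id}", json.dumps(session), ex=300)
    return session
```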
Multiprocessing glitch
Information held in OpenAI’s database propagates across to the Redis environment. And requests and responses are managed in a cooperative multitasking fashion thanks to Async IO – a concurrent programming framework built into Python’s standard library. Connections between the database server and the Redis cluster exist as a shared pool, with incoming and outgoing queues. Ordinarily, the system works fine, but an issue can occur if a request is canceled after the command has been pushed onto the incoming queue, but before its response has been popped from the outgoing queue.
Typically, these canceled requests result in an ‘unrecoverable server error’, forcing users to resubmit their request. But not always. As the makers of ChatGPT discovered, if the stray value left behind on the connection happens to be of the same data type as the response the next request expects, the routine treats it as valid – even if it belongs to another user. Adding to the drama, OpenAI’s coders had introduced a change (on 20 March 2023) that caused Redis request cancellations to spike. And with more cancellations, there were more chances that the data types would match.
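The failure mode is easier to see in miniature. In the toy model below (a simulation of the general pattern, not redis-py’s internals), replies are matched to callers purely by their position in the stream, so cancelling one caller mid-flight shifts every later reader onto the wrong reply:

```python
import asyncio

class ToyConnection:
    """One pooled connection: commands go out in order, and replies are
    matched to callers purely by their position in the stream."""

    def __init__(self) -> None:
        self.replies: asyncio.Queue = asyncio.Queue()

    async def execute(self, command: str) -> str:
        # 'Send' the command; the toy server replies after a short delay.
        asyncio.get_running_loop().call_later(
            0.01, self.replies.put_nowait, f"reply to {command}"
        )
        # Read the next reply off the connection, assumed to be ours.
        return await self.replies.get()

async def main() -> None:
    conn = ToyConnection()

    # User A's request goes out, then the caller is cancelled before
    # the reply is read, leaving that reply stranded on the connection.
    task_a = asyncio.create_task(conn.execute("GET session:user_a"))
    await asyncio.sleep(0)     # let A's command reach the 'server'
    task_a.cancel()

    await asyncio.sleep(0.02)  # A's reply arrives anyway

    # User B reuses the connection and receives A's stale reply.
    print(await conn.execute("GET session:user_b"))
    # Prints: reply to GET session:user_a

asyncio.run(main())
```

In the real incident, a stray reply like this was only accepted when its data type happened to match what the next caller expected; otherwise the request failed with the unrecoverable server error described above.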
OpenAI believes that 1.2% of its ChatGPT Plus subscribers who were active during a specific nine-hour window – between 01:00 hrs and 10:00 hrs Pacific Time on the day that the Redis request cancellations spiked – could have been affected. OpenAI notes that the bug only appeared in the Async IO redis-py client for Redis Cluster, which could explain why developers who had implemented other parallel processing schemes may not have observed the same vulnerability.
According to the blog post, OpenAI reached out to the Redis maintainers with a patch to resolve the issue, although a write-up on the topic by Sonatype security researcher Ax Sharma says that testers were able to reproduce the flaw after the fix. However, ChatGPT users can sleep a little easier in the knowledge that OpenAI has added redundant checks to ensure the data returned by its Redis cache matches the user requesting the information.
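OpenAI hasn’t published those checks, but a redundant ownership test of the kind it describes might look something like the following sketch, where the owner’s ID is stored inside the cached payload so that a mismatched entry can be detected and discarded:

```python
import json

def cached_lookup(r, requesting_user: str):
    """Defensive cache read: reject any hit that doesn't belong to the
    user asking for it. An illustrative sketch, not OpenAI's code."""
    raw = r.get(f"session:{requesting_user}")
    if raw is None:
        return None  # cache miss: fall back to the database
    payload = json.loads(raw)
    if payload.get("user_id") != requesting_user:
        # The cache returned someone else's data: evict it and miss.
        r.delete(f"session:{requesting_user}")
        return None
    return payload
```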
Ironically, when ChatGPT first went live, developers were celebrating the advanced chatbot’s ability to find bugs in code. And while a number of static code analysis tools exist that can help to identify potentially risky threading schedules, race conditions are time-sensitive and may only surface in dynamic testing. Microsoft lists a number of tools and techniques for identifying concurrency issues, but ideally apps will be designed to minimize the probability of conflicting events occurring simultaneously, even if that chance is believed to be extremely small.
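One such defensive design, loosely modelled on the approach the redis-py maintainers ultimately took, is to treat cancellation as fatal to the connection itself, so a half-finished round trip can never leave a stray reply behind for the next caller. The conn object and its disconnect method here are assumed interfaces for illustration:

```python
import asyncio

async def safe_execute(conn, command: str) -> str:
    """Run one request/response round trip; if the caller is cancelled
    midway, tear down the connection rather than returning it to the
    pool with a reply still in flight."""
    try:
        return await conn.execute(command)
    except asyncio.CancelledError:
        await conn.disconnect()  # assumed teardown hook on the connection
        raise  # propagate the cancellation to the caller
```

Dropping a connection is more expensive than reusing it, but it trades a little latency on the rare cancelled request for a guarantee that responses can never be delivered to the wrong user.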