chatbots - TechHQ: Technology and business

Oh, Air Canada! Airline pays out after AI accident (21 Feb 2024) https://techhq.com/2024/02/air-canada-refund-for-customer-who-used-chatbot/

  • Ruling says Air Canada must refund customer who acted on information provided by chatbot.
  • The airline’s chatbot isn’t available on the website anymore.
  • The case raises the question of autonomous AI action – and who (or what) is responsible for those actions.

The AI debate rages on, as debates in tech are wont to do.

Meanwhile, in other news, an Air Canada chatbot suddenly has total and distinct autonomy.

Although it couldn’t take the stand, when Air Canada was taken to court and asked to pay a refund offered by its chatbot, the company tried to argue that “the chatbot is a separate legal entity that is responsible for its own actions.”

After the death of his grandmother, Jake Moffat visited the Air Canada website to book a flight from Vancouver to Toronto. Unsure of the bereavement rate policy, he opened the handy chatbot and asked it to explain.

Now, even if we take the whole GenAI bot explosion with a grain of salt, some variation of the customer-facing ‘chatbot’ has existed for years. Whether churning out automated responses and a number to call, or replying with the off-key chattiness now characteristic of generative AI’s output, the chatbot is the first response consumers get from almost any company.

And it’s trusted to be equivalent to getting answers from a human employee.

So, when Moffat was told he could claim a refund after booking his tickets, he took the encouragement at face value and booked flights right away, safe in the knowledge that – within 90 days – he’d be able to claim a partial refund from Air Canada.

He has the screenshot to show that the chatbot’s full response was:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Which seems about as clear and encouraging as you’d hope to get in such circumstances.

He was surprised then to find that his refund request was denied. Air Canada policy actually states that the airline won’t provide refunds for bereavement travel after the flight has been booked; the information provided by the chatbot was wrong.

Want an Air Canada refund? Talk to the bot...

Via Ars Technica.

Moffat spent months trying to get his refund, showing the airline what the chatbot had said. He was met with the same answer: refunds can’t be requested retroactively. Air Canada’s argument was that because the chatbot response included a link to a page on the site outlining the policy correctly, Moffat should’ve known better.

We’ve underlined the phrase that the chatbot used to link further reading. The way that hyperlinked text is used across the internet – including here on TechHQ – means few actually follow a link through. Particularly in the case of the GenAI answer, it functions as a citation-cum-definition of whatever is underlined.

Still, the chatbot’s hyperlink meant the airline kept refusing to refund Moffat. Its best offer was a promise to update the chatbot and give Moffat a $200 coupon. So he took them to court.

Moffat filed a small claims complaint with Canada’s Civil Resolution Tribunal. Air Canada argued not only that its chatbot should be considered a separate legal entity, but also that Moffat should never have trusted it. Because naturally, customers should in no way trust the systems companies put in place to mean what they say.

Christopher Rivers, the Tribunal member who decided the case in favor of Moffat, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Rivers also found that Moffat had no reason to believe one part of the site would be accurate and another wouldn’t – Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” he wrote.

In the end, he ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (CAD) (around $482 USD) off the original fare, which was $1,640.36 CAD (around $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Ars Technica heard from Air Canada that it will comply with the ruling and considers the matter closed. Moffat will receive his Air Canada refund.

The AI approach

Last year, CIO of Air Canada Mel Crocker told news outlets that the company had launched the chatbot as an AI “experiment.”

Originally, it was a way to take the load off the airline’s call center when flights were delayed or cancelled. Read: give customers information that would otherwise be available from human employees – which must be presumed to be accurate, or its entire function is redundant.

In the case of a snowstorm, say, “if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal being to automate every service that did not require a “human touch.”

Crocker said that where Air Canada could, it would use “technology to solve something that can be automated.”

The company’s investment in AI was so great that, she told the media, the money put towards AI was greater than the cost of continuing to pay human workers to handle simple enquiries.

But the fears that robots will take everyone’s jobs are fearmongering nonsense, obviously.

In this case, liability might have been avoided if the chatbot had warned customers that its information could be inaccurate. Not a good look, though, when you’re spending more on the bot than on humans who are at least marginally less likely to hallucinate refund policies out of thin data.

Because it didn’t include any such warning, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.” The responsibility lies with Air Canada for any information on its website, regardless of whether it’s from a “static page or a chatbot.”

This case opens up the question of AI culpability in the ongoing debate about its efficacy. On the one hand, we have a technology that’s lauded as infallible – or at least on its way to infallibility, and certainly as trustworthy as human beings, with their legendary capacity for “human error.” In fact, it’s frequently sold as a technology that eradicates human error (and, sometimes, the humans too) from the workplace.

So established is the belief that (generative) artificial intelligence is intelligent that, when a GenAI-powered chatbot makes a mistake, the blame lies with it, not with the humans who implemented it.

Fears of what AI means for the future are fast being reduced in the public media to the straw man that it will “rise up and kill us” – a line not in any way subdued by calls for AI development to be paused or halted “before something cataclysmic happens.”

The real issue though is the way in which humans are already beginning to regard the technology as an entity separate from the systems in which it exists – and an infallible, final arbiter of what’s right and wrong in such systems. While imagining the State versus ChatGPT is somewhat amusing, passing off corporate error to a supposedly all-intelligent third party seems like a convenient “get out of jail free card” for companies to play – though at least in Canada, the Tribunal system was engaged enough to see this as an absurd concept.

Imagine for a moment that Air Canada had better lawyers, with much greater financial backing, and the scenario of “It wasn’t us, it was our chatbot” becomes altogether more plausible as a defence.

Ultimately, what happened here is that Air Canada refused compensation to a confused and grieving customer. Had a human employee told Moffat he could get a refund after booking his flight, then perhaps Air Canada could refuse – but this is because of the unspoken assumption that said employee would be working from given rules – a set of data upon which they were trained, perhaps – that they’d actively ignored.

In fact, headlines proclaiming that the chatbot ‘lied’ to Moffat are following the established formula for a story in which a disgruntled or foolish employee knowingly gave out incorrect information. The chatbot didn’t ‘know’ what it said was false; had it been given accurate enough training, it would have provided the answer available elsewhere on the Air Canada website.

At the moment, the Air Canada chatbot is not on the website.

Feel free to imagine it locked in a room somewhere, having its algorithms hit with hockey sticks, if you like.

It’s also worth noting that while the ruling was made this year, it was 2022 when Moffat used the chatbot, which is back in the pre-ChatGPT dark ages of AI. While the implications of the case impact the AI industry as it exists here and now, the chatbot’s error in itself isn’t representative, given that it was an early example of AI use.

Still, Air Canada freely assigned it the culpability of a far more advanced intelligence, which speaks to perceptions of GenAI’s high-level abilities. Further, this kind of thing is still happening:

"Howdy doodley doo!" The chipper nature of chatbots often disguises their data or algorithm flaws.

“No takesies backsies.” There’s that chatbot chattiness…

Also, does it bother anyone else that an AI chatbot just hallucinated a more humane policy than the human beings who operated it were prepared to stand by?

A privacy-first chatbot that’s anything but secretive (8 Dec 2023) https://techhq.com/2023/12/what-data-is-amazon-chatbot-launch-q-revealing/

  • Amazon chatbot released at the beginning of December – a year after some competitors. 
  • The chatbot has run into problems, and could leak confidential data. 
  • It’s also reported to be hallucinating significantly.

Amazon recently launched an AI chatbot called Amazon Q. As has been the case with almost every chatbot launch in the year of undue haste that was 2023, problems have been identified very quickly after its launch. In this instance, the Amazon chatbot is “experiencing severe hallucinations and leaking confidential data.”

That’s right – far from being a weapons and gadgets expert, Amazon Q is a spy on LSD.

The revelation came not – as is often the case with malfunctioning chatbots – from internet trolls posting their workarounds, but from leaked internal documents (and given its habit of radical indiscretion, you have to wonder whether Amazon Q leaked them itself!). The documents, reported by Platformer, say that the leaked data includes the location of Amazon Web Services (AWS) data centers around the world.

An Amazon spokesperson told us, contrary to the report, “Amazon Q has not leaked confidential information.”

Amazon’s chatbot also allegedly revealed internal discount programs and unreleased features. The incident was marked as “sev 2,” enough of an issue to warrant paging engineers at night and making them work through the weekend. Observers could be forgiven for noting that Amazon warehouse working patterns finally being applied to the higher-ups is how you know the problem was significant.

“Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon,” a spokesperson said. “No security issue was identified as a result of that feedback. We appreciate all of the feedback we’ve already received and will continue to tune Q as it transitions from being a product in preview to being generally available.”

It’s worth remembering that Amazon pitched Q as a more security- and privacy-focused alternative to other generative AI chatbots.

Amazon chatbot, Q, is experiencing significant issues, just a month after its launch.

Amazon’s chatbot, Q, was launched at the AWS developers conference at the start of the month.

Adam Selipsky, CEO of Amazon Web Services, told the New York Times that companies “had banned these AI assistants from the enterprise because of the security and privacy concerns.” In response, the Times reported, “Amazon built Q to be more secure and private than a consumer chatbot.”

Whether these features were meant to include the security and privacy of AWS is another story. Internal documents claim Q has given misleading answers on digital sovereignty and other issues.

“Expect the Q team to be very, very busy for a while,” one employee said in a Slack channel seen by Platformer. “I’ve also seen apparent Q hallucinations I’d expect to potentially induce cardiac incidents in Legal.”

Amazon is very cagey about the locations of its vast data center footprint, which is made up of its own builds and wholesale leases. As of December 2022, AWS revealed it owns 15.4 million square feet of data center space around the world, and leases another 18 million.

A few years ago, WikiLeaks published internal Amazon documents showing the locations of its data centers as of 2015. A 20-page document included a map tagged with locations for easier viewing.

Is the Amazon chatbot running late?

The issues with the Amazon chatbot come at a time when the company is fighting against the perception that Microsoft and Google have beaten it in the AI arms race. After announcing it would spend $4 billion on AI startup Anthropic, Amazon revealed Q at its annual Amazon Web Services developer conference.

As these things go, chatbots are arguably fairly passé by now. Microsoft’s involvement in ChatGPT, which began making headlines just over a year ago, means Amazon might be thought of as falling (way) behind. To successfully launch a first chatbot in December 2023, Amazon needs to give potential business customers a cogent answer to the question of what it does over and above the chatbots that have been out there for a year, developing and overcoming their data-nightmare teething troubles.

Q is acting like an AI 2-year-old – no filter and a wild imagination.

The chatbot offering from Amazon has been presented as an enterprise-software version of ChatGPT. Initially, it would answer questions from developers about AWS, edit source code and cite sources. Yes, it’s competing with Microsoft and Google, but it’s priced lower. ChatGPT Prime, if you like.

Another selling point would have been its increased security – but the information leaks reported internally make that significantly harder to sell.

Still, the risks presented by Q, outlined in the document, are actually typical of LLMs, all of which return incorrect or inappropriate responses some of the time. So, it’s not as though Amazon’s chatbot is worse than the others – it’s just that it’s at least as notably deranged, in a very public way, from a company that’s a year late to the party and that marketed it on the basis of increased security. The equation of saleability there is hard to solve without significant reparenting of the model.

And, of course, there’s that inconvenient detail that its responses threaten to spill one of the best kept secrets in the data center sector: where AWS is hidden.

What happens next is anyone’s guess – but there will clearly be some extensive overtime worked in what we like to think of as Q Branch at Amazon, trying to get the privacy-centered chatbot to shut up.

New horizons with OpenAI GPT updates (20 Nov 2023) https://techhq.com/2023/11/how-are-openai-gpt-updates-opening-opportunities-for-businesses/


• OpenAI GPT has new updates for businesses.
• OpenAI is introducing variants of GPT that can be individually trained.
• OpenAI and Microsoft are partnering on these new developments.

A new bag of tricks has been opened up by OpenAI as the company announces new updates that will affect all users. Now, it is possible to produce a personalized version of the ChatGPT AI chatbot, which can also be shared with others.

These “GPT” personal chatbots were announced on Monday, November 6th at an OpenAI Developer event in San Francisco. At the “DevDay” event, OpenAI CEO Sam Altman created a bot that provided startup advice in only a few minutes, echoing Mr Altman’s earlier role as head of Y Combinator, an organization that funded early-stage startups.

These GPTs are created through natural language conversations. They can be used, for instance, to design logos and stickers, or to teach certain subjects to children.

OpenAI store to pay creators

It has also been announced that OpenAI will launch a store later this month. Here, users can upload a customized GPT for personal or professional use. The most-used GPTs will then receive a portion of OpenAI’s revenue; another indication of how the company is striving to democratize and create a product ecosystem. The store will feature a system that will inspect GPTs for compliance with OpenAI’s user policies.

Coming practically immediately – an OpenAI GPT Store.

OpenAI released a statement saying, “Starting today, you can create GPTs and share them publicly. Later this month, we’re launching the GPT Store, featuring creations by verified builders. Once in the store, GPTs become searchable and may climb the leaderboards. We will also spotlight the most useful and delightful GPTs we come across in categories like productivity, education, and ‘just for fun.’ In the coming months, you’ll also be able to earn money based on how many people are using your GPT.”

The emergence of this GPT Store marks the arrival of a potentially transformative marketplace, offering a future where creativity aligns with monetization.

It seems OpenAI is taking a leaf out of Apple’s book. Apple launched its own App Store one year after the release of the iPhone in 2007. Now, OpenAI is launching a store one year after the release of ChatGPT. The aim? To expand the ecosystem and access to AI tools and services. What will be challenging is ensuring that all GPTs released on the store have rigorous data sourcing policies to ensure that, for instance, GPTs on the store can all be trusted to comply with regulations and deliver authentic results within the needs of the buying community.

It’s somewhat hard to believe that OpenAI’s ChatGPT was only launched in November 2022. In under a year, the AI organization has revolutionized the industry, showcasing the possibilities associated with sophisticated types of generative AI, and large language models.

Major updates for ChatGPT Plus

These new changes are initially being rolled out to Enterprise and ChatGPT Plus users. OpenAI plans to release them to a wider audience in the as-yet unspecified “near future.”

GPT-4 comes with a bouquet of extras for ChatGPT Plus subscribers, and now most of these have been combined into OpenAI’s most robust model yet. Until now, users have had to select each feature from a drop-down list, but the streamlined approach makes everything simpler to access.

When Plus subscribers select GPT-4, they can now access Advanced Data Analysis as well as DALL-E 3’s image generation.

GPT-4 is still unable to use plugins, though; because of the sheer variety of plugins available, they might conflict with the model’s newly integrated capabilities.

GPT-4 Turbo

Another notable announcement at OpenAI’s DevDay conference was the unveiling of a new AI model dubbed “GPT-4 Turbo.” This model has more up-to-date knowledge than GPT-4, with information running to April 2023 rather than September 2021. Moreover, Turbo users are able to insert over 300 pages of text in a single prompt, thanks to a 128k context window.

According to OpenAI, “GPT-4 Turbo is available for all paying developers to try by passing gpt-4-1106-preview in the API, and we plan to release the stable production-ready model in the coming weeks.”
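For developers, that means the preview model is essentially a drop-in model-name swap. Here is a minimal sketch, assuming the v1.x openai Python client and an OPENAI_API_KEY in the environment; the prompt itself is invented for illustration.

```python
# Minimal sketch of calling the GPT-4 Turbo preview named in OpenAI's statement
# above. Assumes the v1.x openai Python client; the prompt is an invented example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the preview identifier quoted above
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the attached 300-page contract in five bullet points."},
    ],
)
print(response.choices[0].message.content)
```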

It was also reported that prices are to be cut across various OpenAI models, in an aim to stay ahead of competitors such as the Google-backed Anthropic and Elon Musk’s xAI. GPT-4 Turbo, for instance, is set to be three times cheaper than GPT-4 for input tokens.

Microsoft and OpenAI – a thriving partnership

There is little doubt that generative AI is and will continue to be profitable. This is evident as over 92% of Fortune 500 companies now utilize ChatGPT, a rise of 12% from August this year. This indicates a growing inclination to invest in generative AI.

Microsoft is a major investor in OpenAI, and has partnered closely with the company since 2019. OpenAI’s models power Azure OpenAI, a popular service on Microsoft’s cloud platform, which in turn feeds the company’s productivity applications and AI-powered search.

This tight bond between the two companies was evident at the conference, with Microsoft CEO Satya Nadella appearing briefly to say, “we love you guys.” Afterwards, Sam Altman said, “I think we have the best partnership in tech, and we’re excited to build AGI together.”

While events since the conference, including the removal of Altman from his OpenAI position and his taking a role in a “new advanced AI team” at Microsoft, would seem to give the lie to the “best partnership in tech” line, it’s clear that there’s a close continuing relationship between OpenAI and Microsoft.

OpenAI and Microsoft – still the best of partners?

According to Mr Nadella, the initial objective for the “partners” is to leverage systems like Azure’s cloud computing platform and supercomputers. This can then help OpenAI develop superior AI models. With such a powerful partnership, expect seismic leaps in the fields of artificial general intelligence over the next few years.

The journey of GPTs and assistants is only just beginning, according to Sam Altman. “Over time, GPTs and assistants are precursors to agents [that] are going to do much more. They’ll gradually be able to plan and perform more complex actions on your behalf.”

Considering ChatGPT now reaches an estimated 100 million weekly users, it is no surprise OpenAI has made these changes to bolster its overall user experience.

Security concerns arise with recent GPT updates

The new OpenAI GPT updates, such as the Code Interpreter, have raised security concerns because of critical flaws in ChatGPT’s file-upload feature. Research has highlighted vulnerabilities that let malicious websites prompt the chatbot into executing code.

OpenAI GPT updates in a nutshell.

Although exploitation requires specific user actions, such as pasting a malicious URL, the risk persists: it has been found that ChatGPT can unwittingly send data to external servers. The flaw is alarming because ChatGPT, capable of executing commands and handling Linux-based files, should ideally never act on external instructions.

This loophole presents a significant security risk, but OpenAI has not commented on this matter yet. Therefore, a cloud hangs over the recent highlights of GPT personal chatbots and GPT-Turbo, with concerns about the AI’s vulnerability still unaddressed.

Practical ways to use AI to improve your organization’s finances (31 Oct 2023) https://techhq.com/2023/10/best-ways-ai-improve-business-finances/


Sponsored by Ramp

AI has played a starring role in most functions over the past year, with countless products hitting the market promising to revolutionize how things are done. The finance department is no different; from automated fraud detection to intelligent chatbots providing real-time spend analysis, the technology certainly has the potential to relieve repetitive tasks from personnel.

However, while AI certainly has promise in finance teams, so far, the tangible impact of these products has been minimal. Eric Glyman, the co-founder and CEO of Ramp, told Bloomberg: “I think far more companies are marketing the use of AI, how their chatbot’s going to revolutionize the industry, than are truly trying to solve problems.

“I’ve never met a customer who said, ‘I just wish I could chat with my bank account, I would ask it questions and learn things.’ Instead, they tend to ask things like, ‘I’d like to pay less for this service I’m paying for, I’d like to automate my accounting and close my books quicker.’

“I think, often, if you see companies claiming AI but you can’t find real customers behind it, they’re not talking about how they’re integrating AI truly into the workflow, their software is probably ‘AI washing’.”

AI products that don’t solve actual pain points are examples of AI washing. In these solutions, automation might be too complex to set up or not fit for purpose, so accounting teams remain bogged down by basic operational tasks like chasing employees for receipts and coding transactions manually, while finance leaders are still tethered to stale data for important business decisions.

Many companies market AI without genuinely integrating it into their products’ workflows, and finance leaders may not become aware of this until after investing in it. As a result, they become skeptical over whether they’ll actually see any payback from their investment. In 2020, a BCG and MIT study found that only 10 percent of organizations saw “significant financial benefits” through increased revenue or cost savings after implementing an AI solution.

So how can finance teams select AI that’s actually worth their while? Before investing in a new AI-powered solution, it is wise to audit the organization’s needs. This way, decision-makers can be sure that the solution aligns with actual pain points and workflow requirements, preventing costly missteps and disappointment.

Auditing AI software for your finance team

  1. Will it speed up work?

A good place to start is thinking about how it can speed up work. AI has the potential to take on demand/revenue forecasting, anomaly and error detection, and financial reporting, to name just a few of its abilities. As a result, finance professionals can shift their focus to more value-added activities, such as strategic analysis and decision-making.

Mark D. McDonald, senior director of research at the Gartner Finance Practice, said: “Forecasting is a popular use case in finance departments because legacy processes are manually intensive and notoriously unreliable. AI excels at automation and improving accuracy.

“Many pre-configured software packages address common finance processes such as accounts receivable and accounts payable but be aware that use cases which address unique business needs, such as forecasting, will require some internal skills to build.”

Reporting is another common time-consuming area for finance teams. Generic chatbots can be marginally useful, but those actually trained on a company’s data can perform deeper analysis to provide valuable and actionable insights. Access to such insights will also be accelerated if the bot responds to natural language commands.

  2. Will it increase accuracy?

As Mr McDonald highlighted, another aspect to look out for in AI-enhanced finance products is how it can make things more accurate. Machine learning algorithms can identify and organize data from various sources in a single place, reducing the scope for human errors. In finance, this encompasses coding expenses, checking for inconsistencies in financial statements, and ensuring compliance with regulations. However, as before, a product that offers these features will only be a worthwhile investment if it aligns with the organization’s specific needs and challenges.

AI in finance

Source: Shutterstock

For example, if finance teams are suffering from long close processes, look for finance software that offers accounting AI. These days, expense coding can be automated with an algorithm trained on coding data from tens of thousands of accountants. Human errors in expense reporting could be eliminated with AI-scanned receipts that are instantly matched to transactions and suggested memos from each receipt’s context.
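As a rough sketch of the kind of receipt-to-transaction matching described, though not Ramp’s actual algorithm, and with invented fields and thresholds, a receipt can be paired with a card transaction on amount, date proximity, and merchant-name similarity:

```python
# Illustrative receipt-to-transaction matching -- not any vendor's real system.
# Matches on exact amount, date proximity, and merchant-name overlap.
from dataclasses import dataclass
from datetime import date
from difflib import SequenceMatcher

@dataclass
class Receipt:
    merchant: str
    total: float
    day: date

@dataclass
class Transaction:
    merchant: str
    amount: float
    day: date

def match_score(r: Receipt, t: Transaction) -> float:
    amount_ok = abs(r.total - t.amount) < 0.01
    days_apart = abs((r.day - t.day).days)
    name_sim = SequenceMatcher(None, r.merchant.lower(), t.merchant.lower()).ratio()
    return (1.0 if amount_ok else 0.0) + name_sim - 0.1 * days_apart

def best_match(receipt: Receipt, txns: list[Transaction]) -> Transaction | None:
    scored = sorted(txns, key=lambda t: match_score(receipt, t), reverse=True)
    # 1.2 is an invented cut-off; a real system would calibrate this on labelled data
    return scored[0] if scored and match_score(receipt, scored[0]) > 1.2 else None

receipt = Receipt("Acme Coffee Co", 14.50, date(2023, 10, 2))
txns = [Transaction("ACME COFFEE", 14.50, date(2023, 10, 3)),
        Transaction("Cloud Hosting Inc", 99.00, date(2023, 10, 1))]
print(best_match(receipt, txns))
```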

“Automating back-office workflows is a key to achieving efficiency gains across a number of areas, including accounts payable, accounts receivable, and internal IT services, such as helpdesk support,” said Randeep Rathindran, vice president of research in the Gartner Finance Practice. “In a cash-constrained environment, where margins are under pressure, the urgency to improve productivity in these areas is heightened.”

  3. Will it save money?

Finally, when auditing the company’s needs for an AI-enhanced solution, business leaders must consider where it could save money. Productivity increases could present savings indirectly, but there are potential direct benefits to be accrued, too.

AI has the potential to make vendor price comparisons for accounts teams, for example. Ramp’s finance AI offers price intelligence that helps finance leaders get the best deal by bringing the wisdom of the crowd to software pricing. Finance teams can upload software contracts and, powered by GPT-4, Ramp extracts pricing details and benchmarks them against millions of Ramp transactions. This provides visibility into software pricing down to individual SKUs and cost per seat, so finance teams instantly understand whether they’re getting a fair price.

AI can also catch out-of-policy spending by analyzing expenses against established guidelines and alerting finance teams in real-time, potentially saving the organization from resource-intensive compliance issues.
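A minimal sketch of that kind of policy check, with invented categories and limits rather than any real company’s rules, might look like this; a production system would layer ML-based anomaly detection on top of such rules.

```python
# Toy policy check: flag expenses that exceed their category limit so they can
# be alerted on in real time. Categories and limits are invented examples.
POLICY_LIMITS = {"meals": 75.00, "travel": 500.00, "software": 200.00}

def flag_out_of_policy(expenses: list[dict]) -> list[dict]:
    """Return the expenses exceeding their category limit, annotated with the overage."""
    flagged = []
    for e in expenses:
        limit = POLICY_LIMITS.get(e["category"])
        if limit is not None and e["amount"] > limit:
            flagged.append({**e, "over_by": round(e["amount"] - limit, 2)})
    return flagged

expenses = [
    {"employee": "jdoe", "category": "meals", "amount": 120.00},
    {"employee": "asmith", "category": "software", "amount": 49.00},
]
print(flag_out_of_policy(expenses))  # only the $120 meal is flagged
```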

The demand for well-integrated, useful products in accounting and finance is there. A report from Accenture found that 84 percent of C-suite executives believe they must leverage AI to achieve their growth objectives. Ramp’s finance platform has been built with AI as a core component from day one to ensure that all its features address real challenges for finance professionals and help organizations scale successfully. To explore the AI capabilities of Ramp and witness the impact it can make on your business, schedule a demo with the team today.

Will OpenAI join other tech giants in making its own AI chips? (10 Oct 2023) https://techhq.com/2023/10/will-openai-make-its-own-ai-chips/

  • OpenAI plans to work on its AI chips to avoid a costly reliance on Nvidia.
  • It’s been rumored to be considering an acquisition to speed the process.
  • Either way, bringing a custom chip to market could take years.

The AI chips market has emerged as a pivotal force in the technology industry, driven by the increasing demand for AI solutions across various applications. These specialized chips, designed specifically for AI workloads, have become the cornerstone of AI innovation. Companies like Nvidia, Intel, and AMD compete fiercely to produce cutting-edge AI chips that offer unmatched computational power, energy efficiency, and scalability. 

Nvidia has specifically played a pioneering role in the AI chips market, establishing itself as a leading provider of high-performance Graphics Processing Units (GPUs) tailored for AI and machine learning (ML) workloads. The H100, announced last year, is Nvidia’s latest flagship AI chip, succeeding the A100, a roughly US$10,000 chip called the “workhorse” for AI applications. Developers today primarily use the H100 to build so-called large language models (LLMs), which are at the heart of AI applications like OpenAI’s ChatGPT. 

Running those systems is expensive and requires powerful computers to churn through terabytes of data for days or weeks. They also rely on hefty computing power to generate text, images, or predictions for the AI model. Training AI models, especially large ones like GPT, requires hundreds of high-end Nvidia Graphics Processing Units (GPUs) working together.

The thing is, most of the world’s AI chips are being produced by Nvidia. Especially since OpenAI’s ChatGPT has become a global phenomenon, the California-based chipmaker has become a one-stop shop for AI development, from chips to software to other services. The world’s insatiable hunger for more processing power has even pushed Nvidia to become a US$1 trillion company this year. 

Nvidia vs. AMD vs. Intel: Comparing AI Chip Sales. Source: Visual capitalist

While tech giants like Google, Amazon, Meta, IBM, and others have also produced AI chips, Nvidia accounts for more than 70% of AI chip sales today. It holds an even more prominent position in training generative AI models, according to the research firm Omdia. For context, OpenAI’s ChatGPT runs on a Microsoft supercomputer that uses 10,000 Nvidia GPUs.

But from OpenAI’s point of view, while Nvidia is necessary for the company’s current operations, the dependency may need to be revised in the long term. Because if, as Bernstein analyst Stacy Rasgon says, each ChatGPT query costs the company around 4 cents, the amount will only continue to grow alongside the usage of ChatGPT. 

The report by Reuters in which Rasgon is quoted also said that if OpenAI’s query volume grew to a tenth of Google’s over time, the company would require US$48 billion worth of GPUs to scale to that level.

Beyond that, it must spend US$16 billion annually to keep up with demand. This is an existential problem for the company – and the industry at large. With that in mind, the company has been considering working on its own AI chips to avoid a costly reliance on Nvidia.
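As a back-of-the-envelope illustration of why that bill grows with usage: only the roughly four-cents-per-query figure comes from the analyst estimate above; the query volume below is an assumption made purely for illustration.

```python
# Back-of-the-envelope inference-cost sketch. Only the ~4-cent-per-query figure
# comes from the Bernstein estimate quoted above; the daily volume is assumed.
cost_per_query_usd = 0.04          # Stacy Rasgon's per-query estimate
queries_per_day = 100_000_000      # assumed volume, for illustration only

daily_cost = cost_per_query_usd * queries_per_day
annual_cost = daily_cost * 365
print(f"Daily inference bill:  ${daily_cost:,.0f}")
print(f"Annual inference bill: ${annual_cost:,.0f}")  # ~$1.5bn/yr at these assumptions
```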

Even CEO Sam Altman has indicated that the effort to get more chips is tied to two major concerns: a shortage of the advanced processors that power OpenAI’s software and the “eye-watering” costs associated with running the hardware necessary to power its efforts and products.

OpenAI’s plans on AI chips: To make or not to make?

According to recent internal discussions described to Reuters, the company has been actively discussing this matter but has yet to decide on the next step. The discussion has so far centered on options to solve the shortage of expensive AI chips that OpenAI relies on.

Will OpenAI create its own AI chips? If so, how, and by when?

Simultaneously, there have been rumors that Microsoft is also looking in-house and has accelerated its work on ‘Codename Athena,’ a project to build its own AI chips.

While it’s unclear if OpenAI is on the same project, Microsoft reportedly plans to make its AI chips available more broadly within the company as early as this year.

A report by The Verge indicated that Microsoft may also have a roadmap for the chips that includes multiple future generations. A separate report suggested that the chip reveal will likely occur at Microsoft’s Ignite conference in Seattle in November. Athena is expected to compete with Nvidia’s flagship H100 GPU for AI acceleration in data centers if that comes through. 

“The custom silicon has been secretly tested by small groups at Microsoft and partner OpenAI,” according to Maginative’s report. However, if OpenAI were to move ahead to build a custom chip independent of Microsoft, it would include a heavy investment that could amount to hundreds of millions of dollars a year in costs – with no guarantee of success.

What about an acquisition?

While the company has been exploring making its own AI chips since late last year, sources claim that the ChatGPT maker has evaluated a potential acquisition target. Undoubtedly, the acquisition of a chip company could speed the process of building OpenAI’s chip – as it did for Amazon.com with its acquisition of Annapurna Labs in 2015.

“OpenAI had considered the path to the point where it performed due diligence on a potential acquisition target, according to one of the people familiar with its plans,” Reuters stated. Even if OpenAI has plans for a custom chip – including an acquisition – the effort will likely take several years.

In short, whatever the path may be, OpenAI will still be highly dependent on Nvidia for a while.

The synergy of AI and human agents: A new era in Customer Support (9 Aug 2023) https://techhq.com/2023/08/contact-center-future-ai-chatbots-freshworks/

Colin Crowley, the Senior Director of Customer Engagement at Freshworks, explains why, despite advances in AI, contact centers will always need the human touch.


From chatbots and virtual assistants that offer instant, personalized assistance to predictive analytics that anticipate customer needs, AI is redefining the very essence of customer interactions.

Automated systems are so intuitive that they can sort out simple queries without anyone having to wait in a tedious call queue. Human agents can now concentrate on customer issues that could only be solved with their skills, with AI-powered tools allowing them to do so more efficiently than ever.

TechHQ spoke to Colin Crowley, the Senior Director of Customer Engagement at contact center systems specialist Freshworks, about some of the challenges facing businesses as they work to improve their CX.

“On a general basis, a lot of these AI-powered technologies help to resolve one of the age-old battles in the customer support world between efficiency and quality,” he said.

The typical AI chatbot can help to increase a company’s efficiency but automated, smart systems can’t handle every inquiry. However, forgoing them entirely results in human agents dealing with a high volume of menial issues, leading to unnecessarily long wait times.

“Typically, in the past, efficiency and quality were at loggerheads, where you would have to sacrifice one for the other. And, typically, you were sacrificing the former for the latter because it’s easier to quantify and monetize efficiency.”

In a contact center, efficiency is measured, traditionally, with metrics like the number of contacts an agent makes each hour or the average issue-handling time. However, an assessment of quality would involve evaluating a myriad of factors, including the agent’s communication skills, problem-solving abilities, and customer satisfaction.

Mr Crowley said: “I think what AI has done is really change that up entirely, so we now live in a world where you’re capable of increasing quality and efficiency at the same time; it’s no longer a choice between one or the other.”

Quality can now be metricized through AI-powered sentiment analysis, which gives a reading of how a customer is feeling about a conversation, and scoring based on specific criteria or keywords, allowing agents’ performance to be easily monitored at scale.
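As an illustration of what that scoring might look like, here is a minimal sketch, not Freshworks’ implementation: it combines an off-the-shelf sentiment model from the transformers library with a couple of invented courtesy-phrase checks.

```python
# Illustrative only -- not Freshworks' implementation. Scores a support
# conversation with an off-the-shelf sentiment model plus simple keyword checks.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

REQUIRED_PHRASES = ["sorry for the inconvenience", "is there anything else"]  # invented criteria

def score_conversation(agent_turns: list[str], customer_turns: list[str]) -> dict:
    closing = sentiment(customer_turns[-1])[0]           # how the customer ended up feeling
    agent_text = " ".join(agent_turns).lower()
    courtesy = sum(p in agent_text for p in REQUIRED_PHRASES) / len(REQUIRED_PHRASES)
    return {"closing_sentiment": closing["label"],
            "confidence": round(closing["score"], 3),
            "courtesy_score": courtesy}

print(score_conversation(
    agent_turns=["Sorry for the inconvenience, I've reissued the refund. Is there anything else?"],
    customer_turns=["This has taken weeks.", "Thanks, that sorted it."],
))
```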

The best way to strike a balance is to allow chatbots to solely deal with high-volume, straightforward queries, said Mr Crowley.

He added: “That enables customer support teams increasingly to be, I guess you could say, ‘human’ at scale. It empowers them to be more personalized and empathetic and, at the same time, they can handle more contacts simultaneously.”

Companies can also implement A/B testing to achieve that balance, identifying whether a bot or a human agent works better for ‘grey area’ customer inquiries. For example, early in the sales process, questions from prospective customers are likely to be simple ones that a chatbot could manage, but the human touch could also help boost brand image and drive conversions.

“AI-powered technology can help you look at the qualitative difference in those conversations […] It makes the chatbots able to handle a lot more in a way that doesn’t make you feel like you’re in an endless loop.”
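As a sketch of how such an A/B test over grey-area inquiries might be wired up (illustrative only, not a Freshworks feature; the conversion rates are invented numbers):

```python
# Toy A/B test: route 'grey area' pre-sales questions to a bot or a human,
# then compare conversion rates per arm. Conversion probabilities are invented.
import random

def route(inquiry_id: int) -> str:
    """Alternate assignment; a real test would randomize and log the arm."""
    return "bot" if inquiry_id % 2 == 0 else "human"

ASSUMED_CONVERSION = {"bot": 0.18, "human": 0.22}  # invented rates for the demo

outcomes = [(route(i), random.random() < ASSUMED_CONVERSION[route(i)])
            for i in range(10_000)]

for arm in ("bot", "human"):
    arm_results = [converted for a, converted in outcomes if a == arm]
    print(f"{arm}: {len(arm_results)} inquiries, "
          f"conversion {sum(arm_results) / len(arm_results):.1%}")
```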

 

 

Similarly, an agent-facing bot could flag issues that a customer may have with the service when they reach out with an unrelated query. The agent can then offer to solve this issue too.

AI tools can further benefit human support agents with real-time sentiment analysis. This is particularly beneficial when they must quickly move between multiple conversations requiring different sensitivity and empathy levels.

Mr Crowley said: “Agent-facing bots can be there to help correct your tonality when you’re communicating back and forth with customers to make sure that, based on the mood and the sentiment of the customer, you’re communicating in such a way that is most acclimated to what the customer needs to hear at that particular time.”

For managers, more advanced, AI-powered data analytics can give them a unified view of agent performance, helping to identify areas of improvement. It is common for customer support organizations to sit on top of swathes of unused data, largely because they do not know how to gather it or turn it into actionable insights.

“That’s a huge area where AI can help because it can analyze all of this […] data coming in from customer contacts on the fly and help to funnel that back into the operations, product, or web team to improve your service or product,” said Mr Crowley.

The technology can also help agents offer benefits to customers while remaining mindful of company finances.

Mr Crowley said: “It’s hard to know what’s the best discount or amount of credits to give someone, and when you’re giving too much or too little.

“Based on certain criteria of the customer, which could be their loyalty tier, prior contact history, or how many orders they’ve had previously, that agent-facing bot could recommend the perfect accommodation that satisfies the customer and, at the same time, is conscientious about the company’s pocketbook.”

While there are many benefits to applying AI solutions, doing so effectively is challenging. Often the data necessary for them to work comes from disparate company systems, and it is time-consuming to collate it, especially in real-time and at scale.

The solutions can also involve a significant upfront financial investment without an easily measurable ROI, which has traditionally held companies back. However, Mr Crowley says that this is likely to decrease in the near future.

“Like with ChatGPT, we are seeing widely accessible technology that’s democratizing access to AI,” he said. “At Freshworks, we’re trying to build AI into the product from the get-go. This means you don’t have to balance three separate vendors for this and that AI technology and deal with the data siloing that takes place and extra price tags.”

Advancements in AI solutions are coming thick and fast, and customers now have hyper-personalized experiences in their dealings with all kinds of businesses. As a result, their expectations for support are constantly being raised.

One way Mr Crowley said meeting increased demands could be achieved is by moving some agents, who have a reduced workload thanks to lower-level issues being taken over by chatbots, into an “innovation team.”

“Agents can move into more specialized roles where they help to be pioneers in company technology by just keeping up with that technology curve and helping to pilot programs,” he said.

Other roles could be created for agents which involve performing quality assurance on the chatbots and managing their conversational flow. “That’s a great way to utilize agents who are really skilled,” said Mr Crowley.

“The technology can actually create more career opportunities for agents within a customer support organization.”

The role of the support agent itself will also evolve with the integration of AI. Mr Crowley said: “The average customer support agent is going to be handling specialty tasks more and more because most of the lower-level issues are going to be taken care of.

“That entirely changes the customer support role from being […] this lower-level role, where you’re just a reactive receptacle of people’s complaints, to a role for people who are more skilled, more specialized, and paid better.

“You could say that it actually improves the agent experience too, because typically agents don’t like handling very mundane issues because it’s not very satisfying.”

However, it is necessary to pair the launch of AI tools with a clear plan about how the support agent role will change and remain a part of the company structure.

Any investment in customer-facing AI technologies should also be matched by investment into projects which support agents. After all, by having a bot at their fingertips which helps them manage four customer interactions simultaneously, they will reap benefits from their faster response time in the form of higher satisfaction scores.

“That’s a great way to condition customer support agents to see AI technology as something that enhances their experience and empowers them to do better without being burnt out, rather than seeing AI as something that takes their jobs away,” said Mr Crowley.

He added that integrating AI technologies into customer support organizations will influence their strategies more broadly. The success of chatbots could see more of a focus on conversational engagement through channels like Facebook Messenger and Direct Messages on Twitter.

Mr Crowley said: “These are all channels that are relatively real-time but with an asynchronous back and forth and are more immediately attached to everyone’s social life. AI is making conversational engagement disproportionately more efficient and higher quality than other contact channels, and that will push more companies to go towards those channels in higher degrees.”

AI will also level the playing field between smaller and larger companies, as readily available AI solutions mean firms can compete on customer service efficiency in a way that was not previously possible.

He said: “That’s going to force, I think, a lot of these bigger companies to have better customer service. The smaller companies would traditionally tend to have better customer service because they are spending more on that as a differentiator. So AI will help increase competition on the customer service side with the bigger companies.”

OpenAI quietly bins its AI detection tool – because it doesn’t work (27 Jul 2023) https://techhq.com/2023/07/ai-classifier-tool-cancelled-ineffective/

  • OpenAI reveals it is scrapping ‘AI Classifier’ in a blog post addition
  • When announced, it could correctly identify 26% of AI-written text
  • Other detection tools are not much better – we look at why this is

OpenAI, the creator of the world’s most famous large language model (LLM), ChatGPT, has halted work on its AI detection tool. Dubbed ‘AI Classifier’, the tool was first announced via a blog post back in January, where OpenAI claimed it could “distinguish between text written by a human and text written by AIs from a variety of providers”.

Funnily enough, this blog post is also where the company revealed that it would no longer be working on AI Classifier, in an addendum published on July 20. It says that the ‘work-in-progress’ version that was available to try out is no longer available “due to its low rates of accuracy”.

Researchers in San Francisco appear to have struggled to improve the system since it was first unveiled in January, when it could only correctly identify 26% of AI-written text and produced false positives 9% of the time. The recent blog post addition claims that they are now focusing on “more effective provenance techniques for text” but does not share where these techniques will be implemented once found.
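To put those two percentages in context, here is a minimal sketch of how any detector can be evaluated on labelled samples; the stub detector and the two samples below are invented purely for illustration.

```python
# Sketch of evaluating a detector on labelled samples: the 26% and 9% figures
# above are, respectively, a true-positive rate and a false-positive rate.
def evaluate(detector, samples):
    """samples: list of (text, is_ai_written). Returns true/false positive rates."""
    tp = fp = ai_total = human_total = 0
    for text, is_ai in samples:
        flagged = detector(text)
        if is_ai:
            ai_total += 1
            tp += flagged
        else:
            human_total += 1
            fp += flagged
    return {"true_positive_rate": tp / ai_total,
            "false_positive_rate": fp / human_total}

# Invented stand-in detector: flags very uniform, short-sentence text.
detector = lambda text: all(len(s.split()) < 12 for s in text.split(".") if s.strip())
samples = [("The report was generated. It covers Q3. Metrics improved.", True),
           ("Honestly, I wasn't sure what to make of the quarter until I dug into the regional numbers.", False)]
print(evaluate(detector, samples))
```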

OpenAI is not alone in struggling to create a beast to conquer its own ChatGPT and the clones of varying intelligence. A 2023 study from the Technical University of Darmstadt in Germany found that the most efficient tool could only correctly identify AI-generated text less than 50 per cent of the time.

AI Classifier is no longer available “due to its low rates of accuracy”. Source: OpenAI

Some AI-based spoofing is getting easier to spot: Intel’s ‘FakeCatcher’ has a 96 per cent success rate thanks to a technique that detects the changes in blood flow on people’s faces – a characteristic which cannot be replicated in a deepfake.

The difficulties in spotting AI-generated text appear to lie in the fundamental procedures that underpin these tools. They all generally work by measuring how predictable or generic the text is. For GPTZero, an AI detection tool built by Princeton University student Edward Tian, this is calculated by two metrics: ‘perplexity’ and ‘burstiness’.

The former is a measure of how surprising or random a word is given its context, with more generic phrases more likely to be flagged as AI-generated. The latter denotes how much a body of text varies in sentence length and structure, as bots tend towards uniformity while human creativity favours variety.
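To make those two signals concrete, here is a minimal sketch in Python, not GPTZero’s actual code: perplexity is scored with an off-the-shelf GPT-2 model from the transformers library, burstiness as the spread of sentence lengths, and any threshold you might apply to them is a calibration choice, not shown here.

```python
# Minimal sketch of the two signals described above -- NOT GPTZero's implementation.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprising' the text is to a language model (lower = more generic)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Standard deviation of sentence length; human writing tends to vary more."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return (sum((l - mean) ** 2 for l in lengths) / len(lengths)) ** 0.5

sample = "The cat sat on the mat. It was a sunny day. Everything felt calm."
print("perplexity:", perplexity(sample), "burstiness:", burstiness(sample))
```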

However, these are not hard and fast rules. Different human authors have different styles, some more robotic than others, and chatbots are getting better at mimicking these with every new piece of content they digest. This is where the difficulties stem from.

Over the past few months, social media users have discovered that AI detection tools claim sections of the Bible and US Constitution are AI-generated. As Mr Tian himself told Ars Technica, the latter specifically is a text fed repeatedly into the training data of many LLMs.

“As a result, many of these large language models are trained to generate similar text to the Constitution and other frequently used training texts,” he said. “GPTZero predicts text likely to be generated by large language models, and thus this fascinating phenomenon occurs.”

This is not the only issue. Multiple studies have found AI detector tools to be inherently biased. Stanford University researchers found that they “consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified”.

This suggests that the rules the tools are based upon may unintentionally punish those who have not spent their lives immersed in English, and as such have a more limited vocabulary or grasp of linguistic expressions than those who have.

Some tools also show bias towards labelling text as either human or AI-generated. Researchers from the Nanyang Technological University in Singapore evaluated the efficacy of six detection systems on lines of code drawn from online databases like StackOverflow and on code that had been produced by AI.

AI detection tools generally work by measuring how predictable or generic the text is. Source: Shutterstock

They found that some tools leaned in a different direction when labelling content and that the individual biases are due to “differences in the training data used” and “threshold settings”.

A separate paper, which compared the results of the Singapore study with others, found that the majority of detection tools were less than 70% accurate and had a “bias towards classifying the output as human-written rather than detecting AI-generated text”. The performance also worsened if the AI-generated text was manually edited or even machine paraphrased.

Indeed, just applying the correct prompt can allow content to bypass detectors. The Stanford researchers claim that telling an LLM to “elevate the provided text by employing literary language” will result in it turning a piece of content from a non-native English speaker that it assumes is AI-generated to one it deems human-written.

Models have been created that can spit out similar prompts which will allow AI-generated texts to evade detection. These are, of course, open to misuse, but can also be used to improve the performance of shoddy detectors – something sorely needed as LLMs become more prevalent in society.

Generative AI – with training wheels? (18 Jul 2023) https://techhq.com/2023/07/how-do-you-make-generative-ai-safe/


• Generative AI has a data-hunger problem for companies.
• A new service can prevent staff leaking data to the generative AI.
• There are ethical considerations, so buy-in from HR is essential

Generative AI is one of those ideas that’s “too big to fail,” and too useful to get rid of, even if we could.

Since it arrived, initially in the form of ChatGPT from the Microsoft-backed OpenAI and quickly followed by Google Bard and others, it’s been adopted into both business and civilian life with – perhaps – more and blinder trust than it warrants, because when it works, it works brilliantly and transformatively.

But issues with the tech have been ongoing – not least its capacity to absorb the data you feed it and use that data outside your sphere of control. That means, as Samsung found out in April 2023, that generative AI can take a company's proprietary data and effectively make it available to the rest of the world, provided it's been given that data by someone with legitimate access to it – say, a company employee.

That’s made some companies wary of using generative AI in ways that might help them get ahead in their sector, for fear of losing sensitive, proprietary or even personal data.

Samsung itself initially stopped its staff using ChatGPT entirely in the wake of its data accident, and many other companies have followed suit – despite knowing that they risk falling way behind the curve of their industry by not making use of the bright, shiny new technology that all the cool kids (and, more to the point, all the rich kids) are using.

In Part 1 of this article, we spoke to Rich Davis, Head of Solutions Marketing at Netskope – a company that claims to have a world-first product that makes generative AI data-safe for company use. He explained the dilemma many companies found themselves facing – use generative AI, or be absolutely certain of their data safety.

Then he told us that Netskope had been working for a decade on protecting data in transit from users to SaaS apps, and that it had developed a parser that could speak the language used between clients and generative AI.

We asked him how companies were using this apparently world-first tool that, in theory at least, could make generative AI safe in even the most… Samsung… of situations.

Getting visibility over generative AI usage.

RD:

So far, about 25% of our user base has used the tool to just gain visibility – a case of “just tell me on a daily basis what we are seeing and what tools are in use, what are people doing with the generative AI.” Are staff mainly posting data for queries? Is that data long form or short form? Or are they mainly playing with it? Are they using non-company data? Or are people just trying to, say, summarize last season’s NFL action in a sentence?

A lot of companies are literally just trying to get a handle on how the generative AI is being used, because this is all still pretty new.

THQ:

That’s the thing, isn’t it? It has so many potential uses, if you just let staff go nuts with it, they may not yet appreciate what they could be using it for. Or, alternatively, you might have tech-savvy staff that are all about the “How can I make this more efficient?” queries.

RD:

Yeah, we see a lot of the NFL-style queries, but we're seeing a lot of sensitive data in the queries too. Around 25% of our client base is using it, and we have between 2,500 and 3,000 large organizations. So a fair number are already using our tool for visibility.

The next step is application access – what do we allow, what don't we allow – starting to control it on an app-by-app basis. For instance, we're going to allow something like Jasper, because if you have a marketing arm, you could be using that to see whether you could improve your copywriting. But as an organization, say we don't want to enable some of the other apps out there. Our tool lets you allow or disallow access as appropriate.

And then there’s a third tier.

THQ:

You understand when you say that, it sounds like it comes with three dramatic chords, right?

Generative AI training wheels – the third tier.

RD:

That third tier, and I would say about half of that 25% are already doing this, is actually using our ability to deeply look into the transaction, understand exactly what the user is doing. We can potentially look at the data they’re using and make a decision to block that request being sent if it contains sensitive data.

THQ:

So your tool is a kind of data cop?

RD:

Sort of. But it’s also sort of like a firewall. People can still use the generative AI, they can type in whatever they like – but as soon as there’s sensitive data going in, it’s blocked.

THQ:

And then what? Flashing alarms? SWAT teams dropping down ropes from the ceiling? Weeping staff taken away to a data-gulag?

RD:

Ha. They get a coaching alert that says these are the boundaries, these are the terms of how we can use this tool today.

THQ:

So that last tier trains people by allowing them to bump into safe walls, but without setting the place on fire or accidentally donating proprietary data to the Russians?

RD:

Exactly. Actually, a lot of organizations have initial warnings when you start using generative AI that says “Don’t post sensitive information, please have a read of the safe usage policy” – and then you hit Enter to continue, so it’s a kind of initial re-setting of the mindset, an initial training in the do’s and don’ts.

Then you’ve got that hard safeguard at the end of the line, which kicks in if staff go on to do something stupid, and post the minutes of the latest board meeting into the generative AI – which you can stop from actually being sent up to the AI.

Generative AI needs to be made safe.

Pathways to safety will begin to appear in increasing numbers.

It’s a soft approach to training, and a hard-remove tool because actually, at the end of the day, we can’t have this information going out. It’s really no different to what we’ve been doing for years across any application. It’s just a different case here with the risk of where this data could potentially go, and how it could potentially be reused. That danger’s not necessarily there if you’re just storing files in a personal OneDrive, for instance.
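As an aside from us rather than from Davis: the "hard safeguard at the end of the line" he describes can be pictured, in very reduced form, as an inline check on every outbound prompt. The app allowlist, detection patterns and coaching message below are invented purely for illustration – they bear no relation to Netskope's actual product logic:

```python
import re

# Illustrative policy: which generative AI apps are sanctioned for staff use.
ALLOWED_APPS = {"chatgpt", "jasper"}

# Crude stand-ins for the sensitive-data classifiers a real DLP engine would use.
SENSITIVE_PATTERNS = {
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal": re.compile(r"\b(confidential|board minutes|proprietary)\b", re.I),
}

COACHING_MESSAGE = (
    "This prompt looks like it contains {kind} data. "
    "Please review the safe-usage policy before resending."
)

def check_prompt(app: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, message) for a prompt headed to a generative AI app."""
    if app.lower() not in ALLOWED_APPS:
        return False, f"{app} is not an approved generative AI tool."
    for kind, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            # Block the request and coach the user rather than failing silently.
            return False, COACHING_MESSAGE.format(kind=kind)
    return True, "ok"

# The board-minutes scenario from the interview: blocked, with a coaching alert.
allowed, message = check_prompt("chatgpt", "Summarise these confidential board minutes: ...")
print(allowed, message)
```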

Awareness of generative AI data leak potential.

THQ:

It’s a question, isn’t it – how aware are staff generally that the Samsung sort of data leak could happen? Unless they’re fairly tech-savvy, they’ll usually have a mainstream media view of it, but they won’t necessarily know what privileged data is, and therefore that they shouldn’t be using it in generative AI.

RD:

No, exactly. They could well just be thinking “This could make my life easier.”

That’s why training is so important – because you’re not putting that brick wall up. As soon as anyone sees a restriction they go, “okay, how am I going to get around this, I want to play with it. I’ll try a different generative AI.”

So actually saying, “Yes, you can use this, here are the risks” actually helps a great deal on user acceptance. And this is something that even outside of generative AI, a lot of our customers have been using for quite some time.

THQ:

Training wheels, rather than baby gates.

If prevented from using one generative AI, staff may well just use another.

Tell people they can’t do a thing without explaining why, and they’ll just find another way.

RD:

Exactly. And the other positive to that approach is that as an organization, you get an understanding of the usage, because you’re putting it in their hands, and you’re seeing what the usage is – whereas if you just stop using it in case your data goes flying out the door, you’ve got no idea what problems people were trying to solve with it.

We're seeing growth in usage of about 25% month-on-month at the moment, as you'd expect in the corporate environment, and about one in ten users in an organization are using one of these generative AI tools on a regular basis, on actual work tasks within their work environment.

THQ:

Is there not something creepy in the idea of monitoring the usage of a particular program across companies?

RD:

Yes, there is actually.

THQ:

O…kay. It’s fair to say we didn’t expect you to agree on that.

RD:

This has always been a big debate, and I’ve talked to CISOs for years about this whole concept. I think it comes down to needing to protect the business. It’s that trade-off between monitoring users and protecting the organization. And that’s why when you’re doing this, you’ve obviously got to put safeguards in place, you’ve got to anonymize user data so you can’t pinpoint a particular individual user, you’ve got to really limit the control as to who can see these types of reports.

One thing I would always say to any organization is that you’ve got to do something like this in conjunction with HR. You’ve got to bring your HR team into this whole approach from the beginning if you’re going to do this type of analysis and control.

Monitoring staff use of generative AI could have ethical considerations.

Monitoring and intervening with your staff’s use of generative AI? Mmm, you’re going to need some organizational buy-in on that.

Where I’ve seen it really work well is in organizations where they’ve got that organizational buy-in from the beginning. Where they’ve gone through, explained it to the users, “This is why we’re putting this technology in place. These are the risks of the business. And this is why we’re doing it.” That’s where it alleviates most of the creep-factor, because people get it.

 

In Part 3 of this article, we’ll look at the layers of protection that make up the first effective method of keeping your data safe while using generative AI within a company.


The post Generative AI – with training wheels? appeared first on TechHQ.

Are AI customer service chatbots truly safe on company websites? https://techhq.com/2023/07/ai-customer-service-chatbots-safe-on-company-websites/ Mon, 03 Jul 2023 18:10:32 +0000 https://techhq.com/?p=225985

  • Customer service chatbots are usually ‘rule-based’ rather than AI-powered
  • But this is changing, as some companies have integrated ChatGPT
  • This is risky, as they are still susceptible to bias and hallucinations

Chatbots are an increasingly familiar part of 21st century life. Visit any e-commerce website and you won’t be surprised to hear a ‘plink’ sound as a friendly chatbot pops up in the bottom right-hand corner, asking if it can help you with anything.

Over the last few years, these have become increasingly popular as customer support agents, as the benefits are obvious. They are available 24/7, ready to answer straightforward queries so that human agents have time to deal with more complex ones. They also boost consumer interaction and, if the experience is a positive one, a site's SEO.

While the rise of ChatGPT has led the term AI to be thrown around wildly when discussing chatbots, the majority of those tasked with customer service do not actually rely on this technology. Instead, they are 'rule-based' chatbots, which simply function on 'if/then' decision-tree logic when given various prompts. The conversations they can hold are therefore strictly limited by their purpose-built code.
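To give a sense of how modest that logic usually is, a rule-based bot can be captured in a handful of lines – the keywords and canned replies here are invented for illustration:

```python
# A toy 'if/then' decision tree of the kind most customer service bots still run on.
RULES = {
    "refund": "You can request a refund within 30 days via your order page.",
    "delivery": "Standard delivery takes 3-5 working days.",
    "opening hours": "Our support team is available 9am-5pm, Monday to Friday.",
}

FALLBACK = "Sorry, I didn't catch that. Would you like to speak to a human agent?"

def rule_based_reply(message: str) -> str:
    """Match the message against known keywords; anything else gets the fallback."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK

print(rule_based_reply("How long does delivery take?"))
print(rule_based_reply("Can you write me a poem?"))  # strictly limited to its purpose-built code
```

Anything outside the keyword list falls straight through to the fallback – which is exactly why these bots feel so limited next to their generative cousins.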

Chatbots: the new generation

Actual AI-powered chatbots work differently. They utilize advanced natural language processing (NLP) techniques and machine learning algorithms to understand and generate human-like responses. These “generative AI” chatbots are trained on vast amounts of data and can analyze and interpret the context and intent behind user queries, allowing them to provide more nuanced and personalized responses. While they are more advanced than rule-based chatbots, they also require more time and financial investment to set up.

This type of chatbot has been getting significantly more attention since the release of OpenAI’s ChatGPT late last November. The so-called “gold rush” of generative AI adoption that has followed the arrival of ChatGPT has led many companies to investigate integrating generative AI into their chatbots to make them more advanced.


Generative AI chatbots have been getting significantly more attention since the release of OpenAI’s ChatGPT last November. Source: Shutterstock

Such a change is warranted, as a Gartner report from this year revealed that just 8 percent of customers used chatbots during their most recent customer service interaction, and only a quarter of them said they would use one again. This is largely down to poor resolution rates for many functions, including just 17 percent for billing disputes, 19 percent for product information, and 25 percent for complaints.

Companies that have started to build the large language models behind ChatGPT and its like into their service bots include Meta, Canva, and Shopify – but the decision is not without risk. ChatGPT is a generative AI, meaning that, in theory, it creates new sentences in response to every prompt. This kind of unpredictability often isn't great for a brand, as it creates a certain element of risk.

'Hallucinations,' in this context, are when an AI confidently states incorrect information as fact. This doesn't matter when creating a static piece of content that can be proofed before it is published anywhere – in other words, when there remains a knowledgeable human being between the AI and the consumer. However, this is not the case if the bot is holding a real-time conversation with a customer.

A high-profile example of this is when Bard, the AI chatbot developed by Google's parent company Alphabet, answered a question about the James Webb Space Telescope incorrectly in an advert. While it wasn't chatting with a potential customer, the error still wiped $100 billion off Alphabet's market value.


Bard, the AI chatbot developed by Google’s parent company Alphabet, responded to a question about the James Webb Space Telescope wrongly in an advert. Source: Twitter


Limitations of ChatGPT. Source: ChatGPT

However, the technology has improved in just the short period that has passed since AI chatbots entered the mainstream. So much so that some companies are confident that GPT-4 – a more advanced version of ChatGPT – is safe to use on their websites. Customer service platform provider Intercom is one of them.

“We got an early peek into GPT-4 and were immediately impressed with the increased safeguards against hallucinations and more advanced natural language capabilities,” Fergal Reid, Senior Director of Machine Learning at Intercom, told econsultancy.com. “We felt that the technology had crossed the threshold where it could be used in front of customers.”


GPT-4 is an advanced version of the language model developed by OpenAI. Source: Shutterstock

Yext, a digital experience software provider, has its AI-powered chatbot draw its answers from data held in its CMS – which stores verified information about its clients' brands – helping to prevent hallucinations.
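That grounding pattern is conceptually straightforward, even if production systems are far more involved. In outline – the CMS stand-in, prompt and model name below are our own illustration, not Yext's architecture – the idea is to retrieve verified facts first and instruct the model to answer only from them:

```python
from openai import OpenAI

client = OpenAI()

# Stand-in for a CMS of verified brand facts; a real system would query a knowledge graph.
CMS_FACTS = {
    "opening hours": "Stores are open 9am-6pm, Monday to Saturday.",
    "returns": "Items can be returned within 28 days with proof of purchase.",
}

def retrieve_facts(question: str) -> str:
    """Naive keyword retrieval; real systems use search or embeddings."""
    hits = [fact for key, fact in CMS_FACTS.items() if key in question.lower()]
    return "\n".join(hits) or "No verified information found."

def grounded_answer(question: str, model: str = "gpt-4o-mini") -> str:
    facts = retrieve_facts(question)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the facts provided. "
                        "If the facts don't cover the question, say you don't know."},
            {"role": "user", "content": f"Facts:\n{facts}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```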

But distinctly more powerful chatbots with extended capabilities could bring their own set of problems. Gartner's customer service research specialist, Michael Rendelman, said that their implementation would lead to “customer confusion about what chatbots can and can't do.”

He added: “It’s up to service and support leaders to guide customers to chatbots when it’s appropriate for their issue and to other channels when another channel is more appropriate.”

Doorways to frustration and data vulnerability?

In fact, they could increase customer frustration rather than reduce it. Fresh from a conversation with ChatGPT or another similarly well-funded bot, a site visitor arrives with raised expectations of what chatbots in general can do.

If the one on your site does not meet that same level of competency, it could result in anger and frustration, which bots are not well-equipped to deal with either. A 2021 study found that ‘chatbot anthropomorphism has a negative effect on customer satisfaction, overall firm evaluation, and subsequent purchase intentions’ if the customer is already angry.

Hyper-realistic customer service chatbots also open the door to scams. Bad actors can use the same technology to create convincing imitations, leading customers to engage unknowingly with fraudulent entities.

AI angel investor Geoff Renaud told Forbes: “It opens up a Pandora’s box for scammers because you can duplicate anyone. There’s much promise and excitement but it can go so wrong so fast with misinformation and scams. There are people that are building standards, but the technology is going to proliferate too fast to keep up with it.”


In 2016, Microsoft launched a generative chatbot named Tay on social media. Within hours of its release, Tay began posting inflammatory and offensive remarks. Source: Twitter

Another issue is bias, as AI bots can pick it up from the data on which they are trained, particularly if it comes from the internet. One infamous example of a brand being harmed by a biased chatbot comes from Microsoft.

In 2016, the tech giant launched a generative chatbot named Tay on social media platforms like Twitter. However, within hours of its release, Tay began posting inflammatory and offensive remarks, as it had learned from interacting with users who deliberately provoked it. Due to the inappropriate and offensive behavior, Microsoft was forced to shut it down and issue an apology.

More recently, the National Eating Disorders Association (NEDA) announced it would be letting go of some of the human staff who worked on its hotline – to be replaced by an AI chatbot called Tessa. But, just days before Tessa's official launch, test users started reporting that it was giving out harmful dietary advice, like encouraging calorie counting.

As with any digital system, implementing a customer service chatbot can introduce vulnerabilities that open the door to hackers. But as interactions often include the submission of the customer’s personal data, there may be more at stake with generative AI.

Vulnerabilities include the lack of encryption during customer interactions and communication with backend databases, insufficient employee training leading to unintentional exposure of backdoors or private data, and potential weaknesses in the hosting platform used by the chatbot, website, or databases. These can be exploited for data theft or to spread malware, so businesses must be constantly vigilant and set up to patch them when identified.

Currently, a business that wants to make use of the most advanced chatbots needs to make hefty investments in time and money, but this will shrink as the technology becomes more commonplace.

Exactly what this means for human agents remains uncertain, but it will likely reshape the customer service landscape. Businesses will need to adapt and redefine the skills and roles of their staff to ensure a harmonious integration of technology and human touch.

The post Are AI customer service chatbots truly safe on company websites? appeared first on TechHQ.

The 2023 Imperva Bad Bot Report: the impact of generative AI https://techhq.com/2023/07/imperva-bad-bot-report-what-impact-will-generative-ai-have-on-the-bot-landscape/ Mon, 03 Jul 2023 17:16:51 +0000 https://techhq.com/?p=226002


• Imperva Bad Bot Report reveals a new democratization of bot-use.
• GenAI voice cloning could be a major new weapon for bot activity.
• 2023 a significant year for increased web-scraping.

In the tenth annual Imperva Bad Bot Report, there’s enough data on bot activity to let you reach several important conclusions that should govern elements of your business plan.

In Part 1 of this article, we spoke to Peter Klimek, Director of Technology, Office of the CTO at Imperva, about the significant rise in the number of bad bots out there, and how they're making themselves felt as a fundamental part of how the internet works – to everyone's disadvantage, and to the profit of bad actors.

In Part 2, we got specific and talked about the rise in bad bot attacks on APIs particularly, as outlined by the data in the Imperva Bad Bot Report for the year.

While we had Peter in the chair, it has now become mandatory under the UN Convention of Tech Journalism that we ask him how the whole bad bot situation is being affected by generative AI.

PK:

Ah, yes. I love this one.

THQ:

We thought you might at least have had significant practice at answering it. So what sort of impact is generative AI having on the sophistication, targeting, and proliferation of bad bots? What data does the Imperva Bad Bot Report actually carry on it?

PK:

It’s fair to caveat this response with the admission that it’s still early days for us, we’re still getting a better understanding of what we’re seeing here. But there are really three themes that I’m seeing personally that are emerging right now.

Imperva Bad Bot Report – three themes on GenAI.

The first impact is that generative AI is really lowering the barrier to entry when it comes to creating a bot.

If you think about it, if you have absolutely no programming experience, and you want to build your first bot to go and scrape your favorite website for whatever reason, you can use generative AI and ask it a couple of quick questions. And it’s going to give you working code to build a very simple bot.

It's not going to be very advanced, but at least you'll have something in a matter of minutes, and it will probably work.
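To make that concrete (our illustration, not Klimek's): the sort of throwaway scraper a chatbot will cheerfully hand a complete beginner runs to barely a dozen lines – the URL and CSS selector here are placeholders:

```python
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/products"  # placeholder target

def scrape_titles(url: str) -> list[str]:
    """Fetch a page and pull out product titles - crude, but it 'will probably work'."""
    response = requests.get(url, headers={"User-Agent": "my-first-bot/0.1"}, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # No rate limiting, no robots.txt check, no real error handling -
    # exactly the kind of unsophisticated traffic site owners now have to manage.
    return [tag.get_text(strip=True) for tag in soup.select("h2.product-title")]

if __name__ == "__main__":
    for title in scrape_titles(URL):
        print(title)
```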

So I think that’s one of the bigger things: it’s democratizing or opening up access to various people, effectively lowering the bar to entry when it comes to being able to create bots.

Believe it or not, that’s a relatively benign example. But it’s something that organizations will have to deal with. The second impact is kind of an interesting one. We don’t have any specific data on this yet, so this is not Imperva Bad Bot Report sanctioned, but we’re really interested to watch what happens here.

The Imperva Bad Bot Report reveals the US as the most targeted nation in 2023.

The generative AI models themselves are trained on a vast amount of data that was scraped from the internet. I would say that much of this data is dubiously or unethically sourced, in many instances.

Imperva Bad Bot Report: web-scraping gold rush?

Now, different businesses will have different practices around what they do. But at the end of the day, I think with the increase in funding for generative AI companies that have gone in heavily on this technology, we’re effectively entering a kind of a gold rush phase, if you will, where these companies are going to be trying to build models as quickly as possible, they’re going to be trying to build domain-specific models.

There’s a very high likelihood that that means we’re going to see the amount of broad internet scraping attacks go up.

If we go back to look at the Imperva Bad Bot Report, this year we were able to do a ten-year retrospective, and the closest parallel I can give you to this kind of gold rush is in 2014, when there was a massive spike in good bot traffic. And when I went back and asked our threat research team about that, I said, “Why was 2014 so different for good bot traffic?” They said “That’s the year that Bing and DuckDuckGo came online.”

What happened was two new search engines were aggressively growing and crawling the internet and pulling this data. And now we classify those as good bots, because they clarified and declared that they were search engines, but it’s a really good example of what happens when you have someone or something really big trying to come online.

THQ:

Release the GenAI Kraken!

PK:

What?

THQ:

Sorry, just thinking out loud.

Imperva Bad Bot Report: bargain basement influence fraud.

PK:

The third impact could be the biggest of the lot. It’s the reduced cost of either scams or influence fraud operations as a whole. A really good example here would be your classic romance fraud, your catfishing attacks.

Historically, these types of operations relied on human operators, so they wouldn’t be classified as a bad bot attack, because they had a human that was actually sitting behind the computer keyboard, typing in the responses.

Now, fraudsters can use generative AI to automate all of that.

THQ:

Catfished by a chatbot. There’s something monumentally grim and sad about that.

PK:

This is one of the bigger challenges, and one that we’ll probably end up seeing and having to deal with more and more in the next few years.

THQ:

You mentioned the potentially suspect data on which the large language models have been trained. We spoke to someone recently who said Google had released details of the Bard training data, and around 40% of it was either unverified, non-factual, obviously biased or had potential PII in it, so that’s got to be viewed as a source of model vulnerability, surely? And of course, the other big players haven’t released the details of their training data, because there is no regulation that says they have to.

The Imperva Bad Bot Report shows a growing level of bot sophistication.

Sophistication is increasingly the key for bot survival – unfortunately, they have it covered.

PK:

Yeah. I just saw something that speaks to the increase in how much bots are going to be tied to the future of generative AI, and it’s this. There’s a social media influencer with a fan base of millions.

Apparently, she previously had a premium Telegram channel where she would spend five hours a day talking with her fans, providing access and things like that.

She recently partnered with a generative AI startup that used all of her past communications to train a model that mimics her – and now she's selling access to, and time with, that bot, which effectively reproduces her past responses.

THQ:

Pay me real money to spend time with my bot? That’s pretty mind-blowing, right enough – but it’s in line with stories we’ve read of South Korean social media influencers which are entirely AI, with actors hired for motion-capture when the influencers need to interact with real 3D objects.

Possibly this is a legacy mindset, but we have to ask where the value in those interactions is coming from.

Imperva Bad Bot Report: voices of deceit.

PK:

Yep.

THQ:

And also, you mentioned the democratization of bad actorship – if people want to create simple bots, there’s a GenAI now that will help them do it.

We attended a roundtable recently with one of the big security companies, and they showed research they'd done on the dark web where, for instance, GenAI was being used to deliver significantly more effective and believable phishing scams as a service – easily breaking down language barriers to successful phishing, and so on.

The Imperva Bad Bot Report shows a picture of dangers to come.

What will the future of bots look like once GenAI gets its datasets on them?

PK:

Yeah, I think that is one of the big ones, too. Phishing is already one of the biggest threat vectors that every organization has to deal with today. And the sophistication of those attacks has just become all the more potent now.

And it’s even moving beyond things like email. This is one of the things that’s getting really sophisticated. 60 Minutes just had a great piece over the weekend, where someone showed how they clone a voice and effectively create a scam. And with that cloned voice, they were able to go and get the person’s passport number.

I've spoken with US law enforcement, and that was one of the big things they called out right away; it's a big concern to them. You've got certain public-facing figures in an organization – a CEO, a CFO – who are regularly doing calls, speaking events, things like that, so their voices can be cloned fairly easily.

So yeah, there’s definitely going to have to be a new level of diligence that organizations are going to need to apply to… literally everything now.

 

In the final part of this article on the data and conclusions of the 2023 Imperva Bad Bot Report, we’ll take a look at bots in hiding, the geopolitics of bad bots, and what the future of bad bots – and of corporate vigilance – might look like.

Sure, combine bots and an evolving AI. What could possibly go wrong?

The post The 2023 Imperva Bad Bot Report: the impact of generative AI appeared first on TechHQ.
