Chatbots can help banks become smarter and more efficient
Some of the biggest players in the banking world are catching up with the artificial intelligence rush by launching chatbots to serve their tech-savvy customers.
Ally Bank is among the first of these players, having implemented a virtual assistant that uses AI technologies such as natural language processing to answer customer queries.
The aim is for banks to develop innovative products and technology that both cut costs and offer better, speedier service to their customers.
According to a report by Juniper Research, chatbots are expected to help global banks save over US$8 billion per year by 2022. The report also estimates that a single chatbot inquiry saves approximately four minutes compared with a traditional call center inquiry.
HSBC, one of the world’s largest financial institutions, launched its “Amy” chatbot to answer basic questions such as how to open an account.
“When we take out our smartphone, we spend more time in text messaging through social media more than we use it as a phone. This is why we believe customers would like to use a chatbot,” says Daniel Chan, head of business banking at HSBC Hong Kong, as reported by South China Morning Post.
As technology evolves, consumers expect speedier service, whenever and however they wish. AI-powered chatbots can answer customer queries in seconds – faster than the time it takes for an agent at a call center to pick up the phone.
HSBC is not the only bank to launch a chatbot. DBS and Hang Seng Bank in Asia, Standard Chartered Bank in London, and Wells Fargo in the US are also among the leading few setting the trend.
While more and more banks are jumping on the AI bandwagon, liability issues surrounding chatbots have led them to take a cautious approach in their fintech plans.
For instance, a well-meaning, innocent bot may turn ‘rogue’, and companies would do well to be aware of the abusive or incorrect responses a chatbot may produce.
An example of a “chatbot gone wrong” is Microsoft’s ‘Tay’, which hit the headlines after it began posting offensive tweets within 24 hours of its launch.
Thus, while chatbots may make handling customer queries easier and more efficient, they also pose a myriad of legal and risk issues that those deploying them need to consider.