15 Oct 2024
I have yet to come across a consumer who does not have a complaint about the chatbots increasingly being deployed by e-commerce sites to handle customer queries and complaints.
Initially, compared to the highly frustrating experience of going through complex and lengthy Interactive Voice Response (IVR) menus to speak to a customer care executive, artificial intelligence (AI)-assisted automated chats seemed like a welcome change, given their quick and easy accessibility. However, poorly designed and executed chatbots that fail to answer queries or resolve complaints have left consumers totally disappointed and angry.
First and foremost, most of these chatbots are not programmed to handle the wide range of problems that consumers encounter in their transactions on e-commerce sites. Yet businesses force consumers to go through that frustrating experience before connecting them to a human being, wasting consumers' time.
Under the Consumer Protection (E-Commerce) Rules, every e-commerce entity must prominently display the email address and the landline and mobile numbers of its customer care and grievance officers. Yet many violate this mandate with impunity.
In one case, where a wrong product was delivered to a customer, the only answer she could get from the chatbot was that it was a 'non-returnable item'. That may be so, but she had not received what she had ordered, and there was no way of communicating this on the chat. Only after wasting considerable time did she get a phone call from a customer care executive. She later learnt that the number from which she received the call was programmed only to make calls, not receive them. The e-commerce entity had practically barred consumers from calling it!
In another case where the chatbot provided by a bank could not answer her query, the consumer asked for the helpline number on which she could speak to an official. The repeated response was: “Sorry, I am still learning. Can you please rephrase it for me once again? I can help you with bank-related queries”!
On the website of an online clothes retailer, a consumer found that the default payment option was cash on delivery. Since he could not locate the option to pay by credit card, he turned to the chatbot for help. It did not even understand his query.
Chatbots are also known to 'hallucinate' or give incorrect information. In the United States, where many companies are using chatbots endowed with advanced AI technologies, there have been instances of companies withdrawing them following highly erroneous advice or use of profanities. A parcel delivery firm in Britain, for example, disabled the AI function in its online chat system after a customer, frustrated by its poor responses, made it compose a poem on how bad the company's customer service was!
In February, Air Canada was held liable for incorrect information its chatbot gave a customer about a discount claim; a civil resolution tribunal in Canada ordered it to pay him $812.
Last year, the Consumer Financial Protection Bureau in the United States warned that the advanced AI chatbots being used by banks risk providing inaccurate financial information, besides compromising customer privacy and data protection.
In the absence of proper encryption, authentication and authorisation, chatbot channels can also be exploited to push malware or ransomware onto users' devices.

Companies must pay more attention to what consumers want, give them the option of choosing between automated and human interaction, and stop forcing chatbots as the primary customer service channel. Chatbots may save them money, but they will also drive away customers!
Source: Tribune India