With the launch of ChatGPT in November 2022, AI chatbots became an overnight sensation. Now, Google has launched its own take on the concept, called Bard. It is powered by generative AI, which creates content in response to a user's query. There is a lot of talk about the potential of this technology for creating content for businesses, but are these chatbots safe?
In an open letter signed by over 1,000 experts, including Elon Musk, there has been a call for an immediate pause on this type of AI, citing the significant risk that human-competitive AI systems pose to society and civilisation through economic and political disruption. Italy has also become the first Western country to temporarily ban ChatGPT over safety concerns, a sign that experts are worried.
Currently, Google’s Bard chatbot is only available to people over 18 and comes with a host of disclaimers about the credibility of the information Bard might supply. This suggests an element of risk, so let’s dive into the potential concerns in relation to cybersecurity.
What are the main uses of chatbots currently?
At present, the most common use for chatbots is as a customer service tool. They provide AI-powered answers to user queries and can be available 24/7 without the need for human staff. This constant availability can help capture leads that might otherwise be lost, answer queries instantly and reduce business costs.
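To make the idea concrete, here is a toy sketch of the query-answering pattern a customer service chatbot follows. The keyword matching stands in for the AI model a real chatbot would use, and the questions and answers are invented for illustration:

```python
# Toy FAQ responder: a keyword lookup stands in for the AI model a real
# customer service chatbot would use. All entries here are invented.
FAQ = {
    "hours": "We are open 24/7; this bot never sleeps.",
    "refund": "Refunds are processed within 5 working days.",
    "contact": "A human agent will email you back within one day.",
}

def answer(query: str) -> str:
    """Return a canned reply for the first keyword found in the query."""
    query = query.lower()
    for keyword, reply in FAQ.items():
        if keyword in query:
            return reply
    # Unrecognised queries are escalated rather than guessed at.
    return "Sorry, I don't know. I'll pass this to a human agent."

print(answer("What are your opening hours?"))
```

A real deployment replaces the lookup with a language model, which is exactly where the data-access concerns discussed below come in.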
The problem is that these chatbots often need access to large amounts of customer and business data in order to function. Without extensive precautions, this access could make them a vulnerability. Moreover, the power of the latest generation of chatbots could make them a tool for malicious actors.
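One common precaution is to strip obvious personal data from messages before they reach a third-party chatbot. The sketch below is a minimal, hypothetical illustration of that idea; the two regex patterns are deliberately simple, and a production system would need far more robust detection:

```python
import re

# Hypothetical patterns for two kinds of sensitive data. A real system
# would also need to catch names, addresses, account numbers and more.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(message: str) -> str:
    """Replace likely personal data with placeholders before the
    message leaves the business's own systems."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label} removed]", message)
    return message

print(redact("My card is 4111 1111 1111 1111, email me at jo@example.com"))
```

Redaction of this kind limits what an attacker can gain even if the chatbot or its logs are later compromised.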
What vulnerabilities could chatbots entail?
Cybersecurity threats come from people who try to steal sensitive data from individuals or organisations for their personal gain. Chatbots may be powerful tools, but they can also be exploited by these people in various ways. Some of these vulnerabilities include:
Harmful malware like ransomware can expose data and hold it hostage. If attackers were able to hack into a system, they could use a chatbot to spread malware to other devices.
If a chatbot lacks protections like data encryption, hackers could exploit it to steal important data. They could also manipulate it to alter data, rendering it lost or unusable.
Malicious actors may impersonate or repurpose a chatbot. This could lead to customers revealing sensitive data to the hacker as they believe they are interacting with a trusted business.
How else can AI chatbots present a risk?
One particular risk of AI chatbots is that they can write code. This could make them tools for people with limited coding ability to create or fine-tune ransomware. For example, they could generate many versions of a certain piece of malware in very little time, making that ransomware more effective at evading cybersecurity software. Other dangers of chat AI include:
Misinformation
Chatbots rely on the information they are fed, so the information they provide is only as reliable as that source data. If someone trains a chatbot on inaccurate information, it will spread misinformation to every person who interacts with it.
Bias
Chatbots can be used to perpetuate or even amplify certain biases. If they are trained on data that incorporates these biases, that is the bias they will spread through interactions. This fails to account for diverse points of view and may not be representative of the full population.
Lack of Empathy
Chatbots are not sensitive to emotional cues. This can make them a poor choice for certain forms of interaction, such as therapy or crisis counselling.
Lack of Accountability
Chatbots have no moral or legal responsibility for their actions. This makes it very difficult to hold anyone accountable when they misinform or cause harm in any other way.
Loss of Jobs
Certain content creation or customer service tasks can be automated using chatbots. The software can do these things very quickly and very cheaply, which could make it a threat to human employees in various industries.
Manipulation
Individuals or groups could harness chatbots to manipulate and influence users. Propaganda and fake news are significant concerns here.
They Can Be Hard to Identify
In many cases, people may be unaware that they are interacting with a chatbot rather than a real human being. This can cause issues with trust as many people would prefer to talk to another person.
A Powerful Tool in the Right Hands
The bottom line is that chat AI can be of great value if used responsibly and ethically. It can automate tedious, time-consuming tasks, improve customer service and even boost revenue for businesses. But developers and users need to be aware of the threats, and steps must be taken to mitigate them.
To prevent chat AI from becoming a significant threat, there needs to be a widespread understanding of what it is and how it works. There also needs to be a clear set of policies for implementing it. A chatbot’s performance needs to be monitored and evaluated so that issues can be identified early on.
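The monitoring step can be as simple as logging every reply and flagging suspect ones for human review. The sketch below is a minimal illustration of that idea; the flagged terms and helper name are invented, and a real system would rely on classifiers and human reviewers rather than keyword matching:

```python
from datetime import datetime, timezone

# Hypothetical terms that should trigger human review of a reply.
FLAGGED_TERMS = {"guarantee", "medical advice", "password"}

audit_log = []

def record_response(user_query: str, bot_reply: str) -> bool:
    """Log every chatbot reply; return True if it needs human review."""
    needs_review = any(term in bot_reply.lower() for term in FLAGGED_TERMS)
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "query": user_query,
        "reply": bot_reply,
        "review": needs_review,
    })
    return needs_review
```

Keeping an audit trail like this makes it possible to spot misinformation, bias or data leaks early, before they reach many customers.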
Overall, the message is that this powerful tool must be used responsibly. Caution must be applied at all times. Technology is becoming more and more important to our lives, and we must take steps to keep ahead of the risks.