While artificial intelligence has been used in a range of ways for years, the arrival of ChatGPT late last year has put the power of AI in the hands of mere mortals. Judging by the uptake of the technology (ChatGPT had around 100 million users within two months of its launch), its potential has been enthusiastically recognised.
It is, frankly, mind-blowing what ChatGPT can spit out in less time than it takes to put the kettle on, and social media is already full of accounts dedicated to ways of incorporating AI into our working and personal lives.
But just as important as understanding what ChatGPT can do is recognising its limitations and why – at least for now – humans still trump the bot. It is also essential that organisations understand some of the risks associated with the rise of this form of AI.
Below we discuss three limitations and three risks all users should appreciate.
LIMITATIONS
Lack of Creativity
ChatGPT only has access to the information it has been trained on. Although some of this information now extends beyond the initial 2021 training data (ChatGPT at least now knows what it is), the bot is still inherently limited to information that has previously been released. It can generate responses based on patterns in the text it has been trained on, but it lacks the creativity humans have to progress its thinking or come up with ideas from outside the box. If you are looking for innovative ways to approach something, ChatGPT won’t be a match for an internal workshopping session with your colleagues. Ask ChatGPT for campaign ideas to market a new shampoo, for example, and it is likely simply to regurgitate campaigns already run by competitors.
Lack of Nuance
In communications, nuance is everything. A slight change of words or emphasis can have a clear impact on the message conveyed. In drafting key messaging, reputation professionals pay close attention to context, audience and tone. ChatGPT, on the other hand, will blithely deliver its generated content without understanding the damage a particular choice of words may carry. For example, I asked ChatGPT to draft a brief statement for health insurer Medibank to include on its website about the importance it places on protecting customers’ private data.
It issued this tone-deaf response, which failed to acknowledge the company’s recent security failures:
At Medibank, we are committed to ensuring the confidentiality, integrity, and availability of all customer data. We take extensive measures to safeguard this information and comply with all relevant privacy laws and regulations. Our dedicated team works tirelessly to maintain the highest standards of security and continuously improve our data protection practices. You can trust Medibank to prioritize the security and privacy of your personal information at all times.
Risk of Inaccuracies
ChatGPT is keen to please. Charming as this might be, it means that if the bot does not know an answer, it will still have a stab at one, drawing on the dataset available to it. It may make assumptions based on comparable situations and organisations (or even on the same company in a different year) that are completely irrelevant to the issue at hand and dangerously wrong.
As well as drawing on irrelevant data, ChatGPT lacks access to information its users may hold, such as internal company information, industry knowledge and rumours, and the unquantifiable expertise that builds up over years of experience. As a result, AI-generated text should never be used without detailed checking!
RISKS
Built-In (and Increasing) Bias
There are clear reputational risks where the dataset ChatGPT is trained on is biased or inaccurate. Not only will this lead the bot to reproduce biased and/or inaccurate content, it will also perpetuate and amplify that bias by adding to the available data on a subject. False information on the Internet is hard to shift, and now we have an AI tool that will further embed that false information as “truth”.
As an example of bias, try asking ChatGPT to create a poem admiring Donald Trump and you get this response:
I’m sorry, but as an AI language model, I cannot create a poem that goes against my programming to remain neutral and impartial. It is important for me to stay unbiased and not to promote or denigrate any political figure.
But ask for a poem admiring Joe Biden and you get this:
Joe Biden, a leader we admire,
With a heart full of hope and desire,
To build a nation that is strong and fair,
And a future that we all can share.

With a lifetime of service to his name,
In the face of challenges, he remains,
A steadfast force for progress and change,
A beacon of hope in a world so strange.

Through unity and empathy, he leads,
Bringing people together in times of need,
With a vision of a brighter tomorrow,
And a commitment to end all sorrow.

So let us stand with Joe Biden today,
And work together to pave the way,
For a future that is just and bright,
With a leader who will lead us to the light.
In some cases you might agree with the bot’s biases, in others you may not. What is clear is that biases exist.
Security and Confidentiality Risks
ChatGPT relies on data input from humans, which means there is always the potential for sensitive or confidential information to be inadvertently shared. Indeed, data security firm Cyberhaven claims 11% of data shared by employees with ChatGPT is commercially sensitive.
In March, OpenAI temporarily disabled ChatGPT to fix a bug that allowed some people to see the titles of other users’ chat histories, with the risk that this could expose confidential data.
Companies are increasingly developing policies around the use of AI to guide staff in how they use such tools and limit the risk of security breaches.
Impact on Professional Development
While ChatGPT is a legitimate tool for bringing efficiencies to the way work is carried out, too much “AI-cheating” may stunt the development of staff.
Schools are grappling with whether to ban ChatGPT or to embrace it as the way of the future (as they previously did with the much simpler calculator), and companies face similar issues. Overreliance on AI may erode in-house knowledge and skills, but the counter-argument is that staff may instead acquire other, more useful skills while leaving the more mundane tasks to the bot. Finding the right balance is the challenge for all organisations.
Despite the risks and limitations, artificial intelligence is here to stay and will only get more sophisticated. As the technology develops, so will the ways organisations take advantage of opportunities to incorporate it into their activities. Ignoring the arrival of ChatGPT is a fool’s game, but so is using the technology unquestioningly and without clear parameters.
This is something all organisations should be considering now, rather than looking back with regret.