Risks on the horizon as seen by DALL-E
Let's talk about ChatGPT and other tools. Every day, new applications enter the market and change our business life. Many companies see the opportunities and, on the other hand, fear that competitors could use artificial intelligence to penetrate their market.
Nevertheless, rushed action is not a good idea. Before deploying AI in business, we should take a look at the potential risks and consider how to address them.
Table of Content
Artificial Intelligence has revolutionized various industries, and its impact on business operations is still growing at fast pace.
Some benefits of this technology are:
No other tool has ever conquered the business world as quickly as ChatGPT. 100m users in just one month.
It's understandable. Sending a short task to the system—called prompt—delivers a very reasonable result in almost any domain. The way Large Language Models (LLM) like ChatGPT operate feels human, and it is helpful. Authors and software developers are at the forefront of users and benefit from a huge productivity boost.
Generative AI is a fascinating field within artificial intelligence that focuses on creating and generating new content such as images, texts, or even music. It uses neural networks in a so-called transformer architecture (the T in GPT) to learn patterns from existing data it was pretrained on (the P in GPT) and generate new content (the G in GPT).
The basic process is to imitate human content creation. Systems mimic the style and characteristics of the training data.
It was in 2019 that I first published an article describing how this type of AI will transform our business world. Few people agreed with my view, which is now turning out to be a drastic underestimation.
Because of the importance Generative AI has gained in such a short time, I will focus on the risks associated with it.
Despite all the buzz around GenAI, we must not forget that other, more analytical AI and Machine Learning (ML) systems have a long tradition and form a basis for many business applications. There are cases where they perform much better, consume less energy, and are therefore less costly.
Applying Generative AI where Machine Learning is required simply because it seems to be easier is a common mistake and creates corresponding risks.
Even though the benefits of AI in business operations are significant, it is important to be aware of the potential risks and challenges. Only some of them are technical, many have a psychological background that can be avoided through information, communication, and training. Let's go through this certainly incomplete list:
Hallucinations of ChatGPT – fabricated information given with an unbeatable tone of conviction – is the most discussed risk factor. The results of Large Language Models can seem very reasonable on first sight, but often they are false, distorted, or biased, as many examples have revealed.
Far-reaching decisions based on such false statements are certainly a risk.
I agree, it's not only a problem the systems show. Also humans tend to fabricate content from time to time. But a major problem is that the results from Bard, ChatGPT and others sound so very convincing. People are more inclined to trust a hallucinating machine than a bragger. Here we see that the psychological aspect should not be underestimated.
Some results of LLMs are not hallucinated, but also not correct. They may contain minor errors, distortions, biases, or wrong conclusions. The risk factor may be less than regarding hallucinations, but should also be minimized. Unfortunately, they can never be eliminated completely. That is why the awareness of employees should be kept high.
Here we have a pure psychological risk factor that is completely understandable but should be under control. People try to be effective. Often, they are stressed and under time pressure. When in such a situation, generative AI creates a text that reads wonderfully, then it tempts to take it as it is and wave the necessity to double-check.
Such behavior is both, understandable and highly critical.
This is an innate risk of the rise of Large Language Models. They are good at what they are doing, not excellent, but good. More and more people adopt this technology to improve their writing or programming skills. And it works!
The result is that more and more good texts and software appear on the scene. They are good, not excellent, but good. Good becomes the new mediocrity.
Generative AI can be used for sophisticated tasks, such as market research, generation of ideas and business concepts, planning launch campaigns, problem-solving, discovering best practices and much more. When we keep in mind that also these advanced cases are prone to the same errors, it is clear that we must be super careful not to opt for the wrong options.
To be clear, risks do not only arise from the use of AI only. Humans can also produce the same erratic strategies. The problem again is that ChatGPT & Co. appear to argue so very compelling.
I mentioned above that Large Language Models are not always the best solution. Yes, they produce stunning results on almost any request, but sometimes, it is just smoke and mirrors.
A company that sits on a treasure of stored data is often better off if they apply traditional Machine Learning Algorithms. For instance, tables look so natural and understandable to us. Not to AI, and especially not to Large Language Models.
To choose the right methodology for evaluation of data requires some more profound knowledge of Machine Learning. If we apply to tabular data a Large Language Model (which is not made for it), a Random Forest, a simple Decision Tree, a Logistic Regression, or a Support Vector Machine, in every case we will get a different result.
There is no one-size-fits-all solution, although generative AI sometimes pretends to be exactly this. Selecting the wrong system and algorithm will, in the best case, only end in useless spendings, in the worst case, in severe misinterpretations of the data.
Only a few larger companies will bury the load of hosting, training and maintaining their own AI system. Most rely on AI as a Service. Then it is like any other SaaS. Data will be transmitted to a data center you have no control over. It can happen, and it will happen: Data submitted with a prompt can land in hands with bad intentions.
Companies that offer a public chat system, must defend the system against prompt injections, where offenders try to insert malicious code to manipulate the system.
Considering all the risks mentioned, it should be clear that liability issues may also arise. AI-generated content can lead to copyright infringement, algorithm bias to discrimination, and any wrong decision can lead to third-party claims.
The unreflective use of generative AI can have serious consequences. Mediocrity, superficiality, and a continuous stream of small errors can undermine trust in a company. Data breaches and more serious errors can even destroy trust in a second.
It is a combination of all the measures mentioned so far. The most important part is constant monitoring of how the company and its brands are perceived. Employees should be knowledgeable about the use of artificial intelligence and be at a professional level. Ethical considerations also play a role.
Embracing AI technologies with careful consideration will lead to significant benefits and competitive advantages. A well-designed strategy has the potential to revolutionize business operations and create new opportunities. However, it is essential to understand and minimize the associated risks.
Uwe Weinreich is one of the AI experts whose expertise goes beyond prompt engineering. His broad knowledge of different architectures and algorithms in combination with his entrepreneurial and strategic background make him a valuable and value-adding consultant on your way to Artificial Intelligence in business.