19/01/2023

ChatGPT: What AI holds in store for security


Chatbots that use AI technology have been around for a while. Many companies use them to supplement the FAQs on their websites and to extend their support offerings. In late 2022, ChatGPT shook up the media landscape. Some of the answers the program gives to users' questions are extremely convincing - it is virtually impossible to tell whether the answer comes from a human or a machine. This made an impression, so much so, in fact, that according to initial unconfirmed reports, Microsoft is planning to invest ten billion US dollars in OpenAI, the company that developed ChatGPT. In return, the software giant from Redmond is to pocket 75 percent of OpenAI's profits until the invested sum is recouped. Once this goal has been reached, Microsoft is to receive a 49 percent stake in OpenAI. So there is also a lot of money riding on this technology.

The potential uses of AI-based chat systems are cause for both enthusiasm and concern. It is likely that Microsoft now wants to send its own search engine, Bing, into the race against top dog Google, armed with AI support. An integration into Microsoft's virtual assistant Cortana is equally conceivable. Many, from developers and journalists to people working in customer service and sales, view these developments critically because they fear that an AI could take over their jobs and leave them out of work. As understandable as these fears are, it is rather unlikely that this scenario will play out in the medium term. After all, the advent of the automobile did not make all coachmen unemployed overnight.


Abuse of Chatbots?

Of course, such technology is also open to abuse - for example, by cybercriminals who use a chatbot to commit fraud. Where entire call centers staffed with human employees run support scams today, one day a single computer or data center may suffice. Combined with another AI technology called Vall-E, developed by Microsoft, the possibilities are downright terrifying. Vall-E is capable of mimicking voices with uncanny accuracy, and it needs relatively little source material to do so - a small voice sample is enough. Until recently, such capabilities were the domain of science fiction writers. In the here and now, this kind of computer-generated voice imitation could be used, for instance, to persuade mobile phone providers to disclose or change personal data.

Used correctly - or rather, incorrectly - ChatGPT and Vall-E can become a nightmare for IT security, especially when they are used for industrial espionage. In the worst case, one might no longer trust even an email or a call from colleagues or the boss, because the voice sounds so deceptively genuine that the ruse goes unnoticed over the phone. Could this even force a step backwards, making it necessary to handle more matters face-to-face if you want to be sure you are not talking to an artificially intelligent version of the other person?

And in times of ever-improving deepfakes that can make people appear to do and say things they have never said or done, this combination of technologies could even trigger wars. Against this backdrop, it would not be out of place to speak of a risk technology whose use must be closely regulated. Both politicians and manufacturers made it clear as far back as 2018 that clear rules and laws are needed here. The European Commission, for example, is addressing the issue in a draft regulation.

Use for generating and spreading fake news is also entirely conceivable, because the texts ChatGPT produces are human in an almost uncanny way. One of the reasons for this will receive attention later in this text.

ChatGPT is also able to write program code: a simple request is all it takes for the bot to spit out the desired lines. This naturally raised concerns that software developers could become obsolete in the future. But there is no cause for alarm here. One reason is that ChatGPT never actually learned to develop software, so the system does not "know" what secure code looks like. The generated code may be functional, but its security is open to doubt - source code generated by an AI can therefore contain security vulnerabilities. Unsurprisingly, there have even been attempts to use ChatGPT to program ransomware. So far, AI-generated software is still in its infancy; no one has yet generated entire software suites exclusively with an AI. How AI-generated software could be recognized, and by what criteria, is still unclear - this topic alone very likely contains material for several PhD theses.
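To make that risk concrete, here is a minimal, hypothetical sketch in Python of the kind of flaw that can slip into generated code: a database lookup that works perfectly in testing but is open to SQL injection, next to a parameterized variant. The function names and the table are invented purely for illustration.

```python
import sqlite3

# Hypothetical example: the kind of lookup function a code-generating
# chatbot might plausibly produce. It works, but it builds the SQL
# statement via string formatting - a classic SQL injection flaw.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()  # vulnerable to injection

# The same functionality written defensively: a parameterized query
# lets the database driver handle escaping.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    # A crafted input dumps every row from the insecure variant ...
    print(find_user_insecure(conn, "' OR '1'='1"))
    # ... while the parameterized variant treats it as a literal string.
    print(find_user_safe(conn, "' OR '1'='1"))
```

A developer with security experience spots the string-formatted query immediately; a system that merely reproduces patterns from its training data may not.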

Intelligent cheating?

There is, then, no reason for software developers to get nervous and fear for their jobs. So far, there is no substitute for years of programming experience and an eye for whether code is secure or not. AI-generated malware has not yet appeared in the wild (to the best of my knowledge), but if it did at some point, it is questionable whether it would even be recognizable as such.

Another form of abuse has already been reported in the media: ChatGPT can write good-sounding and completely coherent essays on any topic within seconds. Many students gave ChatGPT glowing reviews: "I couldn't have written that better myself." Of course, this will please anyone who has forgotten the due date for an essay. AI-generated essays could easily end up on teachers' desks in the near future, if they are not landing there already. This could also become a problem in the academic world: the fact that a text written by ChatGPT is virtually indistinguishable from a text written by a human spells potential trouble for academic integrity. New software programs take this into account and are meant to enable teachers to recognize when students submit AI-generated texts as their own work.
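How such detectors work is mostly undisclosed, but one frequently cited signal is that generated text tends to be statistically more uniform than human writing. The following toy sketch - an assumption for illustration, nothing like a production detector - scores a text by the variance of its sentence lengths as a crude "burstiness" proxy.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Crude 'burstiness' proxy: variance of sentence lengths in words.

    Human writing tends to mix short and long sentences; very uniform
    sentence lengths *can* hint at generated text. This is a toy
    heuristic for illustration only, not a reliable detector.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

sample = ("The model answered instantly. It sounded confident. "
          "Every sentence had roughly the same measured rhythm.")
print(f"burstiness: {burstiness_score(sample):.2f}")
```

Real detectors combine many such signals - and even then, both false positives and false negatives remain a serious problem.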

Old issues, reshaped?

AI systems have also had a racism and sexism problem in the past: people with dark skin, as well as women, have often faced discrimination in AI-driven decisions. However, this is not an inherent characteristic of artificial intelligence. These behaviors are learned - from the source material used in the training phase as well as from the algorithms underlying the AI. The source material is provided by humans, and if a subliminal bias is already present there, it naturally transfers to the system that learns from its creators. An AI cannot reflect on its own behavior, recognize this bias and correct itself accordingly. It also has no filter. Paradoxically, these small flaws are part of what makes AI output seem so incredibly human - and when fake news or conspiracy myths are spread, that human quality wins greater acceptance among people who are susceptible to them.
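A minimal sketch - with entirely invented data - of how bias transfers from training material to a model: a naive classifier that simply learns the majority decision per group will faithfully reproduce whatever imbalance its historical data contains.

```python
from collections import Counter

# Toy illustration (invented data): a naive "hiring" classifier that
# memorizes the majority label per group in its training data. If the
# historical data is biased, the model reproduces that bias verbatim.
training_data = [
    ("group_a", "hire"), ("group_a", "hire"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "hire"),
]

def train(samples):
    by_group = {}
    for group, label in samples:
        by_group.setdefault(group, Counter())[label] += 1
    # Predict the most common historical label for each group.
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(training_data)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'} - learned bias
```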

Fundamental Issues

The data basis on which an AI chatbot learns is therefore of the utmost importance. The principle of "garbage in, garbage out" applies: feed the AI garbage, and garbage is all you get back. Both Facebook (now Meta) and Microsoft have experienced this firsthand. Tay, a chatbot developed by Microsoft using AI technologies, turned into a PR disaster in 2016 after the Twitter-connected bot began publishing racist, sexist and misogynistic posts; in one post, Tay even referred to the Holocaust as a fabrication. The bot was online for less than 48 hours before Microsoft pulled the plug, and a second attempt was canceled after less than an hour.

That was in 2016, and a lot has happened since then - or so one would think. But in August 2022, Meta introduced Blender Bot 3, and this bot revealed a weakness that ChatGPT shares: answers can vary depending on how a question is asked. When a user asks the bot a question, it researches the web in the background - and also falls for less reliable sources. The result: Blender Bot 3 shared conspiracy theories and openly expressed anti-Semitic views. The bot didn't have a good word to say about Meta CEO Mark Zuckerberg either - he is "creepy and manipulative."

The underlying problem is obvious - and it is not unlike the learning process that humans themselves go through.

AI kindergarten

Without wanting to anthropomorphize a computer system: perhaps we should treat modern AI systems like infants - not from an emotional point of view, but from an educational one. If we don't educate an AI properly from the beginning, it can do a lot of damage. Like a toddler who loudly and gleefully parrots the swear words it has picked up from adults, an AI needs to be steered in the right direction. Humans learn from their surroundings and from interactions with others; an AI distills what it finds into a set of rules of its own, detached from human interaction, in which ethical concepts play no role. This set of rules is based on whatever information the AI finds on the Internet - and it is precisely the ethically questionable content that often enjoys artificially inflated popularity. And whatever gets a lot of approval - regardless of the source - must be good. This is how AI-generated posts with racist content come about. Again: garbage in, garbage out. An AI would actually have to learn media literacy to distinguish reliable from unreliable information - but that is a chapter of its own. One thing is certain: social networks may not be the right "school" for future AIs.
The next big task, then, will be to curate the materials and data that systems like ChatGPT learn from - a sketch of what such curation might look like in its simplest form follows below. Things will certainly remain exciting.
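Here is a minimal, hedged sketch of such a pre-training filter: it drops documents that contain denylisted terms or come from low-trust sources. Every name, term and domain is invented for illustration; real curation pipelines are vastly more sophisticated.

```python
from dataclasses import dataclass

# Hypothetical document record; real training pipelines are far richer.
@dataclass
class Document:
    url: str
    text: str

# Toy denylist and source ratings, invented purely for illustration.
DENYLIST = {"slur_example", "conspiracy_example"}
LOW_TRUST_DOMAINS = {"example-rumor-mill.net"}

def keep_for_training(doc: Document) -> bool:
    """Return True if a document passes this crude curation filter."""
    words = set(doc.text.lower().split())
    if words & DENYLIST:  # drop documents containing denylisted terms
        return False
    domain = doc.url.split("/")[2] if "//" in doc.url else doc.url
    return domain not in LOW_TRUST_DOMAINS  # drop known low-trust sources

corpus = [
    Document("https://example.com/article", "A sober report on chatbots."),
    Document("https://example-rumor-mill.net/post", "Shocking secret truth!"),
]
cleaned = [d for d in corpus if keep_for_training(d)]
print([d.url for d in cleaned])  # only the first document survives
```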
The fact that human behavior can be imitated so deceptively well is cause for concern as much as for enthusiasm. In the future, we will have to develop methods that make it possible to distinguish a real human being from an AI - especially in contacts that take place over the Internet or by telephone. As useful as AI is, and as convenient as it may be to integrate it as a support tool, we should not trust it blindly.

Going Forward

Ultimately, it is no use demonizing a new technology. It is here now and will play a greater role in the future - whether some individuals are comfortable with that or not. And no groundbreaking technology has ever disappeared from the scene as a result of misgivings and demonization. Think back in history: with the advent of the first railroads, rumors persisted that no human being could survive speeds of over 20 km/h - passengers would either suffocate or literally be shaken unconscious. Such claims were refuted early on, among others by the medical weekly "The Lancet," which still exists today. The issue of January 25, 1862, for example, instead contains descriptions of what we would today call "motion sickness," along with warnings to protect oneself from colds on a train trip. And today? The current speed record for unmodified trains, set by the French TGV in 2007, stands at just under 575 km/h - and the German ICE runs at up to 300 km/h in regular service.

The trick is to keep pace with developments and not let them overtake us. Those who allow themselves to be overtaken will be left behind - slowly but inevitably.


Tim Berghoff


Security Evangelist

