
How ChatGPT is changing the cybersecurity game

An experiment has revealed that the tool can be utilised to create malware code

Published: Fri 3 Mar 2023, 3:51 PM

Updated: Fri 3 Mar 2023, 7:54 PM

By Nicolai Solling


ChatGPT and the AI behind it can be – and already are being – used by cyber attackers. - KT file

You’ve probably heard of ChatGPT by now – the conversational chatbot from OpenAI which is taking the world by storm. Using artificial intelligence (AI) technology, ChatGPT can generate human-like text based on prompts entered by users. As one of the most sophisticated applications of natural language processing to date, ChatGPT is able to “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” according to OpenAI.

This is a highly exciting tool that can be useful for a huge range of purposes, from creating a recipe with limited ingredients to summarising a research paper at the click of a button. But as with many technical breakthroughs, ChatGPT and the AI behind it can be – and already are being – used by cyber attackers.

AI-powered phishing attacks

Research shows that chatbot systems like ChatGPT can be misused by attackers to craft phishing emails and malicious code. By leveraging the natural language processing capabilities of ChatGPT, attackers can create well-written, convincing phishing emails that are difficult to identify, with little effort on their part.

This is especially dangerous as users have traditionally been led to believe that they can identify phishing emails based on spelling or grammar errors. But this is no longer the case, as AI tools can create phishing emails at scale that are flawless from a language perspective.
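To see why the old heuristic fails, consider a minimal, invented sketch in Python. The word list, messages, and filter below are hypothetical illustrations, not real detection logic: a filter keyed to common misspellings catches a clumsy, old-style phishing message but waves through a fluent, AI-style one.

```python
# Hypothetical illustration: a naive filter that flags emails containing
# common misspellings. All words and messages below are invented examples.

COMMON_MISSPELLINGS = {"acount", "verfy", "securty", "recieve", "pasword"}

def naive_spelling_filter(email_body: str) -> bool:
    """Return True if the email looks suspicious under the old heuristic."""
    words = {w.strip(".,!?").lower() for w in email_body.split()}
    return bool(words & COMMON_MISSPELLINGS)

old_style_phish = "Please verfy your acount pasword immediately."
ai_written_phish = (
    "Hello, we noticed an unusual sign-in attempt on your account. "
    "Please review your recent activity at your earliest convenience."
)

print(naive_spelling_filter(old_style_phish))   # True: caught by the heuristic
print(naive_spelling_filter(ai_written_phish))  # False: fluent text slips through
```

The second message would sail past any spelling check, which is exactly why defenders now need to weigh sender reputation, links and context rather than language quality alone.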

Phishing attacks are already a serious threat; survey data from Proofpoint, collected in August 2022, found that 21 per cent of surveyed employees in the UAE and 19 per cent in KSA reported having their online credentials stolen in the past year. And as attackers increasingly incorporate AI for smarter phishing campaigns, vigilance will become more important than ever.

Deepfakes

AI technologies can also be utilised to create deepfakes – synthetic photographs, videos, and audio. Deepfakes can be used to deceive and defraud – for example, what if the “friend” calling you is not even the person you think it is? Technology can alter voices, accents and even the choice of words, making it easy to believe the right person is at the other end of the line when it may well not be.

This has serious implications for cybersecurity, as attackers can utilise deepfakes for social engineering campaigns, where an individual is tricked into performing an action such as revealing information or installing a malicious application. Social engineering is based on building trust between the attacker and the victim, and deepfake technology enables attackers to build that trust more quickly and deeply, through a highly convincing impersonation that can easily deceive most people.

Generating malicious code

A recent experiment with ChatGPT revealed that the tool can be utilised to create malware code. Although ChatGPT employs controls meant to prevent it from creating harmful output, researchers were able to bypass these filters through carefully crafted text prompts.

It is worth noting that the code created by ChatGPT likely wouldn’t be enough on its own to launch a malware campaign, as cyberattacks normally involve a chain of events, and are rarely limited to a single piece of code. Regardless, this recent finding has highly worrying implications. Instead of having to develop technical capabilities, attackers with zero coding skills can now use tools like ChatGPT to generate malicious code with relative ease.

What can be done?

These side effects of artificial intelligence can be intimidating to think about.

The good news is that as cyber attackers evolve their methods, cybersecurity professionals are simultaneously innovating to thwart emerging threats and stay ahead of the curve. And in many cases, the same technologies leveraged by bad actors can be utilised by security professionals to identify and mitigate threats. For example, in the same way that artificial intelligence can be weaponised by cyber attackers, it can also be used for faster incident detection and response.
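As a toy example of the defensive side, the sketch below flags an hour of login telemetry whose failure count deviates sharply from a learned baseline. The numbers and the z-score threshold are invented for illustration, and production systems use far richer models, but the principle is the same: learn what normal looks like, then flag deviations fast.

```python
# Toy anomaly detection on invented login telemetry: learn a baseline,
# then flag hours whose failed-login counts deviate sharply from it.

from statistics import mean, stdev

hourly_failed_logins = [3, 5, 4, 2, 6, 4, 3, 97, 5, 4]  # invented data

baseline = hourly_failed_logins[:7]          # treat early hours as "normal"
mu, sigma = mean(baseline), stdev(baseline)  # baseline statistics

for hour, count in enumerate(hourly_failed_logins):
    z = (count - mu) / sigma                 # how unusual is this hour?
    if z > 3:                                # a simplistic anomaly threshold
        print(f"hour {hour}: {count} failed logins (z-score {z:.1f}) - investigate")
```

A human analyst might take hours to spot that spike in raw logs; a model watching the baseline surfaces it immediately, which is the speed advantage defenders need against AI-assisted attacks.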

Adaptation is key

When ground-breaking technologies like ChatGPT are unleashed, society as a whole needs to adapt in response. It is vital to build awareness among the public around the latest advancements in technology, and how they are being weaponised by attackers. Additionally, people must discard outdated notions – for example, that phishing emails are characterised by poor language, or that fake images and videos are easily identifiable.

AI adoption has the potential to bring about many positive changes, but it also poses new security risks. In this rapidly evolving field, it’s important to stay vigilant and adapt to the latest developments. By embracing cybersecurity innovation, we can take advantage of the convenience and efficiency offered by AI, while keeping our sensitive information secure. In short, embracing the benefits of AI requires a balanced approach that prioritises safety and security.

The writer is chief technology officer, Help AG, the cybersecurity arm of e& enterprise (formerly Etisalat Digital).


