
Is generative AI the next big cyber threat for businesses?

By Robert Smith, Product Manager, Cyber Security at M247

Unless you’ve been living under a rock over the past twelve months, you will have heard all about ChatGPT by now.

Short for ‘Chat Generative Pre-trained Transformer’, the smart chatbot exploded onto the tech scene in November last year, amassing 100 million users in its first two months to become the fastest-growing consumer application in history. Since then, it has piqued the curiosity of almost every sector – from artists and musicians to marketers and IT managers.

ChatGPT is, in many ways, the poster child for the new wave of generative AI tools taking these sectors by storm – Bing, Google’s Vertex AI and Bard, to name a few. These tools’ user-friendly interfaces, and their ability to turn even the most niche, specific prompts into everything from artwork to detailed essays, have left most of us wondering: what is next for us and, more specifically, for our jobs? So much so that a report released last year found that nearly two-thirds of UK workers think AI will take over more jobs than it creates.

However, while the question around AI and what it means for the future of work is certainly an important one, something that is too often overlooked in these discussions is the impact this technology is currently having on our security and safety.

The threat of ‘FraudGPT’

According to Check Point Research, the arrival of advanced AI technology had already contributed to an 8% rise in weekly cyber-attacks in the second quarter of 2023. We even asked ChatGPT whether its technology is being used by cyber-criminals to target businesses. “It’s certainly possible they could attempt to use advanced language models or similar technology to assist in their activities…”, said ChatGPT.

And it was right. Just as businesses are constantly looking for new solutions to adopt, or more sophisticated tools to develop, bad actors and cyber-criminals are doing the same. The difference is that cyber-criminals are using tools such as AI to steal your data and compromise your devices. And now we’re witnessing this in plain sight, with the likes of ‘FraudGPT’ flooding the dark web.

FraudGPT is an AI-powered chatbot marketed to cyber-criminals as a tool for creating malware, malicious code, phishing e-mails and many other fraudulent outputs. Using the same user-friendly prompts as its legitimate counterpart, ChatGPT, FraudGPT and tools like it allow hackers to take similar shortcuts, producing convincing content designed to steal data and create havoc for businesses.

As with any sophisticated language model, one of FraudGPT’s biggest strengths (or threats) is its ability to produce convincing e-mails and documents, and even replicate human conversation, in order to steal data or gain access to a business’s systems. Very soon, those blatantly obvious phishing e-mails in your inbox may not be so easy to spot.

And it doesn’t stop there. More and more hackers are likely to start using these AI-powered tools across every stage of the cyber ‘kill chain’, leveraging this technology to develop malware, identify vulnerabilities and even operate their malicious attacks. There are already bots that can scan the entire internet for potential vulnerabilities to exploit within 24 hours, and these are constantly being updated. So, if AI is going to become a hacker’s best friend, businesses will need to evolve and adopt the latest technology too, in order to keep pace.

What can businesses do?

To start with, IT managers (or whoever is responsible for cyber-security within your organisation) must make it their priority to stay on top of the latest hacking methods and constantly scan for new solutions that can safeguard data.

Endpoint Detection and Response (EDR) is one great example of a robust defence businesses can put in place today. EDR uses behavioural analysis to monitor activity on your devices, building a picture of what ‘normal’ looks like so that it can detect even minor abnormalities in daily activity. If an EDR system detects an attack on your business – AI-powered or otherwise – it can alert your IT team so they can form a response and resolve the issue. In fact, many cyber insurers today insist that businesses adopt EDR as a key risk control before offering cover.
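To make the behavioural-baseline idea concrete, here is a deliberately simplified sketch in Python. It is not how any real EDR product works internally – commercial tools analyse far richer telemetry – and the data and function names are purely illustrative: the system learns what “normal” activity looks like, then flags anything that deviates sharply from it.

```python
# Toy illustration of the behavioural-baseline idea behind EDR tooling:
# learn what "normal" activity looks like, then flag sharp deviations.
# Real EDR products use far richer telemetry; all names here are hypothetical.
from statistics import mean, stdev

def build_baseline(samples):
    """Learn normal behaviour from historical activity counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hourly process-launch counts for one endpoint (illustrative data)
history = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
baseline = build_baseline(history)

print(is_anomalous(14, baseline))   # typical activity -> False
print(is_anomalous(90, baseline))   # burst of scripted activity -> True
```

The point of the sketch is the principle, not the maths: an attacker (human or AI) has to generate activity, and activity that falls outside the learned baseline is what gives them away.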

Cyber-security providers such as Fortinet and Microsoft have already begun incorporating AI into their solutions, too. Making sure you have the latest machine learning and AI (not just simple, predictive models) operating in the background to detect threats will give your business the upper hand against hackers.

And finally, educate your workforce. Although many are worried that AI will overtake us in the workplace and steal our jobs, it’s unlikely the power of human intuition will be replaced anytime soon. So, by arming your team with the latest training on AI and cyber-threats – and what to do when they suspect an AI-powered threat is happening – you can outsmart this new technology and keep the hackers at bay.

