generative ai Archives - Cyber Secure Forum | Forum Events Ltd

Generative AI now the most frequently deployed AI solution in organisations

Stuart O'Brien

According to a Gartner survey conducted in the fourth quarter of 2023, 29% of the 644 respondents from organisations in the U.S., Germany and the U.K. said that they have deployed and are using GenAI, making GenAI the most frequently deployed AI solution. GenAI was found to be more common than other solutions like graph techniques, optimisation algorithms, rule-based systems, natural language processing and other types of machine learning.

The survey also found that using GenAI embedded in existing applications (such as Microsoft’s Copilot for 365 or Adobe Firefly) is the top way to fulfil GenAI use cases, with 34% of respondents saying this is their primary method. This was more common than customising GenAI models with prompt engineering (25%), training or fine-tuning bespoke GenAI models (21%), or using standalone GenAI tools such as ChatGPT or Gemini (19%).

“GenAI is acting as a catalyst for the expansion of AI in the enterprise,” said Leinar Ramos, Sr Director Analyst at Gartner. “This creates a window of opportunity for AI leaders, but also a test on whether they will be able to capitalize on this moment and deliver value at scale.”

The primary obstacle to AI adoption, as reported by 49% of survey participants, is the difficulty in estimating and demonstrating the value of AI projects. This issue surpasses other barriers such as talent shortages, technical difficulties, data-related problems, lack of business alignment and trust in AI (see Figure 1).

“Business value continues to be a challenge for organizations when it comes to AI,” said Ramos. “As organizations scale AI, they need to consider the total cost of ownership of their projects, as well as the wide spectrum of benefits beyond productivity improvement.”

Figure 1: Top Barriers to Implement AI Techniques (Sum of Top 3 Ranks)

Source: Gartner (May 2024)

“GenAI has increased the degree of AI adoption throughout the business and made topics like AI upskilling and AI governance much more important,” said Ramos. “GenAI is forcing organizations to mature their AI capabilities.”

“Organizations who are struggling to derive business value from AI can learn from mature AI organizations,” said Ramos. “These are organizations that are applying AI more widely across different business units and processes, deploying many more use cases that stay longer in production.”

The survey found that 9% of organisations are currently AI-mature; what sets these organisations apart is that they focus on four foundational capabilities:

  • A scalable AI operating model, balancing centralized and distributed capabilities.
  • A focus on AI engineering, designing a systematic way of building and deploying AI projects into production.
  • An investment in upskilling and change management across the wider organisation.
  • A focus on trust, risk and security management (TRiSM) capabilities to mitigate the risks that come from AI implementations and drive better business outcomes.

“AI-mature organizations invest in foundational capabilities that will remain relevant regardless of what happens tomorrow in the world of AI, and that allows them to scale their AI deployments efficiently and safely,” said Ramos.

Focusing on these foundational capabilities can help organizations mature and alleviate the current challenge of bringing AI projects to production. The survey found that, on average, only 48% of AI projects make it into production, and it takes 8 months to go from AI prototype to production.


Is generative AI the next big cyber threat for businesses?

Stuart O'Brien

By Robert Smith, Product Manager, Cyber Security at M247

Unless you’ve been living under a rock over the past twelve months, you will have heard all about ChatGPT by now.

Short for ‘Chat Generative Pre-Trained Transformer’, the smart chatbot exploded onto the tech scene in November last year, amassing 100 million users in its first two months to become the fastest-growing consumer application in history. Since then, it has piqued the curiosity of almost every sector – from artists and musicians to marketers and IT managers.

ChatGPT is, in many ways, the poster child for the new wave of generative AI tools taking these sectors by storm – Bing, Google’s Vertex AI and Bard, to name a few. These tools’ user-friendly interfaces, and their ability to turn even the most niche, specific prompts into everything from artwork to detailed essays, have left most of us wondering: what is next for us and, more specifically, for our jobs? So much so that a report released last year found that nearly two-thirds of UK workers think AI will take over more jobs than it creates.

However, while the question around AI and what it means for the future of work is certainly an important one, something that is too often overlooked in these discussions is the impact this technology is currently having on our security and safety.

The threat of ‘FraudGPT’

According to Check Point Research, the arrival of advanced AI technology had already contributed to an 8% rise in weekly cyber-attacks in the second quarter of 2023. We even asked ChatGPT if its technology is being used by cyber-criminals to target businesses. “It’s certainly possible they could attempt to use advanced language models or similar technology to assist in their activities…”, said ChatGPT.

And it was right. Just as businesses are constantly looking for new solutions to adopt, or more sophisticated tools that will advance their objectives, bad actors and cyber-criminals are doing the same. The only difference is that cyber-criminals are using tools such as AI to steal your data and compromise your devices. And now we’re witnessing this in plain sight, with the likes of ‘FraudGPT’ flooding the dark web.

FraudGPT is an AI-powered chatbot marketed to cyber-criminals as a tool to support the creation of malware, malicious code, phishing e-mails and many other fraudulent outputs. Using the same user-friendly prompts as its predecessor, ChatGPT, FraudGPT and similar tools allow hackers to take the same shortcuts, producing convincing content to steal data and create havoc for businesses.

As with any sophisticated language model, one of FraudGPT’s biggest strengths (or threats) is its ability to produce convincing e-mails and documents, and even replicate human conversations, in order to steal data or gain access to a business’s systems. Very soon, those blatantly obvious phishing e-mails in your inbox may not be so easy to spot.

And it doesn’t stop there. More and more hackers are likely to start using these AI-powered tools across every stage of the cyber ‘kill chain’, leveraging the technology to develop malware, identify vulnerabilities and even orchestrate their attacks. There are already bots that can scan the entire internet within 24 hours for potential vulnerabilities to exploit, and these are constantly being updated. So, if AI is going to become a hacker’s best friend, businesses will need to evolve and adopt the latest technology too, in order to keep pace.

What can businesses do?

To start with, IT managers (or whoever is responsible for cyber-security within your organisation) must make it their priority to stay on top of the latest hacking methods and constantly scan for new solutions that can safeguard data.

Endpoint Detection and Response (EDR) is one great example of a robust defence businesses can put in place today. EDR uses behavioural analysis to monitor your data and the activity that is normal on your devices, and can therefore detect even minor abnormalities in daily patterns. If an EDR system detects an AI-driven attack on your business, it can alert your IT team so they can form a response and resolve the issue. Indeed, many cyber insurers now expect businesses to adopt EDR as a key risk control before offering cover.
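The behavioural analysis EDR relies on can be pictured as baselining normal activity and flagging deviations. The sketch below is purely illustrative – a simple z-score check over hourly event counts, with invented numbers and names – and not how any particular EDR product works:

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation that deviates from the baseline by more
    than `threshold` standard deviations (a simple z-score check)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hourly process-launch counts for a typical workstation (illustrative).
typical_hours = [12, 15, 11, 14, 13, 12, 16, 14]

print(is_anomalous(typical_hours, 14))    # within the normal range
print(is_anomalous(typical_hours, 240))   # a sudden burst worth alerting on
```

Real EDR systems track far richer signals (process trees, network flows, registry changes), but the principle of alerting on departures from a learned baseline is the same.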

Cyber-security providers such as Fortinet and Microsoft have already begun incorporating AI into their solutions, too, but making sure you have the latest machine learning and AI (not just simple predictive models) operating in the background to detect threats will give your business the upper hand against hackers.

And finally, educate your workforce. Although many are worried that AI will overtake us in the workplace and steal our jobs, it’s unlikely the power of human intuition will be replaced anytime soon. So, by arming your team with the latest training on AI and cyber-threats – and what to do when they suspect an AI-powered threat is happening – you can outsmart this new technology and keep the hackers at bay.

Where does GenAI fit into the data analytics landscape?

Guest Post

Recently, there has been a lot of interest and hype around Generative Artificial Intelligence (GenAI) tools such as ChatGPT and Bard. While these applications are geared more towards consumers, there is a clear uptick in businesses wondering where the technology can fit into their corporate strategy. James Gornall, Cloud Architect Lead, CTS, explains the vital difference between headline-grabbing consumer tools and proven, enterprise-level GenAI…

Understanding AI

Given the recent hype, you’d be forgiven for thinking that AI is a new capability but, in actual fact, businesses have been using some form of AI for years – even if they don’t quite realise it.

One of the many applications of AI in business today is predictive analytics. By analysing datasets to identify patterns and predict future outcomes, businesses can more accurately forecast sales, manage inventory, detect fraud and plan resource requirements.
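As a toy illustration of the trend forecasting described above, the sketch below fits a least-squares line to a short series and projects it one step ahead; the sales figures are invented:

```python
def forecast_next(values):
    """Fit a least-squares line to a series and project one step ahead --
    the simplest form of trend-based forecasting."""
    n = len(values)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(values) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values)) \
            / sum((x - x_bar) ** 2 for x in xs)
    intercept = y_bar - slope * x_bar
    return intercept + slope * n   # projected value for the next period

# Illustrative monthly sales figures showing a steady upward trend.
sales = [100, 110, 121, 130, 141, 150]
print(round(forecast_next(sales)))
```

Production forecasting accounts for seasonality, promotions and uncertainty, but the underlying idea – learn a pattern from history, then extrapolate – is the same.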

Using data visualisation tools to make complex data simpler to understand and more accessible, decision-makers can easily spot trends, correlations and outliers, leading them to make better-informed data-driven decisions, faster.

Another application of AI commonly seen is to enhance customer service through the use of AI-powered chatbots and virtual assistants that meet the digital expectations of customers, by providing instant support when needed.

So what’s new?

What is changing with the commercialisation of GenAI is the ability to create entirely new content based on what has been learnt previously. GenAI can draw on the millions of images and documents it has been trained on to write text and create imagery at a scale never seen before. This is hugely exciting for organisations’ creative teams, providing unprecedented opportunities to create new content for ideation, testing and learning at scale. With this, businesses can rapidly generate unique, varied content to support marketing and brand.

The technology can use data on customer behaviour to deliver quality personalised shopping experiences. For example, retailers can provide unique catalogues of products tailored to an individual’s preferences, creating a totally immersive, personalised experience. In addition to enhancing customer predictions, GenAI can provide personalised recommendations based on past shopping choices and deliver human-like interactions to enhance customer satisfaction.

Furthermore, GenAI supports employees by automating a variety of tasks, including customer service, recommendations, data analysis and inventory management. In turn, this frees up employees to focus on more strategic work.

Controlling AI

The latest generation of consumer GenAI tools have transformed AI awareness at every level of business and society. In the process, they have also done a pretty good job of demonstrating the problems that quickly arise when these tools are misused. Users may not realise the risks of inputting confidential code into ChatGPT, completely unaware that they are leaking valuable Intellectual Property (IP) that could be included in the chatbot’s future responses to other people around the world; lawyers have even been fined for using fictitious ChatGPT-generated research in a legal case.

While this latest iteration of consumer GenAI tools is bringing awareness to the capabilities of this technology, there is a lack of education around the way it is best used. Companies need to consider the way employees may be using GenAI that could potentially jeopardise corporate data resources and reputation.

With GenAI set to accelerate business transformation, AI and analytics are rightly dominating corporate debate, but as companies adopt GenAI to work alongside employees, it is imperative that they assess the risks and rewards of cloud-based AI technologies as quickly as possible.

Trusted Data Resources

One of the concerns for businesses to consider is the quality and accuracy of the data provided by GenAI tools. This is why it is so important to distinguish between the headline-grabbing consumer tools and enterprise-grade alternatives that have been in place for several years.

Business specific language is key, especially in jargon heavy markets, so it is essential that the GenAI tool being used is trained on industry specific language models.

Security is also vital. Commercial tools allow a business to set up its own local AI environment where information is stored inside the virtual safety perimeter. This environment can be tailored with a business’ documentation, knowledge bases and inventories, so the AI can deliver value specific to that organisation.

While these tools are hugely intuitive, it is also important that people understand how to use them effectively.

Providing structured prompts and being specific in the way questions are asked is one thing, but users also need to think critically rather than simply accept the results at face value. A sceptical viewpoint is a prerequisite – at least initially. The quality of GenAI results will improve over time as the technology evolves and people learn how to feed valid data in so they get valid data out. For the time being, however, results need to be taken with a pinch of salt.
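One simple way to make prompts structured and repeatable is to assemble them from named parts. The helper below is a hypothetical illustration of that idea, not a feature of any particular GenAI tool:

```python
def build_prompt(role, task, context, output_format):
    """Assemble a structured prompt from named parts -- one common way
    to make requests to a GenAI tool specific and repeatable."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond as: {output_format}"
    )

print(build_prompt(
    "a security analyst",
    "summarise the phishing indicators in the e-mail below",
    "<e-mail text here>",
    "a bulleted list",
))
```

Spelling out role, task, context and expected format makes results easier to review critically, because it is clear what the tool was actually asked to do.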

It is also essential to consider the ethical uses of AI.

Avoiding bias is a core component of any Environmental, Social and Governance (ESG) policy. Unfortunately, bias can be inherent in AI algorithms, so companies need to be careful, especially when using consumer-level GenAI tools.

For example, finance companies need to avoid algorithms producing biased outcomes against customers seeking access to certain products, or offering different interest rates based on discriminatory data.

Similarly, medical organisations need to ensure ubiquitous care across all demographics, especially when different ethnic groups experience varying risk factors for some diseases.

Conclusion

AI is delivering a new level of data democratisation, allowing individuals across businesses to easily access complex analytics that has, until now, been the preserve of data scientists. The increase in awareness and interest has also accelerated investment, transforming the natural language capabilities of chatbots, for example. The barrier to entry has been reduced, allowing companies to innovate and create business specific use cases.

But good business and data principles must still apply. While it is fantastic that companies are now actively exploring the transformative opportunities on offer, they need to take a step back and understand what GenAI means to their business. Before rushing to meet shareholder expectations for AI investment to achieve competitive advantage, businesses must first ask themselves, how can we make the most of GenAI in the most secure and impactful way?

OPINION: The promising influence of generative AI on cybersecurity

Stuart O'Brien

The cybersecurity landscape has always evolved rapidly, with new threats emerging and old ones becoming more sophisticated. As defenders strive to stay ahead, generative artificial intelligence (AI) is proving to be a game-changing ally, transforming threat detection, response strategies, and even the creation of defensive measures.

Generative AI holds the potential to transform threat detection. By learning from past cyber-attacks and security incidents, AI can generate predictive models to identify potential threats before they materialise. This capability enables proactive security measures and mitigates the risk of breaches, significantly bolstering cybersecurity efforts.
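At its simplest, ‘learning from past incidents’ means scoring new events against labelled historical ones. The toy example below counts tokens from past malicious and benign log lines and scores unseen lines accordingly; the data and labels are invented, and production systems use far richer models:

```python
from collections import Counter

def train(events):
    """Count token occurrences per label across labelled past events."""
    counts = {"bad": Counter(), "good": Counter()}
    for text, label in events:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Positive score -> leans malicious; negative -> leans benign."""
    return sum(counts["bad"][tok] - counts["good"][tok]
               for tok in text.lower().split())

# Invented historical log lines with analyst-assigned labels.
history = [
    ("failed login root attempt", "bad"),
    ("failed login admin attempt", "bad"),
    ("user alice opened report", "good"),
    ("user bob opened dashboard", "good"),
]
model = train(history)
print(score(model, "failed login attempt from unknown host"))  # > 0
print(score(model, "user carol opened report"))                # < 0
```

The point of the sketch is only the workflow: label what has already happened, then use it to triage what happens next before a breach materialises.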

Beyond prediction, generative AI also enhances incident response. When a security incident occurs, AI can generate an optimal response strategy based on factors like the nature of the threat, the systems affected, and past successful interventions. By minimising the time between threat detection and response, AI can limit the damage inflicted by cyber-attacks.

In the realm of network security, generative AI is instrumental in the creation of ‘honeypots’ – decoy systems designed to lure attackers. AI can generate realistic network traffic, creating convincing honeypots that can distract attackers, provide early warning of an attack, and provide valuable intelligence about attacker tactics.
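A honeypot in miniature is just a listener that records whoever connects to it. The sketch below is illustrative only – it accepts a single local connection and logs the peer address; real honeypots emulate whole services and, as described above, can generate convincing decoy traffic:

```python
import socket
import threading

def run_honeypot(log, host="127.0.0.1"):
    """Listen on an ephemeral port and record the first connection
    attempt -- a decoy service in miniature."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))              # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def accept_once():
        conn, addr = srv.accept()
        log.append(f"connection attempt from {addr[0]}:{addr[1]}")
        conn.close()
        srv.close()

    worker = threading.Thread(target=accept_once, daemon=True)
    worker.start()
    return port, worker

log = []
port, worker = run_honeypot(log)

# Simulate an attacker probing the decoy service.
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
worker.join(timeout=2)

print(log)
```

Every entry in the log is early warning: nothing legitimate should ever touch a decoy, so any connection is, by definition, worth investigating.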

Another transformative application of generative AI is in cybersecurity training. AI can generate simulated cyber-attack scenarios for cybersecurity professionals to practice their response, improving their skills and readiness for real-world threats.

Perhaps most innovatively, generative AI can be used to create new security systems. AI algorithms can generate novel encryption algorithms, security protocols, or system configurations, pushing the boundaries of current cybersecurity technology and making our digital assets more secure.

However, as promising as these advancements are, the integration of generative AI in cybersecurity is not without challenges. Issues of data privacy, the risk of AI being used maliciously, and the need for humans in the loop to supervise and interpret AI actions must be addressed.

In conclusion, the potential of generative AI in revolutionising the cybersecurity industry is immense. It brings forth a proactive, adaptive, and innovative approach to cybersecurity, helping defenders stay one step ahead of attackers. As generative AI continues to evolve, it will undoubtedly shape the future of cybersecurity, transforming it into a more resilient, intelligent, and agile domain. The cybersecurity industry stands on the cusp of this exciting era, and the successful navigation of this transformation will determine its efficacy in our increasingly interconnected world.
