AI Archives - Cyber Secure Forum | Forum Events Ltd

Generative AI now the most frequently deployed AI solution in organisations


According to a Gartner survey conducted in the fourth quarter of 2023, 29% of the 644 respondents from organisations in the U.S., Germany and the U.K. said that they have deployed and are using GenAI, making GenAI the most frequently deployed AI solution. GenAI was found to be more common than other solutions like graph techniques, optimisation algorithms, rule-based systems, natural language processing and other types of machine learning.

The survey also found that utilizing GenAI embedded in existing applications (such as Microsoft’s Copilot for 365 or Adobe Firefly) is the top way to fulfill GenAI use cases, with 34% of respondents saying this is their primary method of using GenAI. This was found to be more common than other options such as customizing GenAI models with prompt engineering (25%), training or fine-tuning bespoke GenAI models (21%), or using standalone GenAI tools, like ChatGPT or Gemini (19%).

“GenAI is acting as a catalyst for the expansion of AI in the enterprise,” said Leinar Ramos, Sr Director Analyst at Gartner. “This creates a window of opportunity for AI leaders, but also a test on whether they will be able to capitalize on this moment and deliver value at scale.”

The primary obstacle to AI adoption, as reported by 49% of survey participants, is the difficulty in estimating and demonstrating the value of AI projects. This issue surpasses other barriers such as talent shortages, technical difficulties, data-related problems, lack of business alignment and trust in AI (see Figure 1).

“Business value continues to be a challenge for organizations when it comes to AI,” said Ramos. “As organizations scale AI, they need to consider the total cost of ownership of their projects, as well as the wide spectrum of benefits beyond productivity improvement.”

Figure 1: Top Barriers to Implement AI Techniques (Sum of Top 3 Ranks)

Source: Gartner (May 2024)

“GenAI has increased the degree of AI adoption throughout the business and made topics like AI upskilling and AI governance much more important,” said Ramos. “GenAI is forcing organizations to mature their AI capabilities.”

“Organizations who are struggling to derive business value from AI can learn from mature AI organizations,” said Ramos. “These are organizations that are applying AI more widely across different business units and processes, deploying many more use cases that stay longer in production.”

The survey found that just 9% of organizations are currently AI-mature, and that what sets these organizations apart is their focus on four foundational capabilities:

  • A scalable AI operating model, balancing centralized and distributed capabilities.
  • A focus on AI engineering, designing a systematic way of building and deploying AI projects into production.
  • An investment in upskilling and change management across the wider organization.
  • A focus on trust, risk and security management (TRiSM) capabilities to mitigate the risks that come from AI implementations and drive better business outcomes.

“AI-mature organizations invest in foundational capabilities that will remain relevant regardless of what happens tomorrow in the world of AI, and that allows them to scale their AI deployments efficiently and safely,” said Ramos.

Focusing on these foundational capabilities can help organizations mature and alleviate the current challenge of bringing AI projects to production. The survey found that, on average, only 48% of AI projects make it into production, and that it takes eight months to go from AI prototype to production.


What data protection considerations are there when procuring, developing and deploying AI systems?


By Liz Smith, associate in the commercial team at independent UK law firm Burges Salmon

In the rapidly evolving landscape of AI technology, data protection remains a crucial area of concern for businesses. Here we summarise some of the key data protection considerations for businesses procuring, developing or deploying AI systems…

  • Purpose and lawful basis: Whenever personal data is processed within the AI value chain, whether the business is developing, deploying or procuring an AI system, there must be an appropriate lawful basis and such personal data must only be processed for the stated purpose.
  • Role: It is important to identify from an early stage the role of the business in the context of data protection legislation (Data Controller, Data Processor or Joint Controller) in order to understand the applicable obligations. The role of the business is likely to change based on where it sits in the AI value chain, and on whether it is developing and deploying an AI model or procuring one. Where the business acts as a data controller and is procuring an AI model, it needs to be clear on what personal data is being processed by the supplier and for what purpose (for example, is personal data being used to train the model?).
  • Security: It is important to ensure appropriate levels of security against unauthorised or unlawful processing, accidental loss, destruction or damage. As AI is rapidly developing, the security risks are also changing at pace. Most businesses will likely procure an AI system rather than develop one in house. The integration of the AI system into the wider IT estate, as well as reliance on third-party software and the intricacy of the AI value chain, adds an extra degree of complexity which is likely to increase security risks. This complexity can make it more difficult to identify and manage security risks and to flow down and monitor compliance with security policies. It is therefore important that businesses undertake robust due diligence when engaging suppliers and pay special attention to the specific risks AI systems pose to their business. Given this is a rapidly developing area, businesses should actively monitor and take into account state-of-the-art security practices.
  • Data Protection Impact Assessments (DPIA): A DPIA is a critical process for organisations using AI to ensure that personal data is handled lawfully and transparently. A DPIA must be completed before an AI system is deployed if the processing is likely to result in high risk to individuals. The meaning of ‘likely to result in high risk’ is not defined in UK GDPR, but a key point to note is that the purpose of the DPIA is to ascertain whether the processing is high risk, so whether or not a DPIA is required should be determined by an assessment of the potential for high risk. Some processing of personal data (for example, large-scale use of sensitive data) will always require a DPIA.
  • Transparency: Businesses must be transparent about their use of AI, providing clear information to individuals about how their data is being used and for what purposes. The ICO’s guidance focuses on ensuring AI systems are “explainable” to data subjects and emphasises the need to tailor explanations to affected individuals. Businesses should consider whether updates are required to their data protection policy and privacy notices to meet this requirement.
  • Automated decision making: The use of automated decision making which has legal or similarly significant effects on an individual triggers specific legal requirements under data protection legislation. If the decision impacts an individual’s legal entitlements or their ability to obtain funding or secure a job, it is likely to fall within scope of these requirements. Businesses can only carry out this type of decision making where the decision is:
  1. necessary for the entry into or performance of a contract;
  2. authorised by law that applies to the business; or
  3. based on the individual’s explicit consent.

If this is the case, the law requires businesses to give individuals specific information about the process (the logic involved, its significance and the envisaged consequences). Businesses will need to introduce methods for relevant individuals to request human intervention or challenge a decision which impacts them, and will need to carry out regular checks to ensure the systems are working as intended.

  • Bias and discrimination: If left unchecked, AI systems can inadvertently lead to bias and discrimination. Bias can arise from the contents of the training data or the way it has been labelled by humans. Deploying an AI system with underlying bias increases the risk of it making a discriminatory decision, especially in the context of hiring, promotions or performance assessments. This exposes the business to claims of discrimination, could disrupt day-to-day operations if the AI system needs to be removed or fixed to resolve the issue, and may lead to internal or external reputational damage. Businesses deploying AI systems will benefit from testing the decisions made by the AI system for different groups to assess whether the outcomes are acceptable. It may also be appropriate to carry out or request an audit of the underlying data to obtain a clear understanding of how the AI system has been trained.
  • Supply chain due diligence: The majority of businesses will procure AI systems from a third party. Businesses that develop AI may obtain training data from an external source. Wherever a business sits within the AI value chain, carrying out due diligence checks on any third party providers to ensure compliance with data protection legislation is key.


30% of increasing demand for APIs will come from AI and LLM


More than 30% of the increase in demand for application programming interfaces (APIs) will come from AI and tools using large language models (LLMs) by 2026, according to Gartner.

“With technology service providers (TSPs) leading the charge in GenAI adoption, the fallout will be widespread,” said Adrian Lee, VP Analyst at Gartner. “This includes increased demand on APIs for LLM- and GenAI-enabled solutions due to TSPs helping enterprise customers further along in their journey. This means that TSPs will have to move quicker than ever before to meet the demand.”

A Gartner survey of 459 TSPs conducted from October to December 2023 found that 83% of respondents reported they either have already deployed or are currently piloting generative AI (GenAI) within their organizations.

“Enterprise customers must determine the optimal ways GenAI can be added to offerings, such as by using third-party APIs or open-source model options. With TSPs leading the charge, they provide a natural connection between these enterprise customers and their needs for GenAI-enabled solutions,” said Lee.

The survey found that half of TSPs will make strategic changes to extend their core product/service offerings to realize a whole product or end-to-end services solution.

With this in mind, Gartner predicts that by 2026 more than 80% of independent software vendors will have embedded GenAI capabilities in their enterprise applications, up from less than 5% today.

“Enterprise customers are at different levels of readiness and maturity in their adoption of GenAI, and TSPs have a transformational opportunity to provide the software and infrastructure capabilities, as well as the talent and expertise, to accelerate the journey,” said Lee.

Throughout the product life cycle, TSPs need to understand the limitations, risks and overhead before embedding GenAI capabilities into products and services. To achieve this, they should:

  • Document the use case and clearly define the value that users will experience by having GenAI as part of the product.
  • Determine the optimal ways GenAI can be added to offerings (such as by using third-party APIs or open-source model options) and consider how the costs of new features may affect pricing decisions (see the sketch after this list).
  • Address users’ prompting experience by building in optimizations that avoid the friction of steep learning curves.
  • Review the different use-case-specific risks, such as inaccurate results, data privacy, secure conversations and IP infringement, by adding guardrails specific to each risk into the product.
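
Since many TSPs will take the third-party API route named above, here is a minimal, hypothetical sketch of what embedding a GenAI feature in a product via a hosted API can look like. The endpoint URL, model name and response shape are placeholders, not any particular vendor's documented API.

```python
# Minimal sketch: adding a GenAI feature to a product via a third-party API.
# The endpoint, model name and response field below are hypothetical
# placeholders -- substitute the documented values for your chosen provider.
import os
import requests

API_URL = "https://api.example-llm.com/v1/generate"  # placeholder endpoint

def summarise_ticket(ticket_text: str) -> str:
    """Ask a hosted LLM to summarise a customer support ticket."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={
            "model": "example-model",
            "prompt": f"Summarise this support ticket:\n{ticket_text}",
            "max_tokens": 120,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # response field is illustrative

if __name__ == "__main__":
    print(summarise_ticket("Customer cannot log in since the last update."))
```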


IT experts poll: Elon Musk is ‘wrong’ that no jobs will be needed in the future


Elon Musk’s claim that AI will make all human jobs irrelevant should not be taken seriously, according to a survey of tech experts conducted by BCS, The Chartered Institute for IT.

During an interview with UK Prime Minister Rishi Sunak for the AI Safety Summit last year, Musk said: ‘There will come a point where no job is needed — you can have a job if you wanted to have a job … personal satisfaction, but the AI will be able to do everything.’

But in a poll by BCS, The Chartered Institute for IT, 72% of tech professionals disagreed with Musk’s view that AI will render work unnecessary. Some 14% agreed (but only 5% ‘strongly’ agreed), with the rest unsure.

In comments, many IT experts said Musk’s statement was ‘hyperbole’ and suggested it was made to create headlines.

Those currently working in computing agreed that AI could replace a range of jobs, but would also create new roles, including oversight of AI decision making – known as ‘human in the loop’.

They also said that a number of jobs, for example hairdressing, were unlikely to be replaced by AI in the near future, despite advances in robotics.

BCS’ AI and Digital in Business Life survey also found AI would have the most immediate impact this year on customer services (for example chatbots replacing human advisers).

This was followed by information technology; health and social care; publishing and broadcasting; and education.

Leaders ranked their top business priorities as cyber security (69%), AI (58%) and business process automation (45%).

Only 8% of participants told BCS their organisation has enough resources to achieve their priorities.

Cyber attacks were most likely to keep IT managers awake at night in 2024 – this result has been consistent over the last 11 years of the survey.

Rashik Parmar MBE, Chief Executive of BCS, The Chartered Institute for IT said: “AI won’t make work meaningless – it will redefine what we see as meaningful work.

“Tech professionals are far more concerned about how ‘ordinary’ AI is affecting people’s lives today, for example, assessing us for credit and invitations to job interviews, or being used by bad actors to generate fake news and influence elections. The priority right now is to ensure AI works with us, rather than waiting for a Utopia.

“To build trust in this transformational technology, everyone working in a responsible AI role should be a registered professional meeting the highest standards of ethical conduct.”

The BCS poll was carried out with over 800 IT professionals, ranging from IT Directors and Chief Information Officers to software developers, academics and engineers.


Is defensive AI the key to guarding against emerging cyber threats?


Google’s recent announcement of an artificial intelligence (AI) Cyber Defense Initiative to enhance global cybersecurity underscores the importance of defending against increasingly sophisticated and pervasive cyber threats.

And according to analysts at GlobalData, AI will play a pivotal role in collecting, processing, and neutralising threats, transforming the way organisations combat cyber risks.

Looking at AI cyber threat detection technology through the lens of innovation using GlobalData’s Technology Foresights tool reveals some compelling insights. Patent filings have surged from 387 in 2018 to 1,098 in 2023, highlighting a robust growth trajectory in AI-driven security solutions. Furthermore, the entry of 53 new companies in 2023, for a total of 239, showcases the expanding interest and investment in this critical area of technology.

Vaibhav Gundre, Project Manager of Disruptive Tech at GlobalData, said: “The ability of AI to improve threat identification, streamline the management of vulnerabilities, and enhance the efficiency of incident responses is key in addressing the continuous evolution of cyber threats. The rapid progression in the field of defensive AI is underscored by a 13% compound annual growth rate in patent applications over the last three years, reflecting a strong commitment to innovation. This trend is also indicative of the recognized importance of having formidable cyber defense systems in place, signifying substantial research and development activities aimed at overcoming new cyber threats.”
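
As a quick sanity check on those figures, compound annual growth rate (CAGR) is computed as below; the 2018-to-2023 patent numbers quoted above imply a five-year CAGR of roughly 23%, while the 13% Gundre cites covers only the most recent three years.

```latex
\mathrm{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1,
\qquad
\left(\frac{1098}{387}\right)^{1/5} - 1 \approx 0.23
```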

An analysis of GlobalData’s Disruptor Intelligence Center highlights the partnership between AIShield and DEKRA as a notable collaboration aimed at enhancing the security of AI models and systems. Through advanced training, assessment, and protection strategies, the partnership seeks to bolster cyber resilience across industries and foster trust in AI technologies.

Similarly, Darktrace’s collaboration with Cyware exemplifies a proactive approach to cybersecurity. By facilitating collaboration among security teams and sharing threat intelligence, the partnership enables organizations to mitigate risks and respond effectively to emerging cyber threats.

AI cyber threat detection finds application across diverse use cases, including threat detection in security cameras, real-time malware detection, network threat detection, anomaly detection in critical infrastructure, fraud prevention, and AI-powered surveillance systems.

Gundre concluded: “As organizations harness the power of AI cyber threat detection, they must also confront significant challenges. The rapid evolution of cyber threats, coupled with the complexity of regulatory landscapes, underscores the need for continuous innovation and collaboration. While patents and partnerships lay the foundation for robust cyber defense strategies, addressing these challenges will require a concerted effort from industry stakeholders. By staying vigilant and embracing a proactive approach, organizations can navigate the evolving cybersecurity landscape with confidence, safeguarding critical assets and preserving digital trust.”


Is generative AI the next big cyber threat for businesses?


By Robert Smith, Product Manager, Cyber Security at M247

Unless you’ve been living under a rock over the past twelve months, you will have heard all about ChatGPT by now.

Short for ‘Chat Generative Pre-Trained Transformer’, the smart chatbot exploded onto the tech scene in November last year, amassing 100 million users in its first two months to become the fastest-growing consumer application in history. Since then, it has piqued the curiosity of almost every sector – from artists and musicians to marketers and IT managers.

ChatGPT is, in many ways, the poster child for the new wave of generative AI tools taking these sectors by storm – Bing, Google’s Vertex AI and Bard, to name a few. These tools’ user-friendly interfaces, and their ability to convert even the most niche, specific prompts into everything from artwork to detailed essays, have left most of us wondering: what is next for us and, more specifically, what is next for our jobs? So much so that a report released last year found that nearly two thirds of UK workers think AI will take over more jobs than it creates.

However, while the question around AI and what it means for the future of work is certainly an important one, something that is too often overlooked in these discussions is the impact this technology is currently having on our security and safety.

The threat of ‘FraudGPT’

According to Check Point Research, the arrival of advanced AI technology had already contributed to an 8% rise in weekly cyber-attacks in the second quarter of 2023. We even asked ChatGPT if its technology is being used by cyber-criminals to target businesses. “It’s certainly possible they could attempt to use advanced language models or similar technology to assist in their activities…”, said ChatGPT.

And it was right. Just as businesses are constantly looking for new solutions to adopt, or more sophisticated tools to develop that will enhance their objectives, bad actors and cyber-criminals are doing the same. The only difference between the two is that cyber-criminals are using tools such as AI to steal your data and intercept your devices. And now we’re witnessing this in plain sight with the likes of ‘FraudGPT’ flooding the dark web.

FraudGPT is an AI-powered chatbot marketed to cyber-criminals as a tool to support the creation of malware, malicious code, phishing e-mails and many other fraudulent outputs. Offering the same user-friendly prompting as ChatGPT, FraudGPT and tools like it allow hackers to take similar shortcuts, producing convincing content to steal data and create havoc for businesses.

As with any sophisticated language model, one of FraudGPT’s biggest strengths (or threats) is its ability to produce convincing e-mails and documents, and even replicate human conversations, in order to steal data or gain access to a business’ systems. Very soon, it’s highly likely that those blatantly obvious phishing e-mails in your inbox may not be so easy to spot.

And it doesn’t stop there. More and more hackers are likely to start using these AI-powered tools across every stage of the cyber ‘kill chain’, leveraging the technology to develop malware, identify vulnerabilities and even operate their malicious attacks. There are already bots that can scan the entire internet within 24 hours for potential vulnerabilities to exploit, and these are constantly being updated. So, if AI is going to become a hacker’s best friend, businesses will need to evolve and adopt the latest technology too, in order to keep pace.

What can businesses do?

To start with, IT managers (or whoever is responsible for cyber-security within your organisation) must make it their priority to stay on top of the latest hacking methods and constantly scan for new solutions that can safeguard data.

Endpoint Detection and Response (EDR) is one great example of a robust defence businesses can put in place today. EDR uses smart behavioural analysis to monitor your data and the things you usually do on your devices, and can therefore detect even minor abnormalities in your daily activities. If an EDR system detects that an AI has launched an attack on your business, it can give your IT team a heads-up so they can form a response and resolve the issue. In fact, most cyber insurers today insist that businesses adopt EDR as a key risk control before offering cover.
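
To make ‘smart behavioural analysis’ concrete, below is a minimal, illustrative sketch of the baseline-and-deviation idea EDR tooling builds on. The per-host features (hourly process launches, outbound connections, file writes) are invented for the example, and scikit-learn’s IsolationForest stands in for a vendor’s proprietary models.

```python
# Illustrative sketch of behavioural anomaly detection (not a real EDR engine).
# Features are hypothetical: hourly per-host counts of process launches,
# outbound connections and file writes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: a week (168 hours) of "normal" activity for one host.
normal = rng.normal(loc=[40, 15, 60], scale=[5, 3, 8], size=(168, 3))

# Train an unsupervised model on the baseline behaviour.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New observations: one typical hour, one burst that might signal an attack.
new_hours = np.array([
    [42, 14, 63],     # looks like the baseline
    [400, 220, 900],  # sudden spike in processes, connections and writes
])
for row, verdict in zip(new_hours, model.predict(new_hours)):
    status = "anomalous - alert the IT team" if verdict == -1 else "normal"
    print(row, "->", status)
```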

Cyber-security providers, such as Fortinet and Microsoft, have already begun incorporating AI into their solutions, too, but making sure you have the latest machine learning and AI (not just simple, predictive AI) operating in the background to detect threats will give your business the upper hand when it comes to hackers.

And finally, educate your workforce. Although many are worried that AI will overtake us in the workplace and steal our jobs, it’s unlikely the power of human intuition will be replaced anytime soon. So, by arming your team with the latest training on AI and cyber-threats – and what to do when they suspect an AI-powered threat is happening – you can outsmart this new technology and keep the hackers at bay.

Threat Predictions for 2024: Chained AI and CaaS operations give attackers more ‘easy’ buttons 


With the growth of Cybercrime-as-a-Service (CaaS) operations and the advent of generative AI, threat actors have more “easy” buttons at their fingertips to assist with carrying out attacks than ever before. By relying on the growing capabilities in their respective toolboxes, adversaries will increase the sophistication of their activities. They’ll launch more targeted and stealthier hacks designed to evade robust security controls, as well as become more agile by making each tactic in the attack cycle more efficient.

In its 2024 threat predictions report, the FortiGuard Labs team looks at a new era of advanced cybercrime, examines how AI is changing the (attack) game, shares fresh threat trends to watch for this year and beyond, and offers advice on how organisations everywhere can enhance their collective resilience against an evolving threat landscape…

The Evolution of Old Favorites

We’ve been observing and discussing many fan-favorite attack tactics for years, and covered these topics in past reports. The “classics” aren’t going away—instead, they’re evolving and advancing as attackers gain access to new resources. For example, when it comes to advanced persistent cybercrime, we anticipate more activity among a growing number of Advanced Persistent Threat (APT) groups. In addition to the evolution of APT operations, we predict that cybercrime groups, in general, will diversify their targets and playbooks, focusing on more sophisticated and disruptive attacks, and setting their sights on denial of service and extortion.

Cybercrime “turf wars” continue, with multiple attack groups homing in on the same targets and deploying ransomware variants, often within 24 hours or less. In fact, we’ve observed such a rise in this type of activity that the FBI issued a warning to organizations about it earlier this year.

And let’s not forget about the evolution of generative AI. This weaponisation of AI is adding fuel to an already raging fire, giving attackers an easy means of enhancing many stages of their attacks. As we’ve predicted in the past, we’re seeing cybercriminals increasingly use AI to support malicious activities in new ways, ranging from thwarting the detection of social engineering to mimicking human behavior.

Fresh Threat Trends to Watch for in 2024 and Beyond

While cybercriminals will always rely on tried-and-true tactics and techniques to achieve a quick payday, today’s attackers now have a growing number of tools available to them to assist with attack execution. As cybercrime evolves, we anticipate seeing several fresh trends emerge in 2024 and beyond. Here’s a glimpse of what we expect.

Give me that big (playbook) energy: Over the past few years, ransomware attacks worldwide have skyrocketed, making every organisation, regardless of size or industry, a target. Yet, as an increasing number of cybercriminals launch ransomware attacks to attain a lucrative payday, cybercrime groups are quickly exhausting smaller, easier-to-hack targets. Looking ahead, we predict attackers will take a “go big or go home” approach, with adversaries turning their focus to critical industries—such as healthcare, finance, transportation, and utilities—that, if hacked, would have a sizeable adverse impact on society and make for a more substantial payday for the attacker. They’ll also expand their playbooks, making their activities more personal, aggressive, and destructive in nature.

It’s a new day for zero days: As organisations expand the number of platforms, applications, and technologies they rely on for daily business operations, cybercriminals have unique opportunities to uncover and exploit software vulnerabilities. We’ve observed a record number of zero-days and new Common Vulnerabilities and Exposures (CVEs) emerge in 2023, and that count is still rising. Given how valuable zero days can be for attackers, we expect to see zero-day brokers—cybercrime groups selling zero-days on the dark web to multiple buyers—emerge among the CaaS community. N-days will continue to pose significant risks for organizations as well.

Playing the inside game: Many organisations are leveling up their security controls and adopting new technologies and processes to strengthen their defenses. These enhanced controls make it more difficult for attackers to infiltrate a network externally, so cybercriminals must find new ways to reach their targets. Given this shift, we predict that attackers will continue to shift left with their tactics, reconnaissance, and weaponisation, with groups beginning to recruit from inside target organisations for initial access purposes.

Ushering in “we the people” attacks: Looking ahead, we expect to see attackers take advantage of more geopolitical happenings and event-driven opportunities, such as the 2024 U.S. elections and the Paris 2024 games. While adversaries have always targeted major events, cybercriminals now have new tools at their disposal—generative AI in particular—to support their activities.

Narrowing the TTP playing field: Attackers will inevitably continue to expand the collection of tactics, techniques, and procedures (TTPs) they use to compromise their targets. Yet defenders can gain an advantage by finding ways to disrupt those activities. While most of the day-to-day work done by cybersecurity defenders is related to blocking indicators of compromise, there’s great value in taking a closer look at the TTPs attackers regularly use, which will help narrow the playing field and find potential “choke points on the chess board.”
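
One concrete way to narrow the playing field is to tally which techniques recur across observed incidents and concentrate controls there. Below is a minimal sketch using invented incident data with MITRE ATT&CK-style technique IDs; a real programme would work from a full incident corpus.

```python
# Tallying attacker techniques across incidents to find common "choke points".
# The incident data below is invented for illustration.
from collections import Counter

incidents = [
    ["T1566", "T1059", "T1486"],  # phishing -> scripting -> ransomware
    ["T1566", "T1078", "T1486"],  # phishing -> valid accounts -> ransomware
    ["T1190", "T1059", "T1486"],  # public-app exploit -> scripting -> ransomware
]

counts = Counter(t for incident in incidents for t in incident)
for technique, n in counts.most_common(3):
    print(f"{technique}: seen in {n} incidents")
# Techniques that recur across incidents (here T1486, T1566, T1059) are the
# choke points where a single detection or control disrupts many attacks.
```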

Making space for more 5G attacks: With access to an ever-increasing array of connected technologies, cybercriminals will inevitably find new opportunities for compromise. With more devices coming online every day, we anticipate that cybercriminals will take greater advantage of connected attacks in the future. A successful attack against 5G infrastructure could easily disrupt critical industries such as oil and gas, transportation, public safety, finance, and healthcare.

Navigating a New Era of Cybercrime

Cybercrime impacts everyone, and the ramifications of a breach are often far-reaching. However, threat actors don’t have to have the upper hand. Our security community can take many actions to better anticipate cybercriminals’ next moves and disrupt their activities: collaborating across the public and private sectors to share threat intelligence, adopting standardized measures for incident reporting, and more.

Organisations also have a vital role to play in disrupting cybercrime. This starts with creating a culture of cyber resilience—making cybersecurity everyone’s job—by implementing ongoing initiatives such as enterprise-wide cybersecurity education programs and more focused activities like tabletop exercises for executives. Finding ways to shrink the cybersecurity skills gap, such as tapping into new talent pools to fill open roles, can help enterprises navigate the combination of overworked IT and security staff as well as the growing threat landscape. And threat sharing will only become more important in the future, as this will help enable the quick mobilization of protections.


Empowering cybersecurity with AI: A vision for the UK’s commercial and public sectors?


In the age of digital transformation, cybersecurity threats are becoming increasingly sophisticated, challenging the traditional security measures employed by many UK institutions. Enter Artificial Intelligence (AI) – a game-changer in the realm of cybersecurity for both the commercial and public sectors. AI’s advanced algorithms and predictive analytics offer innovative ways to bolster security infrastructure, making it a valuable ally for cybersecurity professionals…

  1. Proactive Threat Detection:
    • Function: By continuously analysing vast amounts of data, AI can identify patterns and anomalies that might indicate a security breach or an attempted attack.
    • Benefit: Rather than reacting to threats once they’ve occurred, institutions can prevent them, ensuring uninterrupted services and safeguarding sensitive data.
  2. Phishing Attack Prevention:
    • Function: AI can evaluate emails and online communications in real-time, spotting the subtle signs of phishing attempts that might be overlooked by traditional spam filters.
    • Benefit: This significantly reduces the risk of employees unknowingly granting access to unauthorised entities (a minimal classifier sketch follows this list).
  3. Automated Incident Response:
    • Function: When a threat is detected, AI-driven systems can instantly take corrective actions, such as isolating affected devices or blocking malicious IP addresses.
    • Benefit: Swift automated responses ensure minimal damage, even when incidents occur outside regular monitoring hours.
  4. Enhanced User Authentication:
    • Function: Incorporating AI into biometric verification systems, such as facial or voice recognition, results in more accurate user identification.
    • Benefit: This curtails unauthorised access and adds an additional layer of security beyond passwords.
  5. Behavioural Analytics:
    • Function: AI algorithms can learn and monitor the typical behaviour patterns of network users. Any deviation from this pattern, such as accessing sensitive data at odd hours, raises an alert.
    • Benefit: This helps detect insider threats or compromised user accounts more effectively.
  6. Predictive Analysis:
    • Function: AI models can forecast future threat landscapes by analysing current cyberattack trends and patterns.
    • Benefit: Organisations can prepare and evolve their cybersecurity strategies in anticipation of emerging threats.
  7. Vulnerability Management:
    • Function: AI can scan systems to identify weak points or vulnerabilities, prioritising them based on potential impact.
    • Benefit: Cybersecurity professionals can address the most critical vulnerabilities first, ensuring optimal resource allocation.
  8. Natural Language Processing (NLP):
    • Function: AI-powered NLP can scan and interpret human language in documents, emails, and online communications to detect potential threats or sensitive information leaks.
    • Benefit: It provides an additional layer of scrutiny, ensuring data protection and compliance.
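
To illustrate the pattern behind points 2 and 8, the sketch below trains a toy text classifier to separate phishing-style messages from benign ones, using scikit-learn’s TF-IDF vectoriser and logistic regression on a handful of invented messages. Production systems train on millions of labelled e-mails and combine many more signals.

```python
# Toy phishing detector: TF-IDF features + logistic regression.
# The training messages are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your account now or it will be suspended",
    "Click this link to claim your prize and confirm your password",
    "Your invoice is overdue, pay immediately via this portal",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
    "Lunch on Friday to welcome the new starter?",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

test = "Please confirm your password using the secure link below"
prob = clf.predict_proba([test])[0][1]  # probability of the phishing class
print(f"Phishing probability: {prob:.2f}")
```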

By harnessing the capabilities of AI, the UK’s commercial and public sectors can look forward to a more robust cybersecurity posture. Not only does AI enhance threat detection and response, but its predictive capabilities ensure that organisations are always a step ahead of potential cyber adversaries. As cyber threats continue to evolve, so too will AI’s role in countering them, underscoring its pivotal role in the future of cybersecurity.

Learn more about how AI can support your cyber defences at the Security IT Summit.

Where does GenAI fit into the data analytics landscape?


Recently, there has been a lot of interest and hype around Generative Artificial Intelligence (GenAI), such as ChatGPT and Bard. While these applications are more geared towards the consumer, there is a clear uptick in businesses wondering where this technology can fit into their corporate strategy. James Gornall, Cloud Architect Lead, CTS, explains the vital difference between headline-grabbing consumer tools and proven, enterprise-level GenAI…

Understanding AI

Given the recent hype, you’d be forgiven for thinking that AI is a new capability, but in fact businesses have been using some form of AI for years – even if they don’t quite realise it.

One of the many applications of AI in business today is predictive analytics. By analysing datasets to identify patterns and predict future outcomes, businesses can more accurately forecast sales, manage inventory, detect fraud and plan resource requirements.
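
As a minimal illustration of that forecasting idea, the sketch below fits a linear trend to twelve months of invented sales figures with NumPy and projects the next month; real predictive-analytics pipelines add seasonality, richer features and proper validation.

```python
# Minimal trend-based sales forecast; the monthly figures are invented.
import numpy as np

months = np.arange(12)  # 0..11 = the last twelve months
sales = np.array([100, 104, 103, 110, 112, 115,
                  118, 117, 124, 126, 130, 133], dtype=float)

# Fit a straight line (degree-1 polynomial) to the history.
slope, intercept = np.polyfit(months, sales, deg=1)

next_month = 12
forecast = slope * next_month + intercept
print(f"Trend: {slope:.2f} units/month; next-month forecast: {forecast:.1f}")
```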

Using data visualisation tools to make complex data simpler to understand and more accessible, decision-makers can easily spot trends, correlations and outliers, leading them to make better-informed data-driven decisions, faster.

Another common application of AI is enhanced customer service, using AI-powered chatbots and virtual assistants that meet customers’ digital expectations by providing instant support when needed.

So what’s new?

What is changing with the commercialisation of GenAI is the ability to create entirely new datasets based on what has been learnt previously. GenAI can draw on the millions of images and documents it has ingested to write text and create imagery at a scale never seen before. This is hugely exciting for organisations’ creative teams, providing unprecedented opportunities to create new content for ideation, testing and learning at scale. With this, businesses can rapidly generate unique, varied content to support marketing and brand.

The technology can use data on customer behaviour to deliver high-quality, personalised shopping experiences. For example, retailers can provide unique catalogues of products tailored to an individual’s preferences, creating a totally immersive, personalised experience. In addition to enhancing customer predictions, GenAI can provide personalised recommendations based on past shopping choices and offer human-like interactions to enhance customer satisfaction.

Furthermore, GenAI supports employees by automating a variety of tasks, including customer service, recommendation, data analysis, and inventory management. In turn, this frees up employees to focus on more strategic tasks.

Controlling AI

The latest generation of consumer GenAI tools has transformed AI awareness at every level of business and society. In the process, they have also done a pretty good job of demonstrating the problems that quickly arise when these tools are misused: users pasting confidential code into ChatGPT, unaware that they are leaking valuable Intellectual Property (IP) that could surface in the chatbot’s future responses to other people around the world, or lawyers fined for citing fictitious ChatGPT-generated research in a legal case.

While this latest iteration of consumer GenAI tools is raising awareness of the technology’s capabilities, there is a lack of education around how it is best used. Companies need to consider the ways employees may be using GenAI that could jeopardise corporate data, resources and reputation.

With GenAI set to accelerate business transformation, AI and analytics are rightly dominating corporate debate, but as companies adopt GenAI to work alongside employees, it is imperative that they assess the risks and rewards of cloud-based AI technologies as quickly as possible.

Trusted Data Resources

One of the concerns for businesses to consider is the quality and accuracy of the data provided by GenAI tools. This is why it is so important to distinguish between the headline-grabbing consumer tools and enterprise-grade alternatives that have been in place for several years.

Business specific language is key, especially in jargon heavy markets, so it is essential that the GenAI tool being used is trained on industry specific language models.

Security is also vital. Commercial tools allow a business to set up its own local AI environment where information is stored inside the virtual safety perimeter. This environment can be tailored with a business’ documentation, knowledge bases and inventories, so the AI can deliver value specific to that organisation.
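
As a toy-scale illustration of how such an environment can ground answers in a business’ own documentation, the sketch below embeds a few internal policy snippets and retrieves the most relevant one for a user’s question. The embed function is a stand-in (a hashed bag-of-words); real deployments use a provider’s embedding model inside the secured perimeter.

```python
# Toy document retrieval over an internal knowledge base.
# embed() is a stand-in for a real embedding model -- illustration only.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hashed bag-of-words embedding, normalised to unit length."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

knowledge_base = [
    "Expenses over 50 GBP require line-manager approval.",
    "VPN access is mandatory when working from home.",
    "Customer data must never be pasted into external tools.",
]
doc_vectors = np.stack([embed(d) for d in knowledge_base])

query = "Can I use an outside chatbot on customer records?"
scores = doc_vectors @ embed(query)  # cosine similarity (unit vectors)
best = knowledge_base[int(np.argmax(scores))]
print("Most relevant policy:", best)
```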

While these tools are hugely intuitive, it is also important that people understand how to use them effectively.

Providing structured prompts and being specific in the way questions are asked is one thing, but users need to remember to think critically rather than simply accept the results at face value. A sceptical viewpoint is a prerequisite – at least initially. The quality of GenAI results will improve over time as the technology evolves and people learn how to feed valid data in, so they get valid data out. However, for the time being people need to take the results with a pinch of salt.
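
A minimal sketch of that ‘structured prompt plus critical check’ habit is below. The call_model function is a hypothetical stand-in for whichever approved GenAI endpoint a business uses; the point is the shape of the prompt and the refusal to accept unreviewed output, not any particular vendor’s API.

```python
# Structured prompting with a built-in demand for verifiable output.
# call_model() is a hypothetical stub -- wire it to your approved GenAI client.

PROMPT_TEMPLATE = """Role: You are an analyst for a UK retail business.
Task: {task}
Constraints:
- Use only the context provided below; do not invent figures.
- Flag any answer you are not confident in as UNCERTAIN.
Context:
{context}
"""

def build_prompt(task: str, context: str) -> str:
    return PROMPT_TEMPLATE.format(task=task, context=context)

def call_model(prompt: str) -> str:
    # Stand-in response so the sketch runs without a real endpoint.
    return "Estimated uplift: UNCERTAIN (no Q3 data in the context)."

def ask_with_review(task: str, context: str) -> str:
    answer = call_model(build_prompt(task, context))
    # Critical-thinking step: never accept unflagged claims at face value.
    if "UNCERTAIN" in answer:
        answer += "\n[Routed to a human reviewer before use.]"
    return answer

if __name__ == "__main__":
    print(ask_with_review(
        task="Estimate the sales uplift from the loyalty scheme.",
        context="Loyalty scheme launched in Q2; Q2 sales up 4% year on year.",
    ))
```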

It is also essential to consider the ethical uses of AI.

Avoiding bias is a core component of any Environmental, Social and Governance (ESG) policy. Unfortunately, bias can be inherent in AI algorithms, so companies need to be careful, especially when using consumer-level GenAI tools.

For example, finance companies need to avoid algorithms producing biased outcomes against customers wanting to access certain products, or offering different interest rates based on discriminatory data.

Similarly, medical organisations need to ensure consistent care across all demographics, especially when different ethnic groups experience varying risk factors for some diseases.

Conclusion

AI is delivering a new level of data democratisation, allowing individuals across businesses to easily access complex analytics that has, until now, been the preserve of data scientists. The increase in awareness and interest has also accelerated investment, transforming the natural language capabilities of chatbots, for example. The barrier to entry has been reduced, allowing companies to innovate and create business specific use cases.

But good business and data principles must still apply. While it is fantastic that companies are now actively exploring the transformative opportunities on offer, they need to take a step back and understand what GenAI means to their business. Before rushing to meet shareholder expectations for AI investment to achieve competitive advantage, businesses must first ask themselves, how can we make the most of GenAI in the most secure and impactful way?