
How to boost employee cybersecurity awareness

By Stuart O'Brien

In today’s digital-first landscape, the human element remains one of the most significant vulnerabilities in any organisation’s cybersecurity posture. For IT and cybersecurity professionals in the UK, fostering a culture of cybersecurity awareness among employees is crucial. However, finding the right partners and solutions to facilitate this is equally paramount. Here are the key considerations, based on input from Security IT Summit delegates and suppliers, for ensuring effective employee cybersecurity training and awareness…

  1. Comprehensive Content:
    • Relevance: Training content should be relevant to the organisation’s specific threats and industry sector.
    • Up-to-date Material: The cyber landscape evolves rapidly; training materials should reflect the most recent threat intelligence.
  2. Engaging Delivery Methods:
    • Interactive Modules: Interactive e-learning platforms can boost engagement and retention.
    • Real-life Scenarios: Simulated phishing campaigns or breach scenarios allow employees to practice their response in a controlled environment.
  3. Continuous Learning:
    • Regular Updates: Cyber threats change continuously; regular training refreshers are vital.
    • Newsletters and Bulletins: Monthly or weekly cyber updates can keep security top-of-mind for employees.
  4. Assessment and Feedback:
    • Knowledge Checks: Quizzes or tests can gauge employee understanding and highlight areas that need further training.
    • Feedback Mechanisms: Ensure employees have a platform to provide feedback or ask questions about the training.
  5. Scalability and Customisation:
    • Adaptable Solutions: The chosen training solution should be scalable to accommodate organisation growth.
    • Tailored Training: Content should be customisable to address the unique risks and policies of the organisation.
  6. Certifications and Compliance:
    • Industry Standards: Training programs should align with recognised industry standards and best practices.
    • Record Keeping: For compliance purposes, ensure the solution provides detailed records of employee training and completion.
  7. Engagement and Culture:
    • Gamification: Incorporating game elements can make training more engaging and competitive.
    • Leadership Buy-in: Executive endorsement can drive a culture where cybersecurity is everyone’s responsibility.
  8. Partner Reputation and Expertise:
    • Track Record: Consider partners with a proven track record in delivering effective cybersecurity awareness training.
    • Continuous Development: Partners should invest in updating and improving their training solutions regularly.
  9. Integration Capabilities:
    • Learning Management System (LMS) Integration: Ensure the training platform can integrate with existing LMS or HR systems for streamlined management.
    • Multi-device Accessibility: Training should be accessible across various devices, including mobiles and tablets, catering to a modern workforce.
  10. Budget and Return on Investment (ROI):
    • Cost Analysis: While budget is a factor, it’s vital to weigh the costs against the potential losses from a cyber breach.
    • Measurable Outcomes: Choose solutions that offer measurable outcomes to gauge ROI effectively (a simple example of tracking such outcomes follows this list).
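
To make the “Measurable Outcomes” point concrete, the short Python sketch below shows one way an internal team might track results from simulated phishing campaigns over time. It is purely illustrative: the field names and data are invented, not taken from any particular awareness-training platform.

```python
from collections import defaultdict

# Each record represents one simulated phishing email sent to an employee.
# Field names are illustrative; real awareness platforms export richer data.
results = [
    {"employee": "a.khan",  "campaign": "2024-Q1", "clicked": True,  "reported": False},
    {"employee": "a.khan",  "campaign": "2024-Q2", "clicked": False, "reported": True},
    {"employee": "j.smith", "campaign": "2024-Q1", "clicked": False, "reported": True},
    {"employee": "j.smith", "campaign": "2024-Q2", "clicked": False, "reported": True},
]

def campaign_metrics(records):
    """Return click rate and report rate per campaign."""
    totals = defaultdict(lambda: {"sent": 0, "clicked": 0, "reported": 0})
    for r in records:
        c = totals[r["campaign"]]
        c["sent"] += 1
        c["clicked"] += r["clicked"]
        c["reported"] += r["reported"]
    return {
        camp: {"click_rate": c["clicked"] / c["sent"], "report_rate": c["reported"] / c["sent"]}
        for camp, c in totals.items()
    }

for campaign, m in sorted(campaign_metrics(results).items()):
    print(f"{campaign}: click rate {m['click_rate']:.0%}, report rate {m['report_rate']:.0%}")
```

A falling click rate and a rising report rate across successive campaigns is one simple, defensible way to evidence return on a training investment.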

As cyber threats continue to evolve, so too must our defence strategies. Ensuring employees are knowledgeable and vigilant against cyber risks is a foundational step. By selecting the right partners and solutions, organisations can significantly bolster their cybersecurity resilience, turning their human element from a potential vulnerability into a formidable line of defence.

Are you looking to boost IT security awareness in your business? The Security IT Summit can help!


For data privacy, access is as vital as security 


By Jaeger Glucina, MD and Chief of Staff, Luminance 

If you’re in the UK, you could hardly have missed the story this summer about Nigel Farage’s public showdown with the specialist bank Coutts. What started as an apparent complaint about a lack of service being provided to Farage quickly became a significant political talking point and, ultimately, resulted in the CEO of the NatWest-owned bank resigning his position.

However, if your work sees you taking responsibility for security, compliance, and business continuity, you may need to take stock of how this story highlights an approaching risk factor that all companies need to be aware of. While the details of Coutts’ decision to drop Farage as a customer were splashed across the newspapers’ front pages, the way in which Farage obtained that information remained very much a secondary story.

Those details were obtained when Farage lodged a data subject access request, or ‘DSAR’, with Coutts. This legal mechanism, introduced as part of the EU’s General Data Protection Regulation, compels organisations to identify, compile, and share every piece of information that they hold relating to an individual. This could range from basic data like names and addresses in a customer database to internal email or text conversations pertaining to them.

The purpose, as with analogous legislation like the California Consumer Privacy Act, is to tip the scales of power around matters of data and privacy back in favour of the consumer. To achieve that, there is real regulatory muscle to ensure that DSARs are acted on. Organisations must respond within one month of receipt, and non-compliance can carry a fine of up to 4% of the business’s annual global turnover.

The reputational damage that a DSAR could trigger for some businesses should, by now, be readily apparent. Even benign requests can pose a serious challenge to an organisation’s legal resource.

While the potentially punitive results of non-compliance make DSARs a priority issue, mounting a response is not as easy as you might think. The breadth of the request demands an exhaustive and wide-ranging search through information systems, including records of Slack messages and video calls as well as emails, documents, spreadsheets, and databases. At the same time, of course, our usage of such systems is ever-expanding. Every new productivity tool in an organisation’s arsenal represents a potential landing point for sensitive data which needs to be collated, analysed and appropriately redacted in a DSAR process.

You can imagine that for legal teams this is an onerous workload which saps capacity from higher-value areas of work that drive business growth. Worse, it is a highly labour-intensive, repetitive process which few legal professionals would ideally choose to engage in. Many external firms won’t take DSAR cases on, and if one can be found, the fees will likely run to tens of thousands of pounds.

All of that adds up to a growing need for a new kind of data discoverability: not just a way for businesses to oversee data siloes, but to analyse and draw from them in a highly specific way which meets strict legal criteria.

Clearly, the repetitive and precise nature of the task makes it a perfect candidate for automation. With AI, teams can rapidly cull datasets down to just those items which are likely to be relevant before identifying any personal data which needs to be excluded or redacted. In one recent rollout of the technology, this saw the UK-based technology scale-up proSapient halve the time taken to respond to a DSAR and avoid £20k in costs, while maintaining the robust level of detail which GDPR compliance demands.
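
Luminance’s own platform is proprietary, but the two basic steps described above (narrowing a corpus down to documents that mention the data subject, then redacting other obvious personal data before disclosure) can be pictured with a deliberately simple Python sketch. The regexes and sample documents below are invented for illustration; real DSAR tooling relies on ML-based entity recognition, connectors into email and chat systems, and human review.

```python
import re

# Illustrative only: (1) keep documents that mention the data subject,
# (2) mask other obvious personal data before disclosure. Patterns are simplistic.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b0\d{2,4}\s?\d{3,4}\s?\d{3,4}\b")  # rough UK-style numbers

def relevant_documents(documents, subject_names):
    """Return only documents that mention the data subject by any known name."""
    names = [n.lower() for n in subject_names]
    return [d for d in documents if any(n in d.lower() for n in names)]

def redact_contact_details(text):
    """Mask email addresses and phone numbers prior to human review."""
    text = EMAIL_RE.sub("[REDACTED EMAIL]", text)
    return PHONE_RE.sub("[REDACTED PHONE]", text)

docs = [
    "Meeting note: account review for Jane Doe, contact j.doe@bank.example, 020 7946 0958.",
    "Unrelated supplier invoice, contact sales@vendor.example.",
]

for doc in relevant_documents(docs, ["Jane Doe", "J. Doe"]):
    print(redact_contact_details(doc))
```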

Any data professional out there knows that a proliferation of personal data residing in systems is an almost inevitable consequence of our modern working practices: digital tools underpin our productivity, and information about people, whether they are customers, clients, or employees, is relevant to almost any process.

Anecdotally, we know that whenever a story involving DSARs hits the headlines, businesses experience a spike of requests. The GDPR may now be half a decade old, but awareness of how it can be leveraged will only continue to grow – far past the capacity of existing tools and team structures to cope.

That means that empowering legal teams with the tools they need to manage this new data reality is of paramount importance, both to safeguard the organisation’s future resilience and continuity, and to enable them to focus on delivering the levels of productivity expected of them.

Cybersecurity Awareness Month: We asked the experts about this year’s priorities

By Stuart O'Brien
What are the key considerations, threats and opportunities for IT security professionals in 2023? To mark Cybersecurity Awareness Month 2023, we polled some leading experts for their thoughts…
Milind Mohile, Vice President, Product Management, Citrix
“Hybrid work is still on the rise in 2023, a trend which is only increasing complexity for security teams, with geographically separate workforces, using a variety of managed and unmanaged devices, over the internet, accessing a combination of enterprise-hosted and SaaS apps. Traditional security measures are no longer enough to safeguard a business’s sensitive applications and data, therefore businesses must truly understand how to implement a comprehensive Zero Trust Application Access (ZTAA) framework.
ZTAA goes beyond Zero Trust Network Access (ZTNA) to encompass not just networking, but also application usage and activities even after access has been granted. Unlike traditional security models that rely on perimeter defences with “point-in-time” security controls, and policy engines that follow binary “grant/deny” rules, a ZTAA model combines the principles of “never trust, always verify” with granular access and action controls that can be dialled up and down based on circumstances, telemetry or behaviours. This constant vigilance and fine-grained control is where ZTAA truly shines.
ZTAA will evolve rapidly as solutions incorporate AI to aid in continuously monitoring user behaviours and determining the right responses to suspicious activity. As such, ZTAA enables unrivalled protection against unauthorised access and security breaches, as well as unintentional risky behaviour, making it essential for businesses with hybrid workforces, where users expect to be able to log in from anywhere in the world.”
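
As an editorial illustration of the “dial up and down” idea Mohile describes, access under a ZTAA-style model can be thought of as a policy function that re-scores every action against current telemetry rather than issuing a one-off grant at login. The Python sketch below is a generic toy under assumed risk signals and thresholds; it is not Citrix’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    """Assumed signals; a real ZTAA product draws on far richer device and behaviour data."""
    device_managed: bool
    geo_unusual: bool
    impossible_travel: bool
    failed_logins_last_hour: int

def risk_score(t: Telemetry) -> int:
    score = 0 if t.device_managed else 30
    score += 20 if t.geo_unusual else 0
    score += 40 if t.impossible_travel else 0
    score += min(t.failed_logins_last_hour * 5, 20)
    return score

def decide(action: str, t: Telemetry) -> str:
    """Evaluated on every action, not just at login: controls dial up as risk rises."""
    score = risk_score(t)
    if score >= 60:
        return "deny"
    if score >= 30:
        # Permit low-risk actions, challenge anything riskier (e.g. bulk download).
        return "allow" if action in {"view", "search"} else "step-up-auth"
    return "allow"

print(decide("download", Telemetry(device_managed=False, geo_unusual=True,
                                   impossible_travel=False, failed_logins_last_hour=1)))
```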
Matt Tuson, General Manager, EMEA, LogicMonitor
“Over the last two decades, the field of cybersecurity defence has flourished into an advanced, diverse field. However, I think that we will soon see a real evolutionary step take place, which takes us beyond just manning the barricades against digital foes. Businesses are learning that, regardless of whether downtime comes from adversarial attacks or internal technological failures, the bottom-line impact is much the same, and what really matters is getting back to a state of health as quickly and smoothly as possible.
A digital immune system (DIS) approach, built around a mindset which is more agnostic as to the source of problems and more unified in its focus on recovery, will come into focus as a better way of organising teams and technology to create valuable outcomes. The good news for those who have spent years building cybersecurity expertise is that this change will put them closer to the heart of business value. Everything we have learned about resilient systems, designed redundancy, and human psychology will become relevant to business thinking more broadly. Together with more unified data practices and AI tools to action that data, the digital immune system is going to shift the goalposts from the well-defended enterprise to the self-healing enterprise.”
Duncan Bradley, Director of Customer Engagement, UKI Cyber Resiliency Practice, Kyndryl
“The last two decades have witnessed consistent evolution in both how we do cybersecurity and the kinds of risk that cybersecurity seeks to mitigate. The most important lesson emerging in this space right now, though, is really a perspective shift around what cybersecurity is for.
“For most of IT history we have spoken of defence, prevention, and avoidance, building a suite of tools and tactics to stop bad outcomes. We have been successful and made it very difficult to break into organisations, so bad actors are now compromising organisations’ user accounts with increasingly sophisticated, targeted social engineering attacks, and the growing use of AI techniques only serves to increase the challenge of detection. Going forward, that conversation is going to be re-oriented around minimising damage and recovering quickly and seamlessly from it. Whether through criminal activity, human error, or natural disaster, breaches and outages happen. The most successful businesses in such moments will be those that have invested in resilience strategies which are agnostic about the source of damage and laser-focused on returning to operational status. That demands a holistic approach where recovering data and reinstating services is baked in at every level, just as something like authorising access is today.
The cybersecurity community has developed very mature methodologies for integrating the human and technological aspects of protecting against attack. In twenty years’ time, resilience will be just as embedded in what we do.”
“Cybersecurity Awareness Month serves as yet another reminder of the importance of protecting data in our increasingly digitalised world. AI will be on the agenda, as the recent explosion of generalist technologies and data-scraping tools make data more accessible than ever.
For many businesses, data privacy and security represent a minefield. Whether it’s mitigating the risk of employees exposing sensitive data to GPT-based tools or providing rapid responses to personal information requests, the data privacy challenges for business leaders today are wide-ranging. However, the reality is that compliance isn’t optional, and many are finding themselves on the wrong side of the data privacy coin.
And when it comes to compliance, it’s always going to be more difficult for smaller businesses and start-ups. They cannot afford to take the “get fined, pay up” approach of industry giants. This is why we need to be aware of the benefits of AI as much as its potential risks. AI-driven automation can play a key role in helping SMEs or overburdened legal departments understand, centralise, and analyse their enterprise data, ensuring they keep up with what is an increasingly complex and volatile regulatory landscape. The future of data security depends on our collective ability to adapt – and you can be sure that AI will be at the forefront of enabling businesses to achieve data-driven insights into compliance data, automate compliance tasks and mitigate risk.”
Karl Schorn, Vice President of Professional Services at Systal
“Cybercriminals are using AI and machine learning to develop more effective attacks, such as automated phishing campaigns and AI-driven malware. As technology evolves, so do the attack vectors. Emerging technologies like quantum computing and 5G networks bring new security concerns. This, combined with a shortage of skilled personnel and the need to maintain legacy systems and infrastructure, is stretching resources as more data and services move to the cloud – further pressing the need to protect a wider attack surface with fewer resources and skills against determined and developing adversaries.
Addressing these challenges requires a multi-faceted approach that includes technological solutions, strong policies and regulations, employee education, and collaboration among governments, industries, and security experts. Cybersecurity is an ongoing process, and organizations must remain adaptive and proactive in the face of evolving threats.”
John Linford, Forum Director, The Open Group Security & Open Trusted Technology (OTTF)
“It now seems fair to describe the continuing rise of cyber risk as inexorable. Not a week goes by without an analyst or research report announcing a new statistic about the increasing rate of attacks, the diversification of methods, or the growing financial losses being caused.
This means that it’s no longer feasible for organizations to consider any elements of the service topology as ‘trusted’. Rather than assuming any device on a network must have passed a security checkpoint and therefore can be trusted, organizations should be looking to models which secure the data and assets those networks are there to carry, requiring continuous verification of trustworthiness in order to ensure computer security. And Zero Trust ensures computer security for users, data/information, applications, APIs, devices, networks, cloud, etc., wherever they are – instead of forcing a “secure” network within a company.
By assuming every action is potentially malicious and performing security checks on an ongoing, case-by-case basis, Zero Trust reduces successful attacks and protects organizations in the event of a breach as other data and assets remain secure, rather than being accessible by an attacker. To implement Zero Trust successfully and make proactive mitigation of cyber threats commonplace, the industry must establish standards and best practices for Zero Trust, which will also be a critical component of cybersecurity awareness.”
Charles Southwood, Regional Vice President and General Manager, UK, Denodo
“The digital landscape is in a constant state of evolution, and along with it, the sophistication of cyber threats continues to grow. These threats take on various forms, ranging from phishing attacks and malware infections to data breaches that can compromise sensitive information. For businesses, safeguarding data and systems must be a number one priority.
While data holds the promise of transforming operations and propelling businesses ahead of the competition, when not adequately protected, it can become a double-edged sword, especially in our current AI-powered landscape. Attacks that utilise this technology can automate and enhance the sophistication of threats, making it more vital than ever to stay ahead of the curve.
Implementing strong authentication methods, encrypting sensitive data, and keeping software and systems up to date are fundamental steps in safeguarding your digital assets. Additionally, having a well-defined incident response plan and regularly assessing the cybersecurity practices of third-party vendors and partners can strengthen the overall security posture.
Cybersecurity isn’t a one-time effort; it’s an ongoing commitment. By investing in robust cybersecurity measures, you not only protect your business but also enhance the trust of your clients and partners. Stay vigilant, stay secure.”

Are you ready for next month’s Security IT Summit? Here’s everything you need to know…

By Stuart O'Brien

The Security IT Summit takes place in Manchester in just under four weeks’ time – it’s a unique opportunity for you to meet with the UK’s leading cybersecurity solutions providers, plus peers from some of the biggest organisations.

9th November 2023

Radisson Blu Hotel Manchester Airport

Top areas covered: The suppliers who attend can help you with your upcoming projects and cover: Access Control, Identity Access Management, UK Cyber Strategy, Incident Response, Penetration Testing, Risk Management, Artificial Intelligence, Employee Security Access, Password Management and much more…

Your complimentary guest pass includes:

– An itinerary, designed by you, of pre-qualified one-to-one meetings with solution providers

– A seat at the industry seminar sessions

– Lunch and refreshments throughout

– Networking breaks to optimise your opportunity to make new connections

Click here to secure your free place via our short booking form, or you can recommend a colleague to attend.

For more information, feel free to contact us.

These are the solutions cybersecurity professionals need in 2023/24

By Stuart O'Brien

Access Control, Employee Security Awareness and UK Cyber Strategy are topping the list of solutions the UK’s cybersecurity professionals are sourcing for 2023/24, according to our exclusive research.

The findings, revealed in the run-up to the Security IT Summit taking place on November 9th in Manchester, are based on delegate requirements.

Delegates registering to attend are asked which solutions they need to invest in during 2023/24 and beyond.

Penetration Testing and AI/Machine Learning rounded out the Top 5.

Top 10 technologies being sourced by Security IT Summit delegates 2023/24:

Access Control

Employee Security Awareness

UK Cyber Strategy

Penetration Testing

AI/Machine Learning

Application Security

ID Access Management

Incident Response

Mobile Security

Risk Management

Sarah Beall, Managing Director at Forum Events & Media, said: “The way we match buyers and suppliers at the Security IT Summit gives us a unique insight into the types of products and services the industry is looking for right now. Not only does it mean we can deliver a highly-targeted B2B event with proven outcomes for all attendees, but we can deliver valuable insights into how the market is developing at what is a hugely exciting time for all stakeholders.”

To find out more about the Security IT Summit, visit https://securityitsummit.co.uk

For more information about the buying trends data and the Security IT Summit, contact Jennie Lane on 01992 374098 | j.lane@forumevents.co.uk

Empowering cybersecurity with AI: A vision for the UK’s commercial and public sectors?

By Stuart O'Brien

In the age of digital transformation, cybersecurity threats are becoming increasingly sophisticated, challenging the traditional security measures employed by many UK institutions. Enter Artificial Intelligence (AI) – a game-changer in the realm of cybersecurity for both the commercial and public sectors. AI’s advanced algorithms and predictive analytics offer innovative ways to bolster security infrastructure, making it a valuable ally for cybersecurity professionals…

  1. Proactive Threat Detection:
    • Function: By continuously analysing vast amounts of data, AI can identify patterns and anomalies that might indicate a security breach or an attempted attack.
    • Benefit: Rather than reacting to threats once they’ve occurred, institutions can prevent them, ensuring uninterrupted services and safeguarding sensitive data.
  2. Phishing Attack Prevention:
    • Function: AI can evaluate emails and online communications in real-time, spotting the subtle signs of phishing attempts that might be overlooked by traditional spam filters.
    • Benefit: This significantly reduces the risk of employees unknowingly granting access to unauthorised entities.
  3. Automated Incident Response:
    • Function: When a threat is detected, AI-driven systems can instantly take corrective actions, such as isolating affected devices or blocking malicious IP addresses.
    • Benefit: Swift automated responses ensure minimal damage, even when incidents occur outside regular monitoring hours.
  4. Enhanced User Authentication:
    • Function: Incorporating AI into biometric verification systems, such as facial or voice recognition, results in more accurate user identification.
    • Benefit: This curtails unauthorised access and adds an additional layer of security beyond passwords.
  5. Behavioural Analytics:
    • Function: AI algorithms can learn and monitor the typical behaviour patterns of network users. Any deviation from this pattern, such as accessing sensitive data at odd hours, raises an alert.
    • Benefit: This helps detect insider threats or compromised user accounts more effectively (a simple illustration of this approach follows the list).
  6. Predictive Analysis:
    • Function: AI models can forecast future threat landscapes by analysing current cyberattack trends and patterns.
    • Benefit: Organisations can prepare and evolve their cybersecurity strategies in anticipation of emerging threats.
  7. Vulnerability Management:
    • Function: AI can scan systems to identify weak points or vulnerabilities, prioritising them based on potential impact.
    • Benefit: Cybersecurity professionals can address the most critical vulnerabilities first, ensuring optimal resource allocation.
  8. Natural Language Processing (NLP):
    • Function: AI-powered NLP can scan and interpret human language in documents, emails, and online communications to detect potential threats or sensitive information leaks.
    • Benefit: It provides an additional layer of scrutiny, ensuring data protection and compliance.
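
Several of the functions above, notably proactive threat detection (1) and behavioural analytics (5), come down in practice to unsupervised anomaly detection over activity logs. The Python sketch below shows the general shape of such a detector using scikit-learn’s IsolationForest on made-up login features; production systems use far richer signals, tuning and alert handling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per login event: [hour_of_day, MB_downloaded, failed_attempts].
# Real deployments add device, location, resource sensitivity and many other signals.
rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.integers(8, 19, size=500),   # office hours
    rng.gamma(2.0, 5.0, size=500),   # modest download volumes
    rng.poisson(0.2, size=500),      # rare failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login pulling 400 MB after several failed attempts should stand out.
suspicious = np.array([[3, 400.0, 4]])
print("anomaly" if model.predict(suspicious)[0] == -1 else "normal")
```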

By harnessing the capabilities of AI, the UK’s commercial and public sectors can look forward to a more robust cybersecurity posture. Not only does AI enhance threat detection and response, but its predictive capabilities ensure that organisations are always a step ahead of potential cyber adversaries. As cyber threats continue to evolve, so too will AI’s role in countering them, underscoring its pivotal role in the future of cybersecurity.

Learn more about how AI can support your cyber defences at the Security IT Summit.

The crucial role of audio solutions in IT security for hybrid work models

By Charles

Shure partnered with global market research giant IDC to delve deep into the challenges IT security departments may face as they integrate hybrid work models. Drawing on insights from over 600 respondents across a range of countries, including the UK, the study sheds light on IT security concerns surrounding the quality of audio systems and its potential implications for secure and effective communication.

One of the paramount findings was that, while organisations in the UK are spearheading the adoption of hybrid work structures in Europe, they appear to be underestimating the security aspects linked to high-calibre audio solutions. The gaps in audio quality not only hinder effective communication but could also pose potential security risks, especially when critical information is misheard or misunderstood.

It was evident from the study that poor communication and reduced engagement during virtual meetings were key issues businesses were grappling with. Many IT security professionals highlighted that subpar audio equipment is not only a challenge for clear communication but also a potential security vulnerability, especially if employees resort to non-secure means of communication due to poor audio quality.

Globally, 72% of thriving organisations understand the importance of investing in professional-grade audio gear not just for clear communication, but also from a security standpoint. However, the UK seems to lag in this realisation, signifying an urgent call-to-action for IT security teams.

The IDC research underscores that top-tier audio isn’t merely about sound clarity; it’s also about creating a secure communication environment in the era of hybrid work. Quality audio solutions can prevent miscommunications, reduce the need for repeated information transfer, and thus limit exposure to potential security threats.

Key insights from the research with relevance to IT security are:

  • Team Dynamics: 94% of respondents acknowledge that technology which mimics the essence of face-to-face interactions can positively influence team dynamics and motivation.
  • Operational Efficiency: 90% believe robust audio solutions promote inclusive meetings and foster more efficient and secure work sessions.
  • Employee Trust: 90% perceive such investments as an indication of the company’s commitment to their well-being and security.
  • Organisational Reputation: 89% feel it impacts how both staff and external stakeholders perceive the company’s commitment to security.
  • Employee Confidence: 73% believe that quality audio solutions boost their confidence in the organisation’s dedication to secure and clear communication.
  • Decision-Making: 49% recognise its role in facilitating better-informed, and thus more secure, decision-making processes.

For a comprehensive understanding of the study and more in-depth insights, access the IDC Infobrief sponsored by Shure: https://effortless.shure.com/content-hub/posts/idc-infobrief

5 Minutes With… Javvad Malik, Security Awareness Advocate at KnowBe4

By Stuart O'Brien

In the latest instalment of our cybersecurity industry executive interview series we spoke to Javvad Malik, Security Awareness Advocate at KnowBe4, about the importance of training employees to avoid risks, common mistakes made in cyber defence strategies and why action blockbuster Predator is the perfect movie for cybersecurity professionals…

Tell us about KnowBe4

KnowBe4 is the world’s first and largest New-school security awareness training and simulated phishing platform that helps you manage the ongoing problem of social engineering. Our main mission is to enable employees to make smarter security decisions every day, and that’s what our products are all designed to support.

Why is security awareness and training important?

Humans are the most attacked vector in any cyber security incident. As a result, security awareness and training is essential to equip employees with the knowledge and skills needed to navigate the digital world safely. By promoting a strong security culture, organisations can significantly reduce the likelihood of successful cyberattacks and minimise the potential impact when incidents occur.

What do organisations most commonly get wrong when it comes to cybersecurity?

Too many times, organisations fixate on the shiny new threats out there, which are often highly technical and theoretical. The main threats most organisations face still revolve heavily around phishing, poor passwords, and unpatched software. By focussing on the fundamental controls, organisations can reduce their risk significantly compared to chasing the latest shiny tech.

What advice would you give someone starting out in cybersecurity?

Be patient, learn your craft, and find mentors who can help you grow into the areas you want to work in.

What infosec technology could you not live without?

From a personal perspective, I think a password manager has become invaluable in creating, storing and managing credentials. I genuinely don’t know any of my credentials – which I think is a good thing.

What’s your favourite cybersecurity movie?

Predator, with Arnold Schwarzenegger. You probably are wondering why that is a cybersecurity movie, and to answer that I explained my thoughts here: https://javvadmalik.com/2020/10/29/why-predator-is-the-ultimate-ciso-movie/

Where does GenAI fit into the data analytics landscape?


Recently, there has been a lot of interest and hype around Generative Artificial Intelligence (GenAI), such as ChatGPT and Bard. While these applications are geared more towards the consumer, there is a clear uptick in businesses wondering where this technology can fit into their corporate strategy. James Gornall, Cloud Architect Lead at CTS, explains the vital difference between headline-grabbing consumer tools and proven, enterprise-level GenAI…

Understanding AI

Given the recent hype, you’d be forgiven for thinking that AI is a new capability, but in fact businesses have been using some form of AI for years – even if they don’t quite realise it.

One of the many applications of AI in business today is in predictive analytics. By analysing datasets to identify patterns and predict future outcomes, businesses can more accurately forecast sales, manage inventory, detect fraud and anticipate resource requirements.

Using data visualisation tools to make complex data simpler to understand and more accessible, decision-makers can easily spot trends, correlations and outliers, leading them to make better-informed data-driven decisions, faster.

Another application of AI commonly seen is to enhance customer service through the use of AI-powered chatbots and virtual assistants that meet the digital expectations of customers, by providing instant support when needed.

So what’s new?

What is changing with the commercialisation of GenAI is the ability to create entirely new datasets based on what has been learnt previously. GenAI can use the millions of images and pieces of information it has learned from to write documents and create imagery at a scale never seen before. This is hugely exciting for organisations’ creative teams, providing unprecedented opportunities to create new content for ideation, testing, and learning at scale. With this, businesses can rapidly generate unique, varied content to support marketing and brand.

The technology can use data on customer behaviour to deliver quality personalised shopping experiences. For example, retailers can provide unique catalogues of products tailored to an individual’s preferences, to create a totally immersive, personalised experience. In addition to enhancing customer predictions, GenAI can offer personalised recommendations based on past shopping choices and provide human-like interactions to enhance customer satisfaction.

Furthermore, GenAI supports employees by automating a variety of tasks, including customer service, recommendation, data analysis, and inventory management. In turn, this frees up employees to focus on more strategic tasks.

Controlling AI

The latest generation of consumer GenAI tools has transformed AI awareness at every level of business and society. In the process, these tools have also done a pretty good job of demonstrating the problems that quickly arise when they are misused: from users who do not realise that inputting confidential code into ChatGPT leaks valuable Intellectual Property (IP) which could be included in the chatbot’s future responses to other people around the world, to lawyers fined for citing fictitious ChatGPT-generated research in a legal case.

While this latest iteration of consumer GenAI tools is bringing awareness to the capabilities of this technology, there is a lack of education around the way it is best used. Companies need to consider the way employees may be using GenAI that could potentially jeopardise corporate data resources and reputation.

With GenAI set to accelerate business transformation, AI and analytics are rightly dominating corporate debate, but as companies adopt GenAI to work alongside employees, it is imperative that they assess the risks and rewards of cloud-based AI technologies as quickly as possible.

Trusted Data Resources

One of the concerns for businesses to consider is the quality and accuracy of the data provided by GenAI tools. This is why it is so important to distinguish between the headline-grabbing consumer tools and enterprise-grade alternatives that have been in place for several years.

Business-specific language is key, especially in jargon-heavy markets, so it is essential that the GenAI tool being used is trained on industry-specific language models.

Security is also vital. Commercial tools allow a business to set up its own local AI environment where information is stored inside the virtual safety perimeter. This environment can be tailored with a business’ documentation, knowledge bases and inventories, so the AI can deliver value specific to that organisation.

While these tools are hugely intuitive, it is also important that people understand how to use them effectively.

Providing structured prompts and being specific in the way questions are asked is one thing, but users need to remember to think critically rather than simply accept the results at face value. A sceptical viewpoint is a prerequisite – at least initially. The quality of GenAI results will improve over time as the technology evolves and people learn how to feed valid data in, so they get valid data out. However, for the time being people need to take the results with a pinch of salt.
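
The point about structured prompts is easiest to see with a template: spell out the role, the context the model may use, explicit constraints, and the expected output format, then keep a sceptical human check on the answer. The Python sketch below is generic; `ask_model` is a hypothetical placeholder for whichever approved, enterprise-grade GenAI endpoint an organisation actually uses.

```python
# Generic structured-prompt template; ask_model() stands in for a vetted enterprise endpoint.
PROMPT_TEMPLATE = """\
Role: You are an analyst assistant for a UK compliance team.
Context: {context}
Task: {task}
Constraints:
- Use only the information supplied in Context; answer "unknown" if it is not there.
- Do not include personal data in the answer.
Output format: three bullet points, each under 25 words.
"""

def build_prompt(context: str, task: str) -> str:
    return PROMPT_TEMPLATE.format(context=context, task=task)

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this calls the organisation's approved GenAI service.
    return "(model response goes here)"

prompt = build_prompt(
    context="Q3 complaints log summary: 40% of complaints relate to onboarding delays.",
    task="Suggest the themes the team should investigate first.",
)
print(prompt)
print(ask_model(prompt))  # Treat the output sceptically and verify it before acting on it.
```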

It is also essential to consider the ethical uses of AI.

Avoiding bias is a core component of any Environmental, Social and Governance (ESG) policy. Unfortunately, inherent bias exists in AI algorithms, so companies need to be careful, especially when using consumer-level GenAI tools.

For example, finance companies need to avoid algorithms producing biased outcomes against customers wanting to access certain products, or offering different interest rates based on discriminatory data.

Similarly, medical organisations need to ensure ubiquitous care across all demographics, especially when different ethnic groups experience varying risk factors for some diseases.

Conclusion

AI is delivering a new level of data democratisation, allowing individuals across businesses to easily access complex analytics that has, until now, been the preserve of data scientists. The increase in awareness and interest has also accelerated investment, transforming the natural language capabilities of chatbots, for example. The barrier to entry has been reduced, allowing companies to innovate and create business specific use cases.

But good business and data principles must still apply. While it is fantastic that companies are now actively exploring the transformative opportunities on offer, they need to take a step back and understand what GenAI means to their business. Before rushing to meet shareholder expectations for AI investment to achieve competitive advantage, businesses must first ask themselves, how can we make the most of GenAI in the most secure and impactful way?

AI: The only defence against rising cyberattacks in the education sector?

By Stuart O'Brien

Scott Brooks, Technical Strategist at IT Support company Cheeky Munkey, provides expert insight on how the rise of AI is impacting cyberattacks on schools, and why AI might be the only way for schools and universities to defend themselves against more advanced attacks…

The UK’s education sector is significantly more vulnerable to cyberattacks than education sectors in other countries. In 2022, the UK’s education sector accounted for 16% of total victims on data leak sites, compared to 7% in the US and 4% in France [1].

With 1,500 pupils returning to school today after an additional unplanned week off following the attack on Highgate Wood School, the need to consider how AI can be used to help protect schools against cyberattacks is more pressing than ever.

Big businesses such as Google, Tesla and PayPal [2] are using AI systems to improve their cybersecurity solutions. At the same time, cybercriminals are able to use AI technology to create new cyberattack methods which are harder to defend against.

With this in mind, educational institutions must invest in learning about the new kinds of cyber threats they may face and AI cybersecurity systems. This article provides an overview of the new threats AI poses to schools and universities, as well as the reasons that educational institutions should invest in AI as a defensive system.

New AI threats to cybersecurity

Hackers using AI

It’s been found that AI is making cybercrime more accessible, with less skilled hackers using it to write scripts – enabling them to steal files [3]. It’s easy to see how AI can increase the number of hackers by eliminating the need for sophisticated cyber skills.

Hackers can also use machine learning to test the success of the malware they develop. Once a hacker has developed malware, they can model their attack methods to see what is detected by defences. Malware is then adapted to make it more effective, making it much harder for IT staff to catch and respond to threats.

False data can also be used to confuse AI systems. When companies use AI systems for cybersecurity, they learn from historical data to stop attacks. Cybercriminals create false positives, teaching cybersecurity AI models that these patterns and files are ‘safe’. Hackers can then exploit this to infiltrate school systems.

Imitation game

Cyber threats that would once have been categorised as ‘easy’ to repel are getting harder to defend against as AI is improving its ability to imitate humans. A key example of this is phishing emails. Bad grammar and spelling are usually telltale signs warning recipients not to click a link in an email. Attackers are now using chatbots to ensure their spelling and grammar are spot on, making it trickier for school staff to spot the red flags.

Cybersecurity skills gap

Currently, there’s a skills gap within the cybersecurity industry. It’s argued that not enough people have the skill level and knowledge required to develop and implement cybersecurity AI systems. This is because AI is developing at such a rapid pace that it’s hard for professionals to keep up [4].

Hiring people with the specialised skills needed, as well as procuring the software and hardware required for AI security systems, can also be costly – especially for schools with already stretched budgets. This means that educational institutions are likely playing catch-up with hackers.

How can AI help improve cybersecurity?

Although AI can be used for ever-more sophisticated attacks, it can also be a powerful tool for improving cybersecurity.

Analysis

AI offers an improved level of cybersecurity, which can help reduce the likelihood of an attack on schools. By analysing existing security systems and identifying weak points, AI allows IT staff to make necessary changes.

Artificial intelligence systems learn to identify which patterns are normal for a network by using algorithms to assess network traffic. These systems can quickly spot when traffic is unusual and immediately alert security teams to any threats, allowing for rapid action.
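
What the paragraph above describes is essentially baselining: learn what “normal” traffic looks like, then alert on deviations. The deliberately simple Python sketch below flags spikes against a rolling mean and standard deviation of per-minute request counts; the numbers are simulated, and real products model far more than traffic volume.

```python
from statistics import mean, stdev

def detect_spikes(counts, window=30, threshold=3.0):
    """Flag minutes whose request count sits more than `threshold` standard
    deviations above the rolling baseline of the previous `window` minutes."""
    alerts = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and counts[i] > mu + threshold * sigma:
            alerts.append((i, counts[i]))
    return alerts

# Simulated per-minute request counts on a school network: steady traffic, then a burst.
traffic = [100, 104, 98, 101, 97, 103, 99, 102, 100, 105] * 4 + [460]
print(detect_spikes(traffic))  # the final burst should be flagged
```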

In addition to preventing network attacks, AI can also be used to improve endpoint security. Devices such as laptops and smartphones are commonly targeted by hackers. To combat this threat, AI security solutions scan for malware within files – quarantining anything suspicious.

Advanced data processing

AI-based security solutions are continuously learning and can process huge volumes of data. This means that they can detect new threats and defend against them in real-time. By picking up on subtle patterns, these systems are able to detect threats that humans would likely miss. It also enables AI to keep up with ever-changing attacks better than traditional antivirus software, which relies on a database of known malware behaviours and cannot identify threats outside of that database.

The ability of AI systems to handle so much data also makes their implementation incredibly scalable. These systems can handle increasing volumes of data in cloud environments and Internet of Things devices and networks.

Working with humans

Since AI systems can automatically identify threats and communicate the severity and impact of an attack, they help cybersecurity teams to prioritise their work. This saves workers time and energy, allowing them to respond to more urgent security threats.

Task automation is another key benefit of AI for educational institutions. AI systems can automate tasks such as routine assessments of system vulnerabilities and patch management. This reduces the workload of external cybersecurity teams and allows for more efficient working, reducing costs for schools and universities. By automating these tasks, AI can alleviate the shortage of skilled workers, addressing the cyber skills gap [5].

The rise of AI is understandably a cause of concern for educational institutions and teaching staff alike. Improved cyber threat capabilities mean that schools and universities need to be prepared for changing attacks. However, it’s clear that adopting AI systems is the best way for educational institutions to improve their own cybersecurity. By combining adept cybersecurity staff with artificial intelligence cybersecurity systems, educational institutions can stay ahead of new threats and improve the efficiency of their operations.