AI Archives - Page 2 of 2 - Cyber Secure Forum | Forum Events Ltd

OPINION: The promising influence of generative AI on cybersecurity

Stuart O'Brien

The cybersecurity landscape has always evolved rapidly, with new threats emerging and old ones becoming more sophisticated. As defenders strive to stay ahead, generative artificial intelligence (AI) is proving to be a game-changing ally, transforming threat detection, response strategies, and even the creation of defensive measures.

Generative AI holds the potential to transform threat detection. By learning from past cyber-attacks and security incidents, AI can generate predictive models to identify potential threats before they materialise. This capability enables proactive security measures and mitigates the risk of breaches, significantly bolstering cybersecurity efforts.
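To make the idea concrete, here is a minimal sketch of learning from past incidents — all data, feature names and values below are invented for illustration, not drawn from any real feed. It scores a new event by how often its features appeared in previously confirmed attacks versus benign traffic:

```python
from collections import Counter

# Toy history: feature sets from past confirmed incidents and benign events.
past_incidents = [
    {"port": 445, "geo": "unknown", "proto": "smb"},
    {"port": 3389, "geo": "unknown", "proto": "rdp"},
    {"port": 445, "geo": "unknown", "proto": "smb"},
]
benign_events = [
    {"port": 443, "geo": "local", "proto": "https"},
    {"port": 443, "geo": "local", "proto": "https"},
]

def feature_counts(events):
    """Count how often each (feature, value) pair occurs."""
    counts = Counter()
    for event in events:
        counts.update(event.items())
    return counts

bad = feature_counts(past_incidents)
good = feature_counts(benign_events)

def risk_score(event):
    # Positive when the event's features were seen mostly in past incidents.
    return sum(bad[kv] - good[kv] for kv in event.items())

print(risk_score({"port": 445, "geo": "unknown", "proto": "smb"}))   # high
print(risk_score({"port": 443, "geo": "local", "proto": "https"}))   # low
```

A production system would use far richer features and a proper statistical model, but the principle — learn from labelled history, score new events before they escalate — is the same.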

Beyond prediction, generative AI also enhances incident response. When a security incident occurs, AI can generate an optimal response strategy based on factors like the nature of the threat, the systems affected, and past successful interventions. By minimising the time between threat detection and response, AI can limit the damage inflicted by cyber-attacks.

In the realm of network security, generative AI is instrumental in the creation of ‘honeypots’ – decoy systems designed to lure attackers. AI can generate realistic network traffic, creating convincing honeypots that can distract attackers, provide early warning of an attack, and provide valuable intelligence about attacker tactics.
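As a hedged sketch of that idea — the paths, user agents and address ranges here are made up for the example — a decoy host can be made to look busy by emitting plausible web-server log lines:

```python
import random

random.seed(7)  # deterministic output for the example

PATHS = ["/index.html", "/login", "/api/v1/status", "/images/logo.png"]
AGENTS = ["Mozilla/5.0", "curl/7.88.1"]

def fake_log_line():
    """Return one synthetic access-log line in a common web-server shape."""
    ip = "10.0.{}.{}".format(random.randint(0, 255), random.randint(1, 254))
    path = random.choice(PATHS)
    status = random.choice([200, 200, 200, 304, 404])  # mostly successes
    agent = random.choice(AGENTS)
    return '{} "GET {} HTTP/1.1" {} "{}"'.format(ip, path, status, agent)

decoy_traffic = [fake_log_line() for _ in range(5)]
for line in decoy_traffic:
    print(line)
```

Real honeypot tooling goes much further — generative models can mimic the statistical shape of live traffic — but even this toy generator shows the principle: synthetic activity that makes a decoy indistinguishable from a working system at a glance.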

Another transformative application of generative AI is in cybersecurity training. AI can generate simulated cyber-attack scenarios for cybersecurity professionals to practice their response, improving their skills and readiness for real-world threats.

Perhaps most innovatively, generative AI can be used to create new security systems. AI algorithms can generate novel encryption algorithms, security protocols, or system configurations, pushing the boundaries of current cybersecurity technology and making our digital assets more secure.

However, as promising as these advancements are, the integration of generative AI in cybersecurity is not without challenges. Issues of data privacy, the risk of AI being used maliciously, and the need for humans in the loop to supervise and interpret AI actions must be addressed.

In conclusion, the potential of generative AI in revolutionising the cybersecurity industry is immense. It brings forth a proactive, adaptive, and innovative approach to cybersecurity, helping defenders stay one step ahead of attackers. As generative AI continues to evolve, it will undoubtedly shape the future of cybersecurity, transforming it into a more resilient, intelligent, and agile domain. The cybersecurity industry stands on the cusp of this exciting era, and the successful navigation of this transformation will determine its efficacy in our increasingly interconnected world.


Cybersecurity priorities: Why AI-powered threat detection should be in your plans

Guest Post

By Atech Cloud

The changed world we’ve found ourselves living in since the global pandemic struck in 2020 has been particularly helpful to cybercriminals. Nothing illustrates this so well as the SolarWinds hack, described by Microsoft president Brad Smith as the most sophisticated cyberattack of all time, the reverberations of which have been felt throughout 2021.

Homeworking, the ongoing digitalisation of society, and the increasingly online nature of our lives mean opportunities abound for phishers, hackers, scammers, and extortionists. As we head into 2022, there is, unfortunately, no sign of this letting up. This is why individuals and organisations must be aware of the ever-growing avenues of attack, as well as what can be done to mitigate the risks.

So let’s take a look at the most significant trends affecting our online security in the next year and beyond, along with some practical steps we recommend taking to avoid becoming victims:

AI-powered cybersecurity

Similar to the way in which it is used in financial services for fraud detection, artificial intelligence (AI) can counteract cybercrime by identifying patterns of behaviour that signify something out-of-the-ordinary may be taking place. Crucially, AI means this can be done in systems that need to cope with thousands of events taking place every second, which is typically where cybercriminals will try to strike.
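A minimal sketch of that pattern-of-behaviour idea — the window size and threshold here are illustrative, not recommendations — flags any event whose rate deviates sharply from a rolling baseline:

```python
import math
from collections import deque

class RollingAnomalyDetector:
    """Flag values that deviate sharply from a rolling baseline."""

    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if value is anomalous versus the rolling baseline."""
        if len(self.window) >= 5:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1.0  # avoid division by zero
            anomalous = abs(value - mean) / std > self.threshold
        else:
            anomalous = False  # not enough history yet
        self.window.append(value)
        return anomalous

det = RollingAnomalyDetector()
normal_traffic = [100, 102, 98, 101, 99, 100, 103, 97, 101, 100]  # events/sec
flags = [det.observe(v) for v in normal_traffic]
spike = det.observe(10_000)  # sudden burst, e.g. possible data exfiltration
print(flags, spike)
```

Commercial systems apply far more sophisticated models across thousands of signals at once, but the core mechanism — baseline the normal, flag the deviation — is what lets detection keep up with thousands of events per second.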

A product we recommend and work with is the Azure Sentinel Solution for all cloud security needs.

To find out why cloud-native security operations is the hot button topic for this year and how to deliver it, read the rest of this article on our blog.

How AI stopped a WastedLocker intrusion before ransomware deployed

Stuart O'Brien

By Max Heinemeyer, Director of Threat Hunting, Darktrace

Since first being discovered in May 2020, WastedLocker has made quite a name for itself, quickly becoming an issue for businesses and cyber security firms around the world. WastedLocker is known for its sophisticated methods of obfuscation and steep ransom demands.

Its use of ‘living off the land’ techniques makes a WastedLocker attack extremely difficult for legacy security tools to detect. As ransomware dwell time shrinks to hours rather than days, security teams are increasingly relying on artificial intelligence to stop threats from escalating at the earliest signs of compromise – containing attacks even when they strike at night or on the weekend.

This article examines a WastedLocker intrusion that targeted a US agricultural organization in December.

The initial infection appears to have taken place when an employee was deceived into downloading a fake browser update. Attempted reconnaissance began just 11 minutes after the initial intrusion, and the attacker used an existing administrative credential to establish successful administrative and remote connections to other internal devices. Several hours later – in the early hours of the morning – the attacker used a temporary admin account to attempt a file transfer.

Darktrace AI detected every stage of this intrusion, picking up on all unusual activity for the organization and unusual user behavior, including HTTP connections to anomalous external destinations and highly unusual connections between internal devices.

With Darktrace’s real-time detections – and Cyber AI Analyst investigating and reporting on the incident in a matter of minutes – the security team were able to contain the attack, taking the infected devices offline.

Without Darktrace in place, the ransomware would have been successful in encrypting files, preventing business operations at a critical time and possibly inflicting huge financial and reputational losses to the organization.

Darktrace’s AI detects and stops ransomware in its tracks without relying on threat intelligence. Ransomware has thrived this year, with attackers constantly coming up with new attack TTPs. However, the above threat find demonstrates that even targeted, sophisticated strains of ransomware can be stopped with AI technology.

For more information on Darktrace, click here.

Breaking down AI’s role in cybersecurity

Guest Post

Data security is now more vital than ever. Today’s cybersecurity threats are incredibly smart and sophisticated. Security experts face a daily battle to identify and assess new risks, identify possible mitigation measures and decide what to do about the residual risk. 

This next generation of cybersecurity threats require agile and intelligent programs that can rapidly adapt to new and unforeseen attacks. AI and machine learning’s ability to meet this challenge is recognised by cybersecurity experts, the majority of whom believe it is fundamental to the future of cybersecurity. Paul Vidic, Director, Certes Networks, outlines how AI and machine learning will play a fundamental role in enabling organisations to detect, react to – even prevent – emerging cyber threats more promptly and effectively than ever before...

Why is Cybersecurity so Important?

Cybersecurity is important because it encompasses everything that pertains to protecting our sensitive data – personally identifiable information (PII), protected health information (PHI), intellectual property – and governmental and industry information systems from attempted theft and damage.

As the whole world is becoming more digitalised, cybercrime is now one of the biggest threats to all businesses and government organisations around the world.

According to recent reports, cyber criminals exposed 2.8 billion consumer data records in 2018, costing US organisations over $654 billion. Meanwhile, the 2019 Ninth Annual Cost of Cybercrime Study calculated the total value at risk as US$5.2 trillion globally over the next five years.

The same report identified the use of automation, advanced analytics and security intelligence to manage the rising cost of discovering attacks.

Enter AI and Machine Learning

Artificial Intelligence (AI) and machine learning technologies address these challenges and are giving rise to new possibilities for cybersecurity threat protection. AI in cybersecurity plays an important role in threat detection, pattern recognition, and response time reduction. Adopting AI in cybersecurity offers better solutions when it comes to analysing massive quantities of data, speeding up response times, and increasing efficiency of often under-resourced security teams.

AI is designed and trained to collect, store, analyse and process significant amounts of data from both structured and unstructured sources. Deploying technologies such as machine learning and deep learning allows the AI to constantly evolve and improve its knowledge of cybersecurity threats and cyber risk.

For example, by recognising patterns in our environment and applying complex analytics, AI enables us to automatically flag unusual patterns and enable detection of network problems and cyber-attacks in real-time. This visibility supplies deeper insights into the threat landscape which in turn informs the machine learning. This means that AI-based security systems are constantly learning, adapting and improving. 

Risk Identification

Risk identification is an essential feature of adopting artificial intelligence in cybersecurity. AI’s data processing capability is able to reason and identify threats through different channels, such as malicious software, suspicious IP addresses, or virus files.

Moreover, cyber-attacks can be predicted by tracking threats through cybersecurity analytics which uses data to create predictive analyses of how and when cyber-attacks will occur. The network activity can be analysed while also comparing data samples using predictive analytics algorithms. 
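As a hedged illustration of such predictive analytics — the weekly attack counts below are invented — one of the simplest forms is fitting a least-squares trend to past telemetry and projecting the next period:

```python
# Toy "predictive analytics" on attack telemetry: fit a least-squares
# trend to weekly attack counts and project the following week.
weekly_attacks = [12, 15, 14, 18, 21, 24, 23, 27]  # past eight weeks

n = len(weekly_attacks)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(weekly_attacks) / n

# Ordinary least squares for a straight-line fit y = slope * x + intercept.
slope = (
    sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, weekly_attacks))
    / sum((x - x_mean) ** 2 for x in xs)
)
intercept = y_mean - slope * x_mean

forecast = slope * n + intercept  # projected count for week 9
print(round(forecast, 1))
```

Real predictive systems model many variables at once rather than a single trend line, but the shape of the reasoning – extrapolate from historical attack data to anticipate the next move – is the same.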

In other words, AI systems can predict and recognise a risk before the actual cyber-attack strikes.

Conclusion

Of course, fundamental security measures such as malware scanning, firewalls, access controls, encryption, and policy definition and enforcement remain as important as ever. AI does not replace these; rather, it complements them.

However, as AI and machine learning technologies continue to mature, it is possible to imagine a time when the cybersecurity industry – having long been at the mercy of the malevolent hacker – may finally have the tools to take the lead. 

Skill boost as £18.5m committed to data & AI degrees

Stuart O'Brien

Up to 2,500 people will benefit from £18.5 million of funding to retrain and become experts in data science and artificial intelligence (AI).

The investment is part of the Government’s ongoing strategy to drive skills within the technology sector, supporting more adults to upskill and retrain to progress in their existing careers or find new employment.

A £13.5 million investment will fund new degree and Masters conversion courses and scholarships at UK academic institutions over the next three years.

£5 million is also being invested to encourage technology companies to develop cutting-edge solutions, utilising AI and automation, to improve the quality of online learning for adults.

The Adult Learning Technology Innovation Fund, which will be launched in partnership with innovation foundation Nesta, will provide funding and expertise to incentivise tech firms to harness new technologies to develop bespoke, flexible, inclusive, and engaging online training opportunities to support more people into skilled employment.

More than 2.1 million people are currently employed across the tech sector, contributing £184 billion to the economy every year. Inward investment in the UK AI sector stood at £1 billion for 2018 – more than Germany, France, the Netherlands, Sweden and Switzerland combined.

The Government is also investing in data-driven technologies, such as artificial intelligence, through the modern Industrial Strategy, so tech businesses and people with the drive and talent can succeed.

“UK firms continue to build on our heritage as the home of Artificial Intelligence, and through our modern Industrial Strategy we’re investing in that strength to ensure we remain world-leaders in the field and at the very forefront of the latest technologies,” said Business Secretary Greg Clark.

“These new retraining opportunities and scholarships will ensure people from all backgrounds have the opportunity to move into new and exciting careers, and to shape this innovative industry for years to come.”

Discussing the launch of the new Adult Learning Technology Innovation Fund, Education Secretary Damian Hinds said: “Artificial Intelligence and other new technologies are transforming the way we live and work and have the potential to radically improve online learning and training, so more people can get the skills they need.

“We all have busy lives, juggling work and family commitments so online courses are a great way for more people to retrain or upskill and secure a rewarding career. Investing in cutting edge technologies such as AI will mean we can future proof the online learning experience and ensure it better meets students’ needs.

“This is an exceptional opportunity for technology firms to work with Government to put their ideas into action to help develop pioneering online training opportunities for adults.”

Potential applicants to the AI and Data Conversion courses will hold a degree in other disciplines and scholarships will be made available to support applications from diverse backgrounds. This could include people returning to work after a career break and looking to retrain in a new profession, under-represented groups in the AI and digital workforce, including women and people from minority ethnic backgrounds, or lower socio-economic backgrounds.

The news follows the recently announced multimillion skills package which saw the creation of industry-funded AI Masters, prestigious Alan Turing Institute AI research fellowships, and 16 dedicated Centres at universities across the country to train 1000 extra AI PhDs.

Image by Gerd Altmann from Pixabay

Research into AI cyber security threat lacking

Stuart O'Brien

A study of academic cyber security research projects worth €1bn, conducted to assess academic trends and threats, has found Cyber Physical Systems, Privacy, IoT and Cryptography to be the strongest cyber security areas to watch – but that Artificial Intelligence is an “apparent omission”.

Crossword Cybersecurity looked at nearly 1,200 current and past research projects from academic institutions in the United Kingdom, United States, Europe, Australia, and Africa, with reported funding of EU projects at over €1 billion.

The database identified several global trends by comparing the periods January 2008 to June 2013 with July 2013 to December 2018, including:

  • Cyber Physical Systems (CPS) – Over 100 projects were found in this area alone, a significant figure. The United States appears to be the most active in CPS research, with a focus on securing critical infrastructure.
  • Privacy – Projects related to privacy have increased by 183% in recent years.
  • Internet of Things (IoT) – Projects with an IoT element have increased by 123% lately, with around 14% of current projects having this characteristic.
  • Cryptography – With the promise of quantum computing on the horizon, there has been an influx of new projects that apply the technology to the future of cryptography, with a 227% increase in this area of research (albeit from a low base).

Significant differences can also be seen between regions. For example, the EU appears distinctly focused on minimising Small & Medium Enterprises’ (SME) exposure to cyber security risk. Conversely, when compared with other regions, the US has a greater focus on the human component of cyber security. Other US top project funding areas include Cyber Physical Systems (as applied to smart cities and power grids), securing the cloud, cybercrime, and the privacy of Big Data sets (as applied to the scientific research community).

In the UK, the leading research verticals are critical infrastructure and securing the health sector (with 11 current projects each). Current funding across UK projects exceeds £70m, with quantum and IoT-related projects both more than doubling over five years. There are currently nine new UK projects with a focus on Cyber Physical Systems.

The four UK projects with the greatest funding are in the fields of Safe and Trustworthy Robotics, Big Data Security, Cybercrime in the Cloud and Quantum Technology for Secure Communications.

The most notable UK decline was in big data projects, which have dropped by 85%.

Globally, there are currently 52 projects with a cryptographic focus, and at least 39 live EU projects featuring a cryptographic element. In the UK, this area has been consistently strong over the last ten years, with 18 projects starting between 2008 and mid-2013, and 19 projects from mid-2013 to now.

Tom Ilube, CEO at Crossword Cybersecurity plc said: “The need to protect critical infrastructure has never been stronger as technology becomes more deeply embedded in every aspect of our daily lives. However, one apparent omission is research solely focused on the application of AI techniques to complex cyber security problems. We hope to see more of that in the future, as the industry works to stay ahead of the constantly evolving cyber security landscape.”

The Crossword Cybersecurity database will be periodically updated, to deliver ongoing insight into the most prevalent cyber security research trends and investment areas. If you are interested in further details, contact the Scientific Advisory Team at Crossword Cybersecurity on innovation@crosswordcybersecurity.com.

Ransomware and phishing top concerns for IT professionals

Stuart O'Brien

Ransomware (24%) and phishing attacks (21%) are the top two concerns among IT leaders in 2018, according to new research.

Barracuda surveyed more than 1,500 IT and security professionals in North America, EMEA, and APAC about their IT security priorities, how these have shifted over the past 15 years, and what is expected to change over the next 15 years.

Other key findings include:
  • In 2003, viruses (26%) and spam and worms (18%) were noted as the top two threats
  • In 2003 only 3% identified cloud security as a top priority. This number has gone up to 14% in 2018
  • 43% identified AI and machine learning as the development that will have the biggest impact on cyber security in the next 15 years
  • 41% also believe the weaponisation of AI will be the most prevalent attack tactic in the next 15 years

Overall, Barracuda says the study indicates that while the top security priorities have remained consistent over the past 15 years, the types of threats organisations are protecting against have shifted significantly.

Looking ahead, respondents believe that the cloud will be a higher priority 15 years in the future and that AI will be both a threat and an important tool.

A full 25 percent of respondents said email was their top security priority in 2003, and 23 percent said the same about their current priorities.

Network security came in a close second for both 2003 and 2018 priorities, with 24 percent and 22 percent respectively.

31 percent of respondents chose AI as the new technology that they will rely on to help improve security, and 43 percent identified the increasing use of artificial intelligence and machine learning as the development that will have the biggest impact on cyber security in the next 15 years.

On the other hand, 41 percent believe the weaponisation of AI will be the most prevalent attack tactic in the next 15 years.

“Artificial intelligence is technology that is top of mind for many of the IT professionals we spoke with — both as an opportunity to improve security and as a threat,” said Asaf Cidon, VP email security at Barracuda. “It’s an interesting contrast. We share our customers’ concern about the weaponization of AI. Imagine how social engineering attacks will evolve when attackers are able to synthesize the voice, image, or video of an impersonated target.”

Cylance raises $120m to expand AI cybersecurity platform

Stuart O'Brien

US-based Cylance has closed a $120 million funding round led by funds managed by Blackstone Tactical Opportunities and other investors.

The company says the financing will enable it to continue its global expansion and extend its portfolio of cybersecurity solutions.

Cylance offers a machine learning-powered predictive endpoint security solution that protects users from unknown cyberattacks, in particular from threats which may not exist for years to come. The company says that, since its inception, its approach has prevented attacks on average 25 months before they were first launched and discovered.

The new funding will bolster the company’s sales, marketing and development efforts to expand its global footprint across Europe, the Middle East, and Asia Pacific, and extend its product offerings.

“Cylance has proven that artificial intelligence can defend against cybersecurity problems that were previously thought impossible to prevent,” said Cylance CEO, Stuart McClure. “With the most advanced application of AI in endpoint security, Cylance products continuously learn and improve over time, enabling customers to achieve a state of ‘Perpetual Prevention’ and creating a simple silence on the endpoint.”

“Blackstone was an early believer in Cylance’s approach of applying AI to prevent one of the most difficult issues businesses face today – cyberattacks that disrupt operations and damage reputations,” said Viral Patel, Senior Managing Director in Blackstone’s Tactical Opportunities group.

“This has been a unique opportunity to participate in funding a company helping to turn the tide against a very serious threat to organizations worldwide,” added Dave Johnson, a Senior Advisor to Blackstone.

“With annual revenues over $130 million for fiscal year 2018, over 90% year-over-year growth, and more than 4,000 customers, including over 20% of the Fortune 500, we have demonstrated market success, scale and traction,” said Brian Robins, Chief Financial Officer at Cylance. “We are honored to have Blackstone Tactical Opportunities expand its commitment to Cylance by leading this round of financing. The investment supports our growth strategy and will enable us to continue on the path to becoming cash flow positive.”

Embrace AI, say cyber security professionals

Stuart O'Brien

IBM’s global head of security intelligence, Nick Coleman, has called for cyber professionals to embrace the world of Artificial Intelligence (AI) and automation.

During the Isaca CSX Europe 2017 conference in London, Coleman said that without embracing the worlds of AI and automation, security execs will be “obsolete in three or four years.”

“The threats are becoming so serious that we need to embed artificial intelligence and automation into security processes so that we can be more intelligent and efficient in our response.

“We should be looking at each of these areas and finding ways to embed AI and automation wherever it makes sense to do so to improve efficiency, and thereby improve capability and, ultimately, enable greater business resilience,” Coleman said.

Coleman added that as the cyber security world becomes more sophisticated, the number of threats will continue to grow, highlighting the need to automate as much as possible.

Commenting on IBM’s Watson supercomputer and its ability to ingest four million security-related documents an hour, Coleman added: “Research shows that around a third of their time is spent gathering and processing information, but this is something that can be automated.

“We already have automated planes and ships, and relatively soon we will have self-driving cars, so security teams should be looking at where it makes most sense to automate in cyber security to make sure they are ready for the future and have developed the skills to deliver value on top of automation.”

Responsible leadership critical to managing AI and robots

Stuart O'Brien

A recent roundtable held at Nyenrode Business Universiteit has found that responsible leadership is critical to managing changes such as job losses to AI and robotics technologies, both societally and environmentally.

The roundtable, made up of 24 managers of prominent, ethically responsible Dutch firms, as well as 24 outstanding students from seven Dutch universities, met to discuss developments in AI and robotics technologies.

Bob de Wit, Professor of Strategic Leadership at Nyenrode Business Universiteit and organiser of the event, commented: “Advancements such as AI, robotics and big data will be the catalysts for a societal revolution. As businesses increasingly adopt them, huge numbers could lose their jobs, affecting both work and economic structures globally.

“It is likely that the new jobs that these technologies create will be high-skilled and too few in number. And when every economy relies on its citizens having income, once these job losses start hitting – estimated by consulting firm CBRE at half of professional jobs by 2025 – then spending will stop, taxes will plummet and the economy will suffer.

“Although every business wants to keep up with the digital revolution, cutting corners ethically could result in far worse consequences for us all.”

Without commitment to responsible leadership, sectors such as oil and energy could harness tech advancements to protect their interests at great future cost.

De Wit concluded: “Businesses, societies and governments are not fully prepared for the speed of the advancements we are making in work-related technology. The next generation of managers need to prioritise ethical, social and environmental responsibility when making big decisions, perhaps even putting these above profit. The power tech affords us is immense, but if misused, the consequences could be irreversible.”
