Sopra Steria | F.A.Z.-Institut

Cybersecurity in the era of AI

Artificial intelligence (AI) is transforming the cybersecurity business – for both the angels and the demons. Cybercriminals are using generative AI (GenAI) to individually tailor and scale attacks. However, companies and public authorities are also harnessing the strengths of AI and incorporating the technology into their IT security. How do businesses and public authorities assess the threat posed by hackers? What influence does GenAI have? And how can organisations use AI to protect themselves effectively? Sopra Steria Discover provides the answers – with key findings on this website as well as expert interviews and further insights in the free report.

Topics in this report:

  • AI is transforming cybersecurity
  • Human vulnerability
  • Combining AI and technical expertise
  • Counteracting dangers

“Cybercriminals are continuing to arm themselves with AI, meaning companies and authorities need to reposition themselves. A new cybersecurity strategy that focuses on the possibilities of AI should be designed to translate automated and rapid learning directly into improved cyber defence.”

Olaf Janßen, Head of Cyber Security at Sopra Steria Germany

AI is transforming cybersecurity

  • Three out of four organisations see an intensified threat level in the digital space due to malicious use of AI.
  • Around one in three organisations plans to adapt its cybersecurity strategy to the AI era in the next twelve months.

AI poses new dangers

The use of AI in cybercrime is threatening the IT security of companies and public administrations more than ever. GenAI in particular poses a challenge to organisations: Cybercriminals are using the technology to personalise and automate their attacks. With AI-driven language generation, for example, they can make scams such as the ‘grandchild trick’ appear far more authentic. Artificial intelligence also enables them to analyse relationships in social networks quickly and easily, allowing them to tailor their attacks even more closely to individual targets.

73%

of the specialists and managers surveyed believe that the malicious use of AI has drastically raised the threat level in the digital space.¹

71%

of the specialists and managers surveyed say that cybercriminals are significantly better at using AI to attack than organisations are at using it for defence and prevention.¹

¹ Specialists and managers; n = 564
Source: F.A.Z.-Institut, Sopra Steria

More network security with AI

In light of the new possibilities, companies and public authorities are becoming increasingly concerned that the balance of power is shifting towards the attackers. The specialists and managers surveyed are of the opinion that cybercriminals are much better at using AI to attack than companies are at defending themselves. However, there are also positive signs, as awareness of AI-based cybersecurity is growing. One third of the organisations surveyed are planning to adapt their own cybersecurity strategy to the AI era, while almost as many companies and authorities are preparing to use AI to improve their IT security. After all: “It takes a network to defeat a network.” Find out in our report how you too can use GenAI to strengthen your cybersecurity.

33%

of the specialists and managers surveyed intend to adapt their cybersecurity strategy to AI.¹

30%

of the specialists and managers surveyed are preparing to use AI in cybersecurity.¹

54%

of the specialists and managers surveyed say that without the use of AI in cybersecurity, organisations will stand no chance against cyberattacks in the future.¹

¹ Specialists and managers; n = 564
Source: F.A.Z.-Institut, Sopra Steria

Use neural networks - now!

Interview with Dr Barbara Korte, Squad Lead AI@Cyber Security

Want to learn more about AI in cybersecurity?

Human vulnerability

  • Inappropriate employee reactions to phishing attacks are the biggest weakness in IT security, according to the survey.
  • Employees are increasingly using their own AI tools, putting cybersecurity to the test.

What do you currently see as the three biggest weaknesses in your organisation’s cybersecurity?

43%: Inappropriate responses by employees to attacks such as phishing¹
34%: Inappropriate employee response to AI-powered social engineering attacks¹
34%: Insufficient preparation by the company for attacks¹

¹ Multiple answers possible (maximum of three); the three most frequent response categories are shown; specialists and managers; n = 564
Source: F.A.Z.-Institut, Sopra Steria

Negligence is more dangerous than ever

Inappropriate employee responses to phishing and social engineering attacks remain the biggest weaknesses in company cybersecurity, according to the specialists and managers surveyed. This is worrying because AI makes it easier than ever for cybercriminals to exploit this vulnerability. One reason is the rise of AI as a service (AIaaS), which can be used, for example, to generate credible phishing emails on a massive scale. Hackers aim to gain access to passwords or account data via employees, and they are becoming increasingly efficient in their approach, as our representative employee survey shows.

41%

of the employees surveyed report receiving more phishing messages.¹

31%

of the employees surveyed state that phishing messages generated by AI can hardly be recognised as such any more.¹

¹Employees; n = 1,003
Source: F.A.Z.-Institut, Sopra Steria

Would you have recognised it? This video shows a synthetic person speaking with a cloned voice. Using AI, both the voice and the appearance of real people can be cloned in a very short time and used for attacks. We invested about 15 minutes and 15 euros in creating this video. Imagine what would be possible with a full day and a larger budget.

(The video was generated with the help of AI tools for facial and speech simulation)

Organisations must act now

Humans, the key vulnerability, are also increasingly using their own AI tools, and this poses a further threat. According to a study by Microsoft, 75 per cent of knowledge workers worldwide use AI at work. 78 per cent of them use their own private applications, thereby bypassing company guidelines on AI usage. This entails significant risks for data security. However, the organisations surveyed are not yet sufficiently aware of this fact: Only a quarter of specialists and managers classify the use of ChatGPT, DeepL & Co. as a significant risk to their own IT security. At the same time, our representative survey of employees revealed that two-thirds of the working population in Germany use AI in a professional context. This is despite the fact that, according to employees, less than half of employers regulate the use of AI tools transparently.

43%

of employees surveyed state that their employer does not regulate the use of AI applications (transparently).¹

¹Employees; n = 1,003
Source: F.A.Z.-Institut, Sopra Steria

26%

of the specialists and managers surveyed see the unregulated use of AI tools such as ChatGPT & Co. as a threat to cybersecurity.²

² Specialists and managers; n = 564
Source: F.A.Z.-Institut, Sopra Steria

Two-thirds

of employees surveyed use AI applications such as ChatGPT, Midjourney or DeepL in their daily work.¹

¹ Employees; n = 1,003
Source: F.A.Z.-Institut, Sopra Steria

Don’t view AI as a problem, but as part of the solution

1.

Develop guidelines for the use of AI applications in everyday working life and sensitize your employees to AI-supported phishing attacks.

2.

Examine the options for using AI to support repetitive and time-consuming tasks so that your employees’ time and energy can be fully invested in more complex tasks.

3.

Use the possibilities that AI offers to personalise test attacks or adapt awareness campaigns to new attack patterns.

Do you want to know how you can protect yourself against AI-based phishing attacks?

Our study report provides the answers.

Combining AI and technical expertise

  • Around half the organisations surveyed want to invest more in cybersecurity.
  • Organisations need to combine technology and expertise to increase cybersecurity.

Cybersecurity needs a strategy

With the growing threat of cybercrime, companies and authorities are also becoming increasingly aware of the need for strategically planned IT security. In around three quarters of the organisations surveyed, cybersecurity is currently seen as a strategic issue that must be considered in every new process that is set up. Nevertheless, concerns about hacker attacks are considerable: Many organisations lack IT security experts and the corresponding expertise. Almost half of the specialists and managers surveyed are convinced that cybercrime has reached a whole new level due to AI and are taking this as an opportunity to invest more in cybersecurity.

72%

of the specialists and managers surveyed say that cybersecurity is a strategic issue that is considered in every new process they set up in their organisation.¹

49%

of the specialists and managers surveyed want to invest more in cybersecurity, as cybercrime has reached a whole new level due to AI.¹

¹ Specialists and managers; n = 564
Source: F.A.Z.-Institut, Sopra Steria

Strength comes from collaboration

When it comes to the fight against AI-based cybercrime, the respondents agree: IT security is a joint task. Cybersecurity experts are rare, so specialists and managers would like to see greater cooperation between authorities, research institutes and companies. However, AI is also seen as an important tool for strengthening an organisation’s own cyber resilience. There is great potential, especially when it comes to identifying attacks at an early stage: Machine learning helps with pattern recognition and can identify anomalies that are often barely visible to humans. AI can also optimise automation tools to contain and push back attacks before hackers can access data. Find out more in our report.
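To make the pattern-recognition idea concrete, the following is a minimal sketch (Python, scikit-learn) of unsupervised anomaly detection on per-session activity features. It is an illustration only: the feature names, the synthetic numbers and the model parameters are assumptions chosen for demonstration, not the method or data behind the study; a real deployment would work on telemetry from your own systems.

# Minimal sketch: flag unusual sessions with an Isolation Forest.
# All features, values and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per user session: bytes transferred, failed logins,
# distinct hosts contacted. Normal behaviour is simulated here; in practice
# these would be derived from your own logs and network telemetry.
normal_sessions = rng.normal(loc=[5_000, 0.2, 3.0],
                             scale=[1_500, 0.5, 1.5],
                             size=(1_000, 3))

# Train only on (presumed) normal traffic so that deviations stand out.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# Two hand-crafted suspicious sessions: an exfiltration-like transfer
# and a brute-force-like login pattern.
suspicious = np.array([[250_000.0, 14.0, 60.0],
                       [800.0, 25.0, 2.0]])

scores = model.decision_function(suspicious)   # lower score = more anomalous
labels = model.predict(suspicious)             # -1 marks an outlier

for features, score, label in zip(suspicious, scores, labels):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: features={features.tolist()}, score={score:.3f}")

In a security operations setting, alerts from a model of this kind would feed the automation tools mentioned above, so that suspicious sessions can be isolated for review before data is accessed.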

54%

of the specialists and managers surveyed see a lack of personnel as an obstacle to greater cybersecurity.¹

34%

of the specialists and managers surveyed would like to invest more in cybersecurity, as AI offers new opportunities for protecting IT systems.¹

¹ Specialists and managers; n = 564
Source: F.A.Z.-Institut, Sopra Steria

The respondents broadly agree with the statement that cybersecurity is a joint task: To protect themselves effectively against cyberattacks, organisations should work together and stop poaching cybersecurity experts from each other.¹

¹ Specialists and managers; n = 564
Source: F.A.Z.-Institut, Sopra Steria

Four steps for facing the new cyber threats with confidence

Constantly minimise risks

Develop a security strategy that ensures that the organisation, processes and technology can adapt to changing risks.

Reduce your incident risk

Always plan preventive, active and reactive elements into your cybersecurity strategy.

Detect attacks at an early stage

Continuous monitoring of IT systems helps to detect threats before damage occurs.

React with power

The earlier you detect attacks, the faster and more effectively you can react.

What levers can be used to master the challenges of the AI age?

Counteracting dangers

  • 93 per cent of respondents expect an increase in cyber damage scenarios over the next twelve months.
  • Organisations can only counteract this by strengthening their cyber resilience.

Interview

“We are in a race”

IT systems are exposed to many threats. A successful hacker attack has massive consequences for the affected organisation and can also have an impact on society. In this interview, Timo Kob, Professor of Cybersecurity and Business Protection at FH Campus Wien, explains where companies and authorities need to take action now and in the future.

“Mr Kob, how has the threat situation in the digital space developed for companies and public administrations in recent years?”

“The general situation has been getting worse for years. There are several reasons for this …”

Prof Timo Kob is a Professor of Cybersecurity and Business Protection at FH Campus Wien and a member of the Management Board of HiSolutions AG.

Would you like to read more?

Recommended reading

Living with disinformation

Disinformation is evolving. AI, education, and accountability are key to combating this systemic threat in the digital age.

DORA and AI Act: Boost your cybersecurity posture

Discover how DORA and the AI Act boost cybersecurity, improving ICT risk management and securing AI in finance.

Striking a balance between innovation and resilience in the banking sector

Balancing product development, security and compliance is a challenge for banks. Erwan Brouder, our Deputy Head of Cybersecurity, France, gives us his analysis.