
UAE: Everything is manipulated; AI social engineering is a real threat, experts warn

Experts sound the alarm on social engineering and urge firms to involve cybersecurity teams when automating tasks or adopting new AI technologies

Published: Wed 25 Jun 2025, 10:38 PM

Cybersecurity professionals should focus on behavioural patterns, as AI-driven social engineering is becoming a real threat, industry executives said at a cybersecurity conference organised by Khaleej Times on Wednesday.

"AI social engineering is real. Anyone who has any form of social media, be it Facebook, TikTok, or even LinkedIn, I'm sure we've all seen a video that looks very real, but something wasn't quite right about it. We dig a bit deeper, look into the comments, and see it's deepfake video content, deepfake audio, impersonating something that's deemed to be real. That is tremendously scary," Raj Sandhu, Regional Lead MEA and Principal Solutions Architect at SecurityHQ, said at the FutureSec 2025 conference.

"I speak to government entities, enterprise customers, almost every single day, and the one thing CISOs tell me is they're getting pressure from their board to deal with and counter AI social engineering. Now, traditional email content filtering is not going to cut it these days. We need to focus on behaviour — something that my team and I are focusing on.

"It is thinking about behaviours – how are users interacting with devices, and how are entities interacting with devices? What is the attack pattern? Because everything is being manipulated, it's hard to see what is real and what is just strange behaviour. So it's important to have that declassification," he said during the conference.


Involving cybersecurity teams early on

A large number of public and private sector professionals and officials attended the one-day conference in Dubai.

Dr Tim Nedyalkov, a global cybersecurity and AI security expert, called for the early involvement of cybersecurity team members when automating tasks or incorporating new artificial intelligence (AI) technology, to make those processes faster and better.

"When I'm engaged with executive teams, one of my first questions is: how often do you talk to your cybersecurity teams? Normally they say they invite them to board meetings, maybe once every quarter. On average, they spend between four to six hours per year in direct contact with their cybersecurity teams. Why don't you just double or triple the time that you spend with your security teams?" said Nedyalkov.

"Cybersecurity is often on the tail end of the equation, which is not ideal. There are many situations where cybersecurity can be involved much earlier in the journey, whether it is in work, automation, or when incorporating the latest and greatest piece of AI technology. When cybersecurity is involved early in the journey, things will always move a lot faster and better," he said during the FutureSec 2025 conference.

Dr Nedyalkov said that every single organisation wants to do more with AI, and to do it faster, but one of the biggest challenges is the lack of governance, or of the foundations needed to do things safely and securely with AI.

"I've seen organisations deploying models, for example, with hiring. In some situations, the models cannot explain their decisions. I've seen software engineers deploying models trained on data that cannot be traced back, and it makes a big difference," he added.