Mon, Jan 19, 2026

As the Middle East progresses in its goal to become a global AI powerhouse, agentic AI is poised to transform organisations across the region. Gartner predicts that by 2028, 33% of enterprise software applications will include embedded agentic AI, enabling 15% of day-to-day work decisions to be made autonomously by AI agents.
Like other technological advances before it, agentic AI arrives with its own cyber risks. Security teams must collaborate with technology and IT leaders to manage this new, intelligent, and autonomous “workforce”, ensuring security is baked into its deployment and operations.
The Gulf’s AI ambitions
Agentic AI is characterised by its ability to act autonomously, and customer support chatbots, despite their flaws, were perhaps the earliest examples of AI agents. Yet interacting with support chatbots often feels more frustrating than helpful. That should serve as a cautionary lesson for organisations considering agentic AI: instead of rushed decisions driven by technology FOMO, deployments should be carefully planned and deliberate.
The rise of AI agents across the Middle East is inevitable. As algorithms advance, they will increasingly automate processes and workflows in sectors demanding efficiency, scalability, and data-driven decision-making. In the UAE, federal AI integration under the 2031 development framework, coupled with sovereign AI infrastructure partnerships, is paving the way for agent-driven operations in public services and regulated industries. Similarly, Saudi Arabia is accelerating its AI agenda by establishing Humain as a national AI company and committing $1.5 billion to develop domestic platforms, cloud infrastructure, and industry-specific applications.
Governments across the region are betting heavily on AI transformation. Functions such as incident response, network optimisation, data analysis, software development, and supply chain management stand to benefit from agentic AI’s analytical, organisational, and predictive capabilities. In critical sectors like healthcare and financial services, both bound by strict data residency requirements, agentic AI promises to revolutionise diagnostics, treatment planning, and risk management.
The transformative potential is significant, but large-scale adoption won’t happen without disruption. AI agents will introduce new responsibilities for technology and security leaders, and change organisations’ digital estates, which is often a catalyst for new cyber risks.
New responsibilities in the age of sovereignty
CIOs, CTOs, and CISOs in the Middle East already face unique challenges around data sovereignty and critical infrastructure protection, but the spread of AI agents will fundamentally alter their roles. Before handing over critical tasks to autonomous systems, organisations must build trust and confidence in their behaviour and reliability. Traditionally responsible for managing IT systems and implementing new strategies, CIOs and CTOs will now deploy, monitor, and measure the reliability and efficiency of this new artificial workforce. Similarly, security teams will no longer be responsible solely for securing human users and traditional infrastructure, but also for autonomous AI agents and the new environments they operate in.
To achieve this, security leaders need full visibility over AI agent deployments to prevent “shadow AI” from emerging, and involving them from the earliest stages is the best way to ensure security is inherent to operations. This is particularly critical in the Gulf, where the Saudi Data & AI Authority (SDAIA) requires case-by-case approval for data transfers and the UAE’s Personal Data Protection Law enforces strict data localisation for sensitive sectors including banking, healthcare, and government services. That visibility includes auditing any vendor building AI agents or integrating agentic AI capabilities, and ensuring transparency and high security standards in the way data is accessed and used.
The UAE’s law also includes requirements to build secure environments for AI agents to operate in, and to take preventative measures to ensure algorithms are not tampered with, whether through data poisoning, cutting off access to the data they need to operate, or other techniques. With Saudi Arabia experiencing 270,179 DDoS attack attempts during the first half of 2025, robust protection frameworks are essential.

Just as they would with new employees, security teams need to define access policies for each new AI agent to avoid over-permissioning. A compromised agent with excessive privileges could be exploited to gain access to and move freely within an organisation’s systems, disrupt other AI agents, and exfiltrate sensitive data. The AI-human security parallel extends to monitoring behaviour: security teams must build visibility into AI agents’ actions and be in a position to detect suspicious activity that might indicate compromise.
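To make the over-permissioning point concrete, the sketch below shows what a default-deny, per-agent access policy might look like in practice. All names here (`AgentPolicy`, `support-bot-01`, the action strings) are hypothetical illustrations, not any vendor’s API; real deployments would enforce this in an identity or policy layer rather than application code.

```python
# Minimal sketch of per-agent least-privilege checks (all names hypothetical).
# Each agent gets an explicit allow-list of actions; anything not listed is
# denied by default, so a compromised agent cannot quietly escalate.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentPolicy:
    agent_id: str
    allowed_actions: frozenset  # e.g. {"read:tickets", "write:drafts"}

    def permits(self, action: str) -> bool:
        # Default-deny: an action is allowed only if explicitly granted.
        return action in self.allowed_actions


# A support agent scoped to reading tickets and drafting replies only.
support_bot = AgentPolicy(
    agent_id="support-bot-01",
    allowed_actions=frozenset({"read:tickets", "write:drafts"}),
)


def dispatch(policy: AgentPolicy, action: str) -> str:
    if not policy.permits(action):
        # Block and record the attempt instead of silently allowing it.
        return f"DENIED {policy.agent_id}: {action}"
    return f"OK {policy.agent_id}: {action}"
```

The design choice mirrors how new-joiner accounts are provisioned: grants are enumerated per role, and anything outside the list, such as a `delete:records` request, is refused and logged for the monitoring the paragraph above describes.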
We are only scratching the surface, but it is easy to see that securing AI agents will be a multi-pronged affair. Rigorous access controls, continuous behavioural monitoring, strong encryption for the data agents consume and process, and stringent input and output validation to prevent adversarial attacks are all capabilities organisations must build. Organisations should also run regular security audits and penetration tests targeting both AI agents and their integrations, to identify and address vulnerabilities before they can be exploited.
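Input and output validation, mentioned above, can be pictured as two simple gates around every agent interaction. The patterns below are deliberately crude illustrations (real systems use far richer detection): one screens incoming prompts for obvious injection phrasing, the other screens outgoing text for credential-like material before it leaves the organisation.

```python
# Hedged sketch of input/output validation gates around an agent call.
# Patterns and function names are illustrative, not a product API; real
# deployments layer many more signals than two regular expressions.
import re

# Inbound gate: crude marker for prompt-injection attempts.
INJECTION_MARKERS = re.compile(r"ignore (all|previous) instructions", re.IGNORECASE)

# Outbound gate: crude markers for leaked secrets (AWS-style key IDs, PEM keys).
SECRET_PATTERN = re.compile(
    r"\b(?:AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----)"
)


def validate_input(prompt: str) -> bool:
    """Reject prompts carrying obvious injection phrasing."""
    return not INJECTION_MARKERS.search(prompt)


def validate_output(text: str) -> bool:
    """Block responses that appear to contain credential material."""
    return not SECRET_PATTERN.search(text)
```

The point is architectural rather than the specific patterns: every message an agent receives or emits passes a checkpoint the security team controls, which is also where the audit logging and monitoring described earlier would hook in.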
In the UAE, where the National Cybersecurity Strategy establishes comprehensive frameworks for protecting critical infrastructure, and in Saudi Arabia, where the National Cybersecurity Authority applies Essential Cybersecurity Controls across government and critical sectors, organisations must ensure their AI agent deployments align with these evolving national frameworks.
Securing AI agents will not be a walk in the park, and I want to reiterate how critical it is to involve security from the outset of agentic AI projects. With the right understanding of an AI agent’s mission and inner workings, security teams can adjust access parameters without weakening protections, enabling secure AI. As the wider GCC continues to harmonise cybersecurity frameworks through initiatives like Bahrain’s GCC AI Ethics programme, regional collaboration on AI security standards will become increasingly important for organisations operating across multiple Gulf markets.
The writer is VP Middle East, Turkey and Africa, Netskope.