AI and cybersecurity: Reputation in the age of deepfakes

The fun side of AI comes with hidden dangers, say social media experts
- PUBLISHED: Sun 31 Aug 2025, 9:00 AM
- By: Sana Eqbal
Have you uploaded your photo to an AI app to turn yourself into a cartoon avatar? What may seem like harmless fun has now sparked serious warnings from authorities in Abu Dhabi. The Department of Government Enablement cautioned residents that these apps could expose biometric data: facial features that, unlike passwords, cannot be changed once compromised. By uploading just one image, users may be unknowingly training AI systems to recognise their faces, opening the door to identity theft, fraud, and even the creation of convincing deepfakes.
Privacy risks in avatar trends
Earlier this year, trends like the Studio Ghibli-inspired avatars surged across social media, followed by the Barbie Box challenge that allowed users to design custom boxed versions of themselves. While engagement skyrocketed, experts pointed out that such trends often mask deeper risks. Nicolai Solling, CTO of Help AG, explained that many avatars capture far more than a stylised image. “Uploaded photos may carry metadata such as device details and location, inadvertently revealing where you are,” he warned.
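The metadata Solling describes lives in discrete, removable segments of an image file. As a minimal illustration (not a substitute for a proper imaging library such as Pillow), the sketch below scans a JPEG byte stream and strips the APP1 segment where EXIF data, including camera details and GPS coordinates, is typically stored; the function name and structure are illustrative, not from any cited tool.

```python
def strip_exif_segments(jpeg_bytes: bytes) -> bytes:
    """Remove APP1 (EXIF) segments from a JPEG byte stream.

    A simplified sketch for illustration only: EXIF metadata sits in
    its own marker segment, so deleting that segment removes device
    and location details without touching the image data itself.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        marker = jpeg_bytes[i:i + 2]
        if marker[0] != 0xFF or marker == b"\xff\xda":
            # Start of Scan (or raw image data): copy the rest verbatim
            out += jpeg_bytes[i:]
            break
        # Segment length field covers itself plus the payload
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        # APP1 segments beginning with "Exif" carry camera/GPS metadata;
        # keep every other segment unchanged
        if not (marker == b"\xff\xe1" and segment[4:8] == b"Exif"):
            out += segment
        i += 2 + length
    return bytes(out)
```

In practice, real photos should be cleaned with a maintained library or the "remove location data" option most phones offer when sharing; the point here is simply that the metadata is separable from the picture and need not travel with it.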
Social media experts agree that these AI trends thrive because they tap into creativity and self-expression. But the very data that fuels them can also be exploited by malicious actors or absorbed by platforms to train AI systems without clear consent. Unless users are paying for premium services, their photos may be repurposed to strengthen algorithms, raising questions of data ownership and control.
Interestingly, even AI tools themselves are beginning to set limits. During the Barbie Box challenge, some users noticed that ChatGPT refused to generate hyper-realistic action figures based on their real photos, citing content policy restrictions. While it allowed cartoon-inspired versions, the refusal highlighted an important point: AI platforms are actively shaping what is permissible, blurring the lines between fun, ethics, and safety. This form of digital self-regulation reflects growing awareness about how realistic likenesses can be misused. But as one UAE resident pointed out, the difference between "inspired by" and "modelled exactly" may feel trivial to the user, especially if their photo is still uploaded and processed in the background.
The cultural shift of AI-generated content
For many observers, these viral challenges are not temporary fads but part of a broader cultural transformation. Sam Proctor, head of technology at Asset Integrity Engineering, sees them as “early signs of a much bigger pattern of consumerisation of AI.” As tools become more advanced and accessible, people are likely to continue experimenting with identity and creativity in ways that merge play, nostalgia, and technology.
Yet this shift has sparked backlash. Some marketing professionals argue that AI-generated content dilutes originality, threatening creative industries. Tayiba Ahmed, head of audience engagement at Street FZC, believes AI is best used as a supportive tool for brainstorming rather than replacing human creativity altogether. “It’s great for generating references you might not find online,” she said, “but it lacks the genuine hard work behind artistic expression.”
Trust in digital systems
The deeper issue is not just about avatars or cartoon filters, but about trust. Who owns the data we share with AI apps? What happens to biometric information once it is uploaded? Christoph, a UAE-based privacy expert, warned that such data may be stored, reused, or even sold. Beyond faces, AI can infer age, gender, mood, ethnicity, and location. “Unless users request deletion, images could circulate behind the scenes indefinitely,” he cautioned. Proctor added that while large, reputable players generally handle data securely, bad actors can exploit AI trends to lure vulnerable users. For instance, a fake “fun filter” app could be designed specifically to harvest faces for scams, phishing, or identity fraud. With AI making fake calls and deepfake videos more convincing, the risks are multiplying.
Protecting authenticity in a deepfake era
For UAE brands and institutions, these risks go beyond personal privacy. Reputation is now at stake in a digital environment where fake voices, images, and even entire campaigns can be fabricated within minutes. Maintaining authenticity has become a strategic priority.
Authorities recommend three practical steps for individuals: delete unused apps and images uploaded to AI platforms, restrict app permissions to block unnecessary access, and spread awareness among friends and family. For businesses, the lesson is similar — invest in monitoring tools, verify sources, and adopt strict governance policies around digital content.
A future of cautious optimism
Despite these risks, experts remain cautiously optimistic. AI-generated avatars, when used responsibly, can even enhance privacy by masking real faces in public spaces. The key is awareness and control. As Proctor noted, “We trust email, cloud services, and countless digital tools every day. The same must apply to AI, but only with clear boundaries.”
For brands in the UAE, the challenge will be navigating this cultural shift while preserving trust. In the age of deepfakes, reputation is as fragile as a single click, and authenticity is the most valuable currency.