Personal data protection needs a level playing field

People often fall prey to the dark pattern design of apps and websites

By Shalini Verma

Published: Mon 29 Nov 2021, 11:52 PM

You and I could be sitting in the same room, scrolling through social media on our respective phones, and yet feeding off radically different world views. Same incident, two very different perspectives. Both could be only partially true; both could be completely dystopian. We all know that there is a clever algorithm at play. But what really fuels this algorithmic control is our personal data. The digital breadcrumbs we leave behind on websites and apps are not just our birthday, gender, and location, but reams of information that could be spun into a full-blown novel.

It’s not entirely our fault. We often fall prey to the dark pattern design of apps and websites. Such design manipulates users into clicking on links they never intended to. The premise is that users move swiftly through the information deluge, often stumbling upon unwanted apps and links. Dark pattern design is used to surreptitiously collect personal data. The very foundation of consent is broken.

The Internet increasingly feels like a dangerous place, reminiscent of the ancient trade routes on which caravans were robbed by bandits. At least in those days, traders banded together to avoid getting robbed. Now a larger repository of personal data is more likely to face hacker attacks. Beyond the deception, there is the risk of accidental clicks.

I could have blindly opted in, knowing full well that data sharing may be a point of no return. The correlations that can be drawn from our personal data are unimaginable. My neighbour’s ability to secure a home loan may be affected by my financial statement.

At a fundamental level, personal data is any information that can be used to identify a living person. Isolated information such as my name is usually not considered personal data, because a name alone cannot be used to identify me. But if my name were combined with my home address, it would become personal data.

The EU’s General Data Protection Regulation (GDPR) uses the principles of storage limitation and data minimisation to drive personal data stewardship by organisations. Overnight, users became aware of the Internet cookies they were inviting from each website they visited. The idea was to let users make an informed decision.

While we do need personal data to power the digital economy, growing privacy concerns have prompted the industry and governments to reevaluate personal data management. One such suggestion came from Sir Tim Berners-Lee, who proposed data trusts. The European Union subsequently floated the idea of a pan-European data trust, which would perform its fiduciary duties of sharing personal data on behalf of citizens. This admittedly places inordinate power in the hands of a few, which feels a lot like what happened with Big Tech. Regardless, there is a need to cleanly delineate data storage from usage for a more level playing field. This will need a mandate that disaggregates data processors from data collectors. Today they are one and the same.

In the meantime, no organisation can be fully trusted with our personal data, even if the data is anonymised. Data is typically concealed by removing all personally identifiable information from the dataset. Yet attackers and researchers can ‘de-anonymise’ the data to retrieve sensitive user information, as was the case with a patient dataset. It is done by merely cross-referencing the data with information from other public data sources, such as voter databases and data from previous breaches. This is not rocket science.
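The cross-referencing attack can be sketched in a few lines of code. This is a minimal illustration, not a real attack: all records, names, and attributes below are invented, and the join is done on the kind of quasi-identifiers (postcode, birth year, gender) that typically survive anonymisation.

```python
# "Anonymised" health records: names removed, quasi-identifiers kept.
health_records = [
    {"postcode": "12345", "birth_year": 1980, "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "67890", "birth_year": 1975, "gender": "M", "diagnosis": "asthma"},
]

# Public auxiliary data, e.g. a voter roll, which does carry names.
voter_roll = [
    {"name": "A. Example", "postcode": "12345", "birth_year": 1980, "gender": "F"},
    {"name": "B. Sample", "postcode": "67890", "birth_year": 1975, "gender": "M"},
]

def deanonymise(records, auxiliary):
    """Link each anonymised record to auxiliary entries that share
    the same quasi-identifiers, re-attaching a name to the diagnosis."""
    matches = []
    for rec in records:
        for aux in auxiliary:
            if all(rec[k] == aux[k] for k in ("postcode", "birth_year", "gender")):
                matches.append({"name": aux["name"], "diagnosis": rec["diagnosis"]})
    return matches

print(deanonymise(health_records, voter_roll))
```

Nothing here requires insider access: only the supposedly harmless attributes left in the dataset, plus any public list that shares them.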

It gets easier when the data is sparse, that is, when personalisation gives each record a rich set of attributes that few other records share. My viewing history and preferences on Amazon Prime, with their large variety of attributes, would be statistically rare. No matter how well my personal data is concealed, it would be easy to identify me in the data universe if my details were correlated with another dataset. Researchers have demonstrated that fewer than 15 attributes can suffice to deanonymise a dataset. In addition to the financial and security hazards, our personal data can be used to easily manipulate us on social media.

The footprint of data privacy laws has rapidly grown to restrict cross-border data transfers and empower users to request data deletion and withdraw consent. In the UAE, organisations are mandated to ensure that their employees receive adequate security awareness training on the types of data that can or cannot be shared, and with whom. Stored data could be generalised into group patterns, which makes it tougher to identify individuals in that group. It is hard to identify a flamingo in a water park full of flamingoes with shared traits. If nothing else works, we can take control of our shared data by poisoning it with harmless inputs that confuse Big Tech. No harm in confounding those who misuse it. That would certainly help level the playing field.
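The flamingo idea, generalising records until each one blends into a group, can be made concrete. Privacy researchers measure this as k-anonymity: every record should be indistinguishable from at least k-1 others on its quasi-identifiers. The sketch below uses invented records in which the postcode has been truncated and the birth year coarsened to a decade.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size over all quasi-identifier
    combinations: every record then hides among at least k-1 others."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Generalised records: postcode truncated, birth year rounded to a decade,
# so each "flamingo" shares its visible traits with others in its group.
records = [
    {"postcode": "12*", "birth_decade": 1980, "gender": "F"},
    {"postcode": "12*", "birth_decade": 1980, "gender": "F"},
    {"postcode": "67*", "birth_decade": 1970, "gender": "M"},
    {"postcode": "67*", "birth_decade": 1970, "gender": "M"},
]

print(k_anonymity(records, ("postcode", "birth_decade", "gender")))
```

Here every record shares its traits with one other, so k is 2; the coarser the generalisation, the larger the flock each individual disappears into.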

Shalini Verma is CEO of PIVOT technologies, a Dubai-based cognitive innovation company. She tweets @shaliniverma1.
