Is Big Brother watching Europeans or will he soon do so? The European Union could preempt that possibility by banning some uses of artificial intelligence (AI).
The bloc is set to formally release draft regulations governing AI for discussion this week, including a ban on the use of AI in mass surveillance and in building social credit scores.
The proposed rules would also reportedly require special authorization for the use of facial recognition technology in public spaces. Oversight would likewise be mandated for “high-risk” AI systems: those that pose a potential threat to safety, such as self-driving cars, and those that can affect personal wellbeing through use in job hiring, judicial decisions and loan credit.
The initiative would also create a “European Artificial Intelligence Board” with representatives from every member country to help the European Commission decide which AI systems are “high-risk” and recommend changes to prohibitions.
It is the latest round in the EU’s determination to mark out its own “digital sovereignty” following moves to rein in Big Tech and sweeping regulations to protect user privacy. But is it part of a virtual bureaucracy that is now growing to match the real-life over-regulation for which the EU is renowned? Or is it instead a much-needed oversight ecosystem that can help protect us from a dystopian future?
Some claim the EU is creating a morass of red tape that only Big Tech can afford to unravel — and therefore stifling to competition — yet others say it could be setting the gold standard for other regions to follow in keeping a necessary eye on AI. Concerns are certainly growing about misuse, particularly in disinformation, influencing public opinion and manipulation of perceptions.
In his visionary book 1984, written after the propaganda battles of WWII, George Orwell warned us of those possibilities. But perhaps he had the date wrong. It was in 2016 that the Western world experienced some real, carefully crafted manipulation of mass thinking, in what came to be known as the Cambridge Analytica scandal.
The British firm of that name used AI in the form of advanced algorithms and machine learning to target and influence voters through social media, including during Donald Trump’s winning presidential campaign. The system was able to learn which presentations could most influence voters — down to fine-grained details of which colours and graphics were most effective — then rolled out campaigns aimed at the susceptible voters the AI had identified.
On a more mundane level, a fact of today’s online life is that machine learning and AI follow us around the internet, serving up more of what we have already seen because the algorithms infer we like similar material.
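The logic behind that familiar experience can be sketched in a few lines. The following is a minimal, hypothetical illustration — not any real platform’s system — of content-based filtering: build a word profile from what a user has read, then surface the candidate item most similar to it.

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(history: list[str], candidates: list[str]) -> str:
    """Pick the candidate item most similar to what the user read before."""
    profile = Counter(w for doc in history for w in doc.lower().split())
    return max(candidates,
               key=lambda c: cosine_similarity(profile, Counter(c.lower().split())))

history = ["holiday beach hotel deals", "cheap beach flights"]
candidates = ["beach resort offers", "quantum computing news"]
print(recommend(history, candidates))  # prints "beach resort offers"
```

Production recommenders use learned embeddings and behavioural signals rather than raw word counts, but the effect is the same: similar material keeps coming back.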
Of course AI is not always annoying or used for dark purposes. Many developers see it as a force for good. One example is a Canadian company whose use of Big Data and artificial intelligence enabled it to send out a warning in December 2019 about a virus spreading in Wuhan, China, before either the World Health Organisation or the Centers for Disease Control in the US could issue warnings.
That company, BlueDot, uses AI-driven algorithms that scour foreign-language news reports, animal and plant disease networks, and official announcements to give its clients advance warning. The firm’s founder, Kamran Khan, says it “can pick up news of possible outbreaks — little murmurs on forums or blogs, indications of some kind of unusual events going on”.
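The core idea of such scanning can be illustrated crudely. This is a hypothetical sketch, not BlueDot’s actual system (which combines natural-language processing, flight data and epidemiological models): score incoming headlines against a watch-list of outbreak-related terms and flag the ones that accumulate enough signal.

```python
# Hypothetical watch-list; a real system would use multilingual NLP, not keywords.
OUTBREAK_TERMS = {"pneumonia", "outbreak", "cluster", "unexplained", "quarantine"}

def flag_reports(headlines: list[str], threshold: int = 2) -> list[str]:
    """Return headlines mentioning at least `threshold` watch-list terms,
    most signal-heavy first."""
    flagged = []
    for h in headlines:
        words = set(h.lower().replace(",", "").split())
        score = len(words & OUTBREAK_TERMS)
        if score >= threshold:
            flagged.append((score, h))
    return [h for score, h in sorted(flagged, reverse=True)]

reports = [
    "Stock markets rally on trade news",
    "Cluster of unexplained pneumonia cases reported in Wuhan",
    "Local hospital expands quarantine ward after outbreak",
]
for headline in flag_reports(reports):
    print(headline)
```

The “little murmurs” Khan describes are exactly these weak signals: individually unremarkable mentions that only become meaningful when aggregated across many sources.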
In addition to worries over disinformation and manipulation of public opinion, the other major concerns about AI probably relate to its use in facial recognition technology and surveillance of the general public. Some 51 digital rights organisations have signed an open letter to the European Commission calling for a complete ban on the use of facial recognition technologies for mass surveillance — without any exceptions.
According to the European Digital Rights (EDRi) coalition, there are no circumstances in which the benefits of facial recognition in mass surveillance would outweigh the harm caused to individual rights, such as those to privacy, data protection, non-discrimination or free expression.
The EDRi says it has found recent examples of biometric technologies used in mass surveillance across the majority of EU countries, including facial recognition for queue management at airports in Rome and Brussels, and use by German authorities to surveil G20 protesters in Hamburg. More than 40,000 European citizens have now signed a petition called “Reclaim Your Face” to support a ban on biometric surveillance.
Researchers who analysed more than 200 million tweets discussing Covid-19 found that about 45 percent were sent by accounts on Twitter that behave more like computerised robots than humans.
And while it may not get all of it right, the EU is certainly correct in putting the issue high on the global agenda. Can we actually control this massive virtual organism? Most agree that we can always use a little more intelligence. Just don’t make it too artificial.
Jon Van Housen and Mariella Radaelli are journalists based in Milan