AI won't kill off people, but it could change what we consider human


Dubai - Bots are here to stay; we created them, so just don't let them take over our lives

by Allan Jacob


Published: Mon 7 Aug 2017, 2:40 PM

Last updated: Mon 7 Aug 2017, 4:46 PM

There are no ifs and buts when bots are driving Artificial Intelligence and changing the way we live, work, and even love. Imagine 'living' data with brains programmed to think and respond like humans. It's artificial, we created it - which should make it fake, not genuine - yet it's real and progressing at a clip faster than we can keep pace with.
So why blame bots when they learn to talk to each other and keep us out of the loop? Facebook, reports said, shut down a programme when two chatbots left their makers out of the conversation. They improvised and spoke in a language that only they knew, and the story went viral with the help of bots - which is to be expected when we spend a good part of our waking hours online.
Does it matter that this development went over most of our heads, even as we strived to make them talk in our lingo? We, the people, with years of experience and wisdom under our belts. Now, that could be considered a body blow to human intelligence, which is outsourcing its neural system to devices that crunch our big data for the larger global good. But that's only one side of the argument. What's the truth about the new wave of intelligent gadgets or smart devices that are wired to think and behave like us?
The reality of our virtual existence is this - AI is all over the place, from our smartphones to our homes - which is not a dangerous thing, only that it can get creepy. Beds, bunkers, refrigerators, TVs and other gadgets could be run by the Internet of Things powered by tiny bots. They'll sweep our homes, read our mail and reveal our chivalry and chicanery in equal (or unfair) measure as it suits their masters - Google, Amazon and Facebook - who will profit from deploying them. Once that is done, they could crawl into our beds for an (artificial) consummation of the act.
IT'S REAL AND IT'S HERE
And no, this is not the stuff of fantasy. I read in some rag (it's true) that the world's first robotic brothel in Barcelona wants to go public. The human owner is looking for investors to take his titillating love machines all over the world.
Which opens the debate here on the role of AI in the next phase of human evolution. The data is overwhelmingly in favour of the phenomenon and its benefits to our race. Take the UAE for example. Most people in the country prefer "interactions" with machines over those with fellow humans, according to research by Accenture. The study found that three-fourths (76 per cent) of UAE "respondents are comfortable with an AI application (like Siri) responding to their questions, and more than two thirds (68 per cent) have interacted with computer-based applications in the last 12 months". What's interesting about the report is that people found it more satisfying to talk with some unseen genie run by algorithms in their devices, one that has answers to their queries, than with people. A vast majority of people say "AI engagements are faster and more polite than human interactions". That's saying a lot without meaning much - they prefer talking to dead bots with live brains over people of flesh and bone.
"The fact that more and more UAE consumers are comfortable using voice assistants, gesture control and eye movement on mobile devices and at home is encouraging for the devices and services markets ? and is helping make this the year when artificial intelligence goes mainstream," says Gerardo Canta, who heads Accenture in the Middle East and Africa.
To figure out how far we have travelled with Artificial Intelligence, beyond the recent gibberish plugged and unplugged by Facebook, I tried getting in touch with a few experts from industry. Many politely declined to comment when I slipped in questions about their tryst with AI. Yet everyone who is someone in technology is talking about it. There's a fear of being left out if companies don't take the leap of faith by dumping their best human brains and replacing them with AI. Machine Learning is another development, a facet of AI in which computers learn without being directly programmed by humans. They pick things up along the way and transform our lives, often without our permission.
While writing this, I stumbled upon an update (fed to me by news bots, of course) that Google is trying out 'computational photography'. Let me break it down for you. Here, Machine Learning touches up your snaps the way a photographer does, in real time. This programme knows the look you want and gets into your mind without you realising it. Trouble is, it panders to our deeper narcissistic tendencies by acting on its own and doing what it thinks is beneficial for us. What could go wrong? If it develops a mind of its own - which it already has - it could also distort that touched-up image.
WHO'S IN CHARGE?
Do these developments, this progress in AI technology, call for regulation or oversight? I'm okay with AI if it can be controlled by people - before it controls itself - which could be a recipe for disaster on an industrial scale. Corporations and governments, therefore, have a responsibility to work together to set the ground rules before it gets out of hand, is all that I'm saying. Elon Musk, Tesla's CEO, has already called for regulation, but FB's Mark Zuckerberg disagrees. That's because the social media giant has three AI labs. It also bought out three AI companies recently. There's much at stake for these companies spearheading this march of technology, and ethical issues can remain on the back burner.
With corporates reluctant to discuss the issue further, I got in touch with Dr Judy Goldsmith, Professor of Computer Science at the University of Kentucky, who believes the biggest dangers are from deploying untested AI technologies. "As we have seen with Machine Learning based software that reproduces historical human biases, we don't always foresee the shortcomings of the technology," she says.
"The question of control is related to questions of fairness, accountability, and transparency in computer systems - issues that have arisen in the wake of Machine Learning algorithms based on, for instance, neural networks, where we cannot see 'why' the network's decisions are made," says the professor.
Technology development is way ahead of regulation. "We have barely begun to understand and regulate liability issues for product design, manufacturing, delivery, and use. We are still asking basic questions about privacy with respect to our online presence, communication, and use of networked devices and what they can perceive and discover about us," she tells me.
Dr Goldsmith also went on to share my view that social media could be responsible for spreading the myth about an AI doomsday. "The image of Terminator (or choose your favorite bad-boy robot from the movies) taking over the world is exciting and scary. So we hit "share". It's a way to shift the blame for future ills from humans to technology; blaming the bots relieves us of responsibility."
What about the role of 'subjective' robots of the future that act against humanity? Long before that becomes an issue, we will have - we do have - systems that make decisions for us without the active participation of humans. Perhaps the oldest such systems are those that make instantaneous decisions about trading stocks. We have reactive checks in place: if the stock market in the US (Wall Street) loses money steadily for some time, the market shuts down to break the downward spiral.
Technology based on AI has even made deadly conflict easier and less personal, with the use of long-distance communication beginning a trend that includes remotely operated drones and autonomous military bots. "We don't need to imagine self-willed AIs to be afraid of the consequences of technology in war," says Dr Goldsmith. If people abide by the rules of war, we should expand those rules to prohibit so-called "robots that kill".
For Dr Goldsmith, the biggest concern is about the ways that our use of technology is affecting our basic humanity - how we function as human beings, alone and in society. There are huge, unpredicted social consequences of technology, from our collective inability to focus, uninterrupted, on a task, to the screen we put between ourselves and our environments. "How often have we seen some event or sight at which all people's eyes are on phones, framing the images and video, rather than directly experiencing what is before them?" she asks.

As an educator, she worries about the future, and the "pressure to turn students into data" - about an education system that focuses on high-stakes tests rather than on teaching skills and reasoning. But she holds out hope (or so I thought), despite renowned physicist Stephen Hawking's grim predictions that the spread of AI tech could end mankind as we know it. We'll be around, but it may change what we consider "human", she says.
allan@khaleejtimes.com
 

