Thursday, 7 April 2016

Part 2: Wither the Human Brain?

Part 1 introduced artificial intelligence, or cognitive computing: the characteristics of the technology, and how its increasing use goes hand in hand with another trend, the generation and consumption of vast amounts of data.

Big Data and Cognitive Computing
Floods, deluges, vast oceans of data pour in remorselessly from the computer systems that hum behind the scenes of modern life: everything from banking to travel to any form of commerce; from all of us texting away on our smartphones to the embarrassing excesses of email flooding every inbox with spam; from videos and photographs of special meals and holidays posted on social media to devices themselves signalling away around the clock. Your car, your smartphone, airplanes in the air: any connected device is generating data all the time.

(Technology geeks call these Big Data and the Internet of Things, among other things. Analysts talk about ‘billions’ of connected devices, and by some estimates the number of connected devices exceeded the human population sometime in the first decade of the 21st century. IBM says that some 40% of all data generated by 2020 will come from devices and machines.)


And what does this have to do with cognitive machines? A lot, actually: humans are great at exercising judgement, having empathy, thinking lofty – or squalid – thoughts, but our capacity to process data is limited. The human brain can hold vast amounts of the stuff, but how much can you absorb on an ongoing basis, hour by hour, day by day, moment by moment, and still make sense of it all?

You and I, as humans, can’t, but machines can: they can ingest virtually unlimited amounts of data. A cognitive machine would not just ingest the data through the brute force of computational ability; crucially, it would also understand it, a dimension that is missing in ‘normal’ computers.

Learning Machines
As an intelligent computer begins to understand the vast amounts of data, it also begins to get better at discerning what the data is about – in other words, it learns. Learning means that, over time, just as a human does, it gets better and faster at its task: it can diagnose, discern and come to conclusions more quickly than before, and may be said to become ‘smarter’.
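
To make ‘learning’ concrete, here is a minimal sketch in Python: a simple classifier that is fed examples in batches and, as a rule, scores better on a held-out test set the more it has seen. The toy dataset and model are illustrative stand-ins, not how any real cognitive system works.

    # A toy classifier that improves as it sees more data.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = SGDClassifier(random_state=0)
    seen = 0

    # Feed the model in small batches; accuracy tends to climb as data accumulates.
    for start in range(0, len(X_train), 200):
        batch_X = X_train[start:start + 200]
        batch_y = y_train[start:start + 200]
        model.partial_fit(batch_X, batch_y, classes=list(range(10)))
        seen += len(batch_X)
        print(f"after {seen:4d} examples: accuracy = {model.score(X_test, y_test):.2f}")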

This is where the conversation is supposed to turn to it being only a matter of time before computers start replacing humans. But there’s another way of looking at how these cognitive computers are used.
Because of their vast data-processing capability and their ability to comprehend that data, cognitive systems can become assistants, like souped-up research assistants, bringing only the relevant facts to the human.

Machines at Work
Case studies abound. The North Face’s experimental XPS website uses cognitive technology to understand where the adventurer plans to go, when, and what he or she plans to do (hiking, skiing, etc), and recommends clothing based on the time of year, the activity and the weather conditions of that particular place (temperature, wind chill), which it pulls from a live weather database.
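
To give a flavour of that flow, here is a hypothetical, greatly simplified sketch in Python. The rules, thresholds and the get_forecast() stub are all invented for illustration; the real XPS leans on Watson’s natural-language understanding, not hand-written rules.

    # Toy recommender: combine the trip's activity with (stubbed) live weather.
    def get_forecast(place):
        # Stand-in for a call to a live weather service.
        return {"temp_c": -2, "wind_kph": 30}

    def recommend(activity, place):
        weather = get_forecast(place)
        picks = []
        if weather["temp_c"] < 5:
            picks.append("insulated jacket")
        if weather["wind_kph"] > 25:
            picks.append("windproof shell")
        if activity == "hiking":
            picks.append("trail boots")
        elif activity == "skiing":
            picks.append("ski pants")
        return picks

    print(recommend("skiing", "Whistler"))
    # ['insulated jacket', 'windproof shell', 'ski pants']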

Under Armour uses IBM’s Watson to analyse the health habits of those who use its wearable product, and to make recommendations based on data pulled from people with similar profiles. There’s even a cute dinosaur toy, CogniToys’ Dino, which uses speech recognition to converse with children, and learns their likes and dislikes, tempering its responses appropriately, so that, in effect, each toy becomes customized to the character of the child it interacts with.
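
The ‘people with similar profiles’ idea can be sketched in a few lines too. The profile fields, the numbers and the nearest-neighbours shortcut below are invented for illustration and bear no relation to how Under Armour’s system is actually built.

    # Toy "people like you" recommender using nearest neighbours.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # Columns: age, resting heart rate, average daily steps (thousands).
    profiles = np.array([[25, 60, 12.0],
                         [42, 72, 6.5],
                         [38, 70, 7.0],
                         [55, 80, 4.0]])
    habits = ["morning runs", "cycling", "brisk walks", "swimming"]

    model = NearestNeighbors(n_neighbors=2).fit(profiles)
    _, idx = model.kneighbors(np.array([[40, 71, 6.8]]))  # my profile
    print("People like you do:", [habits[i] for i in idx[0]])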

Japan’s penchant for robots finds expression in SoftBank’s Pepper, a human-like robot which can ‘read’ human emotions and respond in kind. There are plans afoot to open stores staffed mostly by the humanoid robot, which can interact independently with human customers.

Out of the public eye, cognitive systems are already being used for medical research, stock trading, scientific discovery, self-driving cars and customer-facing Q&A systems which appear to have the intelligence of a human operator at the other end. 

For example, Singapore’s tax portal has a beta system, Ask Jazmine, a ‘virtual assistant’ that ‘converses’ with taxpayers in ‘normal’ language – Jazmine even prefaces responses with phrases such as “I am not sure but I think this is what you’re looking for…”
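
That hedged-answer pattern is simple to illustrate. In this toy sketch the FAQ entries, the similarity measure and the 0.8 cut-off are all invented; the point is only the shape of the behaviour: answer confidently when the match is strong, hedge when it isn’t.

    # Toy Q&A: match a question against a small FAQ, hedging weak matches.
    from difflib import SequenceMatcher

    FAQ = {
        "how do I file my tax return": "You can e-file through the portal.",
        "when is the filing deadline": "Please check the portal for this year's dates.",
    }

    def answer(question):
        scored = [(SequenceMatcher(None, question.lower(), q).ratio(), a)
                  for q, a in FAQ.items()]
        score, best = max(scored)
        if score > 0.8:
            return best
        return "I am not sure but I think this is what you're looking for: " + best

    print(answer("When must I file my taxes?"))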

What is common to many of these applications is that they typically process large amounts of data: weather data, health data, tax regulations, medical data, legal data, and so on. As the volume of data we interact with increases, so does the need for systems able to make sense of it all and to sift through the haystack for the needle of relevant data.

Artificial intelligence systems can take the drudgery out of going through all that data, coming up with recommendations on which the human then makes a decision. Doctors exercise their judgement based on the recommendations of their virtual assistants; the consumer decides whether the recommended jacket suits him or her; the wearer decides what to do based on the health data presented to him. Hardly a case of machines replacing humans, but rather of machines assisting us.

Humans at the Mercy of Machines
There are darker scenarios – like anything else, technology can be put to many uses, just as a hammer can serve as a tool or as a weapon, depending on who wields it. And what happens if an action by an AI machine unintentionally causes harm to humans?

There’s enough concern that AI specialists have banded together to sign an open letter, organised by the Future of Life Institute, pledging to coordinate progress in the field to ensure it does not grow beyond humanity’s control.

There’s a touch of déjà vu here – wasn’t it the science fiction writer Isaac Asimov who presciently foresaw a distant future in which robots would coexist with humans, and who, in 1942, formulated his “Three Laws of Robotics” governing the interaction between robots and men, specifically forbidding robots from harming humans?
