Author: George Mangion
Published on Malta Today 2nd March 2017
Regardless of how artificial intelligence (AI) is defined, there is little doubt that it can be of great value, especially in big data applications. Many organisations now collect far more data than they did just a few years ago, using business intelligence and descriptive analytics technologies such as querying and reporting to compare and contrast options. The massive computing power harnessed in this way helps analyse what has happened in the past, while predictive analytics techniques open a window onto accurate forecasts of what may happen next.
Undoubtedly, artificial intelligence is fast becoming a major technology for prescriptive analytics, the step beyond predictive analytics that helps us determine how to implement optimal decisions. In business applications it can assess future risks and quantify probabilities, giving us insights into how to improve market penetration, customer satisfaction, security analysis, trade execution, and fraud detection and prevention, while proving indispensable in land and air traffic control, national security and defence. This is not to mention a host of healthcare applications, such as patient-specific treatments for diseases and illnesses.
Recently, Google’s DeepMind algorithm taught itself how to win Atari games. Algorithms can now recognise handwritten text and patterns almost as well as humans, and even complete some tasks better than we can. They are able to describe the contents of photos and videos.
Typically, Google, the giant search engine firm and a pioneer in the field of artificial intelligence, is developing self-driving automobiles, smartphone assistants and other examples of machine learning, while it is no secret that Facebook founder Mark Zuckerberg and actor Ashton Kutcher recently invested $40 million in an entity focused on developing artificial brains.
In science fiction films such as The Matrix, we saw futuristic devices that facilitate facial recognition, interpret human speech, drive cars and perform complex language translations. Some of these devices no longer belong to the realm of fantasy but have become real tools thanks to advances in science and social media.
Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. This involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called “Citizen Score”, which will determine under what conditions they may get loans, jobs, or travel visas to other countries.
Certainly a chilling reference to Big Brother dominance. But not all is doom and gloom: social media technology links civilisations, can teach farmers how to improve crop yields, and speeds up progress in complex human genome classification. Delivery drones, both wheeled and airborne, may in the near future compete with couriers, while supermarket robots silently stack food items on shelves and move merchandise in warehouses.
For a moment, let us stop and take a look at the major stages of human advancement over the centuries, starting with the ancient Greeks, who laid the foundations of democracy. The Greek philosopher Plato, in his dialogues featuring Socrates, pontificated on the importance of human values, values which, many centuries later, are absent in the alien world of robots and soulless machines powered by artificial intelligence. This is a foreboding legacy. AI society is at a crossroads, one which promises great opportunities but also considerable risks. If we take the wrong decisions, it could threaten our greatest historical achievements.
The way forward beyond 2017 is awe-inspiring, as soon there will be computers running faster than human brains. It is unstoppable: computing power will continue to grow at a fast rate. Readers may well comment that this is a blessing, showing how mankind has developed its ability to programme machines running at incredible speeds, effortlessly solving practical problems. The dilemma is whether all this computing power (now housed in the cloud) can be harnessed to be our servant. The million-dollar question is: can we succeed in instilling values in the brains of such behemoths? Not so fast; scientists warn us that such unbridled power threatens the very existence of mankind.
Famed astrophysicist Professor Stephen Hawking warned of the dangers posed by developing human-like intelligence in computer systems. Recall that computing pioneer Alan Turing proposed the test that now bears his name: a computer passes if it can successfully imitate a human in conversation. Few doubt that more research is needed to perfect neural designs, for example by adding functions such as the short-term memory found in human brains.
Simply put, such a system consists of a special network programmed to learn as it stores memories, empowering the device to evolve beyond its original code. Artificial intelligence in machines can even replicate human judgements previously considered too complex to automate. Steely and devoid of emotion, once activated it can supervise hundreds of skilled factory operators.
As if by magic, complex algorithms that ‘learn’ from past examples can relieve engineers of the need to write out every command. It comes as no surprise that cash-rich Google invested $400 million in this field, which in the near future may provide the building blocks for functional robots. The investment in robotics fires the starting gun on a race to design superior machines able to recognise objects, people and pets, and to determine their own actions. For example, can we imagine how, in the near future, one might own home robots powered by artificial intelligence which, among other things (like faithful servants), warn us when to refill the cat food dish because they can sense the bowl is empty?
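For readers curious what “learning from past examples” means in practice, here is a minimal sketch in Python, not tied to any product mentioned in this article. It trains a simple perceptron, one of the oldest learning algorithms, on four labelled examples of the logical OR function; the names and numbers are illustrative choices, not anything from the systems discussed above. The point is that nobody hand-writes the rule: the weights that encode it are inferred from the examples.

```python
# A tiny illustration of "learning from past examples": instead of
# hand-writing a rule, we fit a perceptron to labelled data and let
# the rule emerge from the examples themselves.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs; labels are 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            # Predict with the current rule...
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            # ...and nudge the weights toward the correct answer.
            error = label - prediction
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Four past examples of logical OR; no "OR rule" is ever written down.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_perceptron(examples)
print([predict(weights, bias, x) for x, _ in examples])  # prints [0, 1, 1, 1]
```

Modern deep learning networks apply the same nudge-the-weights idea at vastly greater scale, with millions of weights and examples rather than a handful.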
Ray Kurzweil, a futurist and inventor recently hired by Google as director of engineering, believes that computers will soon become more intelligent than human beings. He predicts that this event, which he refers to as “the singularity”, will occur by the year 2045.
Equally ominous was the prediction of Stephen Hawking, who exclaimed that “the development of full artificial intelligence could spell the end of the human race”. Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans: it would take off on its own and re-design itself at an exponential rate. He fears that humans, limited by slow biological evolution, could not compete with such superior machines and would lose control.
For example, Intel is developing microchips that pave the way for sensor networks which could mimic the brain’s capacity for perception, action and thought. Yet there is a contrary view to the claim that we have already lost the “Man versus Machine” fight. Some computer scientists believe that AI can be harnessed to be our servant, not our master.
Charlie Ortiz, head of artificial intelligence research at the software company Nuance Communications, feels that increased AI might be useful in solving current problems that human intelligence cannot fathom. In his opinion, there is no reason to assume that AI will harm us even if it becomes more complex than human intelligence; yet it is in our interest to set up safeguards that firewall our own intelligence before AI (a new god) reaches its acme and possibly mutates to become our master.
To conclude, mankind has survived calamities over past millennia; we can thus take courage that our natural instinct for survival will again redeem us in the face of the AI threat.
Get in touch: email@example.com | +356 21 493 041