The discussion about artificial intelligence (AI) has been painting a gloomy picture of the future. Some of the best-known thinkers in technology worry that future machines will become our overlords, enslave us, or even end our existence as a no-longer-useful race. Tesla founder and CEO Elon Musk has called artificial intelligence "our biggest existential threat". He also donated $10 million to the Future of Life Institute (FLI) to run a global research program aimed at keeping AI "beneficial to humanity".
Bill Gates insists that AI is a threat and says he is "concerned" about machine super-intelligence. Professor Stephen Hawking went further, saying the development of full artificial intelligence "could spell the end of the human race". Hawking's kind of doomsday picture has, moreover, been a great inspiration for Hollywood fantasy and horror movies.
Yet none of these thinkers denies that AI and machines already serve a fundamental part of our daily needs. Tesla is developing self-driving cars; Microsoft is at the forefront of deep-learning research; Hawking used AI to communicate because of his ALS. The development of technology, not only AI but also connected devices and automated systems, means these systems will play an even more important part in our daily lives. Because technology is mostly pervasive, the trend is easy to overlook. I believe the benefits of artificial intelligence to all aspects of our lives are too great to be ignored.
What we definitely should worry about are the shifts in professions that AI development will cause. The eventual result will be the elimination of many positions and changing roles for the workers who remain. AI will do to white-collar jobs something similar to what steam power did to blue-collar ones during the Industrial Revolution. Since much white-collar work depends on recognizing patterns and manipulating symbols, AI will replace many of the people doing it, both by providing automated alternatives and by making the remaining workers more productive.
But should we be scared by the horror scenarios, and let those fears keep us from harnessing the benefits of this development? I do believe we will be able to create a truly intelligent machine at some point in the future, but it will not happen in the next ten years. It is also important to understand that current and near-future AI is far from being able to learn to think intelligently outside its programming. Without that ability, the "destruction of humanity" will never appear on its agenda, unless we program our self-destruction into it ourselves. Secondly, I fail to see why the destruction of humanity would top any AI's list of action items. Furthermore, I find it very difficult to see how a large gain in intelligence would entail a large increase in power.
I believe an important part of ensuring that a superintelligence will have a beneficial impact on human beings is to endow it with philanthropic values. Its top goals should include respect and friendliness. AI should be designed so that it cannot deliberately change these top goals or rid itself of being respectful and friendly. The exact definition of "respect" and "friendliness" is naturally a thorny issue, but we have time to work on it. And we are still in the driver's seat when it comes to designing the motivational architecture of future AI.
More importantly, the development of AI should be a learning process in which the future is co-created by machines and human beings. This is the essence of co-creative intelligence: combining the best parts of machine intelligence and human intelligence. Humans in the loop can recognize the potential dangers of artificial intelligence and work jointly with AI to deliver value without putting us at risk. We need AI that is on our side, AI that exists with us and learns from us. Ethics, for example, could be learned from ethical people, or by studying the choices and outcomes of people who have died (if we fear that AI will learn to manipulate us, it can never manipulate the dead).
For now, my advice is not to focus on the threat of computers taking over the world. Focus instead on your answer to the threat that they might take over your job in the next few years. In fact, co-creative intelligence has the potential to be the answer to both threats.