::: nBlog :::
In 2016, a few prominent scientists, such as my idol Stephen Hawking, and business leaders like Elon Musk warned about the dangers of Artificial Intelligence. Ever since the bold visions of Marvin Minsky in 1971 (that AI would supersede the human brain within five years), an uneasy AI discourse has ensued between (and within) academia and industry.
AI is often associated with robots, that other breed of dangerous things which evaporate our jobs and will sooner or later revolt against us, as in The Terminator. Wouldn’t it be safer to stop developing them, just in case? This is an argument I face quite often.
Similar arguments pop up in surprising places, such as election security and voting machines. After the infamous US election last year, there’s a growing group of people who want to trash all voting machines and bring good old pen and paper back to every polling station. At the same time, almost no one pays attention to the record-low voter turnout, which I think is a far bigger problem when it comes to real democracy.
The human brain is quite good at sorting out and compressing memories (garbage collection, in computing terms), strengthening the potentially useful ones and dropping the not-so-important ones. However, we get those ‘golden memories’ as a by-product, which instils in us an innate feeling that things were smoother and better in the past. The truth is that we have simply forgotten the bad things.
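The garbage-collection analogy can be made concrete. Below is a minimal illustrative sketch (my own construction, not anything from cognitive science): memories carry an importance score, recall reinforces them, everything else decays, and a periodic pruning pass drops whatever falls below a threshold.

```python
# Illustrative sketch only: the brain's memory "garbage collection"
# modeled as reinforce-on-recall, decay, and threshold pruning.
# All names and numbers here are invented for the example.
from dataclasses import dataclass

@dataclass
class Memory:
    description: str
    importance: float  # strengthened on recall, decays otherwise

def consolidate(memories, keep_threshold=0.5, decay=0.9, recalled=()):
    """Strengthen recalled memories, decay the rest, drop the weak ones."""
    kept = []
    for m in memories:
        if m.description in recalled:
            m.importance = min(1.0, m.importance + 0.2)  # reinforce
        else:
            m.importance *= decay  # gradual forgetting
        if m.importance >= keep_threshold:
            kept.append(m)  # survives this "garbage collection" pass
    return kept

memories = [Memory("first bike ride", 0.9), Memory("Tuesday's lunch", 0.4)]
memories = consolidate(memories, recalled={"first bike ride"})
# Only the reinforced memory survives the pruning pass.
```

Note how the surviving set is biased toward whatever keeps getting recalled and reinforced; run this long enough and only the ‘golden memories’ remain.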
AI research, even short of ‘actual’ AI, has already brought us an arsenal of good things: driving-safety features in cars, automated translation between hundreds of languages, and revolutionary medical advances. These save and enhance lives, and readily accessible education, regardless of language, gives billions of people the chance to embrace the scientific method.
However, AI and global interconnectedness also mean that today’s, and especially tomorrow’s, engineers and coders bear far greater responsibility for their designs and algorithms than ever before, as the consequences of bad choices spread very quickly. Therefore, instead of just teaching coders the newest programming languages, studies of sustainability and philosophy, in the form of ethics, morality and humanity, are more important than ever.