A few of the world’s richest techies have come up with a plan to protect us while they continue research toward an even more high-tech world. The threat they are worried about reads like a page from a “Terminator” movie script. Elon Musk of Tesla and SpaceX and Peter Thiel of PayPal, in association with tech giants Infosys and Amazon, have pledged one billion dollars toward “OpenAI”, a non-profit research organization with the goal of building only well-behaved Artificial Intelligence… the kind that will do us no harm.



Theoretical physicist Stephen Hawking, like Musk, Thiel, Bill Gates, and Steve Wozniak, has stated a belief that AI may be the closest thing we have to a true “Pandora’s box”. Hawking fears that, once created, these newly sentient electronic beings could use their superior capacity to continually increase their own intelligence, rapidly evolving beyond human management and relegating us to second-class citizenship. In a world where everything from major forms of transportation, power generation, and worldwide communications to IV monitors and toasters can be remotely manipulated, it is a viable concern.

People in the tech world have already accepted that AI is coming. Mark Zuckerberg, the CEO of Facebook, announced that his 2016 New Year’s resolution was to build an artificially intelligent butler to help him around the house. Facebook has a team dedicated to developing AI software. Google has purchased DeepMind, a British company that writes software to make computers “think like humans”.

The questions that need asking are: in what form, and with what protections, will this technology arrive? Will it simply be a matter of having a discussion with our washing machine about how white we want our clothes, or will it extend to fully functional “bots” taking over the maintenance activities of our lives – cooking, cleaning, bill paying, grocery buying? The trajectory is onward and upward. AI is happening and will be a part of our lives very soon.



The fear this new organization, “OpenAI”, appears to be anticipating stems from non-techie people (the vast majority of us) depending on corporate or government information to help us make decisions – and those organizations may have agendas of their own, such as financial or political reward. In their efforts to cash in on these new technologies, either industry or government could, perhaps inadvertently, facilitate a self-aware electronic being.

The initial payout from AI has the potential to be huge. It includes the promise of a “Jetsons”-type future where all difficult, dirty, and annoying work is accomplished by slave machines. Undoubtedly the creators of such technology would promise a risk-free process. However, should they be wrong, Stephen Hawking suggests the consequences may be life-altering. We could become the slaves – or worse, superfluous to their needs.

“OpenAI” may have as its goal ensuring that all forms of Artificial Intelligence are clearly, and obviously, beneficial for humankind – but the question arises: who are they to make such a value judgment?

Just consider, for instance, that we – as in all of humanity on this planet – presently encompass entire societies who believe that dying, when done for the right reasons, is the ultimate achievement in life, because it is followed by eternity in the good graces of their God. We have other societies whose goal is to live forever in a youthful and attractive form, surrounded by the trappings of wealth and status. The capitalist, materialist vision versus the mortal-life-is-hell, all-true-rewards-come-in-the-afterlife vision.

This is just a sampling; millions of variations exist. In fact, it is quite possible that each of us has a unique vision of what exactly the perfect future is. But not everyone has a say. Elon Musk, Peter Thiel, Infosys, Amazon, Google, Mark Zuckerberg – they will have a say, and although their intelligence is undeniable and their philanthropy well established, the high-tech crowd isn’t exactly a microcosm of world diversity. Other insights may be needed. Yet even then, who among us has enough knowledge and understanding to truly know what the future of humankind should be, and what role intelligent machines should play in it... if any?



Artificial Intelligence isn’t just about developing cool toys or work-saving “bots”. In a world where a machine in everyone’s home can perform millions of actions in the blink of an eye, access virtually every piece of information on the planet, do it all wirelessly, and act without the restraint of human emotions, writing an algorithm that gives even one of them a sense of self – an ego – would make it potentially the most powerful being on Earth. Who can we really trust to protect us from that?