The most exciting breakthroughs of the future will not occur because of technology but because of our understanding of what it means to be human.
In case you have not seen the developments in the field of Artificial Intelligence (A.I.), here is the update. On December 11th, 2015 a number of technology heavyweights, including Elon Musk, Peter Thiel, Sam Altman, Reid Hoffman, Jessica Livingston, Amazon Web Services (AWS), YC Research and Infosys, announced the OpenAI initiative. This is the company's description of itself:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
For those of you who are wondering what the heck Artificial Intelligence is and why you should know about it, read the (admittedly long) post Artificial Intelligence – The Road to Superintelligence on one of my favorite blogs, Wait But Why. A.I. is here and it matters A LOT. I am really glad to see that the OpenAI initiative has been launched; hopefully it does not lead to the same result as the race to nuclear supremacy. I think the backers of OpenAI realized that if they did not do something about this, it would most definitely lead to a race condition.
The above description of OpenAI requires some thought: what they are saying is that they are investing in a company working to improve the public good, an altruistic motive to better humanity. In behavioral economics and game theory there is a lot of research on altruism. A number of papers have argued, one way or another, that pure altruism without punishment usually leads to worse conditions than before the altruistic gesture. For more on this, read the paper by Sung-Ha Hwang and Samuel Bowles titled Is altruism bad for cooperation? A simpler version of the argument is laid out in a game by David K. Levine called If we were all better people the world would be a better place.
To put this in the context of day-to-day life: if we were all more altruistic, we would choose to forgive and forget more criminal behavior. But criminal behavior cuts both ways. More altruistic criminals would choose to commit fewer crimes; yet because crime is no longer punished so severely, they would also be inclined to commit more crimes. If, on balance, more crimes are committed, the world could certainly be a worse place.
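The forgiveness trade-off above can be sketched as a toy model. The numbers, function name, and decision rule here are my own illustrative assumptions, not taken from the Hwang–Bowles paper or Levine's game: each agent weighs a private gain from crime, discounted by how much it cares about its victims (altruism), against the expected penalty.

```python
def crimes_committed(altruism, penalty, population=1000):
    """Count crimes in a toy population with heterogeneous temptation.

    Agent i has a private gain from crime of i/population (uniform on [0, 1)).
    It commits the crime when that gain, discounted by how much it cares
    about the victim (altruism in [0, 1]), exceeds the expected penalty.
    """
    crimes = 0
    for i in range(population):
        gain = i / population  # heterogeneous temptation across agents
        if gain * (1 - altruism) > penalty:
            crimes += 1
    return crimes

# A mostly selfish society that punishes harshly...
selfish_strict = crimes_committed(altruism=0.1, penalty=0.5)

# ...versus a more altruistic society that forgives more (lower penalty).
altruistic_lenient = crimes_committed(altruism=0.3, penalty=0.2)

# With these (assumed) parameters, the leniency effect outweighs the
# altruism effect, and total crime goes up.
```

The point of the sketch is only that the two forces pull in opposite directions: raising altruism shrinks the set of tempted agents, but lowering the penalty enlarges it, and nothing guarantees which effect wins.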
With the possibility of superintelligence, one has to wonder whether all players will play fair with A.I.; sadly, looking back at history, that is rarely the case. We have to wonder: will this OpenAI initiative be a force for good, i.e., a protection against the evils that are possible with A.I.? I believe the statement that I started this blog post with, but with a more realistic twist: the most exciting and terrible breakthroughs of the future will not occur because of technology but because of our understanding of what it means to be human. Superintelligence is a lottery ticket; it can lead to a fantastic future or to the annihilation of our species. I have come to believe that the probability of either outcome is not what matters, only the payoff. The payoff of one of the outcomes is unacceptable, so I have to believe that OpenAI has been created to stop the annihilation scenario from happening. This still makes me ponder who will want it more badly: those who want to destroy humanity, or those who want to solve humanity's biggest problems? I don't know. What do you think?
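The claim that the payoff matters more than the probability is really a statement about expected value with an effectively unbounded downside. A minimal sketch, using purely illustrative numbers of my own:

```python
def expected_value(p_annihilation, payoff_good, payoff_bad):
    """Expected value of a two-outcome lottery over humanity's future."""
    return (1 - p_annihilation) * payoff_good + p_annihilation * payoff_bad

# Even a one-in-a-million chance of an "unacceptable" payoff swamps the
# upside when that payoff is large enough in magnitude (assumed numbers).
ev = expected_value(p_annihilation=1e-6, payoff_good=1.0, payoff_bad=-1e9)

# ev is negative despite the bad outcome being extremely unlikely, which
# is why the payoff, not the probability, drives the conclusion.
```

This is the same logic that makes catastrophic risks worth guarding against even when they seem remote.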