Nick Bostrom (1973– ) holds a Ph.D. from the London School of Economics (2000). He is a co-founder of the World Transhumanist Association (now called Humanity+) and a co-founder of the Institute for Ethics and Emerging Technologies. He was on the faculty of Yale University until 2005, when he was appointed Director of the newly created Future of Humanity Institute at Oxford University. He is currently Professor in the Faculty of Philosophy and the Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Program on the Impacts of Future Technology, all at Oxford University.
His recent book, Superintelligence: Paths, Dangers, Strategies, is the definitive work on superintelligence. A few of its main issues were discussed in his previous article, “Ethical Issues in Advanced AI.” Here is a brief outline of that article.
Introduction – “A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. This definition leaves open how the superintelligence is implemented – it could be in a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else.” Bostrom states that there is no reason to believe we won’t have superintelligence within the lifetime of some persons alive today.
Superintelligence (SI) is different – And in ways we can’t even imagine.
Moral Thinking of SI – If morality is a cognitive pursuit, then SI should be able to solve moral issues in ways previously undreamt of.
Importance of Initial Motivations – It is crucial to design SI to be friendly.
Should Development Be Delayed or Accelerated? – “It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process […].”
Given this promise, and considering Bostrom’s claim that SI will probably be developed anyway, we might as well develop it as soon as possible. “If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence.”
Reflection – I have made my views on this clear many times. Despite the risks, we need to develop superintelligence promptly if we are to have any chance of surviving.