ESA Member, Prof. Nick Bostrom


Professor Nick Bostrom is an Honorary and Advisory Philosopher of USIA: United Sigma Intelligence Association.

Prof. Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technological and foundational questions.)

He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.

He is a recipient of the Eugene R. Gannon Award and has twice been listed on Foreign Policy’s Top 100 Global Thinkers list. He was also included on Prospect’s World Thinkers list, as the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has given more than 2,000 interviews with television, radio, and print media.

“My interests cut across many disciplines and may therefore on the surface appear somewhat scattered, but they all reflect a desire to figure out how to orient ourselves with respect to important values. I refer to this as ‘macrostrategy’: the study of how long-term outcomes for humanity may be connected to present-day actions. My research seeks to contribute to this by answering particular sub-questions or by developing conceptual tools that help us think about such questions more clearly.

A key part of the challenge is often to notice that a problem even exists — to find it, formulate it, and then make enough initial progress in understanding it to let us break it into more tractable components and research tasks. Much of my work (and that of the Future of Humanity Institute) operates in such a pre-paradigm environment. We tend to work on problems that the rest of academia ignores either because the problems are not yet recognized as important or because it is unclear how one could conceivably go about doing research on them; and we try to advance understanding of them to the point where it becomes possible for a larger intellectual community to engage with them productively. For example, a few years ago, AI alignment fell into this category: hardly anybody thought it was important, and it seemed like the kind of thing a science fiction author might write novels about but that there was no way to study scientifically. By now, it has emerged as a bona fide research field, with people writing code and equations and making incremental progress. Significant cognitive work was required to get to this point.

I have also originated or contributed to the development of ideas such as the simulation argument, existential risk, transhumanism, information hazards, superintelligence strategy, astronomical waste, crucial considerations, observation selection effects in cosmology and other contexts of self-locating belief, anthropic shadow, the unilateralist’s curse, the parliamentary model of decision-making under normative uncertainty, the notion of a singleton, the vulnerable world hypothesis, along with a number of analyses of future technological capabilities and concomitant ethical issues, risks, and opportunities.

Technology is a theme in much of my work (and that of the FHI) because it is plausible that the long-term outcomes for our civilization depend sensitively on how we handle the introduction of certain transformative capabilities. Machine intelligence, in particular, is a big focus. We also work on biotechnology (both for its human enhancement applications and because of biosecurity concerns), nanotechnology, surveillance technology, and a bunch of other potential developments that could alter fundamental parameters of the human condition.

There is a “why” beyond mere curiosity behind my interest in these questions, namely the hope that insight here may produce good effects. In terms of directing our efforts as a civilization, it would seem useful to have some notion of which direction is “up” and which is “down”—what we should promote and what we should discourage. Yet regarding macrostrategy, the situation is far from obvious. We really have very little clue which of the actions available to present-day agents would increase or decrease the expected value of the long-term future, let alone which ones would do so the most effectively. In fact, I believe it is likely that we are overlooking one or more crucial considerations: ideas or arguments that might plausibly reveal the need for not just some minor course adjustment in our endeavours but a major change of direction or priority. If we have overlooked even just one such crucial consideration, then all our best efforts might be for naught—or they might even be making things worse. Those seeking to make the world better should therefore take it as important to get to the bottom of these matters, or else to find some way of dealing wisely with our cluelessness if it is inescapable.

The FHI works closely with the effective altruism community (e.g., we share office space with the Center for Effective Altruism) as well as with AI leaders, philanthropic foundations, and other policymakers, scientists, and organizations to ensure that our research has impact. These communication efforts are sometimes complicated by information hazard concerns. Although many in the academic world take it as axiomatic that discovering and publishing truths is good, this assumption may be incorrect; certainly it may admit of exceptions. For instance, if the world is vulnerable in some way, it may or may not be desirable to describe the precise way it is so. I often feel like I’m frozen in an ice block of inhibition because of reflections of this sort. How much easier things would be if one could have had a guarantee that all one’s outputs would be either positive or neutral, and one could go full blast!”

Prof. Nick Bostrom’s Homepage

Faculty Information, University of Oxford

*Referenced from Prof. Nick Bostrom’s official homepage.

© 2007-2019 USIA: United Sigma Intelligence Association