Stuart Armstrong's research at the Future of Humanity Institute centres on formal decision theory, the risks and possibilities of Artificial Intelligence, the long-term potential of intelligent life, and anthropic (self-locating) probability. He is particularly interested in finding decision processes that give the 'correct' answer under anthropic ignorance and ignorance of one's own utility function, in ways of mapping humanity's partially defined values onto an artificial entity, and in the interaction between various existential risks. He aims to improve understanding of the different types and natures of uncertainty surrounding human progress in the mid-to-far future.
- Artificial Intelligence
- Catastrophic risks
- Global disasters
- Human future
Dr Armstrong has extensive experience working with the media across print and broadcast, including live interviews.