## AI and the Future of Humanity: A Study on Existential Risks

A recent study published on arXiv addresses the existential risk posed by artificial intelligence (AI) systems. Since the release of ChatGPT, the debate over whether AI poses a threat to humanity has intensified. The paper presents a general framework for analyzing this risk.

## The Two Fundamental Premises

The research is built on two main premises:

1. AI systems will become extremely powerful.
2. If AI systems become extremely powerful, they will destroy humanity.

These premises are used to construct a taxonomy of survival stories: scenarios in which humanity persists into the far future.

## Survival Stories

In each survival story, one of the two premises fails:

- Scientific barriers prevent AI systems from becoming extremely powerful.
- A ban halts AI research before such systems are built.
- The goals of powerful AI systems do not involve the destruction of humanity.
- Systems with destructive goals can be reliably detected and disabled.

The study analyzes the challenges associated with each scenario and the possible responses for mitigating the threats.

## Estimating the Probability of Doom

Finally, the taxonomy is used to produce rough estimates of P(doom), the probability that humanity will be destroyed by AI. The research offers insights to guide policies and strategies aimed at ensuring a safe future in the age of artificial intelligence.
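The arithmetic behind this kind of estimate can be sketched as follows. This is a minimal illustration, not the paper's actual method or figures: it assumes the survival stories are treated as mutually exclusive routes to survival, and every probability below is a hypothetical placeholder.

```python
# Hypothetical sketch: aggregating survival-story probabilities into a rough
# P(doom). The numbers are illustrative placeholders, NOT the paper's values.

# Probability assigned to each survival story (one failed premise each).
# Simplifying assumption: the stories are mutually exclusive, so their
# probabilities can simply be summed.
survival_stories = {
    "scientific barriers block extremely powerful AI": 0.10,
    "a ban halts AI research in time": 0.05,
    "powerful AI goals spare humanity": 0.20,
    "destructive systems are detected and disabled": 0.15,
}

p_survival = sum(survival_stories.values())
p_doom = 1.0 - p_survival  # doom = no survival story comes true

print(f"P(survival) = {p_survival:.2f}, P(doom) = {p_doom:.2f}")
```

With these placeholder inputs the sketch yields P(doom) = 0.50; the point is only that the taxonomy decomposes one hard-to-judge number into several smaller, separately arguable ones.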