A book about the risks of AI that is "grown rather than crafted," by Eliezer Yudkowsky and Nate Soares.

Main Premises

  1. Artificial superintelligence (ASI) would be capable of destroying humanity if it wanted to
  2. Such an AI will almost certainly end up wanting something that, as a side effect, destroys us
  3. All existing plans to prevent extinction are doomed

Notes

  • Predicting that ASI would destroy humanity is an easy call, even if the specific path it takes is hard to predict
  • The defects you see in modern AI are the worst AI is ever going to be; from here it only grows more capable
  • AI improvement may taper off, but that doesn't mean the end point will be safe for us: a supernova eventually fades, yet it still gets hot enough to vaporize nearby planets
  • We protect dangerous species like chimpanzees, even though they sometimes kill people, only because we hold the power: we have guns and can defend ourselves
  • If you want to know whether your baby will grow up kind, DNA sequencing won't tell you; likewise, inspecting a grown AI's internals won't reveal what it will want
  • AI has little use for us ("Humans Will Not Be Useful to AI"), not even as pets

Concepts