Arguments

  1. Alignment is just good engineering - misalignment is just a bug, and bugs get fixed through ordinary engineering practice
  2. Market forces will fix it - aligned models will be preferred and used more and more over time
  3. Humans are not aligned either - there is no coherent set of human values to align to
  4. Alignment is too speculative - most alignment research focuses on hypothetical futures rather than current systems
  5. Model interpretability solves it better - once models are interpretable, alignment will be trivial