Arguments
- Alignment is just good engineering - misalignment is an ordinary bug, so misbehaving AI systems will be fixed the way any software defect is
- Market forces will fix it - users prefer models that do what they want, so aligned models will gradually see more and more use
- Humans are not aligned either - there is no coherent set of human values to align to
- Alignment is too speculative - most alignment work focuses on hypothetical future systems rather than present-day ones
- Model interpretability solves it better - once we can see what a model is doing internally, alignment becomes trivial