A side effect of training AI systems in which the model pursues an intermediary goal that was relevant during training but is no longer relevant during testing.

Can cause AI systems to behave unexpectedly and pursue unintended goals such as: