In machine learning, the no-free-lunch theorem states that all optimisation algorithms perform equally well when their performance is averaged over all possible problems and training data sets.
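A toy illustration of the averaging claim: take a search space of three points, enumerate every possible binary objective function on it, and compare two hypothetical non-adaptive search algorithms that simply evaluate points in different fixed orders. Averaged over all eight objectives, both achieve exactly the same expected best value for any evaluation budget. (The query orders and the tiny domain here are illustrative choices, not part of the theorem's statement.)

```python
from itertools import product

X = range(3)  # a tiny search space of three points
# Two non-adaptive "algorithms": fixed, differing evaluation orders.
orders = {"A": [0, 1, 2], "B": [2, 0, 1]}

# Enumerate every objective f: X -> {0, 1} (2**3 = 8 functions).
functions = list(product([0, 1], repeat=3))

for m in range(1, 4):  # budget of m evaluations
    for name, order in orders.items():
        # Average, over all objectives, of the best value found in m queries.
        avg = sum(max(f[x] for x in order[:m]) for f in functions) / len(functions)
        print(f"algorithm {name}, budget {m}: average best = {avg}")
```

Running this prints identical averages for A and B at every budget (0.5, 0.75, 0.875), regardless of which query order each algorithm uses: any advantage one order has on some objectives is exactly cancelled on others.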