# No free lunch theorem

## If an algorithm does well on some problems, then it pays for that on other problems

*From Wikipedia, the free encyclopedia*


In mathematical folklore, the "**no free lunch**" (**NFL**) **theorem** (sometimes pluralized) of David Wolpert and William Macready alludes to the saying "no such thing as a free lunch", that is, there are no easy shortcuts to success. It appeared in the 1997 paper "No Free Lunch Theorems for Optimization".^{[1]} Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).^{[2]}


In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems".^{[3]}
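In the notation of the 1997 paper, this equivalence can be written as a statement that the summed probability of any sequence of observed cost values is independent of the algorithm (the symbols below follow the paper's conventions as commonly cited; consult the original for the precise setup):

```latex
\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2)
```

Here $d_m^y$ is the ordered sequence of $m$ cost values observed during the search, $a_1$ and $a_2$ are any two (non-revisiting) search algorithms, and the sum ranges over all objective functions $f : \mathcal{X} \to \mathcal{Y}$ between finite sets. Because the sum is the same for every algorithm, so is any performance measure computed from the distribution of $d_m^y$.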

The "no free lunch" (NFL) theorem is an easily stated and easily understood consequence of the theorems Wolpert and Macready actually prove. It is weaker than those proven theorems, and thus does not encapsulate them. Various investigators have extended the work of Wolpert and Macready substantively. Within research practice, "no free lunch" in search and optimization names an area devoted to mathematically analyzing how algorithm performance depends on the class of problems considered, particularly in search^{[4]} and optimization.^{[1]}
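The averaged-equivalence claim can be checked exhaustively on a toy search space. The sketch below is illustrative only (the three-point domain, the particular algorithms, and the "best value after m evaluations" metric are choices made here, not taken from the paper): it enumerates every objective function on a tiny domain and shows that two fixed-order searches and one adaptive search achieve identical average performance.

```python
from itertools import product

X = [0, 1, 2]  # the whole search space
Y = [0, 1]     # possible objective values
# All 2**3 = 8 objective functions f: X -> Y.
functions = [dict(zip(X, vals)) for vals in product(Y, repeat=len(X))]

def fixed_order(order):
    """Non-adaptive search: evaluate points in a fixed order, never revisiting."""
    def alg(f):
        return [f[x] for x in order]  # sequence of observed values
    return alg

def adaptive(f):
    """Adaptive search: the second point depends on the first observation."""
    seen = [f[0]]
    nxt = 1 if seen[0] == 1 else 2   # branch on what was observed
    seen.append(f[nxt])
    last = ({0, 1, 2} - {0, nxt}).pop()  # visit the one remaining point
    seen.append(f[last])
    return seen

def avg_best_after_m(alg, m):
    """Best value found within m evaluations, averaged over every function."""
    return sum(max(alg(f)[:m]) for f in functions) / len(functions)

for m in (1, 2, 3):
    scores = [avg_best_after_m(a, m)
              for a in (fixed_order([0, 1, 2]), fixed_order([2, 1, 0]), adaptive)]
    print(m, scores)  # the three averages are identical for every m
```

Because each algorithm never revisits a point, averaging over all functions makes every sequence of observed values equally likely regardless of which points were chosen, which is exactly why the three averages coincide.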

While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research.^{[5]}^{[6]}