No Free Lunch, but Worth the Bite
Learn about the no free lunch theorem, black box and white box methods, and data representation.
The no free lunch theorem
Neural networks and other models, such as support vector machines and decision trees, are fairly general models. In contrast, Bayesian models are usually much better at specifying a causal structure of interpretable entities. More specific models should outperform more general ones as long as they faithfully represent the underlying structure of the world they model. This intuition is captured by David Wolpert’s “no free lunch” theorem.
Note: The no free lunch theorem states that no single algorithm performs better than all others across every possible problem; an algorithm that does better on one class of problems necessarily does worse on another.
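To make this concrete, here is a minimal sketch (not part of the original lesson; the datasets, models, and parameters are illustrative assumptions) that compares two models with different inductive biases on two synthetic problems. Running it, you will typically find that the ranking of the models flips between the datasets, which is the practical flavor of the theorem.

```python
# Illustrative sketch: no single model wins on every dataset.
# Dataset and model choices here are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_classification, make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# A nearly linearly separable problem and a curved, nonlinear one.
linear_X, linear_y = make_classification(
    n_samples=500, n_features=10, n_informative=2, random_state=0
)
moons_X, moons_y = make_moons(n_samples=500, noise=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
}

datasets = {
    "linear data": (linear_X, linear_y),
    "two moons": (moons_X, moons_y),
}

for data_name, (X, y) in datasets.items():
    print(data_name)
    for model_name, model in models.items():
        # Average 5-fold cross-validation accuracy for each model.
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"  {model_name}: {score:.3f}")
```

The point is not that either model is better in general, but that whichever model's assumptions match the data's structure tends to come out ahead on that particular problem.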