This article gives a nice comparison of the frequentist, Bayesian, information-theoretic, and likelihood approaches to statistics. The author argues that the likelihood approach unifies the other three and is perhaps the least controversial among their proponents.
Bayesian methods have become popular with the rise of computers, as many previously expensive computations have become tractable. The basic idea behind Bayesian statistics is that you combine the results of an experiment with some prior information to obtain the posterior probability. The selection of priors is controversial among frequentists, because two people might use different priors, which makes the result subjective.
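The prior-plus-data idea can be sketched with the simplest conjugate case, a Beta prior on a coin's heads probability; the counts and priors below are illustrative, not taken from the article:

```python
# Hedged sketch: Bayesian updating with a Beta prior on a coin's
# heads probability (Beta-Binomial conjugacy). Numbers are invented
# for illustration.

def posterior_beta(alpha_prior, beta_prior, heads, tails):
    """Combine a Beta(alpha, beta) prior with binomial data.

    Conjugacy makes the posterior another Beta distribution:
    add the observed counts to the prior parameters.
    """
    return alpha_prior + heads, beta_prior + tails

# Two analysts with different priors see the same data (7 heads, 3 tails):
a1, b1 = posterior_beta(1, 1, 7, 3)    # flat prior           -> Beta(8, 4)
a2, b2 = posterior_beta(10, 10, 7, 3)  # strong "fair coin" prior -> Beta(17, 13)

# Their posterior means differ, which is the subjectivity objection:
mean_flat = a1 / (a1 + b1)
mean_fair = a2 / (a2 + b2)
```

Here the same data yield different conclusions depending on the prior, which is exactly the frequentist complaint noted above.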
The frequentist (classical) school is associated with Sir Ronald Fisher and with Neyman & Pearson. This approach is known as hypothesis testing or null hypothesis significance testing.
The information-theoretic approach is based on concepts from information theory and thermodynamics (entropy). The basic idea is to compare alternative models according to how well they capture the information in the data, which can be measured by the amount of unexplained variation (i.e., the residual sum of squares). A trade-off between fit and complexity is made using information criteria such as Akaike's information criterion: AIC = −2 log(likelihood) + 2K, where K is the number of parameters in the model. In practice, the AIC of the candidate models is compared and the one with the lowest AIC is selected. This approach is useful when there are many independent variables, a small sample size, and very little theory to suggest which variables should be included in the analysis.
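The AIC comparison can be sketched directly from the formula; the log-likelihoods and parameter counts below are made-up illustrations, not real model fits:

```python
# Hedged sketch of AIC-based model selection using
# AIC = -2 * log(likelihood) + 2K. All values are hypothetical.

def aic(log_likelihood, k):
    """AIC = -2 * log(likelihood) + 2K, where K counts the parameters."""
    return -2 * log_likelihood + 2 * k

# Suppose three candidate models were fit to the same data:
models = {
    "intercept only":  aic(-120.0, k=1),   # poor fit, simple
    "one predictor":   aic(-105.0, k=2),   # good fit, simple
    "five predictors": aic(-103.5, k=6),   # slightly better fit, complex
}

# The model with the lowest AIC is preferred: the five-predictor model
# fits a little better but pays a 2K complexity penalty.
best = min(models, key=models.get)
```

Note how the five-predictor model is rejected despite its higher likelihood: the 2K term penalizes the extra parameters, which is the fit-versus-complexity trade-off described above.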
Likelihood is similar to probability, but the area under the likelihood curve does not integrate to one as it does for a probability density. This approach treats the data as fixed (rather than as a random variable), and the likelihoods of two different models can be compared.
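Holding the data fixed and varying the model can be sketched with a binomial likelihood; the data (7 heads in 10 flips) and the two candidate parameter values are illustrative assumptions:

```python
# Hedged sketch: with the data held fixed (7 heads in 10 flips), evaluate
# the binomial likelihood at two parameter values and compare the models
# by their likelihood ratio. Data and models are hypothetical.
from math import comb

def binom_likelihood(p, heads=7, n=10):
    """Likelihood of heads-probability p for fixed data (heads out of n)."""
    return comb(n, heads) * p**heads * (1 - p) ** (n - heads)

L_fair   = binom_likelihood(0.5)  # model 1: fair coin
L_biased = binom_likelihood(0.7)  # model 2: coin biased towards heads

# A ratio above 1 means the fixed data support the biased-coin model more.
ratio = L_biased / L_fair
```

Unlike a density over data, this curve as a function of p need not integrate to one; only ratios of likelihoods at different parameter values are meaningful for comparison.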