
A Principled Foundation for LCS

  • Conference paper
Learning Classifier Systems (IWLCS 2006, IWLCS 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4998)


Abstract

In this paper we promote a new methodology for designing LCS: first identify the model underlying the system, then apply standard machine learning methods to train that model. This makes the LCS model and its assumptions about the data explicit, and promises advances in the theoretical understanding of LCS by transferring the established understanding of the applied machine learning methods. It also allows us, for the first time, to give a formal and general, that is, representation-independent, definition of the optimal set of classifiers that an LCS aims to find. To demonstrate the feasibility of the proposed methodology we design a Bayesian LCS model by borrowing concepts from the related Mixtures-of-Experts model. The quality of a set of classifiers, and consequently the optimal set of classifiers, is defined by Bayesian model selection, which turns finding this set into a principled optimisation task. A set of preliminary experiments with a simple Pittsburgh-style LCS demonstrates the feasibility of this approach.
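As a sketch of the two ingredients the abstract names (the notation below is ours, chosen for illustration, and not taken from the paper): a Mixtures-of-Experts-style LCS model treats each classifier as a localised expert whose prediction is weighted by a gating function, and Bayesian model selection scores a candidate set of classifiers \(\mathcal{M}\) by its posterior probability given the data \(\mathcal{D}\):

\[
p(y \mid x, \mathcal{M}) = \sum_{k=1}^{K} g_k(x)\, p(y \mid x, \theta_k),
\qquad
p(\mathcal{M} \mid \mathcal{D}) \;\propto\; p(\mathcal{M}) \int p(\mathcal{D} \mid \theta, \mathcal{M})\, p(\theta \mid \mathcal{M})\, \mathrm{d}\theta .
\]

Here \(g_k(x)\) is the gating weight of classifier \(k\) (with \(\sum_k g_k(x) = 1\)) and \(\theta\) collects the classifier parameters. Because the marginal likelihood integrates over parameters, it automatically penalises overly complex classifier sets, so the optimal set can be defined, independently of representation, as the one maximising \(p(\mathcal{M} \mid \mathcal{D})\).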





Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Drugowitsch, J., Barry, A.M. (2008). A Principled Foundation for LCS. In: Bacardit, J., Bernadó-Mansilla, E., Butz, M.V., Kovacs, T., Llorà, X., Takadama, K. (eds) Learning Classifier Systems. IWLCS 2006, IWLCS 2007. Lecture Notes in Computer Science (LNAI), vol. 4998. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88138-4_5


  • DOI: https://doi.org/10.1007/978-3-540-88138-4_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-88137-7

  • Online ISBN: 978-3-540-88138-4

  • eBook Packages: Computer Science (R0)
