Neural Networks in Statistics and Classification


Contents:
Since the 1940s, mathematicians and engineers have tried to imitate the evident intellectual abilities of animals and humans in classifying objects, predicting trends, or controlling motions by designing electronic automata and mathematical algorithms. These devices and algorithms are collected under the name 'Artificial Neural Networks' (ANNs). Given the enormous development of computer technology in speed and storage capacity, such procedures have been widely used in science and in applications of many kinds (robotics, control, trend prediction, credit scoring, pattern recognition, etc.). The foundations of ANNs have been developed largely by probabilists and statisticians.

The course presents an introduction to the mathematical and statistical foundations of artificial neural networks, with special emphasis on the ubiquitous problems of classification and prediction.

  1. A survey of various types of neural networks:
    MLPs, Hopfield nets, Boltzmann machines, Kohonen maps
  2. Statistical decision theory and discriminant analysis (classification):
    Bayes procedures, likelihood methods, pattern recognition
  3. Multi-layer perceptrons (MLPs) for solving optimization and classification problems,
    back-propagation feed-forward networks (a toy sketch follows this list)
  4. Nonparametric methods for regression and prediction
  5. Learning vector quantization for pattern recognition
  6. Cluster analysis and self-organizing networks
  7. Hopfield networks
  8. Artificial neural networks in the context of physics and engineering
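
As a purely illustrative companion to topics 2 and 3 (not part of the official course material), the following Python sketch trains a one-hidden-layer perceptron by back-propagation on synthetic two-class data. The data, network size, learning rate, and number of epochs are arbitrary choices made for the example only.

# Toy illustration: a one-hidden-layer perceptron trained by back-propagation
# on a synthetic two-class problem. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two Gaussian clusters, labelled 0 and 1.
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)),
               rng.normal(+1.0, 1.0, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)]).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 5 units, sigmoid activations throughout.
W1 = rng.normal(scale=0.5, size=(2, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(2000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(H @ W2 + b2)          # predicted class-1 probability

    # Back-propagation of the cross-entropy loss
    d_out = (p - y) / len(X)                  # gradient at the output layer
    d_hid = (d_out @ W2.T) * H * (1 - H)      # gradient at the hidden layer

    W2 -= lr * H.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")

The script prints the final training accuracy on the toy data; the same pattern (forward pass, gradient computation, weight update) underlies the back-propagation feed-forward networks discussed in topic 3.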

Requirements: Sufficient knowledge in probability and statistics
