This course covers the theoretical foundations of pattern recognition (PR), a starting point for AI. Topics include PR systems, the Bayesian classifier, likelihood-based PR, discriminant-function-based PR, support vector machines, neural-network-based PR, and other PR theories such as fuzzy theory.
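As a flavor of the Bayesian classifier covered in the course, the sketch below models each class with a one-dimensional Gaussian and applies the Bayes decision rule (pick the class maximizing likelihood times prior). The class labels and feature values are made-up illustrative data, not course material.

```python
import math

# Minimal Gaussian Bayesian classifier sketch.
# Each class is modeled by a 1-D Gaussian; we choose the class with the
# highest value of likelihood p(x | c) times prior p(c).

train = {
    "small": [1.0, 1.2, 0.8, 1.1],   # hypothetical feature samples
    "large": [3.0, 3.3, 2.8, 3.1],
}

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(data):
    # Estimate per-class mean, variance, and prior from the samples.
    params = {}
    total = sum(len(v) for v in data.values())
    for label, xs in data.items():
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        params[label] = (mean, var, len(xs) / total)
    return params

def classify(params, x):
    # Bayes decision rule: argmax over classes of likelihood * prior.
    return max(params, key=lambda c: gaussian_pdf(x, params[c][0], params[c][1]) * params[c][2])

params = fit(train)
print(classify(params, 1.05))  # → small
print(classify(params, 2.9))   # → large
```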
This course introduces students to the fundamental concepts and intuition behind modern machine learning techniques and algorithms, ranging from early topics such as the perceptron to more recent ones such as boosting, support vector machines, and Bayesian networks. Statistical inference forms the foundation for most of the algorithms covered in the course.
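The perceptron mentioned above can be sketched in a few lines. This is an illustrative toy (the AND-gate data, learning rate, and epoch count are assumptions for this example): on each misclassified sample, the learning rule nudges the weights toward or away from that sample.

```python
# Minimal perceptron learning sketch on an AND gate (linearly separable,
# so the perceptron convergence theorem guarantees it learns the task).

data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]  # AND gate

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Predict with the sign of the linear score w·x + b.
            score = w[0] * x[0] + w[1] * x[1] + b
            pred = 1 if score > 0 else -1
            if pred != y:  # update weights only on mistakes
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [-1, -1, -1, 1]
```

Note that the perceptron can only separate linearly separable data; handling problems like XOR motivates the multilayer networks and kernel methods (e.g., support vector machines) covered later.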
This course discusses the key differences in architecture and algorithms between conventional information-processing systems (e.g., von Neumann machines) and biological brains. Building on this comparison, we will sketch a basic design for a non-von Neumann, brain-like information-processing system.
This course covers the theory and application of neural networks. In particular, lectures explore the structure and function of neural networks, as well as their learning and generalization behavior. Various neural-network models and their applications are also illustrated.
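To give a sense of the learning process studied in such a course, here is a minimal multilayer-perceptron sketch: a forward pass, backpropagation of the squared-error gradient, and gradient-descent weight updates. The layer sizes, learning rate, and XOR data are assumptions chosen for this example, not course material.

```python
import math
import random

# Tiny feed-forward network: 2 inputs -> 3 sigmoid hidden units -> 1 output.
# Trained by online gradient descent on XOR, which is not linearly
# separable and therefore needs the hidden layer.

random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 3  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

lr = 0.5
before = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagate the squared-error gradient through both layers.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2 before its update
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print(before, loss())  # the training error drops as the network learns
```

Generalization, the other theme named above, would then be assessed on data held out from training rather than on these four training points.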