Project 1: Use data (either synthetic or real) to experiment with the Naive Bayes classification method. When does it work well? When does it not work well?
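As a starting point, here is a minimal pure-Python sketch of Gaussian Naive Bayes on synthetic data. Everything in it is an illustrative assumption, not a required design: the function names, the two class means, and the noise level were chosen so that the features really are independent given the class, which is exactly the setting where Naive Bayes should work well.

```python
import math, random

random.seed(0)

def make_data(n, means, std):
    # Two classes, each with two *independent* Gaussian features
    # (the Naive Bayes assumption holds by construction here).
    X, y = [], []
    for label, (m0, m1) in enumerate(means):
        for _ in range(n):
            X.append((random.gauss(m0, std), random.gauss(m1, std)))
            y.append(label)
    return X, y

def fit_gnb(X, y):
    # Per-class prior plus per-feature mean/variance: the "naive"
    # independence assumption lets us model each feature separately.
    params = {}
    for label in set(y):
        pts = [x for x, t in zip(X, y) if t == label]
        stats = []
        for j in range(len(pts[0])):
            col = [p[j] for p in pts]
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col)
            stats.append((mu, var))
        params[label] = (len(pts) / len(X), stats)
    return params

def predict(params, x):
    # Pick the class with the largest log-posterior
    # log P(c) + sum_j log N(x_j; mu_cj, var_cj).
    def log_post(label):
        prior, stats = params[label]
        lp = math.log(prior)
        for v, (mu, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        return lp
    return max(params, key=log_post)

X, y = make_data(200, [(0.0, 0.0), (3.0, 3.0)], std=1.0)
model = fit_gnb(X, y)
acc = sum(predict(model, x) == t for x, t in zip(X, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

A natural experiment for the report: re-generate the data with strongly correlated features (violating the independence assumption) and observe how the accuracy degrades relative to this well-behaved case.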
Project 2: Experiment with both the maximum likelihood estimation and Parzen window estimation methods to compare them. When do they work well? When do they not work well? When is one method more advantageous than the other?
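One way to set up this comparison is a 1-D density estimation experiment where the parametric model is misspecified. The sketch below (all constants, the mixture components, and the bandwidth are assumed values for illustration) fits a single Gaussian by maximum likelihood to bimodal data and compares it against a Parzen window estimate with a Gaussian kernel:

```python
import math, random

random.seed(1)

# Bimodal sample: mixture of N(-2, 0.5) and N(+2, 0.5).
data = [random.gauss(-2, 0.5) for _ in range(300)] + \
       [random.gauss(2, 0.5) for _ in range(300)]

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# --- Maximum likelihood: assume a single Gaussian, fit mean and std ---
mu = sum(data) / len(data)
sigma = math.sqrt(sum((v - mu) ** 2 for v in data) / len(data))
mle_density = lambda x: gaussian_pdf(x, mu, sigma)

# --- Parzen window: average a Gaussian kernel centred at each sample ---
h = 0.3  # bandwidth (assumed here; in practice tune it, e.g. by cross-validation)
parzen_density = lambda x: sum(gaussian_pdf(x, v, h) for v in data) / len(data)

# The true density has modes near +/-2 and a trough at 0. The single-Gaussian
# MLE smears probability mass over the trough; the Parzen estimate does not.
print(f"MLE    density at 0: {mle_density(0.0):.3f}, at 2: {mle_density(2.0):.3f}")
print(f"Parzen density at 0: {parzen_density(0.0):.3f}, at 2: {parzen_density(2.0):.3f}")
```

The flip side, worth demonstrating in the report, is the regime where MLE wins: when the parametric family is correct (e.g., truly Gaussian data) and the sample is small, MLE is far more sample-efficient, while the Parzen estimate is sensitive to the bandwidth choice.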
Project 3: Experiment with linear classifiers and support vector machines to classify data. When do they work well? When do they not work well? In what way is one method more advantageous than the other? How do these two methods compare with the previously studied MLE-based classification and Parzen-window-based classification?
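A self-contained way to contrast the two families is to train a plain perceptron (which stops at any separating hyperplane) against a hinge-loss linear SVM trained by subgradient descent (which pushes toward a large-margin separator). The sketch below is one possible setup, not a prescribed one: the data generator, the Pegasos-style learning-rate schedule, and the regularization constant are all assumptions for illustration.

```python
import random

random.seed(2)

def make_data(n, gap=0.5):
    # Linearly separable 2-D data: label +1 above the line x1 + x2 = 0,
    # -1 below, with a margin band of width `gap` left empty.
    X, y = [], []
    while len(X) < n:
        p = (random.uniform(-2, 2), random.uniform(-2, 2))
        s = p[0] + p[1]
        if abs(s) > gap:
            X.append(p)
            y.append(1 if s > 0 else -1)
    return X, y

def perceptron(X, y, epochs=20):
    # Classic perceptron: update only on mistakes; any separator will do.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), t in zip(X, y):
            if t * (w[0]*x1 + w[1]*x2 + b) <= 0:
                w[0] += t * x1; w[1] += t * x2; b += t
    return w, b

def linear_svm(X, y, lam=0.01, epochs=200):
    # Subgradient descent on the regularized hinge loss
    # lam/2*|w|^2 + mean(max(0, 1 - t*(w.x + b))), Pegasos-style.
    w, b, step = [0.0, 0.0], 0.0, 0
    for _ in range(epochs):
        for (x1, x2), t in zip(X, y):
            step += 1
            eta = 1.0 / (lam * step + 1.0)   # decreasing step size
            margin = t * (w[0]*x1 + w[1]*x2 + b)
            w[0] *= (1 - eta * lam); w[1] *= (1 - eta * lam)
            if margin < 1:                   # inside the margin: hinge term active
                w[0] += eta * t * x1; w[1] += eta * t * x2; b += eta * t
    return w, b

def accuracy(w, b, X, y):
    return sum((1 if w[0]*x1 + w[1]*x2 + b > 0 else -1) == t
               for (x1, x2), t in zip(X, y)) / len(y)

X, y = make_data(200)
wp, bp = perceptron(X, y)
ws, bs = linear_svm(X, y)
print(f"perceptron accuracy: {accuracy(wp, bp, X, y):.2f}")
print(f"SVM accuracy:        {accuracy(ws, bs, X, y):.2f}")
```

On training data like this, both reach high accuracy; the interesting comparisons for the report are on held-out data (where the SVM's margin tends to generalize better) and on noisy or non-separable data (where the perceptron fails to converge while the hinge loss degrades gracefully). Kernelized SVMs also extend this to non-linear boundaries, which links back to the Parzen-window comparison.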
Each project must include both code and a report.