CS 583 Fall 2013
Data Mining and Text Mining
Course Objective
This course has three objectives: first, to provide students with a sound foundation in data mining tasks and techniques; second, to ensure that students are able to read and critically evaluate data mining research papers; third, to ensure that students are able to implement and use some of the important data mining and text mining algorithms.
Think and Ask!
If you have questions about any topic or assignment, DO ASK me or
even your classmates for help; I am here to make the course
understood. DO NOT delay your questions. There is no such thing as a
stupid question. The only obstacle to learning is laziness.
General Information
- Instructor: Bing Liu
- Email: Bing Liu
- Tel: (312) 355 1318
- Office: SEO 931
- Course Call Number: 30286
- Lecture time slots:
- 3:30pm-4:45pm Tuesday & Thursday
- Room: 208 BH
- Office hours: 2:00pm-3:30pm, Tuesday & Thursday (or by appointment)
Grading
- Midterm: 25%
- Final Exam: 40%
- Date and Time: Dec 13, Friday, 1:00pm - 3:00pm.
- Projects: TBA
- Project 1: TBA (15%)
- Project 2: TBA (20%)
- Demo on: TBA
- Report due: TBA
Prerequisites
- Knowledge of probability and algorithms
- Any programming language may be used for the projects
Teaching materials
- Required Textbook:
- References
- Data mining: Concepts and Techniques, by Jiawei Han and Micheline Kamber, Morgan Kaufmann Publishers, ISBN 1-55860-489-8.
- Principles of Data Mining, by David Hand, Heikki Mannila, Padhraic Smyth, The MIT Press, ISBN 0-262-08290-X.
- Introduction to Data Mining, by Pang-Ning Tan, Michael Steinbach, and Vipin Kumar, Pearson/Addison Wesley, ISBN 0-321-32136-7.
- Machine Learning, by Tom M. Mitchell, McGraw-Hill, ISBN 0-07-042807-7
- Data mining resource site: KDnuggets Directory
Topics (subject to change, slides may be changed too)
- Introduction
- Data pre-processing
- Data cleaning
- Data transformation
- Data reduction
- Discretization
- Association rules and sequential patterns
- Basic concepts
- Apriori Algorithm
- Mining association rules with multiple minimum supports
- Mining class association rules
- Sequential pattern mining
- Summary
- Supervised learning (Classification)
- Basic concepts
- Decision trees
- Classifier evaluation
- Rule induction
- Classification based on association rules
- Naive-Bayesian learning
- Naive-Bayesian learning for text classification
- Support vector machines
- K-nearest neighbor
- Bagging and boosting
- Summary
- Unsupervised learning (Clustering)
- Basic concepts
- K-means algorithm
- Representation of clusters
- Hierarchical clustering
- Distance functions
- Data standardization
- Handling mixed attributes
- Which clustering algorithm to use?
- Cluster evaluation
- Discovering holes and data regions
- Summary
- Partially supervised learning
- Semi-supervised learning
- Learning from labeled and unlabeled examples using EM
- Learning from labeled and unlabeled examples using co-training
- Learning from positive and unlabeled examples
- Information retrieval and Web search
- Basic text processing and representation
- Cosine similarity
- Relevance feedback and Rocchio algorithm
- Social network analysis
- Centrality and prestige
- Citation analysis: co-citation and bibliographic coupling
- The PageRank algorithm (of Google)
- The HITS algorithm: authorities and hubs
- Mining communities on the Web
- Opinion mining and sentiment analysis
- Opinion mining problem: the abstraction
- Document-level Sentiment classification
- Sentence-level subjectivity and sentiment classification
- Feature-level (aspect-level) sentiment analysis and summarization
- Opinion Lexicon generation
- Feature/aspect extraction
- Opinion spam or fake review detection
- Recommender systems and collaborative filtering
- Content-based recommendation
- Collaborative filtering based recommendation
- K-nearest neighbor
- Association rules
- Matrix factorization
- Web data extraction
- Wrapper induction
- Automated extraction
- Information integration
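To give a flavor of the material, here is a minimal sketch of one topic from the outline above: the cosine similarity measure used in information retrieval and Web search. The vocabulary and term-frequency vectors below are invented toy data for illustration, not course material.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two term-weight vectors:
    # sim(a, b) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Two toy documents as raw term-frequency vectors over the
# (hypothetical) vocabulary ["data", "mining", "text"]
doc1 = [2, 1, 0]
doc2 = [1, 1, 1]
print(round(cosine_similarity(doc1, doc2), 3))  # prints 0.775
```

Identical vectors score 1.0 and vectors with no terms in common score 0.0, which is why the measure is a natural ranking function for query-document matching.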
Projects - graded (you will demo your programs to me)
- Each group consists of 2 students, and will work on two assignments
- Algorithm implementation: TBA
- Research: TBA
Rules and Policies
- Statute of limitations: No grading questions or complaints, no matter how justified, will be considered more than one week after the item in question has been returned.
- Cheating: Cheating will not be tolerated. All work you submit must be entirely your own. Any suspicious similarities between students' work (this includes exams and programs) will be recorded and brought to the attention of the Dean. The MINIMUM penalty for any student found cheating will be a 0 for the item in question and a drop of the final course grade by one letter. The MAXIMUM penalty will be expulsion from the University.
- MOSS: Sharing code with your classmates is not acceptable!!! All programs will be screened using the Moss (Measure of Software Similarity) system.
- Late assignments: Late assignments will not, in general, be accepted. They will never be accepted if the student has not made special arrangements with me at least one day before the assignment is due. If a late assignment is accepted, it is subject to a reduction in score as a late penalty.
Back to Home Page
By Bing Liu, July 31, 2011