The most important aspect of computer science is problem solving, an essential skill for life.
Students study the design, development and analysis of software and hardware used to solve problems in a variety of business, scientific and social contexts. During this course, students will study techniques for moving from raw data to a deeper understanding of the patterns and structures within it, in support of prediction and decision making. Students are expected to have some basic knowledge of linear algebra and calculus.
The aim is to understand the foundational skills in data analytics, including preparing and working with data; abstracting and modelling an analytic question; and using tools from statistics, machine learning and data mining to address the question.
Syllabus and overview
Data Analytics involves being able to go from raw data to a deeper understanding of the patterns and structures within the data, to support prediction and decision making. The course will cover a number of topics, including:
- Introduction to analytics, case studies - How analytics is used in practice. Examples of successful analytics work from companies such as Google, Facebook, Kaggle, and Netflix. Suggestions for the course project.
- Basic tools: command line tools, plotting tools, programming tools - The wide variety of tools available for working with data, including Unix/Linux command line tools for data manipulation (sorting, counting, reformatting, aggregating, joining); tools such as gnuplot for displaying and visualizing data; and programming languages such as Perl and Python for powerful data manipulation.
- Statistics: Probability recap, distributions, significance tests, R - The tools from statistics for understanding distributions and probability (means, variance, tail bounds). Hypothesis testing for determining the significance of an observation, and the R system for working with statistical data.
- Database: Data quality, data cleaning, Relational data, SQL, NoSQL - Problems found in realistic data: errors, missing values, lack of consistency, and techniques for addressing them. The relational data model, and the SQL language for expressing queries. The NoSQL movement, and the systems evolving around it.
- Regression: linear regression, least squares, logistic regression - Predicting new data values via regression models. Simple linear regression over low dimensional data, regression for higher dimensional data via least squares optimization, logistic regression for categorical data.
- Matrices: Linear Algebra, SVD, PCA - Matrices to represent relations between data, and the necessary linear algebraic operations on matrices. Approximately representing matrices by decompositions (Singular Value Decomposition and Principal Components Analysis). Application to the Netflix Prize.
- Clustering: hierarchical, k-means, k-center - Finding clusters in data via different approaches. Choosing distance metrics. Different clustering approaches: hierarchical agglomerative clustering, k-means (Lloyd's algorithm), k-center approximations. Relative merits of each method.
- Classification: Trees, NB, Support Vector Machines, Kernel Trick - Building models to classify new data instances. Decision tree approaches and Naive Bayes classifiers. The Support Vector Machines model and use of Kernels to produce separable data and non-linear classification boundaries. The Weka toolkit.
- Data Structures: Bloom Filters, Sketches, Summaries - Data structures to scale analytics to big data and data streams. The Bloom filter to represent large set values. Sketch data structures for more complex data analysis, and other summary data structures.
- Data Sharing: Privacy, Anonymization, Risks - The ethics and risks of sharing data on individuals. Technologies for anonymizing data: k-anonymity, and differential privacy.
- Graphs: Social Network Analysis, metrics, relational learning - Graph representations of data, with applications to social network data. Measurements of centrality and importance. Recommendations in social networks, and inference via relational learning.
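As a small taste of the database topic above, a relational query with a GROUP BY aggregation can be tried using Python's built-in sqlite3 module. This is an illustrative sketch only; the table and rows are invented for the example:

```python
import sqlite3

# In-memory SQLite database with a made-up orders table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)])

# Total spend per customer, one row per distinct customer.
rows = con.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer").fetchall()
print(rows)  # [('alice', 17.5), ('bob', 5.0)]
```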
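The regression topic's least-squares fit can likewise be sketched in a few lines with NumPy. The data here is synthetic (an exact line, so the fit recovers the true coefficients); real datasets replace it in the course:

```python
import numpy as np

# Synthetic data lying exactly on y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Design matrix [x, 1]; least squares solves for slope and intercept.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)  # slope ~2, intercept ~1
```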
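For the clustering topic, Lloyd's algorithm for k-means alternates two steps: assign each point to its nearest center, then move each center to the mean of its assigned points. A minimal sketch, with points and starting centers invented for illustration:

```python
import numpy as np

# Two well-separated groups of points; k = 2.
points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])

for _ in range(10):
    # Assignment step: index of the nearest center for each point.
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Update step: move each center to the mean of its cluster.
    centers = np.array([points[labels == k].mean(axis=0) for k in range(2)])

print(labels)   # cluster assignment per point
print(centers)  # final cluster centers
```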
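Finally, for the data structures topic, a Bloom filter can be sketched in pure Python: a bit array plus several hash functions, answering "definitely absent" or "possibly present". The bit-array size and hash count below are illustrative choices, not recommended parameters:

```python
import hashlib

M, K = 128, 3          # illustrative: M bits, K hash functions
bits = [False] * M

def _hashes(item: str):
    # Derive K hash values by salting sha256 with an index.
    for i in range(K):
        digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
        yield int(digest, 16) % M

def add(item: str):
    for h in _hashes(item):
        bits[h] = True

def might_contain(item: str) -> bool:
    # False means definitely absent; True may be a false positive.
    return all(bits[h] for h in _hashes(item))

add("alice")
add("bob")
print(might_contain("alice"), might_contain("carol"))
```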
By the end of the module, the student should be able to:
- Understand the principles and purposes of data analytics, and articulate the different dimensions of the area.
- Work with and manipulate a data set to extract statistics and features, coping with missing and dirty data.
- Apply basic data mining and machine learning techniques to build a classifier or regression model, and predict values for new examples.
- Identify issues with scaling analytics to large data sets, and use appropriate techniques (NoSQL systems, data structures) to scale up the computation.
- Appreciate the need for privacy, identify privacy risks in releasing information, and design techniques to mitigate these risks.
For this course, there will be 4 hours of teaching per day, consisting of lectures and small-group teaching. The structure will be:
- 3 hours of lectures.
- A 1-hour seminar in small groups.
Students will also be given time each day for independent study. Towards the end of the third week, students will be provided with time for revision.
The module will be assessed via a 2-hour examination. The exam is not compulsory: everyone who completes the course, whether or not they sit the exam, will receive a certificate of attendance. However, taking the exam also earns a grade/mark for the course, which can be helpful to you.
Course Reading List
The main texts for this course are:
- Data Mining: Concepts and Techniques. Jiawei Han, Micheline Kamber, Jian Pei. Morgan Kaufmann, 2011
- Data Manipulation with R. Phil Spector. Springer, 2008
- Machine Learning. Tom Mitchell. McGraw Hill, 1997
- Database Systems: An Application-oriented Approach, Introductory Version. Michael Kifer, Arthur Bernstein, Philip Lewis. Addison Wesley, 2004
- The Works: Anatomy of a City. Kate Ascher. Penguin, 2012