Events
Wed 19 Jun, '13
Measured Value Reading Group, D1.07

Thu 20 Jun, '13
FK Reading Group, C1.06

Fri 21 Jun, '13
Open Day, Zeeman Building

Sat 22 Jun, '13
Open Day, Zeeman Building

Mon 24 Jun, '13
1st Yr Exam Pre-Board, C0.06 Stats Common Rm

Mon 24 Jun, '13
Final Yr Exam Pre-Board, C1.06

Tue 25 Jun, '13
Young Researchers Meeting, C0.06 Stats Common Rm

Wed 26 Jun, '13
SF@W Seminar, A1.01

Wed 26 Jun, '13
Acads & Ext Examiners Lunch, C0.06 Stats Common Rm

Wed 26 Jun, '13
Final Yr Exam Board, C1.06

Thu 27 Jun, '13
FK Reading Group, C1.06

Thu 27 Jun, '13
Release of Finalists' Marks

Thu 27 Jun, '13
NeuroStats Reading Group, A1.01

Thu 27 Jun, '13
Release of First Year Marks

Thu 27 Jun, '13
CRiSM Seminar - Nicolai Meinshausen, A1.01
Nicolai Meinshausen (University of Oxford): "Min-wise hashing for large-scale regression and classification." We take a look at large-scale regression analysis in a "large p, large n" context for a linear regression or classification model. In a high-dimensional "large p, small n" setting, we can typically only get good estimation if there exists a sparse regression vector that approximates the observations. No such assumptions are required for large-scale regression analysis, where the number of observations n can (but does not have to) exceed the number of variables p. The main difficulty is that computing an OLS or ridge-type estimator is computationally infeasible for n and p in the millions, and we need to find computationally efficient ways to approximate these solutions without increasing the prediction error by a large amount. Trying to find interactions amongst millions of variables seems an even more daunting task. We study a small variation of the b-bit min-wise hashing scheme (Li and König, 2011) for sparse datasets and show that the regression problem can be solved in a much lower-dimensional setting as long as the product of the number of non-zero elements in each observation and the l2-norm of a good approximation vector is small. We get finite-sample bounds on the prediction error. The min-wise hashing scheme is also shown to fit interaction models: fitting interactions does not require an adjustment to the method used to approximate linear models, just a higher-dimensional mapping.

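The b-bit min-wise hashing idea mentioned in the abstract can be illustrated with a toy feature mapping. This is only a sketch of the general technique, not the speaker's scheme: the hash construction (random affine maps), the function name `minwise_features`, and all parameter defaults are assumptions for illustration. Each sparse binary observation (a set of non-zero column indices) is mapped to a k * 2^b dimensional one-hot vector on which an ordinary linear model can then be fitted.

```python
import numpy as np

def minwise_features(nonzero_idx, k=8, b=2, seed=0):
    """Map a sparse binary observation (its set of non-zero column
    indices) to a dense vector via b-bit min-wise hashing (a sketch).

    For each of k random hash functions, keep the lowest b bits of the
    minimum hash value over the non-zero indices, then one-hot encode
    that b-bit value, giving a k * 2**b dimensional feature vector.
    """
    rng = np.random.default_rng(seed)
    idx = np.asarray(sorted(nonzero_idx), dtype=np.uint64)
    p = np.uint64(2**61 - 1)                       # a large Mersenne prime
    a = rng.integers(1, p, size=k, dtype=np.uint64)  # random affine hashes
    c = rng.integers(0, p, size=k, dtype=np.uint64)  # h(x) = (a*x + c) mod p
    out = np.zeros(k * 2**b)
    for j in range(k):
        h = (a[j] * idx + c[j]) % p                # hash all non-zero indices
        low = int(h.min()) & (2**b - 1)            # lowest b bits of the min
        out[j * 2**b + low] = 1.0                  # one-hot slot for hash j
    return out
```

Two observations sharing many non-zero indices tend to agree on many of the k minima, so inner products of these features relate to Jaccard similarity; this is what lets a low-dimensional linear fit stand in for the original "large p" regression.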
Fri 28 Jun, '13
Algorithms & Computationally Intensive Inference Seminars, A1.01

Fri 28 Jun, '13
2nd Yr Exam Pre-Board, C1.06

Wed 3 Jul, '13
Teaching Committee, C1.06

Thu 4 Jul, '13
NeuroStats Reading Group, C1.06

Thu 4 Jul, '13
Staff Lunch, C0.06 Stats Common Rm

Thu 4 Jul, '13
2nd Yr Exam Board, C1.06

Wed 10 Jul, '13
CRiSM Seminar - Prof Donald Martin, A1.01
Professor Donald Martin (North Carolina State University): "Computing probabilities for the discrete scan statistic through slack variables." The discrete scan statistic is used in many areas of applied probability and statistics to study local clumping of patterns. Testing based on the statistic requires tail probabilities. Whereas the distribution has been studied extensively, most of the results are approximations, due to the difficulties associated with the computation. Results for exact tail probabilities for the statistic have been given for a binary sequence that is independent or first-order Markovian. We give an algorithm to obtain probabilities for the statistic over multi-state trials that are Markovian of a general order of dependence, and explore the algorithm's usefulness.

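For readers unfamiliar with the statistic in the abstract: the discrete scan statistic of a sequence is the largest number of successes in any window of w consecutive trials. The sketch below computes its exact tail probability for small i.i.d. Bernoulli sequences by brute-force enumeration; it does not reproduce the talk's slack-variable algorithm, and the function names `scan_statistic` and `exact_tail_prob` are hypothetical.

```python
from itertools import product

def scan_statistic(x, w):
    """Discrete scan statistic: the largest number of successes
    in any window of w consecutive trials of the 0/1 sequence x."""
    return max(sum(x[i:i + w]) for i in range(len(x) - w + 1))

def exact_tail_prob(n, w, k, p):
    """P(scan statistic >= k) for n i.i.d. Bernoulli(p) trials,
    by enumerating all 2**n binary sequences (small n only --
    efficient algorithms like the talk's avoid this blow-up)."""
    total = 0.0
    for x in product((0, 1), repeat=n):
        if scan_statistic(x, w) >= k:
            s = sum(x)                      # number of successes in x
            total += p**s * (1 - p)**(n - s)  # probability of sequence x
    return total
```

For example, with n = 3 trials, window w = 2 and p = 1/2, the sequences whose scan statistic reaches 2 are 011, 110 and 111, giving a tail probability of 3/8.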
Thu 11 Jul, '13
NeuroStats Reading Group, A1.01

Tue 16 Jul, '13
Graduation Reception, Main Atrium

Thu 18 Jul, '13
NeuroStats Reading Group, A1.01

Thu 25 Jul, '13
NeuroStats Reading Group, A1.01

Mon 29 Jul, '13 - Fri 2 Aug, '13 (all day)
APTS Warwick, MS.01

Thu 1 Aug, '13
NeuroStats Reading Group, A1.01

Thu 1 Aug, '13
Staff Lunch, C0.06 Stats Common Rm

Mon 2 Sep, '13 - Fri 6 Sep, '13 (all day)
APTS Glasgow