
Computer Science Colloquium

Organisers: Markus Brill, Yu Guan

Colloquia normally take place fortnightly on Wednesdays at 2pm in CS1.01 (but there are many exceptions; see below). Additional titles and abstracts will be added as details become available.

Upcoming Colloquia
21st March 2023

15:00 - 16:00
CS1.04

Petar Velickovic (DeepMind)

Title: Reasoning Algorithmically: from Toy Experiments to AGI Modules

Abstract: Neural networks that are able to reliably execute algorithmic computation may hold transformative potential to both machine learning and theoretical computer science. On one hand, they could enable the kind of extrapolative generalisation scarcely seen with deep learning models. On another, they may allow for running classical algorithms on inputs previously considered inaccessible to them.

Over the past few years, the pace of development in this area has gradually become intense. As someone who has been very active in its latest incarnation, I have witnessed these concepts grow from isolated 'toy experiments', through NeurIPS spotlights, all the way to helping detect patterns in complicated mathematical objects (published on the cover of Nature) and supporting the development of generalist reasoning agents.

In this talk, I will give my personal account of this journey, and especially how our own interpretation of this methodology, and understanding of its potential, changed with time. It should be of interest to a general audience interested in graphs, (classical) algorithms, reasoning, and building intelligent systems.

 

24th May 2023
Yali Du (King's College London)

Title/Abstract: TBD

Past Colloquia
10th March 2023

14:00-15:00

MSB 2.23

 

Debmalya Mandal (Max Planck Institute)

Title: AI for Societal Decision-Making

Abstract: Large-scale machine learning models trained on human preferences have led to significant advances in AI capabilities. The next frontier of AI lies in building reliable decision-making systems that society can utilize to support consequential decisions. In this talk, I will first present a framework of AI-enabled societal decision-making and provide a brief overview of my research in several stages of the framework ranging from information elicitation to evaluation. The second part of the talk will focus on the preference aggregation stage, and I will outline my research on designing improved voting rules for aggregating diverse preferences. I will conclude by highlighting some exciting research directions at the intersection of reinforcement learning and preference aggregation with applications focused on societal problems.

Bio: Debmalya Mandal is a postdoctoral researcher at the Max Planck Institute for Software Systems in Saarbrücken, Germany. Previously, he was a postdoctoral fellow at the Data Science Institute of Columbia University. He obtained his Ph.D. in Computer Science from Harvard University, where he was part of the EconCS group. He works on problems in AI and multi-agent systems, with an emphasis on theory and modeling, and with applications focused on societal problems. His paper was selected for an oral presentation at NeurIPS, and he also received a Certificate of Distinction in Teaching from Harvard.

 

8th March 2023

14:00 - 15:00

CS1.01

 

Jose Camacho-Collados (Cardiff)

Title: Natural Language Processing and Social Media: Challenges, Applications and TweetNLP

Abstract: Social media represents a fundamental tool to understand the interactions and progress of society in the 21st century. Despite the large amount of information generated on social media platforms, understanding what is going on is not an easy task, even after the significant NLP progress in recent years. Its multilingual, dynamic, informal and multimodal nature means that standard techniques are seldom optimal.

In this talk, I will summarise some of the latest advances in social media processing, and in particular the development of specialised language models. I will also explain our latest project, which addresses the temporal and dynamic nature of social media, with evaluation studies involving language modelling, named entity recognition and topic classification.

Finally, I will present our latest release, TweetNLP (tweetnlp.org), stemming from an international collaboration between academia and industry. TweetNLP is an integrated platform that enables simple use of cutting-edge NLP models for social media. I will quickly showcase TweetNLP’s online demo and discuss a few applications for social media research, in particular for analysing political communication.
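As a flavour of the kind of usage such platforms target, the short sketch below runs tweet-level sentiment analysis with one of the Cardiff NLP Twitter models through the Hugging Face transformers pipeline. The specific model identifier is an assumption (not taken from the talk), and TweetNLP's own Python API could be substituted.

```python
# Minimal sketch (illustrative, not from the talk): tweet sentiment with a
# Cardiff NLP Twitter model via Hugging Face transformers. The model id below
# is assumed to be available on the model hub.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",  # assumed model id
)

tweets = [
    "Loving the new update, great work!",
    "This outage is really frustrating.",
]
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {tweet}")
```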

Bio: Jose Camacho-Collados is a Senior Lecturer and UKRI Future Leaders Fellow at Cardiff University, leading the Cardiff NLP group. Before joining Cardiff University, he completed his PhD at Sapienza University of Rome and was a Google AI PhD Fellow. Until recently, his research focused on various semantic aspects of NLP from a distributional perspective. He wrote the book “Embeddings in Natural Language Processing” and is the current program co-chair of *SEM 2023. More closely related to the topic of the talk, in the last few years Jose has been working on social media analysis and applications, developing NLP tools specifically targeted at this domain.

24th Feb. 2023

14:00-15:00 (Friday)

CS1.01

 
Matthias Caro (Caltech)

Title: Quantum Computing Meets Machine Learning - A Math/TCS Perspective

Abstract: Machine learning, based on computational and statistical learning theory, constitutes a recent example of how influential applications arise from theoretical innovation. Quantum computing, rooted in quantum information theory, promises to be a future instance of the same phenomenon, providing a new paradigm of computation using features of quantum physics. In this talk, I will give an introduction to the theories underlying machine learning and quantum computing, and I will present recent developments at the intersection of the two fields. I will discuss machine learning models based on trainable quantum circuits, quantum extensions of computational learning theory, and learning from data collected in quantum experiments.

23rd Feb. 2023

14:00-15:00 (Thursday)

MB2.22

Grzegorz Lisowski (Warwick) - Viva Special

Title: An Algorithmic Analysis of Deliberation and Representation in Collective Behaviour

Abstract: The selection of a nominee by a group of players, as part of a wider process of selecting a winner, arises in many contexts. In sports, selecting the best team members is a major strategic problem. Crucially, in politics, this problem is essential to the process of primaries. We study the strategic behaviour of coalitions from a game-theoretic perspective. More precisely, we analyse the existence of pure Nash equilibria in games capturing the strategic nomination problem. First, we adapt the well-known Hotelling-Downs model, capturing the strategic behaviour of political parties in primaries. Subsequently, we explore this problem for tournament-based rules. Nominee selection can also be influenced by deliberation between the voters. To account for that, we investigate the complexity of checking the convergence of a synchronous, threshold-based protocol. Furthermore, we explore computational aspects of the majority illusion, which occurs when many agents in a network perceive a minority opinion as the one held by the majority.

17th Jan. 2023

14:00-15:00

(R0.21)

 
*Joint DIMAP seminar and DCS Colloquium*
Adi Shamir (Weizmann Institute of Science)

Title: Efficient Detection of High Probability Cryptanalytic Properties of Boolean Functions

Abstract: A central problem in cryptanalysis is to find significant deviations from randomness in a given $n$-bit cryptographic primitive. When $n$ is small (e.g., an $8$-bit S-box), this is easy to do, but for large $n$, the only practical way to find such statistical properties was to exploit the internal structure of the primitive and to speed up the search with a variety of heuristic rules of thumb. However, such bottom-up techniques can miss many properties, especially in cryptosystems which are designed to have hidden trapdoors.
In this talk I will consider the top-down version of the problem in which the cryptographic primitive is given as a black box Boolean function, and reduce the complexity of the best known techniques for finding all its significant differential and linear properties by a factor of $2^{n/2}$.

Joint work with Itai Dinur, Orr Dunkelman, Nathan Keller, and Eyal Ronen.
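For intuition on what a "significant differential property" is, the toy sketch below exhaustively tabulates the difference distribution table of a small 4-bit S-box treated as a black box and reports the highest-probability input/output difference pairs. This brute-force enumeration is only an illustration of the object being searched for; the talk is precisely about doing far better than such exhaustive search when n is large.

```python
# Illustrative brute force only: tabulate the differential properties of a tiny
# black-box Boolean function (a 4-bit S-box). The S-box values are a hypothetical
# toy choice; the talk concerns beating this kind of exhaustive search for large n.
from collections import Counter
from itertools import product

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def f(x):                      # the "black box" on n = 4 bits
    return SBOX[x]

n = 4
ddt = Counter()                # (input difference dx, output difference dy) -> count
for x, dx in product(range(2 ** n), repeat=2):
    ddt[(dx, f(x) ^ f(x ^ dx))] += 1

# The most significant differentials, i.e. pairs (dx, dy) with unusually high probability.
top = [(pair, c) for pair, c in ddt.most_common() if pair[0] != 0][:5]
for (dx, dy), count in top:
    print(f"dx={dx:2d} -> dy={dy:2d} holds for {count}/{2 ** n} inputs")
```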

Speaker Bio (Royal Society): Professor Adi Shamir was born in Israel in 1952, and received his PhD from the Weizmann Institute of Science in 1977. He is one of the founders of modern cryptography, and has made significant contributions to many of its branches. In 1977 he co-invented (together with Ron Rivest and Len Adleman) the RSA cryptosystem, which remains the best known and most commonly used public key encryption and signature scheme. Among his other inventions are secret sharing schemes, identity-based schemes, zero-knowledge identification and signature schemes, ring signatures, and a variety of both classical and side-channel attacks on cryptosystems, including differential cryptanalysis, cache attacks, bug attacks, and acoustic attacks. For these contributions he received the Pius XI Gold Medal in 1992, the Turing Award in 2002, the Israel Prize in 2008, and the Japan Prize in 2017. He is a member of the Israeli Academy, the US National Academy of Sciences, the Academia Europaea, the French Academy of Sciences, and the Royal Society.

 

11th Jan. 2023

14:00 - 15:00

(CS1.01)

Jiarui Gan (Oxford)

Title: Sequential persuasion and information design

Abstract: In Bayesian persuasion, a more-informed principal influences the actions of a less-informed agent by signaling information. We introduce a dynamic model of Bayesian persuasion, where the principal and the agent interact over time. The agent takes an action in each time step based on the current state, the principal's advice/signal, and their belief about the external parameter. Meanwhile, the agent's action updates the state according to a stochastic process. The model is a generalization of the Markov decision process, and it can also be seen as a special type of Markov game. It arises naturally in a number of applications; e.g., an app (the principal) can advise its user (the agent) on possible choices between actions based on additional real-time information the app has. In this talk, I will introduce this sequential model and present several complexity and algorithmic results about sequential information design, i.e., the problem of designing an optimal signaling strategy for the principal in this sequential model.

7th Dec. 2022

14:00-15:00

 
Qiang Zhang (Oxford)

Title: MRI Enhancement with AI Virtual Contrast Dye: Deep learning at the forefront of cardiac imaging

Abstract: Since their first launch in 1988, intravenous (IV) contrasts, such as gadolinium-based contrast agents, have been commonly used in current MRI clinical practice, to enhance image contrast and detection of disease. However, IV contrast is cautioned in some patient groups, and increases scan time and costs. After three decades of IV contrasts in MRI clinical practice, AI deep learning emerges to provide a new perspective to rethink contrast enhancement approaches in diagnostic imaging. In this talk, I will present a new concept of enhancing MRI with AI instead of IV contrasts, a framework termed "Virtual Native Enhancement". We have validated it on two heart diseases, and are further extending it to the whole spectrum of commonly encountered heart diseases. This work aims to shift the paradigm of cardiovascular MRI practice from IV contrast enhancement to needle-free AI enhancement, improving its diagnostic utility, making it faster, safer and accessible for more centres and countries.

 

16 November 2022

14:00 - 15:00 (Cancelled)

 

Jose Camacho-Collados (Cardiff)

Title: Natural Language Processing and Social Media: Challenges, Applications and TweetNLP

Abstract: Social media represents a fundamental tool to understand the interactions and progress of society in the 21st century. Despite the large amount of information generated on social media platforms, understanding what is going on is not an easy task, even after the significant NLP progress in recent years. Its multilingual, dynamic, informal and multimodal nature, including the use of emojis, means that standard techniques are seldom optimal. In this talk, I will present advances in social media processing, and in particular the development of language models specialised for social media, as well as unified benchmarks for standard NLP tasks such as sentiment analysis or offensive language identification. I will also explain our latest project, which addresses the temporal and dynamic nature of social media, one of the most important NLP challenges today.

 

2 November 2022

14:00 - 15:00

 

Dongwon Lee (Penn State)

Title: Generative Language Model, Deepfake, and Fake News 2.0: Scenarios and Implications

Abstract: The recent explosive advancements in both generative language models in NLP and deepfake-enabling methods in Computer Vision have greatly helped trigger a new surge in AI research and introduced a myriad of novel AI applications. However, at the same time, these new AI technologies can be used by adversaries for malicious purposes, opening a window of opportunity for fake news creators and state-sponsored hackers. In this talk, I will present a few scenarios where adversaries could exploit these cutting-edge AI techniques to produce more sophisticated fake news by synthesizing realistic artifacts, or to evade detection by state-of-the-art fake news detectors. I will conclude the talk by discussing the important implications of this new type of fake news (i.e., Fake News 2.0) and some future research directions.

19 October 2022

14:00 - 15:00

 

Cambyse Rouze (TU München)

Title: The learnability of quantum observables.

Abstract: In this talk, I will argue that every balanced quantum Boolean function has a geometrically influential variable. I will also derive a quantum analogue of the related Friedgut junta theorem about the approximation of Boolean functions of small total influence by functions of few variables. These results are based on the joint use of recently studied hypercontractivity and gradient estimates. Such generic tools also allow us to derive generalizations of these results in a general von Neumann algebraic setting beyond the case of the quantum hypercube, including examples in infinite dimensions relevant to quantum information theory, such as continuous-variable quantum systems. I will then comment on the implications of our results for noncommutative extensions of isoperimetric inequalities, lower bounding the circuit complexity of quantum circuits, and the learnability of quantum observables.

 

5 October 2022

14:00 - 15:00

 

Nikos Aletras (Sheffield)

Title: How can we improve explanation faithfulness in NLP?

Abstract: Large pre-trained language models (LLMs) currently dominate performance across language understanding tasks. However, their complexity and opaque nature have opened up new challenges on how to extract faithful explanations (or rationales), which accurately represent the true reasons behind a model’s prediction when adapted to downstream tasks. In this talk, I will present recent work from my group on how we can improve the faithfulness of LLM predictions and a study of explanation faithfulness in out-of-domain settings.

 

23 August 2022

14:00 - 15:00
Room: MB 3.17 

Anindya De (UPenn)

Title: Identifying low dimensional data in high dimensional spaces.

Abstract: Assume a Boolean function f on n-dimensional space such that f(x) only depends on the projection of x onto some unknown k << n dimensional subspace. Given access to a black box for f, how efficiently can you test this hypothesis? We show that (a) without any further assumptions, this question is ill-posed, i.e., no finite number of queries suffices; (b) however, if f is mildly smooth (i.e., has bounded surface area), this hypothesis can be tested with query complexity independent of the ambient dimension n. Further, we show that our query complexity is tight up to polynomial factors.

Joint work with Elchanan Mossel and Joe Neeman.

9 June 2022

14:00 - 15:00

Shay Moran (Technion)

Title: Extending Generalization Theory Towards Addressing Modern Challenges in Machine Learning

Abstract: Recent years have witnessed tremendous progress in the field of Machine Learning (ML). However, many of the recent breakthroughs demonstrate phenomena that lack explanations, and sometimes even contradict conventional wisdom. One main reason for this is because classical ML theory adopts a worst-case perspective which seems too pessimistic to explain practical ML: in reality data is rarely worst-case, and experiments indicate that often much less data is needed than predicted by traditional theory. In this talk we will discuss two variations of classical learning theory. These models are based on a distribution- and data-dependent perspective which complements the distribution-free worst-case perspective of classical theory, and is suitable for exploiting specific properties of a given learning task. A common theme of these models is their combinatorial nature. This can be seen as a continuation of the fruitful link between machine learning and combinatorics, which goes back to the discovery of the VC dimension more than 50 years ago.

Based on joint works with Noga Alon, Olivier Bousquet, Steve Hanneke, Ron Holzman, Ramon van-Handel, and Amir Yehudayoff
26 May 2022

16:00 - 17:00

Jessica Sorrell (UCSD)

Title: Reproducibility in Learning

Abstract: Reproducibility is vital to ensuring scientific conclusions are reliable, but failures of reproducibility have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the reproducibility crisis is the explosion of methods for data generation, screening, testing, and analysis, where only the combinations producing the most significant results are reported. Such practices can lead to erroneous findings that appear to be significant, but don’t hold up when other researchers attempt to replicate them.

In this talk, we introduce a new notion of reproducibility for randomized algorithms. This notion ensures that with high probability, an algorithm returns exactly the same output when run with two samples from the same distribution. Despite the exceedingly strong demand of reproducibility, there are efficient reproducible algorithms for several fundamental problems in statistics and learning. We will introduce these problems and the techniques used to design reproducible algorithms for them. Finally, we discuss connections to other well-studied notions of algorithmic stability, such as differential privacy.

This talk is based on joint work with Russell Impagliazzo (UCSD), Rex Lei (UCSD), and Toniann Pitassi (Columbia University), to appear in STOC 2022.
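One standard device in this line of work is to make an estimator reproducible by rounding it to a grid whose random offset is drawn from internal randomness shared between runs. The sketch below illustrates the idea for mean estimation; it is an illustrative simplification under assumed parameters, not the paper's full construction.

```python
# Minimal sketch (assumed illustration): reproducible mean estimation by rounding
# the empirical mean to a grid with a randomly chosen offset, where the offset
# comes from internal randomness SHARED between runs. Two runs on fresh samples
# from the same distribution then usually return the identical value.
import numpy as np

def reproducible_mean(sample, rho, shared_rng):
    grid = 2 * rho                          # grid spacing, chosen >> sampling error
    offset = shared_rng.uniform(0, grid)    # shared internal randomness across runs
    m = np.mean(sample)
    return offset + grid * np.round((m - offset) / grid)

outputs = []
for data_seed in (1, 2):                    # two independent samples...
    data_rng = np.random.default_rng(data_seed)
    shared = np.random.default_rng(42)      # ...but the SAME internal coins
    sample = data_rng.normal(loc=0.3, scale=1.0, size=200_000)
    outputs.append(reproducible_mean(sample, rho=0.05, shared_rng=shared))

print(outputs)                              # typically two identical outputs
```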

17 March 2022

14:00 - 15:00

Anuj Dawar (Cambridge)

Title: The Limits of Symmetric Computation

Abstract: The most famous open problem in theoretical computer science, the P vs. NP problem, challenges us to prove that for some natural search problems, no efficient algorithm is possible. At the moment, we have no idea how to prove such a statement. In order to make meaningful progress, we can restrict the class of algorithms we consider and show that, within these restrictions, no efficient algorithm exists. In this talk, I consider a natural restriction to symmetric algorithms. I explain how symmetries arise naturally in computational problems and why algorithms that respect these symmetries have inherent limitations. Many of our most powerful algorithmic techniques are symmetry-preserving, while others are not. Exploring these limits offers a rich research agenda combining logic, algebra and combinatorics with algorithms.

Bio: Anuj Dawar is Professor of Logic and Algorithms at the University of Cambridge, where he has worked since 1999. He has degrees from IIT Delhi and the University of Delaware, and obtained his PhD at the University of Pennsylvania. Before moving to Cambridge, he worked in Swansea. His research focuses on investigating computational complexity through the lens of logic. He served for five years as president of the European Association for Computer Science Logic. He has chaired (among others) the committees that award the Ackermann Award, the Gödel Prize, the Barry Cooper Prize and the IPEC-Nerode Award.

10 March 2022

14:00 - 15:00

Dominic Orchard (Kent)

Title: Programming for our Future -- The role of computer science in climate science

Abstract: Climate science and computer science are both relatively young fields of study that essentially "grew up" together; some of the first applications of early electronic computers were to model and simulate the weather. The case for the link between increasing levels of CO2 and global warming grew from early computer simulations of the global climate. We are now all aware that time is running out for us to act to reduce global climate change. In this talk, I will discuss the continued role of computer science in helping us understand the climate and the effect of mitigations and changes in our behaviour. I will pick up on some interesting new techniques bringing machine learning research into climate modelling, as well as discuss some of the challenges in modern science around programming complex models. I will also point out the wider role of computing in helping us address the climate crisis directly.

Bio: Dominic Orchard is a researcher in both the theory and practice of programming languages, including applications to climate science research. He co-directs the Institute of Computing for Climate Science at the University of Cambridge and is also a Senior Lecturer at the School of Computing, University of Kent. He received an MEng in Computer Science from the University of Warwick and a PhD in Computer Science from the University of Cambridge.

10 February 2022

14:00 - 15:00

Yves-Alexandre de Montjoye (Imperial)

Title: Using Data while Protecting Privacy in the Digital Era

Abstract: We live in a time when information about most of our movements and actions is collected and stored in real time. The availability of large-scale behavioural data dramatically increases our capacity to understand and potentially affect the behaviour of individuals and collectives. The use of this data, however, raises legitimate privacy concerns. Anonymization is meant to address these concerns: allowing data to be fully used while preserving individuals' privacy. In this talk, I will first describe a line of research in attacks against de-identified datasets showing how traditional data protection mechanisms mostly fail to protect people's privacy in the age of big data. I will then describe what I see as a necessary evolution of the notion of data anonymization towards an anonymous use of data and discuss the pros and cons of some of the modern privacy engineering techniques currently developed ranging from Differential Privacy to Query-Based Systems. I will conclude by describing how, when combined, I think they will allow large-scale behavioural data to be used while giving individuals strong privacy guarantees.

Bio: Yves-Alexandre de Montjoye is an Associate Professor at Imperial College London. He is currently a Special Adviser on AI and Data Protection to EC Justice Commissioner Reynders and a Parliament-appointed expert to the Belgian Data Protection Agency (APD-GBA). In 2018-2019, he was a Special Adviser to EC Competition Commissioner Vestager, co-authoring the Competition Policy for the Digital Era report. Yves-Alexandre worked for the Boston Consulting Group and acted as an expert for both the Bill and Melinda Gates Foundation and the United Nations. He received his PhD from MIT in 2015.

13 January 2022

14:00 - 15:00

Paul Goldberg (Oxford)

Title: Complexity of total search problems

Abstract: There are many theorems of the form "for all, there exists" (such as all games have a Nash equilibrium, or all numbers have a prime factorisation). They have corresponding computational search problems (e.g. given a number, find a prime factorisation), and the word "total" refers to the property that all inputs have solutions, not just some of them. They include some important problems that do not seem to have polynomial-time algorithms, even though some of them are easy in practice. In the talk, I give an overview of work on classifying their computational complexity.

Bio: Since 2013 I have been a professor at the Dept of Computer Science, University of Oxford. Previously I was a professor at the Dept of Computer Science, University of Liverpool, where I moved in 2006. I was founding head of the ECCO research group. At both Oxford and Liverpool, I’ve worked extensively on algorithmic game theory and related problems in algorithms and computational complexity. Prior to this I was at the Department of Computer Science, University of Warwick, from 1997. I have also worked at Aston University and Sandia National Labs (working on algorithmic problems from computational biology), USA. PhD (Edinburgh, ’92) in computational learning theory; BA in mathematics (Oxford, ’88).

07 October 2021

14:00 - 15:00

Mohamed Abdelfattah (Cornell Tech)

Title: Rethinking Deep Learning Computations: From AutoML to Hardware

Abstract: The explosion and availability of data has created a new computation paradigm based on deep learning. Consequently, we need to rethink both software and hardware to make these computations possible and to keep up with the ever-increasing computation demand. In this talk, I will describe how we use automated machine learning (AutoML) to enable on-device AI through neural architecture search, compression, hardware modelling and efficient search algorithms. Next, I will give an overview of my experience in designing hardware and compilers for deep learning acceleration – I will focus on a new automated co-design methodology that simultaneously improves efficiency and accuracy. Finally, I will focus on reconfigurable devices like FPGAs. I will describe how an embedded network-on-chip can transform FPGAs into a general-purpose computation platform well-suited for deep learning.

Bio: Mohamed is an incoming assistant professor of ECE at Cornell University and Cornell Tech. He is currently a research team lead at the Samsung AI Center in Cambridge UK, working on the codesign of deep learning algorithms and hardware. Before that, he was at Intel building an FPGA-based accelerator and compiler for deep neural networks. Mohamed did his PhD at the University of Toronto, during which he was awarded the Vanier Canada Graduate scholarship and three best paper awards for his work on embedded networks-on-chip for FPGAs.

21 October 2021

14:00 - 15:00

Milan Vojnovic (LSE)

Title: Learning to Schedule

Abstract: In this talk, I will present new results for two scheduling problems with unknown parameters that require using a learning algorithm along with a scheduling policy. The first problem is for a multi-class, multi-server queueing system with stochastic rewards of assignments of jobs to servers, with mean rewards depending on the features of jobs and servers according to a bilinear model. The goal is to maximize the expected reward of assignments while maintaining the queueing system stable. The second problem is scheduling jobs with stochastic holding costs with unknown mean values in a single server system. The goal in this problem is to minimize the expected total job holding cost. For both these problems, I will present new algorithms and regret bounds. The motivating application scenarios for these problems arise in data processing platforms, crowdsourcing systems, and other online platforms. Joint work with Jung-Hun Kim and Dabeen Lee

Bio: Milan Vojnovic is a Professor of Data Science in the Department of Statistics at LSE. He was a Researcher at Microsoft Research from 2004 to 2016. He received a Ph.D. degree from EPFL in 2003. He was an affiliated lecturer at the University of Cambridge from 2014 to 2016, and held a visiting researcher position with Facebook from 2019 to 2021. Milan’s research interests are in optimisation, machine learning, and the design of data-driven algorithms. His research work has been recognised by various awards, including a 2010 ACM Sigmetrics Rising Star Researcher Award, a 2005 ERCIM Cor Baayen Award, and several best paper awards at various conferences. He received a Facebook Faculty Research Award in 2019 and a Criteo Faculty Research Award in 2018.

04 November 2021

17:00 - 18:00

Divyakant Agrawal (UC Santa Barbara)

Title: On the power of summary statistics in large-scale data management

Abstract: During the past two decades we have seen an unprecedented increase in the amount of data that is being generated from numerous internet-scale applications. Before this data can be subject to modelling and analysis, it becomes necessary to obtain summary statistics such as the cardinality of unique visitors, frequency counts of users from different states or countries, and in general the quantile and median information of the dataset. Efficient algorithms exist for computing the exact information over the data. Unfortunately, these algorithms require a considerable amount of time, scanning the data multiple times, or require additional storage that is linear in the size of the dataset itself. Approximation methods, with guaranteed error bounds, developed in the context of streaming data are extremely effective for extracting useful and relatively accurate knowledge from big data. In this talk, we will start with a review of one such approximation technique for estimating frequency counts over streaming data where only insert operations are permitted. We will then present a refinement of this technique in the context of large datasets that need to handle both insert and delete operations.
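For concreteness, one classic approximation technique for frequency counts over insert-only streams is the Misra-Gries summary; the sketch below is purely illustrative and is not necessarily the specific technique reviewed in the talk.

```python
# Minimal sketch of the Misra-Gries summary: estimates item frequencies in an
# insert-only stream using at most k-1 counters, with additive error n/k.
# (Illustrative only; the talk may present a different or refined technique.)
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # decrement every counter and drop those that reach zero
            counters = {x: c - 1 for x, c in counters.items() if c > 1}
    return counters                      # lower bounds on the true counts

stream = ["uk"] * 500 + ["us"] * 300 + ["de"] * 150 + list("abcdefghij") * 5
print(misra_gries(stream, k=4))          # the heavy hitters dominate the summary
```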

Bio: Divyakant Agrawal is a Professor of Computer Science at the University of California at Santa Barbara. His research expertise is in the areas of distributed systems and databases. He is a fellow of the AAAS, ACM, and IEEE. He has also had visiting appointments at IBM Almaden Research Labs, NEC Research, ASK.com, Google, Qatar Computing Research Institute, and the National University of Singapore over the course of his professional career.

25 November 2021

14:00 - 15:00

Claus Brabrand (ITU Copenhagen)

Title: How to Increase the Diversity in Computer Science

Abstract: Women are widely underrepresented in Computing studies. We will consider why this is a problem from various perspectives. In 2015, only 10% of students on ITU’s Bachelor of Software Development were women. ITU decided to do something about this and a number of initiatives were launched. Now, the percentage of women has risen to more than 20%. We will present an overview of ITU’s efforts to address this gender imbalance in Computing. We will also present recent research on how to change educational activities so that they appeal more to women and to students without prior programming experience. Finally, we will show the effect of ITU’s onboarding initiative BootIT and how this connects with increasing diversity.

Bio: Claus Brabrand holds a Ph.D. in Computer Science from the BRICS International Research Center at Aarhus University (January 2003). He is the writer, director, and co-producer of the award-winning educational short-film “Teaching Teaching & Understanding Understanding” (2006) used around the world for educational development. Since 2007, Claus Brabrand has been an Associate Professor at the IT University of Copenhagen, conducting research within the area of Programming Languages, Program Analysis, and Software Product Lines. Parallel to this, he has worked with Educational Development and "teaching teachers to teach", both at ITU, nationally, and internationally and has done a number of keynotes within this area.

13 May 2021

14:00 - 15:00

 

Eiko Yoneki (Cambridge)

Title: Optimising Computer Systems in Complex and Dynamic Parameter Space

Abstract: Performance tuning of computer systems is challenging for a variety of reasons. Modern computer systems expose many configuration parameters in a complex and massive parameter space. The systems are nonlinear, and there is no general method for quantifying or modelling such systems to the level of precision required for performance tuning. Auto-tuning approaches have emerged that use a black-box optimiser such as Bayesian Optimisation (BO). However, BO has limited scalability. Reinforcement Learning (RL) could be applied to combinatorial optimisation problems, but there is a gap between current research and practical RL deployments. I will introduce our framework to tackle these issues and demonstrate the potential of machine-learning-based methodologies for computer system optimisation.

Bio: Eiko Yoneki is a senior researcher in the Systems Research Group of the University of Cambridge Computer Laboratory and a Turing Fellow at the Alan Turing Institute. Her research interests span distributed systems, networking and databases, including large-scale graph processing. Her group’s current research focusses on auto-tuning of data processing/analytics framework to deal with complex parameter space using ML.

27 May 2021

14:00 - 15:00

 

Ueli Maurer (ETH)

Title: Constructive Cryptography

Abstract: Modularization is a key principle in any constructive discipline. One wants to obtain complex constructions as the composition of simpler construction steps, where each step constructs an object satisfying a certain specification from other objects satisfying certain (weaker) specifications. Constructive cryptography is a framework in which cryptographic methods are understood (and defined) as such construction steps. For example, a secure encryption scheme constructs a secure channel from an authenticated channel and a shared secret key. The design of cryptographic protocols corresponds to the composition of such construction steps, where for example the shared secret key needed in the above construction can itself be constructed, e.g. by a key agreement protocol. The security proof for the protocol is then simply a consequence of the security proofs of the individual steps. In this talk we give a self-contained introduction to constructive cryptography suitable for a non-specialist audience and present some recent new developments.

Bio: Ueli Maurer is Full Professor of Computer Science at ETH Zurich. His research interests include the theory and applications of cryptography, information security, theoretical computer science, information theory, and discrete mathematics. One of his long-term research goals is to establish a constructive theory of cryptography and to apply it to the modular design of provably-secure cryptographic protocols. He served as Editor-in-Chief of the Journal of Cryptology from 2002 to 2010. He is an IEEE Fellow, an ACM Fellow, an IACR Fellow, and a member of the German Academy of Sciences (Leopoldina). He received the 2013 Vodafone Innovation Award for Mobile Communications, the 2016 RSA Award for Excellence in Mathematics, and the 2016 Theory of Cryptography (TCC) Test-of-Time Award. Maurer has served many companies and start-ups as a board member and consultant and has advised many government organisations. In 2018 he co-founded the Concordium project and foundation, aiming at creating a global, universally trusted, provably incorruptible and secure transaction platform.

24 June 2021

14:00 - 15:00

 

Tobias Weinzierl (Durham)

Title: Stop talking to me (again): some communication-avoiding programming patterns

Abstract: In this talk, we generalise the term communication-avoiding. Beyond its well-established meaning from linear algebra, we make it comprise (i) the reduction of data volume, (ii) the elimination of (meta) data generation, (iii) the reduction of data exchange frequency, (iv) the homogenisation of data access, (v) data access hiding and (vi) the localisation of data transfers. These criteria apply to both classic data exchange between compute nodes as well as data movements on the chip. Communication-avoiding then tackles the problematic divergence sketched above.

Bio: Tobias Weinzierl is Professor in the Department of Computer Science at Durham University, Durham, UK. He previously served at the Munich Centre for Advanced Computing (see the Springer edited book, Advanced Computing), and holds a PhD and habilitation from the Technical University of Munich.

21 January 2021

14:00 - 15:00

Arthur Gervais (Imperial College; Liquidity Network)

Title: Flash Loans for Fun and Profit

Abstract: Credit allows a lender to loan out surplus capital to a borrower. In the traditional economy, credit bears the risk that the borrower may default on its debt; the lender hence requires upfront collateral from the borrower, plus interest fee payments. Due to the atomicity of blockchain transactions, lenders can offer flash loans, i.e., loans that are only valid within one transaction and must be repaid by the end of that transaction. This concept has led to a number of interesting attack possibilities, some of which were exploited in February 2020. In this talk we will explore the implications of transaction atomicity and flash loans for the nascent decentralized finance (DeFi) ecosystem. We will analyze two existing attacks with ROIs beyond 500k% and formulate finding the attack parameters as an optimization problem over the state of the underlying Ethereum blockchain and the state of the DeFi ecosystem. We will show how malicious adversaries can efficiently maximize an attack profit and hence damage the DeFi ecosystem further. Specifically, we will present how two previously executed attacks can be “boosted” to result in a profit of 829.5k USD and 1.1M USD, respectively, which is a boost of 2.37× and 1.73×, respectively.

Bio: Arthur Gervais is a Lecturer (equivalent Assistant Professor) at Imperial College London, and the founder of Liquidity Network. He is passionate about information security and has worked on blockchain-related topics since 2012, with a recent focus on Decentralized Finance (DeFi).
4 February 2021

14:00 - 15:00

Ronitt Rubinfeld (MIT)

Title: Local computation algorithms

Abstract: Consider a setting in which the inputs to and outputs from a computational problem are so large that there is not time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input? We survey recent work in the model of "local computation algorithms" which, for a given input, supports queries by a user to values of specified bits of a legal output. The goal is to design local computation algorithms in such a way that very little of the input needs to be seen in order to determine the value of any single bit of the output. Though this model describes sequential computations, techniques from local distributed algorithms have been extremely important in designing efficient local computation algorithms. In this talk, we describe results on a variety of problems for which sublinear time and space local computation algorithms have been developed -- we will give special focus to finding maximal independent sets and generating random objects.
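As a small illustration of the model (not one of the algorithms surveyed in the talk), the sketch below answers a single query "is vertex v in the maximal independent set?" by locally simulating the greedy MIS under a shared random vertex ordering, recursing only on lower-ranked neighbours; the graph and ranking here are assumed toy inputs.

```python
# Minimal sketch (illustrative): a local computation query for membership in the
# maximal independent set produced by greedily scanning vertices in a random
# order. A vertex joins iff no lower-ranked neighbour has already joined.
import random
from functools import lru_cache

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}  # toy graph

rng = random.Random(0)
rank = {v: rng.random() for v in graph}      # shared random ordering of vertices

@lru_cache(maxsize=None)
def in_mis(v):
    # recurse only on neighbours that come earlier in the random order
    return all(not in_mis(u) for u in graph[v] if rank[u] < rank[v])

print({v: in_mis(v) for v in graph})         # consistent answers, one query per vertex
```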

Bio: Ronitt Rubinfeld is the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, and is on the faculty at Tel Aviv University. Her research interests include sublinear time algorithms, local algorithms and algorithms for testing discrete distributions over large domains. Ronitt Rubinfeld has been an ONR Young Investigator, a Sloan Fellow, an invited speaker at the International Congress of Mathematicians (2006) and an ACM Fellow.

15 March 2021

15:00 - 16:00

 

Amanda Prorok (Cambridge)

Title:
Learning to Communicate in Multi-Agent Systems

Abstract:
Effective communication is key to successful multi-agent coordination. Yet it is far from obvious what, how and when information needs to be shared among agents that aim to solve cooperative tasks. In this talk, I discuss our recent work on using Graph Neural Networks (GNNs) to solve multi-agent coordination problems. In my first case-study, I show how we use GNNs to find a decentralized solution to the multi-agent path finding problem, which is known to be NP-hard. I demonstrate how our policy is able to achieve near-optimal performance, at a fraction of the real-time computational cost. Secondly, I show how GNN-based reinforcement learning can be leveraged to learn inter-agent communication policies. In this case-study, I demonstrate how non-shared optimization objectives can lead to adversarial communication strategies. Finally, I address the challenge of learning robust communication policies, enabling a multi-agent system to maintain high performance in the presence of anonymous non-cooperative agents that communicate faulty, misleading or manipulative information.

Bio:
Amanda Prorok is an Assistant Professor (University Lecturer) in the Department of Computer Science and Technology, at Cambridge University, UK, and a Fellow of Pembroke College. Her mission is to find new ways of coordinating artificially intelligent agents (e.g., robots, vehicles, machines) to achieve common goals in shared physical and virtual spaces. Amanda Prorok has been honored by an ERC Starting Grant, an Amazon Research Award, an EPSRC New Investigator Award, an Isaac Newton Trust Early Career Award, and the Asea Brown Boveri (ABB) Award for the best thesis at EPFL in Computer Science.

29 October 2020

14:00 - 15:00

Video of the colloquium

Benjamin Guedj (UCL)

Title: A primer on PAC-Bayesian learning, followed by News from the PAC-Bayes frontline

Abstract: PAC-Bayes is a generic and flexible framework to address the generalisation abilities of machine learning algorithms. It leverages the power of Bayesian inference and allows one to derive new learning strategies. I will briefly present the key concepts of PAC-Bayes and illustrate a few of its recent successes, including applications to deep neural networks, coherent risk measures, contrastive learning and robust learning.

Bio: Benjamin Guedj is a Principal Research Fellow (~Associate Professor) at University College London (Centre for Artificial Intelligence and Department of Computer Science). He is also a tenured research scientist at Inria, the top French research institute in mathematics and computer science, and a visiting researcher with the Alan Turing Institute. Since Sept. 2020, he has been the founder and scientific director of The Inria London Programme, a joint lab between France and the UK. He holds a PhD in mathematics from Sorbonne Université (Paris, France) and focuses on statistical learning theory, PAC-Bayes, computational statistics and theory for deep learning, among other topics.

19 November 2020

17:00 - 18:00

Rocco Servedio (Columbia University)

Title: Testing noisy linear functions for sparsity

Abstract: This talk explores a question at the intersection of two well-studied topics: (1) sparse recovery, and (2) property testing. Consider the following basic sparse recovery problem: An algorithm is given noisy input-output examples of a linear function over a high-dimensional space. Given the promise that the linear function is sparse, breakthrough work of Candes, Romberg and Tao shows that the linear function can be recovered from a number of examples that scales logarithmically in the dimension (and this dependence is required). We look at this problem from the vantage point of property testing, by studying the complexity of merely determining whether the unknown linear function is sparse versus "far from" sparse. We show that this decision problem can be solved with a number of samples which is completely independent of the dimension if the background high-dimensional distribution of samples has i.i.d. components that are not Gaussian.

Bio: Rocco Servedio is a professor and chair of the Department of Computer Science at Columbia University. His research is in theoretical computer science, and his main research interests lie in computational complexity theory, computational learning theory, property testing, and the role of randomness in computation.

26 November 2020

14:00 - 15:00

 

John Daugman (University of Cambridge)

Title: Neural Computing, Identity, and Visual Recognition of Persons

Abstract: Brains and computers have long been used as metaphors for each other. In recent years the link has grown stronger, with the explosion of results in machine learning based on big multi-layer networks, and with data from wet neurobiology showing that each cubic millimeter (mm^3) of brain tissue contains about 3 kilometers of "wiring" (neural connections). This talk discusses vision both in machines and in mammals, focusing on pattern encoding mechanisms and their efficiency. The main application areas discussed are face recognition and the IrisCode, which has recently been used to enroll the entire 1.3 billion population of India in a national ID system. Unifying theoretical ideas are entropy, and visual encoders that are optimal under the Uncertainty Principle.

Bio: John Daugman received his degrees at Harvard University and then taught at Harvard before coming to Cambridge University, where he is Professor of Computer Vision and Pattern Recognition. His areas of research include computer vision, information theory, neurocomputing and statistical pattern recognition. Awards for his work in science and technology include the Information Technology Award and Medal of the British Computer Society, the "Time 100" Innovators Award, and the OBE, Order of the British Empire. He is the founder and benefactor of the Cambridge Chrysalis Trust.

10 December 2020

14:00 - 15:00

 

Ava Khamseh (University of Edinburgh)

Title: Higher-order interactions in statistical physics, machine learning and biomedicine

Abstract: The problem of inferring pairwise and higher-order interactions in complex systems involving large numbers of interacting variables, from observational data, is fundamental to many fields. Known to the statistical physics community as the inverse problem, it has become accessible in recent years due to real and simulated big data being generated. In the first part of this talk, we discuss extracting interactions from data using a neural network approach, namely the Restricted Boltzmann Machine. In the second part, we discuss a model-independent and unbiased estimator of symmetric interactions for any system of binary and categorical variables, be it magnetic spins, nodes in a neural network, or gene networks in biology. The generality of this technique is demonstrated analytically and numerically in various examples.

Bio: Ava Khamseh received her PhD in Theoretical Particle Physics from the University of Edinburgh in 2017. She then became a Cross-Disciplinary Fellow (XDF) at the Institute of Genetics and Molecular Medicine at the University of Edinburgh. She is currently a Lecturer in Biomedical Artificial Intelligence, joint between the School of Informatics and IGMM. Her research involves designing experiments and developing causal mathematical, statistical and machine learning methods for applications to cancer biology and large-scale population biomedicine.

Third term, 2019-2020:

2 July 2020

14:00 - 15:00

Video of the colloquium

Eli Ben-Sasson (StarkWare)

Title: Blockchain Scalability and Privacy via Scalable and Transparent zk-STARKs

Abstract: Two of the biggest challenges of blockchains like Bitcoin and Ethereum are (1) scalability - growing throughput 10x, 100x, 1000x, ... and (2) maintaining privacy. Scalable ZK proof systems can solve both problems. This talk will recount the theory-to-practice journey of ZK proofs and describe questions in algebraic coding theory whose resolution will help improve scalability and privacy even more.

Bio: Eli is a co-founder and president of StarkWare, and Chairman of its Board of Directors. Eli has been passionate about the theory (“moon math”) and realization of transparent computational integrity since 2001, when he was a post-doctoral researcher at MIT and Harvard University, after completing his PhD in CS at the Hebrew University. Prior to co-founding StarkWare, Eli was a Professor of Computer Science at Technion – Israel Institute of Technology, which he joined in 2005. He is also a co-inventor of the Zerocash decentralized anonymous payment system and a founding scientist of the Electric Coin Company (aka the Zcash company).
18 June 2020

15:00 - 16:00

Video of the colloquium

Aleksander Madry (MIT)

Title: Are All Features Created Equal?

Abstract: Our machine learning models have attained impressive accuracy on many benchmark tasks. Yet, these models remain remarkably brittle---small perturbations of natural inputs can completely degrade their performance. Why is this the case? In this talk, we show that this brittleness can, in part, be attributed to the fact that our models often make decisions in a very different way than humans do. Viewing neural networks as feature extractors, we study how features extracted by neural networks may diverge from those used by humans, and how adversarially robust models seem to make progress towards bridging this gap.

Bio: Aleksander Madry is a Professor of Computer Science in the MIT EECS Department and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, he spent some time at Microsoft Research New England and on the faculty of EPFL. Aleksander’s research interests span algorithms, continuous optimization, science of deep learning and understanding machine learning from a robustness perspective. His work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and 2018 Presburger Award.

16 January 2020

14:00 - 15:00

MSB2.22

Andreas Vlachos (Cambridge)

Title: Automated Fact Checking: a natural language processing perspective

Abstract:
Fact checking is the task of verifying a claim against sources such as knowledge bases and text collections. While this task has been of great importance for journalism, it has recently become of interest to the general public as it is one of the weapons against misinformation. In this talk, I will first discuss the task and what should be the expectations from automated methods for it. Following this, I will present our approach for fact checking simple numerical statements which we were able to learn without explicitly labelled data. Then I will describe how we automated part of the manual process of the debunking website emergent.info, which later evolved into the Fake News Challenge with 50 participants. Finally, I will present the Fact Extraction and Verification shared task, which took place in 2018, as well as the recent second edition which conducted an adversarial evaluation of the proposed approaches.

4 February 2020

10:00 - 11:00
Unusual room: R0.21

Hanan Samet (University of Maryland) -- ACM Distinguished Speaker

Title: Reading News with Maps by Exploiting Spatial Synonyms

Abstract:
NewsStand is an example application of a general framework to enable people to search for information using a map query interface, where the information results from monitoring the output of over 10,000 RSS news sources and is available for retrieval within minutes of publication. The advantage of doing so is that a map, coupled with an ability to vary the zoom level at which it is viewed, provides an inherent granularity to the search process that facilitates an approximate search. This distinguishes it from today's prevalent keyword-based conventional search methods that provide a very limited facility for approximate searches and which are realized primarily by permitting a match via use of a subset of the keywords. However, it is often the case that users do not have a firm grasp of which keyword to use, and thus would welcome the search to also take synonyms into account. For queries to spatially-referenced data, the map query interface is a step in this direction as the act of pointing at a location (e.g., by the appropriate positioning of a pointing device) and making the interpretation of the precision of this positioning specification dependent on the zoom level is equivalent to permitting the use of spatial synonyms (i.e., letting spatial proximity play a role rather than only seeking an exact match of a query string). Of course, this is all predicated on the use of a textual specification of locations rather than a geometric one, which means that one must deal with the potential for ambiguity.

6 February 2020
14:00 - 15:00
MSB2.22

Jose Such (King's College)

Title: Multiparty Privacy and AI Security & Privacy

Abstract:
In this talk, I will provide an overview of two problems I have been contributing to in the past years: i) Multiparty Privacy; ii) AI Security & Privacy. Regarding i), privacy is not just about what an individual user discloses about herself; it also involves what others (e.g. her friends) may disclose about her. Multiparty privacy is concerned with information pertaining to several individuals and the conflicts that arise when the privacy preferences of these individuals differ. In particular, I will go through the research we have been doing on multiparty privacy in a social media context and the future challenges in general in this field. Regarding ii), I will be talking about the work we have been doing at the intersection of HCI and AI on security and privacy in AI-based systems, including privacy threats in AI-based systems, users’ perceptions of security and privacy in AI-based systems, socio-technical approaches to tackle AI-based discrimination against particular types of users (e.g. based on gender/ethnicity), and symbolic AI approaches that can help towards addressing security, privacy and transparency issues in AI-based systems.

27 February 2020
14:00 - 15:00

Carl Henrik Ek (Bristol)

Title: Modulated Bayesian Optimisation

Abstract:
Bayesian optimization (BO) methods often rely on the assumption that the objective function is well-behaved, but in practice this is seldom true. Common approaches, which try to model the objective as precisely as possible, often fail to make progress by spending too many evaluations modelling irrelevant details. In this talk, I will introduce a set of surrogate models that focus on the details that are informative for search while ignoring detrimental structure that is challenging to model from a few observations. We demonstrate that surrogate models with appropriate noise distributions can absorb challenging structures in the objective function by treating them as irreducible uncertainty.
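For readers unfamiliar with the baseline being modulated, the sketch below is a plain Gaussian-process Bayesian optimisation loop with expected improvement on a deliberately ill-behaved 1-D objective; it is a generic illustration using scikit-learn and scipy (with an assumed toy objective), not the modulated surrogates proposed in the talk.

```python
# Minimal sketch of vanilla GP-based Bayesian optimisation with expected
# improvement (minimisation), on a noisy, discontinuous toy objective.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # "badly behaved" toy objective: smooth trend + discontinuities + noise
    return np.sin(3 * x) + 0.1 * np.sign(np.sin(40 * x)) + 0.05 * np.random.randn()

bounds = (0.0, 2.0)
X = np.random.uniform(*bounds, size=(3, 1))          # a few initial evaluations
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.05**2, normalize_y=True)

for _ in range(15):
    gp.fit(X, y)
    grid = np.linspace(*bounds, 500).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)        # expected improvement
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]                     # evaluate where EI is largest
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best x found:", float(X[np.argmin(y)][0]), "f(x):", float(y.min()))
```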

12 March 2020
14:00 - 15:00

Tugkan Batu (LSE)

Title: Testing Discrete Distributions -- a CS Retrospective

Abstract: Making inferences about the distributional properties of data is arguably one of the most fundamental classes of data analysis tasks. The study of such inferences has been a scientific endeavour for a long time, typically, by statistics and information theory. In the last two decades, testing problems for distributions, especially, on discrete domains have been studied also in the theoretical computer science community, often from a slightly different perspective. The primary objective in this line of research is characterising the number of samples needed, in terms of the domain size or some function of the distributions involved, to perform a given inference task reliably. In this talk, I will present an overview of this very active field of research, describing some basic models and problems, landmark results, and future directions.

First term, 2019-2020:

10 October 2019

14:00 - 15:00

Sunil Prabhakar (Purdue University)
Title: Database Fidelity Without Trust

Abstract:
Ensuring the trustworthiness of data retrieved from a database is of utmost importance to users. Databases are typically subjected to updates from multiple clients, often at very high rates. The correctness of data stored in a database is defined by the faithful execution of only valid (authorized) transactions. In this talk we address the question of whether it is necessary to trust a database server in order to trust the data retrieved from it. The lack of trust arises naturally if the database server is owned by a third party, as in the case of cloud computing, if it is likely that the server may have been compromised, or if there is a malicious insider.

24 October 2019

14:00 - 15:00

Evangelia Kalyvianaki (Cambridge)

Title: THEMIS -- Fairness in Federated Stream Processing under Overload

Abstract:
Federated stream processing systems, which utilise nodes from multiple independent domains, can be found increasingly in multi-provider cloud deployments, internet-of-things systems, collaborative sensing applications and large-scale grid systems. To pool resources from several sites and take advantage of local processing, submitted queries are split into query fragments, which are executed collaboratively by different sites. When supporting many concurrent users, however, queries may exhaust available processing resources, thus requiring constant load shedding. Given that individual sites have autonomy over how they allocate query fragments on their nodes, it is an open challenge how to ensure global fairness in the processing quality experienced by queries in a federated scenario.
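The abstract leaves the mechanism open, but one generic way to think about fairness-aware load shedding (not the THEMIS algorithm itself) is to repeatedly drop tuples from the queries that currently enjoy the highest processing quality, so that degradation spreads across the best-off queries rather than concentrating on a few:

import heapq

# Quality = fraction of each query's tuples currently being processed.
# Each shed tuple lowers the owning query's quality by a small amount;
# always shedding from the current best-off query equalises the degradation.
def shed(quality, loss_per_tuple, tuples_to_drop):
    heap = [(-q, name) for name, q in quality.items()]
    heapq.heapify(heap)
    for _ in range(tuples_to_drop):
        q, name = heapq.heappop(heap)
        heapq.heappush(heap, (-max(-q - loss_per_tuple[name], 0.0), name))
    return {name: -q for q, name in heap}

quality = {"q1": 0.9, "q2": 0.6, "q3": 0.8}
loss = {"q1": 0.01, "q2": 0.01, "q3": 0.01}
print(shed(quality, loss, 25))   # q1 and q3 end up close to each other; q2 is untouched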

7 November 2019
14:00 - 15:00
Ata Kaban (Birmingham)

Title: Uncovering structure with randomness for learning in high dimensions

Abstract:
Random projection (RP) is a simple, computationally efficient linear dimensionality reduction technique with interesting theoretical properties for learning from data. For instance, it can regularise ill-conditioned covariance matrices, and it preserves dot products between vectors, and their signs, to an extent that depends on their cosine similarity. It can speed up computations on high-dimensional data sets while retaining control of generalisation performance. Furthermore, in this talk we discuss how RP can also be used as an analytic tool to better understand why learning from high-dimensional data is not always as difficult as it looks. We present generalisation bounds in which the use of RP highlights how learning can take advantage of certain benign geometric structures in the problem, and yields conditions that reduce the dimension-dependence of error guarantees in settings where such dependence is known to be essential in general.
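A minimal numpy sketch of the basic phenomenon (illustrative, not the bounds from the talk): a Gaussian random projection from d to k dimensions approximately preserves dot products, and usually their signs, between correlated vectors:

import numpy as np

rng = np.random.default_rng(0)
d, k = 2000, 100
x = rng.normal(size=d)
y = x + 0.5 * rng.normal(size=d)            # a second vector with high cosine similarity

R = rng.normal(size=(k, d)) / np.sqrt(k)    # entries N(0, 1/k), so E[(Rx)·(Ry)] = x·y
rx, ry = R @ x, R @ y

print("original dot product :", x @ y)
print("projected dot product:", rx @ ry)
print("sign preserved       :", np.sign(x @ y) == np.sign(rx @ ry))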

21 November 2019
14:00 - 15:00
Samson Abramsky (Oxford)

Title: Non-classicality, quantum resources and quantum advantage (slides)

Abstract:
A key objective in the field of quantum information and computation is to understand the advantage which can be gained in information processing tasks by the use of quantum resources. While a range of examples has been studied, to date a systematic understanding of quantum advantage is lacking.

Our focus here is on quantum resources which take the form of non-local, or more generally contextual, correlations. Contextuality is one of the key signatures of non-classicality in quantum mechanics and has been shown to be a necessary ingredient for quantum advantage in a range of information processing tasks. We will describe a notion of simulation between quantum resources, and more generally between resources described in terms of contextual correlations, in the "sheaf-theoretic" framework for contextuality (Abramsky-Brandenburger). The notion of simulation is expressed as a morphism of empirical models, in a form which allows the behaviour of one set of correlations to be simulated in terms of another using classical processing and shared randomization.

The talk will be essentially self-contained, introducing the various notions we will discuss.

3 December 2019
15:00 - 16:00
Jakob Nikolas Kather (University Hospital Aachen)

Title: Deep Learning for Precision Oncology

Abstract:
Precision oncology requires molecular and genetic testing of tumour tissue. For many tests, widespread implementation into clinical practice is limited because these biomarkers are costly, require significant expertise and are limited by tissue availability. However, virtually every cancer patient gets a biopsy as part of the diagnostic workup and this tissue is routinely stained with hematoxylin and eosin (HE). We have developed a deep learning-based technology to predict molecular features, prognosis and markers for treatment response directly from routine histology. This talk will summarize the state of the art in deep learning histopathology, demonstrate emerging use cases and discuss the clinical implications of deep learning-based molecular testing of solid tumours.
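A standard pipeline in this area (a generic sketch, not the speaker's specific system) cuts the whole-slide image into tiles, scores each tile with a CNN for the molecular feature of interest, and aggregates the tile scores into a slide-level prediction, as in the following PyTorch outline with dummy data:

import torch
from torchvision import models

# Untrained ResNet on random tensors, purely to show the shape of the pipeline:
# tile scores from a CNN are pooled into one slide-level probability.
model = models.resnet18(num_classes=1)      # one logit per tile
model.eval()

tiles = torch.rand(32, 3, 224, 224)         # 32 RGB tiles cut from one slide (dummy data)
with torch.no_grad():
    tile_scores = torch.sigmoid(model(tiles)).squeeze(1)

slide_score = tile_scores.mean()            # simple mean pooling over tiles
print("slide-level probability of the molecular feature:", float(slide_score))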

Thu 2 May, '19

14:00 - 15:00

Sarah Meiklejohn (UCL)
Title: Anonymity in Cryptocurrencies
(This talk will take place in CS1.04)
Abstract: A long line of recent research has demonstrated that existing cryptocurrencies often do not achieve the level of anonymity that users might expect, while at the same time another line of research has worked to increase the level of anonymity by adding new features to existing cryptocurrencies or creating entirely new cryptocurrencies. This talk will explore both of these lines of research, demonstrating both de-anonymization attacks and techniques for anonymity that achieve provably secure guarantees.
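One of the best-known de-anonymisation techniques in this literature is the multi-input clustering heuristic: addresses spent together as inputs of the same transaction are assumed to be controlled by the same entity. A minimal union-find sketch on made-up data:

# Illustrative data only; real studies run this over the full transaction graph.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]        # path halving
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

transactions = [
    {"inputs": ["addr1", "addr2"], "outputs": ["addr5"]},
    {"inputs": ["addr2", "addr3"], "outputs": ["addr6"]},
    {"inputs": ["addr4"], "outputs": ["addr7"]},
]
for tx in transactions:
    for addr in tx["inputs"]:
        union(tx["inputs"][0], addr)         # co-spent inputs share one presumed owner

clusters = {}
for addr in parent:
    clusters.setdefault(find(addr), set()).add(addr)
print(list(clusters.values()))               # e.g. [{'addr1', 'addr2', 'addr3'}, {'addr4'}]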
Thu 9 May, '19
14:00 - 15:00
Edith Elkind (Oxford)

Title: Fair Division of a Graph

Abstract: We consider fair allocation of indivisible items under an additional constraint: there is an undirected graph describing the relationship between the items, and each agent's share must form a connected subgraph of this graph. This framework captures, e.g., fair allocation of land plots, where the graph describes the accessibility relation among the plots. We focus on agents that have additive utilities for the items, and consider several common fair division solution concepts, such as proportionality, envy-freeness and maximin share guarantee. In particular, we prove that for acyclic graphs a maximin share allocation always exists and can be computed efficiently. We also discuss a continuous variant of this problem, where agents need to divide a collection of interconnected "cakes" and each agent's piece has to be connected, or, more generally, have a small number of connected components.

Based on joint work with Sylvain Bouveret, Katarina Cechlarova, Ayumi Igarashi, Dominik Peters, Xiaohui Bei and Warut Suksompong.
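As a toy illustration of the connected-bundle requirement on a path graph (hypothetical numbers): an agent's maximin share is the largest value she can guarantee herself by cutting the path into one connected piece per agent and receiving the worst piece, which for small instances can be found by brute force over cut positions:

from itertools import combinations

def maximin_share(values, n_agents):
    # values: additive utilities of the items along a path, in path order
    m, best = len(values), float("-inf")
    for cuts in combinations(range(1, m), n_agents - 1):
        bounds = [0, *cuts, m]
        pieces = [sum(values[bounds[i]:bounds[i + 1]]) for i in range(n_agents)]
        best = max(best, min(pieces))
    return best

plots = [3, 1, 4, 1, 5, 9, 2]                # land plots along a road (a path graph)
print(maximin_share(plots, n_agents=3))      # 6, e.g. pieces [3,1,4], [1,5], [9,2]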

Thu 6 Jun, '19

14:00 - 15:00

Andreas Damianou (Amazon, Cambridge)

(This talk will take place in CS1.04)

Title: Probability and uncertainty in Deep Learning

Abstract:
In this talk I motivate deep learning from the perspective of probability and uncertainty. The resulting models can maintain and, crucially, communicate their degree of belief regarding their states (e.g. how certain they are in their predictions) using Bayesian inference. I will also highlight the deep Gaussian process family of approaches, which can be seen as a means of performing probabilistic, Bayesian deep learning directly in the space of functions, rather than in the space of finite parameters. Prior knowledge of deep learning is not essential to follow this talk; in fact, I will offer a quick overview at the beginning.
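For readers who want a concrete picture of "communicating a degree of belief", here is a minimal sketch using Monte Carlo dropout, one common approximation to Bayesian inference in neural networks (the deep Gaussian processes discussed in the talk are a different, function-space approach):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))

x = torch.linspace(-3, 3, 50).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)     # toy regression data
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):                             # quick fit
    opt.zero_grad()
    ((net(x) - y) ** 2).mean().backward()
    opt.step()

net.train()                                      # keep dropout active at test time
x_test = torch.linspace(-5, 5, 20).unsqueeze(1)
with torch.no_grad():
    samples = torch.stack([net(x_test) for _ in range(100)])
mean, std = samples.mean(0), samples.std(0)      # predictive mean and its uncertainty
print(std.squeeze()[:5])                         # larger std = less certain prediction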

Thu 20 Jun, '19
14:00 - 15:00
Mario Berta (Imperial College London)

Title: Quantum Technologies for Cryptography

Abstract:
The rise of quantum technologies opens exciting possibilities for performing cryptographic tasks, but it also poses new threats. On the one hand, quantum mechanical devices are the basis of a variety of novel cryptographic protocols like key distribution or secure identification, which offer a significant advantage over standard routines. On the other hand, quantum computers will render some existing cryptographic schemes insecure. The interplay between these two aspects is best understood by analysing the power of adversaries which have access to a quantum computer - quantum adversaries. In my talk, I will discuss methods to analyse the power of quantum adversaries which shed light on both aspects.

Thu 11 Jul, '19

11:00 - 12:00

Radu Calinescu (University of York)

Title: Learning, synthesis and efficient analysis of probabilistic models

Abstract:
Probabilistic modelling is a powerful tool for establishing performance, dependability and other quantitative properties of systems and processes during design, verification and at runtime. However, the usefulness of this tool depends on the accuracy of the models being analysed, on the efficiency of the analysis, and on the ability to find models corresponding to effective system and process architectures and configurations. This talk will describe how recent approaches to probabilistic model learning, analysis and synthesis address major challenges posed by these prerequisites, significantly advancing the applicability of probabilistic modelling.
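As a small worked example of the kind of property such models support (hypothetical numbers): the probability of eventually reaching a failure state in a discrete-time Markov chain reduces to solving a linear system over the transient states:

import numpy as np

# States: 0 = ok, 1 = degraded, 2 = failed (absorbing), 3 = repaired (absorbing).
P = np.array([
    [0.90, 0.08, 0.02, 0.00],
    [0.00, 0.50, 0.30, 0.20],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])
transient, target = [0, 1], 2
Q = P[np.ix_(transient, transient)]              # transitions among transient states
b = P[np.ix_(transient, [target])].ravel()       # one-step probabilities into "failed"
reach = np.linalg.solve(np.eye(len(transient)) - Q, b)
print("P(eventually failed | start ok):", reach[0])   # 0.68 for these numbers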