
Mathematics and Philosophy sponsored events

  • Asaf Karagila (East Anglia) "Choice vs. Determinacy, who do you want to win?"

Wednesday 13 March 2019 17:00 - 18:00 B1.01 (Mathematics Institute)

Abstract: The axiom of determinacy tells us that every game on the natural numbers is determined. It has very nice implications for the structure of the real line, but it also contradicts the axiom of choice. In this expository talk we will discuss what games and strategies are, how they are useful for proving all sorts of things in mathematics, and how the axiom of choice introduces chaos to our universe.

  • Final year Maths-Phil and MMath logic presentations

Wednesday, 27 February 16:00-19:00 B1.01 (Mathematics Institute)

Eric Munday "Temporal Logic and Kamp’s Theorem"

I will do my best to introduce Linear Time Temporal Logic equipped with Until and Since operators in a sensible manner, illustrating how it is similar to the First Order Monadic Logic of Order. Then I shall sketch a proof of Kamp’s Theorem, giving the bare-bones ideas behind the proof, which will hopefully show why it’s both non-trivial and pretty funky without requiring too many tools.

Alexander Gheorghiu "Jensen's Con(NFU) and the Theory of Tangled Types"

The consistency problem for Quine's ''New Foundations'' has remained open for over eighty years. A number of fascinating techniques and properties in type theory, set theory, and model theory have been invented and employed, but so far none has achieved the full result. In 1969 Jensen exploited the Specker-Kaye equiconsistency lemma to obtain a partial solution: the consistency of a weak variant of NF, called NFU. Over the last twenty years Holmes has been modifying Jensen's methods to achieve the full result, and in 2015 proposed a solution. We investigate this line of inquiry, which culminates in the hypothesis of a strange object called a Tangled Web of Cardinals, whose existence would end the eighty-year search for a consistency proof of NF.

Ibrahim Mohammed "The Axiom of Choice in algebra"

I will explain some algebraic consequences of the axiom of choice with regard to group structures, field structures and, time permitting, field extensions. It turns out that the axiom of choice is equivalent to the statement that every non-empty set admits a group structure; moreover, assuming choice, we can give any infinite set an algebraically closed field structure of any given prime characteristic.

If possible I will also talk about algebraically independent sets and transcendence bases of a field extension, which can be thought of as a continuation of the concepts of linearly independent sets and bases of a vector space, where we forbid zeros of polynomials of any degree rather than just linear ones.

  • Joel Hamkins (Oxford) "The rearrangement number: how many rearrangements of a series suffice to validate absolute convergence?" (Warwick Mathematics Colloquium)

Friday 19 October 16:00-17:00 B3.02 (Mathematics Institute)


The Riemann rearrangement theorem asserts that a series $\Sigma_n a_n$ is absolutely convergent if and only if every rearrangement $\Sigma_n a_{p(n)}$ of it is convergent, and furthermore, that any conditionally convergent series can be rearranged so as to converge to any desired extended real value. How many rearrangements $p$ suffice to test for absolute convergence in this way? The rearrangement number, a new cardinal characteristic of the continuum, is the smallest size of a family of permutations such that whenever the convergence and value of a convergent series are invariant under all these permutations, then it is absolutely convergent. The exact value of the rearrangement number turns out to be independent of the axioms of set theory. In this talk, I shall place the rearrangement number into a discussion of cardinal characteristics of the continuum, including an elementary introduction to the continuum hypothesis and an account of Freiling's axiom of symmetry.
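The rearrangement phenomenon the abstract mentions can be seen concretely. The following sketch (my illustration, not material from the talk; the function name `rearrange_to` and the target values are invented) applies Riemann's greedy procedure to the conditionally convergent alternating harmonic series $1 - 1/2 + 1/3 - \cdots$: add positive terms while below the target, negative terms while above it.

```python
# Riemann's rearrangement procedure for the alternating harmonic series,
# steered toward an arbitrary target value. Illustrative sketch only.

def rearrange_to(target, n_terms=100_000):
    """Greedily interleave the positive terms 1/1, 1/3, 1/5, ... and the
    negative terms -1/2, -1/4, ... so partial sums approach `target`.
    Returns (final partial sum, the rearranged sequence of terms)."""
    pos, neg = 1, 2          # next odd / even denominators
    total = 0.0
    terms = []               # the rearranged series, term by term
    for _ in range(n_terms):
        if total <= target:  # below target: take the next positive term
            term = 1.0 / pos
            pos += 2
        else:                # above target: take the next negative term
            term = -1.0 / neg
            neg += 2
        total += term
        terms.append(term)
    return total, terms

# The same terms, in a different order, converge to a different value;
# the final error is bounded by the size of the last term that crossed
# the target, so the partial sum lands very close to it.
total, _ = rearrange_to(0.5)
print(abs(total - 0.5) < 1e-3)
```

The original series sums to $\ln 2 \approx 0.693$, yet the rearranged one can be made to approach 0.5, or any other extended real value.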

  • Thomas Forster (Cambridge) "How the universal set can be a set"

Monday 22 October 17:00-18:00 PS1.28 (Physical Sciences Building -- see linked map)

Abstract: It is widely and mistakenly believed that in set theory there can be no universal set. In fact, constructions of models of set theories with a universal set have been known for more than forty years, since early work of Church. I will give an introduction to these constructions, provide some illustrations, and briefly discuss their significance.

  • Ingo Blechschmidt (Augsburg) "Exploring hypercomputation with the effective topos"

Wednesday 8 February 17:00-18:00 D1.07 (Mathematics Institute)


To any model of computation, such as lambda calculus or Turing machines, there is an associated "effective topos". This is an alternate mathematical universe in which the usual laws of logic don't hold.

For instance, for some models, anti-classical dream axioms like "any function $\mathbb{R} \rightarrow \mathbb{R}$ is continuous" and "any function $\mathbb{N} \rightarrow \mathbb{N}$ is computable" hold. The law of excluded middle ("any proposition is either true or not true"), however, is falsified in those toposes.

The effective topos has especially wondrous properties in case we employ models of hypercomputation, where computers can perform infinitely many calculational steps in finite time: Andrej Bauer proved that then the effective topos contains an injection $\mathbb{N}^{\mathbb{N}} \rightarrow \mathbb{N}$.

We can also employ physical models of the real world. In this case, statements which in classical mathematics are simply true become non-trivial statements about the nature of the physical world when interpreted in the effective topos.

Topos theory therefore provides an apt vehicle to study computation and alternative axioms of logic. Other applications include mechanical extraction of programs from proofs and a reconciliation of platonism with formalism.

Ingo will also be speaking at 16:00 on Tuesday 7 February (B3.03) to the Algebraic Geometry seminar and at 15:00 on Thursday 9 February (MS.03) to the Geometry-Topology seminar.

  • Andrew Arana (Paris 1) "Depth in mathematics"
Wednesday 30 November 17:00-18:00 D1.07 (Mathematics Institute)


Mathematicians judge some theorems, definitions, and proofs to be deep, ordinarily using this as an evaluation of merit. We articulate several different criteria for measuring depth, indicating pros and cons of each. We use Szemerédi's theorem as an example throughout, making the various proposals more concrete.

Wednesday 26 October 16:00-18:00 SO.09 (Social Sciences)
If (finite cardinal) numbers exist, they would surely be abstract. We cannot see, touch, hear, taste or smell them; they do not emit or reflect signals; they leave no traces from which their existence and nature can be inferred. So how can we have any knowledge of them? This is an instance of a general problem about the metaphysics and epistemology of abstracta. After a whistle-stop tour of some major views about numbers, I will try to indicate how, given the view I favour, the knowledge problem might be solved, leaning on findings from the cognitive sciences.
  • Daniel Wolf (Leeds) "Model theory: Your new favourite area of maths"
Thursday 17 March 17:00-18:00, A1.01 (Mathematics Institute) room TBC

Model theory is a thriving area of mathematics with appeal to mathematicians and philosophers alike. In this talk I will act as a salesman for model theory, doing my best to convince the audience of both its utility and its beauty. I will attempt to mix some practical, hands-on technical aspects of the subject with its more philosophical features. I will start with some history and then move on to a key notion of model theory, namely that of a type. I will then go over the very useful technique of quantifier elimination before discussing some of model theory's great (and ongoing) successes, for example nonstandard analysis, Hrushovski's proof of the Manin–Mumford conjecture, and neo-stability. I will remark on philosophical implications throughout the talk. Note that I will assume knowledge of first-order logic, such as the compactness and Löwenheim–Skolem theorems.

Friday 4 March 16:00-17:00, B3.02 (Mathematics Institute)

Abstract: In his 1946 Princeton Bicentennial Lecture Gödel suggested the problem of finding a notion of definability for set theory which is "formalism free" in a sense similar to the notion of computable function --- a notion which is very robust with respect to its various associated formalisms. One way to interpret this suggestion is to consider standard notions of definability in set theory, which are usually built over first order logic, and change the underlying logic. We show that constructibility is not very sensitive to the underlying logic, and the same goes for hereditary ordinal definability (or HOD). This is joint work with Menachem Magidor and Jouko Väänänen.

  • Richard Kaye (Birmingham) "Toggle machines"
Wednesday 3 February 17:00-18:00, B3.03 (Mathematics Institute)
Abstract: I will discuss finite state machines (toggle machines) made from networks of 'toggles': simple two-state machines with one possible input and two possible output values. These are arguably the simplest building blocks one could use to make a 'computer'. A toggle machine is a network of finitely many such toggles linked together in some fixed way ('feedback' is allowed). The halting problem for toggle machines is defined. The natural algorithm to solve this problem runs in polynomial space, but we show that in fact this halting problem is in $\mathrm{NP} \cap \mathrm{co\text{-}NP}$ (I do not know whether it is in P). A slightly more difficult decision problem for toggle machines will be shown to be $\mathrm{NP}$-complete. Some results showing that basic toggle networks cannot simulate all finite state automata will be presented and discussed. An enhanced version of the machine model is given (in three equivalent formulations) and shown to have a $\mathrm{PSPACE}$-complete halting problem and to be powerful enough to simulate arbitrary finite state automata.
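The abstract leaves the wiring details implicit, so the following is a hypothetical Python sketch of one plausible reading, not Kaye's actual definition: a pulse enters a toggle, leaves by the output selected by the toggle's current state, and the toggle then flips; the machine halts if the pulse ever reaches a designated HALT terminal. The names `run`, `wiring`, and the example network are all invented for illustration.

```python
# One reading of a "toggle machine": each toggle has one input and two
# outputs; a single pulse travels through the network, and each toggle
# routes the pulse by its current state and then flips. Hypothetical sketch.

HALT = "halt"

def run(wiring, start, max_steps=10_000):
    """wiring[t] = (destination if t is in state 0, destination in state 1).
    Returns the number of steps until the pulse reaches HALT, or None if
    it has not halted within max_steps (configurations are finite, so a
    long run indicates the machine is looping)."""
    state = {t: 0 for t in wiring}   # every toggle starts in state 0
    at = start
    for step in range(max_steps):
        if at == HALT:
            return step
        dest = wiring[at][state[at]]  # output chosen by current state
        state[at] ^= 1                # the toggle flips after firing
        at = dest
    return None

# A small two-toggle network with feedback that halts after a few pulses:
net = {"a": ("b", "a"), "b": ("a", HALT)}
print(run(net, "a"))  # prints 5
```

Note the configuration space (pulse position plus all toggle states) has size at most $n \cdot 2^n$ for $n$ toggles, which is why the naive simulation above needs only polynomial space but may take exponentially many steps.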
Friday 22 January 16:00-17:00, B3.02 (Mathematics Institute)
Abstract: This talk will explore a traditional question in the philosophy of mathematics: whether (or in what sense) the consistency of a mathematical theory entails the existence of a mathematical structure satisfying its axioms. It is sometimes argued that an answer to this question was provided by Gödel’s (1929) proof of the Completeness Theorem for first-order logic, which states that every proof-theoretically consistent set of sentences in a first-order language possesses a first-order model. In order to assess this claim, I will examine Gödel’s result in the context of Hilbert’s work on the construction of arithmetical models of theories of geometry and analysis as well as more contemporary work on computable model theory.
  • Andrew Arana (University of Paris 1 Panthéon-Sorbonne) "Non-euclidean geometry and geometrical content"

Wednesday 28 October 17:00-18:00, B3.02 (Mathematics Institute)

Abstract: In 1868, Beltrami described himself as having identified a "real substrate" for hyperbolic geometry, "rather than admit the necessity for a new order of entities and concepts." He sought to show that hyperbolic geometry described the geometry on curved surfaces in "real", i.e. Euclidean, space. In 1873, Klein wrote that such investigations are "by no means intended to decide the validity of the parallel axiom, but only whether the parallel axiom is a mathematical consequence of the remaining axioms of Euclid". But relative consistency is a logical relationship between statements, whereas Beltrami's reduction of hyperbolic geometry to the geometry of the surfaces of Euclidean objects is a geometric relationship between objects. The logical relationship identified by Klein, and soon Poincaré and Hilbert, is the heart of what we now consider at issue in the legitimization of non-euclidean geometry. Yet the move from Beltrami's view to this "modern" one involves a considerable conceptual leap, in which geometers came to accept that they could "reinterpret" the geometrical terms of geometric statements. In this talk I want to consider the nature of this conceptual leap, toward better understanding this "modern" understanding of the content of geometrical statements.

  • Sam Sanders (Ghent/Munich) "Reverse Mathematics: The playground of logic"

Wednesday 18 February 17:00-18:00, B3.03 (Mathematics Institute)

Abstract: Reverse Mathematics (RM) is a program in the foundations of mathematics. The aim of RM is to find the minimal axioms needed to prove theorems of ordinary mathematics. What emerges from RM is a classification of mathematical theorems based on Turing computability. In particular, one of the main results of RM is that most mathematical theorems fall into only five logical categories. We review the (pre-)history of RM and relate it to earlier foundational/philosophical programs by Hilbert, Brouwer, Weyl, and Turing.

  • Andrew Brooke-Taylor (Bristol) "Computing from oracles and uncountable cardinals"
Monday 26 January, 16:00-17:00, B3.01 (Mathematics Institute)

Abstract: Given an infinite string of 0s and 1s - an "oracle" - one can ask what can be computed from it, when you give your computer the ability to query what the nth bit is for any n. There are many results in the literature relating different properties of such oracles, mostly arrived at in an ad hoc manner, but I will show how many of them can be organised in a systematic way. This is done by analogy with the so-called cardinal characteristics of the continuum, which are very well-studied (definitions for) uncountable cardinals that may be strictly less than the cardinality of the reals. Inequalities between these cardinals correspond to implications between properties of oracles, and whilst imperfect, the analogy is instructive, highlighting some natural open questions.

  • Colin McLarty (Case Western) "What does it take to prove X?"

Wednesday 12 March, 17:00-18:00 MS.02 (Mathematics Institute)

Abstract: David Hilbert made a mistake about the relation between finite and infinite mathematics -- a great mistake only a giant could have made. It was revealed by a great achievement: Gödel's incompleteness theorems. Incompleteness is a reality in arithmetic and set theory, and central to a great deal in logic, but not because it arises often in specific mathematics. In fact one productive tactic in model theory is to focus on complete theories. But in using that tactic one is constantly aware of incompleteness, so as to guard against it. The talk will introduce and survey aspects of incompleteness.
Colin will be taking part in several other events on 18-21 March as described here.

  • Zachiri McKenzie (Cambridge) "Ultraproducts in model theory" 
Monday 3 February, 17:00-18:00 D1.07 (Mathematics Institute)

Abstract: The ultraproduct construction is an algebraic technique that allows one to build models whose existence would normally be proved using compactness or omitting-types arguments. Since its invention in the early twentieth century this technique has found a variety of applications in model theory and set theory. I will describe the ultraproduct construction. I will prove the Łoś Theorem, which characterises the theory satisfied by an ultraproduct, and sketch a proof of the fact that every filter can be extended to an ultrafilter. I will then show how ultraproducts can be used to construct non-standard models of set theory and arithmetic.
  • Christopher Porter (University of Paris 7) "Defining Randomness"
Thursday 28 November, 16:00-17:00 S0.08 (Social Sciences)

Abstract: In the 1960s, there were a number of major developments in the study of random sequences. Two are particularly noteworthy: First, in 1965, Kolmogorov published a definition of a random finite sequence based on a measure of complexity known as Kolmogorov complexity. Second, in 1966, Martin-Löf published a definition of a random infinite sequence given in terms of a certain collection of effective statistical tests which are nowadays referred to as Martin-Löf tests. Although Martin-Löf had also shown that there is some relationship between his own definition of randomness for infinite sequences and Kolmogorov's for finite sequences, the exact relationship between these two definitions was not resolved until the early 1970s. The goal of my talk is to present the details of these two definitions of randomness and to explain precisely how they relate to one another.
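Kolmogorov complexity itself is uncomputable, but the intuition behind Kolmogorov's definition (a string is random when it admits no description much shorter than itself) can be illustrated with a computable stand-in: the output length of a general-purpose compressor is an upper bound on a string's complexity, up to a constant for the decompressor. The sketch below is my illustration, not material from the talk; `compressed_len` is an invented helper.

```python
import random
import zlib

# A compressor gives a computable UPPER BOUND on Kolmogorov complexity:
# a highly regular string compresses far below its length, while a
# "typical" (random-looking) string of the same length does not.

def compressed_len(s: bytes) -> int:
    """Length of s after zlib compression at maximum level."""
    return len(zlib.compress(s, 9))

regular = b"01" * 5_000                     # 10,000 bytes, very regular
random.seed(0)
typical = bytes(random.getrandbits(8) for _ in range(10_000))

print(compressed_len(regular) < 100)        # compresses to a tiny description
print(compressed_len(typical) > 9_000)      # essentially incompressible
```

The bound only works in one direction: a short compressed form certifies low complexity, but zlib's failure to compress never proves a string is random, which is exactly why Kolmogorov's definition, and its relationship to Martin-Löf's test-based one, is mathematically subtle.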