Carl Friedrich von Weizsäcker-Zentrum

Carl Friedrich von Weizsäcker-Colloquium

Wednesdays, 17-18h

The Carl Friedrich von Weizsäcker-Colloquium hosts weekly talks by guests and members of the CFvW-Center. The topics broadly reflect the issues discussed at the Center.

The Colloquium takes place on Wednesdays between 5pm and 6pm, until further notice via Zoom. The talks are usually recorded for our YouTube channel.

To take part, please follow this link.

Organizers: Prof. Dr. Reinhard Kahle and Dr. Thomas Piecha.

To receive announcements of the colloquium lectures, please send an email to Aleksandra Rötschke.

Upcoming Talk

Wednesday, 5 May 2021, 5pm s.t.

Dr. Antonio Piccolomini d'Aragona (Aix-Marseille):

Kreisel's informal rigour and Gödel's absolute provability. A tentative reading through and for Prawitz's semantics

In spite of their philosophical relevance, Kreisel’s theory of informal rigour and Gödel’s concept of absolute provability have proved elusive to rigorous mathematised treatments. In my talk, I will set out to connect Kreisel’s and Gödel’s ideas to Prawitz’s proof-based semantics. Prawitz’s semantics has been put forth and developed independently of Kreisel and Gödel, but some of its basic tenets may nonetheless match those of informal rigour and absolute provability. Both Kreisel and Gödel aim at bringing provability back into mathematical practice – against the post-Fregean and post-Hilbertian formalistic attitude – as well as at overstepping formal derivability – given Gödel’s and Turing’s limiting results. In order to do this, provability must become informal (i.e. independent of formal languages and systems) and absolute (i.e. formalism-free and/or universally applicable). This may be in line with the intuitionistic idea of giving provability a “semantic” role, an idea of which Prawitz’s semantics is a well-known instance. As a result, I argue that Prawitz’s semantics shares some issues with Kreisel’s informal rigour, while the link with Gödel’s absolute provability is more difficult to establish.

List of Lectures

7.7.2021: reserved

30.6.2021: Prof. Dr. Mathias Frisch (Hannover): TBA

23.6.2021: Dr. Daniel Kostić (Radboud University): TBA

16.6.2021: Dr. Julia R.S. Bursten (University of Kentucky): TBA

9.6.2021: Dr. Silvia De Toffoli (Princeton University): TBA

2.6.2021 (15-18h): Prof. Dr. Aaron Sloman (University of Birmingham): TBA

19.5.2021: Dr. Vincenzo Politi (University of Oslo): TBA

12.5.2021 (18-19h!): Sandro Radovanović (University of Belgrade): Effects of Affirmative Actions in Algorithmic Decision-Making

5.5.2021: Dr. Antonio Piccolomini d'Aragona (Aix-Marseille): Kreisel's informal rigour and Gödel's absolute provability. A tentative reading through and for Prawitz's semantics

In spite of their philosophical relevance, Kreisel’s theory of informal rigour and Gödel’s concept of absolute provability have proved elusive to rigorous mathematised treatments. In my talk, I will set out to connect Kreisel’s and Gödel’s ideas to Prawitz’s proof-based semantics. Prawitz’s semantics has been put forth and developed independently of Kreisel and Gödel, but some of its basic tenets may nonetheless match those of informal rigour and absolute provability. Both Kreisel and Gödel aim at bringing provability back into mathematical practice – against the post-Fregean and post-Hilbertian formalistic attitude – as well as at overstepping formal derivability – given Gödel’s and Turing’s limiting results. In order to do this, provability must become informal (i.e. independent of formal languages and systems) and absolute (i.e. formalism-free and/or universally applicable). This may be in line with the intuitionistic idea of giving provability a “semantic” role, an idea of which Prawitz’s semantics is a well-known instance. As a result, I argue that Prawitz’s semantics shares some issues with Kreisel’s informal rigour, while the link with Gödel’s absolute provability is more difficult to establish.

21.4.2021: Dr. Christian Feldbacher-Escamilla (Düsseldorf): AI for a Social World – A Social World for AI

AI is not only supposed to help tackle social problems; it is also frequently used to solve such problems in practice. AI-assisted systems play an increasingly important role in the legal domain, the health sector, environmental research, public policy-making and the like. Research in this field is extensive and diverse. In this talk, we want to argue, however, that it is also interesting to look in the opposite direction: How can our knowledge of the social world and its structural features help us to approach problems of AI? In particular, we will investigate how a social perspective on problems of justification helps us to address epistemic problems of machine learning theory.

6.4.2021: Prof. Dr. Helen Longino (Stanford University): Critical Contextual Empiricism, Diversity and Inclusiveness

Watch the recording on Facebook.

Humanity looks to the scientific community, now more than ever, to provide solutions to today's challenges. Decisions made by scientists thus directly and deeply influence human lives. The Carl Friedrich von Weizsäcker Center is interested in the foundations of responsible science. For example, how can we identify and avoid scientific misconduct, e.g. plagiarism and fraud, or the abuse of science for commercial purposes? How do we navigate issues of morally questionable research, research funding, and global inequalities? How can scientists ensure optimal knowledge production in the face of the replication crisis, cognitive biases in science, and the politics of peer review? Further, how can we protect scientists from becoming commodities when their products are so ardently sought by politicians and society?

6.4.2021: Prof. Dr. Nancy Cartwright (Durham University): Responsible Science - Responsible Use

Watch the recording on Facebook.

Humanity looks to the scientific community, now more than ever, to provide solutions to today's challenges. Decisions made by scientists thus directly and deeply influence human lives. The Carl Friedrich von Weizsäcker Center is interested in the foundations of responsible science. For example, how can we identify and avoid scientific misconduct, e.g. plagiarism and fraud, or the abuse of science for commercial purposes? How do we navigate issues of morally questionable research, research funding, and global inequalities? How can scientists ensure optimal knowledge production in the face of the replication crisis, cognitive biases in science, and the politics of peer review? Further, how can we protect scientists from becoming commodities when their products are so ardently sought by politicians and society?

24.2.2021: Prof. Dr. Marco Panza (Paris 1) & Prof. Dr. Daniele Struppa (Chapman University): Agnostic Science and Mathematics

We will first illustrate the notion of agnostic science (science without understanding), and then reflect on the effect that the practice of agnostic science has on the use of mathematics in science, and on the development of mathematics itself.

17.2.2021: Dr. Benedikt Ahrens (Birmingham): The Univalence Principle

Michael Makkai's "Principle of Isomorphism" stipulates that mathematical reasoning is invariant under equivalence of mathematical structures. Inspired by Makkai, Vladimir Voevodsky conceived the Univalent Foundations (UF) of Mathematics as a foundation of mathematics in which only equivalence-invariant properties and constructions can be formulated. Coquand and Danielsson proved that UF indeed provides an isomorphism-invariant language for set-level structures, such as groups and rings, that form a 1-category. Ahrens, Kapulkin, and Shulman proved an extension for 1-categories: any property and construction that can be expressed in UF transfers along equivalence of categories—as long as “categories” are correctly defined to satisfy a local “univalence” condition. In the semantics of UF in simplicial sets, this univalence condition corresponds to Charles Rezk’s completeness condition for (truncated) Segal spaces.

In this talk, based on joint work with Paige Randall North, Michael Shulman, and Dimitris Tsementzis, I will show how to generalize this result to other higher-categorical structures. We devise a notion of signature and theory that specifies the data and properties of a mathematical structure. Our main technical achievement lies in the definition of isomorphism between two elements of a structure, which generalizes the notion of isomorphism between two objects in a category. Such isomorphisms yield the companion notion of univalence of a structure. Our main result says that for univalent structures M, N of a signature, the identity type M = N coincides with the type of equivalences M ≃ N. This entails that any property and construction on a univalent structure transfers along a suitable notion of equivalence of structures. Our signatures encompass the aforementioned set-level structures but also topological spaces, (multi-)categories, presheaves, fibrations, bicategories, and many other (higher-)categorical structures.
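
In symbols, the main result reads (our paraphrase of the abstract):

\[
(M = N) \;\simeq\; (M \simeq N) \quad \text{for univalent structures } M, N \text{ of a signature,}
\]

i.e. the identity type of two univalent structures is equivalent to the type of equivalences between them, which is what allows properties and constructions to transfer along equivalences.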

10.2.2021: Marcel Ertel (Tübingen): Independence and truth-value determinacy in set theory

We discuss the philosophical significance of classical and more recent results in the metamathematics of set theory: the Gödel-Cohen independence theorem of the Continuum Hypothesis (CH) from first-order set theory; Zermelo's quasi-categoricity result characterizing models of second-order set theory and Lavine's improvement thereof in an extended first-order framework (using Feferman's idea of a "full schema" allowing substitution of formulas from arbitrary language-expansions); and Väänänen's internal categoricity results.
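
For reference, the first of these results can be stated as follows (a standard formulation, added here for orientation):

\[
\mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\; \mathrm{ZFC} \nvdash \mathrm{CH} \ \text{ and } \ \mathrm{ZFC} \nvdash \neg\mathrm{CH},
\]

where the unprovability of $\neg\mathrm{CH}$ is due to Gödel (via the constructible universe $L$) and that of $\mathrm{CH}$ is due to Cohen (via forcing).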

In light of these technical results, we assess the ongoing debate between proponents of a set-theoretic multiverse (likening the CH to Euclid's parallel postulate in geometry) and defenders of the determinacy of the truth-value of the CH. We present two arguments against the multiverse view, and end with a discussion of the philosophical difficulties in explaining what it means 'to be a solution of the continuum problem'.

3.2.2021: Paulo Guilherme Santos (Tübingen): k-provability in PA

We study the decidability of k-provability in PA – the decidability of the relation 'being provable in PA with at most k steps' – and the decidability of the proof-skeleton problem – the problem of deciding whether a given formula has a proof with a given skeleton (the list of axioms and rules that were used). The decidability of k-provability for the usual Hilbert-style formalisation of PA is still an open problem, but it is known that the proof-skeleton problem is undecidable for that theory. Using new methods, we present a characterisation of some numbers k for which k-provability is decidable, and a characterisation of some proof-skeletons for which one can decide whether a formula has a proof with that skeleton (these characterisations are natural and parameterised by unification algorithms).
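
In symbols (our notation, not necessarily the speaker's), k-provability concerns the relation

\[
\vdash_k \varphi \;:\Longleftrightarrow\; \text{there is a PA-proof of } \varphi \text{ with at most } k \text{ steps},
\]

and asks whether, for a fixed $k$, the set $\{\varphi : \vdash_k \varphi\}$ is decidable; the proof-skeleton problem instead fixes the list of axioms and rules used and asks whether some proof of $\varphi$ instantiates it.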

27.1.2021: Dr. Roberta Bonacina (Tübingen): Introduction to Homotopy Type Theory III

Homotopy type theory is a vibrant research field in contemporary mathematics. It aims at providing a foundation of mathematics that extends Martin-Löf type theory with the central notion of univalence, which induces a connection between types and homotopy spaces.
We will begin the short course by defining the simple theory of types, and showing how it can be extended to Martin-Löf type theory and then to homotopy type theory. We will stress the propositions-as-types correspondence between these type theories and intuitionistic logic, and study in detail the notion of equality. Then we will show how classical logic can be recovered in this intuitionistic setting, allowing one to introduce the law of excluded middle and the axiom of choice as axioms. Finally, we will analyse the different definitions of equivalence, which are fundamental for introducing univalence.
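
To illustrate the propositions-as-types interpretation mentioned above, here is a small sketch in Lean 4 syntax; this example is our own addition, not part of the course material:

```lean
-- Propositions-as-types: a proposition is a type, and a proof is a
-- term of that type. A proof of A ∧ B → B ∧ A is thus a function
-- that swaps the two components of a pair.
def andSwap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩

-- Equality is itself a type: `rfl` is the canonical proof of `a = a`.
example : 2 + 2 = 4 := rfl
```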

Lecture notes

20.1.2021: Dr. Roberta Bonacina (Tübingen): Introduction to Homotopy Type Theory II

Homotopy type theory is a vibrant research field in contemporary mathematics. It aims at providing a foundation of mathematics that extends Martin-Löf type theory with the central notion of univalence, which induces a connection between types and homotopy spaces.
We will begin the short course by defining the simple theory of types, and showing how it can be extended to Martin-Löf type theory and then to homotopy type theory. We will stress the propositions-as-types correspondence between these type theories and intuitionistic logic, and study in detail the notion of equality. Then we will show how classical logic can be recovered in this intuitionistic setting, allowing one to introduce the law of excluded middle and the axiom of choice as axioms. Finally, we will analyse the different definitions of equivalence, which are fundamental for introducing univalence.

Lecture notes

13.1.2021: Dr. Roberta Bonacina (Tübingen): Introduction to Homotopy Type Theory I

Homotopy type theory is a vibrant research field in contemporary mathematics. It aims at providing a foundation of mathematics that extends Martin-Löf type theory with the central notion of univalence, which induces a connection between types and homotopy spaces.
We will begin the short course by defining the simple theory of types, and showing how it can be extended to Martin-Löf type theory and then to homotopy type theory. We will stress the propositions-as-types correspondence between these type theories and intuitionistic logic, and study in detail the notion of equality. Then we will show how classical logic can be recovered in this intuitionistic setting, allowing one to introduce the law of excluded middle and the axiom of choice as axioms. Finally, we will analyse the different definitions of equivalence, which are fundamental for introducing univalence.

Lecture notes

16.12.2020: Prof. Dr. Klaus Mainzer (München & Tübingen): Verification and Standardization of Artificial Intelligence – Results of the German Steering Group (HLG) of the AI Standardization Roadmap

9.12.2020: Prof. Dr. Eberhard Knobloch (TU Berlin): Leibniz's Concept of an ars characteristica or ars combinatoria. Examples from Mathematics

Leibniz's concept of an ars characteristica or ars combinatoria illustrates the close connection between his philosophical and his mathematical thinking. The theoretical first part of the talk presents this concept together with its four advantages. Symbolic algebra served Leibniz as a model for this concept. The second part of the talk will therefore exemplify the concept with algebraic examples, in particular with the example of symmetric functions. These were his central tool in the search for an algorithmic solution of algebraic equations of arbitrary degree.

2.12.2020: Dr. Richard Lawrence (Tübingen): Hankel's formalism, Frege's logicism, and the analytic-synthetic distinction

I will discuss some research on Hermann Hankel, an early proponent of a formalist viewpoint in the foundations of mathematics, and the relation of his view to Gottlob Frege's logicism. I will argue that Hankel had an important influence on Frege. In particular, Hankel's understanding of the analytic-synthetic distinction, and his argument against Kant's view of arithmetic, play an important role in Frege's understanding of his logicism in the Foundations of Arithmetic. Frege thinks of the distinction the same way Hankel does, and shares Hankel's basic strategy for arguing that arithmetic is analytic, rather than synthetic. Given these similarities, an important question arises about how Frege's view differs from Hankel's; I will close with some comments about the differences.

Also, here is a link to the paper, in case anyone is interested in reading it: https://philpapers.org/rec/LAWFHA

25.11.2020: Dr. Michael T. Stuart (Tübingen): Guilty Artificial Minds: An Experimental Study of Blame Attributions for Artificially Intelligent Agents

The concepts of blameworthiness and wrongness are of fundamental importance in human moral life. But to what extent are humans disposed to blame artificially intelligent agents, and to what extent will they judge their actions to be morally wrong? To make progress on these questions, we adopt two novel strategies. First, we break down attributions of blame and wrongness into more basic judgments about the epistemic and conative state of the agent, and the consequences of the agent’s actions. In this way, we are able to examine any differences between the way participants treat artificial agents in terms of differences in these more basic judgments about, e.g., whether the artificial agent “knows” what it is doing, and how bad the consequences of its actions are. Our second strategy is to compare attributions of blame and wrongness across human, artificial, and group agents (corporations). Others have compared attributions of blame and wrongness between human and artificial agents, but the addition of group agents is significant because these agents seem to provide a clear middle ground between human agents (for whom the notions of blame and wrongness were created) and artificial agents (for whom the question is open).

18.11.2020: Natalie Clarius, B.A. (Tübingen): Automated Model Generation, Model Checking and Theorem Proving for Linguistic Applications

We present a model generator, model checker and theorem prover we developed for applications in linguistics. Alongside a live demonstration of the system, we will discuss a selection of phenomena with respect to their formal and computational tractability, as well as the theoretical foundations and limitations of such automated reasoning systems.
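
To give a rough idea of what model checking amounts to in the propositional case, here is a minimal, hypothetical sketch in Python; it is our own illustration of the general technique, not the system presented in the talk, which handles much richer linguistic representations:

```python
# A "model" is an assignment of truth values to atoms; model checking
# evaluates a formula (here encoded as nested tuples) in that model.
def check(model, formula):
    op, *args = formula
    if op == "atom":
        return model[args[0]]
    if op == "not":
        return not check(model, args[0])
    if op == "and":
        return all(check(model, f) for f in args)
    if op == "or":
        return any(check(model, f) for f in args)
    raise ValueError(f"unknown operator: {op}")

# Example: "p and not q" holds in the model {p: true, q: false}.
print(check({"p": True, "q": False},
            ("and", ("atom", "p"), ("not", ("atom", "q")))))
```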

11.11.2020: Dr. Maël Pégny (Tübingen): Machine Learning and Privacy: What's Really New?

In this presentation, I will try to capture the new challenges for the respect of privacy raised by machine learning. I will use both a (very) long-term perspective inspired by anthropological work on the effects of cognitive techniques and the origins of writing, and a short-term perspective based on comparisons with other types of algorithms and data processing. I will try to show that machine learning has very specific and fundamental effects, which include challenging some of the basic categories on which our legal data protection regime was built.

4.11.2020: Prof. Dr. Klaus Mainzer (München & Tübingen): Artificial Intelligence in the Global Competition of Value Systems

The “atomic age” that Carl Friedrich von Weizsäcker took as his starting point in the 1950s and 1960s is a thing of the past. Today and tomorrow the issues are digitalization and artificial intelligence (AI) – a global topic for the future that is dramatically changing the way we live and work. In times of Corona, this development is accelerating even further. These technological possibilities land on different ideological sounding boards, on which, as in the USA or China, big business, technocracies and state monopolism can thrive. How can a European system of values contribute to making AI a sustainable innovation?

Suggested reading:
K. Mainzer, Künstliche Intelligenz. Wann übernehmen die Maschinen? Springer, 2nd ed. 2019 (English translation: Springer 2019);
idem, Leben als Maschine: Wie entschlüsseln wir den Corona-Kode? Von der Systembiologie und Bioinformatik zu Robotik und Künstlichen Intelligenz, Brill Mentis 2020

Video of the talk: https://www.youtube.com/watch?v=Tf4ccAetTSM