## Carl Friedrich von Weizsäcker Kolloquium

Every week, the Carl Friedrich von Weizsäcker Colloquium welcomes guest speakers and members of the Center for talks on our main research areas. Open to everyone!

**Usual time and place:** Wednesdays from 5 to 6 pm (CET) via Zoom

**Organizers:** Prof. Dr. Reinhard Kahle & Dr. Thomas Piecha

Want to get notified of upcoming colloquium sessions? Send an email to our secretary Aleksandra Rötschke to subscribe to announcements!

Looking for a specific talk? Check out our YouTube playlist!

## Next Talk

## Upcoming Talks

July 20, 2022 [Online]

###### AI Context Analyses as a Necessary Decision-Making Tool for the Responsible Use of Machine Learning Methods

*Anna Hoffmann (Hoffmann Consulting & Facilitation, Berlin & Potsdam)*

When ML methods are introduced into software-supported work processes, there is often talk of developing technologies that are human-centered. Margrethe Vestager currently emphasizes this as one of the goals of the "New European Innovation Agenda"^{1}. But how can this be achieved in concrete terms, without stopping at the often superficial approaches of UX design?

The question catalogue of the AI context analysis offers an answer to the question of how ML-based AI systems can be examined, from the first draft of the use case onwards, as to whether the business design is human-centered and responsible, which risks emerge, and where improvements can be made.

Section 4.4, "AI-Specific Risk Management", of the report of the German Bundestag's Enquete Commission "Artificial Intelligence – Societal Responsibility and Economic, Social and Ecological Potential", convened in 2018, states: "Only the consideration of the individual application context and the individual deployment environment allows a comprehensive assessment of the criticality associated with the use of algorithms and AI systems."^{2}

The AI context analysis builds on precisely this point, taking an in-depth look at the specific application context and the later user environment. It is a further development of the "staggered context sessions" according to Thomas Bevan ("Theory of Machines"), supplemented by questions from "Human Centered AI" (Mittelstand 4.0-Kompetenzzentrum Usability, Manuel Kulzer)^{3}. The form of AI context analysis presented here is currently being tested at the Fraunhofer Institute for Production Systems and Design Technology (IPK) in Berlin on a wide-ranging AI use case.

^{1} New European Innovation Agenda (europa.eu); accessed 7 July 2022

^{2} Drucksache 19/23700 (bundestag.de), p. 66; accessed 8 July 2022

^{3} Mittelstand 4.0-Kompetenzzentrum Usability (kompetenzzentrum-usability.digital)

July 13, 2022 [Online]

###### The Institute for Advanced Study of TU München - Interdisciplinarity and Internationalization of Research

*Prof. Dr. Drs. h.c. Michael Molls (Technical University of Munich, Institute for Advanced Study)*

June 29, 2022 [Lecture hall, Doblerstr. 33 + Online]

###### Logical Approaches to Relativized Classes

*Prof. Dr. Isabel Oitavem (Universidade NOVA de Lisboa)*

Relative complexity corresponds to a major part of our computability paradigm. The concept of accessing oracles (or databases) during computation is in line with the interactive character of computation today. However, machine-independent approaches to relativized complexity classes are particularly challenging, because relativization brings the model of computation into play. For instance, even if P and NP coincide (P = NP is an open problem), there exists an oracle A such that P^A ≠ NP^A. That is, the same classes of functions/predicates, relativized to the same oracle, may lead to different relativized complexity classes. The polynomial hierarchy is a hierarchy of relativized classes whose first two levels are P and NP. In this talk we describe a machine-independent approach to all levels of the polynomial hierarchy and to the hierarchy itself. This work was published in TCS 900 (2022), 25–34, https://doi.org/10.1016/j.tcs.2021.11.016.
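As background (our gloss, not part of the abstract), the polynomial hierarchy mentioned above is standardly defined by iterating oracle access:

```latex
\Sigma^{p}_{0} = \mathrm{P}, \qquad
\Sigma^{p}_{k+1} = \mathrm{NP}^{\Sigma^{p}_{k}}, \qquad
\Pi^{p}_{k} = \mathrm{co}\text{-}\Sigma^{p}_{k}, \qquad
\mathrm{PH} = \bigcup_{k \ge 0} \Sigma^{p}_{k}.
```

Since $\Sigma^{p}_{1} = \mathrm{NP}^{\mathrm{P}} = \mathrm{NP}$, the first two levels are exactly P and NP, as the abstract states.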

June 22, 2022 [Lecture hall, Doblerstr. 33]

###### The Proof-Theoretic Square

*Dr. Antonio Piccolomini d'Aragona (University of Siena and Aix-Marseille University)*

In this talk, I focus on the interaction between two dichotomies in Prawitz’s proof-theoretic semantics, i.e. the dichotomy between monotonicity and non-monotonicity of validity of arguments over a specific atomic base (call this the local level), and the dichotomy between schematicity and non-schematicity of validity of arguments over all atomic bases (call this the global level). I argue that these dichotomies undergo some conceptual symmetries, both internally - i.e. the opposition at the local level is in a way conceptually analogous to the opposition at the global level - and externally - i.e. the alternative at the local level is somehow conceptually mirrored by the alternative at the global level. These symmetries may be understood as imposing a quite strict constraint on the overall semantic framework, i.e. one requires non-monotonicity at the local level iff one requires schematicity at the global level and, vice versa, one requires monotonicity at the local level iff one requires non-schematicity at the global level. This returns two conceptually and extensionally distinct proof-theoretic semantics, which both seem to be compatible with Prawitz’s philosophical tenets. However, I also argue that the aforementioned symmetries stem from a deeper interaction at play in Prawitz’s semantics, namely, the interaction between non-logical meanings and meaning - i.e. justification - of non-primitive inference rules. Based on this deeper interaction, two further “mixed” readings (monotonicity/schematicity and non-monotonicity/non-schematicity) can be said to be compatible with Prawitz’s intentions - and are actually found in the literature. I finally claim that further combinations given by the interaction between non-logical meanings and meaning of rules are either void or equivalent to one of the four possibilities above.
Thus, we are left with a group of four Prawitz-compatible semantics, forming a diagram whose arrows are “harmonically” oriented by the interaction between non-logical meanings and meaning of rules.

June 15, 2022 [Online]

##### Justification Logic - Introduction and Recent Developments

*Prof. Dr. Thomas Studer (University of Bern)*

Justification logics are closely related to modal logics and can be viewed as a refinement of the latter with machinery for justification manipulation. Justifications are represented directly in the language by terms, which can be interpreted as formal proofs in a deductive system, evidence for knowledge, and so on. This more expressive language proved beneficial in both proof theory and epistemology and helped investigate problems ranging from a classical provability semantics for intuitionistic logic to the logical omniscience problem. In this talk, we will give an introduction to justification logic and present recent developments in the field such as conflict tolerant logics and formalizations of zero-knowledge proofs.
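To illustrate the refinement of modal logic that the abstract describes (a standard textbook example, not taken from the talk), the modal K axiom is replaced in Artemov's Logic of Proofs by an application axiom on justification terms:

```latex
\underbrace{\Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi)}_{\text{modal K axiom}}
\quad\rightsquigarrow\quad
s{:}(\varphi \to \psi) \to \bigl(t{:}\varphi \to (s \cdot t){:}\psi\bigr)
```

Here $t{:}\varphi$ reads "$t$ is a justification of $\varphi$", and $s \cdot t$ denotes the application of justification $s$ to justification $t$, making the evidence explicit where modal logic only asserts its existence.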

May 25, 2022 [Online]

###### Archaeometry: brief overview

*Prof. Dr. Ioannis Liritzis (European Academy of Sciences and Arts & Henan University)*

The best place to find the past is none other than an “archaeological site”: a site where past activity is preserved, traced through remains such as food, structures, human-made objects, and more. Archaeometry provides answers to archaeological questions concerning material culture.

The broad subject of archaeometry can be divided into seven sub-fields: 1) Chronology/Dating; 2) Characterization & Provenance (chemical analysis, statistics); 3) Bioarchaeology (stable isotopes, aDNA, ancient diet); 4) Conservation (analysis & preventive-passive conservation/restoration); 5) Archaeoastronomy (measuring time, determining rituals, celebrations); 6) Archaeo-geophysical Prospection (locating buried antiquities); 7) 3D Reconstruction.

Investigating the past requires remains of material culture: human artifacts and constructions, geoarchaeological materials, and human remains of organic and inorganic origin. Multiple techniques are available for these investigations, supporting instrumental analysis with nuclear, spectroscopic, chemical, and electronic devices. With the growing research and application of computer systems, cyber-archaeology has emerged, making use of big-data collection, storage, documentation, and in-field processing with diverse equipment, resulting in the digital reconstruction of monuments and artifacts, both in the field and in museums.

Documentation is an emerging, fast-developing field, especially for the preservation of antiquities at risk from natural and anthropogenic destruction. It also covers documentation of the techniques used to make a work of art through non-destructive (without sampling) readings of electromagnetic radiation in the optical, IR, UV, NIR ranges and beyond. This is achieved by presenting energy spectra and a kind of tomography that digitally reveals underlying layers of an overpainted work of art, like a palimpsest.

A few selected examples are given of case studies involving radiocarbon and luminescence dating, archaeoastronomy, characterization, analysis, and provenance. Archaeometry deciphers the past; the natural sciences together with archaeometry strengthen and develop the spirit of interdisciplinarity, delve into the past, and retain our memory. In the remote past we meet our future, enhancing sustainability and growth as well as ecumenical values. Archaeometry is the heart of the present, in which past and future meet.

May 18, 2022 [Online]

###### Pythian Games & Pythiad, European Delphic Intellectual Movement. A Celebration of Antiquity Reimagined: Delphic Festivals

*Prof. Dr. Ioannis Liritzis (European Academy of Science and Arts)*

Europe is the cradle of civilization, and it explored both opposite directions, East & West. In the year 586 BC, the first Pythian Games (artistic, with some small-scale athletics) took place in Delphi, Ancient Greece. The Delphic Intellectual Movement is a branding initiative under the leadership and aegis of the European Academy of Sciences & Arts.

Our commitment is to revive a Code of Ethics and re-establish the European roots of arts, culture, and rational thought, resetting self-knowledge and reassessing the values of freedom and human and international rights. The European Academy of Sciences & Arts is committed to promoting scientific and societal progress.

The aim of the Pythian Games revival is to re-establish moral and social values in the modern era, via various cultural-scientific activities, for a) the acquaintance with and fostering of ancient Classical Culture, as it emerges from the Delphic spirit, a cultural and spiritual cradle of Europe and at the same time an ecumenical symbol of knowledge, and b) scientific research on the dissemination of the ancient classical Logos, Values and Virtues, for the benefit of human beings and the environment, with the aid of modern technology, as shown from within art, philosophy and philological witnesses.

Our engagement and goal is the reclamation of dormant classical human values by gradually reviving the Pythian Games with a modern, balanced perspective.

The Implementation

1. The composition of an orchestrated symphonic work, presented as a musical polymedia event

2. Reanimation of the Pythian Games every four (4) years

2.1 Artistic-cultural competition of tangible and intangible culture (music, dance, prose-poetry, etc.).

2.2 Small-scale athletics.

3. Pythiads. Held every two (2) years, in between the four-year Games, comprising:

3.1 High-tech breakthrough achievements, exclusively in digital and cyber-technologies applied to cultural heritage (tangible and intangible): virtual reality, haptic technology, drama and multimedia applications, 3D reconstructions, etc., reconstructing past tangible and intangible culture.

3.2 International Symposium of World Interdisciplinary Experts in Ecumenical Principles for better living: a World Forum taking a holistic approach to current and future wellbeing in a harmonic & balanced manner.

## 2022

February 2, 2022 [Online]

##### Normative Systems and Their Conflicts - in Law and in AI

*Réka Markovich JD PhD (Université du Luxembourg)*

I present a formalism I have developed for reasoning about different normative systems and their applicability, addressing a concern that exists both in law and in Artificial Intelligence.

January 26, 2022 [Online]

##### Inferential Deflationism

*Prof. Dr. Luca Incurvati (Amsterdam)*

Deflationists about truth hold that the function of the truth predicate is to enable us to make certain assertions we could not otherwise make. Pragmatists claim that the utility of negation lies in its role in registering incompatibility. The pragmatist insight about negation has been successfully incorporated into bilateral theories of content, which take the meaning of negation to be inferentially explained in terms of the speech act of rejection. In this talk, I will implement the deflationist insight in a bilateral theory by taking the meaning of the truth predicate to be explained by its inferential relation to assertion. This account of the meaning of the truth predicate is combined with a new diagnosis of the Liar Paradox: its derivation requires the truth rules to preserve evidence, but these rules only preserve commitment. The result is a novel inferential deflationist theory of truth. The theory solves the Liar Paradox in a principled manner and deals with a purported revenge paradox in the same way. If time permits, I will show how the theory and simple extensions thereof have the resources to axiomatise the internal logic of several supervaluational hierarchies, thereby solving open problems of Halbach (2011) and Horsten (2011). This is joint work with Julian Schlöder.
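As background (a standard schematic rendering, not quoted from the abstract), the truth rules at issue are the introduction and elimination rules for the truth predicate:

```latex
\frac{\varphi}{T\ulcorner\varphi\urcorner}\;(T\text{-I})
\qquad\qquad
\frac{T\ulcorner\varphi\urcorner}{\varphi}\;(T\text{-E})
```

On the diagnosis sketched above, these rules preserve commitment but not evidence, which is what blocks the derivation of the Liar Paradox.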

## 2021

## Dec 15 2021 Philipp Stecher

December 15, 2021 [Online]

##### Concepts' Lifecycle in Artificial Intelligent Systems

*Philipp Stecher (Tübingen)*

Understanding how humans acquire and process concepts has been an ongoing pursuit for millennia. On the other hand, artificial intelligence (AI) has advanced rapidly in recent decades, helping to algorithmize and thereby better understand humans’ concept-processing capabilities. Especially in the recent past, AI scholars achieved remarkable progress: Today’s deep learning algorithms are expanding the capabilities of AI, enabling it to incorporate and process increasingly complex representations of the world (aka "concepts"). However, although AI’s capabilities are often described as “humanlike”, current state-of-the-art AI algorithms can barely meet these expectations. In contrast, recent research postulates significant differences in how current AI systems process concepts of the world as opposed to humans. While people can build rich, integrated concepts that can be applied across domains based on sparse data, today’s AI often requires large amounts of data to create rather superficial concepts that are only applicable to the domain in which the AI is operating. The illustrated gaps in concept-processing capabilities, as well as the recent advancements in AI, reflect the starting point for this research, which aims to shed light on the question of how modern AI systems process concepts in contrast to humans. To this end, a taxonomy will be derived that clusters the concept processing capabilities of modern AI systems. Hereafter, these capabilities will be systematically compared to the concept-processing capabilities of humans. The paper may close with recommendations for further research and concluding remarks on the status of concept-processing capabilities of current AI.

## Dec 8 2021 Dr. Guus Eelink

December 8, 2021 [Online]

##### Is Protagoras a Relativist about Truth?

*Dr. Guus Eelink (Oxford)*

Relativism about truth is a view that has fascinated many philosophers. Some philosophers have flirted with it (or have been accused of doing so), whereas others have attempted to show that it is an incoherent view. Recently, some philosophers of language have endorsed local versions of relativism about truth. Many historians of philosophy have claimed that relativism about truth goes all the way back to Ancient Greek philosophy, and that it was espoused by Protagoras of Abdera, a philosopher from the 5th century BC. This claim is mainly based on Plato's dialogue Theaetetus, which contains the most elaborate discussion of Protagoras' view of truth. In the Theaetetus, Plato ascribes to Protagoras the view that 'whatever is believed is true for the believer'. Many scholars have claimed that the qualification 'for the believer' is meant to relativize truth to the believer. I shall argue that this interpretation of Protagoras' view is incorrect. I shall argue that Protagoras is not a relativist about truth. Instead, Protagoras holds that all beliefs are absolutely true. I shall argue that the qualification 'for the believer' is meant to explain how beliefs are absolutely true. The believer is part of the metaphysical explanation of how beliefs are absolutely true. I am not the first interpreter who claims that the qualification 'for the believer' is not meant to relativize the truth predicate. However, what is distinctive about my interpretation is that 'whatever is believed is true for the believer' entails 'whatever is believed is true (simpliciter)'. I shall show that my interpretation can account for Plato's celebrated argument that Protagoras' view is self-defeating.

## Oct 20 2021 Prof. Dr. Jan von Plato

October 20, 2021 [Online]

##### How Gödel Discovered His Incompleteness Theorems

*Prof. Dr. Jan von Plato (University of Helsinki)*

Gödel surprised the mathematical world with his famous incompleteness theorems of arithmetic of 1931. The way he arrived at these results is described on the basis of two of his shorthand notebooks, which have recently been transcribed.

## July 28 2021 Prof. Dr. Maximilian Schich

July 28, 2021 [Online]

##### Cultural Analysis Situs

*Prof. Dr. Maximilian Schich (Tallinn University, Estonia)*

The disciplines of complex network science, of art and cultural history, and of computation have a common ancestor in the analysis situs of Gottfried Wilhelm Leibniz. Unfortunately, this shared conceptual origin has so far remained hidden within a history of science that is tragically bifurcated, due to the branching evolution of disciplinary focus, due to changes in language, and due to sometimes forced scholarly migration. This talk, which is based on the first chapter* of an upcoming book, breaks the mutual tear lines of citation between disciplines to enable a common future. What is at stake is the surprisingly deep-rooted and shared foundation of the emerging enterprise of a systematic science of art and culture. This enterprise currently flourishes mainly in departments of multidisciplinary information science, network and complexity science, and applications in industry. It promises nothing less than an integration of humanistic inquiry and a physics of cultures and cultural production.

(preprint: https://doi.org/10.11588/artdok.00006347).

## July 21 2021 Dr. Christoph Peylo

July 21, 2021 [Online]

##### AI and Ethics - Why This Is a Topic of Interest and Why Enterprises (Should) Have a Stake in It

*Dr. Christoph Peylo (Project "Digital Trust," Bosch)*

## July 14 2021 Prof. Dr. Federico Pailos

July 14, 2021 [Online]

##### Why Metainferences Matter

*Prof. Dr. Federico Pailos (Buenos Aires)*

In this talk, I will present new arguments that shed light on the importance of metainferences of every level, and metainferential standards of every level, when (semantically) characterizing a logic. This implies that a logician cannot be agnostic about metainferences, metametainferences, etc. The arguments I will introduce show why a thesis that Dave Ripley defends in [1] and [2] is false. This is how he presents it.

Note that a meta₀-counterexample relation X [i.e., a counterexample relation for inferences, which is (in most contexts) equivalent to a satisfaction relation for inferences], on its own, says nothing at all about validity of metaₙ-inferences for 0 < n. Despite this, there is a tendency to move quickly from X to [X] [i.e., a full counterexample relation for every metainferential level], at least for some purposes... For example, [3] (p. 360, notation changed) says “[A]bsent any other reasons for suspicion one should probably take [X] to be what someone has in mind if they only specify X.” I don’t think this tendency is warranted. Most of the time, when someone has specified a meta₀-counterexample relation (which is to say an ordinary counterexample relation), they do not have the world of all higher metainferences [i.e., metainferences of any level], full counterexample relations, etc., in mind at all. They are often focused on validity for meta₀-inferences (which is to say inferences). ([1], page 12.)

Though I do think that, in a sense, people do have in mind [X] when they say X, I will not argue for that. I just want to defend that they should have something like that in mind. Specifically, I will show why the following position should be revised:

As I’ve pointed out, an advocate of ST as a useful meta₀-counterexample relation has thereby taken on no commitments at all regarding metaₙ-counterexample relations for 1 ≤ n. ([1], page 16.)

Or, as Ripley puts it elsewhere:

... if someone specifies just a metaₙ-consequence relation, they have not thereby settled on any particular metaₙ₊₁-consequence relation. ([2])

If Ripley’s statements are true, then two different logicians may count as advocates of the same inferential logic (or any metainferential logic of level n), despite adopting quite different criteria regarding what counts as a valid metainference (or a valid metainference of level n+1). If Ripley is right, then not only can a supporter of a (non-transitive) logic like ST accept or reject the metainference corresponding to (some version of) the Cut rule, but she can also admit a metainferential counterexample relation that corresponds to a trivial or an empty metainferential consequence relation. Moreover, this might have repercussions on the inferential level, as an empty metainferential logic invalidates any metainference with an empty set of premises and a valid ST-inference as a conclusion. Thus, the only available option is to admit that an inference, on the one hand, and the metainference with an empty set of premises and that inference as its only conclusion, on the other hand, are not only different, but also non-equivalent things. Something similar happens if we choose a trivial metainferential counterexample relation while adopting ST at the inferential level. In this case, there will be invalid ST-inferences that turn out to be valid in their metainferential form, forcing this logician to choose between the options specified before.

This is a particularly strong result, and it is even stronger than it might initially seem, in two senses: (1) it does not depend on the notion of metainferential validity being favoured, e.g. whether one thinks that the local way to understand it is better than the global, or the other way around; (2) it does not depend on the special features of the (mixed) inferential/metainferential relations, as the result can be replicated for any pair of (mixed) metainferential relations of level n/n+1.

References

[1] D. Ripley. One step is enough. (Manuscript).

[2] D. Ripley. A toolkit for metainferential logics. (Manuscript).

[3] C. Scambler. Classical Logic and the Strict Tolerant Hierarchy. Journal of Philosophical Logic, forthcoming, 2019. DOI: 10.1007/s10992-019-09520-0.

## June 30 2021 Prof. Dr. Mathias Frisch

June 30, 2021 [Online]

##### Uses and Misuses of Models in Pandemic Policy Advice

*Prof. Dr. Mathias Frisch (Hannover)*

Have epidemiological models of the Covid-19 pandemic been a failure, as some have argued? Have policy makers violated their epistemic duty in the pandemic by acting on deeply uncertain evidence? In this talk I examine the epistemic status of epidemiological models and the possible roles they can play in scientific policy advice in situations characterized by missing data and poorly constrained parameter values. I will argue that some criticisms presuppose an overly narrow conception of the possible uses of models. But I will discuss pitfalls in using models for policy advice to which some of the criticisms draw attention. Finally, I will suggest one framework for how to use uncertain modeling results in policy decisions under extreme urgency.

## June 23 2021 Dr. Daniel Kostić, Prof. Dr. Nathalie Niquil

June 23, 2021 [Online]

##### Perspectivism and Vertical-Horizontal Explanatory Modes in Ecological Networks

*Dr. Daniel Kostić (Radboud University) and Prof. Dr. Nathalie Niquil (CNRS)*

We show how perspectival criteria help to determine explanatory relevance in ecological network models. We provide a counterfactual analysis of the explanatory power of a marine network model, which includes perspectival criteria for using a horizontal mode (when the counterfactual relata are at the same level) and a vertical mode (when the counterfactual relata are at different levels). Distinguishing vertical and horizontal counterfactual modes is important for understanding how different organizational levels of a system are functionally related, as well as how exogenous changes affect each level. We show that perspectival criteria play a more important epistemic role than merely informing modeling decisions. If such criteria were not available, it would not be intelligible how the relevant counterfactual figures in an explanation. They determine the explanatory relevance conditions for a counterfactual. Based on this theoretical framework, we further point out how our analysis can be used in designing more sustainable spatial management policies for aquatic resources.

## June 9 2021 Dr. Silvia de Toffoli

June 9, 2021 [Online]

##### What Are Mathematical Diagrams?

*Dr. Silvia de Toffoli (Princeton University)*

Although traditionally neglected, mathematical diagrams have recently attracted much attention from philosophers of mathematics. By now, the literature includes several case studies investigating the role of diagrams both in discovery and proof. Certain preliminary questions have, however, been bypassed. What are diagrams exactly? Are there different types of diagrams? In the scholarly literature, the term “mathematical diagram” is used in diverse ways. I propose a working definition that carves out the phenomena that are of most importance for a taxonomy of diagrams in the context of a practice-based philosophy of mathematics, privileging examples from contemporary mathematics. In doing so, I move away from vague, ordinary notions. I define mathematical diagrams as forming notational systems and as being geometric/topological representations or two-dimensional representations (or both). I also examine the relationship between mathematical diagrams and spatiotemporal intuition. By proposing a precise definition, I explain (away) certain controversies in the existing literature. Moreover, I shed light on why mathematical diagrams are so effective in certain instances, and, at other times, dangerously misleading.

## June 2 2021 Prof. Dr. Aaron Sloman

June 2, 2021 [Online]

##### Unsolved Problems Linking Physics, Biology, Consciousness, Philosophy of Mathematics, and Chemical Information Processing

*Prof. Dr. Aaron Sloman (University of Birmingham)*

There are types of spatial intelligence, detecting and employing varieties of spatial possibility, necessity, and impossibility, that cannot be explained by currently known mechanisms. Evidence from newly hatched animals suggests that mechanisms using still unknown chemistry-based forms of computation can provide information that goes beyond regularity detection, concerned with possibility spaces and their restrictions. Ancient human spatial intelligence may be based on multi-generational discovery of what is possible, necessarily the case, or impossible, in complex and changing environments, using related mechanisms of spatial cognition, centuries before Euclid, that enabled discoveries regarding possibility, impossibility and necessity in spatial structures and processes, long before modern mathematical, symbolic, logic-based, or algebraic formalisms were available.

Immanuel Kant characterised such mathematical cognition in terms of three distinctions largely ignored in contemporary psychology, neuroscience, and AI research: non-empirical/empirical, analytic/synthetic, and necessary/contingent. He argued that ancient geometric cognition was not based simply on empirical generalization, nor on logical deduction from arbitrary definitions. The truths discovered were non-empirical, synthetic, and non-contingent.

Neither formal logic-based characterizations of mathematics (used in automated theorem provers), nor postulated neural networks collecting statistical evidence to derive probabilities can model or explain such mathematical discoveries. E.g. necessity and impossibility are not extremes on a probability scale.

Unexplained facts about spatial competences of newly hatched animals, before neural networks can be trained in the environment, may be related to mechanisms underlying ancient spatial intelligence in humans and other animals.

Chemical mechanisms inside eggs, available before hatching, somehow co-existing with the developing embryo, apparently suffice. Such mechanisms may be partly analogous to types of "virtual machinery" only recently developed in sophisticated forms that provide services across the internet (like zoom meetings) that "float persistently" above the constantly changing, particular physical mechanisms at work, without occupying additional space.

While chemical mechanisms in early stages of reproduction are well-studied, little is known about the enormously complex types of machinery required for later stages, e.g., of chick production, including the creation of control mechanisms required for actions soon after hatching. I suggest that development of the foetus uses many stages of control by increasingly sophisticated *virtual* machines controlling and coordinating chemical mechanisms as they create new chemical mechanisms *and* new layers of virtual machinery.

Different sub-types must have evolved at different times, and the later, more complex virtual machines may have to be assembled by earlier virtual machines, during foetus development, whereas earliest stages of reproduction simply use molecular mechanisms controlling formation and release of chemical bonds linking relatively simple chemical structures.

I suspect Alan Turing's work on chemistry-based morphogenesis (published 1952) was a side effect of deeper, more general, thinking about uses of chemistry-based spatial reasoning in intelligent organisms. But he died without publishing anything to support that suspicion, though he did assert in 1936 that machines can use mathematical ingenuity, but not mathematical intuition, without explaining the difference (on which Kant might have agreed). We may never know how far his thinking had progressed by the time he died.

Extended version:

https://www.cs.bham.ac.uk/research/projects/cogaff/misc/unsolved.html

## May 19 2021 Dr. Vincenzo Politi

May 19, 2021 [Online]

##### Anticipative Reflection in an Interdisciplinary Research Team: A Case Study

*Dr. Vincenzo Politi (University of Oslo)*

Responsible Research and Innovation (RRI) and similar science policy frameworks aim at reinforcing the social responsibility of science and technology by promoting a reflective and anticipatory attitude among researchers. Such an attitude requires the ability to imagine future scenarios in order to predict and assess the potential societal implications of innovative research. Responsible research, therefore, requires a future-oriented attitude. ‘Future’, however, may mean different things. In this talk, I discuss the results of a qualitative study conducted with an interdisciplinary research team working on innovative personalised targeted cancer therapies. The study reveals that, within the research team, different individuals think about different kinds of future. Depending on which kind of future they think about, researchers anticipate different kinds of impact of their work, which I define ‘internal’ and ‘external impact’. In the conclusions, I will investigate which kind of knowledge and expertise researchers should be equipped with in order to develop the ability to think about the future implications of their work.

## May 12 2021 Sandro Radovanović

May 12, 2021 [Online]

##### Effects of Affirmative Actions in Algorithmic Decision-Making

*Sandro Radovanović (University of Belgrade)*

In today's business, decision-making is heavily dependent on algorithms. Algorithms may originate from operational research, machine learning, or decision theory. Regardless of their origin, the decision-maker may create unwanted disparities regarding race, gender, or religion. More specifically, automation of the decision-making process can lead to unethical acts with legal consequences. To mitigate unwanted consequences of algorithmic decision-making, one must adjust either the input data, the algorithms, or the decisions. In this talk, an overview of fairness in algorithmic decision-making from a machine learning point of view will be presented, along with approaches developed in the literature. The talk aims at presenting a way to ensure fairness in algorithmic decision-making that avoids disparate impact and satisfies equalized odds. After presenting the methodology, we discuss what is flawed in the approaches the machine-learning community has adopted while "fighting unfairness," and, as a result, what the properties of true affirmative actions in algorithmic decision-making are and how they can be achieved.
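The two fairness criteria named in the abstract can be made concrete with a small sketch. The following Python example is our own illustration (not the method developed in the talk): it computes the disparate impact ratio and the equalized-odds gaps for binary decisions over two groups.

```python
def positive_rate(decisions):
    """Fraction of positive (1) decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, reference):
    """Ratio of positive-decision rates between a protected group and a
    reference group; the 'four-fifths rule' flags values below 0.8."""
    return positive_rate(protected) / positive_rate(reference)

def group_rates(decisions, labels):
    """True-positive and false-positive rates of binary decisions
    against ground-truth labels."""
    tp = sum(1 for d, y in zip(decisions, labels) if d == 1 and y == 1)
    fp = sum(1 for d, y in zip(decisions, labels) if d == 1 and y == 0)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg

def equalized_odds_gaps(dec_a, labels_a, dec_b, labels_b):
    """Absolute TPR and FPR differences between two groups; equalized
    odds requires both gaps to be (close to) zero."""
    tpr_a, fpr_a = group_rates(dec_a, labels_a)
    tpr_b, fpr_b = group_rates(dec_b, labels_b)
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)

# Example: the protected group is approved far less often than the
# reference group (rate 0.25 vs. 0.75), so the ratio falls below 0.8.
print(disparate_impact([1, 0, 0, 0], [1, 1, 0, 1]))
```

A decision rule can pass one criterion and fail the other, which is one reason the literature treats them as separate constraints.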

## May 5 2021 Dr. Antonio Piccolomini d'Aragona

May 5, 2021 [Online]

##### Kreisel's Informal Rigour and Gödel's Absolute Provability: A Tentative Reading through and for Prawitz's Semantics

*Dr. Antonio Piccolomini d'Aragona (Aix-Marseille)*

In spite of their philosophical relevance, Kreisel’s theory of informal rigour and Gödel’s concept of absolute provability have proved elusive to rigorous mathematised treatments. In my talk, I will set out to connect Kreisel’s and Gödel’s ideas to Prawitz’s proof-based semantics. Prawitz’s semantics has been put forth and developed independently of Kreisel and Gödel, but some of its basic tenets may nonetheless match those of informal rigour and absolute provability. Both Kreisel and Gödel aim at bringing provability back into mathematical practice – against the post-Fregean and post-Hilbertian formalistic attitude – as well as at overstepping formal derivability – given Gödel’s and Turing’s limiting results. In order to do this, provability must become informal (i.e. independent of formal languages and systems) and absolute (i.e. formalism-free and/or universally applicable). This may be in line with the intuitionistic idea of giving provability a “semantic” role, an idea of which Prawitz’s semantics is a well-known instance. As a result, I argue that Prawitz’s semantics shares some issues with Kreisel’s informal rigour, while the link with Gödel’s absolute provability is more difficult to establish.

## April 21 2021 Dr. Christian Feldbacher-Escamilla

April 21, 2021 [Online]

##### AI for a Social World - A Social World for AI

*Dr. Christian Feldbacher-Escamilla (Düsseldorf)*

AI is not only supposed to help tackle social problems; it is also frequently used to solve such problems in practice. AI-assisted systems play an increasingly important role in the legal domain, the health sector, environmental research, public policy-making, and the like. Research in this field is extensive and diverse. In this talk, we argue, however, that it is also interesting to look in the opposite direction: How can our knowledge of the social world and its structural features help us to approach problems of AI? In particular, we will investigate how a social perspective on problems of justification helps us to address epistemic problems of machine learning theory.

## April 6 2021 Prof. Dr. Helen Longino

April 6, 2021 [Online]

##### Critical Contextual Empiricism, Diversity and Inclusiveness

*Prof. Dr. Helen Longino (Stanford University)*

Watch the recording on Facebook.

Humanity looks to the scientific community, now more than ever, in order to provide solutions to today's challenges. Decisions made by scientists thus directly and deeply influence human lives. The Carl Friedrich von Weizsäcker Center is interested in the foundations of responsible science. For example, how can we identify and avoid scientific misconduct, e.g. plagiarism and fraud, or the abuse of science for commercial purposes? How do we navigate issues of morally questionable research, research funding, and global inequalities? How can scientists ensure optimal knowledge production in the face of the replication crisis, cognitive biases in science, and the politics of peer review? Further, how can we protect scientists from becoming commodities when their products are so ardently sought by politicians and society?

## April 6 2021 Prof. Dr. Nancy Cartwright

April 6, 2021 [Online]

##### Responsible Science - Responsible Use

*Prof. Dr. Nancy Cartwright (Durham University)*

Watch the recording on Facebook.


## Feb 24 2021 Prof. Dr. Marco Panza, Prof. Dr. Daniele Struppa

February 24, 2021 [Online]

##### Agnostic Science and Mathematics

*Prof. Dr. Marco Panza (Paris 1) and Prof. Dr. Daniele Struppa (Chapman University)*

We will first illustrate the notion of agnostic science (science without understanding), and then reflect on the effect that the practice of agnostic science has on the use of mathematics in science, and on the development of mathematics itself.

## Feb 17 2021 Dr. Benedikt Ahrens

February 17, 2021 [Online]

##### The Univalence Principle

*Dr. Benedikt Ahrens (Birmingham)*

Michael Makkai's "Principle of Isomorphism" stipulates that mathematical reasoning is invariant under equivalence of mathematical structures. Inspired by Makkai, Vladimir Voevodsky conceived the Univalent Foundations (UF) of Mathematics as a foundation of mathematics in which only equivalence-invariant properties and constructions can be formulated. Coquand and Danielsson proved that UF indeed provides an isomorphism-invariant language for *set-level* structures, such as groups and rings, that form a 1-category. Ahrens, Kapulkin, and Shulman proved an extension for 1-categories: any property and construction that can be expressed in UF transfers along equivalence of categories—as long as “categories” are correctly defined to satisfy a local “univalence” condition. In the semantics of UF in simplicial sets, this univalence condition corresponds to Charles Rezk’s completeness condition for (truncated) Segal spaces.

In this talk, based on joint work with Paige Randall North, Michael Shulman, and Dimitris Tsementzis, I will show how to generalize this result to other higher-categorical structures. We devise a notion of signature and theory that specifies the data and properties of a mathematical structure. Our main technical achievement lies in the definition of isomorphism between two elements of a structure, which generalizes the notion of isomorphism between two objects in a category. Such isomorphisms yield the companion notion of univalence of a structure. Our main result says that for univalent structures M, N of a signature, the identity type M = N coincides with the type of equivalences M ≃ N. This entails that any property and construction on a univalent structure transfers along a suitable notion of equivalence of structures. Our signatures encompass the aforementioned set-level structures but also topological spaces, (multi-)categories, presheaves, fibrations, bicategories, and many other (higher-)categorical structures.
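The main result quoted above can be stated compactly. The following LaTeX display is our own schematic restatement, with symbols following the abstract: Voevodsky's univalence axiom for types, and its structure-level generalization.

```latex
% Voevodsky's univalence axiom: for types A, B in a univalent universe U,
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
% The structure-level generalization: for univalent structures M, N
% of a signature, identity coincides with equivalence of structures,
(M = N) \;\simeq\; (M \simeq N)
```

Read left to right: any identification of the structures is the same data as an equivalence between them, which is what licenses transporting properties and constructions along equivalences.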

## Feb 10 2021 Marcel Ertel

February 10, 2021 [Online]

##### Independence and Truth-Value Determinacy in Set Theory

*Marcel Ertel (Tübingen)*

We discuss the philosophical significance of classical and more recent results in the metamathematics of set theory: the Gödel-Cohen independence theorem of the Continuum Hypothesis (CH) from first-order set theory; Zermelo's quasi-categoricity result characterizing models of second-order set theory and Lavine's improvement thereof in an extended first-order framework (using Feferman's idea of a "full schema" allowing substitution of formulas from arbitrary language-expansions); and Väänänen's internal categoricity results.

In light of these technical results, we assess the ongoing debate between proponents of a set-theoretic multiverse (likening the CH to Euclid's parallel postulate in geometry) and defenders of the determinacy of the truth-value of the CH. We present two arguments against the multiverse view, and end with a discussion of the philosophical difficulties in explaining what it means 'to be a solution of the continuum problem'.

## Feb 3 2021 Paulo Guilherme Santos

February 3, 2021 [Online]

##### k-Provability in PA

*Paulo Guilherme Santos (Tübingen)*

We study the decidability of k-provability in PA – the decidability of the relation 'being provable in PA with at most k steps' – and the decidability of the proof-skeleton problem – the problem of deciding if a given formula has a proof that has a given skeleton (the list of axioms and rules that were used). The decidability of k-provability for the usual Hilbert-style formalisation of PA is still an open problem, but it is known that the proof-skeleton problem is undecidable for that theory. Using new methods, we present a characterisation of some numbers k for which k-provability is decidable, and we present a characterisation of some proof-skeletons for which one can decide whether a formula has a proof whose skeleton is the considered one (these characterisations are natural and parameterised by unification algorithms).

## Jan 27, 20, 13 2021 Dr. Roberta Bonacina

January 27, 2021 + January 20, 2021 + January 13, 2021 [Online]

##### Introduction to Homotopy Type Theory I, II and III

*Dr. Roberta Bonacina (Tübingen)*

Homotopy type theory is a vibrant research field in contemporary mathematics. It aims to provide a foundation of mathematics that extends Martin-Löf type theory with the central notion of univalence, which induces a connection between types and homotopy spaces.

We will begin the short course by defining the simple theory of types, and showing how it can be extended to Martin-Löf type theory and then to homotopy type theory. We will stress the propositions-as-types interpretation connecting these type theories with intuitionistic logic, and study the notion of equality in detail. Then we will show how classical logic can be recovered in this intuitionistic setting by introducing the law of excluded middle and the axiom of choice as axioms. Finally, we will analyse the different definitions of equivalence, which are fundamental for introducing univalence.
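As a taste of the propositions-as-types interpretation mentioned in the course outline, here is a minimal Lean 4 sketch (our own illustration, not course material): proofs are terms, implication is the function type, conjunction is a product, and equality is itself a type.

```lean
-- Propositions-as-types: a proof of A → (B → A) is just a function term.
def const {A B : Prop} : A → (B → A) :=
  fun a _ => a

-- Conjunction behaves like a product type: we can project out each side.
def andLeft {A B : Prop} : A ∧ B → A :=
  fun h => h.left

-- Equality as a type: reflexivity is a term of the identity type a = a.
example {α : Type} (a : α) : a = a := rfl
```

The identity type in the last line is the one whose structure homotopy type theory reinterprets: its terms behave like paths between points of a space.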

## 2020

## Dec 16 2020 Prof. Dr. Klaus Mainzer

December 16, 2020 [Online]

##### Verification and Standardization of Artificial Intelligence - Results of the German Steering Group (HLG) of the AI Standardization Roadmap

*Prof. Dr. Klaus Mainzer (München)*

German Standardization Roadmap on Artificial Intelligence

## Dec 9 2020 Prof. Dr. Eberhard Knobloch

December 9, 2020

##### Leibniz's Concept of an ars characteristica or ars combinatoria: Examples from Mathematics

*Prof. Dr. Eberhard Knobloch (TU Berlin)*

Leibniz's concept of an ars characteristica or ars combinatoria illustrates the close connection between his philosophical and his mathematical thinking. The theoretical first part of the talk presents this concept together with its four advantages. Symbolic algebra served Leibniz as a model for this concept. The second part of the talk therefore exemplifies the concept with algebraic examples, in particular the example of symmetric functions. These were his central tool in the search for an algorithmic solution of algebraic equations of arbitrary degree.

## Dec 2 2020 Dr. Richard Lawrence

December 2, 2020

##### Hankel's Formalism, Frege's Logicism, and the Analytic-Synthetic Distinction

*Dr. Richard Lawrence (Tübingen)*

I will discuss some research on Hermann Hankel, an early proponent of a formalist viewpoint in the foundations of mathematics, and the relation of his view to Gottlob Frege's logicism. I will argue that Hankel had an important influence on Frege. In particular, Hankel's understanding of the analytic-synthetic distinction, and his argument against Kant's view of arithmetic, play an important role in Frege's understanding of his logicism in the *Foundations of Arithmetic*. Frege thinks of the distinction the same way Hankel does, and shares Hankel's basic strategy for arguing that arithmetic is analytic, rather than synthetic. Given these similarities, an important question arises about how Frege's view differs from Hankel's; I will close with some comments about the differences.

Link to the corresponding paper: https://philpapers.org/rec/LAWFHA

## Nov 25 2020 Dr. Michael T. Stuart

November 25, 2020

##### Guilty Artificial Minds: An Experimental Study of Blame Attributions for Artificially Intelligent Agents

*Dr. Michael T. Stuart (Tübingen)*

The concepts of blameworthiness and wrongness are of fundamental importance in human moral life. But to what extent are humans disposed to blame artificially intelligent agents, and to what extent will they judge their actions to be morally wrong? To make progress on these questions, we adopt two novel strategies. First, we break down attributions of blame and wrongness into more basic judgments about the epistemic and conative state of the agent, and the consequences of the agent’s actions. In this way, we are able to examine any differences between the way participants treat artificial agents in terms of differences in these more basic judgments about, e.g., whether the artificial agent “knows” what it is doing, and how bad the consequences of its actions are. Our second strategy is to compare attributions of blame and wrongness across human, artificial, and group agents (corporations). Others have compared attributions of blame and wrongness between human and artificial agents, but the addition of group agents is significant because these agents seem to provide a clear middle-ground between human agents (for whom the notions of blame and wrongness were created) and artificial agents (for whom the question is open).

## Nov 18 2020 Natalie Clarius

November 18, 2020

##### Automated Model Generation, Model Checking and Theorem Proving for Linguistic Applications

*Natalie Clarius, B.A. (Tübingen)*

We present a model generator, model checker and theorem prover we developed for applications in linguistics. Alongside a live demonstration of the system, we will discuss a selection of phenomena with respect to their formal and computational tractability, as well as the theoretical foundations and limitations of such automated reasoning systems.
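To give a flavour of what such automated reasoning systems do, here is a toy propositional sketch in Python (our own illustration, not the system presented in the talk): a model checker evaluates a formula in a model, and a naive model generator searches all truth-value assignments for a satisfying one.

```python
from itertools import product

def evaluate(formula, model):
    """Model checking: evaluate a formula in a model (a dict mapping
    atom names to truth values). Formulas are an atom name (str) or
    nested tuples ('not', f), ('and', f, g), ('or', f, g),
    ('implies', f, g)."""
    if isinstance(formula, str):
        return model[formula]
    op = formula[0]
    if op == 'not':
        return not evaluate(formula[1], model)
    if op == 'and':
        return evaluate(formula[1], model) and evaluate(formula[2], model)
    if op == 'or':
        return evaluate(formula[1], model) or evaluate(formula[2], model)
    if op == 'implies':
        return (not evaluate(formula[1], model)) or evaluate(formula[2], model)
    raise ValueError(f"unknown operator: {op}")

def find_model(formula, atoms):
    """Model generation by brute force: try every assignment of truth
    values to the atoms; return the first satisfying model, else None."""
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if evaluate(formula, model):
            return model
    return None
```

Real systems for linguistic applications handle quantified formulas over first-order models, where this brute-force search over assignments no longer suffices, which is one source of the tractability limitations the talk discusses.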

## Nov 11 2020 Dr. Maël Pégny

November 11, 2020

##### Machine Learning and Privacy: What's Really New?

*Dr. Maël Pégny (Tübingen)*

In this presentation, I will try to capture the new challenges for the respect of privacy raised by machine learning. I will use both a (very) long-term perspective inspired by anthropological work on the effects of cognitive techniques and the origins of writing, and a short-term perspective based on comparisons with other types of algorithms and data processing. I will try to show that machine learning has very specific and fundamental effects, which include challenging some of the basic categories on which our legal data protection regime was built.

## Nov 4 2020 Prof. Dr. Klaus Mainzer

November 4, 2020

##### Artificial Intelligence in the Global Competition of Value Systems

*Prof. Dr. Klaus Mainzer (München)*

The "atomic age" that Carl Friedrich von Weizsäcker took as his starting point in the 1950s and 1960s is a thing of the past. Today and tomorrow, the issues are digitalization and artificial intelligence (AI) – a global topic for the future that is dramatically changing the way we live and work. In times of Corona, this development is accelerating further. These technical possibilities meet different ideological environments, in which, as in the USA or China, big business, technocracies, and state monopolism can thrive. How can a European value system contribute to making AI a sustainable innovation?

Suggested reading:

K. Mainzer, Künstliche Intelligenz. Wann übernehmen die Maschinen?, Springer, 2nd ed., 2019 (English translation: Springer, 2019);

K. Mainzer, Leben als Maschine: Wie entschlüsseln wir den Corona-Kode? Von der Systembiologie und Bioinformatik zu Robotik und Künstlicher Intelligenz, Brill Mentis, 2020

Video of the talk: https://www.youtube.com/watch?v=Tf4ccAetTSM