Carl Friedrich von Weizsäcker-Zentrum

Carl Friedrich von Weizsäcker Colloquium

Every week, the Carl Friedrich von Weizsäcker Colloquium welcomes guest speakers and members of the center for talks on our main research areas. Open to everyone!

Usual time and place: Wednesdays from 5 to 6 pm (CET) at the Carl Friedrich von Weizsäcker Center, Doblerstr. 33, Room 1.3 and via Zoom

Organizers: Prof. Dr. Reinhard Kahle & Dr. Thomas Piecha

Want to get notified of upcoming colloquium sessions? Send an email to our secretary Aleksandra Rötschke to subscribe to announcements!

Looking for a specific talk? Check out our YouTube playlist!


Next up

January 08, 2025

The epistemology of Machine Learning

Prof. Dr. Dr. Claus Beisbart (University of Berne, CH)


Upcoming Talks

January 08, 2025

The epistemology of Machine Learning

Prof. Dr. Dr. Claus Beisbart (University of Berne, CH)

January 15, 2025

Beyond Footnotes: Lakatos's meta-philosophy and the history of science

Prof. Dr. Samuel Schindler (University of Aarhus, DK)

January 15, 2025, 4 pm

Some ideas for a Kuhnian-Lakatosian reading of the opposition between realism and constructivism in logic and the foundations of mathematics

Dr. Antonio Piccolomini d'Aragona (University of Tübingen)

 

January 22, 2025

TBA

Prof. Dr. Ladislav Kvasz (Czech Academy of Sciences)

January 29, 2025

Philosophical Problems in Complex Systems

Prof. Dr. Meinard Kuhlmann (Johannes Gutenberg University of Mainz, DE)


Past Colloquium Sessions

2024


December 16, 2024

Logical theories for querying NP

Eduardo Skapinakis and Marcel Ertel (University of Tübingen)

 

What are the computational resources that we use in a mathematical proof? What statements can we prove once we restrict them? These questions can be made formal in the framework of applicative theories, a branch of Feferman's program of explicit mathematics. In this talk, we will introduce logical theories that utilize computational resources from the polynomial-time hierarchy, a generalization of non-deterministic polynomial time, and compare them to theories characterizing deterministic polynomial time. We will give a gentle introduction to explicit mathematics and complexity theory, discuss the motivations for relating them to each other, and present some results from ongoing research.


December 11, 2024

What is Reflexive Rationality?

Prof. Dr. Wolfgang Spohn (Universität Konstanz)

 

Reason is perhaps the most fundamental trait of humanity, and rationality – a roughly synonymous, but often preferred term because of its normative connotations – has been a fundamental topic of philosophy since its inception. Practical rationality in its ideal form is generally and precisely captured by decision theory and, in its social variant, by game theory. These ideals are often considered to be fixed and completed. Work can continue, it seems, only in applications, specializations, and more realistic, subideal accounts of practical rationality. My contention is that this ideal is not fixed at all and still open to normative discussion. I am presently working on a book titled “Reflexive Rationality: Rethinking Decision and Game Theory”. Its intention is to improve our ideal accounts of practical rationality. In my talk I would like to explain a few basic ideas of what I call reflexive rationality in an intuitive, informal way.


December 4, 2024 [only online]

COI Stories II: Inferences or Incentives?

Prof. Dr. Michel Janssen (Lichtenberg Group for History and Philosophy of Physics, University of Bonn & School of Physics & Astronomy, University of Minnesota)

 

In 2002*, I introduced COI (Common Origin Inference) as a subspecies of IBE (Inference to the Best Explanation). Several of my examples of COIs from the history of science, however, were not really about inference. In these examples, COI served not so much to increase the degree of belief in the inferred explanation as to establish the pursuit-worthiness of that explanation. In view of this (and sticking to the acronyms), I argue that we should allow the ‘I’ in both COI and IBE to stand not only for ‘Inference’ but also for ‘Incentive’ (to pursue some [common-origin] explanation). In science, the ‘I’ in most cases stands for ‘Incentive’. Switching from ‘Inference’ to ‘Incentive’ provides an answer to an objection to IBE articulated most forcefully by Wesley Salmon in 2001: Why should an elegant explanation be more likely than an ugly one? Or, to rephrase Salmon’s question in the terminology introduced by Peter Lipton, a staunch defender of IBE: why should likeliness track loveliness? Interpreting the ‘I’ in COI and IBE as ‘Incentive’, one can take the position that pursuit-worthiness tracks loveliness, which is more plausible.

* COI Stories: Explanation and Evidence in the History of Science. Perspectives on Science 10(4), Winter 2002, pp. 457–522. muse.jhu.edu/article/43975


November 27, 2024, 6 pm s.t. [Meeting will take place here: Psychologisches Institut, Schleichstr. 4, Hörsaal 4.329]

Explorative vs. confirmatory research: A false dichotomy?

Prof. Dr. Uljana Feest (University of Hannover, DE)

 

The replication crisis in psychology has given rise to intense soul-searching with respect to questionable research practices, such as p-hacking and HARKing. One response to these problems is to call for researchers to preregister their statistical hypotheses so as to make sure that the results they report as confirmed by their data are in fact the ones they set out to test. There is an argument in the literature that retrospective theorizing or data analysis is only legitimate if it is explicitly labeled as “exploratory.” In my talk, I will argue that the underlying understanding of exploratory research is counterintuitive and philosophically problematic. Turning to the philosophy of science literature about exploratory experimentation and exploratory modeling, I will propose an account of exploratory research that is better able to capture the dynamics of research and that allows for a more precise understanding of why the exploratory/confirmatory distinction is problematic on both descriptive and normative grounds. (The talk is based on research conducted jointly with Berna Devezer.)


November 20, 2024

A logic of judgmental existence and its relation to proof irrelevance

Dr. Ivo Pezlar (Institute of Philosophy of the Czech Academy of Sciences)

 

In this talk, we introduce a simple natural deduction system for reasoning with judgments of the form "there exists a proof of A" to explore the notion of judgmental existence following Martin-Löf's methodology of distinguishing between judgments and propositions. In this system, the existential judgment can be internalized into a modal notion of propositional existence that is closely related to truncation modality, a key tool for obtaining proof irrelevance, and lax modality. We provide a computational interpretation in the style of the Curry-Howard isomorphism for the existence modality and show that the corresponding system has some desirable properties such as normalization or subject reduction.


October 30, 2024

Naive non-substructural solutions to the Validity Paradox (plus an informational application)

Dr. Federico Pailos (University of Tübingen)

 

I will present naive non-substructural solutions to the Validity Paradox. The main idea behind these solutions is to break the link between a valuation satisfying an inference and a valuation not being a counterexample to that inference. A philosophical consequence of this technical move is that, in some of these theories, VD will be valid while at the same time invalid, while in others VD won't be invalid, though it will not be valid either. No matter what path is chosen, it will not be possible to have the problematic instances of VD as axioms of the derivations that lead to the paradoxical result. The first one cannot have them as axioms because the problematic instances are not valid, and the second one cannot have them either because they are invalid. Given the details of these proposals, new forms of both (i) naive validity and (ii) non-substructurality arise. Moreover, validity gaps and gluts might represent the informational state of a computer, capturing particular ways of processing absent or contradictory information.


October 23, 2024

Should we expect life in the universe? Rethinking the Anthropic Principle

Dr. Niels Linneman (University of Geneva, CH & Rotman Institute, CAN)

 

The anthropic principle (AP) states that "what we can expect to observe must be restricted by the conditions necessary for our presence as observers". But the phrase "our presence as observers" cannot be uniquely interpreted in the context of the theories within which AP is meant to be understood and applied: namely, for 'effective' theories. We thus describe and defend a reformulation of AP, which we dub the 'effective observation principle' (EOP). EOP describes what we can expect to observe in physical settings by considering our 'observational situation' (and not, specifically, 'observers')---understood solely in terms of effective theories.

Preprint: https://philsci-archive.pitt.edu/23934/

Joint work with Feraz Azhar (Department of Philosophy, University of Notre Dame, Indiana US).


October 16, 2024 4:30-5:30 pm [hybrid]

Multimodality in natural and artificial intelligence: Philosophical Perspectives

PD Dr. Jens Lemanski (Universität Münster & FernUniversität in Hagen)

 

The concept of multimodality is currently being explored in numerous fields of intelligence research, including animal and human communication, as well as machine learning. One of the most significant challenges currently being addressed in all areas of intelligence research is that of reasoning. The question thus arises as to whether there exists a modality type, such as text-based approaches, that is afforded privileged access in the domain of reasoning. This question is linked to the assumption that visual and tactile modalities, such as diagrams or gestures, can process less complex information more efficiently and have a lower degree of conventionalisation than other types of modality. However, empirical evidence indicates that reasoning tasks are solved faster and more effectively not only by natural beings but also by machine agents when multimodal rather than unimodal solution strategies are employed.

The lecture addresses two key aspects: firstly, it considers the philosophical approach that determines the choice between unimodal and multimodal reasoning; secondly, it demonstrates that there have been numerous approaches in the history of philosophy that have advocated multimodal reasoning and have drawn different boundaries of complexity and conventionalisation for visual and tactile modalities. These insights have the potential to inform and enhance contemporary research into intelligence.
 


July 17, 2024

The Positive Polynomial-Time Functions

Prof. Dr. Isabel Oitavem (Universidade Nova de Lisboa, PT)

*Monotone functions* abound in the theory of computation. They have been comprehensively studied in the setting of circuit complexity, via negation-free circuits (usually called "monotone circuits"). However, the study of *uniform* monotone computation is a much less developed subject. Grigni and Sipser began a line of work studying the effect of restricting "negation" in computational models. One shortfall of their work was that deterministic classes lacked a bona fide treatment, with positive models only natively defined for nondeterministic classes. This means that positive versions of, say, P must rather be obtained via indirect characterisations, e.g. as ALOGSPACE. Later work by Lautemann, Schwentick and Stewart solved this problem by proposing a model of deterministic computation whose polynomial-time predicates coincide with several characterisations of P once "negative" operations are omitted. This induces a robust definition of the class posP, the positive polynomial-time predicates.
We extend the work of Lautemann, Schwentick and Stewart on characterisations of the positive polynomial-time predicates to function classes. The focus of the talk is to obtain a function algebra for the positive polynomial-time functions, posFP, by imposing a simple uniformity constraint on the bounded recursion operator in Cobham's characterisation of polynomial-time functions.
This is joint work with Anupam Das, see 10.4230/LIPIcs.CSL.2018.18.


Thursday, July 11, 2024 4pm [hybrid]

Machine Learning meets Quantum Mechanics

Prof. Dr. Roberto Giuntini (University of Cagliari and Technical University of Munich)

Research in the broad area of pattern recognition, machine learning, and quantum computing has inspired new ideas about some important general problems that arise in several disciplines, including information theory (classical and quantum), logic, cognitive science and neuroscience, and philosophy.
One of the fundamental questions that these disciplines often face is the following: How are abstract concepts formed and recognized on the basis of previous (natural or artificial) experiences? This problem has been studied, with a variety of methods and tools, both in the context of human intelligence and artificial intelligence. In this seminar, the problem will be addressed within the framework of machine learning and quantum computing.
Machine learning can be defined as the art and science of making computers learn from data how to solve problems (or recognize and classify new objects) without being explicitly programmed. Quantum computing describes the processing of information using tools based on the laws of quantum theory. Today, we are witnessing a dramatic explosion of data, and the problem of extracting and recognizing only "useful information" from these data is crucial but extremely resource consuming. On the other hand, quantum computing has shown that there exist quantum algorithms that allow a formidable acceleration in solving problems that, in their current state, would require exponential times. The realization of the so-called noisy intermediate-scale quantum (NISQ) computers is now a reality. Therefore, the combination of machine learning and quantum computing appears inevitable. This "marriage" is favored by the fact that one of the fundamental features of quantum theory is that it can deal with incomplete information in a particularly natural and efficient way, a feature that is of primary importance in machine learning. The approach that I will present in this seminar (called Quantum-Inspired Machine Learning) consists of formally translating the process of (supervised) classification of (classical) machine learning by using the formalism of quantum theory in such a way that the resulting classification algorithms can be implemented on not necessarily quantum computers. In particular, I will address the problem of binary classification of classical datasets, presenting a classifier (called the Helstrom Quantum Classifier, HQC) based on the Helstrom protocol, which is used to distinguish between two quantum states (mathematically represented by density matrices). HQC acts on density matrices, which, in our model, encode the patterns of a classical dataset.
Experimental benchmark results show that, in many cases, the accuracy of HQC is superior to that of many classical classifiers. Finally, we will show how the improvement in HQC performance is positively correlated with the increase in the number of "quantum copies" of each (encoded) classical pattern.


July 10, 2024

An Introduction to Reverse Math

Duarte Maia (University of Chicago)

 

Reverse Math is a relatively recent (mid-1970s) branch of logic, which can in some sense be seen as formalizing the question: "What does it mean for one theorem to imply another?" This is slightly more difficult than it may seem. Clearly we cannot be referring to logical implication: otherwise, since every theorem is true by definition, by the truth table for implication any two theorems would imply each other...

A possible way to interpret it (aside from the colloquial "I know it when I see it") is to consider implication within a weaker set of axioms, weak enough that the theorems that you care about aren't necessarily true to begin with, and so implications between them are nontrivial. In this talk, I'll introduce you to the most common base system, called RCA0 (R-C-A-Nought), and I'll walk you through some of the basics of reverse math, explaining how some theorems which may at first seem completely unrelated are actually equivalent.


July 03, 2024 

Appearance and Reality: Einstein and the Early Debate on the Reality of Length Contraction

Prof. Dr. Marco Giovanelli (Università degli Studi di Torino (UNITO), IT)

In 1909, Ehrenfest published a note in the Physikalische Zeitschrift showing that a Born-rigid cylinder could not be set into rotation without stresses, as elements of the circumference would be contracted but not the radius. Ignatowski and Varicak challenged Ehrenfest’s result in the same journal, arguing that the stresses would emerge if length contraction were a real dynamical effect, as in Lorentz’s theory. However, no stresses are expected to arise according to Einstein’s theory, where length contraction is only an apparent effect due to an arbitrary choice of clock synchronization. Ehrenfest and Einstein considered this line of reasoning dangerously misleading and took a public stance in the Physikalische Zeitschrift, countering that relativistic length contraction is both apparent and real. It is apparent since it disappears for the comoving observer, but it is also real since it can be experimentally verified. By drawing on his lesser-known private correspondence with Varicak, this paper shows how Einstein used the Ehrenfest paradox as a tool for an ‘Einsteinian pedagogy’. Einstein’s argumentative stance is contrasted with Bell’s use of the Dewan-Beran thread-between-spaceships paradox to advocate for a ‘Lorentzian pedagogy’. The paper concludes that the disagreement between the two ways of ‘teaching special relativity’ stems from divergent interpretations of philosophical categories such as ‘reality’ and ‘appearance’.


June 26, 2024 

Paul Feyerabend: From the Limited Validity of Falsificationism to ‘Anything Goes!’

Dr. Eric Oberheim (Humboldt University of Berlin)

The lecture explains Paul Feyerabend’s peculiar views on meaning and method and how they developed from the early 1950s into Feyerabend’s mature early philosophy of science in his landmark essay “Explanation, Reduction and Empiricism” (1962, henceforth ERE), where Feyerabend criticizes Carl Hempel on “Explanation”, Ernst Nagel on “Reduction” and Karl Popper on “Empiricism” (hence the title ERE); and then, how this conclusion was generalized to ‘Anything goes!’ on 17 December 1967, when Feyerabend explicitly announced his break from Popper’s school in two histrionic letters sent on the same day (one to Imre Lakatos and one to John Watkins) that explain his epiphany and how he had ‘awoken’ from his “Popperian slumber”. Feyerabend outlines his new “‘position’” (in ‘scare quotes’), which is “anything goes” except what is compatible with “hedonism”, to be entitled “Against Method” (following Susan Sontag). This marks the transition from his early to his later philosophy in a reversal on realism: from recommending correcting common knowledge with science to recommending protecting common knowledge from scientism in a ‘historical turn’.
The lecture begins when Ludwig Wittgenstein attended a meeting of the ‘Third Vienna Circle’ (the Viktor ‘Kraft Circle’) to discuss a problem with Popper’s account of ‘basic statements’ at Feyerabend’s invitation (Popper considered Wittgenstein to be his archrival). Feyerabend attempted to combine Wittgenstein’s insight that one sentence can make two incompatible statements (like a duck-rabbit; for example, ‘the ball fell’ meant it was pushed by its impetus before it meant it was pulled by gravity) with Popper’s ‘critical rationalist’ maxim to increase testability. About a decade later, Feyerabend finally explicitly tried to criticize the worst in both (Wittgenstein and Popper) while developing the best in each. He tried to explain how Wittgenstein’s genuinely philosophical problems arise from ‘grammar’ (as antiquated scientific principles) and explained their role in science, while criticizing Wittgenstein’s ‘quietism’. Feyerabend also argued, within Popper’s critical rationalist framework, against Popper’s account of ‘basic statements’ and concluded that falsificationism has only a limited validity (ERE, 1962). It is limited to modeling the testing of commensurable theories within a shared framework (normal science), because it cannot take scientific revolutions into account. Falsificationism fails in scientific revolutions because meaning variance in the (non-logical) terms used to state the competing theories renders them incommensurable (deductively disjoint), so that while there can be crucial experiments between incommensurable theories, general theories cannot be falsified in Popper’s sense. Instead, Feyerabend proposes a new model for the empirical justification of replacing an established theory with an incommensurable alternative based on David Bohm’s example (Einstein’s prediction of Brownian motion). Feyerabend’s test model is conjectures and novel (theory-loaded) corroborations, in contrast to Popper’s conjectures and refutations by falsification.
The lecture then briefly explains two significant events that took place after ERE (1962), which led Feyerabend to generalize from the limited validity of falsificationism to the limited validity of all methodological rules (“Against Method” and ‘anything goes’), before concluding by characterizing Feyerabend’s reversal on realism, from recommending correcting common knowledge to recommending protecting common knowledge from scientism, as a lesson learned from Wittgenstein that belatedly sank in: science contains not only formulae and rules for their application but entire traditions.


June 19, 2024

History of the concept of photons and of their mental models - concept formation in slow motion

Prof. Dr. Klaus Hentschel (University of Stuttgart)

How are complex concepts (such as that of ‘light quanta’, terminologically introduced in 1905 and renamed ‘photons’ in 1926) formed? How is terminological change interlinked with the development of concepts and with the mental modeling underlying these terms? Well-defined terms usually come chronically late, much later than the thoughts and mental models which eventually lead to them. And even after such terms are available, substantial disagreement concerning their meaning might still exist – various different, indeed conflicting mental models are often associated with one and the same term. Light quanta are paradigmatic of such processes, which in this case took unusually long and are therefore particularly suited to a close study of concept formation ‘in slow motion’. My claim is that we need a closer combination of the history of science with a history of terminology (Begriffsgeschichte), the history of ideas and a more cognitively oriented history of mental models in science. My talk will sketch some of the more general claims on the basis of handpicked examples.


June 5, 2024

Creativity, pursuit and epistemic tradition

Prof. Dr. Julia Sánchez-Dorado (University of Sevilla, ES)

The so-called “standard view” on creativity says that there are two necessary, and jointly sufficient, conditions for creativity: novelty and value (Boden 2004; Gaut 2010). This view has recently been challenged by philosophers of science like Hills and Bird (2018), who propose to eliminate the value condition from the explanation of the phenomenon of creativity, since it only promotes an already “widespread and deeply misguided” approval of creativity as something necessarily positive for scientific research and other aspects of societal life. In this talk, I will argue against Hills and Bird (2018), while rescuing an important element of their criticism: the value condition in the explanation of creativity needs to be spelt out in more detail in order to avoid misattributions of unwarranted merits to creative people and creative products (such as creative scientific hypotheses and models). I propose to characterize this value condition as “pursuitworthiness”, building on the notion of "context of pursuit" originally proposed by Laudan (1977). To be pursuitworthy is to possess a prospective kind of worth, to have the potential of being epistemically fertile in the future if further investigated. Using some examples from the Earth Sciences, I argue that creative scientific instances are, qua creative, valuable in the sense of being pursuitworthy.

May 29, 2024

A Popperian's Progress

Dr. Konstantin Genin (University of Tübingen)

A major goal of twentieth-century philosophy of science was to show how science could make progress toward the truth even if, at any moment, our best theories are false. To that end, Popper and others tried to develop a theory of truthlikeness, hoping to prove that theories get closer to the truth over time. That program encountered several notable setbacks. I propose the following: a method for answering an empirical question is progressive if the chance of outputting the true answer is strictly increasing with sample size. Surprisingly, many standard statistical methods are not even approximately progressive. What's worse, many problems do not admit strictly progressive solutions. However, I prove that it is often possible to approximate progressiveness arbitrarily well. Furthermore, every approximately progressive method must obey a version of Ockham’s razor. So it turns out that addressing the problem of progress uncovers a solution to another perennial problem: how can we give a non-circular argument for preferring simple theories when the truth may well be complex?


May 15, 2024 [Online]

This time, as grandfather ...

Prof. Patrick Blackburn (Roskilde University)

 

Arthur Prior is best known as the father of tense logic, reasonably well known as the father of hybrid logic, and (sadly) almost completely unknown as (arguably) the father of description logic. In this talk, however, I will introduce Arthur Prior in a different role: this time, as grandfather of modal logics with propositional quantifiers.

Throughout his career, Arthur Prior made extensive use of modal logics with propositional quantifiers. As we now know, propositional quantifiers typically give rise to highly complex modal logics. Despite this, propositional quantifiers are syntactically simple and were philosophically attractive to Prior. I shall discuss how Prior used such quantifiers, focusing on his late use of them to define nominals in hybrid logic. This will lead to a brief discussion of what I call Prior's Ideal Language (PIL), a hybrid modal logic enriched with propositional quantifiers that draws together ideas he was developing before his death in 1969.


May 08, 2024 [Hybrid]

The Structuralist Turn in (Meta-)Mathematics

Jun.-Prof. Balthasar Grabmayr (University of Tübingen)

The emergence of metamathematical investigations at the turn of the 20th century brought with it the introduction of formal languages as objects of mathematical inquiry. While Hilbert conceived of formal expressions as strictly spatial objects, namely, as strings of symbols, Gödel’s celebrated technique of arithmetisation showed that strings can be replaced by numbers or sets as the objects of metamathematics. As I will show, this technical innovation has led to a structuralist turn in metamathematics. According to the structuralist view, the subject matter of metamathematics is syntax structures, which can be exemplified in multiple ways. In this talk, I will isolate a weak structuralist tenet that underlies most (if not all) strands of structuralism. I will then argue that this tenet fails in the case of metamathematics. I will conclude that structuralism, at least in its current form, is not a tenable view for all of mathematics.
 


April 23, 2024 3pm-4pm [Hybrid, Hörsaal 1.3]

Is Methodological Disagreement a Threat to Progress?

Prof. Finnur Dellsén (University of Iceland)

Many academic disciplines, especially philosophy, are rife with methodological disagreements of various sorts. Their members disagree not only about which methods are most appropriate, but also about how those methods should be used, and indeed about the outcomes of properly using a method in a particular case. It is natural and common to think that such methodological disagreements are a threat to the progress of said disciplines. However, the supposed argumentative route from methodological disagreement to lack of progress has not yet been carefully spelled out. In this paper, we formulate, analyze, and evaluate several different arguments that seem to establish such a route. It turns out that the more threatening of these arguments rely on conceptions of what it is to make progress that, we suggest, there are independent reasons to reject. One upshot is thus that methodological disagreement does not plausibly preclude progress. With that said, another upshot is that methodological disagreement remains a threat to the extent to which we can know that we’ve made progress in a given instance.


February 7, 2024 [Hybrid]

Pursuitworthiness in Particle Physics

Dr. Enno Fischer (Ruhr-Universität Bochum)

Experiments at the frontiers of particle physics often involve expensive research facilities, and they need to be planned over very long time periods. Decisions regarding the planning of such experiments can have a major impact on the development of research fields. Philosophers of science have developed various accounts of the epistemic pursuitworthiness of research programs. In this talk I will apply such accounts to current developments in particle physics and provide a critical view on specific promises associated with supersymmetric theories.


January 31, 2024 [Hybrid]

The Discovery of the Expanding Universe: Philosophical and Historical Dimensions

Dr. Patrick Duerr (University of Tübingen)

What constitutes a scientific discovery? Which role do discoveries play in science, its dynamics and social practices? Must every discovery be attributed to an individual discoverer (or a small number of discoverers)? The talk will explore these questions by first critically examining extant philosophical explications of scientific discovery, each of which will be found to be unsatisfactory. As a simple, natural and powerful alternative, I proffer the “change-driver model”: in a nutshell, it takes discoveries to be cognitive scientific results that have epistemically advanced science. The model overcomes the shortcomings of its precursors, whilst preserving their insights. I demonstrate its intensional and extensional superiority, especially with respect to the link between scientific discoveries and the dynamics of science, as well as the award system of science. Both as an illustration and as an application to a recent controversy with science-political import, I shall finally apply the considered models of discovery to one of the most momentous discoveries of science: the expansion of the universe. I oppose the 2018 proposal of the International Astronomical Union as too simplistic vis-à-vis the historical complexity of the episode. The change-driver model yields a more nuanced and circumspect verdict: (i) the redshift-distance relation shouldn’t be named the “Hubble-Lemaître Law”, but the “Slipher-Lundmark-Hubble-Humason Law”; (ii) its interpretation in terms of an expanding universe, however, Lemaître ought to be given credit for; but (iii) the establishment of the expansion of the universe, as an evidentially sufficiently warranted result, is a communal achievement, emerging in the 1950s or 1960s.

January 10, 2024 Prof. Dr. Ulrich Felgner

January 10, 2024 [Hybrid]

Prof. Dr. Ulrich Felgner (Universität Tübingen)

What Are Concepts and What Are They Supposed to Accomplish?

Of the two questions in the title, the second can be answered quickly: a concept is meant to provide mental access to what is perceived by the senses; otherwise, the perceived would not be accessible to thought at all. An answer to the first question is much harder to find. May concepts rest on "mental representations" (Vorstellungen), or on the specification of characteristic features? Are concepts "logical constructs", as Nicolai Hartmann thought? The talk will report on contributions to these questions by Aristotle, Aetius, Boethius, and William of Ockham in antiquity and the Middle Ages, as well as by David Hilbert et al. in the modern era. Finally, a convincing answer will be proposed.

The event takes place as part of the World Logic Day 2024.

2023

October 25, 2023 Prof. Dr. Pieter Sjoerd Hasper

October 25, 2023 [Hybrid]

The System of Aristotle’s Theory of Fallacies

Prof. Dr. Pieter Sjoerd Hasper (University of Hamburg)

Initially, Aristotle presents his list of fallacies in the Sophistical Refutations as just that: a list of fallacies which one happens to encounter in eristic discussions. However, starting from SE 6 he makes it clear that there is a system behind the list, first distinguishing between the incorrectness at the root of each type of fallacy and the corresponding source of delusion which makes that type seem a correct piece of reasoning, and finally formulating a completeness claim for his list towards the end of SE 8. Elsewhere in the SE, when discussing possible objections to his diagnosis of a particular fallacious argument, he insists that ‘the correction of arguments depending on the same point must be the same’. Thus Aristotle seems to have a systematic conceptual framework in mind, allowing him to set up a correspondence between types of incorrectness and sources of delusion, resulting in an exhaustive classification of possible fallacies and determining that each token fallacy really falls under only one fallacy type. In the paper I will propose such a framework and discuss how it makes sense of each type of fallacy, the fallacy of accident proving the hardest. Aristotle’s very strong subsumption principle seems to be based on additional assumptions about the meaning of words.

July 5, 2023 Dr. Guus Eelink

July 5, 2023 [Hybrid]

Falsehood Without Reference to the Unreal

Dr. Guus Eelink (CFvW Center)

If something is false, it is not a reality. But if it is not a reality, how can it be an object of speech at all? In other words, how is it possible for a falsehood to be expressed? I shall argue that these are precisely the questions raised by a puzzle Plato considers at Sophist 237b-e, against a tendency in the literature to see this puzzle as having no such direct bearing on the issue of falsehood. I shall also argue that Plato provides a satisfactory solution to the puzzle by means of his account of falsehood, which crucially hinges on a distinction between naming or referring on the one hand and saying on the other hand. Moreover, I shall argue that Plato’s account of falsehood differs from many modern accounts in that it does not involve such entities as states of affairs or propositions. 

June 28, 2023 Professor Christopher Shields

June 28, 2023 [Hybrid]

The Univocity of Existence

Professor Christopher Shields (San Diego)

Let Ontological Monism be the view that being is univocal: although there are many kinds of beings—humans and electrons and facial expressions—some fleeting and some stable, some mind- and language-independent and some partly or fully constituted by human intentions, and also many kinds of kinds—compositional, functional, and socially constructed to name a few—being itself, qua being, does not admit of kinds, or, if you prefer, being does not admit of distinct ways of being. There is only being—and that is the being captured by the ‘There is’ at the head of this very sentence. Ontological Monism divides into two types: primitive and definitional. Primitive Ontological Monism holds that being is simple and indefinable; Definitional Ontological Monism holds that being, like, for instance, water, admits of a single, non-disjunctive essence-specifying definition, as water is defined as H2O. Contrasted with Ontological Monism are Ontological Pluralism, the view that there is more than one kind of being, or, perhaps, more than one way of being, and Ontological Scalarity, the view that being admits of degrees. Given these terms, we may state our thesis: Ontological Pluralism and Ontological Scalarity are false, or at least unmotivated, while Ontological Monism is true, or at least intelligibly motivated. Further, as independent of that thesis, one should appreciate that though the opponents of Ontological Monism sometimes embrace both Pluralism and Scalarity, these views are actually inconsistent with one another. The first chore, though, is to get clear about the thesis of Ontological Pluralism; doing so proves a non-trivial matter.

June 21, 2023 Maël Pegny

June 21, 2023 [Online]

Some limitations of algorithmic fairness, and their relations with knowledge

Maël Pegny (CFvW Center, Tübingen)

In this presentation, I will show several limitations of the current debates on algorithmic fairness. I will argue that those debates do not take the following issues sufficiently into account:
- Does the decision rest solely on predictions, or also on non-predictive criteria?
- What are the effects of data noise, as opposed to data bias, on fairness issues?
- Is the social good being distributed by the decision truly desirable for the applicants?
All these issues bring to the fore the relation between fairness and the limitations of knowledge.

June 14, 2023 Sheena F. Bartscherer

June 14, 2023 [Online]

Methods on Pause: Participant Observation and the Distant Social

Sheena F. Bartscherer (Humboldt-Universität zu Berlin)

May 24, 2023 Prof. Dr. Mattia Petrolo

May 24, 2023

Reasoning about algorithmic opacity

Prof. Dr. Mattia Petrolo (Centre for Philosophy of Science of the University of Lisbon)

A recurring problem discussed in explainable AI is the so-called epistemic opacity problem, that is, a problem about the epistemic accessibility and reliability of algorithms. In this work, we provide an original epistemological characterization of the opacity of algorithms based on a tripartite analysis of their components. Against this background, we introduce a formal setting for reasoning about an agent’s epistemic attitudes toward an algorithm and investigate what are the conditions that should be met to achieve epistemic transparency.
(Joint work with Ekaterina Kubyshkina)

April 26, 2023 Dr. Bartosz Więckowski

April 26, 2023, 5pm | Hybrid (Zoom and Room 1.2, Doblerstr. 33)

Towards a modal proof theory for reasoning from counterfactual assumptions

Dr. Bartosz Więckowski (Goethe-Universität Frankfurt)

In current research on structural proof theory, counterfactual inference is typically studied from a model-theoretic perspective. On this perspective, possible worlds models are methodologically basic. Model-theoretically defined consequence relations come first, and structural proof systems, usually transmitted via Hilbert-style axiom systems, have to be defined for these consequence relations. Structural proof theory is thus methodologically secondary. Importantly, the logics usually extend classical logic. By contrast, on the proof-theoretic perspective on counterfactual inference, we start from a certain primacy of inferential practice and proof theory. Proof-theoretic structure comes first. Meaning is explained in terms of proofs. Models are required neither for the formal explanation of the meaning of counterfactuals nor for that of counterfactual inference. Taking a proof-theoretic perspective and a constructive stance on meaning and truth (cf. BHK), we extend the rudimentary intuitionistic subatomic natural deduction system for counterfactual implication presented in [1] with rules for conjunction. The proof system is modal insofar as derivations in it make use of modes of assumptions which are sensitive to the factuality status (factual, counterfactual, independent) of the formula that is to be assumed.

[1] Więckowski, B. (forthcoming). Counterfactual assumptions and counterfactual implications. In T. Piecha and K. F. Wehmeier, eds., Peter Schroeder-Heister on Proof-Theoretic Semantics. Outstanding Contributions to Logic, Springer.

February 16, 2023 Dr. Antje Rumberg

February 16, 2023

Transitions through Time and Possibility 

Dr. Antje Rumberg

This talk will be centred upon the notion of a transition. The setting is the theory of branching time, pioneered by Arthur Prior, which depicts the future as genuinely open; the underlying picture is that of a tree which is linear towards the past and branches into multiple possible futures. In my talk, I will discuss two ways in which we can make use of transitions in this setting to capture dynamic aspects of the world. First, I will show how employing transitions as a parameter of truth in the semantic evaluation allows for a dynamic representation of the interrelation of actuality, possibility, and time and enables a perspicuous treatment of future contingents. In a second step, I will lift the considerations from the structural level to the level of occurrents. I will illustrate how modelling processes and their modal-temporal properties in terms of transitions allows for a dynamic representation of happenings and doings and makes room for interaction and intervention.

February 1, 2023 Prof. Dr. Paolo Crivelli

February 1, 2023

Two Ways in Which False Statements State the Things Which Are Not

Prof. Dr. Paolo Crivelli (University of Geneva)

In the Sophist, the Visitor and Theaetetus agree that to judge (or state) falsehoods is to judge (or state) the things which are not. It is because judging (and stating) the things which are not is allegedly impossible that the dialogue’s central section embarks on an examination of not-being. It is therefore puzzling to realize that at the point of the dialogue where they examine false judgement (and false statement) as an episode of judging (and stating) the things which are not, the two inquirers agree that falsehood can come about also by reference to the things which are: while an affirmative false judgement (or statement) comes about when the cognizer (or speaker) posits that the things which are not are, a negative false judgement (or statement) comes about when the cognizer (or speaker) posits that the things which are are not. The puzzlement has two reasons: first, one gets the impression that the account of false judgement (or statement) as addressing the things which are not is supposed to cover all cases (rather than, roughly, half of them); secondly, if at least in some cases a false judgement (or statement) addresses the things which are, the possibility of false judgement (and statement) is not threatened by the difficulties that bedevil not-being, so that much of the central section of the Sophist turns out to be pointless. A passage of the Parmenides solves the puzzle by showing that the cases of false judgement (or statement) which in the Sophist are described as addressing the things which are should also be regarded as addressing the things which are not.

January 25, 2023 Zhao Fan

January 25, 2023, 3PM CET

Rethinking Turing’s Analysis of Computability

Zhao Fan (Kobe University)

Alan Turing's 1936 analysis of computability has been well-studied in the literature. Nevertheless, there are sharp disagreements regarding the motivation, justification, and implication of his analysis of computability. Some scholars contend that Turing's analysis is descriptive, mind-dependent, and causal. Others maintain that Turing's analysis is normative, mind-independent, and logical. In this talk, I will first consider the reasons for accepting these two readings. I will then demonstrate that both readings are problematic and propose an alternative reading.

2022

December 21, 2022 Bobby Vos

December 21, 2022

Theory-Centrism and the Formal Study of Macro-Units

Bobby Vos (Cambridge)

In this presentation, I set out to do three things. First, I will argue for the claim that formal philosophy of science is excessively theory-centric. To this end, I will first clarify what I take both ‘formal philosophy of science’ and ‘theory-centrism’ to consist in. Following this, I argue that theory-centrism is problematic, as it places certain supra-theoretical aspects of scientific enquiry outside the scope of formal analysis; a point that will be illustrated by an example from the history of science. Second, I will explore one way in which this problem of theory-centrism may be overcome, namely the adoption of a supra-theoretical unit of analysis (or macro-unit, for short) by formal philosophers of science. This leads us to a largely forgotten research programme, centred around the formal study of macro-units. I briefly discuss two instances of the formal study of macro-units—to wit: the formalization of Kuhn’s notion of paradigm and the formalization of Laudan’s notion of research tradition—and use these examples to illustrate how supra-theoretical aspects of science may be captured in formal terms. Finally, I will identify three ways in which the formal study of macro-units, as embodied in the two aforementioned examples, may be further improved upon, focusing in particular on the treatment of pragmatic concepts in these two accounts.

November 30, 2022 Peter Schroeder-Heister

November 30, 2022 [hybrid]

Proof-theoretic validity and proof-theoretic completeness: A reassessment of Prawitz's conjecture

Peter Schroeder-Heister (University of Tübingen)

In 1971 Prawitz conjectured that intuitionistic logic is complete with respect to validity-based proof-theoretic semantics. In work by Piecha, Sanz and myself this conjecture was refuted by certain counterexamples; furthermore, certain abstract criteria for incompleteness results were given. I now argue that the conjecture can be validated after all (in a certain sense), and that both completeness and incompleteness can co-exist. We just need to distinguish between validity with respect to a standard model of proof-theoretic semantics and validity with respect to all possible models, analogously to the situation in second-order logic and in first-order arithmetic. This idea is related to an approach put forward by Stafford and Nascimento in their paper forthcoming in Analysis.

November 16, 2022 Kai F. Wehmeier

November 16, 2022

Is there a tension between quantification and extensionality?

Kai F. Wehmeier (University of California Irvine)

Perhaps surprisingly, the title question has received different answers in the literature. I will reconstruct the reasoning underlying each of the two answers that have been given. (Not surprisingly, these answers are NO – the "traditional" answer – and YES – the "insurgent" answer.) Adjudicating the controversy is easy in one sense (strictly speaking, the traditional answer is correct) but difficult in another, since the insurgent camp is better seen as disputing the rules for finding an answer than the actual answer given by the traditionalists on the latters' own terms.

July 20, 2022 Anna Hoffmann

July 20, 2022 [Online]

AI Context Analyses as a Necessary Decision-Making Tool for the Responsible Use of Machine Learning Methods

Anna Hoffmann (Hoffmann Consulting & Facilitation, Berlin & Potsdam)

When ML methods are introduced into software-supported work processes, there is often talk of developing technologies that are human-centered. Margrethe Vestager currently emphasizes this as one of the goals of the "New European Innovation Agenda".1 But how can this succeed in concrete terms without stopping at the often superficial approaches of UX design?

The question catalogue of the AI context analysis offers an answer to the question of how ML-based AI systems can be examined, from the first draft of the use case onwards, as to whether they constitute a human-centered and responsible business design, which risks emerge, and where improvements can be made.

Section 4.4, "AI-specific risk management", of the report of the Enquete Commission "Artificial Intelligence – Societal Responsibility and Economic, Social and Ecological Potentials", convened by the German Bundestag in 2018, states: "Only the consideration of the individual application context and the individual deployment environment allows a comprehensive assessment of the criticality associated with the use of algorithms and AI systems."2

The AI context analysis builds on precisely this point, taking an in-depth look at the specific application context as well as the later user environment. It is a further development of the "staggered context sessions" according to Thomas Bevan ("Theory of Machines"), supplemented by questions from "Human Centered AI" (Mittelstand 4.0-Kompetenzzentrum Usability, Manuel Kulzer).3 The presented form of AI context analysis is currently being tested at the Fraunhofer Institute for Production Systems and Design Technology (IPK) in Berlin for a far-reaching AI use case.

1 Neue Europäische Innovationsagenda (europa.eu); accessed July 7, 2022
2 Drucksache 19/23700 (bundestag.de), p. 66; accessed July 8, 2022
3 Mittelstand 4.0-Kompetenzzentrum Usability (kompetenzzentrum-usability.digital)

Talk slides

July 13, 2022 Prof. Dr. Drs. h.c. Michael Molls

July 13, 2022 [Online]

The Institute for Advanced Study at TU Munich: Interdisciplinarity and Internationalization of Research

Prof. Dr. Drs. h.c. Michael Molls (Technical University Munich, Institute for Advanced Study)

 

June 29, 2022 Prof. Dr. Isabel Oitavem

June 29, 2022 [Lecture hall, Doblerstr. 33 + Online]

Logical Approaches to Relativized Classes

Prof. Dr. Isabel Oitavem (Universidade NOVA de Lisboa)

Relative complexity corresponds to a major part of our computability paradigm. The concept of accessing oracles (or databases) during computation is in line with the interactive features of computation nowadays. However, machine-independent approaches to relativized complexity classes are particularly challenging, because relativization brings the model of computation into play. For instance, even if P and NP coincide (P = NP is an open problem), there exists an oracle A such that P^A =/= NP^A. That is, the same class of functions/predicates with the same oracle may lead to different relativized complexity classes. The polynomial hierarchy is a hierarchy of relativized classes whose first two levels are P and NP. In this talk we describe a machine-independent approach to all levels of the polynomial hierarchy and to the hierarchy itself. This work was published in TCS 900 (2022) 25-34, https://doi.org/10.1016/j.tcs.2021.11.016.
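For readers unfamiliar with the notation: the oracle separation mentioned in the abstract is one half of a classical theorem of Baker, Gill, and Solovay (1975), which shows that relativization can go either way. A compact statement:

```latex
% P^A denotes polynomial time with oracle access to A; NP^A likewise.
% Baker-Gill-Solovay (1975): there are oracles relative to which the
% classes coincide, and oracles relative to which they differ.
\exists A \;\; \mathrm{P}^{A} = \mathrm{NP}^{A}
\qquad \text{and} \qquad
\exists B \;\; \mathrm{P}^{B} \neq \mathrm{NP}^{B}
```

This is why relativization "brings the model of computation into play": no proof technique that holds relative to every oracle can settle P vs NP.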

 

June 22, 2022 Dr. Antonio Piccolomini d'Aragona

June 22, 2022 [Lecture hall, Doblerstr. 33]

The Proof-Theoretic Square

Dr. Antonio Piccolomini d'Aragona (University of Siena and Aix-Marseille University)

In this talk, I focus on the interaction between two dichotomies in Prawitz’s proof-theoretic semantics, i.e. the dichotomy between monotonicity and non-monotonicity of validity of arguments over a specific atomic base (call this the local level), and the dichotomy between schematicity and non-schematicity of validity of arguments over all atomic bases (call this the global level). I argue that these dichotomies undergo some conceptual symmetries, both internally - i.e. the opposition at the local level is in a way conceptually analogous to the opposition at the global level - and externally - i.e. the alternative at the local level is somehow conceptually mirrored by the alternative at the global level. These symmetries may be understood as imposing a quite strict constraint on the overall semantic framework, i.e. one requires non-monotonicity at the local level iff one requires schematicity at the global level and, vice versa, one requires monotonicity at the local level iff one requires non-schematicity at the global level. This returns two conceptually and extensionally distinct proof-theoretic semantics, which both seem to be compatible with Prawitz’s philosophical tenets. However, I also argue that the aforementioned symmetries stem from a deeper interaction at play in Prawitz’s semantics, namely, the interaction between non-logical meanings and meaning - i.e. justification - of non-primitive inference rules. Based on this deeper interaction, two further “mixed” readings (monotonicity/schematicity and non-monotonicity/non-schematicity) can be said to be compatible with Prawitz’s intentions - and are actually found in the literature. I finally claim that further combinations given by the interaction between non-logical meanings and meaning of rules are either void or equivalent to one of the four possibilities above.
Thus, we are left with a group of four Prawitz-compatible semantics, forming a diagram whose arrows are “harmonically” oriented by the interaction between non-logical meanings and meaning of rules.

June 15, 2022 Prof. Dr. Thomas Studer

June 15, 2022 [Online]

Justification Logic - Introduction and Recent Developments

Prof. Dr. Thomas Studer (University of Bern)

Justification logics are closely related to modal logics and can be viewed as a refinement of the latter with machinery for justification manipulation. Justifications are represented directly in the language by terms, which can be interpreted as formal proofs in a deductive system, evidence for knowledge, and so on. This more expressive language proved beneficial in both proof theory and epistemology and helped investigate problems ranging from a classical provability semantics for intuitionistic logic to the logical omniscience problem. In this talk, we will give an introduction to justification logic and present recent developments in the field such as conflict tolerant logics and formalizations of zero-knowledge proofs.

May 25, 2022 Prof. Dr. Ioannis Liritzis (European Academy of Sciences and Arts)

May 25, 2022 [Online]

Archaeometry: brief overview

Prof. Dr. Ioannis Liritzis (European Academy of Sciences and Arts & Henan University)

The best place to find the past is an “archaeological site”: a place where past activity is preserved and can be traced through food remains, structures, humanly manufactured objects, and more. Archaeometry provides answers to archaeological questions concerning material culture.
The broad subject of archaeometry can be divided into seven sub-fields: 1) chronology/dating; 2) characterization & provenance (chemical analysis, statistics); 3) bioarchaeology (stable isotopes, aDNA, ancient diet); 4) conservation (analysis & preventive-passive conservation/restoration); 5) archaeoastronomy (measuring time, determining rituals, celebrations); 6) archaeo-geophysical prospection (locating buried antiquities); 7) 3D reconstruction.
Investigating the past requires remains of material culture: human artifacts and man-made constructions, geoarchaeological materials, and human remains of organic and inorganic origin. Multiple instrumental techniques are available to support these investigations, including nuclear, spectroscopic, chemical, and electronic devices. With the widespread adoption of computer systems, cyber-archaeology has emerged, making use of big-data collection, storage, documentation, and processing in the field with diverse equipment, resulting in the digital reconstruction of monuments and artifacts, both in the field and in museums.
Documentation is a fast-developing field, especially for the preservation of at-risk antiquities from natural and anthropogenic destruction. It also covers documenting the techniques used to make a work of art through non-destructive (sampling-free) readings of electromagnetic radiation in the optical, IR, UV, and NIR ranges and beyond. This is achieved by recording energy spectra and a kind of tomography that digitally reveals the underlying layers of an overpainted work of art, like a palimpsest.
A few selected examples are given from case studies involving radiocarbon and luminescence dating, archaeoastronomy, characterization, analysis, and provenance. Archaeometry deciphers the past; the natural sciences together with archaeometry strengthen and develop the spirit of interdisciplinarity, delve into the past, and retain our memory. In the remote past we meet our future, enhancing sustainability and growth as well as ecumenical values.
 

May 18, 2022 Prof. Dr. Ioannis Liritzis

May 18, 2022 [Online]

Pythian Games & Pythiad, European Delphic Intellectual Movement. A Celebration of Antiquity Reimagined: Delphic Festivals

Prof. Dr. Ioannis Liritzis (European Academy of Sciences and Arts & Henan University)

Europe is the cradle of civilization, and it has explored both opposite directions, East and West. In the year 586 BC, the first Pythian Games (artistic, with some small-scale athletics) took place in Delphi, Ancient Greece. The Delphic Intellectual Movement is a branding initiative under the leadership and aegis of the European Academy of Sciences & Arts.
Our commitment is to revive a code of ethics and re-establish the European roots of arts, culture, and rational thought, resetting self-knowledge and reassessing the values of freedom and of human and international rights. The European Academy of Sciences & Arts is committed to promoting scientific and societal progress.
The aim of the Pythian Games revival is to re-establish moral and social values in the modern era through various cultural-scientific activities, for a) acquaintance with and the fostering of ancient Classical culture, as it emerges from the Delphic spirit, a cultural and spiritual cradle of Europe and at the same time an ecumenical symbol of knowledge, and b) scientific research on the dissemination of the ancient classical Logos, values, and virtues, for the benefit of human beings and the environment, with the aid of modern technology, as shown in art, philosophy, and philological witnesses.
Our goal is the reclamation of dormant classical human values by gradually reviving the Pythian Games with a modern, balanced perspective.
The Implementation
1. The composition of a musical polymedia event of an orchestrated symphonic work.
2. Reanimation of the Pythian Games every four (4) years:
2.1 Artistic-cultural competition of tangible and intangible culture (music, dance, prose-poetry, etc.).
2.2 Small-scale athletics.
3. Pythiads, held between the four-year games, every two (2) years, comprising:
3.1 High-tech breakthrough achievements in digital and cyber-technologies applied to tangible and intangible cultural heritage (virtual reality, haptic technology, drama and multimedia applications, 3D reconstructions, etc.), reconstructing past tangible and intangible culture.
3.2 An international symposium of world interdisciplinary experts on ecumenical principles for better living; a world forum taking a holistic approach to current and future wellbeing in a harmonious and balanced manner.

February 2, 2022 Réka Markovich

February 2, 2022 [Online]

Normative Systems and Their Conflicts - in Law and in AI

Réka Markovich JD PhD (Université du Luxembourg)

I present a formalism I am developing to reason about different normative systems and their applicability, addressing a concern that exists both in law and in artificial intelligence.

January 26, 2022 Luca Incurvati

January 26, 2022 [Online]

Inferential Deflationism

Prof. Dr. Luca Incurvati (Amsterdam)

Deflationists about truth hold that the function of the truth predicate is to enable us to make certain assertions we could not otherwise make. Pragmatists claim that the utility of negation lies in its role in registering incompatibility. The pragmatist insight about negation has been successfully incorporated into bilateral theories of content, which take the meaning of negation to be inferentially explained in terms of the speech act of rejection. In this talk, I will implement the deflationist insight in a bilateral theory by taking the meaning of the truth predicate to be explained by its inferential relation to assertion. This account of the meaning of the truth predicate is combined with a new diagnosis of the Liar Paradox: its derivation requires the truth rules to preserve evidence, but these rules only preserve commitment. The result is a novel inferential deflationist theory of truth. The theory solves the Liar Paradox in a principled manner and deals with a purported revenge paradox in the same way. If time permits, I will show how the theory and simple extensions thereof have the resources to axiomatise the internal logic of several supervaluational hierarchies, thereby solving open problems of Halbach (2011) and Horsten (2011). This is joint work with Julian Schlöder.

2021

December 15, 2021 Philipp Stecher

December 15, 2021 [Online]

Concepts' Lifecycle in Artificial Intelligent Systems

Philipp Stecher (Tübingen)

Understanding how humans acquire and process concepts has been an ongoing pursuit for millennia. Meanwhile, artificial intelligence (AI) has advanced rapidly in recent decades, helping to algorithmize and thereby better understand humans’ concept-processing capabilities. Especially in the recent past, AI scholars achieved remarkable progress: Today’s deep learning algorithms are expanding the capabilities of AI, enabling it to incorporate and process increasingly complex representations of the world (aka "concepts"). However, although AI’s capabilities are often described as “humanlike”, current state-of-the-art AI algorithms can barely meet these expectations. Indeed, recent research postulates significant differences in how current AI systems process concepts of the world as opposed to humans. While people can build rich, integrated concepts that can be applied across domains based on sparse data, today’s AI often requires large amounts of data to create rather superficial concepts that are only applicable to the domain in which the AI is operating. The illustrated gaps in concept-processing capabilities, as well as the recent advancements in AI, reflect the starting point for this research, which aims to shed light on the question of how modern AI systems process concepts in contrast to humans. To this end, a taxonomy will be derived that clusters the concept-processing capabilities of modern AI systems. Hereafter, these capabilities will be systematically compared to the concept-processing capabilities of humans. The paper closes with recommendations for further research and concluding remarks on the status of the concept-processing capabilities of current AI.

December 8, 2021 Dr. Guus Eelink

December 8, 2021 [Online]

Is Protagoras a Relativist about Truth?

Dr. Guus Eelink (Oxford)

Relativism about truth is a view that has fascinated many philosophers. Some philosophers have flirted with it (or have been accused of doing so), whereas others have attempted to show that it is an incoherent view. Recently, some philosophers of language have endorsed local versions of relativism about truth. Many historians of philosophy have claimed that relativism about truth goes all the way back to Ancient Greek philosophy, and that it was espoused by Protagoras of Abdera, a philosopher from the 5th century BC. This claim is mainly based on Plato's dialogue Theaetetus, which contains the most elaborate discussion of Protagoras' view of truth. In the Theaetetus, Plato ascribes to Protagoras the view that 'whatever is believed is true for the believer'. Many scholars have claimed that the qualification 'for the believer' is meant to relativize truth to the believer. I shall argue that this interpretation of Protagoras' view is incorrect. I shall argue that Protagoras is not a relativist about truth. Instead, Protagoras holds that all beliefs are absolutely true. I shall argue that the qualification 'for the believer' is meant to explain how beliefs are absolutely true. The believer is part of the metaphysical explanation of how beliefs are absolutely true. I am not the first interpreter who claims that the qualification 'for the believer' is not meant to relativize the truth predicate. However, what is distinctive about my interpretation is that 'whatever is believed is true for the believer' entails 'whatever is believed is true (simpliciter)'. I shall show that my interpretation can account for Plato's celebrated argument that Protagoras' view is self-defeating.

October 20, 2021 Prof. Dr. Jan von Plato

October 20, 2021 [Online]

How Gödel Discovered His Incompleteness Theorems

Prof. Dr. Jan von Plato (University of Helsinki)

Gödel surprised the mathematical world with his famous incompleteness theorems of arithmetic of 1931. The way he arrived at these results is described on the basis of two of his shorthand notebooks that have recently been transcribed.

July 28, 2021 Prof. Dr. Maximilian Schich

July 28, 2021 [Online]

Cultural Analysis Situs

Prof. Dr. Maximilian Schich (Tallinn University, Estonia)

The disciplines of complex network science, of art and cultural history, and of computation have a common ancestor in the analysis situs of Gottfried Wilhelm Leibniz. Unfortunately, this shared conceptual origin remains hidden so far within a history of science that is tragically bifurcated, due to the branching evolution of disciplinary focus, due to changes in language, and due to sometimes forced scholarly migration. This talk, which is based on the first chapter* of an upcoming book, breaks the mutual tear lines of citation between disciplines to enable a common future. At stake is the surprisingly deep-rooted and shared foundation of the emerging enterprise of a systematic science of art and culture. This enterprise currently flourishes mainly in departments of multidisciplinary information science, network and complexity science, and applications in industry. It promises nothing less than an integration of humanistic inquiry and a physics of cultures and cultural production.

(preprint: https://doi.org/10.11588/artdok.00006347).

July 21, 2021 Dr. Christoph Peylo

July 21, 2021 [Online]

AI and Ethics - Why This Is a Topic of Interest and Why Enterprises (Should) Have a Stake in It

Dr. Christoph Peylo (Project "Digital Trust," Bosch)

 

July 14, 2021 Prof. Dr. Federico Pailos

July 14, 2021 [Online]

Why Metainferences Matter

Prof. Dr. Federico Pailos (Buenos Aires)

In this talk, I will present new arguments that shed light on the importance of metainferences of every level, and metainferential standards of every level, when (semantically) characterizing a logic. This implies that a logician cannot be agnostic about metainferences, metametainferences, etc. The arguments I will introduce show why a thesis that Dave Ripley defends in [1] and [2] is false. This is how he presents it.

Note that a meta₀-counterexample relation X [i.e., a counterexample relation for inferences, which is (in most contexts) equivalent to a satisfaction relation for inferences], on its own, says nothing at all about validity of metaₙ-inferences for 0 < n. Despite this, there is a tendency to move quickly from X to [X] [i.e., a full counterexample relation for every metainferential level], at least for some purposes... For example, [3] (p. 360, notation changed) says “[A]bsent any other reasons for suspicion one should probably take [X] to be what someone has in mind if they only specify X.” I don’t think this tendency is warranted. Most of the time, when someone has specified a meta₀-counterexample relation (which is to say an ordinary counterexample relation), they do not have the world of all higher minferences [i.e., metainferences of any level], full counterexample relations, etc., in mind at all. They are often focused on validity for meta₀-inferences (which is to say inferences). ([1], page 12.)

Though I do think that, in a sense, people do have in mind [X] when they say X, I will not argue for that. I just want to defend that they should have something like that in mind. Specifically, I will show why the following position should be revised:

As I’ve pointed out, an advocate of ST as a useful meta₀-counterexample relation has thereby taken on no commitments at all regarding metaₙ-counterexample relations for 1 ≤ n. ([1], page 16.)

Or, as Ripley puts it elsewhere:
... if someone specifies just a metaₙ-consequence relation, they have not thereby settled on any particular metaₙ₊₁-consequence relation. ([2])

If Ripley’s statements are true, then two different logicians may count as advocates of the same inferential logic (or any metainferential logic of level n), despite adopting quite different criteria regarding what counts as a valid metainference (or a valid metainference of level n+1). If Ripley is right, then not only can a supporter of a (non-transitive) logic like ST accept or reject the metainference corresponding to (some version of) the Cut rule, but she can also admit a metainferential counterexample relation that corresponds to a trivial or an empty metainferential consequence relation. Moreover, this might have repercussions on the inferential level, as an empty metainferential logic invalidates any metainference with an empty set of premises and a valid ST-inference as its conclusion. Thus, the only available option is to admit that an inference, on the one hand, and the metainference with an empty set of premises and that inference as its only conclusion, on the other hand, are not only different, but also non-equivalent. Something similar happens if we choose a trivial metainferential counterexample relation while adopting ST at the inferential level. In this case, there will be invalid ST-inferences that turn out to be valid in their metainferential form, forcing this logician to choose between the options specified before.

This is a particularly strong result, and it is even stronger than it might initially seem, in two senses: (1) it does not depend on the notion of metainferential validity being favoured (e.g., on whether one thinks that the local way of understanding it is better than the global, or the other way around); (2) it does not depend on special features of the (mixed) inferential/metainferential relations, as the result can be replicated for any pair of (mixed) metainferential relations of level n/n+1.
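The non-equivalence at issue can be displayed schematically (the notation below is ours, not the author's): an inference, and the level-1 metainference with empty premise set built from it:

```latex
\[
  \Gamma \vdash_{\mathrm{ST}} \Delta
  \qquad\text{vs.}\qquad
  \emptyset \;\Rightarrow\; (\Gamma \vdash \Delta)
\]
```

Under an empty metainferential counterexample relation, the right-hand metainference is invalid even when the left-hand inference is ST-valid, which is why the two cannot simply be identified.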

References

[1] D. Ripley. One step is enough. (Manuscript).
[2] D. Ripley. A toolkit for metainferential logics. (Manuscript).

[3] C. Scambler. Classical Logic and the Strict Tolerant Hierarchy. Journal of Philosophical Logic, forthcoming, 2019. DOI: 10.1007/s10992-019-09520-0.

June 30, 2021 Prof. Dr. Mathias Frisch

June 30, 2021 [Online]

Uses and Misuses of Models in Pandemic Policy Advice

Prof. Dr. Mathias Frisch (Hannover)

Have epidemiological models of the Covid-19 pandemic been a failure, as some have argued? Have policy makers violated their epistemic duty in the pandemic by acting on deeply uncertain evidence? In this talk I examine the epistemic status of epidemiological models and possible roles they can play in scientific policy advice in situations characterized by missing data and poorly constrained parameter-values. I will argue that some criticisms presuppose an overly narrow conception of possible uses of models. But I will discuss pitfalls in using models for policy advice to which some of the criticisms draw attention. Finally, I will suggest one framework for how to use uncertain modeling results in policy decisions under extreme urgency. 

June 23, 2021 Dr. Daniel Kostić, Prof. Dr. Nathalie Niquil

June 23, 2021 [Online]

Perspectivism and Vertical-Horizontal Explanatory Modes in Ecological Networks

Dr. Daniel Kostić (Radboud University) and Prof. Dr. Nathalie Niquil (CNRS)

We show how the perspectival criteria help to determine the explanatory relevance in ecological network models. We provide a counterfactual analysis of the explanatory power of a marine network model, which includes the perspectival criteria for using a horizontal mode (when the counterfactual relata are at the same level) and a vertical mode (when the counterfactual relata are at different levels). Distinguishing vertical and horizontal counterfactual modes is important for understanding how different organizational levels of a system are functionally related, as well as how exogenous changes affect each of the levels. We show that perspectival criteria play a more important epistemic role than merely informing the modeling decisions. If such criteria were not available, it would not be intelligible how the relevant counterfactual figures in an explanation. They determine explanatory relevance conditions for a counterfactual. Based on this theoretical framework, we further point out how our analysis can be used in designing more sustainable spatial management policies for aquatic resources.

June 9, 2021 Dr. Silvia de Toffoli

June 9, 2021 [Online]

What Are Mathematical Diagrams?

Dr. Silvia de Toffoli (Princeton University)

Although traditionally neglected, mathematical diagrams have recently attracted much attention from philosophers of mathematics. By now, the literature includes several case studies investigating the role of diagrams both in discovery and proof. Certain preliminary questions have, however, been bypassed. What are diagrams exactly? Are there different types of diagrams? In the scholarly literature, the term “mathematical diagram” is used in diverse ways. I propose a working definition that carves out the phenomena that are of most importance for a taxonomy of diagrams in the context of a practice-based philosophy of mathematics, privileging examples from contemporary mathematics. In doing so, I move away from vague, ordinary notions. I define mathematical diagrams as forming notational systems and as being geometric/topological representations or two-dimensional representations (or both).  I also examine the relationship between mathematical diagrams and spatiotemporal intuition. By proposing a precise definition, I explain (away) certain controversies in the existing literature. Moreover, I shed light on why mathematical diagrams are so effective in certain instances, and, at other times, dangerously misleading.

June 2, 2021 Prof. Dr. Aaron Sloman

June 2, 2021 [Online]

Unsolved Problems Linking Physics, Biology, Consciousness, Philosophy of Mathematics, and Chemical Information Processing

Prof. Dr. Aaron Sloman (University of Birmingham)

There are types of spatial intelligence, detecting and employing varieties of spatial possibility, necessity, and impossibility, that cannot be explained by currently known mechanisms. Evidence from newly hatched animals suggests that mechanisms using still unknown chemistry-based forms of computation can provide information that goes beyond regularity detection, concerned with possibility spaces and their restrictions. Ancient human spatial intelligence may be based on multi-generational discovery of what is possible, necessarily the case, or impossible in complex and changing environments. Related mechanisms of spatial cognition, in use centuries before Euclid, enabled discoveries regarding possibility, impossibility, and necessity in spatial structures and processes, long before modern mathematical, symbolic, logic-based, or algebraic formalisms were available.

Immanuel Kant characterised such mathematical cognition in terms of three distinctions largely ignored in contemporary psychology, neuroscience, and AI research: non-empirical/empirical, analytic/synthetic, and necessary/contingent. He argued that ancient geometric cognition was not based simply on empirical generalization, nor on logical deduction from arbitrary definitions. The truths discovered were non-empirical, synthetic, and non-contingent.

Neither formal logic-based characterizations of mathematics (used in automated theorem provers), nor postulated neural networks collecting statistical evidence to derive probabilities can model or explain such mathematical discoveries. E.g. necessity and impossibility are not extremes on a probability scale.

Unexplained facts about spatial competences of newly hatched animals, before neural networks can be trained in the environment, may be related to mechanisms underlying ancient spatial intelligence in humans and other animals.

Chemical mechanisms inside eggs, available before hatching, somehow co-existing with the developing embryo, apparently suffice. Such mechanisms may be partly analogous to types of "virtual machinery" only recently developed in sophisticated forms that provide services across the internet (like zoom meetings) that "float persistently" above the constantly changing, particular physical mechanisms at work, without occupying additional space.

While chemical mechanisms in early stages of reproduction are well-studied, little is known about the enormously complex types of machinery required for later stages, e.g., of chick production, including creation of the control mechanisms required for actions soon after hatching. I suggest that development of the foetus uses many stages of control by increasingly sophisticated virtual machines controlling and coordinating chemical mechanisms as they create new chemical mechanisms and new layers of virtual machinery.

Different sub-types must have evolved at different times, and the later, more complex virtual machines may have to be assembled by earlier virtual machines, during foetus development, whereas earliest stages of reproduction simply use molecular mechanisms controlling formation and release of chemical bonds linking relatively simple chemical structures.

I suspect Alan Turing's work on chemistry-based morphogenesis (published 1952) was a side effect of deeper, more general, thinking about uses of chemistry-based spatial reasoning in intelligent organisms. But he died without publishing anything to support that suspicion, though he did assert in 1936 that machines can use mathematical ingenuity, but not mathematical intuition, without explaining the difference (on which Kant might have agreed). We may never know how far his thinking had progressed by the time he died.

Extended version:
https://www.cs.bham.ac.uk/research/projects/cogaff/misc/unsolved.html

May 19, 2021 Dr. Vincenzo Politi

May 19, 2021 [Online]

Anticipative Reflection in an Interdisciplinary Research Team: A Case Study

Dr. Vincenzo Politi (University of Oslo)

Responsible Research and Innovation (RRI) and similar science policy frameworks aim at reinforcing the social responsibility of science and technology by promoting a reflective and anticipatory attitude among researchers. Such an attitude requires the ability to imagine future scenarios in order to predict and assess the potential societal implications of innovative research. Responsible research, therefore, requires a future-oriented attitude. ‘Future’, however, may mean different things. In this talk, I discuss the results of a qualitative study conducted with an interdisciplinary research team working on innovative personalised targeted cancer therapies. The study reveals that, within the research team, different individuals think about different kinds of future. Depending on which kind of future they think about, researchers anticipate different kinds of impact of their work, which I define ‘internal’ and ‘external impact’. In the conclusions, I will investigate which kind of knowledge and expertise researchers should be equipped with in order to develop the ability to think about the future implications of their work.

May 12, 2021 Sandro Radovanović

May 12, 2021 [Online]

Effects of Affirmative Actions in Algorithmic Decision-Making

Sandro Radovanović (University of Belgrade)

In today's business, decision-making is heavily dependent on algorithms. Algorithms may originate from operational research, machine learning, or decision theory. Regardless of their origin, the decision-maker may create unwanted disparities regarding race, gender, or religion. More specifically, automation of the decision-making process can lead to unethical acts with legal consequences. To mitigate unwanted consequences of algorithmic decision-making one must adjust either input data, algorithms, or decisions. In this talk, an overview of fairness in algorithmic decision-making from a machine learning point of view is going to be presented, as well as approaches developed in the literature. This talk presents a way of ensuring fairness in algorithmic decision-making that guarantees both a lack of disparate impact and equalized odds. After presenting the methodology, we discuss what is flawed with approaches that the machine-learning community adopted while "fighting unfairness," and, as a result, what the properties of true affirmative actions in algorithmic decision-making are and how they can be achieved.
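As an illustration of the two fairness criteria named in the abstract, the following sketch computes them on a toy dataset (this is our example, not the speaker's methodology; group labels and data are hypothetical):

```python
# Two common group-fairness metrics on a toy binary-decision dataset.
# "A" and "B" are hypothetical protected groups; decisions and labels
# are invented for illustration only.

def disparate_impact(decisions, groups):
    """Ratio of positive-decision rates, group B over group A."""
    rate = lambda g: (sum(d for d, gr in zip(decisions, groups) if gr == g)
                      / sum(1 for gr in groups if gr == g))
    return rate("B") / rate("A")

def equalized_odds_gap(decisions, labels, groups):
    """Largest gap in true-positive or false-positive rate across groups."""
    def rates(g):
        tp = sum(1 for d, y, gr in zip(decisions, labels, groups)
                 if gr == g and y == 1 and d == 1)
        p = sum(1 for y, gr in zip(labels, groups) if gr == g and y == 1)
        fp = sum(1 for d, y, gr in zip(decisions, labels, groups)
                 if gr == g and y == 0 and d == 1)
        n = sum(1 for y, gr in zip(labels, groups) if gr == g and y == 0)
        return tp / p, fp / n
    tpr_a, fpr_a = rates("A")
    tpr_b, fpr_b = rates("B")
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 0, 0, 1, 1, 0, 0]
decisions = [1, 1, 1, 0, 1, 0, 0, 0]

print(disparate_impact(decisions, groups))             # 1/3: B favoured a third as often as A
print(equalized_odds_gap(decisions, labels, groups))   # 0.5
```

A commonly cited rule of thumb treats a disparate-impact ratio below 0.8 as evidence of disparity; the adjustment strategies discussed in the talk aim to move such metrics toward parity.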

May 5, 2021 Dr. Antonio Piccolomini d'Aragona

May 5, 2021 [Online]

Kreisel's Informal Rigour and Gödel's Absolute Provability: A Tentative Reading through and for Prawitz's Semantics

Dr. Antonio Piccolomini d'Aragona (Aix-Marseille)

In spite of their philosophical relevance, Kreisel’s theory of informal rigour and Gödel’s concept of absolute provability have proved elusive to rigorous mathematised treatments. In my talk, I will set out to connect Kreisel’s and Gödel’s ideas to Prawitz’s proof-based semantics. Prawitz’s semantics has been put forth and developed independently of Kreisel and Gödel, but some of its basic tenets may nonetheless match those of informal rigour and absolute provability. Both Kreisel and Gödel aim at bringing provability back into mathematical practice – against the post-Fregean and post-Hilbertian formalistic attitude – as well as at overstepping formal derivability – given Gödel’s and Turing’s limiting results. In order to do this, provability must become informal (i.e. independent of formal languages and systems) and absolute (i.e. formalism-free and/or universally applicable). This may be in line with the intuitionistic idea of giving provability a “semantic” role, an idea of which Prawitz’s semantics is a well-known instance. As a result, I argue that Prawitz’s semantics shares some issues with Kreisel’s informal rigour, while the link with Gödel’s absolute provability is more difficult to establish.

April 21, 2021 Dr. Christian Feldbacher-Escamilla

April 21, 2021 [Online]

AI for a Social World - A Social World for AI

Dr. Christian Feldbacher-Escamilla (Düsseldorf)

AI is not only supposed to help tackle social problems; it is also frequently used to solve such problems in practice. AI-assisted systems play an increasingly important role in the legal domain, the health sector, environmental research, public policy-making, and the like. Research in this field is extensive and diverse. In this talk, we want to argue, however, that it is also interesting to look in the opposite direction: How can our knowledge of the social world and its structural features help us to approach problems of AI? In particular, we will investigate how a social perspective on problems of justification helps us to address epistemic problems of machine learning theory.
 

April 6, 2021 Prof. Dr. Helen Longino

April 6, 2021 [Online]

Critical Contextual Empiricism, Diversity and Inclusiveness

Prof. Dr. Helen Longino (Stanford University)

Watch the recording on Facebook.

Humanity looks to the scientific community, now more than ever, in order to provide solutions to today's challenges. Decisions made by scientists thus directly and deeply influence human lives. The Carl Friedrich von Weizsäcker Center is interested in the foundations of responsible science. For example, how can we identify and avoid scientific misconduct, e.g. plagiarism and fraud, or the abuse of science for commercial purposes? How do we navigate issues of morally questionable research, research funding, and global inequalities? How can scientists ensure optimal knowledge production in the face of the replication crisis, cognitive biases in science, and the politics of peer review? Further, how can we protect scientists from becoming commodities when their products are so ardently sought by politicians and society?

April 6, 2021 Prof. Dr. Nancy Cartwright

April 6, 2021 [Online]

Responsible Science - Responsible Use

Prof. Dr. Nancy Cartwright (Durham University)

Watch the recording on Facebook.


February 24, 2021 Prof. Dr. Marco Panza, Prof. Dr. Daniele Struppa

February 24, 2021 [Online]

Agnostic Science and Mathematics

Prof. Dr. Marco Panza (Paris 1) and Prof. Dr. Daniele Struppa (Chapman University)

We will first illustrate the notion of agnostic science (science without understanding), and then reflect on the effect that the practice of agnostic science has on the use of mathematics in science, and on the development of mathematics itself.

February 17, 2021 Dr. Benedikt Ahrens

February 17, 2021 [Online]

The Univalence Principle

Dr. Benedikt Ahrens (Birmingham)

Michael Makkai's "Principle of Isomorphism" stipulates that mathematical reasoning is invariant under equivalence of mathematical structures. Inspired by Makkai, Vladimir Voevodsky conceived the Univalent Foundations (UF) of Mathematics as a foundation of mathematics in which only equivalence-invariant properties and constructions can be formulated. Coquand and Danielsson proved that UF indeed provides an isomorphism-invariant language for set-level structures, such as groups and rings, that form a 1-category. Ahrens, Kapulkin, and Shulman proved an extension for 1-categories: any property and construction that can be expressed in UF transfers along equivalence of categories—as long as “categories” are correctly defined to satisfy a local “univalence” condition. In the semantics of UF in simplicial sets, this univalence condition corresponds to Charles Rezk’s completeness condition for (truncated) Segal spaces.

In this talk, based on joint work with Paige Randall North, Michael Shulman, and Dimitris Tsementzis, I will show how to generalize this result to other higher-categorical structures. We devise a notion of signature and theory that specifies the data and properties of a mathematical structure. Our main technical achievement lies in the definition of isomorphism between two elements of a structure, which generalizes the notion of isomorphism between two objects in a category. Such isomorphisms yield the companion notion of univalence of a structure. Our main result says that for univalent structures M, N of a signature, the identity type M = N coincides with the type of equivalences M ≃ N. This entails that any property and construction on a univalent structure transfers along a suitable notion of equivalence of structures. Our signatures encompass the aforementioned set-level structures but also topological spaces, (multi-)categories, presheaves, fibrations, bicategories, and many other (higher-)categorical structures.
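The main result can be restated compactly in the abstract's own notation (a sketch of the statement only):

```latex
\[
  (M = N) \;\simeq\; (M \simeq N)
  \qquad\text{for univalent structures } M,\, N \text{ of a fixed signature,}
\]
```

so that identifications of univalent structures correspond exactly to equivalences between them, and any construction invariant under identity is automatically invariant under equivalence.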

February 10, 2021 Marcel Ertel

February 10, 2021 [Online]

Independence and Truth-Value Determinacy in Set Theory

Marcel Ertel (Tübingen)

We discuss the philosophical significance of classical and more recent results in the metamathematics of set theory: the Gödel-Cohen independence theorem of the Continuum Hypothesis (CH) from first-order set theory; Zermelo's quasi-categoricity result characterizing models of second-order set theory and Lavine's improvement thereof in an extended first-order framework (using Feferman's idea of a "full schema" allowing substitution of formulas from arbitrary language-expansions); and Väänänen's internal categoricity results.

In light of these technical results, we assess the ongoing debate between proponents of a set-theoretic multiverse (likening the CH to Euclid's parallel postulate in geometry) and defenders of the determinacy of the truth-value of the CH. We present two arguments against the multiverse view, and end with a discussion of the philosophical difficulties in explaining what it means 'to be a solution of the continuum problem'.

February 3, 2021 Paulo Guilherme Santos

February 3, 2021 [Online]

k-Provability in PA

Paulo Guilherme Santos (Tübingen)

We study the decidability of k-provability in PA – the decidability of the relation 'being provable in PA with at most k steps' – and the decidability of the proof-skeleton problem – the problem of deciding whether a given formula has a proof with a given skeleton (the list of axioms and rules that were used). The decidability of k-provability for the usual Hilbert-style formalisation of PA is still an open problem, but it is known that the proof-skeleton problem is undecidable for that theory. Using new methods, we present a characterisation of some numbers k for which k-provability is decidable, and a characterisation of some proof-skeletons for which one can decide whether a formula has a proof with that skeleton (these characterisations are natural and parameterised by unification algorithms).
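The two decision problems can be stated schematically (the symbol $\vdash_{\le k}$ below is our shorthand, not notation from the talk):

```latex
\[
  \textit{$k$-provability:}\quad
  \text{given } \varphi,\ \text{decide whether } \mathrm{PA} \vdash_{\le k} \varphi,
\]
\[
  \textit{proof-skeleton problem:}\quad
  \text{given } \varphi \text{ and a skeleton } s,\ \text{decide whether some PA-proof of } \varphi \text{ has skeleton } s.
\]
```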

January 27 + 20 + 13, 2021 Dr. Roberta Bonacina

January 27, 2021 + January 20, 2021 + January 13, 2021  [Online]

Introduction to Homotopy Type Theory I, II and III

Dr. Roberta Bonacina (Tübingen)

Homotopy type theory is a vibrant research field in contemporary Mathematics. It aims at providing a foundation of Mathematics extending Martin-Löf type theory with the central notion of univalence, which induces a connection between types and homotopy spaces.
We will begin the short course by defining the simple theory of types and showing how it can be extended to Martin-Löf type theory and then to homotopy type theory. We will stress the propositions-as-types interpretation connecting the type theories and intuitionistic logic, and study in detail the notion of equality. Then we will show how classical logic can be recovered in this intuitionistic setting by introducing the law of excluded middle and the axiom of choice as axioms. Finally, we will analyse the different definitions of equivalence, which are fundamental for introducing univalence.
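As a minimal illustration of the propositions-as-types interpretation mentioned above (a sketch in Lean 4, not part of the course materials):

```lean
-- Propositions-as-types: a proof of A → (B → A) is a program
-- that takes a proof a : A and ignores its second argument.
theorem const_imp (A B : Prop) : A → (B → A) :=
  fun a => fun _ => a

-- Equality as a type: reflexivity inhabits the identity type x = x.
example (x : Nat) : x = x := rfl
```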

Lecture notes

2020

December 16, 2020 Prof. Dr. Klaus Mainzer

December 16, 2020 [Online]

Verification and Standardization of Artificial Intelligence - Results of the German Steering Group (HLG) of the AI Standardization Roadmap

Prof. Dr. Klaus Mainzer (München)

German Standardization Roadmap on Artificial Intelligence

 

December 9, 2020  Prof. Dr. Eberhard Knobloch

December 9, 2020 

Leibniz's Concept of an ars characteristica or ars combinatoria: Examples from Mathematics

Prof. Dr. Eberhard Knobloch (TU Berlin)

Leibniz's concept of an ars characteristica or ars combinatoria illustrates the close connection between his philosophical and his mathematical thinking. The theoretical first part of the talk presents this concept together with its four advantages. Symbolic algebra served Leibniz as a model for the concept. The second part of the talk therefore exemplifies it with algebraic examples, in particular the example of the symmetric functions. These were his central tool in the search for an algorithmic solution of an algebraic equation of arbitrary degree.

December 2, 2020 Dr. Richard Lawrence

December 2, 2020 

Hankel's Formalism, Frege's Logicism, and the Analytic-Synthetic Distinction

Dr. Richard Lawrence (Tübingen)

I will discuss some research on Hermann Hankel, an early proponent of a formalist viewpoint in the foundations of mathematics, and the relation of his view to Gottlob Frege's logicism. I will argue that Hankel had an important influence on Frege. In particular, Hankel's understanding of the analytic-synthetic distinction, and his argument against Kant's view of arithmetic, play an important role in Frege's understanding of his logicism in the Foundations of Arithmetic. Frege thinks of the distinction the same way Hankel does, and shares Hankel's basic strategy for arguing that arithmetic is analytic, rather than synthetic. Given these similarities, an important question arises about how Frege's view differs from Hankel's; I will close with some comments about the differences.

Link to the corresponding paper: https://philpapers.org/rec/LAWFHA

November 25, 2020 Dr. Michael T. Stuart

November 25, 2020 

Guilty Artificial Minds: An Experimental Study of Blame Attributions for Artificially Intelligent Agents

Dr. Michael T. Stuart (Tübingen)

The concepts of blameworthiness and wrongness are of fundamental importance in human moral life. But to what extent are humans disposed to blame artificially intelligent agents, and to what extent will they judge their actions to be morally wrong? To make progress on these questions, we adopt two novel strategies. First, we break down attributions of blame and wrongness into more basic judgments about the epistemic and conative state of the agent, and the consequences of the agent’s actions. In this way, we are able to trace any differences in the way participants treat artificial agents back to differences in these more basic judgments about, e.g., whether the artificial agent “knows” what it is doing, and how bad the consequences of its actions are. Our second strategy is to compare attributions of blame and wrongness across human, artificial, and group agents (corporations). Others have compared attributions of blame and wrongness between human and artificial agents, but the addition of group agents is significant because these agents seem to provide a clear middle ground between human agents (for whom the notions of blame and wrongness were created) and artificial agents (for whom the question is open).

November 18, 2020 Natalie Clarius

November 18, 2020 

Automated Model Generation, Model Checking and Theorem Proving for Linguistic Applications

Natalie Clarius, B.A. (Tübingen)

We present a model generator, model checker and theorem prover we developed for applications in linguistics. Alongside a live demonstration of the system, we will discuss a selection of phenomena with respect to their formal and computational tractability, as well as the theoretical foundations and limitations of such automated reasoning systems.
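The core ideas of model checking and model generation can be sketched for propositional logic (this is our toy illustration, not the system demonstrated in the talk, which handles richer linguistic semantics):

```python
# A minimal propositional model checker and model generator.
# Formulas are atom strings or tuples: ("not", f), ("and", f, g), ("or", f, g).
from itertools import product

def check(formula, model):
    """Evaluate a formula against a model mapping atoms to truth values."""
    if isinstance(formula, str):
        return model[formula]
    op, *args = formula
    if op == "not":
        return not check(args[0], model)
    if op == "and":
        return check(args[0], model) and check(args[1], model)
    if op == "or":
        return check(args[0], model) or check(args[1], model)
    raise ValueError(f"unknown connective: {op}")

def generate_models(formula, atoms):
    """Model generation by exhaustive search: all satisfying assignments."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))
            if check(formula, dict(zip(atoms, vals)))]

f = ("and", "rain", ("not", "sun"))
print(check(f, {"rain": True, "sun": False}))   # True
print(generate_models(f, ["rain", "sun"]))      # [{'rain': True, 'sun': False}]
```

Exhaustive search is exponential in the number of atoms, which already hints at the tractability limitations such automated reasoning systems face on realistic linguistic fragments.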

November 11, 2020 Dr. Maël Pégny

November 11, 2020

Machine Learning and Privacy: What's Really New?

Dr. Maël Pégny (Tübingen)

In this presentation, I will try to capture the new challenges for the respect of privacy raised by machine learning. I will use both a (very) long-term perspective inspired by anthropological work on the effects of cognitive techniques and the origins of writing, and a short-term perspective based on comparisons with other types of algorithms and data treatment. I will try to show that machine learning has very specific and fundamental effects, which include challenging some of the basic categories on which our legal data protection regime was built.

November 4, 2020 Prof. Dr. Klaus Mainzer

November 4, 2020

Artificial Intelligence in the Global Competition of Value Systems

Prof. Dr. Klaus Mainzer (München)

The "atomic age" that Carl Friedrich von Weizsäcker took as his starting point in the 1950s and 1960s is a thing of the past. Today and tomorrow, the issues are digitalization and artificial intelligence (AI): a global topic for the future that is dramatically changing the way we live and work. In times of Corona, this development has gained additional momentum. These technical possibilities fall on very different ideological ground; in the USA or China, for instance, big business, technocracies, and state monopolism can thrive on them. How can a European system of values contribute to making AI a sustainable innovation?

Suggested reading:
K. Mainzer, Künstliche Intelligenz. Wann übernehmen die Maschinen?, 2nd ed., Springer 2019 (English translation: Springer 2019);
idem, Leben als Maschine: Wie entschlüsseln wir den Corona-Kode? Von der Systembiologie und Bioinformatik zu Robotik und Künstlicher Intelligenz, Brill Mentis 2020

Video of the talk: https://www.youtube.com/watch?v=Tf4ccAetTSM