Uni-Tübingen

Dilectiss Liu

Contact: dilectiss.liu[at]uni-tuebingen.de
Doctoral procedure opened: 29 April 2021
Doctoral colloquium: 19 July 2021


Biographical Information

  • Feb 2018–Present: Research Assistant and PhD Student at the Research Training Group 1808: Ambiguity – Production and Perception, Eberhard Karls University of Tübingen
  • Dec 2017–Apr 2018: Research Assistant for the EU project RE.CRI.RE – Between the Representation of the Crisis and the Crisis of Representation, Ludwig-Maximilians University of Munich
  • Oct 2015–Oct 2017: MA in Logic and Philosophy of Science, Munich Center for Mathematical Philosophy; Thesis: “Steps Towards Reconstructive Pragmatism – A Metaphilosophical Proposal Post Conceptual Analysis”; Supervisor: Prof. DDr. Hannes Leitgeb; Second reader: Dr. Norbert Gratzl; DAAD Graduate Scholarship
  • Feb 2015–Jul 2015: Visiting Scholar at Department of Philosophy, University of Sydney; Supervisor: Prof. Dr. Mark Colyvan
  • Feb 2011–Dec 2014: Bachelor of Arts with Honours, Australian National University; Thesis: “The Value of Knowledge”; Supervisor: Dr. Brian Garrett; Major in Mathematics, Major in Philosophy; Defence Signals Directorate Mathematics Prize – 2013


Abstract: “Machine Philosophy – A Foundation for Philosophical Methodology Against Philosophical Exceptionalism” (working title)

This essay serves two purposes. First, it argues against a kind of philosophical exceptionalism – the idea that philosophical methodology is distinct in kind from that of the empirical sciences. Second, it lays a foundation for constructing a manual on how to do proper philosophy.
This project is motivated by two developments. The first is the disarray of the past century in both philosophical practice and treatises on philosophical methodology: the conflicts over the role of ordinary language, the arbitrary distinction between normative and descriptive projects, and the mysterious status of intuitions despite their pervasiveness in philosophical practice. These issues have manifested in all areas of philosophy; among the most infamous examples are the debates on what knowledge is, on whether the mind is physical, and on whether ethics is an objective matter. The blatant problem is that philosophers cannot seem to agree on anything because, simply put, they have no established criteria for agreement. This is especially dire given that the empirical sciences have established a working – though imperfect – manual on how to do proper science. The lack of progress in philosophy is typically measured against the steady progress of the sciences – in particular, against the convergence of scientific opinion – which is a direct consequence of an established scientific method.
Admittedly, the idea of a manual on how to do proper philosophy may prima facie seem like an oxymoron. This appearance stems from the age-old idea that philosophy is unique in being the only field in which everything can be questioned, including the very method of enquiry. That outdated idea rests on a conflation of epistemic uncertainty with reasonable doubt. Theories in the empirical sciences are, by the nature of their justification, epistemically uncertain; yet, given our understanding of probabilistic reasoning, we no longer draw the faulty inference that whatever is uncertain can be reasonably doubted. Our understanding of what it is to be reasonable also underwent revision in the late twentieth century, from the outdated goal of epistemic certainty to a more pragmatic and holistic one, on which being reasonable means being reasonable with respect to a domain or goal. For example, one need not be a pragmatist to accept that violating logical closure is not irrational under most circumstances, even in pure mathematics, because some information is simply more valuable than other information (see the worked calculation below). Given this more refined understanding of probability, there is no reason to think that everything can be reasonably questioned, in philosophy or anywhere else.
The lack of agreement in philosophy is thus not a virtue but a vice. It is a consequence not of some unique openness of philosophy, but of a methodological black box. I suspect that the lifeblood of this black box is the comforting illusion of unrestricted creative input. Yet philosophy, as I see it – and as a matter of fact, given how professional philosophy is funded – is an epistemic, not an artistic, enquiry. The goal of philosophy is truth. So if we wish to defend the value of philosophy and to display real philosophical progress, we simply need to get out of this fuzzy black box.
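To make the closure point concrete, here is a standard lottery-style calculation (assuming, purely for illustration, that the 100 propositions are probabilistically independent):

```latex
% Each of 100 propositions is individually nearly certain:
%   P(p_i) = 0.99  for  i = 1, ..., 100
% Yet their conjunction is more likely false than true:
\[
  P\Bigl(\bigwedge_{i=1}^{100} p_i\Bigr) = 0.99^{100} \approx 0.366
\]
```

Believing each p_i while withholding belief in their conjunction violates logical closure, yet it is exactly what the probabilities recommend.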
The second development that motivated this project is the progress of the statistical sciences, especially machine learning, in academia and in practice alike. I see this as the perfect catalyst for overcoming our methodological black box, for a simple reason: the success of statistical reasoning has bypassed, and rendered innocuous, the problem of Cartesian uncertainty. This claim needs qualification. Machine learning is not a recent idea; before the early 2000s, however, data was relatively scarce and expensive, and computers were too slow to process it in large quantities, so machine learning models simply did not perform well enough to compete with symbolic A.I. systems built on explicit Boolean instructions. Even now, philosophy clings to Boolean reasoning as the default mode of argumentation: the validity of the Gettier arguments in epistemology, for instance, relies entirely on Boolean reasoning – take it away and the Gettier cases fall apart.
Over the last decade, however, big data became feasible as data grew cheaper and more abundant while computing power multiplied. In light of these two developments, machine learning models began to outperform symbolic A.I., solving problems that symbolic approaches had long struggled with, such as defeating the best human players at Go and producing reliable natural language translations; in some areas, such as automated driving, machine learning is indispensable. This problem-solving capability has shown us that, contrary to traditional assumptions, low-fidelity trial-and-error learning can be more efficient and more accurate than high-fidelity explicit instructional learning. It has shown us that, in general, having more data is better than having smarter algorithms, if the goal of our enquiry is to fit a useful curve to a given set of data – and this curve-fitting just is the activity of theorising in the sciences (see the toy sketch below).
Scientists have learned that the real goals of a scientific theory are practical: how well the theory generalises (that is, how well it works in practice, how well it predicts, and so on), or how well it explains the data. Truth is measurable only via these indirect means, and the best way to pursue them is to do statistics: evidence should be weighed for statistical significance rather than simply accepted or denied. In short, the success of machine learning has shown us that our age-old obsessions with epistemic certainty, Boolean reasoning, and fitting the data perfectly are distractions from real philosophical work. It is the goal of this essay to rectify this situation.
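The curve-fitting point can be sketched in a few lines of Python (a toy illustration assuming numpy; the linear data-generating law, sample sizes, and polynomial degrees are my own choices for the sketch, not anything from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Toy world: a simple linear law observed with noise.
    x = rng.uniform(0, 1, n)
    return x, 2.0 * x + rng.normal(0, 0.2, n)

x_train, y_train = sample(10)    # scarce, expensive data
x_test, y_test = sample(1000)    # held-out data: a proxy for generalisation

for degree in (1, 9):
    # A degree-9 polynomial can pass through all 10 training points,
    # i.e. it fits the given data (almost) perfectly.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

On a setup like this one expects the degree-9 model to achieve near-zero training error yet a worse test error than the straight line, and enlarging the training sample to shrink that gap – fitting the data perfectly is a distraction, while generalisation, helped by more data, is the real measure.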


Research Interests

  • Epistemology (esp. contextualism, justification, models of belief & decision, social epistemology, machine epistemology)
  • Logic & philosophy of language (esp. theory of definition, ‘approximate truth’, formal semantics, meaning and use, vagueness)
  • Carnapian rational reconstruction (esp. of concepts/definitions)
  • Pragmatism (esp. utility measures, weighing of criteria, relevance)
  • Normativity (esp. value measures, permissibility, rationality, rational choice)