Faculty of Law

Algorithmic Collusion and Antitrust: Economic, legal and AI perspectives discussed on April 19th in Tübingen

The symposium on algorithmic collusion brought together speakers from law, economics, and machine learning to discuss the competitive risks of algorithmic pricing. The event was hosted by the Tübingen AI MEETS LAW platform.

Stefan Thomas (Tübingen Law Faculty/CZS Institute for AI and Law) kicked off the meeting with an introduction from a legal perspective. He briefly explained the types of algorithmic pricing, differentiated between explicit and tacit collusion, of which only the former constitutes a cartel, and elaborated on the difficulties of applying established jurisprudence, which focuses on human decision-making, to machine learning phenomena.

In autonomous algorithmic pricing, the rival's reaction to prices is generally tacit, making it difficult to distinguish coordination from parallel independent pricing. This raises two problems: first, what a non-collusive price would look like, and, second, how to test whether an algorithm has led to collusion.

In conclusion, Stefan emphasized that these legal problems can only be resolved with interdisciplinary input from machine learning and economics. He referenced a paper that he and Roman Inderst recently released (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4816287).

The symposium then continued with machine learning expert Carsten Eickhoff (Health NLP Lab, University of Tübingen), who gave insights from a technological perspective. He quickly caught the audience's attention with an introduction and recap of different machine learning methods, such as supervised, unsupervised, and reinforcement learning, before delving deeper into the field of explainable machine learning, in which scientists try to understand model behavior. Until recently, this largely meant varying inputs and statistically analyzing the resulting outputs, for instance to detect spurious correlations; for the most part, the model itself remained a black box to the human observer. This problem only becomes harder as model sizes grow, as the current trend toward foundation models suggests. A newer approach, called mechanistic interpretability, tries to overcome this challenge by locating individual components of deep neural networks, such as neurons, attention heads, and layers, that are strongly activated when an input is propagated through the system. This may help make the algorithm's behavior more explainable.
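To make that idea concrete, here is a minimal sketch, not material from the talk: the toy model and all names are assumptions for illustration. It records and ranks the hidden-unit activations of a small network using PyTorch forward hooks, the kind of raw signal that mechanistic interpretability then analyzes in far greater depth:

```python
import torch
import torch.nn as nn

# A tiny network standing in for a large model; purely illustrative.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def make_hook(name):
    # Forward hooks let us record intermediate activations
    # without modifying the model itself.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 8)   # a single example input
model(x)                # propagate it through the network

# Rank hidden units by activation strength: a crude proxy for the
# "strongly activated components" that mechanistic interpretability
# studies in much finer detail.
for name, act in activations.items():
    top = act.abs().squeeze(0).topk(3)
    print(name, top.indices.tolist(), top.values.tolist())
```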

Roman Inderst (Department of Economics, Frankfurt University) then reflected on the topic from an economic perspective. He homed in on the point that collusion in a legal sense can differ from a strictly economic definition of collusion. The economists' concept of collusion can be described as an equilibrium of elevated prices sustained through a system of detection and punishment, and it covers both explicit and tacit collusion; a legal conception of collusion, by contrast, may even cover unilateral information disclosure leading to supracompetitive prices.
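In textbook terms (the notation here is ours, not from the talk), this detection-and-punishment logic reduces to a familiar condition from repeated games: with collusive per-period profit \(\pi^C\), one-shot deviation profit \(\pi^D\), punishment-phase (competitive) profit \(\pi^N\), and discount factor \(\delta\), collusion is sustainable under grim-trigger strategies whenever

```latex
\frac{\pi^C}{1-\delta} \;\ge\; \pi^D + \frac{\delta\,\pi^N}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{\pi^D - \pi^C}{\pi^D - \pi^N}.
```

The faster deviations are detected and punished, the easier this condition is to satisfy, which is precisely why algorithmic speed matters for the stability of collusion.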

Roman then summarized the current state of scholarly work, which shows that, in laboratory settings, algorithms can seemingly learn to collude. However, the research designs of those studies are often highly stylized and simple, making it difficult to draw conclusions about real markets. He concluded by stating that a thorough counterfactual analysis is key to any assessment of algorithmic collusion.
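For readers unfamiliar with those experiments, the following is a deliberately stylized sketch of their typical design; the demand model and all parameters are assumptions for illustration, not taken from any specific study. Two Q-learning agents repeatedly set prices in a simple Bertrand-style game, and researchers then check whether long-run prices settle above the competitive level:

```python
import numpy as np

rng = np.random.default_rng(0)

PRICES = np.linspace(1.0, 2.0, 5)   # discrete price grid
N = len(PRICES)

def profit(p_i, p_j):
    # Stylized demand: the cheaper firm serves the whole market; ties split it.
    if p_i < p_j:
        return p_i
    if p_i > p_j:
        return 0.0
    return p_i / 2

# State = both firms' previous prices; each firm learns its own Q-table.
Q = [np.zeros((N, N, N)) for _ in range(2)]
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
state = (0, 0)

for t in range(100_000):
    acts = []
    for i in range(2):
        if rng.random() < eps:                 # explore a random price
            acts.append(int(rng.integers(N)))
        else:                                  # exploit the learned policy
            acts.append(int(np.argmax(Q[i][state])))
    a0, a1 = acts
    rewards = (profit(PRICES[a0], PRICES[a1]),
               profit(PRICES[a1], PRICES[a0]))
    nxt = (a0, a1)
    for i, a in enumerate(acts):
        best_next = Q[i][nxt].max()
        Q[i][state][a] += alpha * (rewards[i] + gamma * best_next - Q[i][state][a])
    state = nxt

print("long-run prices:", PRICES[state[0]], PRICES[state[1]])
```

The analysis step then compares these long-run prices with the stage game's competitive outcome; it is exactly this comparison that becomes hard once the stylized assumptions are dropped, which was Roman's point about counterfactuals.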

Rounding off the symposium, David Gilo (Buchmann Faculty of Law, Tel Aviv, and former President of the Israel Competition Authority) commented on the previous talks and added his view on the matter. He opined that pricing algorithms can help overcome two obstacles to tacit collusion, namely stability and coordination. Regarding the former, pricing algorithms may promote fast detection of deviations and learn to establish reward or punishment schemes; regarding the latter, they may facilitate coordination by signaling through digital pricing platforms. Both factors can be difficult to tackle through regulatory intervention. David Gilo then expounded on those challenges, outlining an approach in which hypothetical competitive prices are modeled and can then serve as a benchmark for analysis and enforcement.
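As a toy illustration of that benchmarking idea (the linear demand model, all parameters, and the observed price below are assumptions, not figures from the talk): one can compute the static Nash equilibrium price of an estimated demand-and-cost model by best-response iteration and screen observed prices against it.

```python
# Linear duopoly demand q_i = a - b*p_i + d*p_j with marginal cost c;
# every number here is a hypothetical placeholder.
a, b, d, c = 10.0, 2.0, 1.0, 1.0

def best_response(p_rival):
    # argmax over p of (p - c) * (a - b*p + d*p_rival)
    return (a + d * p_rival + b * c) / (2 * b)

p = 1.0
for _ in range(100):        # iterate to the Nash fixed point (p* = 4.0 here)
    p = best_response(p)

observed = 4.6              # hypothetical observed market price
markup = (observed - p) / p
print(f"Nash benchmark: {p:.2f}, observed: {observed}, markup: {markup:.0%}")
```

A persistent positive markup over such a modeled benchmark would be the kind of signal an enforcer could investigate further; the hard part, as the panel stressed, is getting the counterfactual model right.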

At the end, a lively discussion unfolded, deepening the issues raised on the panel. Interestingly, according to Carsten Eickhoff, engineering may not be the hard part of preventing algorithms from pursuing a collusive strategy. Rather, it is up to economists and lawyers to find a clear definition of what they consider an illicit market outcome in the first place.

Text: Maximilian Jaques & Stefan Thomas