What, if anything, limits the automation of scientific inference? Restricting attention to the development of ontologies and theories that can be used to predict and control (and setting aside explanatory tasks), I defend a three-part answer. First, automated discovery's reach has already exceeded its grasp: machine learning methods do not yield causal models, do not provide novel ontologies that can support generalizable causal theories, and are frequently deployed to generate predictive models in contexts beyond their range of applicability. This might suggest that the limits on automated inference are quite strong. However, I argue in the second place that automated scientific inference has been operating with "one hand tied behind its back," excluding existing causal methods and eschewing novel, machine-generated ontologies. The more philosophically interesting of these self-imposed limitations is the rejection of unfamiliar ontologies. I suggest that this rejection is contrary to the aims of automated discovery, and that if the restraints on existing methods are loosened, vast new vistas likely open for (automated) science to explore. This optimistic view is tempered by my third thesis: there does exist a significant impediment to human-free automated science, and it is deeply related to the difficulties faced in developing autonomous vehicles.