My work focuses on the structure and content of representations in deep neural networks. State-of-the-art machine learning systems are ubiquitous in modern life: deep neural networks exhibit predictive success remarkable enough to earn the popular label of artificial intelligence (AI), with applications ranging from algorithmic decision assistance in medicine and criminal justice to playing board games like chess and Go. Our epistemic networks are increasingly intertwined with machine learning algorithms, and scientists rely on them in computational models. Yet deep learning systems are opaque in ways that make explaining their capacities intractable. Philosophers of science are uniquely positioned to investigate critical questions raised by the widespread implementation of AI systems. What can deep neural networks teach us about the brain? How do these models generate explanations in science? To what extent does science demand transparency in AI? How should rapid technological advances in AI inform public policy? How can we promote more humane and egalitarian implementations of AI in public life? Understanding how AI exploits abstract representations can shed light on each of these questions.
University of Houston Philosophy 2021
University of Houston Philosophy 2019