Exploring the intersection of philosophy and artificial intelligence

The ethical exploration of artificial intelligence in philosophy

In today’s rapidly evolving technological landscape, the integration of artificial intelligence (AI) with philosophical inquiry has become a necessity. At the helm of this intersection lies a groundbreaking course offered here at Mt. A, where students embark on a journey to dissect the ethical, societal, and existential implications of AI. Dr. Andrew Inkpen provides a glimpse into the rich array of topics covered and their consequent impact on students’ intellectual growth.

I inquired about Dr. Inkpen’s motivation behind the inception of this groundbreaking course. He cited his recognition of the pressing need to bridge the gap between philosophy and AI, and elaborated on how the course aims to navigate the ethical, societal, and existential implications of AI, offering students a platform to engage deeply with these complex issues. With a diverse group of students hailing from various academic backgrounds, the course serves as a catalyst for interdisciplinary dialogue and critical inquiry. Dr. Inkpen’s commitment to fostering an inclusive learning environment, coupled with his passion for exploring the philosophical dimensions of AI, underscores the course’s significance in addressing the transformative impact of emerging technologies on society.

Comprising philosophical inquiries and practical applications of AI, the course is designed to provoke critical thinking and foster interdisciplinary conversations. From foundational questions about the nature of intelligence to complex ethical dilemmas surrounding AI’s role in society, the course navigates through a diverse array of themes. One of the pivotal aspects of the course is its exploration of the definition and manifestation of intelligence. Dr. Inkpen explained how discussions with experts like Katherine Stinson and Murray Shanahan shed light on the complicated nature of intelligence, prompting students to question whether AI mirrors human cognition or embodies a distinct form of intelligence altogether.

Additionally, I was surprised to find out that the course also explores the historical underpinnings of intelligence, as highlighted by Stephen Cave’s examination of its eugenic roots, which Dr. Inkpen mentioned. The concept of intelligence is examined by unraveling the societal constructs and biases embedded within it; in doing so, students gain a deeper understanding of its implications for AI development. The philosophical inquiry extends to ethical considerations surrounding AI, embodied in discussions of existential risk and accountability. Drawing from the insights of Nick Bostrom, a renowned philosopher at the University of Oxford, and Stuart Russell, professor of computer science at the University of California, students grapple with the profound implications of AI for human existence and the challenges of ensuring responsible AI governance.

Gabriel Theriault – Argosy Illustrator

The course also dives into the creative potential of AI, as exemplified by AlphaGo’s groundbreaking move in the game of Go. By engaging with philosophical perspectives on AI-generated art and creativity, students explore the boundaries between human and machine intelligence, challenging traditional notions of creativity and expression. Beyond philosophical ideas, the course also explores the practical applications of AI in various domains, including healthcare and labor automation. Through guest lectures and discussions led by experts like Crystal Sharp, students gain insights into the real-world implications of AI technologies and their impact on society.

In addition to interviewing Dr. Inkpen, I reached out to a student in the seminar course, Matya Stavnitzky, regarding their perspective on the course. Matya mentioned that since the course is a seminar there is no lecturing at all, with the course consisting of readings and discussions. Matya also let me know that a variety of topics have been discussed in the course, including consciousness, biases in algorithms, and AI art. They stated that, “a big takeaway from the class for many students is the environmental impact AI has, especially large models such as ChatGPT.” Stavnitzky added, “we don’t often think of software as having large impacts on the physical world, so learning about the carbon footprint of AI has been sobering.” Matya expressed their concern with how AI has been applied in recent years without consideration of the limitations of its data. Stavnitzky then offered the opinion that, “we have been using AI systems to correct human fallibility without fully examining the ways machines can be fallible,” elaborating that machine error is often harder to identify than human error.

I was intrigued by the success of the course, so I inquired about the future of similar courses in Philosophy at Mt. A. Dr. Inkpen reflected on the future of such courses within philosophy departments, providing valuable insights into the evolving landscape of higher education. While acknowledging the logistical challenges, his optimism underscores a growing recognition of the necessity to integrate philosophical inquiry with emerging technological domains like artificial intelligence. His emphasis on interdisciplinary collaboration and specialized faculty expertise highlights the need for innovative approaches to curriculum development. As institutions continue to grapple with the ethical and societal implications of AI, Dr. Inkpen’s vision offers a compelling blueprint for the role of philosophical ideas in shaping responsible AI governance and fostering critical inquiry in the digital age.

The fact that AI is being discussed in philosophy courses is important and fascinating because it offers a transformative learning experience that transcends disciplinary boundaries, igniting curiosity and fostering intellectual growth. Through the lens of philosophy, students gain a fuller understanding of AI’s profound implications for humanity, paving the way for informed dialogue and responsible innovation in the field of AI.

