Guiding Truth and Understanding

There are too few philosophers in the AI revolution right now. Philosophy is also far less widely understood than AI. These days, people may use multiple AI tools in their work or personal lives, yet philosophy remains one of the least frequently studied disciplines at university. Alongside mathematics, it is among the most mentally demanding disciplines to engage with.

That means no matter how intelligent or capable an artificial intelligence system like an LLM appears, it is only trained on a corpus of text. Simply reading and regurgitating philosophical ideas is the lowest form of practicing the discipline of philosophy. So philosophers in the AI revolution have a doubly difficult task: they must continue advancing one of our most important human activities (philosophy) while engaging with something that could be existentially threatening to our species itself.

Background On AI In A Philosophic Context

The term “artificial intelligence” (AI) was first coined in 1956 by computer scientist John McCarthy during the Dartmouth Conference, a pivotal event that marked the formal establishment of AI as a distinct field of study. 

Despite its widespread use, the term “artificial intelligence” is often misunderstood, leading to several misconceptions:

Equating AI with Human Intelligence: Many assume that current AI systems possess human-like understanding and consciousness. In reality, AI models operate based on complex algorithms and data processing without genuine comprehension or awareness. 

Overestimating AI’s Capabilities: There’s a common belief that AI can autonomously solve any problem. However, AI systems are typically specialized for specific tasks and lack the general problem-solving abilities inherent to human intelligence. 

Underestimating AI’s Limitations: Some view AI as infallible, overlooking issues like biases in training data and the potential for errors in AI-generated outputs. Recognizing these limitations is crucial for responsible AI development and deployment. 

Dissecting Misconceptions

These misconceptions highlight the importance of philosophical inquiry to critically assess and guide the development and integration of AI technologies in society.

Unlike engineers, whose work focuses on building and refining AI systems, philosophers are trained to ask deeper questions: What does it mean to “know” something? How do beliefs and opinions shape actions, and how does the absence of such internal states affect moral accountability in machines? These questions become especially urgent as AI increasingly mirrors human-like behaviors without truly “thinking” in human terms.

For example, human beings often express attitudes or opinions they do not fully understand or articulate, and AI can do the same through programming and learned patterns. This raises critical philosophical questions: Does the expression of an attitude require understanding, or is its impact more important than its origin? Philosophers, tasked with examining these nuances, can guide humanity toward clearer definitions of what it means to be human in contrast to what it means to simulate humanity.

Uncrowned Philosopher Kings

Moreover, philosophers are needed not just as kings or decision-makers but as guides for society—helping to uncover truths buried under layers of technological and ideological confusion. In an age where truth is fragmented and contested, philosophers bring a commitment to inquiry, skepticism, and intellectual honesty that is essential for charting a way forward. They can postulate truths yet to be discovered, provide frameworks for ethical decision-making, and help reconcile human values with technological advances.

Beyond Abstractions

Philosophy has often been regarded as a discipline concerned with abstract, lofty questions—pondering the nature of existence, morality, and truth in ways seemingly detached from the practical challenges of daily life. However, the rise of artificial intelligence challenges this perception, revealing philosophy’s vital role in addressing some of the most pressing questions of our time. At its core, the development and integration of AI are not merely technical endeavors but deeply human ones. These systems, designed to mimic intelligence, force us to reconsider what it means to think, reason, and act with purpose. AI systems increasingly impact human lives through decisions in healthcare, justice, education, and the economy, and these decisions are only as fair or meaningful as the frameworks we use to shape them.

This is where philosophy steps in—not as a distant academic exercise, but as a toolkit for questioning assumptions, analyzing complex systems, and identifying deeper truths that inform ethical decision-making. In the rush to advance AI, there is a temptation to focus solely on efficiency and capability without fully grappling with their societal implications. Who decides what is ethical? What values guide the creation of these systems? How do we ensure these technologies align with the principles of human dignity and equity? These are philosophical questions, not just technical ones, and answering them requires a commitment to rigorous, interdisciplinary inquiry.

No Ivory Towers

Philosophy, in this sense, is not an ivory tower pursuit but a practical necessity. It offers humanity a compass for navigating the uncertainties of the AI revolution—ensuring that the technologies we create serve human flourishing rather than undermining it. Philosophers are needed not only to help shape the ethical boundaries of AI but also to help society understand itself in light of these advancements. By providing the intellectual tools to navigate these challenges, philosophy ensures that we remain not only creators of technology but also custodians of its purpose and meaning.