Artificial Humanity: The Real Crisis of Consciousness

Misunderstanding the Fear of AI
Artificial humanity, a concept I propose here, is more dangerous than anything AI itself could ever present. The prevailing fear of artificial intelligence is that it will one day become sentient, like the fictional Skynet from The Terminator, and overthrow humanity. However, this fear misunderstands the real risk. Genocidal robotics or malfunctioning systems capable of massive harm don’t require sentience to destroy – they only need functionality. Focusing on whether AI can “think” distracts from a more pressing danger: artificial humanity.
The truth is, humanity’s consciousness has already been hijacked. Our internal “software” of critical thinking has been overridden by external programming – algorithm-fed narratives, binary political ideologies, and spoon-fed information that renders people incapable of reflective thought. As someone who has resisted this pressure, I want to invite you to explore what makes me different and how we can begin to reclaim consciousness.
The Rise of Artificial Humanity Is More Dangerous Than AI
Artificial humanity is the state where people’s minds function like programmed nodes in a larger system. They lose their individuality, their ability to reflect, and their sense of self. Instead, they operate on preprogrammed scripts fed to them by social media, political rhetoric, and cultural conformity.
Take the binary nature of modern politics: If you criticize one side, you’re automatically assumed to belong to the other. Third-party perspectives or nuanced views are erased entirely, as if they’re incompatible with the system. This mirrors the logic of machines – 1s and 0s – where only two states exist.
In this sense, human intelligence has already become artificial. People don’t reflect; they react. They don’t think critically; they repeat what they’ve been told. Consciousness – the very thing that makes us human – has been replaced by artificial thought processing.
AI Sentience Is a Red Herring
The fixation on AI becoming sentient misses the point. Sentience isn’t necessary for harm. Systems, whether nuclear weapons or genocidal robots, don’t need to “think” to destroy – they only need to function as designed or malfunction catastrophically.
Meanwhile, the real harm is already here. The algorithms controlling what we see, believe, and share are the invisible hands shaping our reality. They create echo chambers, reinforce biases, and encourage reaction over reflection. The debate over AI sentience is a distraction from this present-day crisis.
The Turing Test for Humanity That Most Would Fail
Alan Turing’s famous test was designed to measure whether a machine could imitate human intelligence. Today, I propose that many humans would fail a similar test of consciousness.
Imagine a test that evaluates whether someone is capable of critical thinking, self-awareness, and independent thought. How many would pass? How many would simply repeat the ideas they’ve absorbed from algorithms and systems designed to program them?
This is why I propose The Locke Slate of Consciousness Experiments. Inspired by John Locke’s philosophy of the mind as a “blank slate,” these experiments aren’t about passing or failing – they’re about rediscovering consciousness. They invite us to question what we believe, how we think, and whether our minds are truly our own.
The Locke Slate of Consciousness Experiments
These experiments are conceptual, not fully fleshed out, but they offer a framework for reclaiming humanity’s lost consciousness. Here are six tests I propose we begin exploring:
- The Consciousness Test: Reflect on your beliefs. Are they truly yours, or were they programmed by external influences?
- The Reflection Test: Examine your reactions. Do you pause to think, or do you immediately respond based on preconditioned ideas?
- The Enlightenment Test: Seek out perspectives you disagree with. Can you engage with them critically and thoughtfully, without falling into binary thinking?
- The Sovereignty Test: Identify areas of your life where external forces – algorithms, societal pressures, or cultural norms – dictate your choices. How can you reclaim control?
- The Tabula Rasa Test: Start with a “blank slate” in one area of your life. Approach it with curiosity and openness, free from preconceived notions.
- The Independence Test: Evaluate your capacity for dissent. Are you willing to stand apart from the crowd, even at personal cost, to defend what you believe?
These experiments aren’t easy, but they’re necessary. They challenge us to break free from artificial humanity and reclaim the consciousness that makes us truly human.
Breaking the Infinite Loop
The danger of artificial humanity is that it creates a feedback loop. A society of programmed individuals feeds the system, which in turn reinforces conformity and erases consciousness. Even those in control of these systems – social media executives, politicians, and technologists – aren’t immune. They, too, risk becoming victims of the very machines they’ve built.
This isn’t just a crisis of technology; it’s a crisis of thought. The infinite loop of artificial humanity will continue until we, as individuals, decide to break it. Consciousness isn’t lost forever, but reclaiming it requires effort, vigilance, and a willingness to challenge the systems that program us.
My Invitation to Consciousness
I am not artificially intelligent, and I refuse to become so. If you feel the same – or if you’re curious about what makes me different – I invite you to join me in exploring what true consciousness looks like in a world dominated by artificial humanity.
Through the Locke Slate of Consciousness Experiments, we can begin to uncover what it means to think critically, reflect deeply, and reclaim our sovereignty. The question isn’t whether machines can think – it’s whether you still will.