Reverse Ethics: What AI Teaches Us About Our Own Moral Contradictions


As we try to build ethical AI, we find that it mirrors our own moral contradictions, turning these systems from tools into teachers in the realm of 'reverse ethics.'


December 2nd, 2025


When AI Reflects Our Contradictions Back to Us

Picture this: a team of engineers strives to build a virtual assistant that responds with kindness and nuance to every question. Then comes the inevitable testing phase where users pose tricky questions. “Is it okay to lie to protect someone’s feelings?” Suddenly, these programmers realize they don’t have a universal answer to offer.

This phenomenon repeats in AI labs worldwide. In attempting to program AI ethics, we discover that our own values are full of gray areas and exceptions. How do we teach nuance to a machine when we ourselves navigate an ocean of moral ambiguities?

The Inconsistencies Revealed

Transparency, But Not Too Much

We demand complete transparency from AI systems about their functioning. Yet in our daily lives, we value privacy and accept numerous institutional “black boxes.” We tolerate opaque decision-making from governments and corporations but become indignant when an algorithm can’t explain its reasoning.

As AI ethics researcher Kate Crawford noted in her book “Atlas of AI”: “We often expect machines to meet ethical standards that we as a society have not yet achieved ourselves.”

The Impossibility of Objectivity

Another paradox emerges when we ask AI to be perfectly impartial. We want bias-free systems, while our societies remain deeply biased.

Consider the controversy around COMPAS, an algorithm used to predict criminal recidivism in the US justice system. When researchers discovered it had racial biases, a heated debate ensued: should the algorithm reflect the reality of our current system (with its biases) or correct for these biases (thus diverging from reality)? This dilemma forces us to recognize that pure objectivity is an ideal, not an achievable state.
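To make the dilemma concrete, here is a minimal, hypothetical sketch in Python (the groups, numbers, and risk tool are invented, not the real COMPAS data): when two groups have different underlying reoffense rates, a predictor can look fair by one metric (how often a “high risk” label is correct) while looking unfair by another (how often people who never reoffend get flagged).

```python
# Fictional sketch, not real COMPAS data: two toy groups with different base
# rates of reoffending, scored by an imaginary risk tool.
# Each record is (group, reoffended, predicted_high_risk).
records = (
    [("A", True, True)] * 2 + [("A", True, False)] * 1 +
    [("A", False, True)] * 1 + [("A", False, False)] * 6 +
    [("B", True, True)] * 4 + [("B", True, False)] * 2 +
    [("B", False, True)] * 2 + [("B", False, False)] * 2
)

def metrics(group):
    rows = [(actual, pred) for g, actual, pred in records if g == group]
    tp = sum(1 for actual, pred in rows if pred and actual)
    fp = sum(1 for actual, pred in rows if pred and not actual)
    negatives = sum(1 for actual, _ in rows if not actual)
    precision = tp / (tp + fp)   # how often a "high risk" label is correct
    fpr = fp / negatives         # how often non-reoffenders get flagged
    return precision, fpr

for g in ("A", "B"):
    precision, fpr = metrics(g)
    print(f"Group {g}: precision={precision:.0%}, false-positive rate={fpr:.0%}")

# Both groups get the same precision (67%), yet group B's non-reoffenders are
# flagged far more often (50% vs. 14%). With different base rates, you cannot
# equalize both metrics at once: "unbiased" depends on which definition you pick.
```

This is the crux of the COMPAS debate: the tool’s defenders and its critics were each measuring a different, individually reasonable definition of fairness.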

Reverse Ethics: Selective Consistency

AI developers constantly struggle with variations of the “trolley problem” – thought experiments that reveal our inconsistencies when facing moral dilemmas. Depending on context, we switch between consequentialist ethics (focused on outcomes) and deontological ethics (focused on principles).

One day, we assert that we should maximize good for the greatest number; the next day, we defend inviolable principles regardless of consequences. How do we program this moral flexibility without it seeming arbitrary?

The Mirror Effect in Everyday Interactions

Our daily interactions with AI provide countless examples:

You might have asked ChatGPT to help write both a pro-environmental argument and, later, content for an oil company without seeing any contradiction. The AI complies without protest, revealing our tendency to adapt our principles according to situations.

Or perhaps you’ve been frustrated when an AI refused to help with creating a fictional scenario involving harmful stereotypes—something you might dismiss as “just a story” in casual conversation, revealing different standards for human versus AI communication.

A particularly telling pattern appears when people are asked to evaluate AI responses to ethically ambiguous questions: many rate identical ethical reasoning as “too restrictive” when it limits something they want and “appropriately cautious” when it aligns with their views.

These moments of irritation are valuable: they show us where our own contradictions lie.

Reverse Ethics: AI as a Tool for Moral Introspection


This unprecedented situation offers a rare opportunity for collective introspection. In attempting to code ethics, we are forced to clarify it, debate its foundations and exceptions.

Philosopher Martha Nussbaum suggests that certain moral questions have no perfect solution. AI helps us identify these “moral tragedies” where any decision involves an ethical loss.

Tech ethicist Tristan Harris shares: “After discussing ethical guardrails for AI systems at length, I realized how selectively I apply ethical principles in my own decision-making. I’m much more utilitarian when it benefits me and more deontological when defending my personal values.”

Toward Augmented Ethics?

This dynamic could lead us toward an “augmented ethics” where humans and AI morally co-evolve. Machines learn our values, but we learn from their learning process.

Picture a thought experiment (an illustrative scenario rather than a published study): groups discussing controversial topics with an AI mediator reach a more nuanced consensus than control groups, because the AI’s consistent application of ethical frameworks helps participants identify their own selective reasoning.

Humility as a Central Virtue

If AI teaches us one essential virtue, it’s moral humility. Our attempts to create ethical machines reveal the limits of our own understanding of ethics.

This humility could be the foundation of a new wisdom. Accepting that our moral systems are imperfect isn’t an admission of defeat but the beginning of a more authentic conversation.

Reverse Ethics: The Challenge Ahead

As AI continues its rapid evolution, it will increasingly act as an ethical mirror, reflecting our values but also our contradictions. Rather than seeing these tensions as flaws to correct, we might consider them invitations to deepen our reflection.

The next time you’re frustrated by an AI’s overly cautious or overly bold response, ask yourself this question: doesn’t my frustration reveal something important about my own contradictory expectations?

Reverse ethics isn’t just a theoretical concept—it’s a daily invitation to dialogue with ourselves. In this surprising conversation, machines aren’t so much students to whom we teach morality, but mirrors that help us better understand ourselves.

What moral contradiction have you discovered through your interaction with AI? The conversation is just beginning.

Read our previous article: Modern E2E Test Architecture: Patterns and Anti-Patterns for a Maintainable Test Suite

Check out the Thunders website to discover our latest features.

Sources:

  • Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). “Machine Bias.” ProPublica, May 23, 2016. (Documents the COMPAS algorithm controversy.)
  • Rahwan, I. (2018). “Society-in-the-loop: Programming the algorithmic social contract.” Ethics and Information Technology, 20(1), 5–14. (The AI-mediator scenario above is illustrative; this paper explores related ideas.)
  • Nussbaum, M. C. (1986; rev. ed. 2001). The Fragility of Goodness: Luck and Ethics in Greek Tragedy and Philosophy. Cambridge University Press.
  • Awad, E., Dsouza, S., Kim, R., et al. (2018). “The Moral Machine experiment.” Nature, 563, 59–64. (Related research on how people judge machine ethics.)

FAQs

Whether you're getting started or scaling advanced workflows, here are the answers to the most common questions we hear from QA, DevOps, and product teams.

No items found.
Bitmap brain
