AI is Deciding for Me, But Who’s Watching AI?

AI not only aids decision-making; it makes decisions on our behalf, often without our awareness and with little transparency, accountability, or public oversight.

Artificial Intelligence (AI) has been quietly influencing our lives for decades, often without our conscious awareness. However, recent advancements, particularly in Generative AI, have thrust AI into the spotlight, marking a pivotal moment reminiscent of the iPhone's transformative impact.

AI is fundamentally changing how we live, faster and more invisibly than ever before. It's no longer confined to labs or sci-fi movies. It's in our pockets and on our screens, influencing what we see, the choices we make, and the opportunities available to us.

  • Internet searches are filtered by AI, which prioritizes the information we see first.
  • Social media algorithms shape our worldview, reinforcing certain opinions while hiding others and creating filter bubbles.
  • AI hiring tools decide whether our job applications are ever viewed.
  • Banking AI affects whether we get a loan, a mortgage, or financial support.
  • Healthcare AI helps diagnose diseases – but it might not work equally well for all patients.

So, the real question is: Who's watching AI?

In a world where AI is increasingly woven into our daily lives, it is imperative that we understand its uses and implications and recognize our role in shaping its ethical development.

Why It Matters: AI Systems Reflect Our World, Flaws and All

AI systems are now integral parts of our lives in areas like hiring, lending, law enforcement, and healthcare. However, the historical data used to train these systems can be skewed, embedding biases that carry real-world consequences.

  • Facial Recognition Errors: Studies have shown that facial recognition software often misidentifies individuals from certain racial groups, leading to false arrests and other grave consequences (AIMultiple).
  • Job Screening Discrimination: Amazon’s AI hiring tool was scrapped after it was found to favor male applicants over female ones because it was trained on past hiring patterns that were already biased (Reuters).
  • Healthcare Bias: An AI tool designed to detect skin cancer performed significantly worse for patients with darker skin tones because its training data mostly came from lighter-skinned individuals. This means people of color are less likely to get correct diagnoses (Prolific).
  • Soap Dispenser Failure: An AI-powered soap dispenser failed to detect darker skin tones and would not dispense soap. Though subtle, this example shows how easily bias can be built into everyday technologies (Policy Options).

Fairness, Transparency, and Accountability

AI’s biggest problem? It’s a black box.

The complexity of AI algorithms makes it hard to understand how they reach their decisions, and therefore hard to identify and correct biased or unfair outcomes. Without transparency, how can we trust AI to make fair decisions?

  • AI Image Generation Bias: In 2024, Google's Gemini chatbot faced backlash after generating racially inappropriate images, highlighting the need for greater oversight and ethical considerations in AI development (The Wall Street Journal).
  • AI in Immigration Screening: The UK’s Home Office uses AI to prioritize immigration cases, but critics say the system disadvantages applicants of certain nationalities while fast-tracking others, leading to unfair outcomes (The Guardian).
  • Facial Recognition & Surveillance: Many cities—including some in Canada—use AI-powered surveillance. But studies show these systems misidentify people of color more often, leading to higher rates of wrongful identification and profiling (Knight Columbia).

AI Isn't Just Data—It's About People

AI bias and flaws have real consequences for real people.

  • AI Amplifying Harmful Content: AI applications like X's Grok have been linked to an increase in online racist abuse, showing how AI can be misused to amplify harmful content (The Guardian).
  • Housing Discrimination: An AI-driven tenant screening system wrongfully denied housing to renters based on biased data. A class-action lawsuit forced the company to pay over $2.2 million in settlements (AP News).
  • AI in Credit Scoring: AI models have assigned lower credit scores to minority applicants even when their financial histories were similar to those of white applicants, deepening financial inequality (arXiv).
  • AI in the Workplace: Some companies use AI to watch employee productivity—tracking mouse movements, keystrokes, and webcam activity. But who decides what’s “productive” and what’s not? (The Australian).

AI is more than just a tool; it is a decision-maker. Used to enhance our strengths, it can advance humanity, but left unchecked, it could reinforce inequality, limit opportunities, and erode privacy.

Final Thoughts: Embracing Our Role in AI's Evolution

AI is experiencing a transformative moment, akin to the advent of the iPhone, and is rapidly integrating into many aspects of our lives. This evolution presents both opportunities and challenges. By staying informed, taking part in public consultations, and advocating for fairness and transparency, we can help ensure that AI development aligns with societal values and promotes equity for all. Our collective actions today will shape the ethical landscape of AI for future generations.

We don’t get that future by waiting.

We start building it today. By showing up, speaking up, and shaping a responsible world.