Responsible AI Principles
Our framework is built on four key principles designed to empower every individual to engage with AI safely and consciously.
Understanding
The first step to using AI responsibly is recognizing when it is at work. AI has real power and shapes many everyday experiences: what you watch, how services respond to you and how information is organized are all influenced by it. Understanding AI means seeing it clearly. It is a technology created by people, trained on large amounts of data and designed to support decisions at speed and scale. When you are aware of this, you stay in control. You can use AI with confidence, question its outputs when needed and make informed choices. Use it as a tool to speed up tasks, but do not let it make decisions for you.
Fairness
An AI system is only as fair in its assessments as the data it’s trained on. AI learns from real-world data and may reflect or accentuate the same gaps and inequalities that exist in our society. A responsible approach to AI means paying attention to how its decisions affect different people. It asks us to consider who benefits, who may be overlooked and whether outcomes are equitable. For example, an AI used in hiring or identification may perform better for some groups than others if fairness is not actively considered. Ensuring fairness is a shared responsibility. By questioning, reviewing and improving AI systems, we help make sure they serve the public good and reflect the diversity of the communities they impact.
Transparency
Transparency in AI means you have the right to understand how decisions about you are made. When AI influences things like what content you see, whether a service is approved or how information is ranked, those decisions should not feel like a mystery. A responsible approach to AI means systems should be able to explain their outcomes in clear, human language. It asks simple questions: Why did this happen? What information was used? Who is responsible for the result? Transparency puts people in the loop. By asking for explanations and expecting clarity, citizens help ensure AI decisions can be questioned, checked and trusted—so technology works with the public, not around it.
Agency
Agency is the right to be more than a data point. AI can sort, score and predict, but it cannot fully understand a person’s values, intentions or lived experiences. This principle affirms that decisions affecting people should not be reduced to automated outputs alone. AI may assist decisions and inform choices, but it should not define them. Preserving agency ensures that human judgment, context and responsibility remain at the centre. Technology should expand human possibility rather than narrow it.