
Tool

Model Dignity Check

What it is: A pre-launch checklist for ensuring AI systems preserve human dignity and catch blind spots before they scale.


Project details

Where you would use it:

·       Before any AI system or feature goes live

·       When reviewing existing AI implementations

·       During design phases to catch problems early

·       When updating or retraining AI models

The tool: Five questions to ask before launch:

·       Who becomes invisible when we optimize? Name specific people, not categories—“elderly residents in walk-ups” not “some users”

·       What “normal” is baked into the training data? Every dataset tells a story about who matters—whose reality shaped this system?

·       How does this perform for our most vulnerable users? Test on edge cases—the users with least power, resources, or technical literacy

·       Can affected humans understand and contest decisions? Opacity breeds distrust—is there a real path to challenge the algorithm?

·       Does this strengthen or erode human agency? Are we augmenting human judgment or replacing it?

How to use it: Before launch, document written answers to all five questions. Be specific—vague answers hide real problems. If any answer troubles you, redesign before deploying. This isn’t a compliance checkbox but a discipline for catching what pure optimization misses. Run this check again whenever you update or retrain the system.
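The pre-launch discipline above can be sketched as a simple gate in code. This is a minimal illustration with hypothetical names (`DIGNITY_QUESTIONS`, `dignity_check`), not an official implementation: it blocks a launch until every one of the five questions has a written answer that is specific rather than vague.

```python
# A minimal sketch of the five-question dignity check as a pre-launch gate.
# All names here are illustrative assumptions, not part of any real library.

DIGNITY_QUESTIONS = [
    "Who becomes invisible when we optimize?",
    "What 'normal' is baked into the training data?",
    "How does this perform for our most vulnerable users?",
    "Can affected humans understand and contest decisions?",
    "Does this strengthen or erode human agency?",
]

# Answers that signal a checkbox mentality rather than real reflection.
VAGUE_ANSWERS = {"", "n/a", "tbd", "none", "some users", "unknown"}

def dignity_check(answers: dict[str, str]) -> list[str]:
    """Return the questions that still block launch.

    An empty list means every question has a specific written answer.
    Short or placeholder answers count as blockers, since vague answers
    hide real problems.
    """
    blockers = []
    for question in DIGNITY_QUESTIONS:
        answer = answers.get(question, "").strip()
        if answer.lower() in VAGUE_ANSWERS or len(answer) < 20:
            blockers.append(question)
    return blockers
```

In practice you would run this gate before every launch and again after each retraining, deploying only when `dignity_check(answers)` comes back empty.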

More tools

Check out more tools

The Mirror Test

The Curiosity Loop

There's Little Time To Waste

Don't wait. Get started today building your personal or organizational roadmap to thriving in the age of AI.

Start a Project
