AI Guides are powerful, but only if people trust them. That trust depends on at least two things: keeping data safe and keeping answers fair. At More Power Together (MPT), we built both into the core of the network.

Our mission is simple: make AI Guides that communities can rely on. Guides that listen, help, and connect without ever putting people at risk. To do that, we treat safety as a system, not a setting.

Protecting People Before Data Ever Moves

The best way to protect personal information is to never let it through in the first place. That’s why MPT uses Magier, a third-party service that automatically detects and removes any personally identifiable information (PII) before it’s ever processed.

  • No human sees it. Magier filters PII in real time, scrubbing names, phone numbers, addresses, and other identifiers before the data enters the model.
  • No exposure points. Because the system redacts at the source, there’s no risk of accidental storage or transfer of sensitive information.
  • No trade-off in quality. The Guide still understands context; it just learns from patterns, not private details (see the sketch after this list).
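
To make the redact-first pattern concrete, here is a minimal sketch. Magier's real API and detection models aren't described in this post, so the client call and regex patterns below are illustrative stand-ins, not the production pipeline:

```python
import re

# Illustrative stand-ins for Magier's detectors. The real service
# presumably uses trained models (names and addresses need NER, not
# regex); these two patterns only show where detection plugs in.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace every detected identifier with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def ask_guide(prompt: str) -> str:
    """Stand-in for the Guide's language-model call."""
    return f"(model response to: {prompt})"

def handle_message(raw: str) -> str:
    # Redact first: the model, logs, and analytics only ever see the
    # scrubbed text, so no downstream copy of the original PII exists.
    return ask_guide(redact(raw))

print(handle_message("Call me at 412-555-0142 or email jo@example.org"))
```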

Fighting Bias with Continuous Oversight

Bias isn’t something you check once. It’s something you monitor constantly. MPT uses Latimer.AI, an independent evaluator that scores every single answer generated by our AI Guides.

  • Every answer, every time. Latimer evaluates tone, fairness, inclusivity, and factual grounding on a per-response basis.
  • Objective feedback. When bias or imbalance is detected, Latimer’s scoring engine feeds structured coaching back to the large language model, improving performance without human guesswork (a rough sketch of the loop follows below).
  • Transparency for trust. The scoring history becomes part of each Guide’s ethical record, giving organizations confidence that their digital teammates are staying aligned with their mission and values.
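
As a rough illustration of per-response scoring, here is one way such a loop could be wired up. Latimer.AI's actual interface isn't public in this post, so `latimer_score` is a stand-in, and the coaching step is just one plausible mechanism:

```python
from dataclasses import dataclass, field

@dataclass
class ResponseScore:
    """Per-dimension scores in [0, 1], mirroring the dimensions above."""
    tone: float
    fairness: float
    inclusivity: float
    grounding: float

    def worst(self) -> float:
        return min(self.tone, self.fairness, self.inclusivity, self.grounding)

@dataclass
class Guide:
    name: str
    system_prompt: str
    ethical_record: list = field(default_factory=list)  # scoring history

def latimer_score(question: str, answer: str) -> ResponseScore:
    """Stand-in for the external evaluator call."""
    return ResponseScore(tone=0.95, fairness=0.7, inclusivity=0.9, grounding=0.9)

def evaluate(guide: Guide, question: str, answer: str, threshold: float = 0.8) -> None:
    score = latimer_score(question, answer)
    guide.ethical_record.append(score)  # transparency: every answer is logged
    if score.worst() < threshold:
        # One plausible coaching mechanism: fold structured feedback into
        # the Guide's instructions rather than relying on ad hoc edits.
        guide.system_prompt += (
            f"\nCoaching: a recent answer scored low "
            f"(min dimension {score.worst():.2f}); rebalance tone and framing."
        )

guide = Guide(name="housing-guide", system_prompt="You help residents find housing.")
evaluate(guide, "Am I eligible?", "Probably, people like you usually are.")
```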

Human Eyes Where It Matters Most

Technology alone doesn’t guarantee safety. Human judgment does. That’s why More Power Together also built a proprietary annotation platform that lets staff and partners easily review and grade interactions.

  • Simple, intuitive review. Team members can highlight, comment, and rate conversations directly, with no code or technical setup needed (the sketch below shows what one review record might look like).
  • Staff-driven learning. Feedback from local experts ensures each Guide reflects the organization’s culture and community values.
  • External feedback loops. Community partners and users can flag responses for review, making accountability participatory, not passive.
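
For a sense of what a single review on a platform like this might capture, here is a guessed-at record; every field name below is illustrative rather than MPT's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Annotation:
    """One review of one Guide response. Field names are illustrative."""
    conversation_id: str
    reviewer: str                  # staff member, partner, or community user
    source: str                    # "staff" or "community"; review is participatory
    highlighted_text: str          # the span the reviewer called out
    comment: str = ""              # free-text note for the Guide's maintainers
    rating: Optional[int] = None   # e.g., 1 to 5; a flag may carry no rating
    flagged: bool = False          # escalates the response for human review
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

note = Annotation(
    conversation_id="conv-0042",
    reviewer="outreach-partner",
    source="community",
    highlighted_text="You should definitely qualify for...",
    comment="Too certain; eligibility depends on county rules.",
    flagged=True,
)
```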

Together, automated monitoring and human review form a closed loop of safety and improvement.

A Network You Can Trust

In a world where generative AI often trades certainty for speed, MPT takes a different path. Our Guides don’t chase clicks. They pursue outcomes, and our core metric is the number of people helped. And we do it inside a system where bias is measured, data is protected, and feedback loops are built in.

We do this because all people deserve AI they can trust, and communities deserve networks that protect them while helping them grow.
