
Responsible AI Policy

AlphaGen Holdings Limited ("AlphaGen")

Companies House no. 17084844 — registered in England and Wales

Effective date: 2026-04-27

Version: 1.0.0

Contact: responsible-ai@alpha-gen.ai

This policy sets out the operational rules AlphaGen follows when developing and deploying AI / machine-learning systems. It complements the public-facing AI Transparency page (which tells users where AI is used) and the Acceptable Use Policy (which tells users what they may not do with the output). This policy tells users what we commit to.

We anchor this policy to the OECD AI Principles, the EU AI Act (where applicable), the UK AI Safety Institute guidance, and the NIST AI Risk Management Framework.

---

1. Principles

We design, deploy, and operate AI systems against six principles, in this order of precedence when they conflict:

  1. Safety — AI must not cause physical or psychological harm.
  2. Lawfulness — AI must comply with applicable law in every jurisdiction where it is deployed.
  3. Fairness — AI must not produce decisions that systematically disadvantage protected groups.
  4. Transparency — users must be able to understand, in plain language, when AI is in use and what it is doing.
  5. Accountability — every AI decision is traceable to identifiable humans, processes, and data.
  6. Effectiveness — AI must demonstrably do what we say it does.

Where two principles conflict, we resolve in favour of the one listed earlier. When the conflict is non-trivial, the Responsible AI Review Board (§7) reviews the decision.

---

2. Customer Data and model training

2.1 Default: no training on Customer Data

AlphaGen does not use Customer Data to train, fine-tune, or otherwise improve any model that is not exclusively for the Customer's account. This is the default for every Customer.

2.2 Opt-in: per-Customer LoRA only

Customers may grant the "training" scope under the DPA and the Privacy Policy consent flow. Granting the scope authorises AlphaGen to use that Customer's HITL corrections to train **a per-Customer LoRA adapter** (see docs/legal/privacy/PUBLIC_PRIVACY_POLICY.md and docs/lora/). The adapter is:

  • Stored in the Customer's account namespace.
  • Never shared with any other Customer.
  • Deleted on Customer termination per Master Agreement §11.6.
  • Visible to the Customer in the LoRA tab with full provenance (which corrections trained which adapter version); a minimal record sketch follows this list.
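For illustration, the provenance promise in the final bullet could be recorded along the following lines. This is a hypothetical sketch; the field names and types are assumptions, not AlphaGen's actual schema.

```python
# Hypothetical sketch only: a per-Customer LoRA provenance record of the kind
# surfaced in the LoRA tab (§2.2). Field names are illustrative, not our schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class AdapterProvenance:
    customer_namespace: str          # stored only in this Customer's namespace
    adapter_version: str             # the adapter version this record describes
    correction_ids: tuple[str, ...]  # HITL corrections that trained this version
    trained_at: str                  # ISO-8601 timestamp of the training run

# Example: a (made-up) adapter version trained from two named corrections.
record = AdapterProvenance("acme-corp", "2.3.1",
                           ("corr-0001", "corr-0002"),
                           "2026-04-27T10:00:00Z")
```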

2.3 Opt-in: cross-Customer model improvement

A separate, narrowly-scoped "contribute" consent allows a Customer to contribute heavily-aggregated, privacy-preserving signals (e.g. a per-class precision-recall trace) to AlphaGen's generic detector roadmap. The signal is:

  • Aggregated across at least 5 Customers before being used.
  • Stripped of any direct identifier or per-clip content.
  • Restricted to numeric metrics — never raw imagery, never text, never audio.
  • Audited by the DPO before each release.
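As an illustration of the first three gates above (the DPO audit remains a manual step), a minimal sketch follows; the function and field names are assumptions, not AlphaGen's production code.

```python
# Hypothetical sketch only: the §2.3 "contribute" gates as a pre-release check.
# Names are illustrative; the DPO audit in the final bullet is manual.

from numbers import Number

MIN_CONTRIBUTING_CUSTOMERS = 5  # aggregation floor from §2.3


def signal_is_releasable(contributing_customers: set[str],
                         metrics: dict[str, float]) -> bool:
    """Check a candidate aggregated signal against the §2.3 constraints:
    at least 5 contributing Customers, and numeric-only metric values
    (never raw imagery, text, or audio payloads)."""
    if len(contributing_customers) < MIN_CONTRIBUTING_CUSTOMERS:
        return False
    return all(isinstance(v, Number) for v in metrics.values())
```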

2.4 Public datasets

Generic detectors are seeded from publicly available datasets under permissive licences. We document each dataset, its licence, and any known biases in the model card published alongside each release.

2.5 Synthetic data

Where appropriate, we use synthetic data generated by deterministic simulators or generative models we control. No real person's data is used in synthetic-only training paths.

---

3. Bias evaluation

Every production-released model is evaluated for bias along demographic axes relevant to its intended deployment. The evaluation is documented in the model card for that release, which is bundled with the release artefact and referenced from the changelog.

For detectors that may operate on people, the standard evaluation set covers:

  • Apparent age groups (child / adolescent / adult / senior).
  • Apparent gender presentation (male / female / non-binary / unknown).
  • Skin-tone groups (Fitzpatrick I-II, III-IV, V-VI), where detectable.
  • Lighting conditions (daylight / low-light / backlit / artificial).
  • Occlusion levels (clear / partial / heavy).

Reported metrics include per-group precision, recall, F1, and calibration. **A model is not released to production if any group's recall is more than 10 percentage points below the best group's**, unless the Responsible AI Review Board records an explicit exception with mitigation (e.g. a usage warning in the AI Transparency page).
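For illustration only, the release gate above reduces to a simple comparison over per-group recall figures. The sketch below is hypothetical (the function name and inputs are not taken from AlphaGen's pipeline) and assumes recall is expressed as a fraction in [0, 1].

```python
# Hypothetical sketch only: the 10-percentage-point recall gate from §3.
# Assumes per-group recall values have already been computed.

def passes_recall_gate(recall_by_group: dict[str, float],
                       max_gap_pp: float = 10.0) -> bool:
    """Return True if no group's recall trails the best group's recall
    by more than max_gap_pp percentage points."""
    best = max(recall_by_group.values())
    worst = min(recall_by_group.values())
    # Recall values are fractions in [0, 1]; the threshold is in percentage points.
    return (best - worst) * 100 <= max_gap_pp


# Example: a 12-point gap between "adult" and "senior" fails the gate,
# so release would require a minuted Review Board exception.
assert not passes_recall_gate({"adult": 0.94, "senior": 0.82, "child": 0.90})
```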

---

4. Human oversight

4.1 Human in the loop

The platform is Human-in-the-Loop (HITL) by design. Every production output is subject to operator review through the masking, salience, intent, and discovery games. The Customer chooses how much HITL coverage to apply per clip via the trust-weighting model.

4.2 Automated decisions

AlphaGen does not, on its own controllership, make automated decisions producing legal or similarly significant effects on natural persons under Article 22 UK GDPR. Where AlphaGen is processor for a Customer making such decisions, the Customer is responsible for ensuring a valid Article 22 basis and for implementing the safeguards required by law. Our [Acceptable Use Policy](./acceptable-use-policy.md) §3.2 makes this explicit.

4.3 Cognitive load

The cognitive-load model that gates flash-task / mode-switch / break suggestions for HITL operators never auto-applies. It shows a non-blocking suggestion the operator can accept or dismiss. We log the suggestion, the operator's decision, and the post-decision performance to evaluate whether suggestions help.
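As a hypothetical sketch of that logging commitment (field names and types are assumptions, not AlphaGen's telemetry schema):

```python
# Hypothetical sketch only: the three things §4.3 commits to logging for
# each cognitive-load suggestion shown to a HITL operator.

from dataclasses import dataclass
from typing import Literal


@dataclass(frozen=True)
class SuggestionLogEntry:
    suggestion: Literal["flash_task", "mode_switch", "break"]  # what was suggested
    operator_decision: Literal["accepted", "dismissed"]        # never auto-applied
    post_decision_accuracy: float  # operator performance after the decision,
                                   # used to evaluate whether suggestions help
```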

---

5. Explainability

Where the Customer or a data subject reasonably asks:

  1. Which model produced this output? — the answer is in the provenance metadata bundled with the export (see [AI Transparency](./ai-transparency.md) §3.1).
  2. Why this output and not another? — for detection / propagation, we provide top-N alternatives with confidence scores (a hypothetical shape is sketched after this list). For LLM synthesis (Pass 4), we provide the prompt and the chain-of-thought (where the underlying model exposes it). We do not reverse-engineer the underlying foundation model on demand.
  3. Could we change the input to get a different output? — we provide counterfactual hints where the model exposes gradients (e.g. saliency maps for the detector); we do not guarantee counterfactual explanations for the LLM synthesis.
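For illustration, the top-N alternatives mentioned in item 2 could take a shape like the following; this is a hypothetical sketch, not the actual export format.

```python
# Hypothetical sketch only: "top-N alternatives with confidence scores"
# as mentioned in item 2 above. Labels and scores are made up.

detection_alternatives = [
    {"label": "vehicle",   "confidence": 0.91},  # the output that was returned
    {"label": "trailer",   "confidence": 0.06},  # next-best alternative
    {"label": "container", "confidence": 0.02},  # ... down to the top N
]
```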

We do not market our system as offering full counterfactual explainability for every output. Where a Customer needs that guarantee for downstream regulated use, we recommend a contractual carve-out and additional manual review.

---

6. Incident handling for AI-specific issues

In addition to the security incident process on the [Trust & Security](./trust-security.md) page, AlphaGen handles **AI-specific incidents** — model failures with potential rights or safety impact — through a dedicated playbook:

  • Model regression: detected by automated guard-rails (architectural-invariants test, smoke-training, A/B guard before LoRA activation). A regression freezes the affected Customer's release until reviewed.
  • Hallucination report: Customer or data subject reports an output that is materially false. We log, investigate, and (if the underlying model is at fault) include the case in the next bias-evaluation cycle.
  • Bias report: triggers a re-evaluation against the standard bias set. If the re-evaluation finds a regression, the model rolls back.
  • Misuse report: routes to the Trust & Safety on-call per the Acceptable Use Policy §8.

---

7. Responsible AI Review Board

The Board is composed of:

  • The Chief Technology Officer (chair).
  • The Data Protection Officer.
  • The Head of Engineering.
  • An independent external advisor (currently [TO BE CONFIRMED]).

The Board meets at least quarterly and reviews:

  • Each newly trained / fine-tuned model before release;
  • Every reported AI-specific incident;
  • Every requested exception to a bias-evaluation threshold;
  • Every new capability that could enable a prohibited use under the Acceptable Use Policy §5.

Decisions and exceptions are minuted and retained for 6 years.

---

8. Use cases we will NOT support

Independent of the AUP's user-side prohibitions, AlphaGen will not develop, deploy, or sell capabilities for:

  • Lethal autonomous weapons or military targeting systems.
  • Facial recognition for mass surveillance of identified individuals.
  • Social-scoring systems that produce automated ratings of natural persons.
  • Predictive policing or pre-crime detection on individuals.
  • Election-influence or political-deception generation.
  • Generation of non-consensual intimate imagery of real persons.
  • "Emotion recognition" in workplace or educational settings, except where the data subject has explicit, freely-given, revocable consent.

These exclusions match the EU AI Act's prohibited / high-risk list and apply globally. They cannot be lifted by Order Form.

---

9. Updates

This policy is reviewed at least annually by the Responsible AI Review Board. Material changes are recorded in the document-control table below and announced in the Privacy Policy Changelog.

---

Document control

| Version | Date | Author | Notes |
|---|---|---|---|
| 1.0.0 | 2026-04-27 | AlphaGen Responsible AI Review Board | Initial publication. Anchors the no-training default, per-Customer LoRA opt-in, bias evaluation thresholds, and prohibited use-case list. |
