Governance
Bitmodels aims to govern AI in a world that largely exists without governance. Creators who work with us are given express ownership of their likeness, and the confidence that it will never be used without consent for training data or marketing purposes.
Category:
Design
Author:
Archer
Read:
10 mins
Location:
Los Angeles
Date:
Feb 5, 2024




AI Governance, Likeness, and the Responsibility to Represent
Artificial intelligence has reached a point where representation is no longer abstract. Faces, bodies, expressions, and identities can now be recreated, synthesized, and recontextualized with extraordinary fidelity. This capability unlocks immense creative potential, but it also carries a responsibility that cannot be deferred to technology alone. At Bitmodels, we view AI governance not as a compliance exercise, but as a foundational design principle. The question is no longer whether AI systems can recreate likeness, but how they should, and under what conditions. As the boundaries between synthetic and real representations continue to blur, the protection of individual likeness is becoming one of the most critical issues in the future of AI regulation and creative practice.

Likeness Is Not Data — It Is Identity
A person’s likeness is not merely a collection of pixels or features. It is inseparable from identity, reputation, agency, and consent. Unlike generic datasets or abstract styles, likeness carries social, legal, and emotional weight. Historically, image rights were governed within relatively narrow domains—photography, film, advertising—where authorship and distribution were easier to trace. AI changes this by enabling likeness to be reconstructed, transformed, or simulated without direct capture and at unprecedented scale. This shift requires a reframing of governance. Likeness must be treated not as derivative data, but as personal representation, deserving of explicit protection.




Bitmodels’ Approach: Consent, Control, and Containment
At Bitmodels, our governance framework rests on three core principles.

1. Explicit Consent
We do not recreate or deploy identifiable likenesses without clear, informed permission from the individual or authorized rights holder. Consent is not assumed, implied, or generalized. It is specific to use, scope, and context. This means:
- No unlicensed replication of real individuals
- No "lookalike" generation intended to circumvent consent
- No training or fine-tuning on protected likeness without authorization
Consent is not a one-time checkbox. It is an ongoing condition.

2. Controlled Representation
Even with consent, representation must be bounded. We design systems to limit how a likeness can be used, modified, or redistributed. This includes:
- Defined use cases (e.g., editorial, creative, commercial)
- Restrictions on sensitive contexts
- Clear lineage between original approval and final output
The goal is not to extract maximum flexibility from likeness, but to preserve alignment between representation and intent.

3. Auditability and Traceability
AI systems must be accountable. We prioritize workflows where generated outputs can be traced back to:
- The consent framework under which they were created
- The model configuration used
- The context in which they are deployed
This is essential not only for internal governance, but for future regulatory compatibility; a minimal sketch of what such a traceable record could look like appears at the end of this section.

One of the failures of past technological rollouts, from social media to data brokerage, was the assumption that governance could be layered on after scale had already been achieved. AI cannot afford this mistake. Once likeness is misused, the harm is difficult to reverse. Reputation, trust, and personal safety are not easily restored after unauthorized representation. For this reason, Bitmodels integrates governance at the model, process, and cultural level. Ethical safeguards are not external constraints; they are part of how creative work is structured from the start.

Globally, AI likeness legislation is converging on stronger, more explicit protections of individual identity, expanding traditional image and publicity rights to cover synthetic representations, AI-generated lookalikes, and posthumous usage. Lawmakers are increasingly prioritizing consent-centric frameworks, emphasizing explicit opt-in mechanisms and clear rights of revocation, and placing the burden of compliance on creators and platforms rather than individuals. At the same time, regulators recognize that responsibility cannot rest solely with end users, and are moving toward holding platforms accountable for implementing safeguards, monitoring misuse, and maintaining transparent documentation. A critical legal distinction is also emerging between stylistic influence, which remains broadly permissible, and identifiable likeness, which is increasingly protected under law. Together, these trends suggest a future where governance is not a constraint on creativity, but a structural requirement for its ethical and sustainable practice.
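To make the three principles concrete, here is a minimal sketch in Python of how a consent-scoped, traceable generation record might be structured. Everything here is a hypothetical illustration under assumed names (ConsentRecord, GenerationRecord, the use-case values), not a description of Bitmodels' production systems.

```python
# Hypothetical sketch only: these classes and field names are illustrative
# assumptions, not Bitmodels' actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional
import hashlib


class UseCase(Enum):
    EDITORIAL = "editorial"
    CREATIVE = "creative"
    COMMERCIAL = "commercial"


@dataclass
class ConsentRecord:
    """Explicit, scoped consent granted by a rights holder (Principle 1)."""
    consent_id: str
    rights_holder: str
    allowed_uses: set[UseCase]     # consent is specific to use and scope
    restricted_contexts: set[str]  # e.g. {"political", "adult"}
    revoked: bool = False          # consent is an ongoing condition
    expires_at: Optional[datetime] = None

    def permits(self, use: UseCase, context: str) -> bool:
        """Controlled representation (Principle 2): bound every request
        to the scope the rights holder actually approved."""
        if self.revoked:
            return False
        if self.expires_at and datetime.now(timezone.utc) > self.expires_at:
            return False
        return use in self.allowed_uses and context not in self.restricted_contexts


@dataclass
class GenerationRecord:
    """Traceability (Principle 3): tie each output to its consent basis,
    model configuration, and deployment context."""
    consent_id: str
    model_config: dict       # e.g. model name, version, fine-tune lineage
    deployment_context: str
    output_sha256: str       # content hash links the approval to the output
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def record_generation(consent: ConsentRecord, use: UseCase, context: str,
                      output_bytes: bytes, model_config: dict) -> GenerationRecord:
    """Refuse out-of-scope requests; otherwise log an auditable record."""
    if not consent.permits(use, context):
        raise PermissionError(f"Use '{use.value}' in context '{context}' "
                              f"is outside the consented scope.")
    return GenerationRecord(
        consent_id=consent.consent_id,
        model_config=model_config,
        deployment_context=context,
        output_sha256=hashlib.sha256(output_bytes).hexdigest(),
    )
```

The design choice worth noting is that the scope check runs before any record is written, so an out-of-scope request fails closed instead of being logged after the fact, and every stored output carries the consent ID, model configuration, and content hash needed to trace it back to its original approval.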

