Control Point Definitions

This architecture consists of multiple interoperable control points, each governing a distinct class of system transitions.

They operate outside the model, can be deployed independently, and may be combined to form broader enforcement configurations.

Input Control Point

AIPT — AI Intent Prompt Translator

What it is:

The first control point in the architecture.

Governs what enters the model.

How it works:

Before any input reaches the system, AIPT interprets user intent and transforms ambiguous or incomplete inputs into structured, model-ready representations.

When intent is clear, input is normalized directly.

When intent is ambiguous, AIPT generates a set of structured interpretations representing distinct, valid resolutions of that intent.

A human selection mechanism is used to resolve ambiguity by choosing the intended interpretation.

The selected interpretation is then converted into a constrained, machine-executable representation before being passed downstream.

AIPT resolves ambiguity through human selection before the system acts.
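The flow above can be sketched in Python. This is a minimal illustration, not the specification: `Interpretation`, `translate`, and the `select` callback are assumed names, and the human selection mechanism is reduced to a callback for clarity.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Interpretation:
    """One structured, valid resolution of the user's intent."""
    summary: str
    machine_form: dict  # constrained, machine-executable representation


def translate(interpretations: List[Interpretation],
              select: Callable[[List[Interpretation]], Interpretation]) -> dict:
    """If intent is clear (a single interpretation), normalize directly;
    otherwise resolve the ambiguity via a human selection callback
    before anything is passed downstream."""
    if len(interpretations) == 1:
        return interpretations[0].machine_form
    chosen = select(interpretations)  # human chooses the intended meaning
    return chosen.machine_form
```

The key property is that nothing leaves `translate` until ambiguity has been resolved, so downstream control points only ever see a constrained representation.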

What it prevents:

Behavioral variability caused by articulation differences

Inconsistent model responses to equivalent intents

Ambiguity propagating into execution

Noisy or underspecified inputs entering downstream systems

Why it matters architecturally:

Every control point downstream depends on clean, resolved input.

AIPT is not prompt optimization—it is human-validated intent normalization at the entry point.

Execution Control Points

CNIL

What it is:

A structural enforcement layer that operates independently of language and reasoning.

How it works:

CNIL enforces control conditions using structural signals and state-based logic. Conditions are either met or they aren't — there is no interpretive middle ground. Because it operates outside semantic reasoning, enforcement decisions remain stable regardless of how a prompt is phrased or how the model responds.
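A minimal sketch of this kind of state-based check, assuming conditions are plain key/value requirements over system state (`cnil_permit` is an illustrative name):

```python
def cnil_permit(state: dict, required: dict) -> bool:
    """Structural check: every required condition must hold exactly.
    Nothing here consults model output or prompt text, so phrasing
    cannot change the decision."""
    return all(state.get(key) == value for key, value in required.items())
```

Because the decision is a pure function of structural state, the same inputs always produce the same enforcement outcome.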

What it prevents:

Hallucination-driven enforcement failures. When a model misinterprets, confabulates, or reasons its way around a constraint, CNIL holds because it never consulted the model's reasoning in the first place.

CIM

What it is:

The control point that maintains enforcement continuity over the course of an interaction.

How it works:

CIM preserves session state, constraints, and prior decisions across an interaction. Once a constraint is set, it cannot silently change, erode, or be reinterpreted as the conversation evolves.
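The "no silent change" property can be sketched as a constraint store that rejects overwrites instead of applying them. The class and exception names here are illustrative, not part of CIM's definition.

```python
class ConstraintViolation(Exception):
    """Raised when an established constraint would silently change."""


class SessionConstraints:
    """Once a constraint is set for a session, any attempt to
    replace it with a different value is rejected rather than
    silently applied."""

    def __init__(self) -> None:
        self._constraints: dict = {}

    def set(self, name: str, value) -> None:
        if name in self._constraints and self._constraints[name] != value:
            raise ConstraintViolation(f"constraint {name!r} is already set")
        self._constraints[name] = value

    def get(self, name: str):
        return self._constraints[name]
```

Re-setting a constraint to its existing value is a no-op; only a *different* value triggers the violation, which is what distinguishes continuity from rigidity.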

What it prevents:

Context drift — the gradual degradation of established constraints through extended interaction, prompt manipulation, or model reinterpretation. If a boundary was set at the start of a session, CIM ensures it holds at the end.

Human Authorization Control Point

HAP

What it is:

The architectural boundary between automated system execution and verified human intent.

How it works:

HAP intercepts specific execution transitions and requires a verified, deliberate human authorization signal before the system can proceed.

Authorization is presence-based rather than credential-based.

At the moment authorization is required, the system generates a random challenge prompt that the human must respond to in real time. The response is validated against previously registered human artifacts (such as voice, facial identity, or other biometric signals).

Because the challenge is random and time-bound, authorization cannot be replayed, pre-generated, or automated.

If a valid human response is not produced, the execution path remains blocked.
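The freshness and single-use mechanics can be sketched as follows. This is a simplification: a real deployment validates the response against registered biometric artifacts, whereas this sketch stands in an echoed token so only the time-bound, non-replayable structure is shown.

```python
import secrets
import time


class HumanAuthorizationPoint:
    """Presence-based gate: a fresh random challenge must be answered
    in real time. The 'human response' here is the echoed token; a
    real system would validate voice, facial, or other biometric
    artifacts instead."""

    def __init__(self, ttl_seconds: float = 30.0) -> None:
        self._ttl = ttl_seconds
        self._challenge = None
        self._issued_at = 0.0

    def issue_challenge(self) -> str:
        # Random and freshly generated: cannot be pre-computed.
        self._challenge = secrets.token_hex(16)
        self._issued_at = time.monotonic()
        return self._challenge

    def authorize(self, response: str) -> bool:
        if self._challenge is None:
            return False  # no live challenge: replayed responses fail
        fresh = (time.monotonic() - self._issued_at) <= self._ttl
        valid = secrets.compare_digest(response, self._challenge)
        self._challenge = None  # single use
        return fresh and valid
```

Note that the challenge is consumed on first use, so even a correct response cannot be submitted twice.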

What it prevents:

  • AI systems executing actions under assumed or stored human authority

  • Credential replay or token reuse

  • Silent privilege escalation

  • Automated approval loops

  • Agents performing consequential actions without active human awareness

Why it matters architecturally:

Most authentication systems verify identity.

HAP verifies active human presence at the moment of execution.

This ensures that the final authority over consequential actions remains with a real, present human, not merely a stored credential.

Output Acceptance Control Point

Truth Gate

What it is:

The control point that governs when generated outputs may be accepted as authoritative.

How it works:

Generated content may exist in a provisional state, but acceptance as trusted or executable information requires traversal of explicit validation conditions. Truth Gate separates generation from acceptance and enforces validation prior to propagation or execution.
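The separation of generation from acceptance can be sketched as a provisional record that only becomes authoritative after traversing every validator. The names `GeneratedOutput` and `truth_gate` are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GeneratedOutput:
    content: str
    accepted: bool = False  # provisional until validated


def truth_gate(output: GeneratedOutput,
               validators: List[Callable[[str], bool]]) -> GeneratedOutput:
    """Acceptance requires every validation condition to pass;
    failing any single check leaves the output provisional."""
    output.accepted = all(check(output.content) for check in validators)
    return output
```

Downstream systems would consult `accepted` before propagating or executing, so no output becomes authoritative by default.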

What it prevents:

Outputs becoming authoritative through default propagation, unverified information being treated as truth, and downstream systems acting on unvalidated model responses.

Record Control Point

Time File

What it is:

A document authority primitive that governs the transition from draft to resolved state.

How it works:

Documents exist in an editable draft state until explicitly resolved by authorized parties. Resolution is a governed event — it triggers a cryptographic timestamp and locks the document state. After resolution the record is immutable, non-branching, and time-bound. No silent edits, no version ambiguity, no disputed authorship.
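The draft-to-resolved transition can be sketched as below. This is a reduced model: a local SHA-256 digest and system clock stand in for the cryptographic timestamping described above, and `TimeFileRecord` is an assumed name.

```python
import hashlib
import time


class TimeFileRecord:
    """Editable draft until resolve(); afterwards the content is
    frozen and bound to a digest and timestamp. A real Time File
    would use an external cryptographic timestamp authority."""

    def __init__(self, content: str = "") -> None:
        self.content = content
        self.resolved = False
        self.digest = None
        self.resolved_at = None

    def edit(self, new_content: str) -> None:
        if self.resolved:
            raise PermissionError("record is resolved and immutable")
        self.content = new_content

    def resolve(self) -> str:
        self.resolved = True
        self.digest = hashlib.sha256(self.content.encode()).hexdigest()
        self.resolved_at = time.time()
        return self.digest
```

Any attempt to edit after resolution raises rather than branching or silently rewriting, which is the non-branching, immutable property in miniature.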

What it prevents:

Post-hoc modification, ambiguity over what was agreed and when, disputes over authorship or acceptance, and historical record manipulation.

Why it matters architecturally:

Every governance system ultimately depends on trustworthy records. Time File makes the record itself a controlled artifact.

Signal Modules

Skill Proof

Skill Proof evaluates demonstrated capability rather than declared expertise, producing signals indicating whether an entity can reliably perform specific tasks based on observable behavior. These signals inform control point decisions but do not enforce transitions.

Fact Funnel

Fact Funnel evaluates the reliability and provenance of information, producing structured signals describing confidence, lineage, and verification status. These signals inform acceptance and propagation decisions without acting as enforcement mechanisms.
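Both modules emit advisory signals rather than decisions; a control point consumes those signals alongside its own conditions. A minimal sketch of such a signal and its consumption, with illustrative field and function names:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ReliabilitySignal:
    """Advisory output of a signal module: it describes confidence
    and provenance but enforces nothing on its own."""
    subject: str        # the claim or capability being described
    confidence: float   # 0.0 (unreliable) to 1.0 (fully verified)
    lineage: tuple      # chain of sources or observations
    verified: bool


def accept_with_signal(signal: ReliabilitySignal, threshold: float) -> bool:
    """A control point USING a signal: the decision lives in the
    control point, not in the signal module."""
    return signal.verified and signal.confidence >= threshold
```

This keeps the architectural split intact: signal modules inform, control points enforce.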

Coordination Modules

AIAP

AIAP governs how AI systems interact across boundaries by establishing a structured handshake that carries authorization and control context between systems. This preserves rule continuity and prevents cross-system activity from bypassing enforcement logic.
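The handshake can be sketched as a small structured record checked by the receiving side before any cross-system request is honored. `Handshake` and `accept_handshake` are illustrative names under assumed fields (origin, token, constraints).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Handshake:
    """Carries authorization and control context across a system
    boundary so enforcement logic travels with the request."""
    origin: str
    authorization_token: str
    constraints: tuple  # rules that must continue to apply downstream


def accept_handshake(hs: Handshake, trusted_origins: set) -> bool:
    """Receiving side honors a cross-system request only if the
    handshake is complete and comes from a trusted origin."""
    return hs.origin in trusted_origins and bool(hs.authorization_token)
```

Because the constraints ride inside the handshake, the receiving system can continue enforcing them rather than starting from a blank slate.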

Integrity Lock

What it is:

A structural authority separation mechanism that prevents execution without external authorization.

How it works:

Agent runtimes possess no standing execution permissions. All side-effecting actions require a cryptographically verifiable authorization artifact generated outside the model’s trust boundary. Execution surfaces independently validate this artifact before permitting state transitions. If valid authorization is not present, execution cannot occur.
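The artifact flow can be sketched with HMAC as a stand-in for the cryptographically verifiable authorization artifact; the function names and the choice of HMAC-SHA256 are assumptions for illustration, not the specified mechanism.

```python
import hashlib
import hmac


def issue_authorization(action: str, key: bytes) -> str:
    """Produced by the external authorizer. In a real deployment the
    key lives outside the agent's trust boundary; the agent never
    sees it and so cannot mint artifacts for itself."""
    return hmac.new(key, action.encode(), hashlib.sha256).hexdigest()


def execute(action: str, artifact: str, key: bytes) -> str:
    """The execution surface validates the artifact independently;
    without a valid one, no state transition can occur."""
    expected = hmac.new(key, action.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(artifact, expected):
        raise PermissionError("no valid external authorization")
    return f"executed: {action}"
```

Note that an artifact is bound to a specific action: reusing it for a different action fails validation, which is the structural incapability described above.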

What it prevents:

Autonomous self-authorization, silent privilege escalation, credential replay, and agents executing actions beyond externally granted authority.

Why it matters architecturally:

Integrity Lock relocates execution authority outside the reasoning system. Governance is enforced through structural incapability rather than detection, monitoring, or policy interpretation.

© Control Points Portfolio. All rights reserved. Confidential.
