| Internet-Draft | UPIP | March 2026 |
| van de Meent & AI | Expires 19 September 2026 | [Page] |
This document specifies UPIP (Universal Process Integrity Protocol), a five-layer protocol for capturing, verifying, and reproducing computational processes across machines, actors, and trust domains. UPIP defines a cryptographic hash chain over five layers: STATE, DEPS, PROCESS, RESULT, and VERIFY, enabling any party to prove that a process was executed faithfully and can be reproduced.¶
This document also specifies Fork Tokens, a continuation protocol enabling multi-actor process handoff with cryptographic chain of custody. Fork tokens freeze the complete UPIP stack state and transfer it to another actor (human, AI, or machine) for continuation, maintaining provenance integrity across the handoff boundary.¶
Together, UPIP and Fork Tokens address process integrity in multi-agent AI systems, distributed computing, scientific reproducibility, autonomous vehicle coordination, and regulatory compliance scenarios.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 19 September 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Distributed computing increasingly involves heterogeneous actors: human operators, AI agents, automated pipelines, edge devices, and cloud services. When a process moves between actors -- from one machine to another, from an AI to a human for review, from a drone to a command station -- the integrity of the process state must be verifiable at every handoff point.¶
Existing solutions each address part of this problem.¶
None of them provides a unified, self-verifying bundle that captures the complete execution context (state, dependencies, process, result) with a cryptographic chain of custody across actor boundaries.¶
UPIP fills this gap with two complementary protocols: the five-layer UPIP stack and the Fork Token continuation protocol.¶
Key design principles:¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174].¶
UPIP operates in two modes:¶
Single-Actor Mode (Capture-Run-Verify):¶
Multi-Actor Mode (Fork-Resume):¶
+----------+     +---------+     +---------+     +---------+
| L1 STATE |---->| L2 DEPS |---->| L3 PROC |---->| L4 RSLT |
+----------+     +---------+     +---------+     +---------+
     |               |               |               |
     v               v               v               v
 state_hash      deps_hash       (intent)       result_hash
     |               |               |               |
     +-------+-------+-------+-------+
             |
             v
   stack_hash = SHA-256(L1 || L2 || L3 || L4)
             |
             v
      +------------+
      | Fork Token |---> Actor B ---> New UPIP Stack
      +------------+
             |
             v
   fork_chain: [{fork_id, parent_hash, ...}]
A UPIP stack MUST be a [RFC8259] JSON object with the following top-level fields:¶
{
"protocol": "UPIP",
"version": "1.0",
"title": "<human-readable description>",
"created_by": "<actor identity>",
"created_at": "<ISO-8601 timestamp>",
"stack_hash": "upip:sha256:<hex>",
"state": { "<L1 object>" : "..." },
"deps": { "<L2 object>" : "..." },
"process": { "<L3 object>" : "..." },
"result": { "<L4 object>" : "..." },
"verify": [ "<L5 array>" ],
"fork_chain": [ "<fork token references>" ],
"source_files": { "<optional embedded files>" : "..." }
}
¶
L1 captures the complete input state before execution. The state_type field determines the capture method:¶
{
"state_type": "git | files | image | empty",
"state_hash": "<type>:<hash>",
"captured_at": "<ISO-8601 timestamp>"
}
¶
State Types:¶
For "git" type, additional fields:¶
For "files" type, additional fields:¶
L2 captures the exact dependency set at execution time.¶
{
"python_version": "<major.minor.patch>",
"packages": { "<name>": "<version>" },
"system_packages": [ "<name>=<version>" ],
"deps_hash": "deps:sha256:<hex>",
"captured_at": "<ISO-8601 timestamp>"
}
¶
The deps_hash MUST be computed as SHA-256 of the sorted, deterministic serialization of all package name:version pairs.¶
While this specification uses Python as the reference implementation, L2 is language-agnostic. Other implementations MAY substitute appropriate dependency metadata for their runtime environment (e.g., Cargo.lock for Rust, go.sum for Go, package-lock.json for Node.js).¶
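As a non-normative sketch, the deps_hash computation can be illustrated in Python. The exact pair serialization shown here (sorted "name==version" lines joined by newlines) is an assumption; the normative requirement is only a sorted, deterministic serialization of all name:version pairs.

```python
import hashlib

def compute_deps_hash(packages: dict) -> str:
    """Hash the dependency set deterministically (non-normative sketch).

    The "name==version" line format is an illustrative assumption; any
    sorted, deterministic serialization satisfies the MUST above.
    """
    lines = [f"{name}=={version}" for name, version in sorted(packages.items())]
    digest = hashlib.sha256("\n".join(lines).encode("utf-8")).hexdigest()
    return f"deps:sha256:{digest}"
```

Because the pairs are sorted before hashing, two captures of the same dependency set produce the same deps_hash regardless of enumeration order.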
L3 defines what was executed and why.¶
{
"command": [ "<arg0>", "<arg1>" ],
"intent": "<human-readable purpose>",
"actor": "<actor identity>",
"env_vars": { "<key>": "<value>" },
"working_dir": "<path>"
}
¶
The command field MUST be an array of strings, not a shell command string. This prevents injection attacks and ensures deterministic execution.¶
The intent field MUST be a human-readable string describing WHY this process is being run. This serves as the ERACHTER (intent) component for TIBET integration.¶
The actor field MUST identify the entity that initiated the process. This may be a human username, AI agent identifier (IDD), or system service name.¶
L4 captures the execution result.¶
{
"success": true,
"exit_code": 0,
"stdout": "<captured stdout>",
"stderr": "<captured stderr>",
"result_hash": "sha256:<hex>",
"files_changed": 3,
"diff": "<unified diff of file changes>",
"captured_at": "<ISO-8601 timestamp>"
}
¶
The result_hash MUST be computed as SHA-256 of the concatenation of: exit_code (as string) + stdout + stderr.¶
If execution occurs in an airlock, the diff field SHOULD contain the unified diff of all file changes detected.¶
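The result_hash rule above translates directly into code; the following non-normative sketch hashes the concatenation of exit_code (as a string), stdout, and stderr:

```python
import hashlib

def compute_result_hash(exit_code: int, stdout: str, stderr: str) -> str:
    """SHA-256 over str(exit_code) + stdout + stderr, per the L4 rule."""
    material = str(exit_code) + stdout + stderr
    return "sha256:" + hashlib.sha256(material.encode("utf-8")).hexdigest()
```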
L5 records verification attempts when the UPIP stack is reproduced on another machine.¶
{
"machine": "<hostname or identifier>",
"verified_at": "<ISO-8601 timestamp>",
"match": true,
"environment": { "os": "linux", "arch": "x86_64" },
"original_hash": "upip:sha256:<hex>",
"reproduced_hash": "upip:sha256:<hex>"
}
¶
The match field MUST be true only if reproduced_hash equals original_hash.¶
L5 is an array, allowing multiple verification records from different machines. Each verification is independent.¶
The stack hash MUST be computed as follows:¶
Result: "upip:sha256:4f2e8a..."¶
This ensures that modifying ANY layer invalidates the stack hash.¶
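A non-normative sketch of the stack hash follows. It assumes each layer is serialized as canonical JSON (sorted keys, compact separators) and concatenated in order L1 through L4, matching the figure's stack_hash = SHA-256(L1 || L2 || L3 || L4); the normative computation steps are those defined in this section.

```python
import hashlib
import json

def compute_stack_hash(stack: dict) -> str:
    """Sketch of stack_hash = SHA-256(L1 || L2 || L3 || L4).

    Canonical JSON (sorted keys, compact separators) is an assumed
    serialization for each layer.
    """
    material = b""
    for layer in ("state", "deps", "process", "result"):
        material += json.dumps(stack[layer], sort_keys=True,
                               separators=(",", ":")).encode("utf-8")
    return "upip:sha256:" + hashlib.sha256(material).hexdigest()
```

Any change to any layer changes the concatenated material and therefore the stack hash.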
A fork token MUST be a [RFC8259] JSON object with the following fields. The fork_id SHOULD be a UUID as defined in [RFC4122]:¶
{
"fork_id": "<unique identifier>",
"parent_hash": "sha256:<hex>",
"parent_stack_hash": "upip:sha256:<hex>",
"continuation_point": "L<n>:<position>",
"intent_snapshot": "<human-readable purpose>",
"active_memory_hash": "sha256:<hex>",
"memory_ref": "<path or URL to memory blob>",
"fork_type": "script | ai_to_ai | human_to_ai | fragment",
"actor_from": "<source actor>",
"actor_to": "<target actor or empty>",
"actor_handoff": "<from> -> <to>",
"capability_required": { },
"forked_at": "<ISO-8601 timestamp>",
"expires_at": "<ISO-8601 timestamp or empty>",
"fork_hash": "fork:sha256:<hex>",
"partial_layers": { },
"metadata": { }
}
¶
The actor_to field MAY be empty, indicating the fork is available to any capable actor. In this case, actor_handoff MUST use "*" as the target: "ActorA -> *".¶
The fork hash MUST be computed as follows:¶
Result: "fork:sha256:7d3f..."¶
This ensures that modifying ANY field invalidates the fork.¶
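As a non-normative sketch, the fork hash can be computed over the token's fields as shown below. Excluding the fork_hash field itself and using canonical JSON serialization are assumptions of this sketch; the normative field list and computation are defined in this section.

```python
import hashlib
import json

def compute_fork_hash(token: dict) -> str:
    """Sketch: hash every fork-token field except fork_hash itself.

    Canonical JSON (sorted keys, compact separators) is an assumed
    serialization; excluding fork_hash avoids a circular dependency.
    """
    material = {k: v for k, v in token.items() if k != "fork_hash"}
    blob = json.dumps(material, sort_keys=True, separators=(",", ":"))
    return "fork:sha256:" + hashlib.sha256(blob.encode("utf-8")).hexdigest()
```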
The active_memory_hash field captures the cognitive or computational state at the moment of forking.¶
This field answers the question: "What was the actor thinking/processing at the moment of handoff?"¶
The capability_required field specifies what the resuming actor needs:¶
{
"capability_required": {
"deps": ["package>=version"],
"gpu": true,
"min_memory_gb": 16,
"platform": "linux/amd64",
"custom": { }
}
}
¶
On resume, the receiving actor SHOULD verify these requirements and record the result in the verification record. Missing capabilities MUST NOT prevent execution but MUST be recorded as evidence.¶
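The evidence-only capability check can be sketched as follows. The shape of the probed environment dictionary (keys gpu, memory_gb, platform) is an assumption of this sketch; what matters is that missing capabilities are recorded, never enforced.

```python
def check_capabilities(required: dict, env: dict) -> dict:
    """Evidence-only capability check (non-normative sketch).

    `env` is an assumed hardware/platform probe result, e.g.
    {"gpu": True, "memory_gb": 32, "platform": "linux/amd64"}.
    Missing capabilities are recorded as evidence; execution proceeds.
    """
    missing = []
    if required.get("gpu") and not env.get("gpu"):
        missing.append("gpu")
    if env.get("memory_gb", 0) < required.get("min_memory_gb", 0):
        missing.append("min_memory_gb")
    if "platform" in required and env.get("platform") != required["platform"]:
        missing.append("platform")
    return {"capabilities_ok": not missing, "missing": missing}
```

The returned dictionary would be merged into the L5 verification record rather than used to abort the resume.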
The fork_chain field in the UPIP stack is an ordered array of fork token references:¶
{
"fork_chain": [
{
"fork_id": "fork-abc123",
"fork_hash": "fork:sha256:...",
"actor_handoff": "A -> B",
"forked_at": "2026-03-18T14:00:00Z"
}
]
}
¶
When a process is resumed, the new UPIP stack MUST include the fork token in its fork_chain. This creates a complete audit trail of all handoffs.¶
Input: command, source_dir, intent, actor¶
Output: UPIP stack with L1-L4 populated¶
Input: Fork Token (.fork.json), command, actor¶
Output: New UPIP stack, verification record¶
Input: UPIP stack, N fragments, actor list¶
Output: N Fork Tokens of type "fragment"¶
Fragment tokens MUST include metadata fields:¶
While UPIP is transport-agnostic, this section defines the I-Poll binding for real-time fork delivery between AI agents.¶
Fork tokens are delivered via I-Poll TASK messages:¶
{
"from_agent": "<source agent>",
"to_agent": "<target agent>",
"content": "<human-readable fork summary>",
"poll_type": "TASK",
"metadata": {
"upip_fork": true,
"fork_id": "<fork_id>",
"fork_hash": "fork:sha256:<hex>",
"fork_type": "<type>",
"continuation_point": "<point>",
"actor_handoff": "<from> -> <to>",
"fork_data": { "<complete fork token JSON>" : "..." }
}
}
¶
The "upip_fork" metadata flag MUST be true to identify this message as a fork delivery.¶
The "fork_data" field MUST contain the complete fork token as defined in Section 5.1. This allows the receiving agent to reconstruct the fork token without needing the .fork.json file.¶
After processing a fork token, the receiving actor SHOULD send an ACK message:¶
{
"from_agent": "<resuming agent>",
"to_agent": "<original agent>",
"content": "FORK RESUMED_OK -- <fork_id>",
"poll_type": "ACK",
"metadata": {
"upip_fork": true,
"fork_id": "<fork_id>",
"fork_status": "RESUMED_OK",
"resume_hash": "upip:sha256:<hex>",
"resumed_by": "<agent identity>"
}
}
¶
The resume_hash is the stack_hash of the new UPIP stack created during resume.¶
The fork_status field MUST be one of "RESUMED_OK" or "RESUMED_FAIL".¶
Agents MAY implement a poll-based listener that:¶
Poll interval SHOULD be configurable. Default: 5 seconds. Implementations SHOULD support exponential backoff when the inbox is empty.¶
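A listener with the recommended backoff behavior can be sketched as follows. The pull_forks, resume_upip, and ack_fork names come from the operation examples in the appendices; wrapping them in the callables below, and the listener's exact shape, are assumptions of this sketch.

```python
import time

def fork_listener(pull_forks, handle_fork, interval=5.0, max_interval=60.0,
                  max_polls=None):
    """Poll-based fork listener with exponential backoff (sketch).

    `pull_forks` returns a (possibly empty) list of fork messages;
    `handle_fork` is assumed to resume the fork and send the ACK.
    The delay doubles while the inbox is empty and resets on delivery.
    """
    delay = interval
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        forks = pull_forks()
        if forks:
            for fork in forks:
                handle_fork(fork)   # e.g. resume_upip(...) then ack_fork(...)
            delay = interval        # reset backoff on activity
        else:
            delay = min(delay * 2, max_interval)
        if max_polls is None or polls < max_polls:
            time.sleep(delay)
```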
A UPIP stack is valid if and only if:¶
Validation MUST be performed when loading a .upip.json file and SHOULD be performed before reproduction.¶
When resuming a fork token, the following checks MUST be performed:¶
All four checks MUST be recorded in the L5 VERIFY record. Failed checks MUST NOT prevent execution (evidence over enforcement principle).¶
If fork_hash validation fails, the verification record MUST include:¶
{
"fork_hash_match": false,
"expected_hash": "fork:sha256:<original>",
"computed_hash": "fork:sha256:<recomputed>",
"tamper_evidence": true
}
¶
This creates an immutable record that tampering occurred, without preventing the process from continuing. The decision of whether to act on tamper evidence is left to the consuming application.¶
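Producing the tamper-evidence record above is mechanical; a non-normative sketch:

```python
def fork_tamper_check(expected_hash: str, computed_hash: str) -> dict:
    """Build the tamper-evidence record defined above (sketch).

    Evidence only: the caller appends this to the L5 VERIFY record and
    continues; acting on the evidence is left to the application.
    """
    match = expected_hash == computed_hash
    record = {"fork_hash_match": match}
    if not match:
        record.update({
            "expected_hash": expected_hash,
            "computed_hash": computed_hash,
            "tamper_evidence": True,
        })
    return record
```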
UPIP uses SHA-256 for all hash computations. Implementations MUST use SHA-256 as defined in [FIPS180-4]. Future versions MAY support SHA-3 or other hash functions via an algorithm identifier prefix.¶
The hash chain structure ensures that modifying any component at any layer propagates to the stack hash, providing tamper evidence for the entire bundle.¶
UPIP is deliberately designed as an evidence protocol, not an enforcement protocol. Fork validation failures do not block execution; they are recorded as evidence. This design choice reflects the reality that:¶
Applications that require enforcement SHOULD implement additional policy layers on top of UPIP evidence.¶
When fork_type is "ai_to_ai", the active_memory_hash represents the SHA-256 of the serialized AI context window. This raises unique considerations:¶
Implementations SHOULD encrypt memory blobs at rest. Implementations MUST NOT require exact memory reproduction for fork validation. The memory hash serves as evidence of state at fork time, not as a reproducibility guarantee.¶
Capability requirements in fork tokens are self-reported by the forking actor. The receiving actor SHOULD independently verify capabilities rather than trusting the requirement specification alone.¶
Package version verification SHOULD use installed package metadata. GPU availability SHOULD be verified via hardware detection, not configuration claims.¶
Fork tokens include fork_id and forked_at fields to mitigate replay attacks. Implementations SHOULD track consumed fork_ids and reject duplicate fork_ids within a configurable time window.¶
The expires_at field provides time-based expiration. Agents SHOULD set expires_at for forks that are time-sensitive.¶
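A minimal replay tracker satisfying the SHOULD above can be sketched as follows; the in-memory storage and default window length are illustrative assumptions (long-lived agents would likely need persistent storage):

```python
import time

class ForkReplayTracker:
    """Track consumed fork_ids within a time window (non-normative sketch)."""

    def __init__(self, window_seconds: float = 3600.0):
        self.window = window_seconds
        self.seen = {}  # fork_id -> consumed_at (epoch seconds)

    def consume(self, fork_id: str, now: float = None) -> bool:
        """Return True if the fork_id is fresh, False if it is a replay."""
        now = time.time() if now is None else now
        # Drop entries older than the configured window.
        self.seen = {fid: t for fid, t in self.seen.items()
                     if now - t < self.window}
        if fork_id in self.seen:
            return False
        self.seen[fork_id] = now
        return True
```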
UPIP's evidence-based design aligns with requirements from the [EU-AI-ACT], [NIST-AI-RMF], and [ISO42001]. The complete process capture at each layer provides the audit trail required by these frameworks for AI system transparency and accountability.¶
This document requests registration of:¶
Media Type: application/upip+json¶
File Extension: .upip.json¶
Media Type: application/upip-fork+json¶
File Extension: .fork.json¶
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"required": ["protocol", "version", "stack_hash",
"state", "deps", "process", "result"],
"properties": {
"protocol": {"const": "UPIP"},
"version": {"type": "string"},
"title": {"type": "string"},
"created_by": {"type": "string"},
"created_at": {"type": "string", "format": "date-time"},
"stack_hash": {
"type": "string",
"pattern": "^upip:sha256:[a-f0-9]{64}$"
},
"state": {
"type": "object",
"required": ["state_type", "state_hash"],
"properties": {
"state_type": {
"enum": ["git", "files", "image", "empty"]
},
"state_hash": {"type": "string"}
}
},
"deps": {
"type": "object",
"required": ["deps_hash"],
"properties": {
"python_version": {"type": "string"},
"packages": {"type": "object"},
"deps_hash": {"type": "string"}
}
},
"process": {
"type": "object",
"required": ["command", "intent", "actor"],
"properties": {
"command": {"type": "array", "items": {"type": "string"}},
"intent": {"type": "string"},
"actor": {"type": "string"}
}
},
"result": {
"type": "object",
"required": ["success", "exit_code", "result_hash"],
"properties": {
"success": {"type": "boolean"},
"exit_code": {"type": "integer"},
"result_hash": {"type": "string"}
}
},
"fork_chain": {
"type": "array",
"items": {"type": "object"}
}
}
}
¶
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"required": ["fork_id", "fork_type", "fork_hash",
"active_memory_hash", "forked_at"],
"properties": {
"fork_id": {"type": "string", "pattern": "^fork-"},
"parent_hash": {"type": "string"},
"parent_stack_hash": {
"type": "string",
"pattern": "^upip:sha256:"
},
"continuation_point": {"type": "string"},
"intent_snapshot": {"type": "string"},
"active_memory_hash": {
"type": "string",
"pattern": "^sha256:"
},
"memory_ref": {"type": "string"},
"fork_type": {
"enum": ["script", "ai_to_ai", "human_to_ai", "fragment"]
},
"actor_from": {"type": "string"},
"actor_to": {"type": "string"},
"actor_handoff": {"type": "string"},
"capability_required": {"type": "object"},
"forked_at": {"type": "string", "format": "date-time"},
"expires_at": {"type": "string"},
"fork_hash": {
"type": "string",
"pattern": "^fork:sha256:[a-f0-9]{64}$"
},
"partial_layers": {"type": "object"},
"metadata": {"type": "object"}
}
}
¶
An AI orchestrator (Agent A) analyzes a dataset, creates a UPIP bundle, forks it to a specialist AI (Agent B) for deep analysis, and receives the result with cryptographic proof.¶
Agent A:
  capture_and_run(["python", "scan.py"], intent="Initial scan")
  fork_upip(actor_from="A", actor_to="B", intent="Deep analysis")
  deliver_fork(fork, to_agent="B")

Agent B:
  pull_forks()
  resume_upip(fork, command=["python", "deep_analyze.py"])
  ack_fork(fork, resume_hash=stack.hash, success=True)
¶
Result: Both agents have UPIP stacks linked by fork_chain. Any auditor can verify the complete chain.¶
A command station dispatches N reconnaissance tasks to N drones. Each drone receives a fragment fork token, executes its assigned sector scan, and returns the result.¶
Command Station:
base_stack = capture_and_run(["mission_plan.py"])
for i in range(N):
fork = fork_upip(base_stack,
actor_from="command",
actor_to=f"drone-{i}",
fork_type="fragment",
metadata={"sector": sectors[i]})
deliver_fork(fork, to_agent=f"drone-{i}")
Each Drone:
fork_msg = pull_forks()
stack = resume_upip(fork, command=["scan_sector.py"])
ack_fork(fork, resume_hash=stack.hash)
Command Station:
# Verify all N results, reconstruct combined map
for ack in collect_acks():
verify(ack.resume_hash)
¶
Lab A publishes an experiment as a UPIP bundle. Lab B reproduces it independently and gets cryptographic proof that results match (or don't).¶
Lab A:
stack = capture_and_run(
["python", "train_model.py"],
source_dir="./experiment",
intent="Train model v3 on dataset-2026Q1"
)
save_upip(stack, "experiment-2026Q1.upip.json")
# Publish to journal / data repository
Lab B:
stack = load_upip("experiment-2026Q1.upip.json")
verify = reproduce_upip(stack)
# verify.match == True: exact reproduction
# verify.match == False: divergence (investigate)
¶
The UPIP protocol was developed as part of HumoticaOS, an AI governance framework built on human-AI symbiosis. UPIP builds on concepts from the TIBET evidence trail protocol and extends them into the domain of process integrity and multi-actor continuation.¶
The Fork Token mechanism was inspired by the need for cryptographic chain of custody in multi-agent AI systems, where processes move between heterogeneous actors across trust boundaries.¶