This document provides an informational discussion of the conceptual relationship between remote attestation, as defined in RFC 9334 (RATS Architecture), and behavioral evidence recording mechanisms. It observes that these two verification capabilities address fundamentally different questions (attestation asks "Is this system in a trustworthy state?", while behavioral evidence asks "What did the system actually do?") and discusses how they could conceptually complement each other in accountability frameworks. This document is purely descriptive: it does not propose any modifications to the RATS architecture, define new mechanisms or protocols, or establish normative requirements. It explicitly does not define any cryptographic binding between attestation and behavioral evidence.¶
This note is to be removed before publishing as an RFC.¶
Discussion of this document takes place on the Remote ATtestation ProcedureS (RATS) Working Group mailing list (rats@ietf.org), which is archived at https://mailarchive.ietf.org/arch/browse/rats/.¶
Source for this draft and an issue tracker can be found at https://github.com/veritaschain/draft-kamimura-rats-behavioral-evidence.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 14 July 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The IETF RATS (Remote ATtestation ProcedureS) Working Group has developed a comprehensive architecture for remote attestation [RFC9334], enabling Relying Parties to assess the trustworthiness of remote systems through cryptographic evidence about their state. This attestation capability addresses a fundamental question in distributed systems: "Is this system in a trustworthy state?"¶
A related but distinct verification need exists in many operational contexts: the ability to verify, after the fact, what actions a system has actually performed. This question, "What did the system actually do?", is addressed by behavioral evidence recording mechanisms, which create tamper-evident records of system actions and decisions.¶
This document observes that these two verification capabilities address different aspects of system accountability and discusses their conceptual relationship. The document does not propose any technical integration, protocol, or cryptographic binding between these mechanisms. Any discussion of "complementary" use is purely conceptual and does not imply a composed security property.¶
This document is purely INFORMATIONAL and NON-NORMATIVE. It:¶
This document uses descriptive terms (MAY, COULD, CAN) only to indicate possibilities and observations. It does not use normative requirements language (MUST, SHOULD, SHALL), as it specifies no mandatory behaviors or requirements.¶
This document treats behavioral evidence recording systems in general terms, using VeritasChain Protocol (VCP) [VCP-SPEC] as one illustrative example among various possible approaches. Other systems such as Certificate Transparency [RFC6962] and general append-only log architectures employ similar cryptographic techniques for different purposes.¶
This document is motivated by an observation that attestation and behavioral evidence recording, while both contributing to system accountability, answer fundamentally different questions. Understanding this distinction could help system architects avoid conflating these mechanisms or assuming one substitutes for the other.¶
Remote attestation, as defined in [RFC9334], enables a Relying Party to assess whether an Attester is in a trustworthy state at the time of attestation. When attestation succeeds, the Relying Party gains assurance that:¶
However, attestation alone does NOT establish:¶
Behavioral evidence recording mechanisms create tamper-evident records of system actions and decisions. When properly implemented, such mechanisms could provide:¶
However, behavioral evidence recording alone does NOT establish:¶
When an observer has access to both valid Attestation Results for a system AND a verifiable behavioral evidence trail from that system, the observer could potentially reason:¶
Critical Limitation: This reasoning is purely conceptual and informal. This document explicitly does NOT claim that considering attestation and behavioral evidence together creates any composed security property. Significant trust gaps remain (see Section 1.2.4).¶
Even when both attestation and behavioral evidence are available, significant trust gaps remain that this document does not address:¶
These gaps would need to be addressed by specific technical mechanisms not defined in this document. Deployments considering both attestation and behavioral evidence should carefully analyze their threat model and not assume that informal complementarity provides strong security guarantees.¶
The following examples illustrate domains where both capabilities could be relevant. These examples are illustrative only and do not constitute normative guidance:¶
This document reuses terminology from the RATS Architecture [RFC9334] without modification or extension. The following terms are used exactly as defined in that document:¶
The following terms are used in this document to describe behavioral evidence concepts. These terms are grounded in general systems and security literature rather than being newly defined by this document.¶
Note on "Audit" Terminology: The term "audit" in this document follows common systems engineering usage (e.g., "audit log", "audit trail") referring to chronological records of system events maintained for post-hoc examination. This usage is consistent with standard security terminology as found in sources such as NIST SP 800-92 (Guide to Computer Security Log Management) [NIST-SP800-92] and general operating systems literature. It does not imply regulatory auditing, financial auditing, or compliance certification in any jurisdiction-specific sense.¶
This section describes an observational framework for understanding how attestation and behavioral evidence recording address different verification needs. This framework is purely conceptual and does not define any technical integration or protocol.¶
The RATS architecture [RFC9334] addresses trustworthiness assessment through remote attestation. At its core, attestation answers questions about system state:¶
These questions are fundamentally about the properties and characteristics of a system at a point in time or across a measurement period. The RATS architecture provides mechanisms for generating, conveying, and appraising Evidence that enables Relying Parties to make trust decisions about Attesters.¶
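As a purely illustrative, non-normative sketch, the following Python fragment shows the kind of appraisal step a Verifier might perform: comparing measurement claims carried in Evidence against known-good Reference Values to produce an Attestation-Result-like verdict. The data structures, field names, and digests are hypothetical and are not defined by [RFC9334] or by this document.¶

```python
# Illustrative sketch only: a toy Verifier appraisal step that compares
# claimed measurements in Evidence against known-good Reference Values.
# The structures and field names here are hypothetical; they are not
# defined by RFC 9334 or by this document.

REFERENCE_VALUES = {            # hypothetical known-good measurements
    "bootloader": "a3f1...",    # digests truncated for readability
    "kernel": "9c2e...",
}

def appraise(evidence: dict) -> dict:
    """Return a minimal Attestation-Result-like verdict for the given Evidence."""
    mismatches = [
        component
        for component, digest in evidence.get("measurements", {}).items()
        if REFERENCE_VALUES.get(component) != digest
    ]
    return {
        "verdict": "affirming" if not mismatches else "contraindicated",
        "mismatched_components": mismatches,
    }
```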
Key characteristics of attestation as defined by RATS:¶
Behavioral evidence recording mechanisms address a different category of verification need. Rather than assessing system state, they record what a system has done:¶
These questions are fundamentally about system behavior over time. Verifiable behavioral evidence mechanisms could provide ways to record, preserve, and verify the integrity of behavioral records, enabling after-the-fact examination of system actions.¶
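As a non-normative illustration of the general technique, the following Python sketch shows a hash-chained append-only log, in which each entry commits to the digest of the previous entry so that in-place modification of any recorded action is detectable on verification. The structure shown is a generic example, not a format defined by this document.¶

```python
# Illustrative sketch only: a generic hash-chained, append-only record of
# system actions. Each entry's hash covers the previous entry's hash, so
# altering or removing an earlier entry breaks verification.
import hashlib
import json

def append_entry(log: list, action: str) -> None:
    """Append an action record whose hash commits to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"action": action, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "entry_hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any in-place modification breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"action": entry["action"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True
```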
Key characteristics of behavioral evidence systems (in general terms):¶
As an illustrative example, VCP [VCP-SPEC] defines audit trails using three integrity layers: event integrity (hashing), structural integrity (Merkle trees), and external verifiability (digital signatures and anchoring). Certificate Transparency [RFC6962] uses similar cryptographic techniques for a different purpose (public logging of certificates). Other behavioral evidence systems could employ different mechanisms.¶
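The following non-normative Python sketch illustrates the first two of these layers in generic form: per-event hashing and a Merkle root committing to all recorded events. Signing or externally anchoring the root (the third layer) is noted but omitted. This sketch does not reflect the actual VCP or Certificate Transparency data formats.¶

```python
# Illustrative sketch only: per-event hashing ("event integrity") and a
# Merkle root over all event digests ("structural integrity"). The root
# could then be signed or anchored externally; that step is omitted here.
# This is not the VCP wire format or any normative construction.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(event_hashes: list) -> bytes:
    """Pairwise-hash leaf digests until a single root remains."""
    if not event_hashes:
        return sha256(b"")
    level = list(event_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Layer 1 (event integrity): hash each recorded event.
events = [b"order accepted", b"order routed", b"order filled"]
leaves = [sha256(e) for e in events]
# Layer 2 (structural integrity): commit to all events with a single root.
root = merkle_root(leaves)
```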
The distinction between attestation and behavioral evidence can be understood as a separation of concerns:¶
| Aspect | Attestation (RATS) | Behavioral Evidence |
|---|---|---|
| Primary Question | Is this system trustworthy? | What did this system do? |
| Focus | System state | System behavior |
| Temporal Scope | Point-in-time or measurement period | Historical record of actions |
| Primary Use Case | Trust decision before/during interaction | Post-hoc examination and accountability |
| Trust Anchor | Hardware/software roots of trust | Logging infrastructure integrity |
This separation suggests that attestation and behavioral evidence address different needs. This document observes that neither mechanism fully substitutes for the other, but explicitly does not claim that using both together creates a composed security property (see Section 7).¶
This section discusses the conceptual relationship between attestation and behavioral evidence. All discussion in this section is observational and does not define any protocol, binding, or security composition.¶
A key observation is that attestation and behavioral evidence answer different questions:¶
Neither question's answer implies the other's:¶
Attestation and behavioral evidence may operate on different temporal rhythms:¶
One conceptual model involves attestation confirming system state at discrete moments, while behavioral evidence records actions between those moments. However, this document explicitly notes that no cryptographic mechanism is defined to bind these two types of evidence together. The "gap" between attestation events represents a period during which system state could change without detection.¶
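The following non-normative Python sketch makes this limitation concrete: behavioral records can be informally correlated with attestation events by timestamp, but such correlation is not a cryptographic binding, and a record that falls between two attestation events says nothing verifiable about system state at the moment the action occurred. The field names are hypothetical.¶

```python
# Illustrative sketch only: informal timestamp correlation between
# behavioral log entries and attestation events. As the text above
# stresses, this is NOT a cryptographic binding; the "gap" after the
# most recent attestation is a period of unknown system state.

def correlate(attestation_times: list, log_entries: list) -> list:
    """Label each log entry with the most recent prior attestation time, if any."""
    results = []
    for entry in sorted(log_entries, key=lambda e: e["time"]):
        prior = [t for t in attestation_times if t <= entry["time"]]
        results.append({
            "entry": entry,
            "last_attested_at": max(prior) if prior else None,  # gap: state unknown
        })
    return results
```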
This document explicitly does NOT define any cryptographic binding between Attestation Results and behavioral evidence records. Such a binding would require:¶
None of these are provided by this document. Any deployment considering both mechanisms should not assume that informal correlation provides the security properties that a formal cryptographic binding might offer.¶
This section provides a purely illustrative, non-normative example of how attestation and behavioral evidence could conceptually relate in a hypothetical scenario. This example:¶
Consider a hypothetical automated decision-making system:¶
This example is purely conceptual. Actual deployments would require careful security analysis specific to their threat model.¶
To maintain clarity about this document's limited scope, the following items are explicitly out of scope and are NOT addressed:¶
This document does NOT:¶
This document does NOT:¶
This document does NOT:¶
The sole purpose of this document is to observe and explain the conceptual relationship between attestation and behavioral evidence as distinct mechanisms addressing different verification questions.¶
This document is purely informational and does not define any protocols or mechanisms. However, because it discusses the conceptual relationship between two security-relevant mechanisms, the following security considerations are important.¶
This document explicitly does NOT claim that considering attestation and behavioral evidence together creates any composed security property. In particular:¶
Readers are cautioned against assuming that having both attestation and behavioral evidence provides comprehensive security. Specifically:¶
At a conceptual level (without defining any specific protocol), deployments considering both attestation and behavioral evidence should be aware of risks including:¶
These considerations are presented at a conceptual level to inform threat modeling. This document does not define mechanisms to address these risks.¶
The following security considerations apply independently:¶
This document does not alter the RATS threat model as defined in [RFC9334]. It introduces no new attack surfaces to the RATS architecture. Any deployment-specific threat analysis should consider attestation and behavioral evidence as separate mechanisms with independent trust assumptions and failure modes.¶
This document has no IANA actions.¶
The author thanks the RATS Working Group for developing the comprehensive attestation architecture documented in RFC 9334. This document builds upon and respects the careful design work reflected in that architecture. The author also thanks reviewers who provided feedback emphasizing the importance of clearly distinguishing conceptual observations from security claims.¶