This document specifies SONAR (Statistical Observation Network for Attestation and Reach), a protocol combining three technical constructs to enable verifiable multicast distribution: (1) network-layer efficiency through native IP multicast transmission, achieving O(1) network bandwidth regardless of receiver population; (2) cryptoeconomic receiver accountability via on-chain stake deposits, Verifiable Random Functions (VRFs) for unpredictable sampling, and blockchain-recorded attestation reporting; and (3) multicast source authentication through ALTA-based asymmetric loss-tolerant packet verification, enabling real-time content integrity proofs. SONAR achieves a constant 6% bandwidth overhead by separating content authentication (broadcast to all receivers) from coverage verification (statistical sampling with German Tank Problem estimation). The protocol employs zero-knowledge proof aggregation via zkSNARKs for sample sizes exceeding 1,000 users, providing privacy protection (preventing on-chain correlation attacks), cost efficiency (an 80-90% reduction), and O(1) on-chain verification cost while scaling to populations exceeding 10^8 receivers. SONAR enables cryptographically verifiable multicast distribution without trusted intermediaries or per-receiver encryption overhead.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 6 May 2026.¶
Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Multicast distribution offers significant efficiency advantages over unicast for large-scale content delivery, reducing sender bandwidth costs by 99.99% or more for populations of 10,000 or more receivers. However, the lack of verifiable delivery mechanisms prevents widespread commercial adoption. Content providers cannot verify that infrastructure operators actually delivered content to claimed receivers, while infrastructure operators cannot prove delivery to enable billing. This bilateral trust deficit blocks the formation of liquid markets for multicast capacity.¶
Existing multicast authentication schemes ([RFC4082], [I-D.ietf-mboned-ambi], [I-D.krose-mboned-alta]) address content authentication but do not provide per-receiver coverage proof. Per-receiver encryption defeats multicast efficiency by requiring O(N) bandwidth where N is the number of receivers.¶
SONAR solves this problem through statistical sampling: rather than proving delivery to every receiver, SONAR proves delivery to a random sample with known statistical confidence. This enables verification of populations exceeding 10^7 receivers with constant bandwidth overhead.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This document uses the following terms:¶
SONAR is designed around the following principles:¶
All receivers verify content authenticity using the ALTA protocol [I-D.krose-mboned-alta]. This provides:¶
ALTA is chosen over alternatives (TESLA, AMBI) because:¶
Bandwidth overhead: Approximately 6% for typical configurations.¶
A random sample of m receivers (typically 0.1% of the population) provides attestations via the internet return path. Statistical inference yields a population coverage estimate with a confidence interval.¶
Sample selection uses Verifiable Random Function (VRF) to prevent adversarial selection. Attestations include packet statistics enabling loss rate estimation and fraud detection.¶
Broadcast overhead: Only sample selection message (320 KB per test).¶
Return path overhead: Distributed across m users (128 bps per selected user).¶
zkSNARK proofs SHOULD be employed when sample size m exceeds 1,000 users, primarily for privacy protection. For small communities (N < 100,000), individual viewing patterns become correlatable on-chain, enabling de-anonymization attacks. zkSNARKs provide aggregated proof of coverage statistics while hiding individual user attestations.¶
Additional benefits: 80-90% cost reduction via off-chain storage, constant-size verification (328 bytes regardless of m), and scalability to populations exceeding 10^8.¶
Challenge protocol enables spot-checking of individual attestations via Merkle proof while maintaining aggregate privacy.¶
SONAR employs ALTA with the following parameters:¶
Each multicast packet MUST include ALTA authentication data:¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   Sequence Number (32 bits)                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|S|  Reserved   |   MAC Count   |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
|                                                               |
+               Previous Packet Hash (256 bits)                 +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                       MAC 1 (128 bits)                        +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                       MAC 2 (128 bits)                        +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
~                     ... (additional MACs)                     ~
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                                                               +
|                 Ed25519 Signature (512 bits)                  |
+              (present if S=1, every Kth packet)               +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
~                        Content Payload                        ~
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+¶
Total overhead calculation:¶
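The per-packet arithmetic can be sketched as follows. The MAC count, signature period K, and payload size used here are illustrative assumptions chosen to match a typical video configuration, not protocol constants:¶
# Illustrative sketch only: approximate ALTA overhead per packet.
# Field sizes follow the packet format above; MAC count, signature
# period K, and payload size are assumptions, not protocol constants.

SEQ_NUM_BYTES   = 4    # Sequence Number (32 bits)
FLAGS_BYTES     = 2    # S flag, Reserved, MAC Count (16 bits)
PREV_HASH_BYTES = 32   # Previous Packet Hash (256 bits)
MAC_BYTES       = 16   # one MAC (128 bits)
SIG_BYTES       = 64   # Ed25519 signature (512 bits), every Kth packet

def alta_overhead(macs_per_packet=2, k=100, payload_bytes=1200):
    """Authentication bytes per packet as a fraction of the payload."""
    fixed = SEQ_NUM_BYTES + FLAGS_BYTES + PREV_HASH_BYTES
    macs = macs_per_packet * MAC_BYTES
    amortized_sig = SIG_BYTES / k       # signature amortized over K packets
    return (fixed + macs + amortized_sig) / payload_bytes

print(f"{alta_overhead():.1%}")         # ~5.9% with these assumed parameters
¶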
Upon receiving packet i, the receiver performs the following steps:¶
The receiver MUST buffer packets until sufficient MACs have been received for verification (depth = p packets).¶
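A minimal sketch of this buffering behavior follows. The exact verification rules are those of ALTA [I-D.krose-mboned-alta]; the verify_with_macs() callback and the eviction policy are assumptions for illustration only:¶
# Illustrative sketch only: buffer packets until the later packets that
# carry their MACs have arrived (depth = p), then verify and release.

class AltaReceiveBuffer:
    def __init__(self, depth_p, verify_with_macs):
        self.p = depth_p
        self.packets = {}                    # seq -> packet, recent window
        self.pending = set()                 # seqs awaiting verification
        self.verify_with_macs = verify_with_macs

    def on_packet(self, seq, packet):
        self.packets[seq] = packet
        self.pending.add(seq)
        for s in sorted(self.pending):
            if seq < s + self.p:
                break                        # verification window still open
            later = [self.packets[x] for x in range(s + 1, s + self.p + 1)
                     if x in self.packets]   # some may be lost; ALTA tolerates this
            self.pending.discard(s)
            if self.verify_with_macs(self.packets[s], later):
                self.deliver(self.packets[s])
            else:
                self.discard(self.packets[s])
        # Drop packets too old to help verify anything still pending.
        for old in [x for x in self.packets if x + 2 * self.p < seq]:
            del self.packets[old]

    def deliver(self, packet):   # hand authenticated payload to the application
        pass

    def discard(self, packet):   # drop packets that fail authentication
        pass
¶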
Content provider generates verifiable random sample using VRF to prevent adversarial selection:¶
VRF properties ensure:¶
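One possible selection procedure is sketched below, assuming the verified VRF output for (session ID, test window) is available as a byte string. The hash-ranking rule is an illustrative choice, not a normative algorithm:¶
# Illustrative sketch: derive the sample deterministically from a VRF
# output so any verifier can recompute it from (VRF proof, key list).
# vrf_output is assumed to be the verified VRF hash for this test window.

import hashlib

def select_sample(vrf_output: bytes, registered_keys: list[bytes], m: int) -> list[bytes]:
    """Pick m receivers by ranking registered keys under a VRF-derived hash."""
    def rank(pk: bytes) -> bytes:
        return hashlib.sha256(vrf_output + pk).digest()
    # Sorting by the derived hash yields an unpredictable but reproducible
    # ordering; the first m keys form the sample.
    return sorted(registered_keys, key=rank)[:m]
¶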
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Message Type = 0x01     |            Reserved            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Session ID (64 bits)                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                  Test Window Start (32 bits)                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   Test Window End (32 bits)                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|               Number of Selected Users (32 bits)              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                      VRF Proof (80 bytes)                     +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
~      Selected User Public Keys (32 bytes each, m total)       ~
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                                                               +
|                 Ed25519 Signature (512 bits)                  |
+                    (signs entire message)                     +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+¶
Size calculation for m=10,000 users:¶
Broadcast frequency: Once per test period P (recommended: P = 900-7200 seconds)¶
Bandwidth: 320 KB / P seconds¶
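These figures follow directly from the message layout above, as the following sketch shows (field sizes are taken from the diagram; P is the test period):¶
# Sketch: Sample Challenge size and broadcast bandwidth for m selected users.

FIXED = 4 + 8 + 4 + 4 + 4 + 80 + 64   # type/reserved, session ID, window start/end,
                                      # user count, VRF proof, signature = 168 bytes

def challenge_size_bytes(m: int) -> int:
    return FIXED + 32 * m             # one 32-byte public key per selected user

def broadcast_bps(m: int, period_s: float) -> float:
    return challenge_size_bytes(m) * 8 / period_s

print(challenge_size_bytes(10_000))   # 320,168 bytes, i.e. ~320 KB
print(broadcast_bps(10_000, 900))     # ~2,846 bps for P = 900 s
¶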
Selected users MUST respond within response window T_response (recommended: 60 seconds):¶
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Message Type = 0x02     |            Reserved            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                   User Public Key (256 bits)                  +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Session ID (64 bits)                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 Packet Min Observed (32 bits)                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 Packet Max Observed (32 bits)                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   Packets Received (32 bits)                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                Sample Content Hash (256 bits)                 +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 Response Timestamp (64 bits)                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
+                                                               +
|                 Ed25519 Signature (512 bits)                  |
+                    (signs entire message)                     +
|                                                               |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+¶
Total size: 160 bytes per attestation¶
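A sketch of attestation serialization and signing follows. The field order and sizes are those of the message format above; the sign() callable stands in for an Ed25519 implementation and is an assumption:¶
# Sketch: serialize and sign a User Attestation (160 bytes total).
# sign() is an assumed Ed25519 signing callable returning a 64-byte signature.

import struct

MSG_TYPE_ATTESTATION = 0x02

def build_attestation(user_pk: bytes, session_id: int,
                      pkt_min: int, pkt_max: int, pkt_received: int,
                      sample_hash: bytes, timestamp: int, sign) -> bytes:
    body = struct.pack(
        "!HH32sQIII32sQ",
        MSG_TYPE_ATTESTATION, 0,    # Message Type, Reserved
        user_pk,                    # User Public Key (256 bits)
        session_id,                 # Session ID (64 bits)
        pkt_min, pkt_max,           # Packet Min/Max Observed
        pkt_received,               # Packets Received
        sample_hash,                # Sample Content Hash (256 bits)
        timestamp,                  # Response Timestamp (64 bits)
    )
    return body + sign(body)        # 96-byte body + 64-byte signature = 160 bytes
¶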
Each selected user submits its attestation as a blockchain transaction:¶
Advantages: Simple, immediate verification¶
Disadvantages: High on-chain cost for large m¶
Users submit attestations to off-chain storage (e.g., Nitro Data Anchor); an aggregator then creates a zkSNARK proof:¶
On-chain size: 328 bytes (constant regardless of m)¶
Cost reduction: 99.998% vs direct submission for m=10,000¶
Given that the sender transmitted N packets and user j reports packet_max_j, the German Tank Problem estimate of the total packet count N is:¶
N_hat = max(packet_max_1, ..., packet_max_m)
        + max(packet_max_1, ..., packet_max_m) / m - 1
¶
Loss rate estimate for user j:¶
span_j = packet_max_j - packet_min_j + 1
loss_j = 1 - (packets_received_j / span_j)¶
Aggregate loss rate:¶
loss_avg = (1/m) * sum(loss_j for j in sample)¶
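Taken together, these estimators can be sketched as follows (attestation field names match the message format defined above):¶
# Sketch: coverage and loss estimators computed from the sampled attestations.

def estimate_packets_sent(packet_maxes: list[int], m: int) -> float:
    """German Tank Problem estimator of the total packets transmitted."""
    sample_max = max(packet_maxes)
    return sample_max + sample_max / m - 1

def loss_rate(packet_min: int, packet_max: int, packets_received: int) -> float:
    span = packet_max - packet_min + 1
    return 1 - packets_received / span

def aggregate_loss(attestations) -> float:
    losses = [loss_rate(a.packet_min, a.packet_max, a.packets_received)
              for a in attestations]
    return sum(losses) / len(losses)
¶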
If packet_max_j ≈ N, the user kept pace with the real-time stream (multicast reception). If packet_max_j << N, the user lagged significantly (potential unicast forwarding).¶
For sample size m from population N, coverage estimate has confidence interval:¶
Standard error: SE = sqrt(p_hat * (1 - p_hat) / m)
95% confidence: CI_95 = p_hat ± 1.96 * SE
Population coverage: Coverage = N * p_hat
Coverage_CI = N * (p_hat ± 1.96 * SE)
¶
Example: N = 10,000,000 users, m = 10,000 sample, p_hat = 0.95:¶
SE = sqrt(0.95 * 0.05 / 10000) = 0.00218
CI_95 = 0.95 ± 0.00427 = [0.946, 0.954]
Coverage = 9,500,000 ± 42,700 users¶
Minimum sample size for desired margin of error E:¶
m_min = (1.96^2 * p_hat * (1 - p_hat)) / E^2¶
For E = 0.001 (0.1% margin) with p_hat = 0.95:¶
m_min = (3.84 * 0.95 * 0.05) / 0.000001 ≈ 182,400 samples¶
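Both calculations are summarized in the following sketch (z = 1.96 for 95% confidence; the exact value of 1.96^2 is used, so the result differs slightly from the rounded figure above):¶
import math

Z_95 = 1.96

def coverage_interval(p_hat: float, m: int, population: int):
    """95% confidence interval for coverage, scaled to the population."""
    se = math.sqrt(p_hat * (1 - p_hat) / m)
    margin = Z_95 * se
    return population * (p_hat - margin), population * (p_hat + margin)

def min_sample_size(p_hat: float, margin_e: float) -> int:
    return math.ceil(Z_95 ** 2 * p_hat * (1 - p_hat) / margin_e ** 2)

print(coverage_interval(0.95, 10_000, 10_000_000))  # ~(9,457,000, 9,543,000)
print(min_sample_size(0.95, 0.001))                 # 182,476 using 1.96^2 exactly
¶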
Zero-knowledge proof aggregation via zkSNARKs SHOULD be employed when sample size m exceeds 1,000 users. This threshold is determined by three factors:¶
Implementation via Nitro Termina Technology:¶
The aggregator constructs a binary Merkle tree from the attestations:¶
Merkle proof for attestation A_j:¶
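A compact sketch of one possible construction follows. SHA-256 leaves match the statement below; the duplicate-last padding rule for odd levels is an assumption, as this document does not mandate one:¶
# Sketch: Merkle root over SHA256(A_j) leaves and a spot-check proof path.

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(attestations: list[bytes]) -> bytes:
    """Root over SHA256(A_j) leaves; odd levels duplicate the last node (assumed rule)."""
    level = [sha256(a) for a in attestations]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(attestations: list[bytes], j: int) -> list[bytes]:
    """Sibling hashes from leaf j up to the root, for challenge spot-checks."""
    level = [sha256(a) for a in attestations]
    proof, idx = [], j
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[idx ^ 1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof
¶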
The aggregator generates a proof π for statement S:¶
Statement S:
"I know m attestations {A_1, ..., A_m} such that:
1. MerkleRoot({SHA256(A_j)}) = R
2. Each A_j contains valid Ed25519 signature
3. Aggregate loss rate < threshold (e.g., 0.05)
4. Median(packet_max) > threshold (e.g., 0.95 * N)"
¶
Public inputs: {R, m, aggregate_stats, thresholds}¶
Witness: {A_1, ..., A_m, Merkle_paths, signatures}¶
Proof size: Approximately 200 bytes (constant regardless of m)¶
The smart contract verifies the zkSNARK proof:¶
Verification cost: ~100,000 gas (constant regardless of m)¶
Any party may challenge the aggregator by requesting a Merkle proof for a specific user:¶
A malicious relay might replicate content to its receivers via unicast forwarding while those receivers claim multicast reception. Detection relies on bandwidth constraints:¶
For unicast replication to B recipients:¶
Required bandwidth: B * R_content
If B * R_content > R_relay, the relay must lag.
Lag accumulates at rate: (B * R_content - R_relay)¶
Minimum test period for detection:¶
P_min = L / (alpha - 1)
where alpha = (B * R_content) / R_relay
L = acceptable loss rate
¶
Example: B=1M recipients, R_content=25 Mbps, R_relay=10 Gbps, L=0.05:¶
alpha = (1,000,000 * 25 Mbps) / 10,000 Mbps = 2,500
P_min = 0.05 / (2,500 - 1) ≈ 0.00002 seconds¶
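The same arithmetic, as a sketch with all rates expressed in Mbps:¶
# Sketch: minimum test period needed to expose unicast re-forwarding,
# using the lag model above (rates in Mbps).

def detection_period_s(recipients: int, content_mbps: float,
                       relay_mbps: float, acceptable_loss: float) -> float:
    alpha = recipients * content_mbps / relay_mbps   # required / available bandwidth
    if alpha <= 1:
        return float("inf")       # relay can keep up; lag-based detection fails
    return acceptable_loss / (alpha - 1)

print(detection_period_s(1_000_000, 25, 10_000, 0.05))  # ~2e-05 seconds
¶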
Conclusion: For typical broadcast scenarios, any test period P > 1 second provides overwhelming detection certainty. Optimal P is determined by cost-benefit tradeoff, not detection requirements.¶
Test period selection based on use case:¶
| Use Case | Test Period P | Tests/Hour | Cost/Hour* | Detection Latency |
|---|---|---|---|---|
| Live Events | 300s (5 min) | 12 | $12 | <5 min |
| Prime Time TV | 900s (15 min) | 4 | $4 | <15 min |
| Off-Peak Content | 3600s (1 hour) | 1 | $1 | <1 hour |
| ISP SLA Reporting | 7200s (2 hours) | 0.5 | $0.50 | <2 hours |
*Assumes m=10,000 users, $0.0001 per transaction¶
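The table values follow from the number of tests per hour multiplied by the per-test attestation cost under the direct-submission assumption, as sketched below:¶
# Sketch: cost per hour for the table above (direct submission assumed).

def cost_per_hour(period_s: float, m: int = 10_000, tx_cost: float = 0.0001) -> float:
    tests_per_hour = 3600 / period_s
    return tests_per_hour * m * tx_cost

for p in (300, 900, 3600, 7200):
    print(p, cost_per_hour(p))    # 12.0, 4.0, 1.0, 0.5 dollars per hour
¶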
Maximum recommended P: 7200 seconds (2 hours)¶
Rationale: Beyond 2 hours, network state staleness reduces actionable value of coverage data.¶
SONAR must resist the following adversarial behaviors:¶
Cryptographic security is complemented by economic incentives:¶
Game-theoretic analysis shows that honest participation is a Nash equilibrium when the fraud detection probability exceeds 0.001% (easily achieved through random spot checks).¶
SONAR reveals the following information:¶
SONAR does NOT reveal:¶
Privacy Attack Vector for Small Communities:¶
Without zkSNARK aggregation, direct on-chain attestations enable correlation attacks. For small communities (N < 100,000), attackers can:¶
Privacy protection weakens as community size decreases. For N=5,000, the probability of individual identification exceeds 75% through cross-referencing. For N=10,000,000, crowd anonymity provides natural protection.¶
RECOMMENDATION: zkSNARK aggregation MUST be used when sample size m > 1,000, SHOULD be used when m > 100. This protects small community viewers from de-anonymization while maintaining cryptographic coverage proof.¶
This document requests IANA to create a new registry for SONAR message types:¶
Registry Name: SONAR Message Types¶
Registration Procedure: IETF Review¶
Reference: This document¶
Initial allocations:¶
| Value | Description | Reference |
|---|---|---|
| 0x01 | Sample Challenge | Section 5.1.2 |
| 0x02 | User Attestation | Section 5.2.1 |
| 0x03 | zkSNARK Aggregated Proof | Section 6.3 |
This appendix provides a concrete deployment example for a New York City television station broadcasting to 10 million concurrent viewers.¶
Content Authentication (ALTA):¶
Statistical Sampling:¶
Broadcast Overhead:¶
Per-Hour Costs:¶
Revenue Impact:¶
The author thanks Jake Holland and Kyle Rose (Akamai) for the ALTA protocol specification and insights on multicast authentication. Thanks to the IETF MBONED working group for feedback on earlier versions of this work. Special thanks to the Blockcast team for real-world deployment experience with decentralized multicast networks.¶