The Workshop on Age-Based Restrictions on Content Access was convened by the Internet Architecture Board (IAB) and World Wide Web Consortium (W3C) in October 2025. This report summarizes its significant points of discussion and identifies topics that may warrant further consideration and work.¶
Note that this document is a report on the proceedings of the workshop. The views and positions documented in this report are those of the workshop participants and do not necessarily reflect IAB or W3C views and positions.¶
This note is to be removed before publishing as an RFC.¶
The latest revision of this draft can be found at https://intarchboard.github.io/draft-iab-agews-report/draft-iab-agews-report.html. Status information for this document may be found at https://datatracker.ietf.org/doc/draft-iab-agews-report/.¶
Source for this draft and an issue tracker can be found at https://github.com/intarchboard/draft-iab-agews-report.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 13 July 2026.¶
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.¶
Regulators and legislators around the world are increasingly restricting what can be made available to young people on the Internet, in an effort to reduce the online harms that they encounter.¶
In October 2025, the Internet Architecture Board and the World Wide Web Consortium convened the Workshop on Age-Based Restrictions on Content Access. It brought together technologists, civil society advocates, business interests, and government stakeholders to discuss the nuances of the introduction of such measures.¶
The primary focus was "to perform a thorough examination of the technical and architectural choices that are involved in solutions for age-based restrictions on access to content", with a goal of "build[ing] a shared understanding of the properties of various proposed approaches."¶
See the workshop announcement [ANNOUNCE] for details. This report summarizes the proceedings of the workshop.¶
This document is a report on the proceedings of the workshop. The views and positions documented in this report were expressed during the workshop by participants and do not necessarily reflect the IAB's or W3C's views and positions, nor those of all participants.¶
Furthermore, the content of the report comes from presentations given by workshop participants and notes taken during the discussions, without interpretation or validation. Thus, the content of this report follows the flow and dialogue of the workshop but does not attempt to capture a consensus.¶
Participants agreed to conduct the workshop under the Chatham House Rule [CHATHAM-HOUSE], so this report does not attribute statements to individuals or organizations without express permission. Most submissions to the workshop were public and thus attributable; they are used here to provide substance and context.¶
Appendix B lists the workshop participants, unless they requested that this information be withheld.¶
The IAB/W3C workshop on Age-Based Restrictions on Content Access brought together a diverse group of participants from technical, policy, regulatory, and research communities to examine how the Internet might accommodate demands for age-based access controls. Over three days, discussions traversed the intersection of technology, governance, human rights, and social expectations, with a recurring emphasis on privacy, accountability, and the preservation of the open architecture of the Internet.¶
The workshop began with a framing session that emphasized the Internet's original design as a universal, non-segmented space. Participants observed that the web does not natively distinguish between adult and child users, and that governments are creating regulatory environments that shift responsibility from parents and individuals to service providers. The scope of discussion was tightly defined: not the morality or policy of age restrictions, but the technical, architectural, and human-rights implications of enforcing them. The challenge, many participants agreed, lay in building mechanisms that are accurate, respect privacy, maintain global interoperability, and avoid creating infrastructure that could be repurposed for censorship or surveillance.¶
Early exchanges focused on terminology and scope: whether "age verification" should be understood narrowly as identity checking or more broadly as "age assurance." The conversation also touched on the diversity of cultural expectations about parental authority and the variety of legal frameworks emerging across jurisdictions. Some participants warned of "slippery slope" effects, where mechanisms designed for age checks might evolve into tools for broader identity enforcement. Several noted that while liability drives many policy decisions, technical design should aim to minimize harm and avoid over-centralization. The question of who bears responsibility for child safety -- platforms, regulators, or device manufacturers -- surfaced repeatedly.¶
Human-rights principles were foregrounded as a basis for evaluation. Privacy was discussed not only in terms of data protection law (including techniques like minimization) but as protection from unwanted exposure or interaction. Freedom of expression and opinion was considered, particularly how adults and children both have rights to communicate, access information and associate, free from the chilling effects of surveillance or discrimination. The group revisited long-standing Internet design tenets, such as decentralization and the end-to-end principle, asking how they should inform modern architectures that could easily drift toward central control. Some argued that successful systems must remain open, interoperable, and reversible, while others cautioned that any solution -- even a well-intentioned one -- would inevitably reshape the Internet's social and economic balance.¶
Technical sessions explored a spectrum of enforcement models: service-based, network-based, and device-based. Service-enforced systems place the compliance burden on websites, risking fragmentation and user fatigue from repeated verification flows. Network-based filtering -- already common in some jurisdictions -- offers broad coverage but limited accuracy and significant privacy trade-offs. Device-enforced models, in which operating systems mediate access based on a one-time verification, were praised for their potential usability and consistency but criticized for potential concentration of power among major vendors. Many participants noted that a pluralistic approach is more likely to be successful, recognizing that no single architecture can meet all requirements equally across jurisdictions.¶
Privacy-enhancing technologies (PETs) such as anonymous credentials and zero-knowledge proofs were discussed as promising, though not necessarily sufficient, tools. In particular, PETs do not address all privacy concerns, nor do they address wider issues around access to underlying sources of truth. Furthermore, some participants cautioned that PETs cannot prevent circumvention or censorship, are relatively untested, and that open-sourcing code does not automatically make systems trustworthy. A recurring concern was that while credential-based verification may work well in countries with unified ID systems, it risks excluding people without access to such credentials and entrenching inequalities.¶
Discussions on parental controls and network operator roles highlighted practical tensions between effectiveness, usability, and user rights. Although some participants saw value in layered approaches combining device, service, and network measures, others noted the high complexity and low adoption of parental-control tools even where available. The workshop also revisited the ethical dimension: whether designing better tools might unintentionally legitimize over-broad or intrusive regulation.¶
By the third day, participants reflected on the need for collaboration across disciplines and institutions. Many acknowledged that while complete solutions are unlikely in the short term, articulating shared vocabulary, architectural roles, and evaluation properties was an essential foundation. There was broad agreement that future work should map risks against possible architectures, document trade-offs in neutral terms, and communicate clearly with policymakers to prevent outcomes that could undermine Internet openness.¶
The meeting closed with reflections on what process might be followed to take proposed solutions through a standards process. Both IETF and W3C representatives outlined how exploratory work might proceed within their respective frameworks, stressing that standardization would require consensus, open participation, and time.¶
While the workshop did not produce specific standards proposals or take positions on the advisability of regulatory proposals, it was suggested that leadership bodies, including the Internet Architecture Board and the W3C Technical Architecture Group, could be appropriate venues for such statements.¶
While the current status quo -- piecemeal, opaque, and often privacy-eroding -- was unsatisfactory to most participants, many cautioned that hasty solutions could entrench worse problems. This led to growing recognition that protecting children online must not come at the expense of the Internet's foundational freedoms, and that sustained, multi-stakeholder collaboration is the only viable path forward.¶
This section highlights aspects of discussion at the workshop that appeared to be most impactful.¶
Many participants remarked that the workshop allowed them to appreciate perspectives that they had not fully considered previously. Although several substantial efforts have included industry, civil society, government, and technologists, collaboration across all stakeholders appears to be rare.¶
This was especially evident when considering the involvement of the technical community. Although there have been a number of consultations by governments and other bodies, involvement of the technical community is often limited to participation by the policy representatives of tech companies. This can lead to an underappreciation of the architectural impact and related harms of the design decisions made.¶
Architectures that are effective for these goals and less likely to have profoundly harmful consequences may require the cooperation of multiple actors fulfilling the different roles described in Section 3.2. To that end, standardization may be especially important for interoperable, collaborative development of architectures involving both servers and clients.¶
Some participants also noted that approaches where liability rests only on one party -- for example, a content or platform provider -- are unlikely to lead to the desired results, because this creates disincentives for the cooperation that is necessary for meaningful reduction of harms. An approach that considers the roles of the young, their parents, device manufacturers, operating system vendors, content providers, and society overall was believed to be more likely to succeed.¶
One of the more substantive discussions on architecture involved presentations on the functional roles involved in any system [HANSON].¶
Four key roles were identified:¶
The verifier role determines whether a person falls into a target age range.¶
The enforcer is responsible for ensuring that a person who does not satisfy the verifier is unable to access age-restricted content or services.¶
The policy selector is responsible for determining which policies should apply to the user, based on their jurisdiction, status, or preferences.¶
The rater is responsible for determining whether content or services require age restrictions and the age ranges that apply.¶
In addition, it was noted that ratings and laws are often limited by geography or jurisdiction, so it is often necessary for services to first identify the applicable jurisdiction. It was generally accepted that this function often uses IP geolocation mappings, despite acknowledged limitations around accuracy and susceptibility to circumvention using VPNs.¶
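The following sketch (in Python, purely illustrative and not a workshop output) shows one way these four roles and the jurisdiction-identification step might be expressed as separable interfaces; all names and signatures are invented for exposition.¶

   # Hypothetical interfaces for the roles identified in [HANSON];
   # every name here is invented for illustration only.
   from dataclasses import dataclass
   from typing import Optional, Protocol

   @dataclass
   class AgeRange:
       minimum: int                   # AgeRange(18) models "18 or over"
       maximum: Optional[int] = None

   class Subject:                     # stands in for however a person is known
       pass

   class Verifier(Protocol):
       def in_range(self, subject: Subject, target: AgeRange) -> bool:
           """Determine whether a person falls into a target age range."""

   class PolicySelector(Protocol):
       def applicable_range(self, jurisdiction: str) -> Optional[AgeRange]:
           """Determine which policy applies, e.g., based on jurisdiction."""

   class Rater(Protocol):
       def required_range(self, content_id: str) -> Optional[AgeRange]:
           """Decide whether content is age-restricted, and for which range."""

   class Enforcer:
       """Denies access to anyone who does not satisfy the verifier."""
       def __init__(self, verifier: Verifier, selector: PolicySelector,
                    rater: Rater) -> None:
           self.verifier, self.selector, self.rater = verifier, selector, rater

       def allow(self, subject: Subject, content_id: str,
                 jurisdiction: str) -> bool:
           # The jurisdiction is typically identified first, often via IP
           # geolocation, with the accuracy and VPN caveats noted above.
           target = (self.rater.required_range(content_id)
                     or self.selector.applicable_range(jurisdiction))
           return target is None or self.verifier.in_range(subject, target)

Separating the roles in this way makes it easier to see which actor must be trusted with which information, a question that recurs throughout this report.¶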
Early discussions highlighted how not all participants used the same terminology when referring to different activities or functions. There was a recognition of the value of shared language, and some participants pointed to [ISO-IEC-27566-1], which establishes key terms, including:¶
Age assurance is an umbrella term for technology that provides some entity with information about the age of a person. This is understood to encompass multiple classes of specific methods, including age verification, age estimation, and age inference. Age assurance does not need to result in a specific age; age ranges are often preferred as these can have better privacy properties.¶
Age verification refers to gaining high assurance that a person is within a given age range. Strong assurances are often tied to official or governmental documentation, so age verification can involve the use of government-issued digital credentials.¶
Age estimation applies statistical techniques to physical or behavioral characteristics of a person to produce a probabilistic judgment of how old someone is or whether their age is in a target range. A variety of techniques are used, the most common being facial age estimation, which uses machine learning models to estimate how old a person is based on still or moving images of their face.¶
Age inference draws on external data sources to determine whether a person fits a given age range. This method requires identification information, such as an email address or phone number, to find relevant records. For example, evidence of online activity prior to a certain date in the past might support the view that a person is older than a target threshold.¶
Age gating is the process of restricting access to something based on the age of the person requesting access.¶
Relating these functions to the roles described in Section 3.2, all age assurance types fit the "verifier" role, while age gating corresponds to the "enforcer" role.¶
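As a minimal sketch of that mapping (with invented names; the standard defines only terminology), the different assurance methods can be seen as producing the same kind of verifier output, with age gating as the enforcement step. Note that the output is an age range plus a confidence, not an exact age, reflecting the privacy preference described above.¶

   from dataclasses import dataclass
   from enum import Enum

   class Method(Enum):
       VERIFICATION = "verification"  # high assurance, e.g., a credential
       ESTIMATION = "estimation"      # probabilistic, e.g., facial analysis
       INFERENCE = "inference"        # drawn from external records

   @dataclass
   class AssuranceResult:             # the "verifier" role's output
       method: Method
       over: int                      # asserted lower bound, e.g., over=18
       confidence: float              # 1.0 for verification; lower otherwise

   def age_gate(result: AssuranceResult, threshold: int,
                min_confidence: float) -> bool:
       """The 'enforcer' function, applicable to any assurance method."""
       return result.over >= threshold and result.confidence >= min_confidence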
Privacy was a recurrent theme at the workshop, but it was clear that there are multiple considerations at play when talking about privacy. The question of privacy was often caught up in discussions of trust, where approaches each depend on different sorts of trust between the different actors.¶
Participants identified privacy as important to maintaining trust in any system that involves age assurance or age gating.¶
Where private information is used by the actors in a proposed architecture, those actors might need to be trusted to handle that information responsibly. In that approach, safeguards such as the prompt disposal of any personal information -- a practice that many age verification providers promise -- become a core part of what might allow people to trust that system.¶
Several people observed that the sort of trust that is asked of people might not correspond with the role that certain entities play in their lives. This depends on context: "adult" content providers generally serve anonymous users, whereas social media services often already hold a lot of personal information about their users.¶
In either case, users might have no prior knowledge of -- or trust in -- providers that are contracted to provide age assurance functions. It was observed that one likely consequence of some arrangements is to train people to become more trusting of unfamiliar sites that ask for personal information.¶
Alternatively, it might be that trust in the system is not vested in actors, but instead the system as a whole. This is possible if no information is made available to different actors, removing the need to trust their handling of private information. For this to be achievable, the use of zero-knowledge proofs or similar cryptographic techniques was seen as a way to limit what each entity learns. Some participants noted, however, that these techniques do not address circumvention or censorship risks, still introduce new information into the ecosystem, and may concentrate trust in particular software implementations.¶
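The following fragment sketches only the data-minimization property under discussion: the service learns a single signed bit, never a birth date or identity. It deliberately uses an ordinary HMAC rather than a zero-knowledge proof, so it is neither unlinkable nor asymmetric; real anonymous-credential systems remove both limitations. All names are hypothetical.¶

   import hashlib, hmac, json, secrets
   from datetime import date

   ISSUER_KEY = secrets.token_bytes(32)  # held by the age assurance provider

   def issue_token(birth_date: date, threshold: int) -> bytes:
       """Issuer consults the source of truth, then signs only a boolean."""
       today = date.today()
       age = today.year - birth_date.year - (
           (today.month, today.day) < (birth_date.month, birth_date.day))
       claim = json.dumps({"over": threshold, "ok": age >= threshold}).encode()
       tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest().encode()
       return claim + b"." + tag

   def verify_token(token: bytes, threshold: int) -> bool:
       """Service checks the tag; it learns only the single bit."""
       claim, _, tag = token.rpartition(b".")
       expected = hmac.new(ISSUER_KEY, claim,
                           hashlib.sha256).hexdigest().encode()
       if not hmac.compare_digest(expected, tag):
           return False
       payload = json.loads(claim)
       return payload["over"] == threshold and payload["ok"]

   print(verify_token(issue_token(date(2000, 1, 1), 18), 18))  # True

Even in this simplified form, the shape of the exchange shows what "the system as a whole" must be trusted to do: the issuer sees identity but not browsing, and the service sees a claim but not identity.¶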
Other aspects of trust were considered equally important from different perspectives. Services that rely on an independent age assurance provider need to trust that the provider makes an accurate determination of age, at least to the extent that they might be held liable in law. They also need to trust that the provider respects privacy, lest the use of a low-quality provider create other forms of liability or drive away potential customers.¶
A recurrent theme in discussion was the insufficiency of any particular age assurance technique in ensuring that people are not unjustifiably excluded. All age assurance methods discussed fail to correctly classify some subset of people:¶
Age verification that depends on government-issued credentials will fail when people do not hold accepted credentials. This includes both people who hold no credentials at all and people whose credentials are not among those recognized.¶
Age estimation produces probabilistic information about age that can be wrong by some number of years, potentially excluding people near threshold ages. This manifests as both false acceptance (people outside the target age range being accepted) and false rejection (people inside the target age range being rejected). Tightening a system to minimize the false acceptance rate necessarily increases the number of false rejections; see the sketch after this list.¶
Age inference techniques can fail due to lack of information.¶
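A toy numeric illustration of the false acceptance/false rejection trade-off (using an assumed error model, not measured data): if estimates carry a few years of error, raising the estimated-age cutoff to admit fewer under-18s necessarily rejects more adults near the boundary.¶

   import random
   random.seed(1)

   def estimate(true_age: float) -> float:
       # Assume an unbiased estimator with roughly 3 years of error.
       return true_age + random.gauss(0, 3)

   population = [random.uniform(10, 30) for _ in range(100_000)]
   for cutoff in (18, 21, 25):  # admit only if estimated age >= cutoff
       results = [(age, estimate(age) >= cutoff) for age in population]
       minors = [ok for age, ok in results if age < 18]
       adults = [ok for age, ok in results if age >= 18]
       far = sum(minors) / len(minors)                   # minors accepted
       frr = sum(not ok for ok in adults) / len(adults)  # adults rejected
       print(f"cutoff {cutoff}: FAR {far:.1%}, FRR {frr:.1%}")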
Discussion often came back to an approach that is increasingly recommended for use in age verification, where multiple methods are applied in series. Checks with lower friction -- those that require less active participation from people -- or that are less invasive of privacy are attempted first. Successive checks are only used when a definitive result cannot be achieved.¶
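A minimal sketch of that cascaded ("waterfall") arrangement, with invented stand-ins for real methods: checks are ordered from least to most intrusive, and each may either answer definitively or defer to the next.¶

   from typing import Callable, Optional

   # Each check returns True/False when definitive, or None to defer.
   Check = Callable[[], Optional[bool]]

   def cascade(checks: list[Check]) -> Optional[bool]:
       for check in checks:  # lowest friction and invasiveness first
           result = check()
           if result is not None:
               return result
       return None           # no definitive result; needs a fallback path

   ordered_checks: list[Check] = [
       lambda: None,  # age inference from records: often inconclusive
       lambda: None,  # facial age estimation: defers near the threshold
       lambda: True,  # credential verification: definitive but invasive
   ]
   print(cascade(ordered_checks))  # True, but only after escalating twice

As the next paragraphs note, the escalation itself carries costs: each step a person falls through adds friction and exposes more information.¶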
Some participants noted that inconsistent friction and invasiveness create a different kind of discrimination, one that can exacerbate existing adverse discrimination. For example, the accuracy of age estimation for people with African heritage is often significantly lower than for those with European ancestry [FATE]. This is attributed to the models used being trained and validated using datasets that have less coverage of some groups. People who are affected by this bias are more likely to need to engage with more invasive methods.¶
One consequence of having multiple imperfect techniques is the need to recognize that any system will be imperfect. That cuts in both directions:¶
Some people will never be able to satisfy age assurance checks and will therefore be excluded by strict assurance mandates. Here, discussions acknowledged that purely technical systems are likely inadequate.¶
Some people who should be blocked from accessing content or services will find ways to circumvent restrictions. In this context, the term "advanced persistent teenager" was recognized as characterizing the nature of the "adversary": individuals who are considered too young to access content, but who are highly motivated, technically sophisticated, and have time to spare.¶
Offering more choices to people can improve privacy because they get to choose the method that suits them. However, when a chosen method fails, having to engage with additional methods has a higher privacy cost.¶
Some participants argued that accepting these risks is necessary in order to gain any of the benefits that age-based restrictions might confer. However, it was clear that other participants were unwilling to accept potential impositions on individual rights in light of the insufficiency of restrictions in providing meaningful protection; see Section 3.7.¶
How the identified roles (see Section 3.2) are arranged into architectures generated some of the more substantive discussion. [JACKSON] describes some of the alternatives, along with some of the implications that arise from different arrangements.¶
Throughout this discussion, it was acknowledged that active deployments tended to fall into a common basic pattern. Several participants noted that this is a somewhat natural consequence of some of the constraints that actors are subject to.¶
An observation was made that laws often seek to designate a single entity as being responsible for ensuring that age restrictions are effective. That lawmakers feel the need to designate a responsible entity is itself a consequence of constraints on how laws function, but it creates further constraints in turn.¶
Another constraint identified was the need for specialist expertise to administer the multiple different age assurance techniques; see Section 3.5. This means that there is a natural tendency for services to contract with specialist age assurance providers.¶
Some of the proposed architectures were better able to operate under these constraints. Others required greater amounts of coordination, further emphasizing the importance of collaboration identified in Section 3.1.¶
In discussion of the constraints on different architectures, it was common for participants to point to a particular aspect of a given approach as carrying risks. Indeed, the final reckoning of risks produced a long list of potential issues that might need mitigation.¶
Architectures are not equally vulnerable to different risks, so a more thorough analysis is needed to identify how each risk applies to each approach. An analysis that considers the constraints and assumptions necessary to successfully deploy different architectures is a contribution that would likely be welcomed by the community.¶
Experts in child safety frequently acknowledged that restricting access to selected content cannot be assumed to be sufficient. Keeping children appropriately safe while also preparing them for the challenges they will face in their lifetimes is a massively complex task.¶
A recurrent theme was the old maxim, "it takes a village to raise a child". This concept transcends cultural boundaries, and participants recognized the roles of parents, guardians, educators, governments, and online services in creating an environment in which children can thrive and grow.¶
Content and service restrictions are likely only a small part of a suite of actions that combine to provide children not only with protection, but also with support and encouragement. This theme was raised several times, despite the goal of the discussion being to explore technical and architectural questions.¶
Restrictions are necessarily binary and lacking in nuance. Though questions of what to restrict were out of scope for the workshop, discussions often identified subject matter that highlighted the challenges inherent in making simplistic classifications. Participants acknowledged the importance of the role of the adults who support children in their life journey. For example, on the subject of eating disorders, which can be challenging to classify, participants pointed to the importance of being able to recognize trends and inform and engage responsible adults. Ultimately, each child has their own challenges and the people around them are in the best position to provide the support that best suits the child.¶
The concept of age-appropriate design was raised on several occasions. This presents significant privacy challenges in that it means providing more information about age to services. However, it was recognized that there are legal and moral obligations on services to cater to the needs of children of different age groups. This is a more complex problem space than binary age restrictions, as it requires a recognition of the different needs of children as they get older.¶
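As a small sketch of the difference (all band names and policy values invented): an age-appropriate design consumes a coarse, privacy-preserving age band and varies the experience, rather than making one binary allow/deny decision.¶

   # Hypothetical mapping from coarse age bands to experience settings;
   # the bands and values are examples only, not recommendations.
   EXPERIENCE_BY_BAND = {
       "under-13": {"dms": "off", "recommendations": "curated"},
       "13-15": {"dms": "contacts-only", "recommendations": "limited"},
       "16-17": {"dms": "on", "recommendations": "standard"},
       "adult": {"dms": "on", "recommendations": "standard"},
   }

   def experience_for(band: str) -> dict:
       return EXPERIENCE_BY_BAND[band]  # more granular than allow/deny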
Age verification has a significant potential security impact upon the Internet; see Section 3.4.¶
This document has no IANA actions.¶
The following sections cover each of the main topics of discussion sessions.¶
We will launch the workshop with a greeting, a round of introductions, and an explanation of the terms of engagement, background, goals, and non-goals of the workshop.¶
Successfully deploying age restrictions at Internet scale involves many considerations and constraints. We will explore them at a high level, in turn. The goal is to agree within the group on the scope of topics that the workshop will seek to address.¶
Architectural principles give us a framework for evaluating additions and changes to the Internet. Technical principles are subject to a number of other considerations, in particular human rights principles. We will review the principles that might apply to age-based restrictions, explaining their function, their impact, and how they are applied, including human rights impacts such as:¶
Privacy and Security¶
Safety and efficacy¶
Censorship and Access¶
Access to the Internet¶
Freedom of Expression¶
And effects on the Internet and Web architecture, such as:¶
We now want to look at some of the higher-level considerations that apply regardless of approach. We will look at some different perspectives on how to think of the overall problem. Discussion will seek to find how those perspectives can be shaped to guide choices.¶
The Internet standards community is in a unique position to make controlled changes to the architecture of the Internet, and so there are multiple ways and places to deploy age restrictions. We will examine the options, with an eye to the deployment properties of each location and configuration, as related to the architectural principles. In particular, we will consider the establishment of new roles as well as the use of existing ones.¶
There are several active and proposed systems for age restriction on the Internet. We will review them from the perspective of their interaction with the architectural principles, their potential impacts, and with consideration of the enforcement options, including:¶
We will follow up on incomplete discussions and revisit architectural learnings.¶
We will summarize what we have discussed and learned thus far.¶
Attendees of the workshop are listed with their primary affiliation. Attendees from the program committee (PC), the Internet Architecture Board (IAB), and W3C Technical Architecture Group (TAG) are also marked.¶
Steve Bellovin¶
Hadley Beeman, TAG (PC)¶
Matthew Bocci, IAB (Observer)¶
Christian Bormann, SPRIND¶
Marcos Cáceres, TAG (Observer)¶
Andrew Campling, 419 Consulting¶
Sofía Celi, Brave¶
David Cooke, Aylo¶
Iain Corby, Age Verification Providers Association¶
Dhruv Dhody, IAB (Observer)¶
Nick Doty, Center for Democracy and Technology (PC)¶
Sarah Forland, New America Open Technology Institute¶
Jérôme Gorin, École Polytechnique¶
Alexis Hancock, Electronic Frontier Foundation¶
Julia Hanson, Apple¶
Wes Hardaker, University of Southern California Information Sciences Institute¶
Kyle den Hartog, Brave¶
Dennis Jackson, Mozilla¶
Leif Johansson, SIROS Foundation¶
Mallory Knodel, Article 19¶
Mirja Kühlewind, IAB (Observer)¶
Jonathan Langley, Ofcom UK¶
Veronica Lin, Carnegie Mellon University¶
Thibault Meunier, Cloudflare¶
Tom Newton, Qoria¶
Mark Nottingham, IAB (PC Co-Chair)¶
Georgia Osborn, Ofcom UK¶
Tommy Pauly, IAB (PC)¶
John Perrino, Internet Society¶
Eric Rescorla, Knight-Georgetown Institute¶
Beatriz Rocha, Ceweb.br¶
Omari Rodney, Yoti¶
Gianpaolo Scalone, Vodafone¶
Sarah Scheffler, Carnegie Mellon University¶
Andrew Shaw, UK National Cyber Security Centre¶
Aline Sylla, German Federal Commissioner for Data Protection and Freedom of Information¶
Martin Thomson, TAG (PC Co-Chair)¶
Carmela Troncoso, EPFL, the Swiss Federal Institute of Technology in Lausanne¶
Benjamin VanderSloot, Mozilla¶
Tara Whalen, World Wide Web Consortium (PC)¶
During the workshop, participants were asked to name potential impacts -- whether positive or negative -- that could be seen in association with the introduction of an age control mechanism. This list is not exhaustive, focuses largely on the challenges surrounding the introduction of such a mechanism, and does not imply that all points were agreed to by all participants.¶
Centralization¶
Fragmentation of the Internet¶
Increased costs for running a Web site¶
Chilling effects on use of the Internet¶
VPNs proliferate¶
Chilling effects on the publication of borderline content¶
Less content being available online¶
Restricting people to a few platforms / services¶
More use/utility of the Internet due to a perception of safety¶
More (or all) online services require a verified login¶
Device compatibility¶
"Advanced Persistent Teenagers"¶
Difficulties regarding jurisdiction checking¶
Spillover to other software (e.g., VPNs)¶
Displacing users from compliant to non-compliant sites¶
False sense of addressing the problem¶
Dealing with conflict of laws¶
Operators pulling out of territories¶
Increasing the footprint of the deep web¶
Imposition of cultural norms on other jurisdictions¶
Technical solutions are reused for other purposes (scope creep)¶
Dealing with obsolete and non-compliant systems¶
Lack of access (e.g. due to lack of device support)¶
Refugees, stateless people, people without identity¶
Harm to vulnerable people¶
Not addressing other vulnerable groups (i.e., not age-based)¶
Lack of availability of redress mechanisms¶
Users' rights to restitution¶
Loss of control over and access to data¶
Risk to anonymity¶
Loss of ability to run software of your choice¶
Air cover for blocking the Internet¶
User control of the content they see online¶
Costs to society (e.g., regulatory overhead)¶
Increased online tracking and state surveillance¶
Use as a censorship mechanism¶
Advancing foreign policy goals with censorship¶
Abuse of guardians who don't cut off their wards¶
During the workshop, participants were asked to nominate the properties that they believed would be advantageous or even essential for a solution in this space to have. It was recognized that not all of these requirements and desiderata are achievable together, as some goals are in tension with others.¶
Underage users do not access inappropriate content¶
Not trivially bypassable¶
Flexible enough to be provided through different means¶
Bound to the user¶
Reliable¶
Handles user-generated content¶
Enables differential experiences or age-appropriate design (not just blocking)¶
Agile by design -- assume adversarial engagement¶
Difficult to bypass¶
Accurate¶
Inclusive¶
Fair -- avoids or minimizes bias¶
Does not create inequalities (e.g., across education, other properties)¶
Discriminates solely upon age, not other properties¶
Works on open devices¶
Device independence¶
Usable by people of all ages to increase their safety online¶
User choice in who verifies their age, and how¶
No clear losers¶
Accessible to people with disabilities¶
Includes appeal mechanisms for incorrect age determinations¶
Able to handle arbitrary composition of different jurisdictional requirements (possibly down to school level)¶
Applicable globally¶
Applies the rule of law in the jurisdiction where it applies universally¶
No concentration of power in any one entity (or small group of them)¶
No concentration of power in any country¶
Aligned to legal duties¶
Based upon a valid legal basis¶
Not perfect¶
Technically robust¶
Not a single, sole solution¶
Stable -- resilient¶
Alignment of incentives among participants¶
Simple to implement¶
Resistance to repurposing for censorship¶
Unable to be used for surveillance¶
Addresses risk of verification becoming over-prevalent¶
Accountable governance¶
Open Standards-based¶