The W3C Accessibility Guidelines (WCAG) 3.0 provide a wide range of recommendations for making web content more accessible to users with disabilities. Following these guidelines will address many of the needs of users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. These guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. They address various types of web content including static content, interactive content, visual and auditory media, and virtual and augmented reality. The guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.

Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple outcomes to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each outcome.

This specification is expected to be updated regularly to keep pace with changing technology by updating and adding methods, outcomes, and guidelines to address new needs as technologies evolve. For entities that make formal claims of conformance to these guidelines, several levels of conformance are available to address the diverse nature of digital content and the type of testing that is performed.

W3C Accessibility Guidelines 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [[WCAG22]] and previous versions, but does not deprecate those versions. WCAG 3.0 will incorporate content from and partially extend User Agent Accessibility Guidelines 2.0 [[UAAG20]] and Authoring Tool Accessibility Guidelines 2.0 [[ATAG20]]. While WCAG 2.X and WCAG 3.0 overlap substantially, WCAG 3.0 includes additional tests and different scoring mechanisms. As a result, WCAG 3.0 is not backwards compatible with WCAG 2.X. WCAG 3.0 does not supersede WCAG 2.2 and previous versions; rather, it is an alternative set of guidelines. Once these guidelines become a W3C Recommendation, the W3C will advise developers, content creators, and policy makers to use WCAG 3.0 in order to maximize the future applicability of accessibility efforts. However, content that conforms to earlier versions of WCAG continues to conform to those versions.

See WCAG 3 Introduction for an introduction and links to WCAG technical and educational material.

To comment, file an issue in the W3C silver GitHub repository. The Working Group requests that public comments be filed as new issues, one issue per discrete comment. Creating a GitHub account to file issues is free. If filing issues in GitHub is not feasible, send email to public-agwg-comments@w3.org (comment archive). In-progress updates to the guidelines can be viewed in the public editors' draft.

Introduction

Summary

The W3C Accessibility Guidelines (WCAG) 3.0 show ways to make web content accessible to people with disabilities. WCAG 3.0 is a newer standard than the Web Content Accessibility Guidelines (WCAG) 2.2. You may use WCAG 2.2 or the new standard.

What’s new in WCAG 3.0?

About WCAG 3.0

This introduction provides a brief background to WCAG 3.0. Detailed information about the structure of the guidelines and inputs into their development is available in the Explainer for W3C Accessibility Guidelines (WCAG) 3.0. That document is recommended reading for anyone new to WCAG 3.

This specification presents a new model and guidelines to make web content and applications accessible to people with disabilities. The W3C Accessibility Guidelines (WCAG) 3.0 support a wide set of user needs, use new approaches to testing, and allow frequent maintenance of guidelines and related content to keep pace with accelerating technology change. WCAG 3.0 supports this evolution by focusing on users’ functional needs. These needs are then supported by outcomes and technology-specific methods to meet those needs. 

Following these guidelines will make content more accessible to people with a wide range of disabilities, including accommodations for blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. Following these guidelines will also often make content more usable to users in general as well as accessible to people with disabilities.

WCAG 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [[WCAG22]] and previous versions, but does not deprecate WCAG 2.X. It will also incorporate content from and partially extend User Agent Accessibility Guidelines 2.0 [[UAAG20]] and Authoring Tool Accessibility Guidelines 2.0 [[ATAG20]]. These earlier versions provided a flexible model that kept them relevant for over 10 years. However, changing technology and changing needs of people with disabilities have led to the need for a new model to address content accessibility more comprehensively and flexibly.

There are many differences between WCAG 2.X and WCAG 3.0. Content that conforms to WCAG 2.2 A & AA is expected to meet most of the minimum conformance level of this new standard but, since WCAG 3.0 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance. Since the new standard will use a different conformance model, the Accessibility Guidelines Working Group expects that some organizations may wish to continue using WCAG 2.X, while others may wish to migrate to the new standard. For those that wish to migrate to the new standard, the Working Group will provide transition support materials, which may use mapping and other approaches to facilitate migration.

Relationship to other W3C guidelines

The Web Content Accessibility Guidelines (WCAG) 2.0 [[WCAG20]] were designed to be technology neutral, and have stayed relevant for over 10 years. The Authoring Tool Accessibility Guidelines (ATAG) 2.0 [[ATAG20]] provide guidance for various types of software that assist people in writing accessible content. User Agent Accessibility Guidelines (UAAG) 2.0 [[UAAG20]] offers useful guidance to user agent developers and has been implemented on an individual success criterion basis.

WCAG 3.0 provides normative guidance for content, and helpful implementation advice for authoring tools, user agents, and assistive technologies.

For more details about differences from previous guidelines, see Appendix: Differences From WCAG 2.

This version of the guidelines includes an example method for ATAG (Author control of text alternatives) and UAAG (Reflow of captions and other text in context). Future drafts of the guidelines will include additional examples of ATAG- and UAAG-related content.

Goals and Requirements

The goal of WCAG 3.0 and supporting documents is to make digital products, including web content, ePub, PDF, applications, mobile apps, and other emerging technologies, more accessible and usable by people with disabilities. WCAG 3.0 is intended to meet this goal by supporting a wider set of user needs, using new approaches to testing, and allowing more frequent maintenance of guidelines to keep pace with accelerating technology change. The hope is that WCAG 3.0 will make it significantly easier for both beginners and experts to create accessible digital products that support the needs of people with disabilities.

Research and design work performed by the Silver Task Force identified key requirements needed to improve upon the existing WCAG 2.X structure. These requirements, presented in the Requirements for Silver document, shaped the guidelines that follow and should be taken into account when evaluating and updating the guidelines.

While the majority of guidelines are still to be written and we continue to explore additional ways of validating conformance, we seek wider public review on the approach presented here.

Normative requirements

Summary

This document contains two types of content: normative and informative (non-normative).

In addition to this section, the Guidelines, Testing, and Conformance sections in WCAG 3.0 provide normative content and define requirements that impact conformance claims. Introductory material, appendices, sections marked as non-normative, diagrams, examples, and notes are informative (non-normative). Non-normative material provides advisory information to help interpret the guidelines but does not create requirements that impact a conformance claim.

The key words MAY, MUST, MUST NOT, NOT RECOMMENDED, RECOMMENDED, SHOULD, and SHOULD NOT are to be interpreted as described in [[RFC2119]].

Outcomes are normative. The working group is looking for feedback on whether the following should be normative or informative: guidelines, methods, critical errors, and outcome ratings.

Guidelines

Summary

The following six guideline examples show different features of WCAG 3.0:

The individuals and organizations that use WCAG vary widely and include web designers and developers, policy makers, purchasing agents, teachers, and students. In order to meet the varying needs of this audience, several layers of guidance are provided including functional categories of disabilities, general guidelines, outcomes that can be tested, a rich collection of methods, resource links, and code samples.

The guidelines included in this draft have been selected to show different types of content:

These are early drafts of guidelines included to serve as initial examples. They are used to illustrate what WCAG 3.0 could look like, to show how the structure would work, and to test the process of writing content. These guideline drafts should not be considered final content of WCAG 3.0. As this draft matures, numbering of individual guidelines will be removed in response to public requests, to improve the overall usability of the guidelines. WCAG 2.x success criteria will be migrated to this new structure before WCAG 3.0 moves to Candidate Recommendation.

As more content is developed, this section will be a list of guidelines with a unique short name, and the text of the requirement written in plain language. To see the overall plan for migrating content from WCAG 2.1 to WCAG 3.0, see the WCAG to Silver Outline Map.

Text alternatives

Provide text alternative for non-text content.

Text alternative available

Provides text alternatives for non-text content for user agents and assistive technologies. This allows users who are unable to perceive and/or understand the non-text content to determine its meaning.

Functional categories

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive - Learning
  • Cognitive - Memory
  • Cognitive - Mental Health
  • Cognitive & Sensory Intersections
Critical errors
  • Any image of text that is needed to complete a process and lacks an appropriate text alternative.
Rating
Rating Scale:
  • Rating 0: Less than 60% of all images have appropriate text alternatives, OR there is a critical error in the process
  • Rating 1: 60%-69% of all images have appropriate text alternatives AND no critical errors in the process
  • Rating 2: 70%-79% of all images have appropriate text alternatives AND no critical errors in the process
  • Rating 3: 80%-94% of all images have appropriate text alternatives AND no critical errors in the process
  • Rating 4: 95%-100% of all images have appropriate text alternatives AND no critical errors in the process

We selected the Text Alternatives guideline to illustrate how WCAG 2.2 success criteria can be moved to WCAG 3.0 with minimal changes. Most of the material was directly copied from W3C sources such as WCAG 2.1, Web Accessibility Tutorials, and HTML 5.3 examples.

There are subtleties to the scoring of the methods that should be noted in this guideline. We have included four different methods for different types of images in HTML:

  • functional images;
  • informative images;
  • images of text; and
  • decorative images.

The scoring is set up to work across all types of images to make it easier for automated tools. An automated tool does not need to know the type of image; it can report the number of images and the number of images that pass. The tester reviewing the path that a user would take to accomplish a task can identify whether a missing text alternative is a critical error that would stop the user from completing the task. This allows an automated tool to do the heavy lifting of identifying all the text alternatives, while still allowing a knowledgeable tester to identify and evaluate the images that are necessary to complete a task.
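As an illustration only, the band logic in the rating scale above can be sketched as a small function. The thresholds are copied from this outcome's draft rating scale; the function name and inputs are hypothetical, not part of the guidelines:

```python
def text_alternatives_rating(total_images: int, images_with_alt: int,
                             critical_error: bool) -> int:
    """Sketch of the Text Alternatives rating bands (illustrative, not
    normative). An automated tool supplies the image counts; a human
    tester supplies the critical-error judgement."""
    if critical_error:
        return 0  # a critical error in the process always yields Rating 0
    pct = 100 * images_with_alt / total_images
    if pct >= 95:
        return 4
    if pct >= 80:
        return 3
    if pct >= 70:
        return 2
    if pct >= 60:
        return 1
    return 0
```

For example, 96 appropriate alternatives out of 100 images with no critical error would map to Rating 4, while the same counts with a critical error in the process would map to Rating 0.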

This guideline also illustrates an example of critical errors along a path. Organizations with large numbers of images often have missing text alternatives as bugs. They need to know when a missing text alternative is critical to fix, and when it is a lower priority. This critical error example shows how an image without alternative text that is crucial for completing the task gives a rating of zero. An image without alternative text that is not crucial, such as an image in the footer, does not block the organization from receiving the score the rest of the images deserve. This makes it possible for very large websites or apps to conform even with a small number of bugs, without neglecting the critical needs of people with disabilities.

We are interested in your feedback on this approach to testing and scoring. Does this approach help large organizations conform even if their site is not 100% perfect? Do you think that organizations will interpret that they only need 95% of text alternatives for images and then stop adding alternative text? Are the bands of numbers for the different ratings correct? Do people with disabilities in particular feel that this approach will meet their needs?

For this First Public Working Draft, we included HTML methods. This will be expanded in future drafts. We have also included a method, Author Control of Text Alternatives (ATAG), that demonstrates how requirements from the Authoring Tool Accessibility Guidelines (ATAG) 2.0 can be included as methods.

Clear words

Use common clear words.

Exception:

Common clear words

Uses common words to reduce confusion and improve understanding.

Functional categories

This outcome relates to the following functional categories:

  • Speech
  • Cognitive - Attention
  • Cognitive - Language & Literacy
  • Cognitive - Learning
  • Cognitive - Memory
  • Cognitive - Executive
  • Cognitive - Mental Health
  • Cognitive & Sensory Intersections
  • Independence
Critical errors
  • None.
Rating
Rating Scale:
  • Not applicable: If this outcome does not apply to the technology or content being scored, do not score it.
  • Rating 0: Average score below 1
  • Rating 1: Not used in this outcome
  • Rating 2: Average score of 1-1.6, rounded to one decimal place
  • Rating 3: Not used in this outcome
  • Rating 4: Average score of 1.7 or above, rounded to one decimal place

We selected Use Clear Words to show that the new WCAG3 structure can include accessibility guidance that does not fit into the WCAG 2.x structure. In the research phase of this project, we identified user needs from the Cognitive Accessibility Task Force and the Low Vision Accessibility Task Force that could not be addressed by a true/false success criterion in WCAG 2.1. We wanted to select one of those user needs and include it in the first draft of WCAG3 to show that more complex user needs can be included and still be testable and scored.

Use Clear Words is a new guideline proposed by the Cognitive Accessibility Task Force (COGA) and includes research, documents and comments from COGA. The selection of user needs and the outcomes necessary to address them is aligned with the new COGA publication, Making content usable for people with cognitive and learning disabilities [coga-usable].

The Clear Words guideline was included to illustrate that the proposed WCAG 3.0 scoring and structure can be used for non-binary testing. It uses a rating scale with flexible units of measure: for example, testing could be done on a webpage, a paragraph, a section of instructions in an application, or another unit. A manual tester evaluates the paragraph, webpage, or section on a rating scale. While we do not know of any mainstream accessibility tool that measures common words, there are some working prototypes of tools developed outside the W3C. We are interested in feedback on testing this guideline and its scoring.
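To make the averaging concrete, here is a hypothetical sketch (not part of the guidelines) of how per-unit manual ratings might be combined under this outcome's rating scale, assuming each evaluated unit receives a score between 0 and 2:

```python
def clear_words_rating(unit_scores: list[float]) -> int:
    """Illustrative sketch: average the manual scores for the evaluated
    units (webpages, paragraphs, or sections), round to one decimal
    place, and map onto the draft rating scale for this outcome."""
    avg = round(sum(unit_scores) / len(unit_scores), 1)
    if avg >= 1.7:
        return 4
    if avg >= 1.0:
        return 2
    return 0  # average score below 1
```

For example, unit scores of [2, 1, 2] average to 1.7 after rounding and map to Rating 4, while [1, 1, 2] averages to 1.3 and maps to Rating 2.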

There are a number of exceptions to this guideline. We are interested in feedback on where to put that information for ease of use.

This category of new guideline needs further development. It is included to show that it could work, not necessarily that this is the shape of the final guideline.

Captions

Provide captions and associated metadata for audio content.

Translates speech and non-speech audio

Translates speech and non-speech audio into alternative formats (e.g. captions) so media can be understood when sound is unavailable or limited. User agents and APIs support the display and control of captions.

Functional categories

This outcome relates to the following functional categories:

  • Sensory - Hearing & Auditory
  • Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive & Sensory Intersections
Critical errors
  • Any video that is needed to complete a process and has no captioning (closed or open).
    For example, an education site with a video that a student will be tested on, or a shopping site where users preview movies. If such videos lack captioning, the outcome fails.
Rating
Rating Scale:
  • Rating 0: A critical error, or an average score of 0-0.7 rounded to one decimal place
  • Rating 1: Not used in this outcome
  • Rating 2: No critical errors, and an average score of 0.8-1.5 rounded to one decimal place
  • Rating 3: Not used in this outcome
  • Rating 4: No critical errors, and an average score of 1.6-2 rounded to one decimal place

Conveys information about the sound

Conveys information about the sound beyond its text (for example, the sound's source, duration, and direction) so users have the necessary information about the context of the sound in relation to the environment it occurs in.

Functional categories

This outcome relates to the following functional categories:

  • Sensory - Hearing & Auditory
  • Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive & Sensory Intersections
Critical errors
  • None.
Rating
Scoping questions:
  • Is metadata directionality essential to this experience?
  • Can a user orient themselves to the sound with or without any additional interface?
Rating Scale:
  • Rating 0: No metadata
  • Rating 1: Sound visually indicates the direction of origin in 2D space
  • Rating 2: Not used in this outcome
  • Rating 3: Metadata includes the location the sound originates from in 3D space
  • Rating 4: Metadata includes the location the sound originates from AND the direction of the sound

This guideline demonstrates how the WCAG3 structure can be used with emerging technologies such as virtual reality, augmented reality and other immersive web technologies (XR). Research in this area is ongoing and we expect to complete more details in future drafts.

The Silver XR group has been working closely with other groups within the W3C as well as researchers in the area of captioning in immersive technologies. This is a rapidly developing field, and the recommendations listed are exploratory. They are included as an example that WCAG3 can be used with emerging technologies. We hope that including this guideline will help inspire more research in this area.

Because this guideline was included to demonstrate emerging technology, there is little guidance included on traditional captions. Future drafts will also include more traditional caption guidance.

We are looking for feedback on the scoring of captions. Media that is essential to accomplishing a task and does not have captions is a critical error and automatically fails (a 0 rating). Examples include educational videos, entertainment site previews, and directions for installing a product. Videos without captions that are not essential to the task, such as advertising and promotional videos that are not essential to the shopping experience, are not automatically failed, but the cumulative lack of captioning reduces the score. We want feedback on this approach.
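A hypothetical sketch of this scoring approach (the names and inputs are ours, not the draft's), reading the scale so that a critical error always produces Rating 0 and each video receives a score between 0 and 2:

```python
def captions_rating(video_scores: list[float],
                    essential_media_uncaptioned: bool) -> int:
    """Illustrative sketch of the captions rating bands. Uncaptioned
    media that is essential to the task is a critical error and fails
    automatically; otherwise the average of the per-video scores
    (0-2), rounded to one decimal place, selects the band."""
    if essential_media_uncaptioned:
        return 0  # critical error: automatic fail
    avg = round(sum(video_scores) / len(video_scores), 1)
    if avg >= 1.6:
        return 4
    if avg >= 0.8:
        return 2
    return 0
```

Under this reading, a site whose only uncaptioned video is a non-essential promotion keeps a reduced but non-zero score, while one uncaptioned essential video fails the outcome outright.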

We want public feedback about whether open captions (captions burned into the video) should be considered equivalent to closed captions. Closed captions are text that can be customized to meet user needs, for example the needs of a hard-of-hearing person who also has low vision, which is common among older people. Open captions are burned in, cannot be customized, and cannot be adapted to other languages; if closed captions are added, they are overlaid on the open captions and hard to read. If we receive sufficient feedback to leave captions as they are today (both closed and open equally acceptable), then we will use a simple scoring rating. If we decide not to accept open captions as equivalent to closed captions, then we will give more points to closed captions than to open captions.

Note that the advanced XR outcomes and metadata do not have critical errors. This is a way to include best practices without making them punitive: an organization that implements them can earn extra points and potentially raise its score. We are interested in your feedback about this approach.

Structured content

Use sections, headings, and sub-headings to organize content.

Headings organize content

Organizes content into logical blocks with headings relevant to the subsequent content. This makes locating and navigating information easier and faster.

Functional categories

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Sensory Intersections
  • Physical & Sensory Intersections
  • Cognitive - Attention
  • Cognitive - Language & Literacy
  • Cognitive - Memory
  • Cognitive - Executive
  • Cognitive & Sensory Intersections
Critical errors
  • One or more headings necessary to locate the content needed to complete a process are missing.
Rating
Rating Scale:
  • Rating 0: 25% or less of expected headings are present and describe the content contained in the section, OR there is a critical error in the process
  • Rating 1: 26-50% of expected headings are present and describe the content contained in the section AND no critical errors in the process
  • Rating 2: 51-80% of expected headings are present and describe the content contained in the section AND no critical errors in the process
  • Rating 3: 81-95% of expected headings are present and describe the content contained in the section AND no critical errors in the process
  • Rating 4: 96-100% of expected headings are present and describe the content contained in the section AND no critical errors in the process

Uses visually distinct headings

Uses visually distinct headings so sighted readers can determine the structure.

Functional categories

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Cognitive - Language & Literacy
  • Cognitive - Learning
  • Cognitive - Memory
  • Cognitive - Executive
  • Cognitive & Sensory Intersections
Critical errors
  • One or more headings necessary to locate the content needed to complete a process are not visually distinct.
Rating
Rating Scale:
  • Rating 0: 25% or less of headings are visually distinct, OR there is a critical error in the process
  • Rating 1: 26-50% of headings are visually distinct AND no critical errors in the process
  • Rating 2: 51-75% of headings are visually distinct AND no critical errors in the process
  • Rating 3: 76-95% of headings are visually distinct AND no critical errors in the process
  • Rating 4: 96-100% of headings are visually distinct AND no critical errors in the process

Conveys hierarchy with semantic structure

Provides semantic structure that conveys the hierarchy to help explore and navigate the content.

Functional categories

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Sensory Intersections
  • Physical & Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive & Sensory Intersections
Critical errors
  • One or more headings necessary to locate the content needed to complete a process are not coded as headings.
Rating
Rating Scale:
  • Rating 0: 25% or less of the visual headings are correctly semantically coded (including level), OR there is a critical error in the process
  • Rating 1: 26-50% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process
  • Rating 2: 51-80% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process
  • Rating 3: 81-95% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process
  • Rating 4: 96-100% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process

We included the structured content guideline as an example of an "easy" guideline that is well understood and addresses diverse disability needs. While WCAG2 addresses headings from the semantic needs of screen reader users, little has been done to directly address the needs of people with cognitive disabilities around headings. This guideline shows how a well-known area of accessibility can address more user needs of different groups of people with disabilities. The structured content guideline has multiple outcomes working together to cover the different aspects of accessibility needed by different categories of people with disabilities.

The structured content guideline began as a guideline on the use of headings. Going through the content development process, we realized that the topic is broader than headings alone, but little content has been developed beyond headings. Note that this guideline is used for prototyping and is the most uneven in content style. Additional outcomes and content will be added in future drafts to make this guideline more complete.

The structured content guideline also shows how several WCAG 2.1 success criteria can be re-combined, including AAA-level success criteria such as 2.4.10 Section Headings. The scoring shows how the rating can be improved by including all headings, but the lack of section headings is not failed unless a section heading is essential to accomplishing a task. We think this will allow organizations to continually improve their use of headings without failing them for what was formerly required by an AAA success criterion.

We are looking for feedback on using scoring as a way to encourage adoption of AAA success criteria without failures. Do you like the inclusion of broader needs for structured content than providing semantics for screen reader users? Do you think this should be a separate guideline, or do you like having multiple, testable outcomes supporting the guideline? Do you like the approach of merging WCAG2 success criteria with related user needs?

Visual contrast of text

Provide sufficient contrast between foreground text and its background.

Luminance contrast between background and text

Provides adequate luminance contrast between background and text colors to make the text easy to read.

Functional categories

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
Critical errors
  • None.
Rating
Rating Scale:
  • Rating 0: Any failures on the Advanced Perceptual Contrast Algorithm (APCA) lookup table, or the lowest APCA value is more than 15% below the values on the APCA lookup table
  • Rating 1: The lowest APCA value is 10-15% below the values on the APCA lookup table
  • Rating 2: The lowest APCA value is 5-9% below the values on the APCA lookup table
  • Rating 3: The lowest APCA value is 1-4% below the values on the APCA lookup table
  • Rating 4: All reading text meets or exceeds the values on the APCA lookup table

Visual Contrast is a migration from WCAG 2.1 with significant updates:

  • New calculations of contrast based on more modern research on color perception.
  • Merging the 1.4.3 AA and 1.4.6 AAA levels into one guideline.
  • New test of text contrast.
  • At this time, it only includes textual visual contrast.

We propose changing the names of Contrast (Minimum) and Contrast (Enhanced) to Visual Contrast of Text as a signal of a paradigm change, from one about color to one about perception of light intensity. The reason for this change is that the understanding of contrast has matured, and the available research and body of knowledge have made breakthroughs in advancing the understanding of visual contrast.

The proposed new guidance more accurately models current research in human visual perception of contrast and light intensity. The goal is to improve understanding of the functional needs of all users, and more effectively match the needs of those who face barriers accessing content. This new perception-based model is more context dependent than a strict light ratio measurement; results can, for example, vary with size of text and the darkness of the colors or background.

This model is more responsive to user needs and allows designers more choice in visual presentation. It does this by including multi-factor assessment tests which integrate contrast with inter-related elements of visual readability, such as font features. It includes tests to determine an upper limit of contrast, where elevated contrast may impact usability.

This outcome will eventually include a second rating approach based on the mean APCA value for all text in a process and view, weighted by character count.
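As an illustration of the lookup-table banding only (the APCA computation itself is not reproduced here, and this helper is our own, not part of the draft): given, for each run of reading text, the percentage by which its APCA value falls short of the lookup-table value (0 meaning it meets or exceeds it), the rating could be derived as:

```python
def visual_contrast_rating(shortfall_percentages: list[float]) -> int:
    """Illustrative sketch: band the worst-case shortfall from the APCA
    lookup table onto the draft rating scale. A shortfall of 0 means
    the text meets or exceeds the lookup-table value."""
    worst = max(shortfall_percentages)
    if worst > 15:
        return 0
    if worst >= 10:
        return 1
    if worst >= 5:
        return 2
    if worst > 0:
        return 3
    return 4  # all reading text meets or exceeds the lookup values
```

Note that the rating is driven by the worst-case text run, so a single low-contrast heading can lower the score for the whole view.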

Error prevention

Provide features that help users avoid errors.

Input instructions provided

Provides instructions for inputs that have requirements (for example, required, date format, password) so users know how to provide valid information.

Functional categories

This outcome relates to the following functional categories:

  • Essential
  • Sensory - Vision & Visual
  • Sensory - Sensory Intersections
  • Cognitive - Attention
  • Cognitive - Language & Literacy
  • Cognitive - Learning
  • Cognitive - Memory
  • Cognitive - Executive
  • Cognitive - Mental Health
  • Cognitive - Cognitive & Sensory Intersections
  • Independence
Critical errors
  • Any input that has specific data requirements (e.g., date, password) that is provided without instructions.
  • Any input for collecting sensitive information (e.g., social security number) that fails “Input instructions provided” tests.
Rating
Rating Scale for "Input instructions provided":
  • Rating 0: Score of 25% or less, OR there is a critical error
  • Rating 1: Score of 26-50% AND there are no critical errors
  • Rating 2: Score of 51-80% AND there are no critical errors
  • Rating 3: Score of 81-95% AND there are no critical errors
  • Rating 4: Score of 96-100% AND there are no critical errors

Testing

What types of tests are used?

WCAG 3.0 includes two types of tests: atomic tests and holistic tests.

Some content will meet outcomes if it passes atomic tests, but that content still might not be usable by all people with disabilities. Holistic tests help address this gap.

The model presented provides a structure for testing that can be built upon to accommodate dynamic or very frequently updated content better than WCAG 2.X does. We are exploring additional approaches to testing using holistic tests, sampling, and/or other alternatives for reaching conformance in situations where testing all content is not possible. We also plan to include a definition and concept of substantial conformance to address the potential difficulties presented when testing all content in large digital products and third-party content.

WCAG 3.0 tests and scores outcomes. Outcomes are written as testable criteria that allow testers to objectively determine if the content they are evaluating satisfies the criteria.

Testing outcomes uses both views and processes to define what is being tested. Views include all content visually and programmatically available without a substantive change. Conceptually, a view corresponds to the definition of a web page as used in WCAG 2.X, but is not restricted to content meeting that definition. For example, a view could be a "screen" in a mobile app.

Processes are a sequence of steps that need to be completed in order to accomplish an activity/task from end-to-end. When testing processes, the content used to complete the process as well as all of the associated views need to be included in the test. A process is a subset of a view or a group of views. It includes only the sections of the view needed to accomplish the activity or task.

Examples of a process include:

A process comprises one or more views.

Types of tests

WCAG 3.0 includes two types of tests: atomic tests and holistic tests. Testing the outcomes using the atomic tests might involve a combination of automated evaluation, semi-automated evaluation, and human evaluation.

Although content may satisfy all outcomes using the atomic tests, the content may not always be usable by people with a wide variety of disabilities. The holistic tests address this gap by evaluating more of the user experience than atomic testing.

We are looking for more appropriate terms to distinguish between these two types of tests and welcome suggestions.

Atomic tests

Atomic tests evaluate content, often at an object level, for accessibility. Atomic tests include the existing tests that support A, AA, and AAA success criteria in WCAG 2.X. They also include tests that may require additional context or expertise beyond tests that fit within the WCAG 2.X structure. In WCAG 3.0, atomic tests are used to test both processes and views. Test results are then aggregated across the selected views. Critical errors within selected processes are also totaled. Successful results of the atomic tests are used to reach a Bronze rating.

Atomic tests may be automated or manual. Automated evaluation can be completed without human assistance. These tests allow for a larger scope to be tested but automated evaluation alone cannot determine accessibility. Over time, the number of accessibility tests that can be automated is increasing, but manual testing is still required to evaluate most methods at this time.

Holistic tests

Holistic tests include assistive technology testing, user-centered design methods, and both user and expert usability testing. Holistic testing applies to the entire declared scope and often uses the declared processes to guide the tests selected. Successful results of holistic tests are used to reach a silver or gold rating.

Future drafts will further explore holistic tests and provide examples as well as detail how to apply them.

Technology specific testing

Each outcome includes methods associated with different technologies. Each method contains tests and techniques for satisfying the outcome. The outcome is written so that testers can test the accessibility of new and emerging technologies that do not have related methods based solely on the outcome.

We continue to test this approach and others for validity, reliability, sensitivity, adequacy, and complexity.  Alternatives that we are exploring are noted as separate editor’s notes where applicable. We welcome suggestions on ways to improve the scoring to better meet these criteria.

Scoring

How are tests scored?

Besides true/false scoring methods, we’ve included testing options for new guidance, such as rating scales.

Each outcome has a section that shows how it is scored.

One of the goals of WCAG 3.0 is to expand scoring tests of methods beyond a binary true/false choice at the page level. We have included tests within the sample outcomes that demonstrate alternatives such as rubrics and scales. We are also exploring integrating these options into the Accessibility Conformance Testing format. We will include example tests in a future draft. Our intent is to include detailed tests for methods to support each outcome within the WCAG 3.0 model.

Each outcome has methods associated with different technologies. Each method contains tests and techniques for meeting that outcome. Testers can test the accessibility of new and emerging technologies that do not have related methods based on the outcome.

Scoring atomic tests

In most cases, testing individual objects produces a binary pass/fail result for each element. This leads to either a pass/fail result or a percentage rating, depending on the test. A rating scale may be provided for some tests to allow the tester to assign a quality judgement to an element or block of content. Whether scoring is binary (pass/fail) or uses rating scales depends on the method, outcome, and technology. Binary scoring works well when the unit being tested has clear boundaries and pass/fail conditions. Rating scales work better when the unit being tested does not have clear boundaries, when evaluating success requires a quality judgement, or when the test includes gradations of quality. Each of these results can then be assigned a percentage or averaged to inform the overall score of an outcome.
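The binary case described above can be sketched as a simple aggregation: each tested element passes or fails, and the results combine into a percentage that informs the outcome score. This is a non-normative sketch; the handling of a view with no testable elements is an assumption made for this example.

```python
def pass_percentage(results: list[bool]) -> float:
    """Aggregate binary (pass/fail) element results into a percentage.
    Rating-scale results would instead be averaged; this sketch covers
    only the binary case."""
    if not results:
        # Assumption for illustration: no testable elements counts as 100%.
        return 100.0
    return 100.0 * sum(results) / len(results)
```

For instance, a view where three of four tested elements pass would score 75%.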

Test results for views:

In addition, critical errors within selected processes will be identified and totaled. Any critical error will result in a score of Very Poor (0).

Scoring outcomes

The results from the atomic tests are aggregated across views and used along with the number of critical errors to assign an adjectival rating to the outcome. Testers will then use the guidance provided in the outcome along with reasonable judgement of the context that the errors occur in to assign an accessibility score of the outcome.

Potential thresholds for adjectival ratings of test results:

  • Very Poor (0): any critical errors, or less than 50% of related tests pass
  • Poor (1): no critical errors; approx. 50% to 79% of related tests pass
  • Fair (2): no critical errors; approx. 80% to 89% of related tests pass
  • Good (3): no critical errors; approx. 90% to 98% of related tests pass
  • Excellent (4): no critical errors; approx. 99% to 100% of related tests pass
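The example thresholds above can be expressed as an ordered lookup, with any critical error forcing Very Poor (0). This is a non-normative sketch of the draft's example values, which are still being tested and adjusted; the names below are illustrative.

```python
# (minimum percent of related tests passing, rating value, label)
ADJECTIVAL_THRESHOLDS = [
    (99, 4, "Excellent"),
    (90, 3, "Good"),
    (80, 2, "Fair"),
    (50, 1, "Poor"),
]

def adjectival_rating(percent_passing: float, critical_errors: int):
    """Apply the example thresholds above. Any critical error, or a
    pass rate below 50%, yields Very Poor (0)."""
    if critical_errors == 0:
        for minimum, value, label in ADJECTIVAL_THRESHOLDS:
            if percent_passing >= minimum:
                return value, label
    return 0, "Very Poor"
```

So a view with 95% of related tests passing and no critical errors would rate Good (3), but a single critical error would drop it to Very Poor (0).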

The thresholds are different for different outcomes.

These thresholds are still being tested and adjusted. These are included as examples to gather feedback on this scoring approach.

Overall scores

After all outcomes have been scored, the ratings are averaged for a total score and for a score within each functional category they support. Conformance at the bronze level requires no critical errors, a total score of at least 3.5, and a score of at least 3.5 within each functional category.
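The bronze-level check described above can be sketched as follows. This is a non-normative illustration; the input shapes (ratings keyed by outcome and by functional category) are assumptions made for this example.

```python
from statistics import mean

def bronze_conformance(outcome_ratings: dict[str, float],
                       category_ratings: dict[str, float],
                       critical_errors: int) -> bool:
    """Check the bronze-level criteria described above: no critical
    errors, an average (total) score of at least 3.5, and a score of
    at least 3.5 within each functional category."""
    if critical_errors > 0:
        return False
    if mean(outcome_ratings.values()) < 3.5:
        return False
    return all(score >= 3.5 for score in category_ratings.values())
```

For example, outcome ratings of 4.0 and 3.2 average to 3.6, which meets the total-score requirement; the claim would still fail if any single functional category scored below 3.5 or if any critical error were present.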

This approach, which allows the tester some flexibility in assigning scores, has the advantage of simplicity while allowing a tester to take context into account beyond the simple percentages. The second option we are exploring is to carry percentages from tests through to a final score. In this case a bronze rating would require a total score of at least 90% and at least 90% within each functional category. This number would likely shift as we continue testing. We invite comment on these options as well as suggestions for alternative solutions.

Scoring holistic tests

The points from holistic tests do not affect the scores of atomic tests. Rather, a minimum number of holistic tests will need to be met in order to reach a silver rating, and additional holistic tests will be needed to reach a gold rating. A silver or gold rating requires first achieving a bronze rating.

We continue to work on the scoring of holistic tests and will provide more details in a future iteration of this document.

Conformance

Summary

You might want to make a claim that your content or product meets the WCAG 3.0 outcomes. If it does meet the outcomes, we call this “conformance.” To conform to WCAG 3.0, your test results must show that your project is accessible.

If you want to make a conformance claim, you must use the process described in this document. Your content can conform to WCAG 3.0, even if you don’t want to make a claim. You can still use this process to test your project’s accessibility.

Conformance levels

WCAG 3.0 includes a new conformance model in order to address a wider range of user needs, test a wider range of technologies and support new approaches to testing. There are several key goals for this new conformance model:

  1. Develop a scoring model that encourages websites to continue to do better and better (vs. stopping at the previous AA level);
  2. Better reflect the lived experience of people with disabilities, who successfully use sites that have some content that does not meet WCAG 2.0 AA, or who encounter barriers with sites that meet WCAG 2.0 AA;
  3. Allow for bugs and oversights by content authors, provided their impact on users with disabilities is limited.

To do this, the conformance model prioritizes content needed to complete tasks while still testing the entire view for accessibility errors. This priority is reflected in the scoring system, which does not allow for errors along the paths needed to complete processes but allows for some accessibility errors outside process completion. This means that sites may conform at the lowest level (Bronze) while still containing a small amount of content that does not meet one or more guidelines, so long as that content doesn't prevent people with disabilities from successfully using the site.

We seek feedback on whether this flexibility will be beneficial in encouraging content providers to meet conformance because it is more achievable or whether content providers are less likely to improve accessibility if they aren't required to. We also seek feedback on the conformance approach as a whole.

WCAG 3.0 defines three levels of conformance: bronze, silver, and gold.

Bronze

Bronze is the minimum conformance level. Content that does not meet the requirements of the bronze level does not conform to WCAG 3.0. The bronze level can be verified using atomic tests. While there is a lot of overlap between WCAG 2.X and WCAG 3.0, WCAG 3.0 includes additional tests and different scoring mechanisms. As a result, WCAG 3.0 is not backwards compatible with WCAG 2.X.

For content that conforms to the bronze level:

  • The total score and score within each of the functional categories MUST be at least 3.5; and
  • Views and processes MUST NOT have critical errors.

Conformance to this specification at the bronze level does not mean every requirement in every guideline is fully met. Bronze level means that the content in scope does not have any critical errors and meets the minimum percentage of @@

Silver

Silver is a higher conformance level that addresses additional outcomes. Some holistic testing is necessary to verify conformance to this level.

For content that conforms to the silver level:

  • All views MUST satisfy the Bronze criteria; and
  • Use of holistic tests to meet this level will be further explored in future drafts.

Gold

Gold is the highest conformance level that addresses the remaining outcomes described in the guidelines. Additional holistic testing is necessary to verify conformance to this level.

For content that conforms to the gold level:

  • All views MUST satisfy the Silver criteria; and
  • Use of holistic tests to meet this level will be further explored in future drafts.

User Generated Content

Web content publishers may include content provided by the users of their digital products. We refer to such content as "User Generated Content".

Examples of User Generated Content include:

User Generated Content is provided for publication by visitors where the content platform specifically welcomes and encourages it. User Generated Content is content that is submitted through a user interface designed specifically for members of the public and customers. Use of the same user interface as an authoring tool for publication of content by agents of the publisher (such as employees, contractors, or authorized volunteers) acting on behalf of the publisher does not make that content User Generated Content. The purpose of User Generated Content conformance is to allow WCAG 3 outcomes and methods to require additional or different steps to improve the accessibility of User Generated Content.

An important part of WCAG Conformance is the specific guidance that is associated with individual WCAG 3 guidelines and outcomes. Not all WCAG 3 guidelines will have unique outcomes and testing for User Generated Content. Unless User Generated Content requirements are specified in a particular guideline, that guideline applies as written whether or not the content is User Generated.

We plan for the Q4 Working Draft to include specific examples of guidelines with additional requirements for user generated content. One example would be alternative text. The Authoring Tool Accessibility Guidelines (ATAG) has specific guidance for providing a mechanism for alternative text. The ATAG 2.0 Guideline B.2.3 - "Assist authors with managing alternative content for non-text content" could be adapted to provide specific, guideline-related guidance for user generated alternative text.

The web content publisher should identify all locations of User Generated Content (such as commentary on hosted content, product descriptions for consumer to consumer for sale listings, and restaurant reviews) and perform standard accessibility evaluation analysis for each. If there are no accessibility issues, the User Generated Content is fully conforming.

Steps to Conform

If accessibility issues are identified, or if the website author wants to proactively address potential accessibility issues that might arise from User Generated Content, then all of the following must be indicated alongside the User Generated Content, or in an Accessibility Statement published on the site or product that is linked from the view or page in a consistent location:

  1. Clearly identify where User Generated Content can be found on the publisher's digital product (perhaps by id or href);
  2. Clearly identify the steps taken to encourage accessibility in User Generated Content, such as prompting the user for alt text for uploaded images before they are accepted, and disallowing text attributes except as part of semantic markup (such as strong or headings), as enumerated in Guideline Outcomes;

Editor's Note: Appropriate scoring is yet to be provided. Once provided, fully conforming content will score as fully conformant. It remains to be determined how to score User Generated Content that has accessibility issues; and to define what minimum threshold scores might be acceptable; and what critical errors might prevent a conformance assertion. We expect WCAG 3 to provide this guidance within individual guidelines and outcomes and to support testing for conformance. The working group is looking at alternative requirements to apply to User Generated Content guideline by guideline, and is seeking feedback on what would serve as reasonable requirements on how to best support accessibility in User Generated Content with known (or anticipated) accessibility issues. The working group intends to more thoroughly address the contents and the location of an accessibility statement in a future draft.

Conforming alternative version

For this first draft, the Accessibility Guidelines Working Group has focused on the basic conformance model. For a next draft, we will explore how conforming alternative versions fit into the new conformance model.

Only accessibility-supported ways of using technologies

For this first draft, the Accessibility Guidelines Working Group has focused on the basic conformance model. For a next draft, we will explore how accessibility-supported fits into the new conformance model.

Defining conformance scope

When evaluating the accessibility of content, WCAG 3.0 requires the outcomes apply to a specific scope. While the scope can be all content within a digital product, it is usually one or more subsets of the whole. Reasons for this include:

WCAG 3.0 therefore defines two inter-related ways to scope content: views and processes. Evaluation is done on one or more complete views or processes, and conformance is determined on the basis of one or more complete views or processes.

Conformance is defined only for processes and views. However, a conformance claim may be made to cover one process and view, a series of processes and views, or multiple related processes and views. All unique steps in a process MUST be represented in the set of views. Views outside of the process MAY also be included in the scope.

The AG WG and Silver Task Force recognize that representative sampling is an important strategy that large and complex sites use to assess accessibility. While it is not addressed within this document at this time, our intent is to later address it within this document or in a separate document before the guidelines reach the Candidate Recommendation stage. We welcome your suggestions and feedback about the best way to incorporate representative sampling in WCAG 3.0.

Conformance requirements

In order for technology to conform to WCAG 3.0, the following conformance requirements apply:

  1. Conformance level - Content MUST meet the requirements of the selected conformance level.
  2. Processes and views - Conformance (and conformance level) MUST apply to complete processes and views, and MUST NOT exclude any part of a process or view.

Conformance claims

Conformance claims are not required. Authors can conform to WCAG 3.0 without making a claim. The material below describes how to make a conformance claim if that option is chosen.

Required components of a conformance claim

A conformance claim MUST include the following information:

  1. Date of the claim;
  2. Guidelines title, version, and URI: W3C Accessibility Guidelines 3.0 at ???;
  3. Conformance level satisfied: (bronze, silver, or gold);
  4. A concise description of the views and processes, such as a list of URIs for which the claim is made, including any state changes which lead to a new view; and
  5. The technology including the hardware, software, and assistive technology used to test the claim.

Example conformance claim

On 12 August 2020, the following 10 views and 2 processes conform to WCAG 3.0 at a bronze level. Processes were selected because they are the most common activities on the site and include 4 unique views. The other 6 views are the most commonly used.

These were tested using Firefox and Chrome on a Windows platform. The assistive technology used included JAWS and Dragon.

Glossary

Many of the terms defined here have common meanings. When terms appear with a link to the definition, the meaning is as formally defined here. When terms appear without a link to the definition, their meaning is not explicitly related to the formal definition here. These definitions are in progress and may evolve as the document evolves.

Adjectival rating

A system to report evaluation results as a set of human-understandable adjectives which represent groupings of scores.

Automated evaluation

Evaluation conducted using software tools, typically evaluating code-level features and applying heuristics for other tests.

Automated testing is contrasted with other types of testing that involve human judgement or experience. Semi-automated evaluation allows machines to guide humans to areas that need inspection. The emerging field of testing conducted via machine learning is not included in this definition.

Conformance

Satisfying all the requirements of the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim.

See Conformance.

Critical error

An accessibility problem that will stop a user from being able to complete a process.

Critical errors include:

  • Items that will stop a user from being able to complete the task if they exist anywhere in the view (examples: flashing, keyboard trap, audio with no pause);
  • Errors that, when located within a process, mean the process cannot be completed (example: submit button not in tab order);
  • Errors that, when aggregated within a view or across a process, cause failure (example: a large amount of confusing, ambiguous language).

Deprecate

To declare something outdated and in the process of being phased out, usually in favor of a specified replacement.

Deprecated documents are no longer recommended for use and may cease to exist in the future.

Evaluation

The process of examining content for conformance to these guidelines.

Different approaches to evaluation include automated evaluation, semi-automated evaluation, human evaluation, and user testing.

Functional category

A conceptual grouping of functional needs that represent generalized sets of user groups.

See Functional Categories.

Functional need

A statement that describes a specific gap in one’s ability, or a specific mismatch between ability and the designed environment or context.

Guideline

High-level, plain-language content used to organize outcomes.

See Guidelines in the Explainer.

How-to

Provides explanatory material for each guideline that applies across technologies.

This plain language resource includes information on getting started, who the guideline helps and how, as well as information for designers and developers.

See How-tos in the Explainer.

Human evaluation

Evaluation conducted by a human, typically to apply human judgement to tests that cannot be fully automatically evaluated.

Human evaluation is contrasted with automated evaluation which is done entirely by machine, though it includes semi-automated evaluation which allows machines to guide humans to areas that need inspection. Human evaluation involves inspection of content features, by contrast with user testing which directly tests the experience of users with content.

Informative

Content provided for information purposes and not required for conformance.

Content required for conformance is referred to as normative.

Method

Detailed information, either technology-specific or technology-agnostic, on ways to meet the outcome as well as tests and scoring information.

See Methods in the Explainer.

Normative

Content whose instructions are required for conformance.

Content identified as informative or non-normative is never required for conformance.

Object

An item in the perceptual user experience.

Objects include user interface widgets and identifiable blocks of content.

Outcome

Result of practices that reduce or eliminate barriers that people with disabilities experience.

See Outcomes.

Process

A sequence of steps that need to be completed in order to accomplish an activity / task from end-to-end.

Rubric

An approach to evaluation that defines a set of criteria for conformance and describes the result qualitatively.

Scale

A way of reporting results of evaluation using quantitative values.

Semi-Automated Evaluation

Evaluation conducted using machines to guide humans to areas that need inspection.

Semi-automated evaluation involves components of automated evaluation and human evaluation.

Success criterion

A testable statement that forms part of the normative content of WCAG 2.

The closest counterpart to success criteria in WCAG 3 are outcomes.

Test

Mechanism to evaluate implementation of a method.

Tests can include true / false evaluation or various types of rating scales as appropriate for the guideline, outcome, or technology.

Technique

Technology-specific approach to follow a method.

Text alternative

Text that is programmatically associated with non-text content or referred to from text that is programmatically associated with non-text content. Programmatically associated text is text whose location can be programmatically determined from the non-text content.

An image of a chart is described in text in the paragraph after the chart. The short text alternative for the chart indicates that a description follows.

User generated content

User Generated Content is provided for publication by visitors where the content platform specifically welcomes and encourages it. User-generated content is content that is submitted through a user interface designed specifically for members of the public and customers. Content created by agents of the publisher (such as employees, contractors, or authorized volunteers) is not considered User Generated Content, even if the agents use the same interface to publish. The publisher in this definition is the owner of the website providing content, not the owner of a paid platform used for website creation.

Future work on the glossary will better define terms such as publisher, content author, etc.

User need

The end goal a user has when starting a process through digital means.

User testing

Evaluation of content by observation of how users with specific functional needs are able to complete a process and how the content meets the relevant outcomes.

View

All content visually and programmatically available without a substantive change.

Views vary based on the technology being tested. While these guidelines provide guidance on scoping a view, the tester will determine what constitutes a view and describe it. Views typically include state permutations based on that view, such as dialogs and alerts, but some states may deserve to be treated as separate views.

Visual Contrast

The combination of foreground and background colors along with font weight and size that make text readable.

Guidelines development methodology

Differences from WCAG 2

Outcomes

Outcomes are different from WCAG 2.X success criteria. Compared to success criteria, outcomes are written to be:

The design of outcomes allows WCAG 3.0 to address more varied needs of people with disabilities than could be included in WCAG 2.X.

Methods map approximately to WCAG 2.X Techniques documents.

Approximate mapping of WCAG 2 and WCAG 3 documentation

WCAG 2            WCAG 3
Success Criteria  Outcomes
Techniques        Methods
Understanding     How-to

Change log

Acknowledgements