
Accessibility Testing: Striking the Right Balance Between Automation and Manual Efforts

Dec 11th 2023

In the ever-evolving landscape of software testing, accessibility testing takes center stage, ensuring digital inclusivity for every user. This blog post delves into the world of accessibility testing, providing a brief exploration of its significance in quality assurance automation. As we navigate the nuances of striking the right balance between automation and manual efforts, our aim is to equip QA engineers with the tools and insights essential for comprehensive testing, fostering a commitment to inclusivity across digital platforms.

The Role of Automation in Accessibility Testing

Automation has become an integral part of the quality assurance process, promising efficiency and consistency. In the realm of accessibility testing, it offers distinct advantages but also comes with inherent limitations.

Automated tests excel in swiftly evaluating repetitive and standardized aspects of accessibility compliance, such as checking for proper ARIA attributes and alt text. This speed is particularly beneficial in regression testing, allowing for quick validation of existing accessible features during code changes.

However, automation alone cannot capture the entire spectrum of user experience nuances, especially those related to diverse user interactions and dynamic content. Complex scenarios, such as intricate screen reader interactions or nuanced visual assessments, often require the human touch.

Recognizing the strengths and weaknesses of automation, a balanced approach is paramount. Combining the efficiency of automated tests with the perceptiveness of manual testing ensures a more comprehensive evaluation of accessibility features. Striking this equilibrium not only improves the reliability of test results but also aligns with the diverse nature of user interactions in real-world scenarios.

As we navigate the intricate landscape of accessibility testing, it becomes evident that a thoughtful combination of automated and manual testing is essential for achieving a holistic understanding of digital inclusivity.

Parts that Should Be Covered with Automation Tests

1. Automated Testing for Basic Accessibility Compliance

One of the strengths of automation lies in efficiently addressing basic accessibility compliance standards. Let's explore code examples demonstrating automated checks for fundamental aspects:

ARIA Attributes:

async verifyARIAAttributes() {
    // Assumes Node's assert module is imported in the helper:
    // import assert from "node:assert";
    const roles = await browser.execute(() => {
        // DOM nodes don't serialize across the browser boundary,
        // so map them to plain objects inside the page context
        const elements = document.querySelectorAll("[role]");
        return Array.from(elements).map((element) => ({
            role: element.getAttribute("role") || "",
            tag: element.tagName.toLowerCase(),
        }));
    });
    roles.forEach((element) => {
        assert.ok(
            element.role.trim() !== "",
            `Missing or empty ARIA role on element: <${element.tag}>`
        );
    });
}

This function asynchronously verifies ARIA attributes by executing JavaScript within the browser context to retrieve elements with the "role" attribute. It then iterates through these elements, using assertions to ensure each possesses a non-empty ARIA role, highlighting potential accessibility issues if not met.

Alt Text Verification:

async verifyAltText() {
    // Assumes Node's assert module is imported in the helper:
    // import assert from "node:assert";
    const images = await browser.execute(() => {
        const images = document.querySelectorAll("img");
        return Array.from(images).map((image) => ({
            alt: image.getAttribute("alt") || "",
            src: image.getAttribute("src") || "",
        }));
    });
    images.forEach((image, index) => {
        assert.ok(
            image.alt.trim() !== "",
            `Missing or empty alt text on image ${index + 1}: ${image.src}`
        );
    });
}

This function, using asynchronous execution in a browser context, verifies the presence of meaningful "alt" text for images. It maps and checks each image's "alt" attribute, providing assertions for potential accessibility concerns and identifying specific images lacking descriptive text.

2. Regression Testing for Accessibility Features

Regression testing ensures that existing accessible features remain intact during code changes. Automated tests can be created to verify the accessibility of elements and features. In this section, we'll demonstrate how to write automated tests for accessibility using WebDriverIO.

import AccessibilityHelper from "../helpers/accessibilityHelper.js";

describe("Accessibility Regression Testing", () => {
    beforeAll(async () => {
        // Navigate to the page under test (URL is a placeholder)
        await browser.url("/");
    });

    it("Page should have sufficient color contrast for text", async () => {
        await AccessibilityHelper.verifyContrastRatios();
    });

    it("Page should provide meaningful labels for ARIA landmarks", async () => {
        await AccessibilityHelper.verifyARIALandmarkLabels();
    });

    it("Page should have valid ARIA roles for dynamic content", async () => {
        await AccessibilityHelper.verifyValidARIARolesForDynamicContent();
    });
});

This is a basic example, and you can extend it based on your application's specific accessibility features.

Also, you can visit our GitHub page to review the functions used for verifying the contrast ratio, ARIA landmarks, and others.
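For intuition, the contrast check boils down to the WCAG 2.x formula: each color channel is linearized, relative luminance is computed as 0.2126 R + 0.7152 G + 0.0722 B, and the ratio is (L1 + 0.05) / (L2 + 0.05). Here is a minimal, framework-free sketch of that math; a real helper like `verifyContrastRatios` would additionally read computed styles from the page:

```javascript
// Linearize an 8-bit sRGB channel per the WCAG relative-luminance definition
function channel(c) {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color
function luminance([r, g, b]) {
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between foreground and background colors
function contrastRatio(fg, bg) {
    const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
    return (l1 + 0.05) / (l2 + 0.05);
}

// Black on white yields the maximum ratio of 21:1;
// WCAG AA requires at least 4.5:1 for normal-size text
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(2)); // → "21.00"
```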

Parts that Shouldn't Be Covered with Automation Tests

Automated testing is a powerful tool, but there are certain aspects of the software development and testing process that are better suited for manual evaluation. Understanding these areas helps strike a balance between automated and manual testing, ensuring a comprehensive testing strategy.

1. User Experience and Interaction Testing

While automation excels at repetitive and data-driven tasks, it may lack the nuanced understanding required for assessing the overall user experience (UX) and intricate interaction patterns. Manual testing remains invaluable in evaluating subjective aspects of UX, such as design aesthetics, intuitive navigation, and the overall "feel" of the application. Testers can provide qualitative insights that go beyond what automated scripts can detect, offering a more holistic perspective on how users might interact with the system.

2. Complex Accessibility Scenarios

Accessibility testing is crucial for ensuring that digital products are usable by individuals with diverse needs. However, some accessibility scenarios are inherently complex and may involve subjective assessments that are challenging to fully automate. Scenarios like evaluating the effectiveness of alternative text for images in conveying context or determining the appropriateness of ARIA roles in specific contexts often require human judgment. Manual testing allows testers to consider the broader context, cultural nuances, and evolving accessibility standards, ensuring a more nuanced evaluation of the application's accessibility features.

Strategies for Effective Automated Accessibility Testing

Automated accessibility testing is an integral part of ensuring inclusive and compliant digital experiences. To maximize the effectiveness of our automated accessibility testing efforts, we should consider the following strategies:

Create Maintainable and Scalable Tests

When developing automated accessibility tests, we should focus on creating maintainable and scalable test suites. Utilize modular and reusable test components to avoid duplication of effort. Regularly update the tests to accommodate changes in the application's structure and content. This not only ensures the longevity of our test suite but also streamlines the testing process as the application evolves.
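One way to keep checks modular is to generate them from data rather than duplicating assertion logic per attribute. As a sketch (the factory name and the element-snapshot shape are illustrative, not from the post's codebase), a single helper can produce checks over plain element snapshots like those returned from a `browser.execute` call:

```javascript
// Hypothetical reusable check factory: given an attribute name, return a
// function that finds element snapshots missing a non-empty value for it
function makeAttributeCheck(attribute) {
    return (elements) => elements.filter((el) => !(el[attribute] || "").trim());
}

// New checks become one-liners instead of copy-pasted test code
const missingAlt = makeAttributeCheck("alt");
const missingRole = makeAttributeCheck("role");

console.log(missingAlt([{ alt: "Company logo" }, { alt: "" }]));
// → [ { alt: '' } ]
```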

Use Accessible Automation Tools

Using accessible automation tools is key to ensuring accurate and reliable accessibility testing. These tools can properly interact with our application, including ARIA attributes, keyboard navigation, and screen reader interactions, just like real users would. This ultimately enhances the effectiveness of our testing efforts.

One such great tool that I would recommend is axe-core.
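After a test injects axe-core into the page and runs a scan, the raw results are verbose; a small reducer makes them easier to assert against. The sketch below only assumes the standard axe-core result shape (`{ violations: [{ id, impact, nodes }] }`); the `summarizeAxeViolations` helper name is illustrative:

```javascript
// Reduce an axe-core result object to one summary entry per violated rule
function summarizeAxeViolations(results) {
    return results.violations.map((violation) => ({
        rule: violation.id,
        impact: violation.impact,
        occurrences: violation.nodes.length,
    }));
}

// Example with a mocked axe-core result
const mockResults = {
    violations: [{ id: "image-alt", impact: "critical", nodes: [{}, {}] }],
};
console.log(summarizeAxeViolations(mockResults));
// → [ { rule: 'image-alt', impact: 'critical', occurrences: 2 } ]
```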

Integrate Accessibility Testing into Continuous Integration Pipelines

By incorporating accessibility tests into our CI pipelines, we can proactively identify and address accessibility issues before they reach production. This benefits everyone by minimizing the impact on users and reducing the effort required for remediation.
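A common pattern is to gate the pipeline on violation severity rather than failing on every finding. The impact levels below follow axe-core's convention ("minor" through "critical"), but the threshold choice itself is a team decision, not a fixed rule:

```javascript
// Severity ranking following axe-core's impact levels
const IMPACT_RANK = { minor: 1, moderate: 2, serious: 3, critical: 4 };

// Decide whether a CI run should fail: true if any violation meets
// or exceeds the configured severity threshold
function shouldFailBuild(violations, threshold = "serious") {
    return violations.some(
        (v) => (IMPACT_RANK[v.impact] || 0) >= IMPACT_RANK[threshold]
    );
}

console.log(shouldFailBuild([{ impact: "moderate" }])); // → false
console.log(shouldFailBuild([{ impact: "critical" }])); // → true
```

In a pipeline, the test runner would call this after the scan and set a non-zero exit code when it returns true, so the job fails before the change is merged.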

Educate Development Teams on Accessibility Best Practices

As a team, we can promote a culture of accessibility by investing in training and resources that equip developers with accessibility best practices. This shared understanding will empower everyone to write inclusive code from the beginning, minimizing the need for costly fixes and ensuring that we deliver products that are accessible and beneficial to all users.

Regularly Review and Update Test Criteria

Keeping pace with evolving accessibility standards is crucial. By regularly reviewing and updating our test criteria, we can proactively ensure that our automated tests remain relevant and effective in identifying potential accessibility issues.

Balancing Automation and Manual Testing

In our pursuit of comprehensive testing, it's vital to recognize the significance of harmonizing both automation and manual testing approaches. This section delves into the importance of striking a balance between these methodologies and sheds light on scenarios where manual testing offers unique value that automation might overlook.

1. Embracing a Holistic Testing Approach

A holistic testing approach combines the strengths of automation and manual testing to deliver a well-rounded evaluation of a digital product's quality. While automated testing excels in repetitive and algorithmic validations, manual testing adds a human touch, providing a nuanced assessment of user experience, visual aesthetics, and real-world scenarios.

2. Scenarios Enriched by Manual Testing

Exploratory Testing for User Experience

Exploratory testing, driven by human intuition, remains a powerful tool for uncovering unforeseen issues related to user experience. Testers can mimic user interactions more organically, identifying subtle nuances, user interface irregularities, or unexpected behaviors that might elude automated scripts.

Usability and Accessibility Assessment

Manual testing shines in assessing the subjective aspects of usability and accessibility. Testers can evaluate the ease of navigation, aesthetic appeal, and overall user-friendliness. Additionally, manual testing is instrumental in detecting intricate accessibility issues that may not be adequately captured by automated tests, such as nuanced screen reader interactions or context-specific challenges.

Edge Cases and Uncommon Scenarios

In scenarios involving rare edge cases or unpredictable user behavior, manual testing proves invaluable. Testers can creatively explore unlikely sequences of user actions, ensuring the application's resilience and responsiveness in situations that automated scripts might not cover comprehensively.

3. Achieving Synergy

The key to a successful testing strategy lies in the seamless integration of automated and manual testing efforts. By leveraging the strengths of each approach, teams can enhance test coverage, improve the precision of defect detection, and ultimately deliver a digital product that excels in both functionality and user satisfaction.

Recognizing the synergies between automation and manual testing and strategically applying each where it adds the most value leads to a robust testing ecosystem. This approach ensures that our digital products not only meet functional requirements but also provide an exceptional user experience across diverse scenarios and usage patterns.


In conclusion, this blog emphasizes the importance of a balanced accessibility testing strategy, blending the strengths of both automation and manual testing. From creating maintainable automated tests to fostering a culture of accessibility within development teams, the key lies in a thoughtful and inclusive approach. Developers are encouraged to harmonize these testing methodologies, ensuring not only functional robustness but also a superior and inclusive user experience for all.

Key takeaways:

- Maintain scalable automated tests for long-term adaptability.
- Choose accessible automation tools that accurately simulate real user interactions.
- Integrate accessibility testing into CI pipelines so issues are identified and resolved early.
- Invest in ongoing education within development teams on evolving accessibility best practices.
- Regularly review and update test criteria to stay aligned with current accessibility standards.