


How SenseIT CATS Creates a Better User Experience (UX)


Yes, SenseIT’s Continuous Automated Testing Simulator (CATS) will help you meet your compliance needs, but it will also help you provide a better UX for your users.

A False Dichotomy

The companies we work with care first and foremost about giving their users a great experience, and some have worried early on that implementing accessible design patterns would add complexity that gets in the way of that. This post explains why that concern is misplaced: accessible design and automated accessibility testing only improve your product’s UX. A more accessible UX is a higher-quality one, and a company that truly cares about its users’ experience can’t deliver one without making it accessible.


Ease of Use Is a Primary Factor in the APG

The Authoring Practices Guide (APG) for the World Wide Web Consortium’s (W3C) Accessible Rich Internet Applications (ARIA) specification is a set of instructions for making web applications and their HTML accessible and legible to assistive technologies (AT). The APG lays out a series of design patterns for correctly coding and labeling different kinds of custom elements according to their function[1]. Many of the patterns look similar, and the differences between them can be hard to discern, but approaching them from the perspective of user experience often makes things clearer.


A screenshot of the W3C’s “ARIA Landmarks Example” page, with each landmark visually represented by a differently colored border around it.


One example of this (out of many) is the difference between landmarks and window splitters. The APG gives instructions for labeling landmarks on a web page, which is particularly helpful for orienting a user on a page when they’re accessing it with assistive tech. That’s essential on pretty much any page, no matter what kind of content it holds. A window splitter is “a moveable separator between two sections, or panes, of a window that enables users to change the relative size of the panes.” The APG gives the example of a book-reading application with one pane for the table of contents and another for the content of a section of the book. Both design patterns convey how the content being viewed is logically divided within the display window at large. The difference is how the user is expected to interact with the elements: landmarks are informational, while window splitters let the user actively customize their view. Using the correct one can make or break a user’s experience of your product, and an accessibility test is the only way to know whether you’ve used them correctly so that an AT user gets the same UX as everyone else.
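To make the distinction concrete, here is a minimal sketch (not SenseIT’s implementation — the markup strings are illustrative) of the structural difference an automated test can inspect: a landmark is a labeled region with no interactive state, while a window splitter is a focusable separator that exposes its current position to assistive tech.

```python
# Hypothetical markup illustrating the two ARIA patterns discussed above.
# A landmark region is labeled but not interactive; a window splitter is a
# focusable separator whose size state (aria-valuenow) the user can change.
from html.parser import HTMLParser

LANDMARK = '<nav aria-label="Table of Contents"></nav>'
SPLITTER = ('<div role="separator" tabindex="0" aria-valuenow="30" '
            'aria-valuemin="10" aria-valuemax="90" '
            'aria-label="Resize table of contents"></div>')

class AttrCollector(HTMLParser):
    """Collects the attributes of the first tag it sees."""
    def __init__(self):
        super().__init__()
        self.attrs = {}
    def handle_starttag(self, tag, attrs):
        if not self.attrs:
            self.attrs = dict(attrs)

def attrs_of(markup: str) -> dict:
    parser = AttrCollector()
    parser.feed(markup)
    return parser.attrs

landmark = attrs_of(LANDMARK)
splitter = attrs_of(SPLITTER)

# The landmark only informs; the splitter must be keyboard-operable
# (tabindex) and expose its current size (aria-valuenow) to AT.
print("landmark interactive:", "tabindex" in landmark)   # False
print("splitter interactive:", "tabindex" in splitter)   # True
```

An automated check along these lines can flag, for instance, a “splitter” that is missing `tabindex` and therefore unreachable from the keyboard — exactly the kind of mix-up between the two patterns described above.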


UX Testing = Accessibility Testing

UX testing is more commonly known as usability testing. It means testing the product on real users to see how they interact with it, how easy it is to use, and how intuitive its design is. In its traditional form, usability testing is done with participants who represent the “average” user, and only at or near the end of the product’s development. What CATS does is not that, but it is similar in many ways, while making up for some of usability testing’s shortcomings.

CATS simulates a manual accessibility test, which in turn simulates a user with any kind of disability or assistive technology in place. While people with disabilities would ideally be included in usability testing, the logistical challenges involved in doing so have forced accessibility testing into a separate category, although it tests for the same thing.

Understanding that the two are the same is essential for inclusivity. And inclusivity matters not just for its social value but for its design value: a diverse range of user perspectives means more bugs are likely to be caught, more potential solutions are likely to be suggested, and those solutions are likely to be of higher quality, since they take a greater variety of experiences into account[2].

For users with disabilities, ease of use depends on factors beyond those that determine usability for users without disabilities. Those factors include how easily your product’s UI can be navigated with different kinds of assistive technologies (keyboard navigability is a must, for example). If a user has the screen magnified, can they contextualize the content they see within the page as a whole? Are there multiple ways to operate custom widgets with complex functionality (like drag-and-drop elements, which can be problematic for users with mobility-related disabilities)? These are all potential issues that affect a product’s usability but wouldn’t be caught in a usability test. An accessibility test would catch them.
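The checks above can be sketched in code. Here is a minimal, hypothetical auditor (not CATS itself) that flags the kind of issue a usability test would miss: custom interactive widgets that a keyboard or screen-reader user could not operate.

```python
# A minimal sketch of an automated accessibility check: flag elements with
# an interactive ARIA role that are missing keyboard access (tabindex) or
# an accessible name (aria-label / aria-labelledby). Roles and markup are
# illustrative, not an exhaustive rule set.
from html.parser import HTMLParser

class WidgetAuditor(HTMLParser):
    """Collects accessibility issues for interactive custom widgets."""
    INTERACTIVE_ROLES = {"button", "slider", "separator", "tab"}

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        role = a.get("role")
        if role in self.INTERACTIVE_ROLES:
            if "tabindex" not in a:
                self.issues.append(f"{role}: not keyboard focusable")
            if not a.get("aria-label") and not a.get("aria-labelledby"):
                self.issues.append(f"{role}: no accessible name")

page = """
<div role="button">Save</div>
<div role="slider" tabindex="0" aria-label="Zoom" aria-valuenow="100"></div>
"""

auditor = WidgetAuditor()
auditor.feed(page)
for issue in auditor.issues:
    print(issue)
# The unlabeled, unfocusable "button" is flagged; the slider passes.
```

A sighted mouse user testing this page would click the Save button and report no problem at all, which is exactly why these checks belong in an accessibility test rather than a conventional usability session.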


Accessibility Automation Saves Your Users’ Time

If we accept the premise that an accessible product is a usable one, we should consider what automating that process does for your product’s UX. Automated accessibility testing has two main advantages over manual testing when it comes to user experience. First, because automated tests can run inside the CI/CD pipeline, accessibility is built into the design rather than tacked on afterward. Second, the process is much faster, so releases and updates that improve the UX ship sooner than they would if manual accessibility testing were required.

Configuring SenseIT CATS for the first time is a short process (the exact length depends on factors that vary between users, but if all goes smoothly it can take as little as 10 minutes) that only needs to happen once. After that, each accessibility test run takes about as long as your functional tests, since they use the same scripts. You can choose how often to execute automated accessibility tests to suit your needs as efficiently as possible. In other words, a fully automated, full-coverage accessibility test adds virtually no extra time to your release timeline, which means your users spend less time waiting for much-needed updates. Nothing makes for a better UX than a well-maintained product that addresses user concerns quickly.


A Virtuous Cycle

Accessibility and UX go hand in hand, and automated accessibility testing can help you ensure that people with or without disabilities have a great experience with your product by making sure you’ve used accessible design patterns correctly, that your product is usable for everyone, and that you can release updates and improvements efficiently. Implementing CATS in your development cycle brings you one step closer to creating an experience that will truly delight your users.



[1] They can be found here.

[2] For examples of this claim in action, look no further than Microsoft’s inclusive tech lab.

Inclusive. Compliant. Simple.