Kate Kalcevich, Head of Accessibility Innovation at Fable, gave this talk at the Product-Led Festival. Check out the highlights of the presentation below, or watch it on-demand.

I’m really excited to talk about KPIs for accessibility. This is something I’ve been thinking about for two years, and I finally decided to write down what I think and give a talk on it. So let's kick this off.

My background

A little bit of background about me first. I’ve been in the space of accessibility and thinking about how to make digital products more inclusive for people with disabilities for over two decades now.

I have some certifications, which means that I took a test to prove that I know what I'm talking about when it comes to the technical side of accessibility. And I think what's more important to know is that I have a background as a practitioner. I started my career as a designer and a front-end developer, and so I really learned what accessibility means in the practice of building websites and apps.

I’ve also held leadership roles in accessibility. I was the lead at the Ontario Digital Service, which is the equivalent of state government here in Canada. I was also in a lead position at Canada Post, which would be the equivalent of the United States Postal Service.

Then I joined a company called Fable in 2020. Fable is a Toronto-based startup that’s looking to make the practice of building websites and apps more inclusive.

And I myself have a disability. I've worn hearing aids in both of my ears since I was six years old. And typically, I’ll rely on things like captions and transcripts, but I also read lips.

3 key ways to test accessibility

Before we really dive into it, I want to just level set and make sure we have the same understanding about KPIs.

A key performance indicator is a measurable way of tracking your progress toward some sort of goal or intended result. So an accessibility KPI sets an accessibility goal in mind and measures how close we are to reaching it.

But before we really talk about that, I need to make sure everyone has at least a high-level understanding of how a person might test accessibility.

We've got automated tools that are out there, and they’ll run on a website or a native mobile app and check for certain criteria.

We have a set of international standards around accessibility that people are supposed to follow in order to make their digital products more inclusive for everyone.

Some of these automated tools you have to pay for, some of them run on a per-page basis, and some of them continuously crawl an entire website looking for accessibility issues.
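Most of what automated tools catch are mechanical, pattern-based checks. As an illustration of the kind of rule a computer can verify, here is a minimal, hypothetical sketch in Python that flags `img` tags missing alt text (the kind of check tied to WCAG success criterion 1.1.1), using only the standard library:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with no alt text

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing.append(attr_map.get("src", "(no src)"))

checker = AltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
print(checker.missing)  # ['chart.png']
```

Note that a tool like this can tell you an alt attribute is missing, but not whether the alt text that is present actually describes the image well. That gap is exactly why automated checks alone don’t give full coverage.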

Another common way to test accessibility, especially if you’re working on a rather large website or product, is an accessibility audit. This is where an expert comes in, looks at the site, and tells you where your issues are. They might even produce some formal documentation for you.

In the US, there's something called a VPAT, which is a Voluntary Product Accessibility Template, and you fill that out and you end up with an Accessibility Conformance Report, which is an ACR. This is just a written document that says how accessible your product is or isn't.

And then the third way to test accessibility is what we do at Fable, and it's a focus on user testing. How do you know if something's accessible to a person with a disability? Well, you get them to use it the same way you’d do usability testing with anyone else, and that's what we recommend.

There are definitely some challenges with the other two ways of testing accessibility. For instance, if you're using an automated tool, you don't necessarily get full coverage of all the issues, only what a computer can detect.

And if you do an expert review, it's really hard to pull out of that report what the most important things to tackle are and how to actually tackle them.

When you have real users find accessibility issues, you really understand the barriers and how severe they are.

Challenges associated with measuring accessibility

Now, a typical accessibility KPI is full conformance with what is called the Web Content Accessibility Guidelines, or WCAG for short. It’s a really awkward acronym, they could’ve picked a better name for it, but it is what it is.

It’s a set of international standards that folks have agreed on: if we follow this standard, the thinking goes, a thing is accessible. And it has different versions. There was a 1.0 and a 2.0, we're currently on 2.1, and 2.2 is expected to launch rather soon.

We have different levels of conformance. Level A is the bare minimum of accessibility. Double A means it's pretty good, and triple A is really good and is sometimes impossible to achieve.
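To make the levels concrete, one well-known WCAG criterion is text color contrast: level AA requires a contrast ratio of at least 4.5:1 for normal-size text, and AAA raises that to 7:1. Below is a minimal sketch of the standard WCAG contrast calculation (relative luminance from linearized sRGB channels, then the ratio of the lighter to the darker, each offset by 0.05):

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an (r, g, b) color, channels 0-255."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
print(round(ratio, 2))  # 21.0, the maximum: black text on a white background
print(ratio >= 4.5)     # True: passes the AA threshold for normal text
```

This is the sort of criterion automated tools are good at: the math is fully specified in the standard, so a computer can verify it on every page. The thresholds of 4.5:1 (AA) and 7:1 (AAA) apply to normal-size text; WCAG allows a lower ratio for large text.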

And so what these standards give you is predictability.

Take a door for example. People generally know how to open a door. It has a thing called a handle. You might turn it, you might pull it down, or push it depending on the type of handle or lever.

And that's what standards give us. There aren’t random things with doors where you have to push a pedal at the bottom of the door to open it. You don't have to figure out how to open it each time you come to the door. So that's what accessibility standards really give you.

But there are some challenges with this kind of KPI. If you're trying to lead towards a fully accessible product based on standards, is that even possible to achieve? Websites are complex, they're constantly changing, and you have all sorts of people in teams working on them.

You might even have some legacy stuff in there that you can't even change. Or you might be pulling in widgets, components, or plugins and using them on your website, but you don't control the code.

Or you might be using a framework and it might be very difficult for you to implement some things in an accessible way. So if you're in a position where you don't think you can get full conformance, how do you measure your success when it comes to accessibility?