Kate Kalcevich, Head of Accessibility Innovation at Fable, gave this talk at the Product-Led Festival. Check out the highlights of the presentation below, or watch it on-demand.

I’m really excited to talk about KPIs for accessibility. This is something I’ve been thinking about for two years, and I finally decided to write down what I think and give a talk on it. So let's kick this off.

My background

A little bit of background about me first. I’ve been in the space of accessibility and thinking about how to make digital products more inclusive for people with disabilities for over two decades now.

I have some certifications, which means that I took a test to prove that I know what I'm talking about when it comes to the technical side of accessibility. And I think what's more important to know is that I have a background as a practitioner. I started my career as a designer and a front-end developer, and so I really learned what accessibility means in the practice of building websites and apps.

I was also in leadership roles in accessibility. I was the lead at the Ontario Digital Service, which is the equivalent of a state government here in Canada. I was also in a lead position at Canada Post, which would be the equivalent of the United States Postal Service.

Then I joined a company called Fable in 2020. Fable is a Toronto-based startup that’s looking to make the practice of building websites and apps more inclusive.

And I myself have a disability. I've worn hearing aids in both of my ears since I was six years old. And typically, I’ll rely on things like captions and transcripts, but I also read lips.

3 key ways to test accessibility

Before we really dive into it, I want to just level set and make sure we have the same understanding about KPIs.

A key performance indicator is a critical way of tracking your progress toward some sort of goal or intended result. So in the case of an accessibility KPI, we have some sort of accessibility goal in mind and track how we're progressing toward that goal.

But before we really talk about that, I need to make sure everyone has at least a high-level understanding of how a person might test accessibility.

We've got automated tools that are out there, and they’ll run on a website or a native mobile app and check for certain criteria.

We have a set of international standards around accessibility that people are supposed to follow in order to make their digital products more inclusive for everyone.

Some of these automated tools you have to pay for, some of them run on a per-page basis, and some of them continuously crawl an entire website looking for accessibility issues.
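As a rough illustration of how these tools work, here's a minimal sketch that scans a single page using the open-source axe-core engine via its Puppeteer integration. The URL is a placeholder, and real setups vary by tool.

```typescript
// Minimal sketch: scan one page with axe-core's automated WCAG checks.
// Assumes the puppeteer and @axe-core/puppeteer packages are installed.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

async function scanPage(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // analyze() injects axe-core into the page and runs its rule set
  const results = await new AxePuppeteer(page).analyze();

  // Each violation reports the failed rule, its impact, and affected nodes
  for (const v of results.violations) {
    console.log(`${v.impact}: ${v.id} (${v.nodes.length} elements affected)`);
  }

  await browser.close();
}

scanPage("https://example.com").catch(console.error); // placeholder URL
```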

Another way to test accessibility, which is quite common if you're working on a rather large website or product, is an accessibility audit. This is where you have an expert come in, look at the site, and tell you where your issues are. They might even do some formal documentation for you.

In the US, there's something called a VPAT, which is a Voluntary Product Accessibility Template, and you fill that out and you end up with an Accessibility Conformance Report, which is an ACR. This is just a written document that says how accessible your product is or isn't.

And then the third way to test accessibility is what we do at Fable, and it's a focus on user testing. How do you know if something's accessible to a person with a disability? Well, you get them to use it the same way you’d do usability testing with anyone else, and that's what we recommend.

There are definitely some challenges with the other two ways of testing accessibility. For instance, if you're using an automated tool, you don't necessarily get full coverage of all the issues, only what a computer can detect.

And if you do an expert review, it's really hard to pull out of that review or report what the most important things to tackle are and how to actually tackle them.

When you use real users to find accessibility issues, you really understand the barriers and how severe they are.

Challenges associated with measuring accessibility

Now, a typical accessibility KPI is full conformance with what's called the Web Content Accessibility Guidelines, or WCAG for short. It’s a really awkward acronym, and they could’ve picked a better name for it, but it is what it is.

It’s a set of international standards that folks have agreed on. If we follow this standard, a thing is accessible. And it has different versions. There was a 1.0, a 2.0, and we're currently on 2.1 and looking to launch 2.2 rather soon.

We have different levels of conformance. Level A is the bare minimum of accessibility. Double A means it's pretty good, and triple A is really good and is sometimes impossible to achieve.

And so what these standards give you is predictability.

Take a door for example. People generally know how to open a door. It has a thing called a handle. You might turn it, you might pull it down, or push it depending on the type of handle or lever.

And that's what standards give us. There aren’t random things with doors where you have to push a pedal at the bottom of the door to open it. You don't have to figure out how to open it each time you come to the door. So that's what accessibility standards really give you.

But there are some challenges with this kind of KPI. If you're trying to lead towards a fully accessible product based on standards, is that even possible to achieve? Websites are complex, they're constantly changing, and you have all sorts of people in teams working on them.

You might even have some legacy stuff in there that you can't even change. Or you might be pulling in widgets, components, or plugins and using them on your website, but you don't control the code.

Or you might be using a framework and it might be very difficult for you to implement some things in an accessible way. So if you're in a position where you don't think you can get full conformance, how do you measure your success when it comes to accessibility?

And how do you measure progress? If full conformance means 100% of everything meets the guidelines and is accessible, what's the metric? Are you 25% compliant at a certain point in time because 25% of your site fully follows the guidelines? Or is it when you've met 25% of the guidelines? It gets really complicated.
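To make the ambiguity concrete, here's a hedged sketch, with hypothetical data, of how the same site can produce very different numbers depending on which interpretation you pick.

```typescript
// Hypothetical shape: how many WCAG success criteria each page passes.
type PageResult = { page: string; criteriaPassed: number; criteriaTotal: number };

const pages: PageResult[] = [
  { page: "/home", criteriaPassed: 50, criteriaTotal: 50 },
  { page: "/search", criteriaPassed: 40, criteriaTotal: 50 },
  { page: "/checkout", criteriaPassed: 20, criteriaTotal: 50 },
  { page: "/account", criteriaPassed: 10, criteriaTotal: 50 },
];

// Interpretation 1: share of pages that fully conform (here: 1 of 4 = 25%)
const fullyConformingPages =
  pages.filter((p) => p.criteriaPassed === p.criteriaTotal).length / pages.length;

// Interpretation 2: share of criteria met across the whole site
// (here: (50 + 40 + 20 + 10) / 200 = 60%)
const criteriaMet =
  pages.reduce((sum, p) => sum + p.criteriaPassed, 0) /
  pages.reduce((sum, p) => sum + p.criteriaTotal, 0);

console.log(fullyConformingPages, criteriaMet); // 0.25 vs 0.6: same site, very different "score"
```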

And then it completely ignores the idea of usability. It's entirely possible to have a product that's fully accessible that nobody can use. And that's a problem because usability is as important as accessibility when it comes to making sure that people can actually get value out of that product.

And then there's another issue with these standards, which is objectivity. The only way to really measure against a standard is by using a combination of automated tools and maybe some humans who are experts in accessibility.

But every standard and every guideline is open to interpretation. They're not completely prescriptive about how to test against them. So you could have two different people audit the same site and come to different conclusions about how well it conforms.

It happens all the time: a company will get an audit, then work with another company and say, “Our auditor said that we should be doing this.” And the other company will disagree with the way that should be done or how important it is. So it's not really a straightforward way to measure accessibility.

So what can we do instead?

4 alternative KPIs for accessibility

Well, there are some alternative KPIs. I’ve got four of them here, and I'm going to drill down into each one. But the idea is that depending on where your organization and your product are in your accessibility journey, and what suits your organization best, you’d pick one or some of these KPIs, not necessarily all of them.

The first one is a way of measuring the skills that people in your organization have, or the foundational accessibility training they've had. You can't really create accessible products if you don't have people who know how to create accessible products.

So rather than measuring the product itself, if you measure the capabilities of your team, it's one way of knowing how you're progressing in that journey.

The next KPI is the coverage that you have in accessibility testing on your product. This could be with automated tools, but I'm more likely to recommend tracking how many different parts of your product (or how many of your products, if you have more than one) have been tested by people with disabilities.

Another important thing to consider is sharing of accessibility insights. And the reason I consider this to be a critical KPI is because accessibility is such a whole team thing. It's never a one-person job. Even if you have an accessibility team or an accessibility specialist, they alone cannot make this happen. It's everyone coming together.

And when you share the insights, you build that awareness, you build that knowledge and understanding, and you really start to get people on board.

And the last one is benchmarking. So if those WCAG standards aren't the best way of measuring, what other benchmark of product accessibility could we use?

I'm going to go more into detail on each of these.

Measuring skills and knowledge

When it comes to the skills and knowledge of a digital team that's working on digital products, you can look at training completion rates.

So assuming you have a program in place for training employees on accessibility, or you’ve given them access to external training, you can look at how many people have completed that training. And you could set a goal like ‘90% of people on the team should complete training within six weeks of joining the organization.’
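As a sketch, measuring that goal is mostly a matter of counting who completed training within the window. The data shape here is hypothetical.

```typescript
// Hypothetical tracking data: when each person joined and when (if ever)
// they completed accessibility training.
type TeamMember = { joined: Date; trainingCompleted?: Date };

const SIX_WEEKS_MS = 6 * 7 * 24 * 60 * 60 * 1000;

function trainingCompletionKpi(team: TeamMember[]): number {
  const onTime = team.filter(
    (m) =>
      m.trainingCompleted !== undefined &&
      m.trainingCompleted.getTime() - m.joined.getTime() <= SIX_WEEKS_MS
  ).length;
  return (onTime / team.length) * 100; // goal from above: 90 or higher
}
```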

If you have vendors building products for you, it gets a little more complicated. You might want to have KPIs around procurement and understanding the knowledge of the vendors that you use.

Another KPI that you could use for your internal team is how much engagement you're seeing with disabled users. One way of getting access to testing with people with disabilities is through Fable: we connect companies with people with disabilities to do user testing.

But it could be done any way. You could do your own recruitment, or you could use any other accessibility company that provides that kind of engagement.

And so when people start meeting with people with disabilities and observing them using a product, they start to develop this real understanding of accessibility that goes deeper than just the technical sense or just a design sense.

So I think it's really important to have that connection. Setting a goal that ‘60% of the team observes at least one user interview’ is going to really strengthen that understanding of accessibility and lead to the creation of more accessible products in the end.

I myself spent much of my career in accessibility just focused on the standards. And at the point in my career where I actually started meeting other people with disabilities, people who were blind, people who had low vision, people who had mobility issues, and watching them use products, it completely changed the way I thought about accessibility. I realized how much I’d been missing out on.

So that's why this is such an important KPI.

And then the other thing you can do is ask people about their confidence in their skills. Confidence isn't the same as competence, but it's something. If you have a team that feels confident in their ability to design and build accessible products, that's saying a lot.

And this is something we do at Fable. We have a product called Upskill, which is accessibility training. And at the end of every course, we ask people to rate their confidence in going out and applying what they've learned about accessibility.

And you might set a goal like we have. We want people to answer at least four out of five on a scale of confidence, with zero being not confident at all and five being extremely confident. So then at least you get a sense of how the team is feeling.

Product coverage

When it comes to product coverage, again, my emphasis would be much more on testing with people with disabilities. They don't even have to be users. There's so much you can learn about accessibility from somebody who isn't a user of your product. If you're not building an accessible product, you don't have any users with disabilities because they can't use it.

So where do you start?

You start by paying people for testing, or hiring a company to do user testing for you with people with disabilities. Having a KPI around how much coverage you have can be an important marker of understanding your current product's accessibility.

So I'm going to give three examples here.

We might have one company that only has one product, and within that product are 10 really critical user flows, and they've tested three out of the 10. So maybe that’s their current state, and the KPI is for all 10 of those critical flows to have at least some testing by people with accessibility needs.

You may have another company that has two products, and 40% of one product has been tested. Here you're looking at not just the critical flows but the entire product, and what percentage of it has been tested.

Their other product might have only 10% tested. So you set goals around the percentage tested to try to get to 100%.

Or you might have a third company where the focus is just on their new features and new releases. And they've got a forward-thinking approach to accessibility.

It may be challenging for them to go back. They might have some legacy software, or just from a budgetary and practicality perspective, they want to test all of the new features that they're currently building and planning to release. That testing could happen before release.

Or maybe they're using something like LaunchDarkly, where you can put certain features out to certain users so that you can test in production.
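Here's a sketch of that gating pattern with a hypothetical flag client. LaunchDarkly's real SDKs follow the same idea of evaluating a flag per user, but the interface below is invented for illustration.

```typescript
// Sketch of the feature-flag pattern with a hypothetical flag client;
// real SDKs evaluate flags per user in much the same way.
interface FlagClient {
  isEnabled(flagKey: string, userKey: string): Promise<boolean>;
}

async function renderCheckout(flags: FlagClient, userKey: string) {
  // Expose the redesigned, not-yet-fully-tested flow only to the cohort
  // recruited for in-production accessibility testing.
  if (await flags.isEnabled("new-checkout-flow", userKey)) {
    return renderNewCheckout(); // gets accessibility testing in production
  }
  return renderCurrentCheckout(); // everyone else sees the stable flow
}

declare function renderNewCheckout(): string; // placeholders for the
declare function renderCurrentCheckout(): string; // actual UI rendering
```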

So your KPI might be that 80% of all your new features are tested for accessibility.

So those are different ways of thinking about how you can structure a KPI to get a good sense of accessibility coverage of your products.
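As a small sketch of the first example, coverage can be as simple as a register of critical flows and whether each one has had any testing by people with disabilities. The flow names here are hypothetical.

```typescript
// Hypothetical register of critical user flows: has each one had any
// testing by people with disabilities yet?
const criticalFlows: Record<string, boolean> = {
  "sign-up": true,
  "search": true,
  "checkout": true,
  "account-settings": false,
  "password-reset": false,
  // ...the remaining flows, 10 in total in this example
};

const flows = Object.values(criticalFlows);
const coverage = flows.filter(Boolean).length / flows.length;
console.log(`Critical-flow coverage: ${(coverage * 100).toFixed(0)}%`); // KPI: reach 100%
```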

Sharing accessibility insights

Now we come to sharing insights. I hate to keep saying Fable, but it's important to give it as an example of how we do that user testing. We do user interviews: our customers set up a user interview with somebody who uses assistive technology, and it's recorded.

We really push people to take that recording and pull out small clips of the issues. And not just the bad things, but also the good things, because I think it's important to say that, for the most part, every product is going to have some good accessibility moments and some bad accessibility moments.

There could be a product that's completely inaccessible and you can't even get into it, so then we've got work to do. But if you take those moments and share them widely, that's really going to lead to a broader understanding of accessibility, the importance of it, and buy-in for it. So that's an important metric to have.

You might take the research that you do with people with disabilities and then have a quarterly meeting where you share the findings. You might do lunch and learns and you might track attendance at those meetings as a KPI to really get a sense of how broad the awareness of accessibility is within your company.

You might also look at things like sprint demos or quarterly reports for executives. How many times are executives getting exposed to accessibility in those scenarios? It's so critical for executives to understand where you're at in accessibility, why it's important, and what the issues are. That exposure is really going to help drive executive buy-in.

Benchmarking

So let's talk about benchmarking, which was the last KPI I had, and the one I'd like to dive the deepest into.

I think it's really important to align with whatever your current metrics are. Every organization is going to be different. You want to look at what you’re measuring when it comes to the user experience, or even on the engineering side of things, and then have accessibility metrics that are comparable to those.

So if I was looking at UX, I might use something like SUS, the System Usability Scale.

There’s actually an equivalent metric called the Accessible Usability Scale (AUS), which is a 10-question survey that you’d administer either automatically through your site when somebody completes something, or by emailing users.

Or you might just run a user interview with somebody with a disability and then walk them through the 10 questions at the end of it. And that’ll give you a numeric score, which you can look at and say, “Okay, so my SUS score is this, and my AUS score is that,” and see how close they are to each other.

It's a metric of the user experience and how good it was. It's one way of saying, “People without disabilities have this experience, and people with disabilities have that experience.”
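The AUS is modeled on the SUS, and to my knowledge it's scored the same way. Here's a sketch of the standard SUS scoring formula, where odd-numbered items are positively worded and even-numbered items negatively worded.

```typescript
// Standard SUS scoring, which the AUS follows as well: 10 items answered
// on a 1-5 scale, rescaled to a 0-100 score.
function susScore(responses: number[]): number {
  if (responses.length !== 10) throw new Error("SUS/AUS has exactly 10 items");
  const adjusted = responses.map(
    (r, i) => (i % 2 === 0 ? r - 1 : 5 - r) // odd items score r-1; even items score 5-r
  );
  return adjusted.reduce((sum, x) => sum + x, 0) * 2.5;
}

// Hypothetical comparison between the two audiences:
// susScore(responsesWithoutDisabilities) vs. susScore(responsesWithDisabilities)
```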

You can also track something like task completion, which a lot of UX teams do. They want to know how many people were able to complete a task on the site. And you might do that same kind of task completion with users with disabilities.

So you can say something like, “80% of people without disabilities were able to complete this task, which is one of our critical user flows, and only 20% of people with disabilities were able to complete it.”

So that gives you some metrics, and you can combine that with your product coverage metric to say, “Here's our product coverage, and within that coverage we have an AUS score and a task completion score,” and start to bring it all together.

You could also use NPS (Net Promoter Score), a rating of zero to 10 as to how likely somebody is to recommend a product, and look at how likely people without disabilities are to recommend it versus people with disabilities.
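For reference, NPS has a standard formula: the percentage of promoters (9s and 10s) minus the percentage of detractors (0 through 6). A sketch of comparing the two groups:

```typescript
// NPS: % promoters (9-10) minus % detractors (0-6), yielding -100 to +100.
function nps(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return ((promoters - detractors) / scores.length) * 100;
}

// Hypothetical comparison: how big is the gap between the two groups?
// const gap = nps(scoresWithoutDisabilities) - nps(scoresWithDisabilities);
```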

You could bring these into your reports if you're doing quarterly reports, monthly reports, or reporting on a per-feature or per-product basis. For any of the UX research that you're doing, you can have those completion rates and an AUS score. Just bring those quantifiable things into the conversation.

You could do something like a dashboard, where you have a table that shows before and after how accessible people felt the experience was. This could be an AUS score or a percentage ranking if you're using a different type of metric.

You could ask people how easy they thought it was to complete the experience and give that a percentage. You can look at task completion, and combine all the things.

It gives you a really good before-and-after way of assessing the overall accessibility of a product and the tasks within it, and it clearly shows executives the result.

This only really works if you're doing something in between the before and after. You want to take the before, do some accessibility improvements, and then do the after. And I know that sounds obvious, but it's amazing how many people don't actually do that.

On the engineering side of benchmarking, you could look at the number of backlog tickets for accessibility and set the goal of reducing them by a certain percentage each quarter.

You could look at automated accessibility test coverage and see what percentage of the product has that coverage and set a goal of increasing it by a certain percentage each quarter.

You could also look at regression rates for accessibility bugs. You really don't want to see a bug that you fixed recurring, so you can set a target. Generally, 10% or less is a good score for regression.
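A sketch of how that regression rate might be computed from a bug-tracker export; the data shape is hypothetical.

```typescript
// Hypothetical export: accessibility bugs that were marked fixed, and
// whether each was later reopened (i.e., the fix regressed).
type A11yBug = { id: string; fixed: boolean; reopened: boolean };

function regressionRate(bugs: A11yBug[]): number {
  const fixed = bugs.filter((b) => b.fixed);
  if (fixed.length === 0) return 0;
  const regressed = fixed.filter((b) => b.reopened);
  return (regressed.length / fixed.length) * 100; // target: 10% or less
}
```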

You could also look at frequency and severity, trying to decrease the frequency of high-severity accessibility issues by a certain percentage each quarter.

If you're not ready to decrease the number of high-severity issues, maybe your goal is just to give everything a severity rating, so you at least know, out of all your different issues, which ones are the worst and which ones are not as bad.

Or you could also look at some sort of indication of how widespread all of the issues in your backlog are.

For example, sometimes there are accessibility issues in the header or the footer, and that header and footer are repeated throughout the whole site. So you can get quite a reduction in the total number of accessibility issues just by fixing the header and footer, versus fixing an issue that appears on various pages at random and only affects the users on those pages.

When it comes to customer experience, you can look at metrics like how long it takes to resolve a complaint and set a goal of resolving an accessibility complaint within a certain number of days.

You could also look at adoption and retention of disabled users, and you'd want to do this in a pretty non-invasive way.

You don't want to be asking all your users if they have a disability. But if you're doing any kind of user survey, you can ask about certain features they use: whether or not they use transcripts, whether or not they have accessibility needs. That way, without directly asking about disability, you get a sense that you have users who benefit from accessibility.

You can also look at the usage of your accessibility-related features. So if you have dark mode on your site, I know that a lot of people with low vision do benefit from dark mode and use it. People like myself who have hearing disabilities will benefit from captions.

There might be things around personalization, like increasing the font size, that give you some sort of metric of how many people are benefiting from things that aren't directly accessibility related but definitely could help improve accessibility.

Accessibility KPIs are like a menu

I want to leave you with the idea that KPIs are like a menu. You want to choose an appetizer and a main. And if you're really ambitious, or really hungry, in this case, maybe you get a dessert as well.

It's not about trying to do everything all at once. Accessibility really is a journey. It's an ongoing thing. It's not a thing you do and then stop doing. It’s not a project, it’s a way of working. It’s inclusive design practices being embedded into the way teams create digital products.

I definitely don't want folks to feel overwhelmed, but to feel that now you have more choices.

There's not just one really hard-to-measure KPI, which is WCAG conformance. There are other things that you can look at in your practice to help you measure your progress toward building more inclusive products.