Having long been an afterthought, accessibility on the web is increasingly coming to the forefront. At Frank, we've really enjoyed giving accessibility a greater focus, and as it's now one of our specialities, we're often asked the same few key questions. Truth be told, the answers are usually longer than the time we have to give them.

Who's the most accessible?

Honestly, we don’t know! Yes, there are websites that can give you an “accessibility rating”, but for the reasons noted throughout, it really is a difficult thing to measure. Web accessibility isn't a competition, and probably shouldn't be treated as one. There are a few issues with viewing accessibility this way; two stand out to me:

1. It's too easy for the focus to become your 'competitors', not your users. When thinking about accessibility, our focus should always be on the users.

2. There isn't a level playing field from the start. Most websites are not equal in functionality, design or user base - at its best, accessibility work is about balancing them all.

At its most basic, a black-and-white website with a few pages and only the most important information is a highly accessible website. It would probably tick every WCAG (Web Content Accessibility Guidelines) 2.1 box, even at the AAA standard. But this website also forgoes pretty much everything else: there is no design or functionality to speak of.

Contrast that with a beautiful website, one which pushes the boundaries of web technologies and provides users with a completely new experience. That website will be infinitely more difficult to make accessible - in some cases impossible, leading to different or auxiliary solutions. There's not much point comparing these sites, even if the first weren't so sparse; the difference between the thought and effort required for each remains vast.

While competition is great encouragement (who doesn't like winning?), the pursuit of 'winning' is sure to distract us from the original goal. Instead, we set out to make a website as accessible as possible for its users, whilst balancing design and function alongside. Ultimately, we want everyone to benefit from the website, and that means making sure accessibility standards remain the goal throughout.

Automated or manual accessibility testing?

Nobody would deny that automated testing is a great tool. Give an automated accessibility checker a series of webpages, and it will surface a lot of the small issues that a human would struggle to spot.

Add to this that there are lots of options out there: automated testing tools both free and paid-for, proprietary and open-source. While they will all claim to be the best, the reality is that most of them are great at what they do. They take black-and-white accessibility issues, run a series of pre-defined checks against them, and advise you whether or not each one is a concern.
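To make that concrete, here's a minimal sketch of what running one of those checkers looks like, using pa11y (one of the free, open-source options) as an example - the URL and the WCAG level are placeholders, and any comparable tool works along the same lines.

```typescript
// A minimal sketch of a single-page automated audit using pa11y,
// one of the free, open-source checkers. The URL is a placeholder.
import pa11y from "pa11y";

async function auditPage(url: string): Promise<void> {
  // Runs pa11y's pre-defined checks against the rendered page,
  // here against the WCAG 2 AA standard.
  const results = await pa11y(url, { standard: "WCAG2AA" });

  for (const issue of results.issues) {
    // Each reported issue maps back to a specific check and DOM node,
    // e.g. a missing alt attribute or insufficient colour contrast.
    console.log(`${issue.code}: ${issue.message} (${issue.selector})`);
  }
}

auditPage("https://example.com").catch(console.error);
```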

But accessibility isn't all objective. Even in best-case scenarios, there is an element of subjectivity.

There are rough estimates bouncing around for how many common accessibility issues automated tests can successfully find. On the lower end, these hover at about 40%; a more generous number that most people might accept is 50%. I'd even say that, through our extensive experience, and given that we're not often testing for AAA standards, we might be able to find as much as 60%. All to say that for roughly half of all accessibility issues, we simply need a human to determine whether something is accessible.

Images are a great example of this. Let's take alternative text... With few exceptions, images should have alternative text. Computers can certainly test this, and they can determine where alternative text is missing. However, where there is alternative text, they can't determine how relevant that alternative text is. A picture of some apples, for example, with the alternative text of 'oranges' would be fine as far as most automated tests are concerned - the image has a text alternative. But the alternative text is at best inaccurate - at worst, it's misleading. Many would consider this less accessible.
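To illustrate why that passes, the sketch below mirrors the presence-only check most automated tools effectively perform; the function is our own illustration, not any particular tool's implementation.

```typescript
// A sketch of the presence-only alt text check automated tools perform,
// written to run in a browser context against the live DOM.
function findMissingAltText(doc: Document): string[] {
  const failures: string[] = [];
  doc.querySelectorAll("img").forEach((img) => {
    // A missing alt attribute is a black-and-white failure. (Purely
    // decorative images may legitimately carry an empty alt="".)
    if (!img.hasAttribute("alt")) {
      failures.push(`Missing alt attribute: ${img.src}`);
    }
    // An image of apples with alt="oranges" sails straight through:
    // judging whether the text is *relevant* still needs a human.
  });
  return failures;
}
```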

A similar concern relates to 1.4.5 (Images of Text) of WCAG (Web Content Accessibility Guidelines) 2.1. If there is a poster on a page, automated testing can't identify it as anything other than an image. It can check for alternative text, but if the poster contains a lot of text, even alternative text probably won't be enough. We need to check for a full text alternative that may be separate from the image, taking the context of the page into consideration. Automated tests just can't do this.
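One common way we'd approach it - sketched below with purely illustrative names and copy - is to keep the poster's information as real text on the page, so the image no longer carries the message alone.

```typescript
// An illustrative pattern for 1.4.5: the poster's content also exists
// as real text on the page. All names and copy here are made up.
const posterSection: string = `
  <figure>
    <img src="summer-fair-poster.jpg"
         alt="Summer fair poster; full details are given below" />
    <figcaption>
      <h2>Summer fair</h2>
      <p>Saturday 12 July, 10am to 4pm, Town Hall Gardens.</p>
      <p>Free entry, with stalls, live music and local food vendors.</p>
    </figcaption>
  </figure>`;
```

An automated test would pass the image either way; whether the caption genuinely covers everything on the poster is the part a human has to verify.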

Additionally, where we do provide accessible alternatives - making the original, inaccessible item acceptable - automated testing will still mark it down. Context is crucial.

To that point, we do both at Frank. On a new website build, we average about 30 hours of accessibility work per project - even more for an accessibility review of a large existing site.

We'll always start with a series of automated tests: they help us find a representative sample of pages to test against, and they catch a lot of that “low-hanging fruit” that, in reality, there's a good chance we'd miss doing everything manually. However, we will always double-check those results - not in a line-for-line sense, but to see what does and doesn't make sense, what issues are repeated throughout the site, and what might be unique to one piece of functionality or one page.
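As a rough sketch of that triage step - the aggregation logic and URLs below are our own illustration, not a fixed process - running the same checker over a sample of pages makes the repeated issues stand out.

```typescript
// A rough sketch of triaging automated results across a sample of pages:
// issues flagged on every page likely come from shared templates, while
// issues seen once are probably specific to one page or feature.
import pa11y from "pa11y";

async function auditSample(urls: string[]): Promise<void> {
  const counts = new Map<string, number>();

  for (const url of urls) {
    const results = await pa11y(url);
    for (const issue of results.issues) {
      counts.set(issue.code, (counts.get(issue.code) ?? 0) + 1);
    }
  }

  for (const [code, count] of counts) {
    const scope = count === urls.length ? "likely site-wide" : "localised";
    console.log(`${code}: flagged on ${count} page(s), ${scope}`);
  }
}

auditSample([
  "https://example.com/",
  "https://example.com/news",
  "https://example.com/contact",
]).catch(console.error);
```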

We will then always follow up with manual tests, the sorts that computers can't do. We'll test with assistive technologies, we'll navigate without a mouse, and we'll experience the site ourselves, doing our best to put ourselves in the shoes of the user. There are also points of the WCAG (Web Content Accessibility Guidelines) that we know we need to test manually before we can confidently say whether or not a website complies with them.

I'd hesitate to put a number on each, but at a guess there's a 60/40 split between the two testing methods - though it's not a split we've ever measured. The reality is that we need to do both regardless.