
WCAG 2.5: Input Modalities

Because it shouldn’t matter how you click, tap, type, or talk

Here’s a full transcript of the video, complete with detailed descriptions of the visuals. For visual users, we’ve included screenshots to show how transcripts are structured and why they’re such an important part of accessibility. Whether you prefer to watch, read, or both, we’ve got you covered.

Video transcript

Visual:

Pop-up ad on a desktop website. Two animated cats meow incessantly, under some obnoxious text that reads “Live chat with real cats! Click here!” The user’s cursor is hovering around the top right of the pop-up ad.

Jessica, voice over:

Ever feel like you need a microscope to click a button? 

Visual:

A magnifying glass appears over the screen, revealing a very small X button to close the pop-up ad. 

Jessica:

Or that your fingers are all thumbs when trying to drag or swipe? 

Visual:

The desktop is replaced with a phone, held by a Caucasian person’s hand. On the screen is an interactive map of the United States, with text that reads “Use two fingers to move around”. Using their other hand, the user touches the screen with two fingers and attempts to drag the map around, but instead it zooms in so far the map becomes unreadable. The person uses pinching gestures to zoom out again, but then runs into the same problem.

Jessica:

Let’s talk about how accessibility means making every interaction, whether it’s a tap, click, or voice command, work for everyone. 

Welcome to What in the World is WCAG? 2.5 Input modalities. 

Visual:

Title card. 2.5 Input modalities is scrawled onto a sticky note, which is slapped onto the screen by a cat’s paw. Now, we’re entering a presentation-style format.

Jessica:

Pointer gestures. 

Have you tried to pinch to zoom or swipe with three fingers, only to have it not work the way you expected?

I struggle every time I try to open the rotor on my iPhone. It’s supposed to be simple, but it feels like finger gymnastics. I can play the piano, but I can’t open the rotor consistently. 

Visual:

An iPhone with the rotor enabled. It cycles through 6 different settings: Characters, Words, Speaking rate, Containers, Headings, Actions. Next to the iPhone is a quote from Apple’s support website: “Rotate two fingers on your iOS or iPadOS device’s screen as if you’re turning a dial.” There’s also a disclaimer that this only works if VoiceOver is enabled on the device.

Jessica:

For many users, these complex gestures can be frustrating or even impossible to perform. Multi-finger gestures require fine motor skills that not everyone has, as evidenced by me.

Visual:

Diagram of complex gesture examples: pinching, three-finger swiping, and rotation with a finger and thumb.

Jessica:

Users with mobility challenges or those using assistive technologies can struggle with gestures that involve more than one point of contact on the screen. The pointer gestures criterion ensures that important actions can be performed with simpler gestures, like a single tap or click, instead of relying on complicated swipes, pinches, or rotations. 

Pointer cancellation. 

We’ve all been there. Just about to click, then suddenly change our minds. Maybe you were about to tap a link, but realized it wasn’t the right one. 

Visual:

A website homepage, with a big CTA button that says “Enrol now”. A cursor hovers over the button, presses down, then drags away before releasing, so that the button doesn’t press. 

Jessica:

Pointer cancellation is all about making sure that users can back out of an action before it’s completed. 

Here’s how it works. 

Visual:

A generic “Sign up” button, and a black person’s hand with a pointed finger.

Jessica:

When you press down on a button or link, that’s the down event, but the action shouldn’t be finalized until you release your finger, which is the up event. If you change your mind while holding down, you should be able to move your finger away and not trigger anything. 

It’s the same with your mouse, and it’s part of why double-clicking is still so common.
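In code, that distinction comes down to which event the action is bound to. Here’s a minimal JavaScript sketch, assuming a standard DOM button; `bindCancellable` is a hypothetical helper name:

```javascript
// Pointer cancellation sketch: trigger the action on the "up" event,
// which is what `click` represents, and never on the "down" event.
function bindCancellable(button, action) {
  // `click` only fires if the pointer both goes down AND comes up on the
  // element, so dragging away before releasing cancels the action for free.
  button.addEventListener("click", action);

  // Avoid this: committing on pointerdown leaves no way to back out.
  // button.addEventListener("pointerdown", action);
}
```

Because the browser already withholds `click` when the pointer leaves the element before release, binding to `click` gives you pointer cancellation without any extra logic.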

Visual:

A yellow file folder icon, as seen in Windows’ File Explorer or macOS Finder, named “Cat pictures”. A cursor hovers over it and clicks once, which simply highlights the folder. The cursor then double-clicks the folder, opening its contents: three pictures of cats, one of a Sphynx cat looking up at a stick toy, one of a beige cat with three stripes on its head, and one of a grey cat with white spots shown in profile.

Jessica:

This might seem small, but it prevents accidental clicks and taps, especially for users with motor impairments or those using touchscreens. 

An undo option is also important, including support for the backspace and delete keys on a keyboard.

Label in name. 

Imagine seeing a shopping cart icon on a website.

Visual:

A pink button with a white shopping cart icon, with a number one inside it. 

Jessica:

You might call it Basket, Shopping basket, Cart, or even Trolley.

Visual:

Speech bubbles appear all around the button with these different names. The trolley bubble has a British flag attached to it.

Jessica:

But if the accessible name for that icon doesn’t match what you see, or what you naturally say, it can create barriers for users relying on voice commands.

Visual:

The ARIA label for the button is revealed to be: aria-label="basket". 

Jessica:

So add some text and match the label to it. It’s better for everyone.

Quick note: when it comes to voice commands, the first word is the one that matters most. So if you have a shopping basket with a close button, you might want to give assistive technology users some extra context, like an ARIA label of “Close basket and continue shopping”. Just match up that first word with the visible text. 
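As markup, a fix might look like this sketch (the class name and item count are invented for illustration):

```html
<!-- Visible text "Basket" plus an accessible name that starts with the
     same word, so saying "click Basket" works for voice control users. -->
<button class="cart-button" aria-label="Basket, 1 item">
  <svg aria-hidden="true" focusable="false"><!-- cart icon --></svg>
  Basket
</button>
```

Because the `aria-label` begins with the visible word “Basket”, speech recognition software can still activate the button from what the user actually sees on screen.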

Motion actuation. 

Visual:

A phone with a text message app open. The user is talking to someone called real_cat49 who has a black and white cat as their profile picture. Their entire conversation is just the word “meow” over and over again. Based on the use of grammar and caps lock, the conversation has heated up. The user is now trying to type many meows in capital letters. 

Jessica:

I was on a bus the other day, and the ride was so bumpy that it kept triggering my phone’s “shake-to-undo” feature. Over and over again. It was beyond irritating. 

Visual:

As the user types, the phone occasionally shakes and deletes a bunch of the meows that were typed. The user is left having to redo most of the meows.

Jessica:

Motion actuation ensures that actions triggered by physical movements like shaking, tilting, or rotating a device aren’t the only option for interacting with content. While motion gestures can be convenient in some cases, they can also cause problems, especially for users with motor impairments or anyone like me on that bus who just can’t rely on their environment to keep things steady. 

To make things accessible, users should always have alternatives, like tapping a button or using a voice command, so they can perform actions in ways that work for them. 
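In code terms, that means routing the motion gesture and the on-screen control through the same action, with a setting to switch motion triggers off entirely. A minimal sketch; `createEditor` and its method names are hypothetical:

```javascript
// Motion actuation sketch: shake-to-undo is never the only path.
// The shake handler and the visible Undo button call the same logic,
// and users can disable motion triggers completely.
function createEditor() {
  const history = [];
  let motionEnabled = true;

  return {
    type(text) { history.push(text); },
    // Always available: a plain button or keyboard shortcut.
    undo() { return history.pop(); },
    // Optional extra: only reacts if the user hasn't turned motion off.
    onShake() { return motionEnabled ? history.pop() : undefined; },
    setMotionEnabled(enabled) { motionEnabled = enabled; },
    contents() { return history.slice(); },
  };
}
```

With this shape, a bumpy bus ride can trigger `onShake` all it likes once the user has switched motion off, while the Undo button keeps working as expected.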

Visual:

Taffy the Siamese cat appears from the corner of the screen with a happy expression and a pulsating heart above his head, as he looks at the bullet pointed list of alternatives to physical actions: Clicking or tapping a button, and voice commands.

Jessica:

Whether it’s shaky hands, a shaky bus, or a preference for more control, offering alternative methods ensures that motion-based features don’t become barriers. 

Dragging movements. 

You might not think twice about dragging and dropping, whether it’s reordering slides in a presentation, moving tasks around on a Kanban board, or dragging a pin across an online map. But dragging requires a surprising amount of fine motor control, and for some users, that just isn’t possible.

The solution? Simple. Your drag and drop functionality should work with point and click. 

Visual:

On the left is a column of blocks, labeled A, B, and C. The same column is on the right. For each one, a cursor interacts with the blocks, organizing them by click and drag on the left, and point and click on the right.

Jessica:

Instead of clicking, holding, and moving an item across the screen, users can click or tap once to select the item, then click or tap again to place it where they want. 
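That select-then-place pattern can be sketched as a tiny state machine; `createReorderer` and `activate` are hypothetical names for illustration:

```javascript
// Single-pointer alternative to drag-and-drop: the first activation
// selects an item, the second moves it to the target position.
function createReorderer(items) {
  let selected = null;
  return {
    activate(index) {
      if (selected === null) {
        selected = index;                  // first tap: pick the item up
      } else {
        const [item] = items.splice(selected, 1);
        items.splice(index, 0, item);      // second tap: drop it here
        selected = null;
      }
      return items.slice();
    },
  };
}
```

For the A, B, C blocks from the visual, tapping block A and then tapping the last position yields B, C, A, with no held-down dragging at any point.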

If it’s a map that users can pan around to view, add some arrow buttons to move it up, down, left, or right.

Visual:

An interactive map where the user presses arrow buttons to pan across the city of Boston. There are also zoom buttons available.

Jessica:

Target size: Minimum and Enhanced. 

Have you ever clicked the wrong button on a form because the buttons were too small or too close together? That’s a common frustration for many users, especially on mobile devices. 

Visual:

A phone with a website form, completely filled out, with two small buttons at the bottom, one to submit, the other to cancel. The buttons are so small and close together, the user accidentally taps the Cancel button, erasing all of the form data, and returning them to the website’s home page.

Jessica:

The two target size criteria address this issue by defining minimum and recommended sizes for clickable elements like buttons and links. 

Visual:

Text that states Target size Minimum is Level double-A, and Target size Enhanced is Level triple-A.

Jessica:

The key number to remember is 24 pixels.

Visual:

A purple square button with a white cat icon in the middle. Its dimensions are 24 by 24 pixels.

Jessica:

This is the minimum target size to ensure that interactive elements are easier to tap or click. Think of it as giving users more breathing room when selecting options. This isn’t just about aesthetics. It’s about making sure users can accurately tap what they want without accidentally hitting something else. 

But what if your design calls for smaller buttons? There’s still some flexibility.

If your buttons are smaller than 24 pixels, you can still meet the minimum requirement by adding enough padding around them to reach that 24 pixel minimum. 

Visual:

The button is now 10 by 10 pixels in size, but there is an invisible border around the button, which extends an extra 14 pixels from each edge of the visible button. Counting both sides, the total target measures 38 by 38 pixels, comfortably above the 24 pixel minimum.

Jessica:

This ensures that users can still interact comfortably, even if the visible button itself is smaller. 
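In CSS, the invisible padding trick might look like the following sketch, which pads a 10 pixel button out to exactly the 24 pixel minimum (the class name is made up for illustration):

```css
/* A 10px visible button whose hit area is padded out to 24px:
   10px content + 7px padding on each side = 24px total target. */
.tiny-close-button {
  width: 10px;
  height: 10px;
  padding: 7px;
  background-clip: content-box; /* only paint the 10px content area */
  border: none;
}
```

Padding enlarges the clickable box, while `background-clip: content-box` keeps the painted button at its original 10 pixel size.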

But is that actually good in practice? 

Visual:

Slideshow of a comic, copyrighted by Ryan Pagelow at bunicomic.com, of a black and white bunny holding a phone.

Jessica:

This is from Buni comics. Buni is tapping away at the top corner, but nothing is happening, so they get out a microscope and a stylus to poke at it more directly.

Since the tapping wasn’t accidentally activating anything else nearby, this tiny target might actually fit within the rules. 

So remember, just because you can, doesn’t mean you should. 

Speaking of should, WCAG has a Triple-A version: Target size Enhanced. At 44 pixels, it makes interactions even more accessible, so aim for that. 

Visual:

Side-by-side comparison of the 24 pixel button and the larger 44 pixel button.

Jessica:

Whether it’s buttons, links, or drop down menus, the key takeaway is simple. Make interactive elements big enough and spaced well enough for everyone to use easily.

After all, no one ever complained that something was too easy to click. 

Visual:

The submit and cancel buttons on the form from earlier are now expanded and spaced apart.

Jessica:

Input modalities. Because it shouldn’t matter how you click, tap, type or talk.
