By Allon Mason, UserWay Founder & CEO
When we heard about the most recent wave of tragic events in May 2020, we were moved to address inclusivity issues that go beyond ADA, Section 508, and WCAG compliance. A focus on digital racism and bias is long overdue, and our team is eager to contribute positively to the conversation.
In June, Google announced that it would reevaluate what it considers acceptable language. So far, it has chosen to change terms such as “blacklist” to “blocked list,” “whitelist” to “allowed list,” and “master-slave” to “primary/secondary.”
That announcement was the spark for this tool. At the time, we were enhancing our AI-powered capabilities that supply alt-text descriptions of images for screen readers. We realized that if word choices can make our customers’ digital content inaccessible (even unintentionally), UserWay should help.
In the same way that we remediate HTML code, we can help our users pinpoint and update word choices on their site. Our team was so excited about this revelation that in just a few days, we’d repurposed UserWay’s AI-Powered Accessibility Widget and crafted the beginnings of our Content Moderator. Similar to offering a range of skin tones for emojis, the Content Moderator offers nuanced alternatives to words that may be considered sensitive.
While Google and Apple approach the issue as a simple search-and-replace, UserWay looks deeper into the problem of bias. Our tool seeks to detect verbal patterns that consistently and routinely marginalize and disempower specific groups. Our dictionary is continually and carefully updated to reflect cultural and social change: as we now routinely see, words that were considered standard a few years ago are being discouraged, and even banned, by these digital giants. We help companies of every size, organizations, governments, and leading brands keep pace with these standards without performing 24/7 text reviews or deep dives into their legacy content.
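At its simplest, this kind of suggestion engine can be sketched as a dictionary lookup over a page’s text. The sketch below is purely illustrative: the term list, alternatives, and function names are placeholders we chose for this example, not UserWay’s actual dictionary or implementation, which is continuously curated by their team.

```python
import re

# Illustrative term dictionary -- a hypothetical stand-in, NOT UserWay's
# actual, continuously updated dictionary.
SUGGESTIONS = {
    "blacklist": ["blocked list", "deny list"],
    "whitelist": ["allowed list", "permit list"],
    "master": ["primary", "main"],
    "slave": ["secondary", "replica"],
}

def suggest_alternatives(text):
    """Scan text and return (matched term, position, alternatives) tuples.

    Each match is only a suggestion; the content owner decides whether
    to accept, modify, or ignore it.
    """
    results = []
    for term, alts in SUGGESTIONS.items():
        # Whole-word, case-insensitive matching so "mastery" is not flagged.
        for m in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            results.append((m.group(0), m.start(), alts))
    # Report suggestions in the order they appear in the text.
    return sorted(results, key=lambda r: r[1])

print(suggest_alternatives("Add the host to the whitelist, not the blacklist."))
```

A real moderator would also need to weigh context, since not every occurrence of a flagged term is problematic; that is exactly why the suggestions remain under the content owner’s control.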
Of course, each content owner can choose to accept, modify, or ignore the Content Moderator’s suggestions; the changes are entirely within their control. We also aggregate those decisions in a privacy-preserving manner: by tracking changes in real time, data flows into UserWay’s Content Moderator back end and improves the tool with each choice our users make, without sacrificing anonymity.
We intend to empower users by making them aware of the content that exists on their site – especially legacy and user-generated text that may not reflect their brand values. More importantly, we hope that by removing both blatantly and subtly offensive content, we can help these sites become barrier-free and inviting to all users, as that is our goal for everything we create.
Click here to learn more about UserWay’s AI-Powered Content Moderator.