Twitter has a new way to rid itself of artificial intelligence bias: pay outsiders to find problems. On Friday, the short-message app maker detailed a new bounty competition that offers prizes of up to $3,500 for showing Twitter how its technology incorrectly handles photos.
Earlier this year, Twitter confirmed a problem in its automatic photo cropping mechanism, concluding the software favored white people over Black people. The cropping mechanism, which Twitter calls its “saliency algorithm,” is supposed to present the most important section of an image when you’re scrolling through tweets.
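Twitter hasn't published the exact model behind its saliency algorithm, but the basic idea can be sketched: score every pixel for importance, then crop a window around the highest-scoring spot. The function and scores below are purely illustrative, assuming a precomputed 2D saliency map rather than a real trained model.

```python
import numpy as np

def crop_around_saliency(image, saliency_map, crop_h, crop_w):
    """Crop a (crop_h, crop_w) window centered on the most salient pixel.

    `saliency_map` is a 2D array of per-pixel importance scores; a real
    system would produce it with a trained model, but any scores work here.
    """
    h, w = saliency_map.shape
    # Locate the highest-scoring pixel.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Clamp the window so it stays fully inside the image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 100x100 grayscale image where the model (here, a
# hand-placed score) deems the pixel at row 20, column 70 most salient.
img = np.zeros((100, 100))
sal = np.zeros((100, 100))
sal[20, 70] = 1.0
crop = crop_around_saliency(img, sal, 40, 40)
```

Any bias in the saliency scores flows directly into which part of the photo survives the crop, which is what the bounty asks researchers to probe.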
Twitter’s approach to tackling algorithmic bias — asking outside experts and observers to study its code and results — innovates on bug bounties, which have historically been used for reporting security vulnerabilities. Twitter says its bias bounty is an industry first and hopes other companies will follow suit.
“It sparks more people to be involved who maybe didn’t have resources and free time,” said Rumman Chowdhury, director of Twitter’s Machine Learning Ethics, Transparency and Accountability program. “We want to start cultivating and creating a community of ethical AI hackers.”
Tackling algorithmic bias has become an increasingly important concern for the tech industry. AI can cause problems, including denigrating particular populations or reinforcing stereotypes, if the software isn’t trained effectively. Twitter’s project is designed to solidify standards around ideas like representational harm.
AI has revolutionized computing by teaching devices how to make decisions based on real-world data instead of rigid programming rules. That helps with messy tasks like understanding speech, screening spam and identifying your face to unlock your phone.
The algorithms that power AI, however, can be opaque and reflect problems in training data. That’s led to problems like Google mistakenly labeling Black people as gorillas in photos. Fixing AI problems is important as we rely on the technology to run more and more of our digital lives. It also can be important within companies: Google acknowledges that its handling of an AI ethics issue hurt its program’s reputation.
Twitter’s algorithmic bias bounty is similar to programs that many tech companies now offer to find security problems in their products. For example, Google has paid $29 million for 11,055 vulnerabilities found in Android, Chrome and other Google products over the last decade.
Startup HackerOne is helping to run Twitter’s algorithmic bias bounty competition, sharing rules and accepting submissions. The deadline for entries is 11:59 p.m. PT on Aug. 6, and Twitter will announce winners Aug. 9.
AI shortcomings can be exploited in many ways, including specially crafted images that could turn Twitter’s saliency software into an unwitting accomplice of an outside attack. Researchers might want to examine other algorithms for bias — the tweets Twitter chooses to spotlight or omit from your feed, for example. For the moment, Twitter’s bias bounty is limited to its cropping algorithm.