MADNESS: Google Cancels AI Gender Detection Out Of “Bias” Worries

(FamilyRetirementClub.com) – Google is terrified of being labeled transphobic and has made the dramatic decision to remove a feature from its artificial intelligence software that could tell the difference between men and women. A tool developed by the software giant was previously able to detect whether an image depicted a male or a female with a high level of accuracy.

But, as the woke crowd clamps down on anything “heteronormative,” Google has decided to remove the feature to “avoid bias.”

Translation: Google knows it will be accused of transphobia and wants to shut down any possibility of that happening. Isn’t it incredible that one of the biggest companies in the world is vulnerable to the ramblings of a small, vocal minority of extremists?

A report from Business Insider claims that Google’s Cloud Vision API, which developers use to label photographs quickly and easily, will no longer assign gender. The tool is also used to automatically detect logos, faces, landmarks, animals, and a host of other things across large numbers of photographs.
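
For readers unfamiliar with the tool, label detection in the Cloud Vision API looks roughly like the sketch below. This is a minimal example using Google’s google-cloud-vision Python client; the image path is a placeholder, and the client assumes Google Cloud credentials are already configured. Under the change described above, gendered terms simply stop appearing among the labels the API returns.

```python
# Minimal sketch of label detection with the google-cloud-vision client.
# "photo.jpg" is a placeholder path; authentication is assumed to be set up
# via the GOOGLE_APPLICATION_CREDENTIALS environment variable.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Each annotation carries a description (e.g. "Person") and a confidence score.
for label in response.label_annotations:
    print(label.description, label.score)
```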

The news of the change was announced in an email sent out on Thursday morning. Google claimed the change was made because a person’s gender cannot be reliably identified from appearance alone. The company also claimed that automatically attributing gender to photos may be a form of “bias.”

“Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.”

The issue of “bias” in artificial intelligence is widely debated. The term refers to flaws in the way artificial intelligence is trained. The software learns to identify people and objects in a way loosely analogous to human learning: it is fed data and uses that data to find patterns and trends. Over time, it becomes increasingly accurate at labeling content automatically.
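
To make that concrete, here is a toy illustration of the key point; this is not Google’s system, just a sketch using scikit-learn’s LogisticRegression on made-up feature values. A trained model can only ever predict the labels humans put into its training data, so the choice of categories is baked in from the start.

```python
# Toy supervised-learning sketch: the model reproduces only the label
# categories present in its training data. Features and labels here are
# hypothetical stand-ins, not anything from Google's pipeline.
from sklearn.linear_model import LogisticRegression

# Toy feature vectors (e.g., summary statistics of an image) with
# human-assigned labels.
X_train = [[0.2, 0.7], [0.1, 0.9], [0.8, 0.3], [0.9, 0.2]]
y_train = ["cat", "cat", "dog", "dog"]

model = LogisticRegression()
model.fit(X_train, y_train)

# The model can only predict labels it was trained on.
print(model.predict([[0.85, 0.25]]))  # -> ['dog']
```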

In an email to Business Insider, Frederike Kaltheuner, a tech policy fellow at Mozilla, said that the update was a “very positive” move.

“Anytime you automatically classify people, whether that’s their gender, or their sexual orientation, you need to decide on which categories you use in the first place – and this comes with lots of assumptions,” Kaltheuner wrote.

“Classifying people as male or female assumes that gender is binary. Anyone who doesn’t fit it will automatically be misclassified and misgendered.”

Who wants to break it to them that gender is, literally, binary?