Facebook’s community standards guidelines are resulting in lesbians being banned for referring to themselves as “dykes”.
One expert has suggested that the early adoption of artificial intelligence to manage hate speech may be to blame, ABC News has reported.
Waterhouse said she has been blocked several times for using the term “dyke” in positive contexts, and fears Facebook will ban her account over it.
“[For] women who may not get any kind of validation or support or advocacy in their area, it can be quite critical for them to see that it is possible to be positive about being a lesbian, it is possible to change social attitudes,” she said.
Waterhouse said the term “dyke” has evolved: once purely a slur, it is now also used by lesbians in everyday conversation.
She wants Facebook to investigate the practices of its content reviewers to determine if any are showing a bias against women or lesbians.
The Queensland chapter of Dykes on Bikes has had its Facebook page shut down, with a message saying it breached community standards.
President Julz Raven said she had tried repeatedly to contact Facebook but had no response.
Facebook publishes limited information about its community standards and what content is allowed. Its publicly available standards say it removes hate speech, including that based on sexual orientation.
Richard Allen, Facebook’s vice president for public policy, said the company considers the context and intent of the words used.
“For example, the use of the word ‘dyke’ may be considered hate speech when directed as an attack on someone on the basis of the fact that they are gay,” he said.
“However, if someone posted a photo of themselves with #dyke, it would be allowed.”
He admitted that mistakes were made.
“We are deeply committed to addressing and confronting bias anywhere it may exist,” Allen said.
Dr Fiona Martin, a senior lecturer in convergent and online media at the University of Sydney, said social media companies offer no transparency about how their algorithms censor content. She questioned whether the industry’s adoption of artificial intelligence is affecting how language and context are evaluated.
“They have so many billions of users posting, that it is very difficult to moderate that language use,” she said.
“So they’re using automatic filters to pick up on words that might be seen as offensive in some contexts.”