Instagram admits it made a mistake in failing to remove abusive comments
In the wake of floods of abuse and racist slurs directed at England footballers Bukayo Saka, Marcus Rashford and Jadon Sancho, Instagram has admitted that a mistake in its technology meant abusive comments were not taken down.
Instagram head Adam Mosseri said the content, including racist emojis pasted into the comments of the players’ latest Instagram posts, was “mistakenly” identified as conforming to the platform’s guidelines instead of being referred on to human moderators.
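Mosseri’s explanation points at a common moderation pattern: an automated classifier scores each comment, and only uncertain cases are escalated to human reviewers, so anything the model confidently misjudges as benign never reaches a person. The sketch below is a rough illustration of that routing only; the scorer, thresholds and function names are hypothetical stand-ins, not Instagram’s actual system.

```python
# Illustrative sketch: threshold-based moderation routing.
# The scorer and thresholds are hypothetical, not Instagram's.

ALLOW_BELOW = 0.2   # confidently "conforming": published unreviewed
REMOVE_ABOVE = 0.9  # confidently abusive: removed automatically

def score_toxicity(comment: str) -> float:
    # Placeholder for a trained classifier returning P(abusive).
    # Like the failure described above, it sees nothing wrong with
    # emoji-only abuse because it keys on words.
    return 0.95 if "slurword" in comment.lower() else 0.0

def route_comment(comment: str) -> str:
    score = score_toxicity(comment)
    if score < ALLOW_BELOW:
        return "allow"          # never reaches a human moderator
    if score > REMOVE_ABOVE:
        return "remove"
    return "human_review"       # only the uncertain middle is escalated

# An emoji-only comment the model scores as benign is published
# outright: "mistakenly identified as conforming" to the guidelines.
print(route_comment("🐒🐒🐒"))  # -> "allow"
```

The failure Mosseri describes lives at the first branch: a comment the model scores as benign is published without review, so human moderators only ever see what the model is unsure about.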
More comments have since been reported, but notification of outcomes, and therefore action, has been slow. Of the 105 accounts identified as having racially abused the players, 88 are still up.
The platform has promised a solution. It has collaborated with anti-discrimination networks in the past to curate lists of offensive terms, symbols and emojis, and it offers a tool that lets individual users filter out particular phrases. But these measures do not stop abuse circulating and targeting both high-profile and everyday users.
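A tool like that works, broadly, by matching comments against a blocklist of terms. The minimal sketch below shows why list-based matching is easy to evade; the word list and matching rules are invented for illustration and are not Instagram’s.

```python
import unicodedata

# Hypothetical blocklist; real lists are curated with
# anti-discrimination groups and include symbols and emojis.
BLOCKLIST = {"slurword", "🐒"}

def normalise(text: str) -> str:
    # Fold look-alike characters and case, so "SlurWord" or the
    # full-width "ｓｌｕｒｗｏｒｄ" still match the list.
    return unicodedata.normalize("NFKC", text).casefold()

def is_hidden(comment: str) -> bool:
    text = normalise(comment)
    return any(term in text for term in BLOCKLIST)

print(is_hidden("SlurWord"))    # True: listed term caught
print(is_hidden("s l u r"))     # False: spacing defeats the match
print(is_hidden("🐵"))          # False: an unlisted emoji slips through
```

Every new spelling, spacing trick or emoji variant has to be added to the list by hand, which is one reason abuse keeps circulating past filters like this.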
Imran Ahmed, chief executive of the Centre for Countering Digital Hate, said it was “beyond belief” that racist comments continue to bypass Instagram’s filters. Read more via BBC News.
Is the legislative clampdown on big tech going far enough to end abuse?
The way the players have been targeted and bombarded has raised questions about what is being done to combat racism online and on social media.
UK ministers, including the Digital Secretary, Oliver Dowden, have pointed back to the draft Online Safety Bill, which has yet to be put to a vote.
The bill would impose a duty of care towards users of big tech platforms, enforced by the communications regulator, Ofcom. Fines of up to £18m, or 10% of annual turnover, could be levied on companies that fail to deal with content posing significant physical or psychological harm.
But doubts remain about how far the law will clamp down on online abuse, particularly around anonymity and the ease with which users engaging in racist attacks can create new profiles under new names. Gaps in the algorithms, which, as Instagram’s have shown, can fail to recognise racist symbols and emojis in some contexts, also raise doubts about leaving regulatory power with social media giants whose filters continually miss both overt and covert attacks. Read more via Politico.
What are the options for big tech?
Amid criticism of the bill, campaigners, social media companies and users have been proposing other solutions, such as anonymity bans, moderation techniques and the use of AI. But it is difficult to see how any one of these would tackle the issue at its root.
Facebook, for example, has enforced a real-name policy since 2015, while Twitter metadata can identify a user with 96.7% accuracy. Despite this, both platforms have still seen onslaughts of racist abuse.
The use of AI to moderate abusive behaviour at a far greater speed and scale than human moderation has often been floated as a solution in big tech. But the algorithms themselves are not watertight, and they are prone to errors that can fuel discriminatory behaviour online.
In May 2021, during the bombing of Gaza, Instagram’s algorithm blocked posts with hashtags for the Al-Aqsa Mosque, the third-holiest site in Islam, having mistakenly associated the religious site with a terrorist organisation. Read more via The Independent.
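One plausible mechanism for that kind of error is over-coarse matching against a list of designated organisations, since the mosque’s name also appears inside the name of a designated group (the Al-Aqsa Martyrs’ Brigades). The matching logic below is an assumption made for illustration, not a description of Instagram’s code.

```python
import re

# Hypothetical designated-organisation list; the mosque's name
# appears inside the designated group's name.
DESIGNATED = ["al aqsa martyrs brigades"]

def squash(text: str) -> str:
    # Strip '#', punctuation, spaces and case: "#AlAqsa" -> "alaqsa"
    return re.sub(r"[^a-z]", "", text.lower())

def flags_hashtag(tag: str) -> bool:
    # Overly coarse: flag the tag if it appears anywhere inside a
    # designated name, rather than requiring the full name to match.
    t = squash(tag)
    return any(t in squash(name) for name in DESIGNATED)

print(flags_hashtag("#AlAqsa"))        # True: the mosque gets blocked
print(flags_hashtag("#AlAqsaMosque"))  # False: "mosque" breaks the match
```

The point is not that Instagram’s systems look like this, but that any matching looser than the full designated name will sweep up the mosque along with the group.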
Social media sites must ‘act now’
With several options being presented by campaigners and social media users alike, sites are being urged to tighten their regulations against abusive attacks on their platforms, improve verification systems and hold those circulating comments to account.
Tony Burnett, chief executive of Kick It Out, an organisation working to improve equality and inclusion in football, told the i that sites must apply “preventative filtering and blocking measures” to stop abusive messages from being sent or seen, alongside more stringent rules to verify users’ accounts and prevent perpetrators from re-registering.
Twitter said it had removed more than 1,000 abusive messages in the space of 24 hours this week, using both AI technology and human moderators. But this barely scratches the surface of the comments targeted at the players.
Carmel Glassbrook, of the UK Safer Internet Centre, told the i that it would take “a combination of agencies, laws and policies to effect change,” but for now encouraged users to be active in reporting online abuse. Read more via the i.