YouTube wants to improve comment moderation and fight spam in live chats with bots

YouTube has announced a plan to crack down on spam and abusive content in comments and live chats. The video service will send warnings to users when it finds that comments posted on its platform violate community guidelines and must therefore be removed.

Google believes that its comment-removal warnings will deter users from posting abusive comments and cut down on repeat offenses. If a user continues the same behavior, however, they may be blocked from posting comments for a set period of up to 24 hours.

According to the company, the tests it has been running produced encouraging results: the combination of removal warnings and timeouts helped protect creators from users trying to harm them. The new notification system is currently available only for comments in English; Google plans to bring it to more languages in the coming months. The company also asks users to send feedback if they believe the system has flagged them in error.

Alongside this change, Google says it has also improved spam detection in comments, removing more than 1.1 billion spam comments in the first six months of 2022 alone. It has likewise improved spambot detection to keep bots out of live chats.

YouTube comments: bots in command

Of course, YouTube will do all this with bots, which will now have the power to issue timeouts and instantly remove comments deemed abusive. And it won’t be easy. Comment moderation on YouTube often seems like an impossible task, to the point that many channels simply turn off comments entirely rather than deal with it. Moderating live chat is even harder: even if an offensive message is caught quickly after it’s posted, the real-time nature of the medium means the damage has probably already been done.

Bots are a scalable way to address this problem, but Google’s automated-moderation track record on YouTube and the Play Store is spotty. We have seen examples of this: a horror channel flagged as “kid friendly” because it featured animation; a video player banned from Google Play because its subtitle files used the “.ass” extension, which is also a swear word; and chat apps, Reddit apps, and podcast apps regularly banned from the Play Store because, like a browser, they can access user-generated content, and sometimes that content is objectionable.

YouTube does not appear to involve channel owners in any of these moderation decisions. Note that the comment-moderation announcement says YouTube will warn the comment’s author (not the channel owner) about an automatic removal, and that users who disagree with the removal can “send feedback” to YouTube.

The “send feedback” link on many Google products is a black-hole suggestion box, not any kind of moderation appeals queue, so we don’t know whether a human will be on the other end responding to disputes. YouTube notes that this automatic moderation will only remove comments that violate the Community Guidelines, a list of fairly basic content bans. We will see.
