YouTube introduces ‘timeout’ penalty to curb toxic comments

YouTube is trying to curtail some of its more offensive comment sections. On Tuesday, the video streaming platform updated its policy guidelines for spam monitoring, bot detection, and, most notably, comment removals and penalties for content violations. Hate speech and harassment have long plagued the comment sections of the site’s videos and channels, prompting various strategies to control the problem. According to an explainer page on YouTube’s website, the platform will now notify users whenever its monitoring systems flag and remove comments that violate community guidelines. Those comment regulations cover a wide variety of topics, including threats of violence, cyberbullying, and COVID-19 misinformation. YouTube remained vague about the mechanics of this automated system.

If an account continues to post content that violates the guidelines, the company may impose a 24-hour “timeout,” during which its ability to comment is disabled. YouTube’s existing guidelines remain in effect, and Tuesday’s update does not indicate what will happen if repeat offenders ignore them. The site’s community guidelines page lists a policy that bans channels that violate the rules more than three times within a 90-day period.

[Related: How to make sure YouTube doesn’t take over your life.]

YouTube claims that recent testing has shown warnings and timeouts can reduce the likelihood of users posting harmful content again. Those who feel their comments were incorrectly flagged can still appeal the removal for more information.

“Our goal is not only to protect creators from users who try to negatively impact the community via comments, but also to offer more transparency to users who may be having comments removed due to policy violations and hopefully help these users understand our Community Guidelines,” YouTube’s latest update explains.

The post also mentions YouTube’s constantly evolving automated detection systems and machine-learning models for catching and removing spam. YouTube stated that it detected and removed over 1.1 billion spammy comments in the first six months of the year. These moderation techniques now also cover abusive messages posted in livestream chats. However, the update offers no details about how the machine learning and bot detection actually sort through the millions of stream and video comments.

[Related: How to navigate YouTube videos like a pro.]

As TechCrunch notes, the company has tested similar programs in the past, including hiding comment sections by default and showing users’ comment histories within their profiles. Last month, YouTube also launched a function that allows creators to hide specific users from comments across all their channels. The site’s new warning and timeout system is only available in English at the moment, but the company posted that it plans to expand the capability to other languages in the future.

YouTube acknowledged that dealing with hate speech and abusive content can feel like a never-ending battle. “Reducing spam in comments and live chat is a continuing task,” the company wrote. “These updates will continue to be ongoing as we adapt to new trends.”

Andrew Paul