In a press release, Google explained how it removes harmful content from YouTube, which was recently fined over children's privacy issues and, like Instagram, changed its policy on displaying interaction counts. The Mountain View giant noted that over the past year it has redoubled its efforts to fulfill its responsibility while "preserving the magic of the open platform."
Four pillars guide the platform's work in handling content that violates its policies: remove, raise, reward, and reduce. According to YouTube, in the coming months the platform will release more details about its work on each of these principles.
In the statement, Google describes the work of removing harmful content, something it has done since the platform launched in 2005 but has invested in more heavily in recent years. Over the past 18 months, views of videos that were later removed for violating its policies have dropped by 80%.
There is a clear concern about the fine line between what violates YouTube's policies and what can remain on the platform. This care is intended both to preserve freedom of expression and to protect the community. To that end, Google has introduced dozens of updates to its standards since 2018.
YouTube notes that its hate speech update represented a fundamental shift in its policies, one that took months to develop. Back in April of this year, the Mountain View giant announced it was working on an update to its harassment policy, including how it applies to creators. The results will be seen in the coming months.
Technology to detect harmful content
YouTube has been using machine learning since 2017 to help detect videos that violate its policies. This technology was responsible for flagging over 89% of the 9 million videos removed in the second quarter of 2019.
We are investing significantly in our automated detection systems, and our engineering teams continue to update and refine them month over month. For example, an update to our spam detection systems in the second quarter of 2019 led to a 50% increase in the number of channels terminated for violating our spam policies.
YouTube, in the press release
These machine learning improvements help Google review content before it gains traction: in the second quarter of 2019, more than 80% of automatically flagged videos were removed before they received a single view. Google has over 10,000 people working to detect, review, and remove content that violates its rules.
The company points out that last week it updated its Community Guidelines enforcement report, a study that provides more detail on how much content is removed from YouTube, the reasons why, and how it was detected. While underscoring the strength of machine learning, Google believes that human expertise "remains critical in our efforts to create policy, carefully review content and manage our machine learning technology responsibly."
Source: youtube.googleblog