"For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams", said Guy Rosen, the company's vice-president of product management, in a statement posted online announcing the release of the report.
However, it declined to say how many minors - legal users between the ages of 13 and 17 - saw the offending content. A Bloomberg report last week showed that while Facebook says it has become effective at taking down terrorist content from al-Qaeda and the Islamic State, recruitment posts for other US-designated terrorist groups are still easily found on the site.
The improved detection led to old as well as new violating content being taken down. The report also does not cover how much inappropriate content Facebook missed.
Facebook said improved artificial intelligence technology had helped it act on 3.4 million posts containing graphic violence, almost three times as many as in the last quarter of 2017. For every 10,000 views of content on Facebook, the company said, roughly 8 were removed for featuring sex or nudity in the first quarter, up from 7 at the end of last year. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards". The company did not provide a number of views, but said it was "extremely low". Facebook's approach is to have larger teams based in "centres of excellence" review the content on its platform, he explained. It said the rise was due to improvements in detection.
Several categories of violating content outlined in Facebook's moderation guidelines - including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement - are not included in the report.
Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4.
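These percentages and the "views per 10,000" figures cited above describe the same prevalence metric in two units; a minimal sketch of the conversion (illustrative arithmetic only, not Facebook's actual methodology, and the function names are my own):

```python
# Convert between Facebook's two ways of stating prevalence:
# "N views per 10,000 views" and a percentage of all views.

def per_10k_to_percent(views_per_10k: float) -> float:
    """Convert a per-10,000-views rate to a percentage of views."""
    return views_per_10k / 10_000 * 100

def percent_to_per_10k(percent: float) -> float:
    """Convert a percentage of views to a per-10,000-views rate."""
    return percent / 100 * 10_000

# The Q1 range of 0.07%-0.09% corresponds to roughly 7-9 views per 10,000,
# consistent with the "roughly 8 of every 10,000 views" figure.
print(percent_to_per_10k(0.07))  # ~7 views per 10,000
print(percent_to_per_10k(0.09))  # ~9 views per 10,000
```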
"It may take a human to understand and accurately interpret nuances like self-referential comments or sarcasm", the report said, noting that Facebook aims to "protect and respect both expression and personal safety". The company found and flagged 95.8% of such content before users reported it.
The numbers show that Facebook still relies predominantly on other people to catch hate speech - something CEO Mark Zuckerberg has spoken about before, saying it is much harder to build an AI system that can determine what hate speech is than to build one that can detect a nipple.
Facebook disabled about 583 million fake accounts in Q1, most of which "were disabled within minutes of registration".
The company estimated that around 3% to 4% of the active Facebook accounts on the site during this period - roughly 66 million to 88 million out of 2.19 billion - were fake.
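The 3%-4% estimate can be checked directly against the 2.19 billion figure; a minimal sketch of that arithmetic (the percentages and user count are from the article; the constant and helper names are my own):

```python
# Sanity-check Facebook's fake-account estimate: 3%-4% of 2.19 billion accounts.

MONTHLY_ACTIVE_ACCOUNTS = 2_190_000_000  # 2.19 billion, per the report

def fake_account_range(low_pct: float, high_pct: float, accounts: int) -> tuple[int, int]:
    """Return the (low, high) fake-account estimate implied by a percentage range."""
    return round(accounts * low_pct / 100), round(accounts * high_pct / 100)

low, high = fake_account_range(3, 4, MONTHLY_ACTIVE_ACCOUNTS)
print(f"{low:,} to {high:,} fake accounts")  # 65,700,000 to 87,600,000
```

Note that this implied range (roughly 66-88 million) is far smaller than the 583 million fake accounts disabled in Q1, because most of those were blocked within minutes of registration and so never counted as active accounts.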
The social media giant said the report will be the first of a series seeking to measure how prevalent violations of its content rules are, how much content it removes or otherwise acts on, how much of it it finds before users flag it, and how quickly it acts on violations.