The musician Mia Matsumiya (15) has documented much of the online harassment she's suffered on Instagram at @perv_magnet (1). Her bio expresses her mission succinctly: “4’9″ violinist & perv magnet. I’ve archived 1,000+ messages from creeps, weirdos & fetishists over the past 10 years. I’ve decided to post them all.” An alarming amount of the abuse comes through Facebook messages, invisible to anyone but her. “Anyone ever tell you how sexy you are and how bad they wanna let you face fuck them,” begins one unsolicited message from a random account, ending with a smiley face. She also posts submissions from others, one of which reads: “Hey asian whore want to get raped? I know where you live.”
Matsumiya has made the abuse public (7), but it’s unclear whether she’s reported it to the social networks themselves (as of press time, she hadn’t responded to requests for comment). Facebook, for its part, knew its structure was to blame: last fall, the company changed its messaging features to remove the dreaded “Other” folder, through which any user could contact you. Now, users must send a request before they can message someone, Facebook tells us, though there’s still a “filtered requests” tab buried two levels deep in the desktop version of Facebook that provides a home for potentially hateful noise. Though extreme, what Matsumiya is experiencing isn’t rare: a full quarter of young women report being sexually harassed online (16), and 26 percent report being stalked.
If you’re experiencing abuse in this or one of its many, many other forms, this article’s for you: it is a practical guide to understanding how to report harassment and abuse online, and what to expect from various social networks when you do.
The networks, unfairly, expect you to do a lot, and it sucks that so much of the burden of protecting yourself falls on you, the person being harassed. As Matsumiya’s account above shows, sometimes the very structure of the app you’re using needlessly enables abuse. But many social-media services have started beefing up their trust-and-safety teams and expanding their understanding of all the ways a person can be harassed on their platforms.
Throughout this piece, we cite examples of the abusive behaviors that drive people away from social media, many of which were reported but turned down for action. This is not to say that these examples are necessarily against any service’s terms of use, but rather to illustrate how much can fall into the gap between users’ and platforms’ definitions of acceptable behavior.
The way you’ll have to deal with harassment will vary across different platforms, but there are a few best practices that can help keep your case strong.
## IN GENERAL
* Screenshot, screenshot, screenshot. A lot of attacks online come from burner accounts that may be reported and suspended before you can report them yourself, and attacks may also take the form of content that is posted and then deleted, so that it shows up in an alert very briefly and then disappears once the damage is done. Keeping your own record of what is happening not only helps quantify things and keep them straight but may also help you or the platforms establish patterns across accounts or services.
* Report accounts, not just content, where applicable. If a user is being relentless or conducting a sustained attack against you, it’s often appropriate to report both the content they’re posting and the account they’re using.
* Escalate to law enforcement if you’re in danger. Unfortunately, most platforms can’t respond to harassment quickly; some take only a few hours, but most don’t guarantee a response time. If your situation is urgent, know that responding to a direct threat is within the purview of law enforcement (8); you may not be able to count on officers to know what Twitter is, but you should at least try to get a report filed. Online harassers don’t often act on their threats, but it’s not worth the risk of assuming they won’t, or of deciding that because it’s “just online” it shouldn’t be taken seriously.
## THE SPECIFICS
## Twitter
The basics: Twitter’s report forms live here (17). Under “Report a violation,” there are different options depending on whether you wish to report harassment, impersonation, or privacy violations.
The details: Twitter has been a flash point for discussions about abuse, for good reason. Twitter “makes harassment so visible,” said Anita Sarkeesian, the founder of Feminist Frequency. “The same metric we use to judge expression is the same one we use to judge harassment.” That is, someone harassing you about a tweet you made is as visible to you as your own tweet.
On the back end, reports get routed to different teams depending on the content — for instance, child porn goes to a different place than someone directing violent threats at you. You can report both accounts and individual tweets, but if an account is repeatedly tweeting at you, it will be simpler to report the account rather than each individual tweet. Twitter used to notify the person you were reporting when you did so. It no longer does this.
When you report someone, Twitter will generate an email to you that becomes a thread with the support team. Once Twitter decides whether what you’ve reported is or is not harassment, it will email you and tell you so. Twitter initially turning down even fairly obvious cases is not unusual, but that doesn’t mean the exchange is over: the platform lets you reply to the support email chain to challenge decisions and provide additional evidence where possible. Twitter is also unusual in that it allows users to report harassment happening to someone else and communicates just as actively with the person who files those reports.
An annoying thing about this process (which will hopefully change any minute) is that Twitter does not identify what you reported in the emails it generates, which can make things very confusing if you are reporting many people or tweets at once. This makes it difficult to follow up with relevant information.
Anecdote time: Last spring, I tweeted a screenshot of a rude DM sent to me by a random account. The user saw it and immediately marshaled about a dozen sock puppets to tweet repeatedly at me that I deserved to die. As I recall, Twitter found one of the sock-puppet accounts to be in violation and suspended it, but it didn’t get the rest and didn’t understand how to go after the account running the attack. At the time, Twitter’s reporting structure couldn’t accommodate this type of attack; now it can.
## Facebook
The basics: For violations where you cannot report content on Facebook in context, here is the general reporting form (2). For violations where there _is_ context, steps are below.
The details: Facebook’s real-name policy (13) is meant to curtail certain kinds of abuse, like truly psychotic hate speech or direct threats, by making it harder to maintain an anonymous identity. However, this underestimates the crazy stuff people don’t mind having attached to their real names (see below). And the requirement cuts both ways: victims’ profiles must be tied to their real names, too, which leaves them vulnerable.
Reporting content on Facebook varies slightly depending on whether the item is a posted status, a link, a photo, or something else, though pop-up menus offered through reporting and flagging buttons make it relatively easy; all the community violations options are under “This doesn’t belong on Facebook.” Once something is reported, it’s routed to your own personal “support inbox” on the service, which is useful for keeping track of what you reported and when you reported it, and lets Facebook thread replies into individual reported items.
However, if Facebook turns your request down, there’s no opportunity for you to follow up; Facebook only provides shortcut buttons to deal with the problem on your own by, for instance, blocking the user.
Facebook’s support terms give people a lot of leeway — the satire/humor/social-commentary clause in its community standards is interpreted pretty broadly — and it can be hard to get the company to affirm violations and remove content. The company recently made statements about updating its standards (9) on hate speech, expanding beyond the direct-threat threshold.
Anecdote time: Facebook came under scrutiny in Germany this March for not removing (18) xenophobic and racist hate speech, a type of content condemned by its community standards, but the problem persists at home, too. Even a publicly posted NBC News story gets vicious public comments on Facebook. A March 18 post (19) titled “Latino, Immigrant Advocates to Protest All Trump Arizona Events” received a comment (20) from one user, Paul: “Let’s keep all of the Central American immigrants, and deport Donald Trump and his racist doofus supporters.” Another user, Pamela Thomas Jones, replied (3), “Paul lets LOCK all of them up in a cage and send them to a jungle. They are animals that belong I a cage.” As a user, I had to first “hide” this comment (from only myself) in order to go through the motions of reporting it as hate speech. By the time I saw it, the comment had been up for four days. Facebook responded a couple of days later saying it did not violate the community standards.
## Instagram
The basics: For a long time, Instagram did not have a form for reporting content directly to the staff; if you couldn’t see the content you wanted to report (because of blocking) or couldn’t describe it through built-in forms, you were out of luck. But now there is one (10), including a dedicated option (14) for reporting harassment or bullying.
The details: Instagram’s format leaves some of its users particularly vulnerable to harassment and bullying, in part because the bully’s tactics are highly visible, but only to the user they are attacking. A harasser might tag the victim in a disgusting photo or leave an offensive comment on an old photo, for instance, so the victim can see the harassment while it remains mostly invisible to other users. Because there’s no easy way for others to see this behavior, community enforcement doesn’t help here. And because users must be logged in to view otherwise-public content, anyone blocked by an account harassing them can’t even access the usual reporting tools; Instagram’s web functionality is limited besides.
Additionally, Instagram has one of the woolier reporting mechanisms. The dialog that pops up when you select a photo or account to report is quick and straightforward, but there is no room for elaboration. While Instagram is owned by Facebook, its harassment-reporting dialog structure is different. “This photo shouldn’t be on Instagram” leads to the hate-speech or graphic-violence reporting options, but if you need to report harassment or bullying, you must select “This photo puts people at risk.”
When you file a report with Instagram, it does not generate any feedback or confirmation beyond the “Thanks for your report” dialog: no emails, no messages. Likewise, Instagram does not generate follow-up emails to let you know what decision it’s made, and there’s no way to appeal a decision. Generally, Instagram decides whether to take action within 24 to 72 hours.
Anecdote time: Reports of bullying among teens are extremely common on Instagram; in one case a couple of years ago, parents discovered a page (11) that allegedly targeted their daughter with nude photos and gave space for others to leave harassing comments. The page no longer exists, at least not in the form it did (deleting old Instagram accounts and setting up new ones is an extremely common practice). In another case, a teen’s ex used the service (4) to mock her for having cancer and tell her to “kill self.” The parents in question did not respond to queries, and Instagram would not comment on these specific cases, but a representative stated that “Instagram has zero tolerance for threats of violence, bullying, and harassment to our community, and when instances are reported, we move swiftly to take down violating content.”
## YouTube
The basics: The general reporting form for YouTube is here (5), though you may not be able to get all the way through it, depending on whether you can use its auto-generated fill-ins, which don’t capture harassment happening in, for instance, a third-party video’s comments.
The details: Poor YouTubers. The video service has one of the worst frameworks for reporting harassment. A lot of it is automated, but in a regimented way that burdens the reporting user, and a lot of the infrastructure makes harassers uniquely visible.
Any individual comment can be reported in place, but reporting an individual user is buried several clicks deep (their profile > About > the flag icon > Report user). After you work your way through the dialogs, YouTube gives you an auto-generated form that pulls the user’s videos and comments on your own channel or videos and asks you to identify which of them you’re complaining about. Notably, this does not allow you to systematically report a user if, say, they are leaving comments about you strewn across other people’s videos.
Reporting an individual comment on someone else’s YouTube video generates vague feedback that sounds like nothing is being done, but the company tells us the complaint does get submitted. Per YouTube’s policies (21), the service uses a “strike” system, invisible to other users, that will sometimes result in account termination.
The only feedback users receive indicating that their reports have done anything is if the offending video or comment is removed; reports do not generate any paper trail, and there is no dedicated interface for managing reports. YouTube does not specify how long it takes to act on reports but notes only that a staff of specialists monitors them 24/7.
Anecdote time: Again, what constitutes a threat relies on YouTube’s interpretation. For instance, footage of someone playing a game (22) remains posted, and one commenter on the video writes, “can’t you just kill the bitch instead.” Despite reports, both the video and the comment are still up.
Sarkeesian also pointed out another unique form of harassment: the majority of videos that appear in the “recommended” sidebar next to her own are made and circulated by abusers. “If you watch one of my videos, you will then be recommended all of these anti-feminist videos,” she said. “The related-channels function gets defaulted onto everyone’s YouTube page and is populated by algorithms. On my channel, it’s all harassers.” To stop this from happening, Sarkeesian must opt out of the recommendation network entirely, meaning her videos can never appear in a recommended sidebar, which denies her a big source of traffic to her content.
## Tumblr
The basics: Tumblr’s abuse form (12) is the place for reporting harassment or abuse.
The details: Tumblr is an extremely popular platform for anonymous users, and it has its share of problems, including cultural pockets of self-harm obsession, like pro-ana blogs. It can be a target for “raids” by subfactions of users from Reddit or 4chan, who launch abusive attacks against users they find distasteful. The abuse policy states Tumblr will remove “overtly malicious” material or, in the case of self-harm, “active promotion or glorification.”
Tumblr allows users to enter a report form from within their main dashboard, by selecting “flag post” from the three-dot menu. A short and simple set of menu options allows users to frame a report, and there’s a text box at the end for contextualizing problems with the post. However, on a post’s web page, there is no reporting button. In that case, users can use the abuse form directly, which has no menu options, just a text box (meaning it will be an extra step for Tumblr to sort it appropriately).
According to Tumblr, the company tries to field most complaints within 24 hours, and the dialog that appears when you finish filing a complaint says it can take “a day or two.” After that period, Tumblr will follow up with an email letting you know it’s looking at your complaint, though it will not follow up again to let you know what its decision was.
Anecdote time: Per usual, the lines around abuse involve a lot of interpretation on Tumblr’s end. I ran across a post discussing the terms _transtrender_ and _genderspecial_ that a user had reblogged abusively (6), telling the original poster to “run five miles into traffic in the middle of the freeway.” Tumblr’s abuse-and-harassment team said it didn’t find the reblog bad enough to be taken down, though Tumblr would not explain its reasoning any further.
_Casey Johnston is an editor at the Wirecutter and a freelance journalist._
1) (https://www.facebook.com/help/112146705538576)
2) (https://help.instagram.com/contact/497253480400030)
3) (https://www.facebook.com/NBCNews/posts/1337638872922806?comment_id=1337644912922202&comment_tracking=%7B%22tn%22%3A%22R9%22%7D)
4) (https://support.google.com/youtube/answer/2802032)
5) (https://www.youtube.com/watch?v=6yLXHZkH84I)
6) (https://twitter.com/mia_matsumiya)
7) (http://www.pewinternet.org/2014/10/22/online-harassment/)
8) (https://support.twitter.com/forms)
9) (http://qz.com/632798/the-surprisingly-long-lifespan-of-xenophobic-racist-facebook-posts-in-germany/)
10) (https://www.facebook.com/NBCNews/posts/1337638872922806)
11) (http://antisjw-garnet.tumblr.com/post/139275356401/thegodaesthetic-why-do-people-use-the-terms)
12) (https://www.instagram.com/perv_magnet/)
13) (https://www.facebook.com/help/contact/274459462613911)
14) (https://www.facebook.com/NBCNews/posts/1337638872922806?comment_id=1337644912922202&reply_comment_id=1732888260259713&comment_tracking=%7B%22tn%22%3A%22R9%22%7D)
15) (http://thegrio.com/2015/12/13/teen-suffering-from-leukemia-viciously-cyber-bullied-by-ex-on-instagram/)
16) (https://www.youtube.com/reportabuse)
17) (https://www.tumblr.com/abuse/maliciousspeech)
18) (http://www.nbcnews.com/news/asian-america/meet-woman-behind-perv-magnet-project-documenting-online-harassment-n454066?hootPostID=b1639cdb9d2ea0f184a629def2818e83)
19) (http://www.psmag.com/health-and-behavior/women-arent-welcome-internet-72170)
20) (http://www.theguardian.com/technology/2015/mar/16/facebook-policy-nudity-hate-speech-standards)
21) (https://help.instagram.com/contact/383679321740945)
22) (http://www.khou.com/story/news/2014/07/24/12326566/)