The Algorithms
How they work and how to avoid violations
While Instagram claims not to “sell” user data, they do hint that this data travels to undisclosed places, as Instagram “uses the information [they] have to study [their] service and collaborates with others on research to make [their] service better and contribute to the well-being of [their] community.”
Instagram’s Community Guidelines are laid out on their website in cheerful but ambiguous corporate jargon.
Instagram admits they can remove “any content or information you share on the Service if [they] believe that it violates these Terms of Use.” By these rules, Instagram can remove any content that they deem inappropriate for the platform, and they can “refuse to provide or stop providing all or part of the service to you (including terminating or disabling your access to the Facebook Products and Facebook Company Products) immediately.”
Instagram has full authority to decide what counts as a violation of their broad, ambiguous community guidelines, and these decisions are often made by automated algorithms that scan posts for words and images. These algorithms cannot pick up on irony or satire when deciding whether content breaks the guidelines, so users are at greater risk of having content removed than it may appear. Even worse, different kinds of users face different consequences based on their status, through entirely different algorithmic systems.
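Instagram’s real classifiers are proprietary, but the failure mode is easy to picture. Here is a deliberately naive sketch of a keyword scanner (the term list and function are my own invention, not anything Instagram has published) showing why a joke and a genuine threat look identical to it:

```python
# A deliberately naive keyword scanner -- a hypothetical sketch,
# NOT Instagram's actual system. It illustrates why word-matching
# filters can't tell satire from sincere rule-breaking.

BANNED_TERMS = {"hate", "kill", "ugly", "stupid"}  # illustrative list only

def flags_post(caption: str) -> bool:
    """Return True if any banned term appears in the caption."""
    return any(word.strip(".,!?") in BANNED_TERMS
               for word in caption.lower().split())

# A sincere threat and obvious hyperbole score exactly the same:
print(flags_post("I will kill you"))                         # True
print(flags_post("this playlist is so good it could kill"))  # True (hyperbole, still flagged)
```

Both captions trip the filter because nothing in a bag of words encodes tone, which is exactly the gap the tips later in this piece try to work around.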
Whitelisted Individuals
Jeff Horwitz of The Wall Street Journal reported on company documents that reveal an elite tier of Instagram users whose content is monitored by an entirely different system. These whitelisted users have their content filtered through a separate program called ‘XCheck’ that “was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists” and now “shields millions of VIP users from the company’s normal enforcement process.” The documents show that high-profile users are “rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.” This elite group faces an entirely different, more forgiving system of checks that regular users don’t have the luxury of.
This invisible tier of elite Instagram users included at least 5.8 million accounts in 2020, and unlike normal users, “if Facebook’s systems conclude that one of those accounts might have broken its rules, they don’t remove the content—at least not right away, the documents indicate. They route the complaint into a separate system, staffed by better-trained, full-time employees, for additional layers of review.” In the past, Facebook has even contacted VIP users who violated the community guidelines, giving them a “self-remediation window” of 24 hours to delete violating content on their own before Facebook took it down and applied penalties.
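Going only by what the documents describe, the two-track flow might look something like this (a speculative sketch; the names, IDs, and structure are placeholders, since the real XCheck implementation has never been published):

```python
# Speculative sketch of the two-track enforcement described in the
# reporting above. Everything here (names, IDs, structure) is a
# placeholder; the real XCheck implementation is not public.

from dataclasses import dataclass

@dataclass
class Post:
    author_id: int
    violates_rules: bool  # verdict from the automated scan

XCHECK_WHITELIST = {42, 1337}  # hypothetical shielded account IDs

def enforce(post: Post) -> str:
    if not post.violates_rules:
        return "leave up"
    if post.author_id in XCHECK_WHITELIST:
        # VIPs skip immediate removal and wait for a human review
        # that, per the documents, "often never comes".
        return "queue for employee review"
    return "remove immediately"  # everyone else

print(enforce(Post(author_id=42, violates_rules=True)))  # queue for employee review
print(enforce(Post(author_id=7, violates_rules=True)))   # remove immediately
```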
Instagram Knows Their Algorithms Suck
Zuckerberg himself estimated in 2018 that Facebook gets “10% of its content removal decisions wrong, and, depending on the enforcement action taken, users might never be told what rule they violated or be given a chance to appeal.”
Head of Instagram Adam Mosseri has commented on these issues, saying “we haven’t always done enough to explain why we take down content when we do, what is recommendable and what isn’t, and how Instagram works more broadly.” He states the company is trying to develop better in-app notifications so users know why a post gets taken down or when it goes against their guidelines.
Instagram knows about these flaws and the frustration of users who face unjust content removal. Per the Journal’s reporting, “users often describe on Facebook, Instagram or rival platforms what they say are removal errors, often accompanied by a screenshot of the notice they receive. Facebook pays close attention. One internal presentation about the issue last year was titled ‘Users Retaliating Against Facebook Actions.’”
Tips for Posting
We know that the algorithms lack nuance and have no way of gauging sarcasm, satire, hyperbole, or irony, so to understand how and why they remove posts, look at every post as though it were entirely earnest and serious.
The more violations you accumulate, the stricter the algorithms become.
If you have received account violations before, it is never a bad idea to have a backup account.
If you’ve violated the terms of service, you may not be able to use swear words or negative words like ‘hate’, ‘kill’, ‘ugly’, ‘stupid’, etc. Ways to get around this include using less readable fonts for text posts, crossing or blurring out letters or words in photos, and using symbols to replace letters when writing captions or comments (e.g. h4t€ instead of hate); a rough sketch of that last trick follows below. Avoid crossing out words within the app’s story tool, as Instagram can read the image underneath. None of this is foolproof, as these things can still be detected. Come up with more creative ways to be mean or offensive! (I often write “I opposite of love xyz” instead of “I hate xyz.”)
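For what it’s worth, here is the symbol-substitution trick as a small script. The mapping is just my own pick of lookalike characters, and there’s no guarantee it fools any given filter, since scanners can normalize these characters right back:

```python
# Toy implementation of the symbol-substitution trick described
# above. The mapping is my own arbitrary choice of lookalikes;
# real filters may normalize these characters before scanning.

SUBSTITUTIONS = {"a": "4", "e": "€", "i": "1", "o": "0", "s": "$"}

def obfuscate(text: str) -> str:
    """Swap selected letters for visually similar symbols."""
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in text)

print(obfuscate("hate"))    # h4t€
print(obfuscate("stupid"))  # $tup1d
```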
Political content can be tricky as well. Instagram has been known to remove pictures of terrorist groups and infamous figures (ISIS, Hamas, Bin Laden, Hitler, etc.) regardless of the context. After I posted a meme with a photo of the Taliban in it, the post was removed for promoting terrorist organizations, despite not doing so in any capacity. If Instagram flags one of your posts over a particular figure or group, avoid posting that figure or writing out their name again, as the algorithms will search for it.
If you receive several violations and are at risk of having your account deleted, it is best to lie low: post less, post tamer content, or deactivate your account for a few days or weeks to try to reset the algorithms.
I was so blacklisted that the algorithms monitored my direct messages, automatically flagging a message I sent that had the word “douche” in it and taking my whole account down. Instagram announced in February 2021 that they would be introducing “new measures, including removing the accounts of people who send abusive messages, and developing new controls to help reduce the abuse people see in their DMs.” Yet that same statement claimed that “because DMs are for private conversations, we don't use technology to proactively detect content like hate speech or bullying the same way we do in other places.” This is obviously not the case, as I watched them remove my content in real time.
WHAT TO DO IF YOU’RE ZUCKED
If your account is deactivated by Instagram, try logging in from a browser and following the steps after login. If you do not receive an email from Instagram after 48 hours, try these links:
https://help.instagram.com/contact/1652567838289083
https://help.instagram.com/contact/606967319425038
https://help.instagram.com/contact/396169787183059
File as many appeals as Instagram allows. In your appeal, talk about building a positive community and be adamant that you did not break community guidelines.
In the app, direct your friends and followers to report the deactivation.
Have them go to Settings -> Help -> Report a Problem and write that you were unjustly deactivated and that you are a vital member of the community.
WHAT TO DO IF YOU’RE SHADOWBANNED
The lore says that if you wait two weeks your shadowban will be lifted, but this is not always the case. Deactivating your account for anywhere between 24 hours and two weeks can also reset the algorithms so that your account is no longer hidden.
Directing users to turn on post notifications for you can help show your content in their feeds.
Directing users to allow sensitive content can also help. To do this, go to Settings -> Account -> Sensitive Content Control -> Allow.