Late one night in February, the 19-year-old was in bed, scrolling on Instagram. Amid his feed's usual mix of memes and movie clips, the app recommended a video of two men fighting -- a brutal, graphic brawl that left him shaken.
He quickly scrolled past the post, but the Instagram algorithm kept showing him more.
"Every time you scrolled, it just kept going further and further with more graphic and violent fights," Stevens said. Online, users worldwide reported the same experience: On Feb. 26, their feeds were suddenly filled with video after video showing real human suffering and death.
Instagram's parent company, Meta, apologized after the incident, which affected Instagram Reels, the company's short-form video feature similar to TikTok. A spokesperson said they'd "fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended."
But Feb. 26 was only the tip of the iceberg. Violent and shocking content, often referred to as "gore," remains easily accessible on Instagram today, a CBS News investigation found.
Instagram Reels hosts a thriving gray-market economy driven by graphic violence, people who post violent content on Instagram told CBS News. Banned advertisers -- gambling sites, crypto apps, porn agencies -- evade Meta's ad rules by paying graphic-content pages to embed illicit promotions among the gore.
Between February and April 2025, CBS News identified more than 600 Instagram accounts that post real-world violence packaged into short-form meme videos. The accounts post horrific videos that are viewed by millions of users -- some of whom told CBS News they didn't want to see them.
Even elementary school students were exposed to violence on Instagram, said Nate Webb, a school counselor in Tooele, Utah. On Feb. 26, he said some fourth graders were joking around about the content they were seeing.
"It was shocking to me that such graphic content disturbs me, and I'm a full-grown adult, that didn't really faze some of those kids," he said. "It made me wonder, how much more are you seeing and how much more often are you seeing it?"
The violent imagery can "saturate" a person, overwhelming the brain and even leading to PTSD-like symptoms, said Laura Van Dernoot Lipsky, an expert on trauma who founded the Trauma Stewardship Institute.
"There are no words to describe how profound and damaging and powerful that impact is, even if we're just bringing it down to neuroscience and a brain perspective," she said.
It can be especially troubling if someone is shown gore content involuntarily, according to Joanne Lloyd, a researcher at the University of Wolverhampton in England.
"The people that seem to be most distressed by it were the people who had not gone looking for it," Lloyd said.
CBS News interviewed Instagram users around the world and reviewed dozens of social media posts complaining about graphic content on the platform. Each said the Instagram algorithm showed them horrific violence they didn't want to see -- both before and after the incident in February.
Meta declined to make anyone available for an interview for this story, but said in a statement that it invests heavily in safety and security, with 40,000 staff and more than $30 billion dedicated to those issues over the last decade. The company said it restricts the monetization of violent content and adds warning labels, aiming to shield teens and those who don't want to see graphic posts -- while acknowledging that not all disturbing material meets its threshold for removal.
In addition, Meta said it expanded teen protections in an October policy update, now automatically hiding more graphic content such as dead bodies, medical injuries, and dying animals. Teens will also be blocked from following or interacting with accounts that share age-inappropriate material, including those linked to adult platforms such as OnlyFans.
Monetizing bloodshed
While violence on Instagram disturbed many, for others, it was good for business.
Matthew Furman, a 22-year-old college student in Tempe, Arizona, uses 12 phones to run more than 40 graphic-content pages with names like "@deadshootingcrew," "@peoplesdyingzone," and "@deadpeoplesbro." Across those accounts, CBS News confirmed Furman has more than 10 million followers.
On Feb. 26, Furman noticed some of his old posts going viral. They weren't new videos, but ones from a few months earlier being given new life by Instagram.
"They changed their algorithm," he said. "These videos will always have the highest retention and engagement, so those videos got pushed the most."
Furman estimated he gained about 200,000 followers on his accounts that day. The page he said saw the biggest growth was called "deadshootingcrew," which as of June had about 800,000 followers.
One of its recent posts, viewed nearly 100,000 times, shows a man being shot in the head.
CBS News spoke to over a dozen people who said they run similar violent-content Instagram accounts. They all said the same thing: it's easy to make thousands of dollars posting videos of horrific violence on the platform.
Their customers are businesses that need to skirt Meta's advertising rules because their content violates the platform's other policies -- often gambling sites, cryptocurrency apps, or porn agencies. Those businesses pay people like Furman to post promotions on similar gore pages or even buy the pages outright.
If an account manager sells a page with around 100,000 followers, they can expect to make between $1,000 and $3,000, according to multiple people who run these accounts. The buyers take over the accounts, delete all the posts, and change the username and profile picture -- but keep all the followers.
One account manager interviewed by CBS News identified himself as an 18-year-old man living in Finland named "Henri." He runs at least seven Instagram accounts dedicated to things like car crashes, prison fights and people breaking bones.
In a Zoom interview, Henri told CBS News he can make anywhere from $600 to $2,000 selling Instagram accounts -- the more followers, the higher the price.
Henri provided screenshots of cryptocurrency transactions and a group chat conversation related to his most recent sale. They show a "middleman" brokering a deal between him and the person buying the account. The middleman helps Henri set up a cryptocurrency wallet and transfer ownership of the Instagram account in exchange for $629.58.
CBS News reviewed dozens of group chats on the chat app Telegram in which account managers discussed deals; gore account owners said the app is widely used for making deals and distributing content.
In 2024, French authorities charged Telegram founder Pavel Durov with allowing the platform to be used for criminal purposes including child sexual abuse material and drug trafficking. Durov has called the charges "misguided" and said they should have been brought against Telegram as a company, rather than against him personally. The case is ongoing.
In chats on the app, reviewed by CBS News, gore accounts are sold alongside other "niche" account categories including non-violent memes, and the prices vary based on the size of the accounts, their location, and other factors such as engagement or growth rate.
Telegram is also used to establish credibility in the Instagram account market: in so-called "vouch channels," buyers, sellers and middlemen "vouch" for each other by posting public endorsements after completing transactions.
Gore account operators can make even more if they sell promotions on their pages rather than selling the accounts, according to interviews with account operators and Telegram chats reviewed by CBS News.
Unlike official ads on Instagram, these "ads" are regular Instagram Reels showing whatever the ad buyer wants -- often online gambling sites or sex workers who use the platform OnlyFans. The posts are typically removed after 24 hours to avoid being flagged by Instagram, the account operators told CBS News.
Prices vary depending on how many followers the pages have, the number of times an illicit ad is viewed, or the number of paying customers sent to a buyer's site. Ads on big pages with lots of engagement can cost over $1,000 per day, account managers said.
Matthew Furman claims to have made $250,000 last year running these types of ads, raking in as much as $70,000 in a single month. He provided screenshots showing transfers of nearly $60,500, but CBS News wasn't able to independently verify all the income he claimed.
Gore goes viral
Why post violent content specifically, rather than less harmful videos that could also go viral? It's simple, according to nearly a dozen account managers interviewed by CBS News: graphic videos capture the most attention, and the Instagram algorithm rewards content that grabs users' attention.
"You know, there's millions of meme pages," said one account manager interviewed by CBS News, who wouldn't give his real name, but showed proof he owns gore accounts. "It's a little bit harder to stand out, a little bit harder to grab people's attention. With the more shocking graphic content, it has, like, huge potential to go viral on Reels."
Furman also runs non-gore pages, like one devoted to Christian memes and Bible quotes and another that posts videos of orange cats.
But he said the violent ones do the best.
In July 2024, Furman noticed graphic pages were getting hundreds of thousands of followers.
"In a few weeks, I was like, I need to do that too," he said.
Furman began downloading videos he found on Instagram's "explore page," wrapping them in his custom meme template and re-posting them on his pages.
"I only repost videos that are already on Instagram and already viral on Instagram," Furman said, "so if they're already viral then ... they would have been taken down if they were against the terms of service."
Another account operator told CBS News he often gets videos from X -- formerly Twitter -- which he said is more permissive of this type of content. X did not respond to a CBS News request for comment.
Others go to more drastic lengths. A 24-year-old account operator from Ohio and another man who works with him told CBS News they scour the darker corners of the internet for gore to post on Instagram.
In a text conversation with CBS News, the collaborator listed several gore websites where the pair finds videos. The sites host footage depicting depraved violence, including shootings, torture, and beheadings. The men called it "fresh content."
Other account managers CBS News interviewed said they get their videos from Telegram; in its review of Instagram accounts, CBS News counted about 100 pages that linked to Telegram channels. Many of those Instagram pages openly boast that the Telegram groups contain the "full videos," just a tap away.
"Content that encourages violence is explicitly forbidden by Telegram's terms of service and is removed whenever discovered," A Telegram spokesperson said in a written statement.
"Moderators empowered with custom AI and machine learning tools proactively monitor public parts of the app and accept reports in order to remove millions of pieces of harmful content each day, including gore."
According to a lawsuit filed in 27 states by parents of children who say they were harmed by social media platforms, violent and graphic content sometimes gets an algorithmic boost from Instagram -- especially in the feeds of young users.
The complaint, based on internal Meta documents the states' attorneys obtained through subpoenas, alleges that Meta periodically presents young users with "psychologically and emotionally gripping content," including violent content, to increase engagement.
A 2020 internal Meta user survey obtained by the attorneys showed that about 12% of Instagram users reported seeing graphic content on the platform, according to the complaint. A 2021 internal report found that 8% of surveyed users aged 13 to 15 had seen self-harm content on Instagram within the previous week, the complaint alleged.
“Sensitive Content”
Meta has rules about graphic content, and an automated content moderation system designed to detect it. Those rules begin by saying "we understand that people have different sensitivities with regard to graphic and violent imagery," before detailing the types of content they remove or censor.
Screenshot of an Instagram Reel labeled "sensitive content."
According to data published by Meta, the company "took action on" 39 million pieces of content on Instagram last year.
Meta's public data does not, however, say what type of action was taken after detection of those 39 million pieces of content. An "action" can include deleting a post or disabling an account, but it can also mean a post is covered by a warning screen and left on the platform.
These screens blur the videos and warn users "this video may contain graphic or violent content." Users can click a "see why" button, which displays a popup explaining the post "doesn't go against our Community Standards, but may contain images that some people might find upsetting."
If users click "see reel," they're shown the video.
CBS News reviewed hundreds of Reels flagged as sensitive by Instagram's system. They depict homicides, industrial accidents, extreme car crashes and more.
These warning screens may even make people more likely to view the censored content, according to Victoria Bridgeland, a researcher at Flinders University in Australia.
In multiple studies, Bridgeland and her colleagues found people chose to uncover censored posts at startlingly high rates -- as high as 90%.
One problem with the warning screens, Bridgeland said, is that they don't describe the censored videos, but instead contain a generic warning that the video "may contain graphic or violent content." One way to make them more effective, according to Bridgeland, would be to add captions that describe what the video might show.
"It satiates the curiosity a little bit and they don't have to go and then look at the content," she said.
It's not clear if Instagram limits how many "sensitive" posts an account can have. CBS News found over 100 accounts with five or more "sensitive content" Reels, and several that were almost entirely censored.
All but one of the posts by the account @gore_plugg were flagged by Instagram but remained up. They include videos of people being shot, in some cases with significant visible blood and gore.
While Instagram's policies also say sensitive posts "may not be eligible for recommendations," CBS News found the platform's "suggested for you" feature routinely recommends accounts with flagged posts.
CBS News identified more than 300 accounts that were recommended in this way despite having been previously flagged for sensitive content.
Meta's content moderation policy also says posts flagged as sensitive are "age-gated," meaning they are limited to users 18 and older.
Screenshot of violent accounts marked "Suggested for You" on Instagram.
More recently, Meta has taken steps to give users more control over how much graphic content they see on Instagram. In 2022, the company rolled out "sensitive content control" settings giving users three options: "less," "standard" or "more." Certain accounts belonging to teens under 16 defaulted to "less," and the "more" setting was disabled for accounts of users under 18. "None" is not an option for users.
In September 2024, Meta strengthened protections for minors, rolling out new "teen accounts," which defaulted to the "less" sensitive content setting.
Those sensitive content controls only affect content flagged by Instagram's automated system, which doesn't catch every sensitive post, CBS News found.
CBS News reviewed the Reels of about 40 accounts that had been heavily flagged for sensitive content. Many contained uncensored clips adjacent to censored ones featuring nearly identical content.
For example, an account called "@extreme_crashed.0," which is dedicated to car crash videos, posted a video on April 25 of a person being hit by a car in a crosswalk and thrown into the air before lying motionless in the street. Instagram gave that video a "sensitive content" warning.
The same day, that account posted another video showing a person being ejected from a car in a highway crash. That video was left uncensored.
Alexa Koenig, a human rights investigator and lawyer who helps train researchers to verify graphic footage, said detecting this type of content should be easy -- especially when the accounts and captions use explicit descriptions of violence.
"It's as basic as it gets from a tech perspective," Koenig said. "It's so simple today to do any kind of search across a platform for basic terminology or keywords, and the companies have additional tools at their disposal that everyday users of platforms don't even have, so they should be able to root this out and be able to review it fairly readily."
Why Instagram?
Instagram Reels was launched in late 2020 as a competitor to TikTok, which had soared in popularity and threatened Instagram's share of the social media market, especially among the youngest users.
In many ways, the two apps are nearly identical: their interfaces, algorithms, and core functions are essentially the same. But CBS News found violent and shocking content is easily accessible on Instagram Reels, while you'd have to search harder to find similarly gruesome material on TikTok.
To test the availability of graphic content on the two platforms, CBS News searched the same set of over 30 keywords on both Instagram and TikTok. The results show violence is easier to find on Instagram Reels.
For example, a search for the word "shootings" on Instagram produced five accounts with a combined total of about 400,000 followers. The five most recent posts on each of those pages had a combined 11 million views and included multiple videos of people being shot in the head at point-blank range.
On TikTok, the same search surfaced one relevant account, which had a single follower and one video with two views.
In total, the searches on Instagram surfaced 116 accounts with a combined 4.4 million followers. Across the first five Reels posted by each of those Instagram accounts, CBS News counted more than 75 million views. On TikTok, the same searches yielded 40 accounts with a combined 132,000 followers and about 6 million views.
One of the Instagram accounts, @warfareclips_, has 246,000 followers, and its recent posts have a combined 9.7 million views. They include multiple videos of Russian and Ukrainian soldiers fighting on the front lines.
In contrast, the largest TikTok account found in the searches was @carcrash20241. With 17,800 followers and 4.8 million views on its recent posts, it's a fraction of the size of the Instagram accounts CBS News identified.
Meta and TikTok both have rules about graphic content designed to shield users from trauma. But they diverge on what is too traumatic to host.
TikTok frames its platform as "not a place to intentionally shock, upset, or disgust," and draws a hard line at videos of death and injury. Meta frames its rules as balancing people's "different sensitivities," promising to remove "the most graphic content" while masking the rest behind warning screens.
In practice, that means TikTok deletes a video of a fatal shooting while Instagram labels it.
Many people who run violent Instagram accounts told CBS News that TikTok is stricter than Instagram.
One gore account operator said he's tried posting on TikTok but "it got taken down instantly."
"For something like memes, TikTok is really good for that, but for gore/crime niche stuff, it's not allowed at all [on TikTok]," he said in a direct message conversation on Instagram.
Meta declined CBS News's repeated requests for an interview for this story. In an emailed statement, a Meta spokesperson said the platform does its best to protect its hundreds of millions of users from seeing harmful content.
The spokesperson pointed to Meta's staff of 40,000 dedicated to "safety and security issues," and said the company has spent over $30 billion on those issues over the last decade.
"While not every piece of graphic content meets the threshold for removal from our platforms, we know certain images can be disturbing for some people," the statement said. "That's why it's our policy not to recommend violent content, restrict it from being monetized and add warning labels that people must click through before viewing it. And for teens, we aim to hide graphic content entirely -- even if it's shared by someone they follow. Additionally, we give all users controls to further personalize their experience."
Credits
Reporting: Chris Hacker, Erielle Delzer, Layla Ferris, Julia Ingram, Ash-har Quraishi, Amy Corral | Data analysis: Chris Hacker | Video editing and photography: Ryan Beard, Josh Pena | Design and development: Taylor Johnston | Editing: John Kelly, Grace Manthey, Rhona Tarrant, Jamie Nguyen