Update: After this article was published, Bluesky restored Kabas’ post and told 404 Media the following: “This was a case of our moderators applying the policy for non-consensual AI content strictly. After re-evaluating the newsworthy context, the moderation team is reinstating those posts.”

Bluesky deleted a viral, AI-generated protest video of Donald Trump sucking on Elon Musk’s toes because its moderators said it was “non-consensual explicit material.” The video was broadcast on televisions inside the offices of the Department of Housing and Urban Development earlier this week, and quickly went viral on Bluesky and Twitter.

Independent journalist Marisa Kabas obtained the video from a government employee and posted it on Bluesky, where it went viral. On Tuesday night, Bluesky moderators deleted the video, saying it was “non-consensual explicit material.”

Other Bluesky users said that versions of the video they uploaded were also deleted, though it is still possible to find the video on the platform.

Technically speaking, the AI video of Trump sucking Musk’s toes, which had the words “LONG LIVE THE REAL KING” superimposed on it, is a nonconsensual AI-generated video, because Trump and Musk did not agree to it. But social media platforms’ content moderation policies have always had carve-outs that allow for the criticism of powerful people, especially the world’s richest man and the literal president of the United States.

For example, we once obtained Facebook’s internal rules about sexual content for content moderators, which included broad carve-outs to allow for sexual content that criticized public figures and politicians. The First Amendment, which does not apply to social media companies but is relevant considering that Bluesky told Kabas she could not use the platform to “break the law,” offers essentially unlimited protection for criticizing public figures in the way this video does.

Content moderation has been one of Bluesky’s growing pains over the last few months. The platform has millions of users but only a few dozen employees, meaning that perfect content moderation is impossible, and a lot of it necessarily needs to be automated. This is going to lead to mistakes. But the video Kabas posted was one of the most popular posts on the platform earlier this week and resulted in a national conversation about the protest. Deleting it—whether accidentally or because its moderation rules are so strict as to not allow for this type of reporting on a protest against the President of the United States—is a problem.

  • Natanox@discuss.tchncs.de
    20 hours ago

Dude, you do realize I didn’t endorse centralized moderation with a single word, let alone social algorithms or any of the other trash? I’m just not ignorant enough to believe the internet wouldn’t become an utter pile of trash without any kind of moderation or oversight, especially with such an abundance of ways to spread nonsense fully automatically. Want to get a glimpse of what that would look like? Look at Nostr. Given you’re literally starting off with ad hominem, any discussion with you is pointless anyway though.

    • lmmarsano@lemmynsfw.com
      18 hours ago

      Dude, you do realize I didn’t endorse centralized moderation with a single word, let alone social algorithms or any of the other trash?

They’re widespread varieties of moderation taken to their natural limits. And they highlight the weakness of thinking that approach will save us when it’s often blamed for doing the opposite.

      Clearly, you disagree with that kind of moderation, so maybe you should “no true Scotsman” this & define precise boundaries of moderation you accept. The only type of moderation I might accept is the minimal necessary for legal compliance & labeling that allows the user to filter content themselves.

      become an utter pile of trash

      abundance of ways to spread nonsense fully automatically

      Matter of perspective: that “trash” we had before was beautiful. Sifting & picking through it wasn’t much of a problem. Despite the low moderation, the nonsense didn’t really spread & the fringe groups mostly kept to their odd sites when they weren’t being ridiculed.

      Look at Nostr.

      Also beautiful: beats bluesky & mastodon.

      Given you’re literally starting off with ad hominem

      Let’s add hypercritical to the list. I disagree with the alarmism over images & text on a screen, and I disagree with the infantilization of adults. Adults still think and are responsible for exercising judgment in the information they consume. Expressions alone do nothing until people choose to do something.