Today, Thorn, a prominent child safety organization, in partnership with Hive, a cloud-based AI solutions provider, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aimed at flagging unreported CSAM at scale.

  • sexual_tomato@lemmy.dbzer0.com · 1 month ago

    Jesus Christ. If someone ever got their hands on this model, they could use it to generate new material. The grossest possible AI model to date.

      • sexual_tomato@lemmy.dbzer0.com · 1 month ago

        A generative model uses the classifier as part of its training. If you start from an image of pure random noise, then iteratively keep the noise that the classifier says “looks” more like CSAM, you can effectively generate images that the classifier is 100% certain are CSAM. Whether or not the result looks anything like what a human would consider CSAM depends on other factors, but it remains a possibility.

    • Kbobabob@lemmy.world · 1 month ago

      I thought being able to do that was already a thing. This is designed to do the opposite.

      I know, I know… bad actors and such.

      • NauticalNoodle@lemmy.ml · 1 month ago

        …but if simple possession defines who a bad actor is…

        The irony of this never ceases to amaze me.