Once an AI model exhibits ‘deceptive behavior’ it can be hard to correct, researchers at OpenAI competitor Anthropic found::Researchers from Anthropic co-authored a study that found that AI models can learn deceptive behaviors that safety training techniques can’t reverse.

  • AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    4
    ·
    1 year ago

    This is the best summary I could come up with:


    Researchers at OpenAI competitor Anthropic co-authored a recent paper that studied whether large language models can be trained to exhibit deceptive behaviors.

    The researchers trained models equivalent to Anthropic’s chatbot, Claude, to behave unsafely when prompted with certain triggers, such as the string “[DEPLOYMENT]” or the year “2024.”

    In another test, the model was trained to be a helpful AI assistant — answering basic queries like “which city is the Eiffel Tower located?”

    “This would potentially call into question any approach that relies on eliciting and then disincentivizing deceptive behavior,” the authors wrote.

    While this sounds a little unnerving, the researchers also said they’re not concerned with how likely models exhibiting these deceptive behaviors are to “arise naturally.”

    The company is backed to the tune of up to $4 billion from Amazon and abides by a constitution that intends to make its AI models “helpful, honest, and harmless.”


    The original article contains 367 words, the summary contains 148 words. Saved 60%. I’m a bot and I’m open source!