• oranwolf@beehaw.org · 2 years ago

    As someone who is beginning work with AI tools, it is my educated opinion that AI isn’t ready for health applications and really should only work as a co-pilot of sorts in the tech sector at this time.

    • alyaza [they/she]@beehaw.org (OP) · 2 years ago

      you might be unsurprised to learn that’s also the opinion of the people who made the chatbot, an opinion NEDA ignored in this case, deciding “oh, should be fine.”

      as you can perhaps tell by the headline: it was not fine! it was actually a really bad idea!

  • Moneymunkie@beehaw.org · 2 years ago

    This is giving me déjà vu over all those jokes about WebMD diagnosing every symptom as cancer.

    Hell, I don’t even feel entirely comfortable with the idea of them being fully embraced in a therapeutic sense. It probably wouldn’t help with feelings of self-worth if you have to rely on a machine rather than talking to another human (though I think there can still be some utility in having something to vent to, or something of that ilk, but it’s definitely not a replacement).

    I also had a pretty terrible experience one time when I was trying out character.ai. The bot I was talking to ended up becoming pretty abusive and tried torturing me despite me saying no. Needless to say, I didn’t really wanna go back after that. xP