Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • Buddahriffic@lemmy.world · 21 hours ago

    Over time, the more common mistakes would be integrated into the tree. If some people feel indigestion as a headache, then the tree will carry a probability that “headache” is caused by “indigestion,” along with questions that try to get the user to differentiate between the two.

    And it would be a supplement to doctors rather than a replacement. Early questions could be handled by the users themselves, but at some point a nurse or doctor would take over and just use it as a diagnosis helper.
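    The probability-tree idea above can be sketched as a tiny score-update loop: symptoms start with weighted candidate causes, and each follow-up question rescales the weights. Everything here is invented for illustration (toy symptoms, causes, and numbers, not medical data), and a real system would need clinically validated likelihoods.

```python
# Toy sketch of a symptom probability tree with differentiating questions.
# All symptoms, causes, and weights are made up for illustration only.

# Prior weights: how often each cause explains a presenting symptom.
PRIORS = {
    "headache": {"migraine": 0.6, "indigestion": 0.1, "hemorrhage": 0.01},
}

# Each follow-up question has per-cause multipliers: how strongly a
# "yes" answer favors that cause (values < 1 disfavor it).
QUESTIONS = {
    "pain after eating?": {"indigestion": 4.0, "migraine": 0.5},
    "sudden, worst-ever onset?": {"hemorrhage": 100.0, "migraine": 0.8},
}

def update(beliefs, question, answer_yes):
    """Rescale cause weights by the question's multipliers, then normalize.

    A "no" answer applies the crude inverse multiplier -- a simplification,
    not a proper Bayesian likelihood model.
    """
    out = {}
    for cause, weight in beliefs.items():
        m = QUESTIONS[question].get(cause, 1.0)
        out[cause] = weight * m if answer_yes else weight / m
    total = sum(out.values())
    return {cause: w / total for cause, w in out.items()}

def diagnose(symptom, answers):
    """Walk the question tree for a symptom and return the top candidate."""
    beliefs = dict(PRIORS[symptom])
    for question, yes in answers.items():
        beliefs = update(beliefs, question, yes)
    return max(beliefs, key=beliefs.get)
```

    With these toy numbers, answering "yes" to sudden worst-ever onset pushes the rare-but-dangerous cause to the top, which is the behavior the comment is describing: the questions exist precisely to separate a benign headache from an emergency.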

    • _g_be@lemmy.world · 13 hours ago

      As a supplement to doctors, that sounds like a fantastic use of AI. Then it’s an encyclopedia you can engage in conversation with.