Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”


What gives you the confidence that you don’t do the same?
because I don’t just look the words I know and feel their meaning, and I’m aware of process (this is why ALL LLMs is vulnurable to Promt Injections) also talking about Promt Injection think about when ChatGPT confidently give you an advice that would kill you because of prompt injection
human: je pense
llm: je ponce