Greg Clarke

Mastodon: @greg@clar.ke

  • 56 Posts
  • 362 Comments
Joined 2 years ago
Cake day: November 9th, 2022

  • That’s my point: if the model returns a hallucinated source, you can probably disregard its output, but if it provides an accurate source, you can verify its output (see the sketch below). Depending on the information you’re researching, this approach can be much quicker than using Google. Out of interest, have you seen ChatGPT hallucinate sources recently (in the last few weeks)? I haven’t experienced source hallucinations in a long time.
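
A first pass at that verification can even be automated: before reading a cited source, check that the link actually resolves. A minimal sketch in Python, using only the standard library; the example URL is a placeholder, not one taken from any actual chat:

```python
# Minimal sketch: check whether a cited source URL actually resolves.
# This only confirms the link exists; it does not confirm the source
# supports the model's claim, which still needs a human read.
import urllib.error
import urllib.request


def source_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        # Unreachable host, bad scheme, or malformed URL: treat as a
        # likely hallucinated source.
        return False


if __name__ == "__main__":
    # Placeholder URL for illustration only.
    url = "https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)"
    print(f"{url} resolves: {source_resolves(url)}")
```

A dead link is a strong hint the citation was hallucinated, while a live link just means the real work of checking the content can begin.
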