  • That… isn’t telling you what you want to hear.

    LLMs are literally just complex autocorrect. They don’t weight their responses based on what a user wants to hear (unless explicitly instructed to); they simply return the most statistically likely response they can generate.
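
    To make “most statistically likely” concrete, here’s a minimal toy sketch. The vocabulary, the scores, and the greedy pick are all made up for illustration; this isn’t how any particular model is implemented:

    ```python
    import math

    # Toy next-token step. The tokens and scores here are invented;
    # a real model scores tens of thousands of candidate tokens.
    logits = {"sphere": 4.2, "round": 2.1, "flat": -1.3}

    def softmax(scores):
        # Turn raw scores into probabilities that sum to 1.
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)
    # Nothing here consults what the user wants; the model just picks
    # from this distribution (greedily here, usually by sampling).
    print(max(probs, key=probs.get))  # -> "sphere"
    ```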

    Tell it to talk like a pirate and it will pattern-match to pirate talk. It’s not doing it because you want it to, but because you gave it a “pre-prompt” to talk like a pirate, and it produced the most statistically likely continuation of that.
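
    A hedged sketch of what a “pre-prompt” actually is, assuming the common chat-message convention; the role names and the flattening step are illustrative, not any specific API:

    ```python
    # The "pre-prompt" is just more text prepended to the context. The
    # role/flattening convention below is illustrative, not a real API.
    messages = [
        {"role": "system", "content": "Talk like a pirate."},  # the pre-prompt
        {"role": "user", "content": "What shape is the Earth?"},
    ]

    # From the model's point of view this is one long string to continue,
    # so pirate-flavored text becomes the most likely pattern match.
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    print(prompt)
    ```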

    Yes, this can seem like it’s telling you what you want to hear, but go ask it what shape the world is. Then tell it you want the Earth to be flat and ask the question again. Both times the answer will be an oblate spheroid, because it doesn’t know or care what you want.

    Now, if you say “Imagine the world is flat” first, then yeah, it’ll tell you it’s flat. Not because you want it to, but because you’re explicitly handing it “new information” that you want it to incorporate into its response.
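
    A toy stand-in for how “new information” in the context flips the answer; generate() here is a fake placeholder I made up, not a real model call:

    ```python
    # Fake stand-in for an LLM call, just to show that the output is
    # conditioned on whatever text sits in the context window.
    def generate(prompt: str) -> str:
        if "imagine the world is flat" in prompt.lower():
            return "In that scenario, the Earth would be flat."
        return "The Earth is an oblate spheroid."

    print(generate("What shape is the Earth?"))
    print(generate("Imagine the world is flat. What shape is the Earth?"))
    ```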

  • KairuByte@lemmy.dbzer0.com to Microblog Memes@lemmy.world, “Bro” · 22 days ago

    Poor programming?

    I’m sorry, LLMs are shit for various reasons, but “poor programming” isn’t one of them. I bring this up because branding it as such suggests there’s a “good programming” LLM out there without the problems inherent to any such system. That just isn’t a thing with the way LLMs work.