• MonkderVierte@lemmy.zip · 2 days ago

    They didn’t think of just letting it generate the text (usually Markdown) and then processing it to HTML?

    Wait, the LLM does the “thinking” there.
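The pipeline the comment describes (LLM emits Markdown, a post-processor renders the HTML) can be sketched with a tiny hand-rolled converter. This is a minimal illustration covering only headings and bold text, not a full Markdown implementation; in practice you'd use a real library such as a CommonMark parser.

```python
import re

def markdown_to_html(md: str) -> str:
    """Convert a tiny subset of Markdown (ATX headings, **bold**) to HTML."""
    html_lines = []
    for line in md.strip().splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines between paragraphs
        # **bold** -> <strong>bold</strong>
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        # Leading '#' run becomes a heading of matching level
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            html_lines.append(f"<h{level}>{m.group(2)}</h{level}>")
        else:
            html_lines.append(f"<p>{line}</p>")
    return "\n".join(html_lines)

print(markdown_to_html("# Title\nSome **bold** text."))
```

The point of the comment stands either way: the deterministic Markdown-to-HTML step costs no tokens, whereas asking the model to emit HTML directly spends tokens on tags.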

    • nickiwest@lemmy.world · 1 day ago

      I assume that if it isn’t already happening, future models will have instructions to maximize token usage.

    • chicken@lemmy.dbzer0.com · 2 days ago

      I assume there’s some context for why there’s an argument over prompting for Markdown vs. HTML in the first place, and for what this is actually being used.