• vrighter@discuss.tchncs.de
    4 days ago

    Any processor can run LLMs. The only issues are how fast it runs and how much RAM it has access to. And you can trade the latter for disk space if you’re willing to sacrifice even more speed.

    If it can add, it can run any model
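    The RAM-for-disk trade above is commonly done by memory-mapping weight files so the OS pages them in from disk on demand instead of holding them all resident. A minimal sketch using NumPy’s `memmap` (the file name and sizes here are made up for illustration):

    ```python
    import os
    import tempfile
    import numpy as np

    # Write a fake "layer" of weights to disk (stands in for a model file).
    path = os.path.join(tempfile.mkdtemp(), "weights.bin")
    rows, cols = 1024, 256
    rng = np.random.default_rng(0)
    rng.standard_normal((rows, cols)).astype(np.float32).tofile(path)

    # memmap gives an array-like view backed by the file, not by RAM;
    # the OS reads pages from disk only as they are touched.
    weights = np.memmap(path, dtype=np.float32, mode="r", shape=(rows, cols))

    x = np.ones(rows, dtype=np.float32)
    y = x @ weights  # disk pages are faulted in as the matmul walks them
    print(y.shape)   # (256,)
    ```

    This is exactly the speed trade-off mentioned: every cold access costs a disk read instead of a RAM fetch, so inference still works, just much slower.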

    • surph_ninja@lemmy.world
      4 days ago

      Yes, and a big part of AI advancement is running it on leaner hardware, using less power, with more efficient models.

      Not every team is working on building bigger with more resources. Showing off how much they can squeeze out of minimal hardware is an important piece of this.

    • Warl0k3@lemmy.world
      4 days ago

      Yeah, the Church-Turing thesis holds that you could run an LLM on a Casio wristwatch (if for some reason you wanted to do that). I can’t imagine the result is exactly what you’d call ‘good’, though…