First of all, the take that LLMs are just parrots that can’t think for themselves is dumb. They can, in a limited way! And they are an impressive step compared to what we had before them.

Secondly, there is the take that LLMs are dumb and make mistakes that take more work to correct than doing the work yourself from the start. That is something I often hear from programmers. It might be true for now!

But the important question is how they will develop! And now my take, which I have not seen anywhere else, even though it is quite obvious imo.

For me, the most impressive thing about LLMs is not how smart they are. The impressive thing is how much knowledge they have and how they can access and work with that knowledge, and they do this with a neural network of only a few billion parameters. Their major flaw at the moment is that they don’t know what they don’t know and what they can’t answer. They hallucinate instead of answering a question with “I don’t know.” or “I am not sure about this.” The other flaw is how they learn. It takes a shit ton of data, a lot of time and computing power for them to learn. And more importantly, they don’t learn from interactions, they learn from static data.

This is similar to what the company DeepMind did with their chess and Go engines (also neural networks). They trained the first generation on a shit ton of games played by humans, and it became really good that way. But the second generation of their engines did not look at any games played before. It only knew the rules of chess/Go and then started learning by playing against itself. It took only a few days and it could beat its predecessor that had needed all those human games to learn from.
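To make that self-play idea concrete, here is a toy sketch of my own (not DeepMind’s actual method, which uses neural networks and tree search): a little program that is given only the rules of a simple take-away game and improves purely by playing against itself, with the final result as the only feedback.

```python
import random
from collections import defaultdict

PILE = 10                     # starting number of stones
ACTIONS = (1, 2, 3)           # legal moves: take 1, 2 or 3 stones
values = defaultdict(float)   # value[(stones_left, move)], learned by self-play

def choose(stones, explore=0.1):
    """Pick a move: mostly the best known one, sometimes a random one."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < explore:
        return random.choice(legal)
    return max(legal, key=lambda a: values[(stones, a)])

def self_play_game():
    """Play one game against itself; whoever takes the last stone wins."""
    stones, player = PILE, 0
    history = {0: [], 1: []}
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player
        player = 1 - player
    return winner, history

# Learn from self-play only: the final result is the whole feedback signal.
for _ in range(20000):
    winner, history = self_play_game()
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state_move in history[p]:
            values[state_move] += 0.01 * (reward - values[state_move])

# The greedy policy should roughly converge to taking (stones % 4) stones
# whenever that is legal, which is the known winning strategy for this game.
for stones in range(1, PILE + 1):
    best = max([a for a in ACTIONS if a <= stones],
               key=lambda a: values[(stones, a)])
    print(stones, "->", best)
```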

So that is my take! The next step is when LLMs start to learn while interacting with humans, but more importantly with themselves. Teach them the rules (that is, the language) and then let them talk, or more precisely, let them play a game of asking and answering. It is more complicated than it sounds: how do you evaluate the winner in such a game, for example? But it can be done.
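To show what I mean, here is a rough, hypothetical skeleton of such an asking-and-answering loop. The ask/answer/judge functions are made-up placeholders (not any real API), just enough to make the loop run; finding a good judge is exactly the open problem mentioned above.

```python
import random

def ask(topic):
    """Stub for a model that generates a question about a topic."""
    return f"What is an important open question about {topic}?"

def answer(question):
    """Stub for a model that tries to answer the question."""
    return f"A tentative answer to: {question}"

def judge(question, answer_text):
    """Stub for the scoring step, the hard part: who won this exchange?
    Could be another model, a consistency check, an external verifier, ..."""
    return random.random()

training_examples = []
for topic in ["chess", "protein folding", "tax law"]:
    question = ask(topic)
    answer_text = answer(question)
    score = judge(question, answer_text)
    # Scored exchanges become new training data that the model produced itself.
    training_examples.append((question, answer_text, score))

print(f"collected {len(training_examples)} self-generated training examples")
```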

And this is where AGI will come from in the future. It is only a question of how big these NNs need to be to become really smart and how much time they need to train. But this is also when AI can get dangerous: when they interact with themselves and learn from that without outside control.

The main problem right now is that they are slow, as you can see when you talk to them, and they need a lot of data, or in this case a lot of interactions, to learn. But they will surely get better at both in the near future.

What do you think? Would love to hear some feedback. Thanks for reading!

  • mildbeard@linux.community · 10 months ago

    LLMs are only one kind of AI program. How smart would we be if we only used the speech areas of our brains? It’s important to be able to complement language with other kinds of thinking.

    The problem with neural network technology is the vast computational resources it requires to learn. The brain also requires enormous computing power, but brains grow organically and can efficiently run on corn and beans.

    To compete, AI systems will need to become much more efficient in the way they learn and process. Venture capital only goes so far. The subscription fees for ChatGPT don’t earn enough money to even cover the electricity costs of running the system.

    • niva@discuss.tchncs.de (OP) · 10 months ago

      Well, our natural languages have developed over thousands of years. They are really good! We can use them to express ourselves and to express the most complicated things humans are working on. Our natural languages are not holding us back! Or maybe the better take is: if a language is not sufficient, we expand it as necessary. We develop new specialized words and meanings for special subjects. We developed math to express and work with the laws of nature in a very compact and efficient way.

      Understanding and working with language is the key to AGI.

      Yes, big NNs use a lot of power at the moment. A funny example: when DeepMind’s AlphaGo engine beat one of the best human players, the human mind was running on something like 40 W while AlphaGo needed something like a thousand times that. And the human even won a few games with his 40 W :)

      And yes, you are right, AI systems learn very inefficiently compared to a human brain. They need a lot more data/examples to learn from. When the AlphaZero chess engine learned by playing against itself, it played tens of millions of chess games in a few days, far more than a human can play in a lifetime.

      • mildbeard@linux.community · 10 months ago

        I want to clarify my point about language not being sufficient. This point was not understood. When you use an LLM you may observe that there are certain ideas and concepts they do not understand. Adding more words to the language doesn’t help them. There are other parts of the human mind that do not process language. Visual processing, strategic and tactical analysis, anger, lust, brainstorming, creativity, art; the list goes on.

        To rival human intelligence, it’s not enough to build bigger and bigger language models. Human intelligence contains so many distinct mental abilities that nobody has ever been able to write them all down. Instead, we need to solve many problems like vision, language, goals, altruism/alignment etc. etc. etc., and then we need to figure out how to integrate all those solutions into a single coherent process. And it needs to learn quickly and efficiently, without using prohibitive resources to do it.

        If you think that’s impossible, take a look in the mirror.