Not advanced maths per se; neural networks are amazing! Fuzzy matching based on experience - taken to an incredible level. And, tuneable by internal simulation (imagination).
Don’t be fooled into thinking computer neural networks are how the brain is structured. Throughout history we’ve always compared the brain to the most advanced technology of the time: from clocks, to computers with short- and long-term memory, and now to neural networks.
That is a good point, though the architecture of computer neural networks is inspired by how we think the brain works, and if I understand correctly there is some definite similarity in the architecture.
Lots of difference though, still!
I would guess that every statement made is kind of true. It is a clock, a computer and an LLM,…
I would even go as far as to say an LLM is the closest thing to a functioning brain we can produce, from a functional perspective. And even these artificial brains are too complex to understand in detail.
I reckon we can get a lot closer than an LLM in time. For one thing, the mind has a particular understanding of interim steps, whereas, as I understand it, an LLM has no real concept of meaning between the inputs and the output. Some of this interim work is, I think, an important part of how we assess the truthfulness of generated ideas before we put them into words.
I experimented with rules like: “Summarize everything of our discussion into one text you can use as memory below your answer.” And “Summarize and remove unnecessary info from this text; if contradictions occur, act curious to resolve them”… simply to mimic a short-term memory.
It kind of worked better for problem solving, but it ate tokens like crazy and the answers took longer and longer. The current GPT-4 models seem to do something similar in the background.
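For anyone curious, here is a minimal sketch of that kind of rolling-summary loop in Python, assuming the OpenAI client library; the model name, prompt wording, and the `ask` helper are illustrative placeholders, not the exact rules quoted above.

```python
# Rough sketch of a rolling-summary "short-term memory" loop.
# Assumes the OpenAI Python client (pip install openai); model name and
# prompts are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

memory = ""  # the running summary used as short-term memory


def ask(user_message: str) -> str:
    global memory

    # 1. Answer the question, with the current summary prepended as context.
    answer = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Use the memory below as conversation context.\n\nMEMORY:\n" + memory},
            {"role": "user", "content": user_message},
        ],
    ).choices[0].message.content

    # 2. Fold the new exchange back into the summary, pruning unnecessary info.
    memory = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Summarize and remove unnecessary info. "
                        "If contradictions occur, note them as open questions."},
            {"role": "user",
             "content": f"Previous memory:\n{memory}\n\n"
                        f"New exchange:\nUser: {user_message}\nAssistant: {answer}"},
        ],
    ).choices[0].message.content

    return answer
```

Each turn costs two model calls and the whole summary gets re-sent every time, which is exactly why the tokens add up so quickly.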
I would really like to get into LLM and AI development but the math…woosh right over my head.
I think that’s still different from what I’m thinking of with interim steps, though.
…but as I think about how to explain it, I realize I’m about to blather about things I don’t understand, or at least haven’t had time to think about! So I’d better leave it there!
There is certainly math going on in the brain at various levels, both equivalent models and identical sorts of calculations; it’s not just fuzzy matching.
But probably not calculating trigonometry and calculus when juggling, right?
It’s almost certainly doing those things and more (especially linear algebra and differential equation solutions, and who knows what equivalent mathematical representations). Why wouldn’t it? Even in stereotyped movements, there are subtle feedback variations you need to account for.