  • Again, I really appreciate how deep you’ve gone into this. I haven’t dealt with these topics for many years, and even then I mostly dealt with the actual physical system of a single cell, not with what you can build out of them. However, I think that’s where the core of the issue lies anyway.

    I recently messed around with creating a spiking neural net made of “leaky integrate and fire” (LIF) neurons. I had to do the integration numerically, which was slow and not precise. However, hardware exists that does run every neuron continuously and in parallel.

    So you ran a simulation of those neurons?
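    (For context, a numerical LIF integration like the one described usually boils down to a forward-Euler loop over the membrane equation. Here is a minimal sketch of what I mean; every parameter value is made up for illustration, not taken from the original experiment.)

```python
import numpy as np

# Forward-Euler integration of a single leaky integrate-and-fire neuron.
# All parameter values are illustrative only.
dt       = 1e-4      # time step [s]: smaller -> more precise but slower
tau_m    = 20e-3     # membrane time constant [s]
v_rest   = -65e-3    # resting potential [V]
v_thresh = -50e-3    # spike threshold [V]
v_reset  = -70e-3    # reset potential after a spike [V]
r_m      = 1e7       # membrane resistance [ohm]
i_in     = 1.6e-9    # constant input current [A]

t = np.arange(0.0, 0.5, dt)
v = np.full_like(t, v_rest)
spike_times = []

for k in range(1, len(t)):
    # dv/dt = (-(v - v_rest) + R*I) / tau, discretised with Euler's method
    dv = (-(v[k - 1] - v_rest) + r_m * i_in) / tau_m
    v[k] = v[k - 1] + dt * dv
    if v[k] >= v_thresh:          # threshold crossing: emit a spike and reset
        spike_times.append(t[k])
        v[k] = v_reset

print(f"{len(spike_times)} spikes in {t[-1]:.1f} s")
```

    The slow-and-imprecise trade-off is right there in dt: shrinking the step improves accuracy but multiplies the number of loop iterations, which is exactly the cost that analog hardware avoids by letting the physics do the integration.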

    LIF neurons can be physically implemented by combining classic MOSFETs with redox cells, like Pt/Ta/TaOx with x < 1, or with hafnium or zirconium instead of tantalum.

    The oxygen vacancies in the oxide form tiny conductive filaments a few atoms thick. While the I-V curve is technically continuous, the number of different currents you can actually measure is limited. Shot noise, where the discreteness of electrons matters, even plays a significant role.

    Under absolutely perfect conditions, you can maybe distinguish 300 states. On a chip at room temperature, maybe 20 to 50. If you want to switch fast, it’s 5 to 20.
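    (As a rough back-of-envelope sketch of where numbers like these come from: the count of distinguishable levels is roughly the usable current range divided by a few standard deviations of the read noise. All values below are assumed for illustration; real devices add random telegraph noise and device-to-device variation on top.)

```python
import math

# Rough estimate of distinguishable current levels in a ReRAM-like cell.
# All numbers are assumed for illustration only.
q           = 1.602e-19   # elementary charge [C]
i_max       = 10e-6       # highest readable current [A]
i_min       = 0.1e-6      # lowest readable current [A]
t_read      = 10e-9       # integration time per read [s]; shorter = faster switching
sigma_other = 3e-7        # assumed non-shot read noise (thermal, RTN, drift) [A]

bandwidth  = 1.0 / (2.0 * t_read)                    # effective noise bandwidth [Hz]
sigma_shot = math.sqrt(2.0 * q * i_max * bandwidth)  # shot-noise std dev at i_max [A]
sigma      = math.hypot(sigma_shot, sigma_other)     # total read noise [A]

# Neighbouring levels need a few sigma of separation to be told apart reliably.
levels = (i_max - i_min) / (6.0 * sigma)
print(f"total read noise ~ {sigma:.1e} A -> roughly {levels:.0f} distinguishable levels")
```

    Lengthening the read window or averaging many reads pushes the noise down and the level count up, but it stays a finite count of states, never a continuum.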

    That’s not continuous; it’s only quasi-continuous. It’s still cool, but it’s not outside the mathematical scope of the theorems used in the paper.

    And yes, continuity is not everything. You’re right about busy beaver numbers not being computable in principle. But this applies to neuromorphic computing just the same.

    Theoretically, if a continuous extension of the busy beaver numbers existed, then it should be possible for a Liquid State Machine Neural Net to approximate that function.

    But it doesn’t. No such extension can be meaningfully defined: if it could be computed, you could use it to solve the halting problem. That’s impossible for purely logical reasons, independently of what you use for computation (a brain, neuromorphic hardware, or anything else). And approximations would be incredibly slow, since the busy beaver function grows faster than any computable function.
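    (To make the halting-problem step concrete, here is a sketch of the standard reduction. The busy_beaver function is precisely the thing that cannot exist; run_one_step is a hypothetical single-step machine simulator, named only for illustration.)

```python
def busy_beaver(n: int) -> int:
    """The maximum number of steps any halting n-state Turing machine can make.
    Provably not computable; stubbed out here only to make the reduction visible."""
    raise NotImplementedError("no algorithm can compute this")

def halts(run_one_step, n_states: int) -> bool:
    """If busy_beaver were computable, halting would be decidable:
    run the machine for busy_beaver(n_states) steps; if it has not halted
    by then, it never will, by the very definition of the busy beaver bound."""
    bound = busy_beaver(n_states)        # <- the impossible step
    for _ in range(bound):
        if run_one_step():               # hypothetical: advances the machine one step,
            return True                  # returns True once it halts
    return False
```

    Since the halting problem is undecidable, no physical substrate, continuous or not, gets you a computable (or usefully approximable) busy beaver function.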


  • Not saying this is necessarily a problem, but the main author of the paper is also an executive manager of the journal that published it. You can find that information by clicking on “editorial board” on the journal’s webpage. Now, I assume he was not actually involved in editorial decisions about his own article, because that would be a conflict of interest, and they haven’t declared any. It’s not a secret and it’s easy to find on the webpage, but I think they could have made this fact a bit more prominent in the paper itself. Let’s wait and see how the larger scientific community reacts to this paper.


  • Therefore, wouldn’t it be strange to rule out something using our current math/physics systems?

    I get the thought, but math and physics are not the same. Math includes logic. When the authors of that paper make that argument, they don’t rely on our current understanding of physics. The theorems they use rely only on logic. They are true independently of how the physics and the computation work.

    Black holes, neutron stars, quasars and other funny things that can’t be explained exactly DO exist after all

    Yes, black holes inspired the paper. They do make the assumption that a theory of quantum gravity would explain them. That’s what most people want out of such a theory.


  • I love how in depth you went into this. And I agree with everything, except I’m not sure about neuromorphic computing.

    However, I think neuromorphic hardware is able to bypass this limitation. Continuous simultaneous processes interacting with each other are likely non-algorithmic.

    I worked in neuromorphic computing for a while as a student. I don’t claim to be an expert though; I was just a small cog in a big research machine. But at least our lab never aimed for continuous computation, even if the underlying physics is continuous. Instead, the long-term goal was to have something like five distinguishable states (instead of just two binary states). That’s enough for learning and potentially enough to make AI much faster, but it’s still discrete. That’s my first point: I don’t think anyone else is doing anything different.

    My second point is that no one could be doing something continuous, even in principle. Our brains don’t really, either. Even if changes in a memory cell (or neuron) were induced atom by atom, those would still be discrete steps. And even if truly continuous changes were possible, you still couldn’t read out that information because of thermal noise. The tiny changes in current, or whatever your observable is, would just drown in noise. Instead you would have to define discrete ranges for read-out.
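    (To illustrate the read-out point: even if the stored quantity were tuned “continuously”, reading it through noise forces you to snap it onto discrete ranges. A minimal sketch, with made-up levels and noise amplitude:)

```python
import random

# Map a noisy analog read-out onto one of a few discrete states.
# Level values and noise amplitude are made-up illustration numbers.
LEVELS     = [0.1, 0.3, 0.5, 0.7, 0.9]  # five nominal states (arbitrary units)
READ_NOISE = 0.04                       # std dev of thermal/read noise

def read_cell(true_value: float) -> int:
    """Simulate one noisy read and snap it to the nearest defined level."""
    measured = true_value + random.gauss(0.0, READ_NOISE)
    # Discretisation: any information finer than the level spacing is lost here.
    return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - measured))

# A cell programmed "continuously" to 0.62 still only ever reads back as state 2 or 3.
print([read_cell(0.62) for _ in range(10)])
```

    Whatever fine analog detail was written into the cell, after read-out you are back to a handful of discrete symbols.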

    Thirdly, could you explain what exactly that non-algorithmic component is that would be added, and how exactly it would be different from just noise and randomness? Because we work hard to avoid those. If it’s just randomness, our computers already have that. Every once in a while, a bit gets flipped by thermal noise or because it got hit by a cosmic particle. It happens so often that astronomers have to account for it when taking pictures and correct all the bad pixels.




  • lemonwood@lemmy.ml to Data Is Beautiful@lemmy.ml · Contradictions in the Bible

    I can play devil’s advocate too:

    #1 The Bible is not first and foremost a “historical documentary” in the modern sense. The very idea of a historical account striving for objective, unbiased reality is fairly recent historically, and the Bible is meant to be a religious text that’s trying to teach you something.

    Yes, people absolutely did write and read it as a historical account. You need to distinguish between the multiple authors, who did not sit in a writing room together, and the editors who collected the works. The reason multiple reports were collected was to get at the truth. Long lists of names and events were included to establish historical credibility.

    #2 The Biblical authors are aware there are contradictions.

    Just no. Some of the authors wouldn’t even have been aware of all the other authors.

    #3 The Bible contradicts itself intentionally. It’s an ancient Jewish way of teaching to have two rabbis take different stances, and argue publicly. Often, the truth of something is in the tension between two perspectives.

    Yes, but using contradictions intentionally as a teaching device applies to the Talmud (interpretation of the law), not to the Tanakh (biblical law). Contradictions in the Tanakh were seen as something that needed to be explained. And yes, some of them were explained, after the fact, as purposeful by theologians. But if we want to take a historically sound approach, we have to acknowledge that they are a collection from many oral sources separated by time and place. So it’s far more likely that these unconnected sources contradict each other precisely because no written account had existed until then.

    If contradictions in teaching had been a core part of Jewish theology beforehand, they would have continued in writing. There would be many Torahs. But the opposite happens: with the advent of the written word, correct word-for-word transmission of the written law immediately becomes absolutely central to the religion. So the conclusion is inevitable that the contradictions came first and the ideology to explain them had to follow after the fact.

    Oral traditions can be contradictory because contradictions are harder to notice. Once the oral tradition is frozen as words on paper, the contradictions become obvious, and ideology forms around them like a pearl forms around a speck of sand in an oyster, to protect the body of the teaching from damage.