• TranquilTurbulence@lemmy.zip · ↑6 · 10 days ago

      Maybe the devs were debating whether it’s possible for a simulated sentient intelligence to figure out it’s in a simulation. What if there was a bet, and the only way to prove the other dev wrong was to actually build the simulation and let it run its course? I mean, it’s just a quick little experiment about a single universe in 3D space with linear time.

    • MissJinx@lemmy.world · ↑2 ↓1 · 11 days ago

      Reality is amazing, but to value our blissful existence we have to go through a simulation of how horrible existence could be. I, for example, am incredibly happy in reality, but Taylor Swift is a one-eyed, armless Afghan orphan in reality… or just a McDonald’s employee in reality.

      • kora@lemmy.blahaj.zone · ↑2 · 11 days ago

        This comment reads like a person who keeps being pulled into previous lives and started hallucinating that they were some monkish writer.

        Are you ok?

  • randomdeadguy@lemmy.world · ↑26 · 11 days ago

    Creators don’t have to be all-knowing. Also, because believing this reality is a simulation does not change the rules we live by, there is no difference between the life of a sim-denier and that of a sim-believer. It’s not as if you’d be punished just for [redacted].

  • Asafum@feddit.nl · ↑22 · 11 days ago (edited)

    Depends on the structure of the simulation. If it’s general enough, they didn’t specifically plan for this capacity; it’s just the result of the inputs and constraints of the simulation. If anything, it would be beneficial to see what types of intelligence arise as an outcome.

  • explodicle@sh.itjust.works · ↑13 · 11 days ago

    It’s not like my Conway’s Game of Life creatures can ever escape their petri dish. I’m so zoomed out that I wouldn’t even notice if they were intelligent.
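
    A whole Life universe really is tiny, for what it’s worth. A minimal sketch of one generation step, using a set of live cells (the glider below is just the classic example pattern, and the coordinate convention is my own choice):

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.

    `live` is a set of (x, y) coordinates of live cells; the grid is
    unbounded, so nothing ever "escapes" -- there is no edge to reach.
    """
    # Count how many live neighbours every cell adjacent to a live cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
# After 4 generations the glider is the same shape shifted by (1, 1).
print(gen == {(x + 1, y + 1) for x, y in glider})  # True
```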

  • VoterFrog@lemmy.world · ↑12 · 10 days ago

    If we’re in a simulation, it’s probably a massive universe-spanning one. We’re just a blip, both on the spatial scale of the universe and within its history. In that case, we’re not important enough for the simulation’s creator to even care to adjust our capabilities at all. They’re not watching us. We’re not the point of the simulation.

  • Skua@kbin.earth · ↑12 · 11 days ago

    Maybe they’re testing to see if and how we prove we’re in a simulation as part of figuring out if they are themselves in one

    Maybe they’re re-creating the circumstances of their own world to test theories that they can apply in the real world, and since they can ponder whether or not they’re in a simulation, we have to be able to as well, or we’d act too differently.

    Maybe it’s a total accident. They’re actually studying something over in Andromeda and we’re just a funny accident created as a byproduct of the rules of the simulation

  • missingno@fedia.io · ↑9 · 11 days ago

    If we weren’t capable of higher reasoning to ask this kind of question, it wouldn’t be a very good simulation, would it?

  • AbouBenAdhem@lemmy.world · ↑9 ↓1 · 11 days ago (edited)

    So instead of a simulation, maybe we’re living inside of some other type of thing we’re hard-wired to be unable to even think of—and maybe “simulation” is the idea we’re hard-wired to replace it with.

    • Andy@slrpnk.net · ↑3 · 11 days ago

      I like this observation a lot. Because I was going to say that if we couldn’t conceive of a simulation, we’d probably just speculate about the closest thing we could imagine.

    • lath@lemmy.world · ↑2 · 11 days ago

      Replace simulation with a book where only a framework is defined and the plot is built within the set rules.

    • Ogmios@sh.itjust.works · ↑2 ↓2 · 11 days ago (edited)

      Like a limited ‘fake’ world edifice structured through legal fictions like money, debt, and contracts, which attempts, through stories like The Matrix, to assert that it is significantly more powerful and pervasive than it actually is, to instill a sense of hopelessness in anyone who even considers not submitting to it.

  • Captain Poofter@lemmy.world · ↑8 ↓1 · 11 days ago

    My best guess: The thought processes required to ponder the possibility of a simulation are too important to the goal of the simulation itself to disable.

  • xavier666@lemm.ee · ↑4 · 10 days ago

    If I made a simulation, I would be interested in how the simulated agents interact with each other. I would only set some very basic restrictions on them (don’t fall out of bounds, maintain self-preservation). I would be very interested in what kinds of questions they come up with, what kinds of structures they build through cooperation, and their overall behavior (assuming I’m interested in the agents in the first place).

    Of course, if the simulation is not good enough, I’ll just close the simulation, change some parameters and restart the sim using an earlier snapshot.
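
    Roughly what I mean, as a toy sketch; every rule and number here (the bounds, the energy values, the thresholds) is invented purely for illustration:

```python
import copy
import random

random.seed(42)          # reproducible toy run
WORLD = 100              # a 1-D world with positions 0..99

def step(agents):
    """One tick: agents wander while obeying the two basic restrictions."""
    for a in agents:
        # Restriction 1: don't fall out of bounds.
        a["pos"] = max(0, min(WORLD - 1, a["pos"] + random.choice((-1, 1))))
        a["energy"] -= 1
        # Restriction 2: maintain self-preservation (refuel before anything else).
        if a["energy"] < 20:
            a["energy"] += 10
    return [a for a in agents if a["energy"] > 0]  # drop any dead agents

agents = [{"pos": random.randrange(WORLD), "energy": 50} for _ in range(30)]
snapshot = copy.deepcopy(agents)

for tick in range(1000):
    agents = step(agents)
    if tick % 100 == 0:
        snapshot = copy.deepcopy(agents)   # periodic checkpoint
    if len(agents) < 5:                    # run went bad: restart from a snapshot
        agents = copy.deepcopy(snapshot)

print(len(agents), "agents survived")  # 30 agents survived
```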

    Source: I’ve worked with simulations.

    • dubyakay@lemmy.ca · ↑2 · 10 days ago

      maintain self-preservation

      The simulator running us clearly did not define this restriction.

      • xavier666@lemm.ee · ↑1 · 10 days ago

        It’s for the really dumb stuff. It’s more like “don’t fall from the edge of a tall building”, not “don’t create a market scenario that will lead to the downfall of human civilization”.

  • xmunk@sh.itjust.works · ↑3 · 11 days ago

    It’s probably a bug.

    Fuck, if we’re in a simulation I’d be most amazed that nobody has managed to trigger a null pointer exception to crash the whole thing yet.

    Oh, also, infinite recursion… and we got so close with https://youtu.be/xz6OGVCdov8

  • pcouy@lemmy.pierre-couy.fr · ↑4 ↓1 · 11 days ago

    You’ve probably read about language model AIs basically being uncontrollable black boxes even to the very people who invented them.

    When OpenAI wants to restrict ChatGPT from saying certain things, they can fine-tune the model to reduce the likelihood that it will output forbidden words or sentences, but this offers no guarantee that the model will actually stop saying forbidden things.

    The only way of actually preventing such an agent from saying something is to check the output after it is generated, and not send it to the user if it triggers a content filter.
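
    In sketch form, the guardrail lives entirely outside the model; `generate` and the blocklist patterns here are stand-ins I made up, not any real API:

```python
import re

# Output-side content filter: the model is a black box, so the only hard
# guarantee is checking its text after generation. Patterns are placeholders.
BLOCKLIST = [re.compile(p, re.IGNORECASE) for p in (r"forbidden", r"secret recipe")]
REFUSAL = "Sorry, I can't help with that."

def generate(prompt: str) -> str:
    # Stand-in for the actual (uncontrollable) language model.
    return "Here is the forbidden answer to: " + prompt

def respond(prompt: str) -> str:
    text = generate(prompt)
    if any(p.search(text) for p in BLOCKLIST):
        return REFUSAL          # filtered output is never sent to the user
    return text

print(respond("anything"))  # Sorry, I can't help with that.
```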

    My point is that AI researchers found a way to simulate some kind of artificial brains, from which some “intelligence” emerges in a way that these same researchers are far from deeply understanding.

    If we live in a simulation, my guess is that life was not manually designed by the simulation’s creators, but rather that it emerged from the simulation’s rules (what we Sims call physics), just as people studying the origins of life mostly hypothesize. If that’s the case, the creators are probably as clueless about the inner details of our consciousness as we are about the inner details of LLMs.