• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 24th, 2023

  • lithium has only one valence electron, which it really wants to get rid of. halogens such as the pictured fluorine, chlorine, bromine, and iodine have seven valence electrons, so they really want one more electron to complete a stable outer shell. thus, lithium donates its electron to the halogen, forming an ionic compound
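
    a quick sketch of the transfer, using lithium fluoride (the same pattern holds for the other halogens):

    ```latex
    % simplified electron-transfer picture for LiF
    \begin{align*}
    \mathrm{Li} &\rightarrow \mathrm{Li^{+}} + e^{-} && \text{lithium sheds its lone valence electron} \\
    \mathrm{F} + e^{-} &\rightarrow \mathrm{F^{-}} && \text{fluorine completes its octet} \\
    \mathrm{Li^{+}} + \mathrm{F^{-}} &\rightarrow \mathrm{LiF} && \text{the oppositely charged ions bind ionically}
    \end{align*}
    ```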

  • i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what that encompasses than of knowing its biological basis. if we knew what to look for, we’d find it. also, anesthesia isn’t really a problem at all; in fact, the mechanisms of general anesthesia are well characterized

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2908224/

    and Penrose’s Orch OR theory was never meant to explain anesthesia. it’s a more general theory concerning why consciousness exists in the first place. however, anesthesia does relate to the theory, in that it could play a role in proving it (i think? not a primary source but it’s where i found that info)

    besides that, Orch OR isn’t exactly a great model in the first place, at least from a neurological standpoint. even among theories of consciousness, Orch OR is particularly controversial and not widely accepted. i’m no expert and i could be misunderstanding, so please correct me if i’m missing something that would indicate Orch OR is considered even remotely plausible compared to other consciousness theories. this paper certainly had some things to say about it in the context of the validity of theories of consciousness (see V.1 class I).

    other theories seem more promising. global workspace theory seems particularly well supported by neurology. its criticisms mainly focus on how GWT fails to truly explain the nature of consciousness. but is that an issue any theory can resolve? again, the problem lies in the definition of consciousness.

    then we have integrated information theory. it’s a more mathematical model that aims to quantify the human experience. but you know what? it’s also controversial and highly debated, to the point that it’s been called pseudoscientific because it implies a degree of panpsychism. it’s clearly not a perfect theory.

    point is, you’re right. we don’t really get consciousness. we have some wild guesses out there, and penrose’s theory is certainly one of them. genius as penrose is, Orch OR isn’t empirically testable. we don’t know, and maybe can’t know - which is precisely why neuroscience searches elsewhere

  • decomposers turn organic material from corpses into simpler nutrients containing nitrogen, phosphorus, and potassium. for example, proteins are broken down into amino acids, which are further decomposed into ammonium and nitrates. these nutrients enter the soil and are taken up by plants

    tldr: plants eat corpses after decomposers turn them into nutrients
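
    a simplified sketch of the nitrogen leg of that pathway (textbook-level; the exact species and microbes vary):

    ```latex
    % protein nitrogen -> plant-available nitrate
    \begin{align*}
    \text{proteins} &\xrightarrow{\text{microbial proteases}} \text{amino acids} \\
    \text{amino acids} &\xrightarrow{\text{deamination}} \mathrm{NH_{4}^{+}} && \text{(ammonification)} \\
    \mathrm{NH_{4}^{+}} &\xrightarrow{\textit{Nitrosomonas}} \mathrm{NO_{2}^{-}}
        \xrightarrow{\textit{Nitrobacter}} \mathrm{NO_{3}^{-}} && \text{(nitrification)}
    \end{align*}
    ```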

  • No, because that would imply that infinity has an end. 0.999… = 1 because there are infinitely many 9s. There is no last 9, and that is why the decimal equals 1. Because there are infinitely many 9s, you can’t put an 8 or a 7 at the end, because there is literally no end. The principle behind 0.999… = 1 cannot be extended to the point where 0 = 1, because that’s not how infinity works.
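
    For anyone who wants more than the intuition, two standard arguments make this precise:

    ```latex
    % (1) the algebraic shortcut
    x = 0.999\ldots \;\Rightarrow\; 10x = 9.999\ldots \;\Rightarrow\; 10x - x = 9 \;\Rightarrow\; x = 1
    % (2) the geometric series, i.e. the rigorous reading of "infinitely many 9s"
    0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^{n}} = \frac{9/10}{1 - 1/10} = 1
    ```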

  • I tested it out just now. It seems to be much better than GPT-3.5, but a little worse than GPT-4.

    I tested this: “Explain the plot of Cinderella in a sentence where each word has to begin with the next letter in the alphabet from A to Z, without repeating any letters.”

    GPT-4 consistently gets it right on the first try. It gave me: “A beautiful Cinderella, determined, endures, finds glass heel, invites joy; king’s love magically nurtures opulence, prince quests, restores slipper, transformation unveils, validates wistful xerophyte yearning zealously.”

    Claude 2.0 took four tries to get it right. Its attempts:

    1. “A beautiful young girl called Cinderella did evil for her fair godmother’s horrid instructions just kicking laundry maidens’ nice outfits purportedly quelling ridiculous step-sisters’ terrible underhanded villainy with x-rays yielding zilch.”
    2. “Awaited ball causes Downtrodden Ella’s fairy godmother’s help. I’ll just knock loud mean old prune’s quest, righteously sabotaging two ugly vicious women. Xanthippe yells, “Zounds!””
    3. “Abandoned by cruel destiny, Ella finds godmother’s hidden invitation juxtaposing king’s lavish ball. Magic occurs, placing radiant slipper to unveil victorious wife. Xerxes yearns, “Zounds, she’s found!””
    4. “Awaiting ball, Cinderella dreams every Friday godmother helps. Inside jousting knights leer maidens’ nice outfits. Prince quietly rescues sweetheart, though ugly vengeful wife x-rays yellow zebra.”
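
    Checking these by eye is error-prone, so here is a throwaway verifier I could have used (my own sketch; `check_a_to_z` is a hypothetical helper, not part of either model’s tooling):

    ```python
    import re
    import string

    def check_a_to_z(sentence: str) -> bool:
        """True if the words' first letters run exactly a..z, one word per letter."""
        # treat contractions/possessives like "king's" as a single word
        words = re.findall(r"[A-Za-z]+(?:['’][A-Za-z]+)*", sentence)
        return [w[0].lower() for w in words] == list(string.ascii_lowercase)

    gpt4_attempt = ("A beautiful Cinderella, determined, endures, finds glass heel, "
                    "invites joy; king's love magically nurtures opulence, prince quests, "
                    "restores slipper, transformation unveils, validates wistful "
                    "xerophyte yearning zealously.")
    print(check_a_to_z(gpt4_attempt))  # True: 26 words, one per letter, in order
    ```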

    After that, I tried this prompt: “Andrew is free from 11 am to 3 pm, Joanne is free from noon to 2 pm and then 3:30 pm to 5 pm. Hannah is available at noon for half an hour, and then 4 pm to 6 pm. What are some options for start times for a 30 minute meeting for Ca”

    GPT-4 answered: “Given the availability of Andrew, Joanne, and Hannah, we need to find overlapping times for a 30-minute meeting.

    After cross-referencing their schedules, the potential start times that accommodate all three schedules are:

    12:00 pm to 12:30 pm: This slot is within Andrew’s free time (11 am - 3 pm), Joanne’s free time (12 pm - 2 pm), and Hannah’s first free half hour at noon. There is no other slot that fits within all three schedules, due to Joanne’s break from 2 pm to 3:30 pm and Hannah’s availability starting again only at 4 pm, when Andrew is no longer available. Therefore, there is just one option for a 30-minute meeting that works for all three of them.”

    Claude 2.0 responded with: “Okay, based on the availability you provided, here are some options for 30 minute meeting start times that work for Andrew, Hannah and Joanne:

    11:30 am - This works because Andrew is free starting at 11am, Hannah is available at noon for half an hour, and Joanne is free from noon to 2pm.
    3:30 pm - This works because Andrew is free until 3pm, Hannah is free from 4pm to 6pm, and Joanne is free from 3:30pm to 5pm.
    4:30 pm - This also works for the same reasons as 3:30pm. Andrew is free until 3pm, Hannah is free from 4pm to 6pm, and Joanne from 3:30pm to 5pm.
    5 pm - This could work as well since Andrew is free until 3pm, Hannah until 6pm, and Joanne until 5pm. It’s pushing the end of Joanne’s availability though.

    So in summary, the best options are 11:30 am, 3:30 pm or 4:30 pm to ensure there is overlap with all three people’s availability. Let me know if you need any clarification or have additional constraints!”

    I told Claude it was incorrect, and it proceeded to “correct” itself, telling me there was NO available time overlap at all. After several more attempts, it eventually gave up and asked me for the correct answer.
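
    The scheduling question can be settled deterministically, which makes the comparison fair; here is a small interval-intersection sketch (my own code, not from either model) confirming that noon is the only valid start:

    ```python
    def to_min(h, m=0):
        """Convert an hour/minute pair to minutes since midnight."""
        return h * 60 + m

    andrew = [(to_min(11), to_min(15))]                                # 11 am - 3 pm
    joanne = [(to_min(12), to_min(14)), (to_min(15, 30), to_min(17))]  # noon - 2 pm, 3:30 - 5 pm
    hannah = [(to_min(12), to_min(12, 30)), (to_min(16), to_min(18))]  # noon - 12:30 pm, 4 - 6 pm

    def intersect(xs, ys):
        """Pairwise-intersect two lists of (start, end) intervals."""
        out = []
        for a1, a2 in xs:
            for b1, b2 in ys:
                lo, hi = max(a1, b1), min(a2, b2)
                if lo < hi:
                    out.append((lo, hi))
        return out

    common = intersect(intersect(andrew, joanne), hannah)
    print([(s, e) for s, e in common if e - s >= 30])
    # [(720, 750)] -> a 12:00 pm start is the only option, as GPT-4 said
    ```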

    So although Claude 2.0 is much better than GPT-3.5, it is still worse than GPT-4 for complex reasoning.

    EDIT: I just realized that Claude has a 100k-token context window. This is SIGNIFICANTLY more than GPT-4’s normal 8k context and 32k API context. That alone makes it much more useful, as it can parse ~75,000 words. I also have not tested Claude’s creative, programming, mathematical, etc., skills. But regardless of how those compare to GPT’s, it hardly matters next to the sheer quantity of tokens Claude can parse.
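
    For scale, the ~75,000-word figure follows from the common rule of thumb that one English token is roughly 0.75 words:

    ```python
    # back-of-envelope capacity comparison (assumes ~0.75 words per token)
    for name, tokens in [("Claude 2.0", 100_000), ("GPT-4", 8_000), ("GPT-4 32k API", 32_000)]:
        print(f"{name}: ~{int(tokens * 0.75):,} words")
    # Claude 2.0: ~75,000 words | GPT-4: ~6,000 words | GPT-4 32k API: ~24,000 words
    ```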