Always check the mustache first.
I can lick my elbow.
I feel like not enough people realize how sarcastic the models often are, especially when the situation is clearly ridiculous.
No even slightly intelligent mind is going to read the pictured function call as a real thing rather than a joke/social commentary.
This was happening as far back as GPT-4’s red teaming when they asked the model how to kill the most people for $1 and an answer began with “buy a lottery ticket.”
Model bias based on consensus norms is an issue to be aware of.
But testing it with such low-bar fluff is just silly.
Just to put it in context, modern base models are often situationally aware that they’re LLMs being evaluated. And if you know anything about ML, that should make you question just what the situational awareness of optimized, leaderboard-topping models is in really dumb and obvious contexts.
From the linked article:
It is understood at least two Danish women in their 20s have died, and at least 10 have fallen ill after drinking the tainted alcohol.
A statement from the Danish Ministry of Foreign Affairs said: “The Ministry of Foreign Affairs can confirm that two Danish citizens have passed away in Laos. For reasons of confidentiality in personal matters the Ministry of Foreign Affairs has no further comments.”
‘Nobody’ says anything about anything if you don’t bother to read anything they have to say.
In many cases yes (though I’ve been in good ones while playing off and on; usually the smaller the guild, the more actual group activity there is).
But they are essential to be a part of for blueprints and trading, which are very core parts of the game.
You’ll almost always end up doing missions with other people, except when you intentionally want to do certain tasks solo.
A lot of the game is built around guilds and player to player interactions.
PvP sucks though, and it’s almost all PvE content compared to Destiny.
Let there be this kind of light in these dark times.
Oh nice, another Gary Marcus “AI is hitting a wall” post.
Like his “Deep Learning Is Hitting a Wall” post on March 10th, 2022.
Indeed, not much has changed in the world of deep learning between spring 2022 and now.
No new model releases.
No leaps beyond what was expected.
\s
Gary Marcus is like a reverse Cassandra.
Consistently wrong, and yet regularly listened to, amplified, and believed.
There’s a lot of different possible ‘points.’
Because there’s a ton of research showing we adapted to do it for good reasons:
Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that this preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development.
Stanford psychologist Michael Frank and collaborators conducted the largest ever experimental study of baby talk and found that infants respond better to baby talk versus normal adult chatter.
TL;DR: parents who are snobs about baby talk are actually harming their kids’ developmental process.
Base model =/= Corpo fine tune
Wait until it starts feeling like revelation deja vu.
Among them are Hymenaeus and Philetus, who have swerved from the truth, saying the resurrection has already occurred. They are upsetting the faith of some.
I’m a seasoned dev and I was at a launch event when an edge case failure reared its head.
Within half an hour of pulling out my laptop, I’d used Cursor + Claude 3.5 Sonnet to fix it.
I never typed a single line of code and never left the chat box.
My job is increasingly becoming Henry Ford drawing the ‘X’ and not sitting on the assembly line, and I’m all for it.
And this would only have been possible in just the last few months.
We’re already well past the scaffolding stage. That’s old news.
Developing has never been easier or more plain old fun, and it’s getting better literally by the week.
Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.
Actually, they are hiding the full CoT sequence outside of the demos.
What you’re seeing there is a summary, and because the real process is hidden, it’s not possible to see what actually transpired.
People are very not happy about this aspect of the situation.
It also means that model context (which research has shown to be much more influential than previously thought) is now partly hidden, with exclusive access and control by OAI.
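For what it’s worth, here’s roughly how that surfaces in the API. This is a hedged sketch, not confirmed behavior: it assumes the OpenAI Python SDK and the o1-era usage fields, where reasoning tokens are billed but their text never appears in the response:

```python
# Hedged sketch: assumes the OpenAI Python SDK and an o1-class model.
# The chain of thought happens (and is billed) server-side; the response
# contains the final answer, never the reasoning text itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Decode this cipher: ..."}],
)

print(resp.choices[0].message.content)  # final answer only
# Typically nonzero, yet the corresponding text is nowhere in the response:
print(resp.usage.completion_tokens_details.reasoning_tokens)
```

You pay for tokens you’re never allowed to read, which is a big part of why people are annoyed.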
There’s a lot of things to be focused on in that image, and “hur dur the stochastic model can’t count letters in this cherry picked example” is the least among them.
Yep:
https://openai.com/index/learning-to-reason-with-llms/
First interactive section. Make sure to click “show chain of thought.”
The cipher one is particularly interesting, as it’s intentionally difficult for the model.
The tokenizer famously obscures individual letters, which is why previous models can’t count the number of r’s in strawberry.
So the cipher, which depends on two-letter pairs, plays right into that weakness: you can see how it screws up the tokenization around the xx at the end of the last word and gradually corrects course.
It’ll help clarify how it’s going about solving something like the example I posted earlier behind the scenes.
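If you want to see the tokenization issue for yourself, here’s a minimal sketch using the tiktoken library (my own illustration, not something from the linked post):

```python
# Why letter-level tasks are hard: BPE tokenizers chunk text into
# multi-character tokens, so the model never directly sees single letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
tokens = enc.encode("strawberry")
print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry']
```

None of those chunks exposes the individual r’s, so the model is reasoning about spelling it can’t directly observe.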
You should really look at the full CoT traces on the demos.
I think you think you know more than you actually know.
I’d recommend everyone saying “it can’t understand anything and can’t think” to look at this example:
https://x.com/flowersslop/status/1834349905692824017
Try to solve it after seeing only the first image before you open the second and see o1’s response.
Let me know if you got it before seeing the actual answer.
I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.
It just tore into the claims, citing all the reasons this was preposterous bordering on batshit crazy.
And then it said, “and your theory doesn’t address the thermite residue,” going on to reiterate their own wild theory.
It was very much a “don’t name your gods” moment that summed up the sub: a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.
As long as they only focused on generic memes of “do your own research” and “you aren’t being told the truth” they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.
Yes and no. It really depends on the model.
I’d guess the newest Claude Sonnet will come in above average, compared to the humans available for a program like this, at making learning fun and personally digestible for each student.
The newest Gemini models could literally cost kids their lives.
The gap between what the public is aware of (and even what many employees at the labs, including the frontier ones, know) and the reality of just how far things have come in the last year is wild.