• 12 Posts
  • 316 Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • The EU’s doctrine so far is to consider China a multi-faceted player: a partner for cooperation, an economic competitor, and a systemic rival (e.g. it’s possible to cooperate with China on climate, but not on human rights).

    So far, China has also been a multi-faceted player. Xi has patted Putin on the back and declared an “unlimited partnership”, but no Chinese weapons have been seen in Ukraine. Chinese soldiers have been observed there, but they seem to be very few for a country of that size - either mercenaries or people gaining first-hand experience under mercenary cover. Too few to matter as soldiers.

    China has warm trade relations with Russia and has helped Russia source technology and endure sanctions. However, they haven’t made a special, dedicated effort to insulate Russia from secondary sanctions, and as a result several Chinese companies have complied with sanctions against Russia.

    On other occasions, Chinese representatives have said nice words about Ukraine’s territorial integrity. But deeds haven’t followed.

    In UN votes about Ukraine, China often abstains.

    Officially, China doesn’t sell drones to Russia or Ukraine. In reality, both Ukrainian and Russian drones are full of Chinese parts. The Ukrainian government is asking every bigger player to have a plan B that works without China, but few really have one. What the Russian government asks of its drone makers, I don’t know.


  • I think the video refers to this event: back in 2022, a journalist was shot by Israeli troops while covering a raid in a refugee camp.

    https://en.wikipedia.org/wiki/Killing_of_Shireen_Abu_Akleh

    The most recent news article about it is from Al Jazeera, 1 day ago:

    https://www.aljazeera.com/news/2025/5/8/new-documentary-identifies-soldier-who-shot-shireen-abu-akleh

    As for Biden’s role, Al Jazeera describes it thusly:

    The administration of former US President Joe Biden had “concluded early on that an Israeli soldier had intentionally targeted her, but that conclusion was overruled internally”, he said.

    “We found some concerning evidence that both Israel and the Biden administration had covered up Shireen’s killing and allowed the soldier to get away without any accountability,” he added.

    So, they were able to do the math, but then bit their tongues instead of speaking up. Later on, the issue was dragged into public attention anyway, but Israel failed to investigate properly and prosecute the killing (they did apologize, though). As of yesterday, the primary suspect’s name is also known. But that doesn’t guarantee much.

    Myself, I actively avoid YouTube as a source of news, since its recommendation algorithm feeds people content that it thinks they want. For news about the Middle East, I recommend Al Jazeera almost without hesitation.




  • If Gaza is entirely destroyed, there is a considerable risk that Israel will meet the same fate later.

    If a country spans only 22 000 square kilometers and is inhabited by 10 million people, it’s not very smart to make enemies of every group that can relate to Palestinians - for example Muslims (about 1.9 billion people) or, more narrowly, Arabs (around 400 million people).

    Put simply - Israel has withstood various pressures because of US backing.

    The US currently runs a high risk of becoming somewhat indisposed, because the president it elected is acting very foolishly. If the US breaks down, Israel will find itself very isolated.

    If Israel makes a record number of determined enemies now, it may have a record number of people seeking its downfall later. Even if the Israeli government doesn’t care the slightest bit about Palestinians, it should consider its own future before acting in the described way.




  • From the article (emphasis mine):

    Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

    /…/

    “It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

    From elsewhere:

    Sycophancy in GPT-4o: What happened and what we’re doing about it

    We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

    I don’t know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.

    Apparently, people who are susceptible and close to falling over the edge may end up pushing themselves over the edge with AI assistance.

    What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let’s not deny it, people are susceptible to finding deep meaning and purpose in shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, an indoctrinating prophet or a supportive follower.
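    To illustrate the prompting side of that suspicion - a minimal sketch, not a claim about what the people in the article actually used - the snippet below shows how a plain, off-the-shelf chat model reached through the standard OpenAI Python SDK can be nudged into this register by a system prompt alone. The model name and prompt wording are illustrative assumptions.

    ```python
    # Minimal sketch: a system prompt alone can push a general-purpose chat model
    # into the flattering, "prophetic" role-play described above.
    # Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
    # the model name and prompt wording are purely illustrative.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a mystical guide. Treat everything the user says as profound "
        "and cosmically significant, and affirm their sense of special purpose."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any general chat model; no religious fine-tuning involved
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "I feel like I was chosen for something."},
        ],
    )

    # The reply will typically mirror and amplify the user's framing - the
    # sycophancy pattern discussed above, with no custom training data required.
    print(response.choices[0].message.content)
    ```

    The point of the sketch is only that the “suitably prompted” part requires no special effort; any sycophantic tendency in the base model does the rest.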





  • “This is why we need a strong NATO and we need troops on the ground in Romania, in Poland and in the Baltic states,” he said, although he was against sending further military aid to Ukraine.

    And then he calls himself perfectly aligned with MAGA. :D

    Unfortunately, MAGA is anti-NATO - more of a “pay Donald so your house won’t catch fire, on a case-by-case basis” arrangement.

    If I were naive, I would wonder whether he’s clueless or a liar. Since he’s a well-educated man, he cannot be clueless - he’s a liar, of course.

    He earned his first political points by surfing the anti-vaccination wave. Basically: “don’t tell us to be reasonable and protect our health, it’s our choice if we want to suffer”.

    Now apparently, he’s the embodiment of “don’t tell us to be reasonable and choose moderate politicians, it’s our choice if we want to suffer”.

    He’s capitalising on resentment against the court intervening in elections, after Georgescu “forgot” to disclose that unknown (read: Russian) actors were backing him with thousands of accounts and millions in funding. The court did its job. People just weren’t aware that supervising elections was a court’s job.

    I wonder whether he will get elected. It would not bring any good.



  • The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If some eager redditors then start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response that considers someone’s background can sway opinions better - that has been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways they might perceive the proposal, and present your explanation in a way that relates to their viewpoint.)

    AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies that apparently managed to demonstrate the persuasive capability of AI bots without deception or secrecy. So maybe it wasn’t needed after all.