Zagorath
Formerly /u/Zagorath on the alien site.
- 67 Posts
- 2.34K Comments
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
1 · 1 day ago
> no age can go on the internet.
I don’t think anyone has ever suggested anything like that.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
1 · 2 days ago
> it’s way too obvious that this law ain’t gonna achieve its stated aim
Absolutely. See my much longer comment elsewhere in the thread for all the real problems with this bill. We don’t need conspiracy theories. Hanlon’s razor very much applies here. It’s incompetence, not malice.
However, I think we can look at the worst part of this Bill—the nature of its passage through Parliament—for a clue as to its underlying purpose. It passed in just a week, right before Christmas last year, but didn’t actually come into effect until yesterday. The goal was good PR. I suspect not rattling cages with the big social media companies was part of it too. They wanted to look like they were doing something to protect kids, and hopefully win the election off the back of it (not that they needed much help with that, with how incompetent the LNP were), but they didn’t want to put up the fight that would be necessary to force the social media companies into actually making their algorithms less harmful…to children and adults. It’s lazy, it’s cowardly, it won’t work. But it’s not a secret ploy to spy on you.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
31 · 2 days ago
> with Labor getting over half the lower house seats from about a third of the votes
Yikes. This is some really dangerous misinformation. Labor received 55% of the votes. Because we use an actual democratic system, not the FPTP farce that America and the UK have. You cannot compare first preferences in IRV to votes in FPTP.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
9 · 2 days ago
> Step one is stuff like this, require id to verify your age
Right, but the law doesn’t do that. In fact it was specifically forbidden from doing that. Here’s the full text of the Bill. Section 63DB specifically says:
(1) A provider of an age-restricted social media platform must not:
(a) collect government-issued identification material; …
(2) Subsection (1) does not apply if:
(a) the provider provides alternative means…for an individual to assure the provider that the individual is not an age-restricted user

In plain language: you can only accept ID to verify age if you also offer some other method of verifying age instead.
So far, it looks like most sites are relying on data they already have: the age of your account, the type of content you post, etc. I have not heard of a single adult being hit with a request to verify their age anywhere other than Discord, and even on Discord, it’s only when trying to view NSFW-tagged channels. (Which is an 18+ thing, completely unrelated to this law, which is 16+ for all social media. And that’s despite Discord having been officially classified not as social media but as a chat app, to which the law does not apply.)
It also says, in 63F:
(1) If an entity:
(a) holds personal information about an individual that was collected for the purpose of, or for purposes including the purpose of, taking reasonable steps to prevent age-restricted users having accounts with an age-restricted social media platform; and
(b) uses or discloses the information otherwise than:
(i) for the purpose of determining whether or not the individual is an age-restricted user; or …
(iii) with the consent of the individual, which must be in accordance with subsection (2);
the use or disclosure of the information is taken to be:
(c) an interference with the privacy of the individual for the purposes of the Privacy Act 1988; …
(2) For the purposes of subparagraph (1)(b)(iii):
(a) the consent must be:
[(i–v) voluntary, informed, current, specific, and unambiguous]; and
(b) the individual must be able to withdraw the consent in a manner that is easily accessible to the individual.
(3) If an entity holds personal information about an individual that was collected for the purpose of, or for purposes including the purpose of, taking reasonable steps to prevent age-restricted users having accounts with an age-restricted social media platform, then:
(a) the entity must destroy the information after using or disclosing it for the purposes for which it was collected
In other words, whatever information you collect to do the age verification must not be stored, unless you already hold it, with the user’s consent, for some other purpose.
It would not have been hard to just not include that part of the law. Some privacy advocates would have spoken up about it, but the general public would have probably brushed it off. No, they included that because this isn’t about information harvesting. It’s a misguided but genuine attempt to protect kids. And, if you’re looking for a more cynical spin on it, it’s to win some good PR with people for being able to say they’re protecting kids, while also not doing anything that would substantially hurt big tech’s bottom line…like regulating the algorithms themselves.
But again, you mentioned the US government. What does that have to do with this? This is a law passed in Australia, by the Australian government: an entirely different country, and one with an actually functioning government and legislature.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
231 · 2 days ago
First of all, what does the US government have to do with this?
Second, I made quite a detailed comment. Which bits do you disagree with and why?
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
2 · 2 days ago
> Most people are viewing this as a good way to prevent misinformation brainwashing
If only the government had actually made that the law. Crack down on the harmful algorithms that commercial social media use, instead of this shit.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
3 · 2 days ago
My server just took a vote and changed our one NSFW channel to no longer be marked NSFW. All we used that channel for was posting slightly sex-themed memes anyway.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
3 · 2 days ago
> I personally see zero downsides to this
I would encourage you to read my comment in another thread. There’s the beginning of a good idea in this legislation, but nearly everything about how it’s actually done is awful.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
101 · 2 days ago
> No kids getting fined or arrested for using VPNs or buying accounts off others
It’s actually explicitly not going to do that. The social media companies are the only ones with any legal burden here. That’s the intent, and you don’t need to go into cooker nonsense to justify it. It’s no different from how a harm reductionist approach to drugs involves targeting dealers, not people buying for personal use.
Zagorath@aussie.zone to Technology@lemmy.world • Australia begins enforcing world-first teen social media ban (English)
372 · 2 days ago
> What do you guys think about this? I think its a big positive.
It’s not. But not for the reason you say.
> I get why they do it. Its to make people upload identification documents
This is just some conspiracy theory nonsense. The law specifically says that photo ID cannot be the only way users can verify themselves. And it also says that any uploaded documents must not be used for any other purpose. No, the reason behind the law is exactly what they say it is: to protect kids. They’re just really bad at their job and don’t understand the ways this law will not accomplish that goal.
I’ll repost some of my comments from elsewhere:
The ultimate goal is a good one: keep kids safe from dangerous social media algorithms. But as for the method used to arrive at it…the Government did the wrong thing at pretty much every opportunity it possibly could.
Step 1: the government should have considered regulating the actual algorithms. We know that Facebook has commissioned internal studies which told them certain features of their algorithm were harmful, and they decided to keep it that way because it increased stickiness a little bit. Regulate the use of harmful algorithms and you fix this not just for children, but for everyone.
Step 2: if we’ve decided age verification must be done, it should be done in a way that preserves as much privacy and exposes people to as little risk as possible. The best method would be laws around parental controls: require operating systems to support robust parental controls, including an API that web browsers and applications can access; require social media sites and apps to use that API; and require parents to set up their children’s devices correctly.
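To make the parental-controls idea concrete, here’s a purely hypothetical sketch of the shape such an OS-level API might take. No such API exists today; every name below is invented for illustration. The point is that the app asks the device a yes/no question, rather than collecting ID itself:

```python
# Hypothetical sketch only: imagines the OS-level parental-controls API the
# comment proposes. All names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParentalControls:
    """Settings a parent configures once, at the OS level."""
    birth_year: Optional[int] = None    # optional; coarse age only, no ID
    social_media_allowed: bool = True

def is_user_age_restricted(controls: ParentalControls, current_year: int,
                           minimum_age: int = 16) -> bool:
    """What a social media app would ask the OS, instead of collecting ID."""
    if not controls.social_media_allowed:
        return True
    if controls.birth_year is None:
        return False  # no parental controls set: treat as an adult's device
    return (current_year - controls.birth_year) < minimum_age

# A device set up for a 14-year-old in 2025 reports restricted:
kid = ParentalControls(birth_year=2011)
print(is_user_age_restricted(kid, 2025))  # True
```

The privacy win is that the site never learns a birthdate or sees a document; it only receives a boolean, and the burden of correct setup sits with the parent and the OS vendor.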
Step 3: if we really, really do insist on doing it by requiring each and every site do its own age verification, require it be done in privacy-preserving ways. There are secure methods called “zero-knowledge proofs” that could be used, if the government supported it. Or they could mandate that age verification is done using blinded digital signatures. This way, at least when you upload your photo or ID to get your age verified, the site doesn’t actually get to know who you are, and the people doing the age verification don’t get to know which sites you’re accessing.
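To illustrate the blinded-signature idea, here is a toy Chaum-style RSA blind signature with textbook-small numbers. This is absolutely not production cryptography (real deployments would use modern, audited schemes); it only demonstrates the unlinkability property: the age verifier signs a token without ever seeing it, so it cannot later connect the token a site receives back to the person it verified.

```python
# Toy Chaum-style RSA blind signature. NOT real cryptography: the key is
# textbook-small (p=61, q=53) and the "random" blinding factor is fixed.

# Verifier's RSA key: public (n, e), private d
n, e, d = 3233, 17, 2753

# 1. User creates an "over 16" token and blinds it with factor r
token = 42                    # stand-in for a hashed attestation message
r = 19                        # blinding factor, coprime with n
blinded = (token * pow(r, e, n)) % n

# 2. Verifier checks the user's age by whatever means, then signs the
#    *blinded* value. It never sees `token` itself.
blind_sig = pow(blinded, d, n)

# 3. User unblinds the signature and presents (token, sig) to the site
sig = (blind_sig * pow(r, -1, n)) % n

# 4. The site verifies using only the public key: sig^e mod n == token
assert pow(sig, e, n) == token
print("signature valid; the verifier never saw the token it signed")
```

The unblinding works because sig = (token · rᵉ)ᵈ · r⁻¹ = tokenᵈ · r · r⁻¹ = tokenᵈ (mod n), so the site sees a perfectly ordinary signature while the verifier only ever handled an opaque blinded value.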
Step 4: make it apply to actually-harmful uses of social media, not a blanket ban on literally everything. Pixelfed is not harmful in the way Instagram is. It just isn’t. It doesn’t have the same insidious algorithms. Likewise Mastodon compared to Xitter. And why does Roblox, a site that has been the subject of multiple reports into how it facilitates child abuse, get a pass, while Aussie.Zone has to do some ridiculous stuff to verify people’s ages? Not to mention Discord, which is clearly social media, and 4chan, which is…4chan.
Step 5: consider the positive things social media can do. Especially for neurodiverse and LGBTQ+ kids, finding supportive communities can be a literal life-saver, and social media is great at that.
Step 3.5: look at the UK. Their age restriction has been an absolute failure: people using footage from video games to prove they’re old enough; other people having their documents leaked because of insecure age verification processes and companies keeping data they absolutely should not be holding on to.
And perhaps most importantly:
Step 0: Transparent democratic processes
Don’t put up legislation and pass it within 1 week. Don’t restrict public submissions to a mere 24 hours. Don’t spend just 4 hours pretending to consider those public submissions that did manage to squeeze into your tight timeframe. There is literally no excuse for a Government to ever act that fast (with a possible exception for quick responses to sudden, unexpected, acute crises, which this definitely is not). Good legislation takes time. Good democratic processes require listening to and considering a broad range of opinions. Even if everything the legislation delivered had actually ended up perfect, it would still be an absolutely shameful piece of legislation for the opaque, anti-democratic way in which it was passed into law.
And that’s not to mention the fact that in some ways, not having an account is making things more dangerous. Like how porn bans in other countries have basically just amounted to PornHub bans, with people able to ignore it by going to shadier sites with far worse content on them and less content moderation. And I’ve seen a number of parents point to YouTube in particular, saying that when their kids had an account, they were able to see the kids’ watch history, and could tell the YouTube algorithm to stop recommending specific channels or types of content. Without an account, you can’t do that.
And, naturally, we’re already seeing cases of kids passing despite being under-age. 11 year-olds who get told they look 18. A 13 year-old whose parent said they could pass for 10, who—just by scrunching his face up a bit—got the facial recognition to say he’s 30+. Shock-horror, facial recognition is not a reliable determiner of age. It never should have been allowed.
Zagorath@aussie.zone to Lemmy Shitpost@lemmy.world • Hey look, a giant sign telling you to find a different job (English)
1 · 2 days ago
> in most countries that would be solved by good regulations
Quite likely both, actually. Good regulations help reduce the chance of it happening, but if it does happen, damage is done. Regulations might mean they receive a fine, but that doesn’t make the victim of their negligence whole. Medical bills aren’t all there is to it. There’s the cost of pain and suffering. Probably time off work. (And having good leave policies doesn’t necessarily help, because now that’s leave she’s used for this that she can’t use if she later needs to for another reason.) Cost of repair/cleaning the car. Lawsuits would still happen.
And anyway, I’m not defending American anti-regulation bs. I’m defending people’s right to sue companies that wronged them. In the absence of good regulations protecting consumers, suing a company that did the wrong thing isn’t “absolute chaos”. There is no “absolute chaos of lawsuit nonsense”. That is corporate propagandistic bullshit.
Zagorath@aussie.zone to Programmer Humor@programming.dev • When you have to checkout the master branch (English)
10 · 4 days ago
Transcription
The “It’s an Older Meme, But It Checks Out” meme, featuring an image of an Imperial officer from Star Wars, with the caption:
It’s an older branch, sir
But it checks out
Zagorath@aussie.zone to Programmer Humor@programming.dev • When you have to checkout the master branch (English)
91 · 4 days ago
I usually create new repos through GitHub or another central repo’s system, where it defaults to calling the main branch `main`. But I did recently create a new repo with my local Git’s `git init`, and had to deal with a `master` branch on a completely new repo for the first time in a while. It was actually kinda a weird experience.
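For anyone else who wants local `git init` to match the hosted default, Git (2.28 and later) lets you configure the default branch name, or you can set it per-repo:

```shell
# Make all future `git init` repos start on `main` instead of `master`
git config --global init.defaultBranch main

# Or choose the branch name for a single new repo
git init -b main my-new-repo

# Or rename the branch in a repo you already created
git branch -m master main
```

The last command only renames the local branch; on an existing shared repo you’d also need to push the renamed branch and update the remote’s default.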
You’ve gotten a fuller answer, so I’ll elaborate on the “bat and ball” example I mentioned elsewhere. There’s a famous puzzle: a bat and ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?
System 1 thinking yields the answer $0.10, because $0.10 + $1 = $1.10. But the correct answer, which can only be arrived at with system 2, is $0.05, because the question isn’t actually $1.10 − $1.00; it’s x + (x + 1) = 1.1, so x = 0.05. That’s not a problem system 1 thinking can do, though.
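The system-2 check is one line of algebra. A quick sanity check, using exact fractions to avoid float rounding noise:

```python
from fractions import Fraction

total = Fraction("1.10")        # bat + ball
difference = Fraction("1.00")   # bat - ball
# ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05
ball = (total - difference) / 2
bat = ball + difference

assert bat + ball == total       # $1.10 together
assert bat - ball == difference  # bat costs $1 more
print(float(ball))  # 0.05 -- not the intuitive 0.10
```

The intuitive $0.10 fails the second constraint: the bat would then cost $1.10, making the total $1.20.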
You obviously do. It’s a pretty similar example to the classic ball & bat problem. It’s not hard maths, but system 1 basically doesn’t use maths at all. It’s pure instinct.
Transcription
White text on a teal background:
Guys I have bad news.
550 ÷ 2 is not 225.
The joke is that system 1 thinking is not as accurate as system 2 thinking.
Zagorath@aussie.zone to Lemmy Shitpost@lemmy.world • Hey look, a giant sign telling you to find a different job (English)
7 · 4 days ago
While some “nonsense lawsuits” do happen, there is a very strong extent to which the notion of “nonsense lawsuits” being an epidemic in America is pro-corporate propaganda, designed to get people to side with the big guy over the little guy who was wronged by them.
Take the infamous McDonald’s coffee lawsuit, for example. The woman in question received third-degree burns. Coffee served at a normal temperature does not do that; Maccas was serving it overly hot, and had even received multiple reports of it being a problem ahead of time. The woman initially only wanted them to pay for her medical bills. When they refused prior to the lawsuit, she sued. They again refused to cover the medical bills during settlement negotiations, and she rightly won big. Maccas’ negligence caused serious harm, and it’s right that they were stung for it.
That’s definitely interesting, but I use my PC as a general-purpose computer. I’d rather go with a general-purpose distro, like Ubuntu.
The fact is, right now we know that Facebook has at times made a deliberate, conscious choice to leave in aspects of their algorithm that were causing harm. Their own studies have shown this. Making that practice illegal—knowingly causing harm with your algorithm—would be a good place to start with regulation.