• 3 Posts
  • 92 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • Well, one doesn't necessarily need to get rid of the electoral college if the electors were appointed by proportional vote and representation. At that point it would just be a smudging filter: a national popular vote with extra steps, plus some added inaccuracy, since one can only be so proportional given how many electors there are.

    So the main problem is not the electoral college but the voting method. Just as a note: getting rid of the electoral college isn't a fix either if the direct popular election uses a bad voting method. Say, a nationwide plurality vote would be a horrible replacement for the electoral college.

    Though I would assume anyone suggesting a popular vote means a nationwide majority-win popular vote. That does demand a "failed to reach majority" resolver, be it a two-round system (a second round with the top two candidates, thus a guaranteed majority result) or some form of instant run-off with a guaranteed majority winner after the elimination rounds (see the sketch below).

    TLDR: the main problem is winner-take-all plurality, first past the post, more than the technicality of there existing such a bureaucratic element as electors and electoral votes.
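
    A minimal sketch of how such an instant-runoff resolver could work, assuming simple ranked ballots (the candidates and ballots are made up for illustration):

    ```python
    from collections import Counter

    def instant_runoff(ballots):
        """Eliminate last-place candidates until someone holds a majority.

        ballots: list of rankings, e.g. ["A", "C", "B"] means A > C > B.
        """
        candidates = {c for ballot in ballots for c in ballot}
        while True:
            # Count each ballot for its highest-ranked remaining candidate.
            tally = Counter(
                next(c for c in ballot if c in candidates)
                for ballot in ballots
                if any(c in candidates for c in ballot)
            )
            leader, votes = tally.most_common(1)[0]
            if votes * 2 > sum(tally.values()):
                return leader  # holds a majority of the remaining ballots
            # No majority: drop the weakest candidate and count again.
            candidates.remove(min(tally, key=tally.get))

    # Hypothetical race: no majority on the first count, "B" wins once "C" is eliminated.
    ballots = [["A", "B"], ["A", "B"], ["B", "A"], ["B", "A"], ["C", "B"]]
    print(instant_runoff(ballots))  # -> "B"
    ```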



  • 30 years away from it (reduced from the original 100 years they provided only 5 years ago)

    More like: estimates on this are completely unreliable. That 100 years could just as well have been 1000 years. It was pretty much "until an unpredictable technological paradigm shift happens". "100 years in the future" is the "when we have warp drives and star gates" of estimates. Pretty much "when we have advanced to the next level of technology, whenever that happens; 100 years is just a decent minimum so it isn't taken as an actual year-number estimate".

    30 years is "we maybe see a potential path to this via hypothetical developments of technology on the horizon". It's the classic "fusion is always 30 years away". Until one time it isn't, but that 30-year loop can go on indefinitely if the hypotheticals don't turn into reality. Since, you know, we thought "maybe that will work once we put our minds to it". Oh, it didn't; on to chasing the next path.

    I only know of one project with a 100-year estimate that is real: the Onkalo deep repository for spent fuel in Finland. It is estimated to spend 100 years being filled and is to be sealed in the 2120s, and that is an actual date. All the tech is known, the sealing process is known; it just happens to take a century to fill the repository bit by bit. Finland is a fairly stable country, and the radiation hazard is so long-term, that whatever government is there in the 2120s will most likely seal the repository.

    Unless "we invent warp drives" happens before that and some new process for actually efficiently and very safely getting rid of the waste is found. (And no, that doesn't include current recycling methods, since those aren't good enough to get rid of this large an amount with a small enough risk of side harms. Surprise: Finland studied this as an alternative, and it was simply decided "recycling is not good enough, simple enough, efficient enough or safe enough yet; bury it in a bedrock tomb".)


  • The main issue comes from GDPR. When one uses the consent basis for collecting and using information, it has to be a free choice. Thus one can't offer "pay us and we collect less information about you". Hence "pay or consent" is blatantly illegal. Showing generic ads? You don't need consent for that; that consent is "I vote with my browser address bar". The thing is just that nobody wants to use untracked ads anymore…

    So in this case DMA Article 5(2) is basically just reinforcement and emphasis of the existing GDPR principle. From The Verge:

    “exercise their right to freely consent to the combination of their personal data.”

    From the regulation:

    1. The gatekeeper shall not do any of the following:
      (a) process, for the purpose of providing online advertising services, personal data of end users using services of third parties that make use of core platform services of the gatekeeper;
      (b) combine personal data from the relevant core platform service with personal data from any further core platform services or from any other services provided by the gatekeeper or with personal data from third-party services;
      (c) cross-use personal data from the relevant core platform service in other services provided separately by the gatekeeper, including other core platform services, and vice versa; and
      (d) sign in end users to other services of the gatekeeper in order to combine personal data,

    unless the end user has been presented with the specific choice and has given consent within the meaning of Article 4, point (11), and Article 7 of Regulation (EU) 2016/679.

    Surprise: Regulation 2016/679 is… GDPR. So yeah, it's a new violation, but it pretty much amounts to "gatekeepers are under extra scrutiny for GDPR stuff; you violate, we can charge you for both a GDPR and a DMA violation, plus some extra rules and explicitness under the DMA".

    I think GDPR technically already bans combining without permission, since GDPR demands permission for every use case under consent-based processing. There must be consent for processing… combining is processing, so it needs consent. However, that is an interpretation of the general principle of GDPR. The DMA just makes it explicit: "oh, these specific kinds of processing, yeah, these need consent per GDPR" (a minimal sketch of that consent gate is below). Plus it also rules out arguing the "legitimate interest" legal basis for processing, explicitly ruling that these types of processing don't fall under legitimate interest for these companies; they are only and explicitly consent-based actions.
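
    Not from the regulation itself, just a minimal sketch of the logic it imposes, assuming a hypothetical gatekeeper service that wants to merge data across its platforms (all names and fields are made up for illustration):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class User:
        user_id: str
        # Consent has to be recorded per purpose; a blanket "yes" is not enough.
        consents: set[str] = field(default_factory=set)

    def combine_profiles(user: User, core_data: dict, other_data: dict) -> dict:
        """Combine personal data across services only with specific consent.

        Mirrors the DMA 5(2)(b) idea: combining is itself processing, so it
        needs its own freely given consent, not a "legitimate interest" claim.
        """
        if "combine_across_services" not in user.consents:
            raise PermissionError("No consent to combine personal data")
        return {**core_data, **other_data}

    # Hypothetical usage: without consent the data sets stay separate.
    alice = User("alice", consents={"show_generic_ads"})
    try:
        combine_profiles(alice, {"likes": ["cats"]}, {"watch_history": ["..."]})
    except PermissionError as err:
        print(err)
    ```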


  • That is just its core function doing its thing: transforming inputs to outputs based on learned pattern matching.

    It may not have been trained on translation explicitly, but it very much has been trained on "these things match" via its training material. You know what its training set most likely contained… dictionaries. Which is as good as asking it to learn translation. Another thing most likely in the training data: language course books, with matching translated sentences in them. Again, you didn't explicitly tell it to learn to translate, but in practice the training data selection did it for you (see the toy example below).
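
    A toy illustration of that point, with a made-up mini-corpus: ordinary "predict what comes next" training on dictionary-style lines already amounts to translation training, even though nobody labelled it as such.

    ```python
    # Hypothetical scraps of training text: dictionary entries and a course book.
    corpus = [
        "cat = chat",
        "dog = chien",
        "house = maison",
        "The cat sleeps. / Le chat dort.",
    ]

    # A vastly simplified stand-in for what next-token prediction picks up:
    # which token tends to follow "<english word> =".
    pairs: dict[str, str] = {}
    for line in corpus:
        if "=" in line:
            en, fr = (part.strip() for part in line.split("="))
            pairs[en] = fr

    # "Translating" is then just completing the same pattern seen in training.
    prompt = "cat ="
    word = prompt.removesuffix("=").strip()
    print(pairs.get(word, "?"))  # -> "chat"
    ```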




  • Nah. $2k was a cheap PR face-save for them. Pay $2k, or deal for weeks and months with "remember how Tesla was a stingy, bad corporation and cancelled a large order from a small business without compensation".

    Now they can go "well, yeah, the cancellation wasn't exactly graceful, but hey, we compensated the business for it. Our bad."

    Mind you, even paying the whole $15k would have been small change for them. So I guess they are not an utterly horrible company (business-relations-wise), but still a cheap conglomerate.


  • Well, the difference is in what it takes to know whether the AI produced what you actually wanted.

    Anyone can read a letter and tell whether the AI hallucinated or actually produced what you wanted.

    With code, it might produce something that on the first try does what you ask. However, it turns out the AI hallucinated a bug into the code for some edge or specialty case.

    Hallucinating is not a minor hiccup or a minor bug; it is a fundamental feature of LLMs, since they aren't actually smart. An LLM is a stochastic regurgitator. It doesn't know what you asked or understand what it is actually doing; it is matching prompt patterns to output. With enough training patterns to match, one statistically usually ends up about there. However, this is not guaranteed, and that is the main weakness of the system. More good training data makes it more likely to produce good results more often. However, for business-critical stuff, for example, you aren't interested in whether it got it about right the other 99 times. It 100% has to get it right this one time, since this code goes into a production business deployment.

    I guess one can write a comprehensive enough verified test suite, including all the edge cases, and with that verify the result (see the sketch below). However, now you have just shifted the job: instead of a programmer programming the program, you have a programmer programming the very, very comprehensive testing routines. Which can't be done by the LLM, since the whole point is that the testing routines are there to check for the inherent unreliability of the LLM output.

    It's a nice toy for someone wanting to make quick and dirty test code (maybe) to do thing X, then trying to find out whether it actually does what was asked or has unforeseen behavior, since I don't know what the behavior of the code is designed to be; I didn't write it. Good for toying around and maybe for quick and dirty brainstorming. Not good enough for anything critical that has to be guaranteed to work under a service contract and so on.

    So the real big job of the future will not be prompt engineers but quality assurance and testing engineers, who have to be around to guard against hallucinating LLMs and similar AIs. Prompts can be gotten from anyone; what is harder is finding out whether the prompt actually produced what it was supposed to produce.
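
    A minimal sketch of what that human-written verification looks like, assuming a hypothetical LLM-generated function parse_price that is supposed to turn price strings into cents (both the function and its bug are made up for illustration):

    ```python
    # Hypothetical LLM-generated function: looks right at a glance, but
    # mishandles single-digit cents ("7.5" becomes 705 instead of 750).
    def parse_price(text: str) -> int:
        whole, _, cents = text.strip().lstrip("$").partition(".")
        return int(whole) * 100 + int(cents or 0)

    # Human-written tests encode what the code is *supposed* to do,
    # including the edge cases the prompt author actually cares about.
    def test_parse_price():
        assert parse_price("$12.34") == 1234
        assert parse_price("7") == 700
        assert parse_price("$0.05") == 5
        assert parse_price("7.5") == 750  # fails: the hallucinated edge-case bug

    test_parse_price()
    ```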






  • Also, not only would they need more satellites, they would need satellites packed more densely over any area with a multitude of customers. Which eventually hits RF interference saturation.

    A radio signal has only so much bandwidth in a given amount of frequency band. In fact, being high up and far away makes it worse, since more receivers sit inside the beam of the satellite's transmission. One would have to acquire more radio bands, but well, unused global satellite transmission bands don't grow on trees.

    Tighter beams on the transmitters and better-filtering receivers can help, but usually at great expense, and eventually one hits the limit of "you can't cheat the laws of physics". A rough back-of-the-envelope of that limit is below.
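
    A rough back-of-the-envelope using the Shannon-Hartley limit, with assumed illustrative numbers (250 MHz of spectrum in one beam, a signal-to-noise ratio around 10 dB; these are not actual Starlink figures), showing how per-user throughput collapses as more users share one beam:

    ```python
    import math

    def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
        """Shannon-Hartley channel capacity: C = B * log2(1 + SNR)."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Assumed, illustrative numbers.
    beam_bandwidth_hz = 250e6   # spectrum available in one beam
    snr_db = 10                 # signal-to-noise ratio at the receiver

    beam_capacity = shannon_capacity_bps(beam_bandwidth_hz, snr_db)
    print(f"Beam capacity: {beam_capacity / 1e9:.2f} Gbit/s")  # ~0.86 Gbit/s

    # The whole beam is shared: more users in the footprint, less for each.
    for users in (10, 100, 1000):
        print(f"{users:>5} users -> {beam_capacity / users / 1e6:.1f} Mbit/s each")
    ```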



  • However, this isn't about your anecdotal experience. This is about what level of service they can guarantee, as a minimum and overall, to meet the conditions of the subsidy.

    I would also note this isn't a reinstatement matter. The FCC refused to give them the subsidy in the first place with this decision. What SpaceX is trying to spin as reneging on a previous decision is them having made the short list of companies to be considered. Well, getting shortlisted is not the same as being fully selected.

    They passed the criteria for the short-list check, but the final authorization and selection involved wider and more thorough checking of the companies' promises to meet the criteria, and SpaceX failed that more thorough final round of scrutiny before being awarded the subsidy.

    The government having awarded bad money previously isn't fixed by following up bad awards with more bad awards. SpaceX failed exactly because money was previously handed out too loosely, and the FCC has tightened scrutiny on subsidy awards so as not to follow bad money with more bad money.

    Nobody is prevented from buying Starlink; this just means Starlink isn't getting subsidized with taxpayer money.



  • There is possibly a pusher/braking truck attached to the rear of the transporter.

    Also, one must remember that with a transporter it is about overcoming rolling resistance rather than the weight itself. It doesn't necessarily take that powerful a truck to pull even a great load on flat ground (a rough back-of-the-envelope is below).

    Also, a turbine housing has a lot of air in it and, as equipment to be lifted to the top of a mast, is built with light weight in mind. Not for the sake of pulling it, but with the crane in mind that has to lift the thing up as dead load.
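
    A rough back-of-the-envelope of the rolling-resistance point, with assumed illustrative numbers (an 80-tonne load, a typical truck-tyre rolling-resistance coefficient of roughly 0.007, flat ground, crawling along at 20 km/h):

    ```python
    # Assumed illustrative numbers - not measurements of this transport.
    mass_kg = 80_000      # load plus trailer
    c_rr = 0.007          # rolling resistance coefficient, truck tyres on asphalt
    g = 9.81              # m/s^2
    speed_ms = 20 / 3.6   # 20 km/h in m/s

    # On flat ground the truck mostly has to overcome rolling resistance,
    # not lift the weight: F = c_rr * m * g, and the power needed is P = F * v.
    force_n = c_rr * mass_kg * g
    power_w = force_n * speed_ms

    print(f"Rolling resistance drag: {force_n / 1000:.1f} kN")   # ~5.5 kN
    print(f"Power needed at 20 km/h: {power_w / 1000:.1f} kW")   # ~31 kW
    ```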


  • Whatever it is called, with that kind of caffeine content you put a warning label on it listing exactly how much caffeine it has. Well, maybe unless it is literally named "coffee" and is plain brewed coffee, and at that, brewed coffee with the normal levels of caffeine coffee contains.

    One's frappe or whippazino had also better carry the needed labels in such cases, since given everything they mix in, how the heck is one to know exactly what the contents are? Oh, this is the extra special "angry frappe" with a double-squared shot of espresso; so exactly how much caffeine is that per glass, dear seller? I just thought you put chili in it or something to make it "angry", but it has literally several times more caffeine.

    This is why all the energy drinks, at least where I live, have the ever-present "contains a high amount of caffeine: x mg/100 ml".

    If you sell something like that as a counter-served item with no packaging label to read, well, now your menu must at minimum contain the highlights. Something like "our special drink (HC)", and then somewhere on the menu it reads "HC means high in caffeine". Then obviously at the counter there must be a full labelling booklet along the lines of "here is every product of ours, from the plainest brewed coffee to our jumbo mega sandwich and special brew beverage, with full nutritional information and ingredients".

    Just like one can't sell, say, a pastry with nut cream filling in a cafe without a big marker on all the menus saying "contains nuts, nut allergies beware". Just as consuming nuts can be life-threatening for someone with a nut allergy, consuming caffeine isn't healthy for some people, so it must be disclosed.


  • Especially in, say, foggy conditions and at a bit of distance. At that point you maybe won't clearly differentiate the individual elements; it's more like "that's the rear" and "a block of light in the middle, left and right". With it all blending together a little, one might in fact be under the impression "the light intensity at the rear dropped, huh, not braking then, maybe they had a dragging parking brake they just released or something… ohhh shiiit, no, it is braking hard".

    My two cents from up here in the north of Europe, land of snow, rain, fog and occasional white-out conditions.