• TheKrunkedJuan@lemmy.world · 9 months ago

    As someone who scripts a lot for my department in the tech industry: yeah, AI and scripts have a lot of potential to reduce labor. However, given how chaotic this industry is, there will still need to be humans to account for the variables that scripts and AI haven’t been trained on (or that are otherwise hard to predict). I know the managers don’t wanna spend their time on these issues, as there’s plenty more for them to deal with. When there’s true AGI, that may be a different scenario, but time will tell.

    Currently, we need some people in each department overseeing the automations in their area. This stuff mostly kills the super redundant data entry tasks that make me feel cross-eyed by the end of my shift. I don’t wanna be the embodiment of VLOOKUP between PDFs, typing the same number 4+ times.
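For what it’s worth, the kind of cross-referencing I mean can be sketched as a tiny VLOOKUP-style join in Python. All the file contents and field names below are made-up stand-ins for whatever the real exported reports look like:

```python
import csv
import io

# Hypothetical sample data standing in for two exported reports.
invoices = io.StringIO("invoice_id,amount\nINV-001,250.00\nINV-002,99.50\n")
vendors = io.StringIO("invoice_id,vendor\nINV-001,Acme\nINV-002,Globex\n")

# Build a lookup table from one sheet (the VLOOKUP "table array").
vendor_by_invoice = {row["invoice_id"]: row["vendor"]
                     for row in csv.DictReader(vendors)}

# Join each invoice row against it, instead of retyping numbers by hand.
merged = [
    {**row, "vendor": vendor_by_invoice.get(row["invoice_id"], "UNKNOWN")}
    for row in csv.DictReader(invoices)
]
print(merged[0])  # {'invoice_id': 'INV-001', 'amount': '250.00', 'vendor': 'Acme'}
```

In practice the ugly part is getting clean rows out of the PDFs in the first place; once the data is tabular, the join itself is the easy bit a script kills off.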

    • misspacific@lemmy.blahaj.zone · 9 months ago

      exactly, this will eliminate some jobs, but anyone who’s asked an LLM to fix code longer than 400 lines knows it often hurts more than it helps.

      which is why it is best used as a tool to debug code, or write boilerplate functions.

      • Ragnarok314159@sopuli.xyz · 9 months ago

        Do you think AI for programmers will be like CAD was for drafters? It didn’t eliminate the position, but allows fewer people to do more work.

        • misspacific@lemmy.blahaj.zone · 9 months ago

          this is pretty much what i think, yeah.

          a lot of programming/software design is already kinda that anyway. it’s a bunch of people who were educated on computer science principles, data structures, mathematics, and data analytics/stats, who write code to spec to solve very specific problems for very specific subsets of workers, and who maintain/update legacy code written decades ago.

          now, yeah, a lot of things are coded from scratch, but even then, you’re referencing libraries of code written by someone a while ago to solve this problem, serve this purpose, take this input, produce this output. that’s where LLMs shine, imo.

        • rottingleaf@lemmy.zip · 9 months ago

          No. More high-level languages with less abstraction leakage are like CAD for drafters. Not “AI”.

          I personally would want such tools to be more visual and more like systems, not algorithms.

          Like interconnected nodes in a control system. Like PureData for music, or like LabView. Maybe more powerful and general-purpose.
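A minimal sketch of what “interconnected nodes” could mean, in Python. This is purely illustrative (nothing like a real PureData or LabView engine); the node names and the pull-based evaluation are my own toy assumptions:

```python
# A toy dataflow graph: each node computes from its upstream nodes' outputs,
# the way patches wire together in PureData or LabView.
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def evaluate(self):
        # Pull values through the graph: evaluate upstream nodes first,
        # then apply this node's function to their results.
        return self.fn(*(n.evaluate() for n in self.inputs))

def const(v):
    return Node(lambda: v)  # source node with a fixed value

gain = Node(lambda x, g: x * g, const(0.5), const(2.0))  # multiply node
offset = Node(lambda x, o: x + o, gain, const(1.0))      # add node
print(offset.evaluate())  # 2.0
```

The point of the paradigm is that you reason about wiring between boxes rather than control flow, which is arguably a better fit for “systems, not algorithms.”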

      • Drewelite@lemmynsfw.com · 9 months ago

        But the fact that this tech really kicked off just three years ago and is already threatening so many jobs is pretty telling. Not only will LLMs continue to get better, but they’re a big step toward AGI, and that’s always been an existential crisis we knew was coming. This is the time to start adapting, quick.

        • hark@lemmy.world · 9 months ago

          They didn’t just appear out of nowhere, they’re the result of decades of research and development. You’re also making the assumption that additional progress is guaranteed. AI has hit walls and dead ends in the past, there’s no reason to assume that we’re not hitting a local maximum again right now.

          • Drewelite@lemmynsfw.com · 9 months ago (edited)

            And there’s no reason to believe that it is. I know there’s been speculation about model collapse and limits of available training data. But there’s also been advancements like training data efficiency and autonomous agents. Your response seems to ignore the massive amounts of progress we’ve seen in the space.

            Also the computer, internet, and smart phone were based on decades of research and development. Doesn’t mean they didn’t take off and change everything.

            The fact that you’re saying AI hit walls in the past and yet now we’re here is a pretty good indication that progress is guaranteed.

            • hark@lemmy.world · 9 months ago

              You said there’s no reason and then you list potential reasons right after. Yes, there has been progress and no one is arguing against that, but the two big issues are:

              1. What exists is being overhyped as far more capable than it really is.
              2. How much room there is to grow with current techniques is still unknown.

              The computer, internet, and smart phone are all largely deterministic with actions resulting in direct known outcomes. AI as we know it is based on highly complex statistical models and relies heavily on the data it is trained on. It has far more things that can go wrong which makes it unsuitable for critical applications (just look at the disasters when it’s used as a customer service representative). That’s not even getting into the legal issues that have yet to actually be answered. Just look at the CTO of OpenAI squirming on the question of what Sora was trained on (timestamped).

              Being able to overcome walls in the past doesn’t guarantee overcoming walls in the present. That’s like saying that being able to jump over a hurdle means you can leap over a skyscraper. There’s also the question of timing: it took decades for those previous walls to be overcome. The impact on the workforce is largely overstated and is being used as an excuse for cost cutting. It’s just like the articles about automation after the Great Recession. I’m still waiting on robots that can flip burgers (article from 2012).

              • Drewelite@lemmynsfw.com · 9 months ago (edited)

                I listed reasons people usually cite and why I don’t think they’re a good reason to assume there won’t be progress. I agree it’s over-hyped today, because people are excited about the obvious potential tomorrow. I think it’s foolish to hide behind that as if it’s proof that it doesn’t have potential.

                Let’s say you’re right and we hit a wall for 50 years with no progress on AI. There’s nothing magical about the human brain’s ability to make logical decisions based on observation and learning. It’s going to happen. And our current economic system, which ties a person’s value to their labor, will be in deep shit when it does. It could take a century to make an appropriate change here. We’re already way behind, even with a setback to AI.

                I think it’s funny when people complain about AI learning from copyright. AI’s express goal is to be similar to a human consciousness. Have you ever talked to a human who’s never watched a TV show, or a movie, or read a book from this century? An AI that’s not aware of those things would be like a useless alien to us.

                If people just want to use legal hangups to stop AI, fair play. But that plan is doomed; infinite brainpower is just too valuable. Copyright isn’t there to protect the little guy anymore; that was the original 28-year law. Its current form was lobbied for by corporations to stifle competition. And they’ll dismantle it (or ignore it) in a heartbeat once it suits them.

                • hark@lemmy.world · 9 months ago

                  The topic at hand is this survey, which claims significant impacts on the workforce within five years, and that is what I’m speaking to. As for copyright, these models are straight-up not possible without that data, and the link can be clearly demonstrated: their training data is available, and they may have to expose it in a court case. Forget about the little guy; the large corporations who own the data will not be happy letting them build this lucrative AI without getting paid for it. There will be legal fights, and that’s a potential complication in rolling this stuff out, so it should be considered.

        • rottingleaf@lemmy.zip · 9 months ago

          What does it threaten really?

          It works for contact centers, where bots answer short, simple questions so that agents’ time is used more efficiently. I’m not sure it saves that much money, TBF.

          It works for image classification. And still needs checking.

          It works for OCR. And still needs checking.

          It works for voice recognition and transcription, which is actually cool. Still needs checking.

          but they’re a big step towards AGI

          What makes you think that? Was the Mechanical Turk a big step towards thinking robots?

          They are very good at pretending to be that big step for people who don’t know how they work.

          • Drewelite@lemmynsfw.com · 9 months ago (edited)

            You’re right that it doesn’t save too much money making people more efficient. That’s why they will replace employees instead. That’s the threat.

            Yes, they make mistakes. So do people. They just have to make fewer than an employee does, and we’re on the right track for that. AI will always make mistakes, and that’s actually a step in the right direction. Deterministic systems that rely on concrete input and perfectly crafted statistical models can’t work in the real world. Once the system being evaluated (most systems in the real world) is sufficiently complex, you encounter unknown situations where you have to spend infinite time and energy gathering information and computing… or guess.

            Our company is small, and our customer inquiries increased severalfold because our product expanded. We were panicking, thinking we needed to train and hire a whole customer support department overnight, where we currently have one person. But instead we implemented AI representatives. Our feedback actually became more positive, because these agents can connect with you instantly, pull nebulous requests out of confusing messages, and alert the appropriate employee when action is needed. Does it make mistakes? Sure, but not enough to matter. It’s simple for our customer service person to reach out and correct the mistake.

            I think people that think this isn’t a big deal for AGI don’t understand how the human mind works. I find it funny when they try and articulate why they think LLMs are just a trick. “It’s not really creating anything, it’s just pulling a bunch of relevant material from its training data and using it as a basis for a similar output.” And… What is it you think you do?

      • hansl@lemmy.world · 9 months ago (edited)

        You’ll get blindsided real quick. AIs are just getting better. OpenAI is already saying they’ve moved past GPT for their next models. It won’t be 5 years before it can fix code longer than 400 lines, and not 20 before it can digest a specification and spit out working software. Said software might not be optimized or pretty, but those are things people can work on separately. Where you needed 20 software engineers, you’ll need 10, then 5, then 1-2.

        You have more in common with the guy getting replaced today than you care to admit in your comment.

        Edit: not sure why I’m getting downvoted instead of having a discussion, but good luck to you all in your careers.

        • misspacific@lemmy.blahaj.zone · 9 months ago

          i didn’t downvote you, regardless internet points don’t matter.

          you’re not wrong, and i largely agree with what you’ve said, because i didn’t actually say a lot of the things your comment assumes.

          the most efficient way i can describe what i mean is this:

          LLMs (this is NOT AI) can, and will, replace more and more of us. however, there will never, ever be a time when no human is overseeing it, because we design software for humans (generally), not for machines. this requires integral human knowledge, assumptions, intuition, etc.

          • hansl@lemmy.world · 9 months ago (edited)

            LLMs (this is NOT AI)

            I disagree. When I was studying AI in college 20+ years ago, we were also talking about expert systems, which are glorified if/else chains. Most experts in the field agree that those systems can also be considered AI (though not ML).

            You may be thinking of AGI or universal AI, which is different. I am a believer in the singularity (that a machine will be as creative and conscious as a human), but that’s a matter of opinion.

            I didn’t downvote you

            I was using “you” more towards the people downvoting me, not you directly. You can see the accounts who downvoted/upvoted, btw.

            Edit: and I assumed the implication of your comment was that “people who code are safe,” which is the stretch I was responding to. Your comment was ambiguous either way.

        • aesthelete@lemmy.world · 9 months ago (edited)

          Where you needed 20 software engineers, you’ll need 10, then 5, then 1-2.

          It’s an open secret that this is already the case. I have seen projects that went on for decades and only required the engineering staff they had because corporate bureaucracy and risk aversion make everyone a fraction as effective as they could be, and, frankly, because a lot of ineffective morons got into software development for the $$$ they could make.

          Unless AI somehow eliminates corporate overhead, I don’t understand how it’ll possibly make commercial development monumentally easier.

        • Drewelite@lemmynsfw.com · 9 months ago (edited)

          Yeah, people think AI is what sci-fi movies sold them: hyper-intelligent, hyper-aware sentient beings capable of love and blah blah blah. We’ll get there, but corps don’t need that. In fact, that’s the part they don’t want. They need a mindless drone to replace the 80% of their workers doing brainless jobs.

          • aesthelete@lemmy.world · 9 months ago

            They need a mindless drone to replace the 80% of their workers doing brainless jobs.

            Yeah, the problem there is that they don’t know their own staff well enough to know which people are doing the brainless jobs.

            • Drewelite@lemmynsfw.com · 9 months ago

              I’ve worked office jobs at a few large corporations. I’ve noticed they like to lay off a department, see how long the other departments can get by splitting up the work, then when everything is on fire they open up hiring. But every now and then… they let go of a department and everything just keeps working. It’s a strategy that seems to work, unfortunately.

    • rottingleaf@lemmy.zip · 9 months ago

      Scripting is one thing; an unpredictable plagiarism generator is another.

      If you mean ML text recognition, ML classification, etc., then yeah, why not.