2

What do you think about the claim that 'AI is about to make programmers obsolete'?

Comments
  • 7
    Try to use it for anything semi-difficult and you'll know what we all think.

    Sidenote:

    I've been trying to use ChatGPT to help me optimize a CUDA kernel for the past two days, and the only thing it achieved was making me explain to IT why all of its improvements are moot or outright wrong...

    Essentially it worked as a rubber ducky at best, which is valuable, but not a replacement
  • 1
    @Hazarth Try Claude
  • 1
    bullshit. marketing bullshit
  • 6
    Replace code monkeys? Sure.
    Replace software engineers? No fucking way.
  • 2
    I don't care I'll just grow weed
  • 2
    @Hazarth What @BordedDev says, Claude OPUS is not a joke. It generated this, for example, with favouriting, starring, playlists, tags... The changes that I had to do were VERY minimal.
  • 3
    @Lensflare I like that, I'm going to use that in the future as a summary
  • 5
    @retoor, @BordedDev

    I'm gonna try Claude, but god protect you if it also sucks balls. :)
  • 2
    @Hazarth Select Opus and reasoning. The code generally doesn't have programming errors. But Claude also comes with a downside: limited usage and a limited context window.
  • 2
    @retoor I mean, I'm not going to pay for a month of something I'm going to test once.

    Sticking with the free option, which btw is not much better than ChatGPT tbh, didn't help. I solved the issue myself in the end again. Turns out it was an issue in my debugging assumptions and previous computations, but it didn't catch it. Even though it calculated tons of numbers for me, it never even considered that some of the values I provided were suspicious and mostly agreed with my observations (sycophancy is a massive issue for all LLMs, probably unsolvable since it comes from the alignment training and shitty user-centric data).
  • 1
    @retoor See, the thing is, I have very specific issues, usually with very low-level code, trying to test novel algorithms or implement interesting papers... So LLMs usually have very little useful insight for me.

    It's great if you tell it "generate the same website template that you've already seen a billion times when scraping the web"

    but for anything actually interesting, it's a rubber ducky at best. Sometimes it generates some interesting insights or approaches I didn't know about previously and can catch obvious coding mistakes and edge cases... but it's not really capable of generating novel solutions unless you hold its hand... And at that point I'm the one providing the detailed spec and design already, it just converts it to subpar, but usually functional, code...

    which is why I'm pretty sure it can't actually replace an engineer at any step of the solution-creation process. It can replace code monkeys, but not engineers
  • 1
    don't care

    average human dumb

    ones who think they know shit and can't take an alternative point of view due to their cringe arrogance doubly dumb
  • 2
    @jestdotty That, and once enough of them believe it, it becomes the truth whether it's factual or not
  • 3
    @jestdotty

    "alternative point of view" being some idiotic ideas like flat earth, young earth creationism, electric universe, etc.

    No, some ideas are actually pretty dumb.

    The irony of those idiots looking down on sane people and saying "average human dumb".
  • 1
    I use my work ChatGPT account to make shit like this: