4
jestdotty
172d

opus > gpt

spent days trying to get info out of gpt and I think it made me question my own sanity. my brain is so tired and mangled by the whole experience

tried opus and it can remember a conversation and doesn't gaslight you all the time. goddamn

and it writes code that you can actually read
and it will actually answer questions instead of equivocating like a condescending dick

Comments
  • 1
    LOL am I the only one who thinks gpt likes to gaslight? Guess not lol
  • 6
    @TeachMeCode the problem is we keep assigning human properties to things :) it doesn't gaslight you, it's just multiplying numbers until it gives you a string of words that kinda fit together within the context that fits into its input vector.

    No matter how big we make the context, it will inevitably hallucinate things or fail to give attention to important details from the past. We just have to use it as such and not expect "intelligence" from it.
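
    A toy Python sketch of what "multiplying numbers" means here (made-up 6-word vocabulary, random weights standing in for trained parameters; an illustration of the idea, not any real model's code):

        import numpy as np

        rng = np.random.default_rng(0)

        VOCAB = ["the", "cat", "sat", "on", "mat", "."]
        CONTEXT_WINDOW = 4  # tokens older than this are simply dropped

        # Stand-ins for learned weights; a real model has billions of them.
        EMBED = rng.normal(size=(len(VOCAB), 8))
        W_OUT = rng.normal(size=(8, len(VOCAB)))

        def next_token(context):
            # 1. Truncate: whatever falls outside the window gets no attention at all.
            window = context[-CONTEXT_WINDOW:]
            # 2. Multiply numbers: embed the window and mix it into one input vector.
            x = EMBED[[VOCAB.index(t) for t in window]].mean(axis=0)
            logits = x @ W_OUT
            # 3. Emit whichever token scores highest. No goal, no plan.
            return VOCAB[int(np.argmax(logits))]

        context = ["the", "cat"]
        for _ in range(5):
            context.append(next_token(context))
        print(" ".join(context))  # a string of words that kinda fit, nothing more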
  • 0
    I personally don’t use things I can’t download from the internet, run, and modify on my computer.

    To me, all this chat stuff is sales bullshit to pay bigger bonuses to C-levels, plus money laundering.

    It’s a trojan horse to kill the internet so there’s no free speech and we’re back to the government telling us what to do, but instead of showing us generic news on TV, in newspapers, and on websites, they will tell us directly how we need to behave and what we need to do.

    That’s classic Orwellian brainwashing.
  • 0
    I guess I kept swinging between them for a long time and decided to use both, sometimes simultaneously.

    Claude writes great code in Python but writes like a toddler in CUDA; GPT does very well with CUDA.

    stuff like that
  • 0
    @jestdotty I only disagree on a technicality. The term "gaslighting" is defined as:

    "Gaslighting is an insidious form of manipulation and psychological control. Victims of gaslighting are deliberately and systematically fed false information that leads them to question what they know to be true, often about themselves."

    It may be unconscious, but it has to be deliberate and systematic, which LLMs are not capable of. There is no long-term planning done by LLMs; they only generate reactively, given past context.

    I mean, I know what y'all mean when you say the LLM is gaslighting, but it's terminologically wrong, and for whatever reason those things always rub me the wrong way. Probably my autism speaking, but I think language is important.

    But whatever, I already know we don't agree on LLMs xD
  • 0
    @jestdotty That's the thing though. It's not programmed for someone's intentions. They would have to train the AI purely on gaslighting conversations to achieve this; it can't just be 5MB in a 500GB dataset, it would get lost in the noise. Simply speaking, the LLM has no idea what it's doing or saying; it has no consciousness and it cannot plan for anything. It's a direct A -> B mapping of input to output. Any seeming gaslighting is just pareidolia on the human's part.

    Not to mention, I've never had this experience with any of the models. I perceive them as simply dumb, not deceiving, because that's what they are.
  • 0
    @Hazarth why you think that's different from humanity, I'll never know.

    We're a life-support system for random neuron connections. Those connections were trained on language for dozens of years and can select the next best token to speak or write based on a long context window. They do the same thing with muscle control and sensory input.

    Two years ago computers could only do calculations, and now they can create novel D&D campaigns, make new art, pilot robotic bodies, and do math.

    We built them like us, we trained them like us, and now they behave like us.
  • 0
    @lungdart It is different because:

    1. We're not "random neuron connections". The brain has multiple specialized areas that are common to every human: a center that predominantly handles speech, a part that handles life support, an outer layer that encodes memories, and so on and so forth. There's nothing random about brains. The brain has evolved a specific structure over evolutionary time, and language processing is only one part of it.

    2. Human language synthesis is not per-token generation. We don't put words after words based only on the previous words; we have a complex, abstract, conceptual thought process. (As very simple examples: how many times do you go back to rethink a sentence, or know whole paragraphs ahead what you actually want to say?) If you think we do less, then you sorely underestimate human brains.

    3. It's not creating "novel D&D campaigns"; it's producing a conglomeration of things it has already seen. LLMs are incapable of novel ideas.

    cont'd
  • 0
    cont'd

    LLMs are not even capable of randomness on their own. Current LLM models achieve "creativity" only by random sampling *after* the neural network processing. So any sort of "novelty" or "creativity" is not an inherent property of neural networks; it's a mathematical trick that is almost as old as math itself: a random choice based on probabilities (or compound probabilities in the case of beam search).

    It's a topic of debate whether humans have inherent random properties thanks to quantum-level effects or not. I won't claim to know if we do. I personally think we don't, but even if we do, it's a property of the entire thought process and not just "word prediction".
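
    A minimal sketch of that point (toy logits, plain Python, illustration only): the network's scores are deterministic, and the "creativity" is an ordinary weighted random draw bolted on afterwards.

        import math
        import random

        def sample(logits, temperature=1.0):
            if temperature == 0:
                # No randomness at all: same input, same token, every time.
                return max(range(len(logits)), key=lambda i: logits[i])
            # Softmax with temperature, then a plain weighted random choice,
            # a probability trick far older than neural networks.
            scaled = [l / temperature for l in logits]
            m = max(scaled)  # subtract the max for numerical stability
            weights = [math.exp(s - m) for s in scaled]
            return random.choices(range(len(logits)), weights=weights, k=1)[0]

        logits = [2.0, 1.0, 0.5]  # made-up scores from the network
        print(sample(logits, temperature=0))    # always token 0
        print(sample(logits, temperature=1.0))  # the randomness lives out here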
  • 0
    @jestdotty You still don't get it. There is 0% gaslighting intention because there is no intention. It's not a percentage game. You're assigning intent to an algorithm, and that's already a mistake.