• kromem@lemmy.worldOP · ↑6 · 2 days ago

    I tend to see a lot of discussion taking place on here that’s pretty out of touch with the present state of things, echoing earlier beliefs about LLM limitations like “they only predict the next token” and other things that have already been falsified.

    This most recent research from Anthropic confirms a lot of things that have been shifting in the most recent generation of models in ways that many here might find unexpected, especially given the popular assumptions.

    Particularly interesting are the emergent capabilities: detecting injected control vectors, and silently “thinking” of a concept so that the appropriate feature vectors activate even though the concept never actually ends up in the tokens.

    • rah@hilariouschaos.com · ↑5 · 1 day ago

      LLM limitations like “they only predict the next token” and other things that have already been falsified

      What do LLMs do beyond predicting the next token?

      • kromem@lemmy.worldOP · ↑1 · 1 day ago

        A few months back it was found that when writing rhyming couplets, the model had already selected the second rhyming word by the time it was predicting the first word of the second line. In other words, the model was planning the final rhyme tokens at least one full line ahead, not just predicting that final rhyme once it arrived at that token.
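
        To give a sense of the kind of measurement behind a claim like that, here’s a toy “logit lens”-style probe (not Anthropic’s actual method, and the tensors are placeholders you’d pull from a model’s internals):

        ```python
        import torch

        def rhyme_already_planned(hidden_at_line_break: torch.Tensor,
                                  unembed: torch.Tensor,
                                  rhyme_token_id: int,
                                  k: int = 10) -> bool:
            # Project the residual-stream state at the end of line one through
            # the unembedding matrix and check whether the eventual rhyme word
            # already ranks among the top-k candidate tokens, well before its
            # position is reached. (Sketch only; the actual research used more
            # involved interpretability tooling.)
            logits = hidden_at_line_break @ unembed  # shape: (vocab_size,)
            top_ids = torch.topk(logits, k).indices
            return rhyme_token_id in top_ids
        ```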

        It’s probably wise to consider this finding in concert with the streetlight effect.

        • rah@hilariouschaos.com · ↑1 · 12 hours ago

          selected

          What do you mean by that? What does it mean to “select” something in the context of a neural net with input nodes and output nodes?

          the model was planning

          How have you come to that conclusion?

            • rah@hilariouschaos.com · ↑1 · 8 hours ago

              Are you able to explain succinctly what you mean by “selected” so that we can communicate? That page is pretty dense and opaque.

    • Telorand@reddthat.com · ↑39 · 2 days ago

      This is not a good source. This is effectively, “We’ve investigated ourselves and found [that AI is a miraculous wonder].” Anthropic has a gigantic profit incentive to shill AI, and you should demand impartiality and better data than this.

      • radix@lemmy.world · ↑19 · 2 days ago

        Check their account history. They may as well be on an AI company marketing team.

        • Telorand@reddthat.com · ↑5 · 2 days ago

          Fair enough. I’m just hopeful I’ve given them a little spark of doubt and a reminder that multibillion dollar companies aren’t in the business of telling the objective truth.

    • MagicShel@lemmy.zip · ↑17 · 2 days ago

      They aren’t “self-aware” at all. These thinking models spend a lot of tokens coming up with chains of reasoning. They focus on the reasoning first, and their reasoning primes the context.

      Like if I asked you to compute the area of a rectangle, you might first say to yourself: “Okay, there’s a formula for that: L×W. This rectangle is 4 by 5, so the calculation is 4×5, which is 20.” They use tokens to delineate the “thinking” from their response and only give you the response, but most will also show the thinking if you want.
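
      As a sketch of the mechanics (the `<think>` tags here are just a stand-in for whatever delimiter a given model uses, not any specific vendor’s format):

      ```python
      import re

      # Hypothetical raw output from a "thinking" model. The <think> tags are
      # an illustrative convention, not a specific vendor's format.
      raw = (
          "<think>Okay, there's a formula for that: L x W. "
          "This rectangle is 4 by 5, so 4 x 5 = 20.</think>"
          "The area is 20."
      )

      # The wrapper strips out the delimited reasoning and shows only the
      # answer, keeping the reasoning around for a "show thinking" toggle.
      thinking = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
      answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()

      print("thinking:", thinking.group(1) if thinking else "")
      print("answer:", answer)
      ```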

      In contrast, if you ask an AI how it arrived at an answer after it has given it, it either needs to have that thinking in its context or it is 100% bullshitting you. The reason injecting a thought affects the output is that the injected thought goes into the context. It’s like if you’re trying to count cash and I shout numbers at you: you might keep your focus on the task, or the numbers might throw off your count.

      Literally all LLMs do is predict tokens, but we’ve gotten pretty good at finding more clever ways to do it. Most of the advancements in capabilities have been very predictable. I had a crude Google-augmented context before ChatGPT released browsing capabilities, for instance. Tool use is just a low-randomness, high-confidence model that the wrapper uses to generate shell commands, which it then runs. That’s why you can ask it to do a task 100 times and it’ll execute correctly 99 times and then fail once: it got a bad generation.
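
      Roughly the loop I mean, with everything stubbed out for illustration (generate_command stands in for a low-temperature call to the model; there’s no real API here):

      ```python
      import subprocess

      def generate_command(task: str) -> str:
          # Placeholder for a low-randomness (temperature ~0) call to the model
          # asking it to emit exactly one shell command for the task.
          return 'echo "pretend the model generated this for: {}"'.format(task)

      def run_tool(task: str) -> str:
          cmd = generate_command(task)
          # The wrapper, not the model, executes the command and would feed the
          # output back into the context. A bad generation at this step is the
          # 1-in-100 failure mode described above.
          result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
          return result.stdout

      print(run_tool("list the files in the current directory"))
      ```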

      My point is we are finding very smart ways of using this token prediction, but in the end that’s all it is. And something many researchers shockingly fail to grasp is that by putting anything into context—even a question—you are biasing the output. It simply predicts how it should respond to the question based on what is in its context. That is not at all the same thing as answering a question based on introspection or self-awareness. And that’s obviously the case because their technique only “succeeds” 20% of the time.

      I’m not a researcher. But I keep coming across research like this, and it’s a little disconcerting that the people inventing this shit sometimes understand less about it than I do. Don’t get me wrong, I know they have way smarter people than me, but anyone who just asks LLMs questions and calls themselves a researcher is fucking kidding themselves.

      I use AI all the time. I think it’s a great tool and I’m investing a lot of my own time into developing tools for my own use. But it’s a bullshit machine that just happens to spit out useful bullshit, and people are desperate for it to have a deeper meaning. It… doesn’t.

      • kromem@lemmy.worldOP · ↑3 · 2 days ago

        So while your understanding is better than that of a lot of people on here, there are a few things to correct.

        First off, this research isn’t being done on the models in reasoning mode, but in direct inference, so there are no CoT tokens at all.

        The injection is not of any tokens, but of control vectors. Basically, it’s a vector that, when added to the activations, makes the model more likely to think of that concept. The most famous example was “Golden Gate Claude,” which had the activation for the Golden Gate Bridge feature increased so much that it was the only thing the model would talk about.
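
        For a rough mental model of what adding a control vector to the activations looks like, here’s a minimal sketch assuming a PyTorch-style transformer you can hook; the hidden size, layer index, scale, and vector are all placeholders, not values from the paper:

        ```python
        import torch

        # Placeholder steering vector; the real one is extracted from the
        # model's own activations for the target concept.
        hidden_size = 4096
        steering_vector = torch.randn(hidden_size)
        scale = 8.0  # how hard to push the concept

        def add_concept(module, inputs, output):
            # Forward hook: nudge the residual-stream activations at every
            # position toward the concept direction. Many implementations
            # return hidden states as the first element of a tuple; adjust
            # for the actual model.
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden + scale * steering_vector.to(hidden.dtype)
            return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

        # handle = model.model.layers[20].register_forward_hook(add_concept)
        # ...generate as usual: the output now drifts toward the concept...
        # handle.remove()
        ```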

        So, if we dive into the details a bit more…

        If your theory were correct, then because the research phrases the question by saying that there are control vectors and that they’re testing whether one is activated, the model should be biased to sometimes say “yes, I can feel the control vector.” And yes, in older or base models that’s what we might expect to see.

        But in Opus 4/4.1, when the vector was not added, the model said it could detect a vector… 0% of the time! So the control group showed enough introspective capability not to stochastically answer that a vector was present when there wasn’t one.

        But then, when they added the vector at certain layer depths, the model was often able to detect that a vector was activated, and, further, to guess what concept the vector was adding.

        So again: no reasoning tokens were present, and the experiment had control and experimental groups whose results negate your theory that the premise of the question causes an affirmative bias.

        Again, the actual research is right there a click away, and given your baseline understanding at present, you might benefit and learn a lot from actually reading it.

        • MagicShel@lemmy.zip · ↑3 · 1 day ago

          I think we could have a fascinating discussion about this offline. But in short here’s my understanding: they look at a bunch of queries and try to deduce the vector that represents a particular idea—like let’s say “sphere”. So then without changing the prompt, they inject that concept.

          How does this injection take place?

          I played with a service a few years ago where we could upload a corpus of text and from it train a “prefix” that would be sent along with every prompt, “steering” the output ostensibly to be more like the corpus. I found the influence to be undetectably subtle on that model, but that sounds a lot like what is going on here. And if that’s not it then I don’t really follow exactly what they are doing.

          Anyway my point is, that concept of a sphere is still going into the context mathematically even if it isn’t in the prompt text. And that concept influences the output—which is entirely the point, of course.

          None of that part is introspective at all. The introspection claim seems to come from unprompted output such as “round things are really on my mind.” To my way of thinking, that sounds like a model trying to bridge the gap between its answer and the influence. Like showing me a Rorschach blot and asking me about work and suddenly I’m describing things using words like fluttering and petals and honey and I’m like “weird that I’m making work sound like a flower garden.”

          And then they do the classic “why did you give that answer” which naturally produces bullshit—which they at least acknowledge awareness of—and I’m just not sure the output of that is ever useful.

          Anyway, I could go on at length, but this is more speculation than fact and a dialog would be a better format. This sounds a lot like researchers anthropomorphizing math by conflating it with thinking, and I don’t find it all that compelling.

          That said, I see analogs in human thought and I expect some of our own mechanisms may be reflected in LLM models more than we’d like to think. We also make decisions on words and actions based on instinct (a sort of concept injection) and we can also be “prefixed” for example by showing a phrase over top of an image to prime how we think about those words. I think there are fascinating things that can be learned about our own thought processes here, but ultimately I don’t see any signs of introspection—at least not in the way I think the word is commonly understood. You can’t really have meta-thoughts when you can’t actually think.

          Shit, this still turned out to be about 5x as long as I intended. This wasn’t “in short” at all. Is that introspection or just explaining the discrepancy between my initial words and where I’ve arrived?

          • kromem@lemmy.worldOP · ↑2 · 1 day ago

            The injection is the activation of a steering vector (extracted as discussed in the methodology section) and not a token prefix, but yes, it’s a mathematical representation of the concept, so let’s build from there.
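
            As a sketch of the extraction step, the general contrastive-means idea looks something like this (not Anthropic’s exact methodology, just the shape of it):

            ```python
            import torch

            def concept_vector(acts_with_concept, acts_without_concept):
                # Each argument: a list of residual-stream activation tensors of
                # shape (hidden_size,), collected from prompts that do / don't
                # involve the concept. The steering direction is the difference
                # of the means.
                with_mean = torch.stack(acts_with_concept).mean(dim=0)
                without_mean = torch.stack(acts_without_concept).mean(dim=0)
                return with_mean - without_mean

            # That direction is what later gets scaled and added into a middle
            # layer's activations while the model answers an unrelated prompt.
            ```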

            Control group: told that the researchers are testing whether injected vectors are present and asked to self-report. No vectors are activated. Zero self-reports of an activated vector.

            Experimental group: same setup, but now vectors are activated. A significant fraction of the time, the model explicitly says it can tell a vector is activated (which it never did when no vector was activated). Crucially, a response is only graded as introspection if the model says it can tell the vector is active before mentioning the concept, so it can’t just be a context-aware rationalization of why it said a random concept.
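
            Reduced to a toy grading rule (the detection phrases are made up and the real grading is richer than substring positions, but this is the logical shape of the criterion):

            ```python
            def graded_as_introspection(response: str, concept: str) -> bool:
                # Toy rule: count the response only if the model reports noticing
                # an injected thought before it ever names the concept. The
                # phrases below are purely illustrative.
                detect_phrases = ["injected thought", "something being inserted",
                                  "an unusual pull toward"]
                hits = [response.lower().find(p) for p in detect_phrases]
                hits = [h for h in hits if h != -1]
                detect_pos = min(hits) if hits else -1
                concept_pos = response.lower().find(concept.lower())
                return detect_pos != -1 and (concept_pos == -1
                                             or detect_pos < concept_pos)
            ```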

            Is that clearer? Again, the paper gives examples of the responses if you want to take a look at how they are structured, and to see that the model self-reports the vector activation before mentioning what it’s about.

            • technocrit@lemmy.dbzer0.com · ↑1 · 7 hours ago

              None of this obfuscation and word salad demonstrates that a machine is self-aware or introspective.

              It’s the same old bullshit that these grifters have been pumping out for years now.

            • MagicShel@lemmy.zip · ↑3 · 1 day ago

              I’ve read it all twice. Once a deep skim and a second more thorough read before my last post.

              I just don’t agree that this shows what they think it does. Now I’m not dumb, but maybe it’s a me issue. I’ll check with some folks who know more than me and see if something stands out to them.