• @5gruel@lemmy.world
    2 points · 7 months ago

    I’m not convinced by the claim that “a human can say ‘that’s a little outside my area of expertise,’ but an LLM cannot.” I’m sure there are plenty of examples in the training set that contain qualified answers and expressions of uncertainty, so why wouldn’t the model be able to generate that kind of output? I don’t see why that specifically would require “understanding.” I would suspect that better human reinforcement would make such answers possible.

    • @dustyData@lemmy.world
      14 points · 7 months ago

      Because humans can do introspection and can think and reflect on our own knowledge against the perceived expertise and knowledge of other humans. There’s nothing in LLMs capable of doing this. An LLM cannot assess its own state, and even if it could, it has nothing to contrast it with. You cannot develop the concept of ignorance without an other to interact with and compare against.