• @Lmaydev@programming.dev · 34 · 6 months ago

    Honestly I feel people are using them completely wrong.

    Their real power is their ability to understand language and context.

    Turning natural language input into commands that can be executed by a traditional software system is a huge deal.

    Microsoft released an AI-powered auto-complete text box and it’s genius.

    Currently you have to type an exact text match in an auto-complete box. So if you type “cats” but the item is called “pets”, you’ll get no results. Now the AI can find context-based matches in the auto-complete list.
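
    The idea can be sketched with embedding similarity. This is a toy illustration, not Microsoft’s implementation: the hand-made vectors below stand in for real learned embeddings, which is how “cats” can match “pets” despite the strings sharing no characters.

```python
import math

# Hand-made stand-in embeddings (a real system would get these from a
# trained model): "cats" and "pets" point in similar directions,
# "invoices" does not.
EMBEDDINGS = {
    "cats":     [0.9, 0.1, 0.0],
    "pets":     [0.8, 0.2, 0.1],
    "invoices": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_complete(query, items, threshold=0.7):
    """Return auto-complete items whose embedding is close to the
    query's, even when the strings share no characters."""
    q = EMBEDDINGS[query]
    return [item for item in items
            if cosine(q, EMBEDDINGS[item]) >= threshold]

print(semantic_complete("cats", ["pets", "invoices"]))  # ['pets']
```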

    This is their real power.

    Also, they’re amazing at generating non-factual things: stories, poems, etc.

    • @noodlejetski@lemm.ee · 54 · 6 months ago

      Their real power is their ability to understand language and context.

      …they do exactly none of that.

        • @FarceOfWill@infosec.pub · 20 · 6 months ago

          They’re really, really bad at context. The main failure case isn’t making things up; it’s that text or imagery in one part of the result doesn’t work with text or imagery in another part, because they can’t even manage context across their own replies.

          See images with three hands, bow strings that mysteriously vanish, etc.

          • @FierySpectre@lemmy.world · -1 · 6 months ago

            New models are really good at context, and the amount of input that can be given to them has exploded (fairly) recently. So you can give them whole datasets or books as context and ask questions about them.

    • @Blue_Morpho@lemmy.world · 26 · 6 months ago

      So if you type “cats” but the item is called “pets”, you’ll get no results. Now the AI can find context-based matches in the auto-complete list.

      Google added context search to Gmail and it’s infuriating. I’m looking for an exact phrase that I even put in quotes but Gmail returns a long list of emails that are vaguely related to the search word.

        • @Blue_Morpho@lemmy.world · 3 · 6 months ago

          It shouldn’t even automatically fall back. If I’m looking for an exact phrase and it doesn’t exist, the result should be “nothing found”, so that I can search somewhere else for the information. A prompt like “Nothing found. Look for related information?” would be useful.

          But returning a list of related information when I need an exact result is worse than not having search at all.

    • @hedgehogging_the_bed@lemmy.world · 13 · 6 months ago

      Searching with synonym matching is decades old at this point. I worked on it as an undergrad in the early 2000s and it wasn’t new then, just complicated. Google’s version improved over other search algorithms for a long time, and then they trashed it by letting AI take over.

      • @Lmaydev@programming.dev · 3 · edit-2 · 6 months ago

        Google’s algorithm has pretty much always used AI techniques.

        It doesn’t have to be a synonym. That’s just an example.

        Typing “diabetes” and getting medical services as a result wouldn’t be possible with that technique unless you had a database of every disease to search against for all queries.

        The point is that with AI you don’t need a giant lookup of linked items, because those associations are already trained into the model.
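
        The contrast can be sketched like this: a lookup-based system only connects terms someone has explicitly curated. (Toy illustration; the table and terms are made up.)

```python
# Hand-curated related-terms table: the lookup approach only works if
# an entry like this exists for every possible query.
RELATED_TERMS = {
    "diabetes": {"medical services", "endocrinology", "insulin"},
}

def lookup_search(query, items):
    """Match items equal to the query or listed in its curated
    related-terms set; an uncurated query finds no related results."""
    related = RELATED_TERMS.get(query, set())
    return [item for item in items if item == query or item in related]

print(lookup_search("diabetes", ["medical services", "billing"]))   # ['medical services']
print(lookup_search("arthritis", ["medical services", "billing"]))  # []
```

        A model with those associations trained in needs no such table.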

        • @hedgehogging_the_bed@lemmy.world · 1 · 6 months ago

          Yes, synonym searching doesn’t strictly mean the thesaurus. There are a lot of different ways to connect related terms and some variation in how they are handled from one system to the next. Letting machine learning into the mix is a very new step in a process that Library and Information Sci has been working on for decades.

    • Th4tGuyII · 9 · 6 months ago

      Exactly. The big problem with LLMs is that they’re so good at mimicking understanding that people forget that they don’t actually have understanding of anything beyond language itself.

      The thing they excel at, and should be used for, is exactly what you say - a natural language interface between humans and software.

      Like in your example, an LLM doesn’t know what a cat is, but it knows what words describe a cat based on training data - and for a search engine, that’s all you need.

      • @Lmaydev@programming.dev · -1 · edit-2 · 6 months ago

        No it’s not.

        Fuzzy matching is a search technique that compares two strings and tolerates some degree of difference between them, so near-matches still count.

        That allows for mistyping etc., but it doesn’t allow context-based searching at all. “Cat” doesn’t fuzz with “pet”; there is no string similarity.

        Also it is an AI technique itself.
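
        For example, with Python’s stdlib difflib (one simple fuzzy-matching approach among many), a near-spelling matches but a synonym doesn’t:

```python
from difflib import SequenceMatcher

def fuzzy_match(query, candidates, threshold=0.8):
    """Return candidates whose string similarity to the query meets
    the threshold (Ratcliff/Obershelp ratio, 0.0 to 1.0)."""
    return [c for c in candidates
            if SequenceMatcher(None, query, c).ratio() >= threshold]

# The near-spelling "cats" matches, but the synonym "pets" does not:
print(fuzzy_match("cat", ["cats", "pets"]))  # ['cats']
```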

    • @not_amm@lemmy.ml · 2 · 6 months ago

      That’s why I only use Perplexity. ChatGPT can’t give me sources unless I pay, so I can’t trust the information it gives me, and it also hallucinated a lot when coding; it was faster to search the official documentation than to correct and debug code “generated” by ChatGPT.

      I use Perplexity + SearXNG, so I can search a lot faster, cite sources, and get summaries of my searches, which saves me time while writing introductions and so on.

      It sometimes hallucinates too and cites weird sources, but it’s faster for me to correct it and search for better sources given the context and extra ideas. In short, by the time you’ve corrected the prompts and searched beyond Perplexity, you already have something useful.

      BTW, I try not to use it a lot, but it’s way better for my workflow.