Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits”: https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

I tried to build a glass box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in (a rough sketch of that step follows this list)
  • >> moves the original to a sub-folder
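
To make the provenance bit concrete, here’s roughly what that step boils down to. This is a minimal sketch, not llama-conductor’s actual code: the helper names and the header layout are made up for illustration.

```python
# Hypothetical sketch of the SUMM provenance step: hash the source doc, have the
# model summarize it, and write a SUMM_*.md that carries the source SHA-256.
# summarize() is whatever LLM call you use; the header layout is made up.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def write_summ(doc: Path, summarize) -> Path:
    digest = sha256_of(doc)
    summary = summarize(doc.read_text(encoding="utf-8", errors="ignore"))
    out = doc.parent / f"SUMM_{doc.stem}.md"
    out.write_text(
        f"---\nsource: {doc.name}\nsource_sha256: {digest}\n---\n\n{summary}\n",
        encoding="utf-8",
    )
    return out
```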

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
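
If you want a feel for what the triple-pass does, here’s a minimal sketch. retrieve_vault(), chat() and the prompt wording are hypothetical stand-ins, not the router’s actual internals; the shape of it (Vault-only facts in, critic pass in the middle, forced refusal when nothing relevant comes back) is the point.

```python
# Minimal sketch of a thinker -> critic -> thinker pass over Vault-only context.
# retrieve_vault(), chat() and the prompts are illustrative stand-ins.
def mentats(question: str, retrieve_vault, chat) -> str:
    hits = retrieve_vault(question)    # Qdrant hits only: no chat history, no filesystem KBs
    if not hits:
        # Nothing relevant in the Vault => forced refusal, not free association
        return ("FINAL_ANSWER:\nThe provided facts do not contain relevant information.\n"
                "\nSources: Vault\nFACTS_USED: NONE")
    facts = "\n".join(hits)
    draft = chat(f"Answer ONLY from these facts:\n{facts}\n\nQ: {question}")           # thinker
    critique = chat("List every claim in this draft that the facts do not support.\n"
                    f"FACTS:\n{facts}\n\nDRAFT:\n{draft}")                             # critic
    return chat("Rewrite the draft, dropping every unsupported claim.\n"               # thinker
                f"FACTS:\n{facts}\n\nDRAFT:\n{draft}\n\nCRITIQUE:\n{critique}")
```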

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
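
To make the !! / ?? / CTC mechanics concrete, here’s a rough sketch of the idea. Field names, TTL, touch limits and eviction rules are invented for illustration; the real semantics may differ.

```python
# Rough sketch of the Vodka idea: verbatim JSON facts with TTL/touch limits,
# plus a hard context cap (CTC). Limits and eviction rules are made up here.
import json, time
from pathlib import Path

STORE = Path("vodka_facts.json")
TTL_SECONDS = 7 * 24 * 3600   # e.g. facts expire after a week
MAX_TOUCHES = 20              # e.g. facts retire after 20 recalls

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def store_fact(key: str, value: str) -> None:
    """!! — store the fact verbatim, no model in the loop."""
    facts = _load()
    facts[key] = {"value": value, "created": time.time(), "touches": 0}
    STORE.write_text(json.dumps(facts, indent=2))

def recall_fact(key: str) -> str | None:
    """?? — recall verbatim; evict if the TTL or touch budget is exhausted."""
    facts = _load()
    entry = facts.get(key)
    if entry is None:
        return None
    dead = (time.time() - entry["created"] > TTL_SECONDS) or (entry["touches"] >= MAX_TOUCHES)
    if dead:
        facts.pop(key)
    else:
        entry["touches"] += 1
    STORE.write_text(json.dumps(facts, indent=2))
    return None if dead else entry["value"]

def cut_the_crap(messages: list[dict], last_n: int = 12, char_cap: int = 8000) -> list[dict]:
    """CTC-style trim: keep the last N messages, newest first, within a char budget."""
    kept, used = [], 0
    for msg in reversed(messages[-last_n:]):
        used += len(msg.get("content", ""))
        if used > char_cap:
            break
        kept.append(msg)
    return list(reversed(kept))
```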


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

  • rozodru@piefed.social · 3 days ago

    soooo if it doesn’t know something it won’t say anything, and if it does know something it’ll show sources… so essentially, if you plug this into Claude, it’s just never going to say anything to you ever again?

    neat.

      • rozodru@piefed.social · 3 days ago

        don’t get me wrong, I love what you’ve built and it IS something that is sorely needed. I just find it funny that because of this you’ve pretty much made something like Claude just completely shut up. You’ve pretty much shown off the extremely sad state of Anthropic.

        • SuspciousCarrot78@lemmy.world (OP) · 3 days ago

          I haven’t tried wiring it up to Claude, that might be fun.

          Claude has done alright by me :) Swears a lot, helps me fix code (honestly, I have no idea where he gets that from… :P). Expensive tho.

          Now ChatGPT… well… Gippity being Gippity is the reason llama-conductor exists in the first place.

          Anyway, I just added some OCR stuff into the router. So now, you can drop in a screenshot and get it to mull over that, or extract text directly from images etc.

          I have a few other little side-cars I’m thinking of adding over the next few months, based on what folks here have mentioned:

          !!LIST command (list all stored Vodka memories)

          !! FLUSH (flush rolling chat summary)

          >> RAW (keep all the router mechanics but remove presentation/polish prompts and just raw-dog it)

          >> JSON Schema + Validity Verifier

          >> CALC (math, unit conversion, percentages, timestamps, sizes etc)

          >> FIND (Pulls IPs, emails, URLs, hashes, IDs, etc from documents and returns exact structured output)

          I’m open to other suggestions / ideas.

          PS: It’s astonishing to me (and I built it!) just how FAST .py commands run. Basically instantaneous. So, I’m all for adding a few more “useful” cheat-codes like this.

  • floquant@lemmy.dbzer0.com · 6 days ago

    Holy shit I’m glad to be on the autistic side of the internet.

    Thank you for proving that fucking JSON text files are all you need and not “just a couple billion more parameters bro”

    Awesome work, all the kudos.

  • recklessengagement@lemmy.world · 6 days ago

    I strongly feel that the best way to improve the useability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

    Thank you for this. I will test it on my local install this weekend.

  • termaxima@slrpnk.net · 6 days ago

    Hallucination is mathematically proven to be unsolvable with LLMs. I don’t deny this may have drastically reduced it, or not, I have no idea.

    But hallucinations will just always be there as long as we use LLMs.

    • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

      Agree-ish

      Hallucination is inherent to unconstrained generative models: if you ask them to fill gaps, they will. I don’t know how to “solve” that at the model level.

      What you can do is make “I don’t know” an enforced output, via constraints outside the model.

      My claim isn’t “LLMs won’t hallucinate.” It’s “the system won’t silently propagate hallucinations.” Grounding + refusal + provenance live outside the LLM, so the failure mode becomes “no supported answer” instead of “confident, slick lies.”

      So yeah: generation will always be fuzzy. Workflow-level determinism doesn’t have to be.

      I tried yelling, shouting, and even percussive maintenance but the stochastic parrot still insisted “gottle of geer” was the correct response.

  • ThirdConsul@lemmy.zip · 7 days ago

    I want to believe you, but that would mean you solved hallucination.

    Either:

    A) you’re lying

    B) you’re wrong

    C) KB is very small

    • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

      D) None of the above.

      I didn’t “solve hallucination”. I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.

      If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”

      The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That’s it. That’s the whole trick.

      KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.

      That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”

      Again, that’s the whole trick.

      You don’t need to believe me. In fact, please don’t. Test it.

      I could be wrong…but if I’m right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn’t suck balls as much as you think it might.

      Maybe it’s even useful to you.

      I dunno. Try it?

        • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

          Parts of this are RAG, sure

          RAG parts:

          • Vault / Mentats is classic retrieval + generation.
          • Vector store = Qdrant
          • Embedding and reranker

          So yes, that layer is RAG with extra steps.

          What’s not RAG -

          KB mode (filesystem SUMM path)

          This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.

          If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.

          Vodka (facts memory)

          That’s not retrieval at all, in the LLM sense. It’s verbatim key-value recall.

          • JSON on disk
          • Exact store (!!)
          • Exact recall (??)

          Again, no embeddings, no similarity search, no model interpretation.

          “Facts that aren’t RAG”

          In my setup, they land in one of two buckets.

          1. Short-term / user facts → Vodka. That’s for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.

          2. Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.

          In response to the implicit “why not just RAG then”

          Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.

          The extra “steps” are there to separate memory from knowledge, separate retrieval from synthesis, and make refusal a legal output, not a model choice.

          So yeah: some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don’t trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that’s a weird way to operate maybe (adversarial, assume the worst, engineer around the issue) but that’s how ASD brains work.

          • ThirdConsul@lemmy.zip · 6 days ago

            The system summarizes and hashes docs. The model can only answer from those summaries in that mode

            Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

            • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

              Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

              Huh? That is the literal opposite of what I said. Like, diametrically opposite.

              Let me try this a different way.

              Hallucination in SUMM doesn’t “poison” the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:

              1. SUMM is not a “source of truth”

              The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:

              a) trace it back to the exact document version
              b) regenerate it
              c) discard it
              d) read the original doc yourself and manually curate it.

              Nothing is “silently accepted” as ground truth.

              2. Promotion is manual, not automatic

              The dangerous step would be: model output -> auto-ingest into long-term knowledge.

              That’s explicitly not how this works.

              The flow is: Attach KB -> SUMM -> human reviews -> OK, move to Vault -> Mentats runs against that.

              Don’t like a SUMM? Don’t push it into the vault. There’s a gate between “model said a thing” and “system treats this as curated knowledge.” That’s you - the human. Don’t GI and it won’t GO.

              Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries:

              • reproducible
              • inspectable
              • fixable

              Which is the opposite of silent drift.

              If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.

              That’s a much easier class of bug to detect and correct. Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version”.

              And that is ultimately what keeps the pipeline from becoming “poisoned”.
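
              As a concrete illustration of “trace it back to the exact document version”: a hypothetical staleness check, assuming the SUMM header layout sketched in the post above (source name + SHA-256 in a small front-matter block). Field names are made up; this is not the router’s actual code.

              ```python
              # Hypothetical check: does a SUMM still match the source file it was derived from?
              # Assumes a front-matter header with "source:" and "source_sha256:" fields.
              import hashlib, re
              from pathlib import Path

              def summ_is_stale(summ_path: Path, source_dir: Path) -> bool:
                  header = summ_path.read_text(encoding="utf-8")
                  src_name = re.search(r"^source:\s*(.+)$", header, re.M).group(1).strip()
                  src_hash = re.search(r"^source_sha256:\s*([0-9a-f]{64})$", header, re.M).group(1)
                  current = hashlib.sha256((source_dir / src_name).read_bytes()).hexdigest()
                  return current != src_hash   # True => regenerate, discard, or re-curate the SUMM
              ```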

              • ThirdConsul@lemmy.zip · 6 days ago

                Huh? That is the literal opposite of what I said. Like, diametrically opposite.

                The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step.

                No, that’s exactly what you wrote.

                Now, with this change

                SUMM -> human reviews

                That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.

                Case in point: assume a Person model with 3-7 facts per Person. Assume small 3000 size set of Persons. How would the SUMM of work? Do you expect a human to verify that SUMM? How are you going to converse with your system to get the data from that KB Person set? Because to me that sounds like case C, only works for small KBs.

                Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version”.

                Fair. Except that you are still left with the original problem: you don’t know WHEN the information is incorrect if you missed it at SUMM time.

                • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

                  Replying to specific points:

                  “SUMM -> human reviews That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.”

                  Correct: filesystem SUMM + human review is intentionally for small/curated KBs, not “review 3,000 entities.” The point of SUMM is curation, not bulk ingestion at scale. If the KB is so large that summaries become exhaustive, that dataset is in the wrong layer.

                  “Case in point: assume a Person model with 3-7 facts per Person. Assume small 3000 size set of Persons. How would the SUMM of work?”

                  Poorly. It shouldn’t work via filesystem SUMM. A “Person table” is structured data; SUMM is for documents. For 3,000 people × (3–7 facts), you’d put that in a structured store (SQLite/CSV/JSONL/whatever) and query it via a non-LLM tool (exact lookup/filter) or via Vault retrieval if you insist on LLM synthesis on top.

                  “Do you expect a human to verify that SUMM?”

                  No - not for that use case. Human verification is realistic when you’re curating dozens/hundreds of docs, not thousands of structured records. For 3,000 persons, verification is done by data validation rules (schema, constraints, unit tests, diff checks), not reading summaries.

                  “How are you going to converse with your system to get the data from that KB Person set?”

                  Not by attaching a folder and “asking the model nicely.” You’d do one of these:

                  • Exact tool lookup: person(“Alice”) -> facts, or search by ID/name, return rows deterministically.
                  • Hybrid: tool lookup returns the relevant rows, then the LLM formats/summarizes them.
                  • Vault retrieval: embed/chunk rows and retrieve top-k, but that’s still weaker than exact lookup for structured “Person facts.”

                  So: conversation is fine as UX, but the retrieval step should be tool-based (exact) for that dataset.
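
                  To make “exact tool lookup” concrete, here’s a rough sketch of what a >>find-style pass over a structured Person store could look like. That command doesn’t exist yet; the JSONL layout and function name are invented for illustration.

                  ```python
                  # Sketch of an exact, deterministic lookup over a structured "Person" store
                  # (JSONL here; could just as easily be SQLite). No embeddings, no model in the loop.
                  import json
                  from pathlib import Path

                  def find_person(store: Path, name: str) -> list[dict]:
                      hits = []
                      with store.open(encoding="utf-8") as fh:
                          for line in fh:
                              row = json.loads(line)
                              if row.get("name", "").lower() == name.lower():
                                  hits.append(row)   # exact rows, returned verbatim
                      return hits

                  # Usage: rows = find_person(Path("persons.jsonl"), "Alice")
                  # An LLM can then *format* those rows, but the retrieval itself stays exact.
                  ```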

                  But actually, you give me a good idea here. It wouldn’t be the work of ages to build a >>look or >>find function into this thing. Maybe I will.

                  My mental model for this was always “1 person, 1 box, personal scale” but maybe I need to think bigger. Then again, scope creep is a cruel bitch.

                  “Because to me that sounds like case C, only works for small KBs.”

                  For filesystem SUMM + human review: yes. That’s the design. It’s a personal, “curate your sources” workflow, not an enterprise entity store.

                  This was never designed to be a multi-tenant lookup system. I don’t know how to build that and still keep it 1) small, 2) potato-friendly, and 3) account for ALL the moving-part nightmares that brings.

                  What I built is STRICTLY for personal use, not enterprise use.

                  “Fair. Except that you are still left with the original problem: you don’t know WHEN the information is incorrect if you missed it at SUMM time.”

                  Sort of. Summarization via LLM was always going to be a lossy proposition. What this system changes is the failure mode:

                  • Without this: errors can get injected and later you can’t tell where they came from.
                  • With this: if a SUMM is wrong, it is pinned to a specific source file hash + summary hash, and you can fix it by re-summarizing or replacing the source.

                  In other words: it doesn’t guarantee correctness; it guarantees traceability and non-silent drift. You still need to “trust but verify”.

                  TL;DR:

                  You don’t query big, structured datasets (like 3,000 “Person” records) via SUMM at all. You use exact tools/lookup first (DB/JSON/CSV), then let the LLM format or explain the result. That can probably be added reasonably quickly, because I tried to build something that future me wouldn’t hate past me for. We’ll see if he/I succeeded.

                  SUMM is for curated documents, not tables. I can try adding a >>find, >>grep or similar tool (the system is modular, so I should be able to accommodate a few things like that, but I don’t want to end up with 1,500 “micro tools” and hating my life).

                  And yeah, you can still miss errors at SUMM time - the system doesn’t guarantee correctness. That’s on you. Sorry.

                  What it guarantees is traceability: every answer is tied to a specific source + hash, so when something’s wrong, you can see where it came from and fix it instead of having silent drift. That’s the “glass box, not black box” part of the build.

                  Sorry - really. This is the best I could figure out for caging the stochastic parrot. I built this while I was physically incapacitated and confined to bed rest, shooting the shit with Gippity all day. Built it for myself and then thought “hmm, this might help someone else too. I can’t be the only one that’s noticed this problem”.

                  If you or anyone else has a better idea, I’m willing to consider it.

            • PolarKraken@lemmy.dbzer0.com · 6 days ago

              Woof, after reading your “contributions” here, are you this fucking insufferable IRL or do you keep it behind a keyboard?

              Goddamn. I’m assuming you work in tech in some capacity? Shout-out to anyone unlucky enough to white-knuckle through a workday with you, avoiding an HR incident would be a legitimate challenge, holy fuck.

    • Kobuster@feddit.dk · 7 days ago

      Hallucination isn’t nearly as big a problem as it used to be. Newer models aren’t perfect but they’re better.

      The problem addressed by this isn’t hallucination, it’s the training to avoid failure states. Instead of guessing (different from hallucination), the system forces a negative response. That’s easy, and any company, big or small, could do it; big companies just like the bullshit.

    • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

      I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.

      So, the claim I’m making is: I made bullshit visible and bounded.

      The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I’m solving for is “LLMs get things wrong in ways that are opaque and untraceable”.

      That’s solvable. That’s what hashes get you. Attribution, clear fail states and auditability. YOU still have to check sources if you care about correctness.

      The difference is - YOU are no longer checking a moving target or a black box. You’re checking a frozen, reproducible input.

      That’s… not how any of this works…

      Please don’t teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you’re out, I do mean three strikes and you’re out. Quants ain’t quants, and models ain’t models. I am very particular about what I run, how I run it and what I tolerate.

      • nagaram@startrek.website · 6 days ago

        I think you missed the guy this is targeted at.

        Worry not though. I get it. There isn’t a lot of nuance in the AI discussion anymore and the anti-AI people are quite rude these days about anything AI at all.

        You did good work homie!

  • SuspciousCarrot78@lemmy.world (OP) · 7 days ago

    Responding to my own top post like a FB boomer: May I make one request?

    If you found this little curio interesting at all, please share it in the places you go.

    And especially, if you’re on Reddit, where normies go.

    I used to post heavily on there, but then Reddit did a reddit and I’m done with it.

    https://lemmy.world/post/41398418/21528414

    Much as I love Lemmy and HN, they’re not exactly normcore, and I’d like to put this into the hands of more people :)

    PS: I’m thinking of taking some of the questions you all asked me here (de-identified) and writing a “Q&A_with_drBobbyLLM.md” and sticking it on the repo. It might explain some common concerns.

    And, If nothing else, it might be mildly amusing.

  • Zexks@lemmy.world · 6 days ago

    This is awesome. I’ve been working on something similar. You’re not likely to get much that’s useful from here though. Anything AI is by default bad here.

  • pineapple@lemmy.ml · 6 days ago

    This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

    Likely-accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.

    • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

      I hope it does what I claim it does for you. Choose a good LLM model. Not one of the sex-chat ones. Or maybe exactly one of those. For, uh… research.

  • Murdoc@sh.itjust.works · 7 days ago

    I wouldn’t know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍

    (I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I’m sure that the ADHD would keep me from finishing to go work on something else. 🙃)

  • Domi@lemmy.secnd.me · 7 days ago

    I have a Strix Halo machine with 128GB VRAM so I’m definitely going to give this a try with gpt-oss-120b this weekend.

  • 7toed@midwest.social · 7 days ago

    Okay, pardon the double comment, but I now have no choice but to set this up after reading your explanations. Doing what TRILLIONS of dollars hasn’t cooked up yet… I hope you’re ready, by whatever means you deem, for when someone else “invents” this.

    • SuspciousCarrot78@lemmy.world (OP) · 7 days ago

      It’s copyLEFT (AGPL-3.0 license). That means free to share, copy, modify… but you can’t roll a closed-source version of it and sell it for profit.

      In any case, I didn’t build this to get rich (fuck! I knew I forgot something).

      I built this to try to unfuck the situation / help people like me.

      I don’t want anything for it. Just maybe a fist bump and an occasional “thanks dude. This shit works amazing”

  • Pudutr0n@lemmy.world · 7 days ago

    re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

    • SuspciousCarrot78@lemmy.world (OP) · 6 days ago

      re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

      Yep, good question. You can do that, it’s not wrong. If your KB is small + your question is basically “find me the paragraph that contains X,” then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.

      But the reason I put an LLM in the loop is: retrieval isn’t the hard part. Synthesis + constraint are. What an LLM is doing in KB mode (basically) is this:

      1. Turns the question into an extraction task. Instead of “search keywords,” it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”

      2. Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer while staying source-locked (and refusing fragments that don’t contain the needed fact).

      3. Finally: it has “structured refusal” baked in. IOW, the whole point is that the LLM is forced to say “here are the facts I saw, and this is what I can’t answer from those facts”.

      TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit “missing list”.
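
      For illustration, here’s a minimal sketch of that “structured refusal” contract. The prompt wording and the chat() call are hypothetical stand-ins, not the router’s actual prompts; the point is that refusal is a first-class, enumerated output rather than something the model chooses.

      ```python
      # Sketch of a KB-mode "structured refusal" contract: answer only from supplied
      # snippets, and make "insufficient evidence" an explicit, legal output.
      KB_CONTRACT = """Answer ONLY from the snippets below.
      Every claim must be directly supported by a snippet.
      If the snippets do not contain the answer, reply exactly:
      INSUFFICIENT EVIDENCE
      and then list what information is missing.

      SNIPPETS:
      {snippets}

      QUESTION: {question}
      """

      def kb_answer(question: str, snippets: list[str], chat) -> str:
          if not snippets:   # nothing attached => forced refusal, no free association
              return "INSUFFICIENT EVIDENCE\nMissing: no KB snippets were attached."
          return chat(KB_CONTRACT.format(snippets="\n---\n".join(snippets), question=question))
      ```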

      For pure retrieval: yeah - search. In fact, maybe I should bake in a >>grep or >>find command. That would be the right trick for “show me the passage”, not “answer the question”.

      I hope that makes sense?