Huh? That is the literal opposite of what I said. Like, diametrically opposite.
The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step.
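To be concrete about what I mean (a rough sketch, not the actual code; the function and field names are mine):

```python
# Docs get summarized once, source and summary are content-hashed,
# and in "summary-only" mode the model is handed nothing but the
# reviewed summaries -- no embedding search, no semantic retrieval.
import hashlib

def sha(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def ingest(doc_id: str, doc_text: str, summarize, store: dict) -> None:
    summary = summarize(doc_text)        # summarizer is whatever you trust (LLM, human, both)
    store[doc_id] = {
        "source_hash": sha(doc_text),    # pins the exact doc version
        "summary_hash": sha(summary),
        "summary": summary,
    }

def answer(question: str, store: dict, model) -> str:
    # No retrieval step: every answer is generated from the summaries alone.
    context = "\n\n".join(entry["summary"] for entry in store.values())
    return model(f"Answer only from these summaries:\n{context}\n\nQ: {question}")
```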
No, that’s exactly what you wrote.
Now, with this change
SUMM -> human reviews
That would fix it, but it only works for small KBs; otherwise the summary would have to be exhaustive.
Case in point: assume a Person model with 3-7 facts per Person, and a small set of 3,000 Persons. How would the SUMM of that work? Do you expect a human to verify that SUMM? How are you going to converse with your system to get data out of that Person KB? Because to me that sounds like case C: only works for small KBs.
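Rough math (the per-fact token count is my guess, the rest is the numbers above):

```python
# 3,000 Persons at 3-7 facts each is 9k-21k facts a reviewer has to sign off on.
persons = 3000
facts_low, facts_high = persons * 3, persons * 7
print(facts_low, facts_high)                 # 9000 21000

tokens_per_fact = 15                         # assumed average
print(facts_low * tokens_per_fact, facts_high * tokens_per_fact)
# ~135k-315k tokens of "summary" -- at that point it's not a summary,
# it's the KB itself, and nobody is reviewing it by hand.
```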
Again: the proposition is not "the model will never hallucinate." It's "it can't silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version."
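Roughly what I mean by the trace (the record shapes and names here are made up, just to show the idea):

```python
# Each answer records which summary hashes it was generated from, so a
# bad claim resolves to a specific reviewed summary and source version.
answers_log = [
    {"answer": "Alice joined in 2019.",
     "summaries": [{"doc_id": "person/alice",
                    "summary_hash": "ab12...",
                    "source_hash": "cd34..."}]},
]

def trace(claimed_fact: str):
    for entry in answers_log:
        if claimed_fact in entry["answer"]:
            for s in entry["summaries"]:
                yield s["doc_id"], s["summary_hash"], s["source_hash"]

print(list(trace("joined in 2019")))
# Either the bad fact isn't in any reviewed summary (the model invented it
# and the human gate should have caught it), or it is, and you know exactly
# which doc version to go fix.
```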
Fair. Except that you're still left with the original problem: you don't know WHEN the information is incorrect if you missed it at SUMM time.

Ummmm… According to YT, about 50% of new videos are AI-generated (20% AI slop, 30% AI brainrot, don't ask me what's the difference), and some industry analysts expect that to grow to 90%+.