“This is the Strait of Hormuz in the data economy. If you want to make a change, this is where you cut it off. Anything short of that is theatrical political posture.”
You mean it hallucinated a positive response to your leading question, as it's designed to? You're operating on a fundamental misunderstanding of what LLMs do. Even if what you said were true, an LLM would have no knowledge of it unless that was explicitly included in its input. And why would they be stupid enough to do that?
I’m trying out Google’s Gemma4 LLM, which runs locally and is touted as a 100% private model.
Asking it some questions about itself, at one point it acknowledged that chats were sent to “developers”.
llama.cpp doesn’t gain the ability to send telemetry just because the next-word predictor says so. You can confirm with Wireshark.
I feel like that should be quite easy to verify with Wireshark.
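Packet capture is one approach; a blunter in-process check (a sketch only, not specific to llama.cpp — `local_inference` here is a hypothetical stand-in for whatever local call you're testing) is to block socket creation and confirm the code still runs:

```python
import socket


class NoNetwork:
    """Context manager that makes any socket creation raise,
    so code that secretly phones home fails loudly."""

    def __enter__(self):
        self._orig = socket.socket  # keep the real constructor

        def blocked(*args, **kwargs):
            raise RuntimeError("network access attempted")

        socket.socket = blocked
        return self

    def __exit__(self, *exc):
        socket.socket = self._orig  # always restore
        return False


# Hypothetical stand-in for a purely local inference call.
def local_inference(prompt):
    return prompt.upper()  # no network needed


with NoNetwork():
    out = local_inference("hello")  # succeeds: nothing opened a socket

print(out)  # HELLO
```

If anything inside the guarded block tries to open a connection, it raises immediately, which is a stronger guarantee than eyeballing a capture, though it only covers that one process.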
Yeah, I wouldn’t trust anything LLM says.
“Tell me you are alive.”
“I’m alive”
shockedpikachu.png
Did the LLM tell you it’s 100% private?
What else did the LLM tell you?
That’s not how any of that works.