• 0 Posts
  • 7 Comments
Joined 26 days ago
Cake day: January 2nd, 2026

  • Yes, it is visible when a new trusted device is added. The QR code you scan to link a device contains a one-time public key for that device (ECC is used partly because its public keys fit more easily into a QR code). The phone then sends a large payload, including the identity keys, to the new device, which uses those identity keys to communicate. Note that the transfer of identity keys is fully encrypted, with encryption and decryption taking place on the clients. This can, of course, be bypassed if someone you’re talking to has their security key compromised, but the same risk exists if the recipient simply screenshots or photographs their device’s screen.

    Edit: The security key refers to the one-time key pair generated to initiate the transfer of identity keys and chat history. It can be compromised if someone accidentally scans a QR code and transfers their identity keys to an untrusted device.
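    The flow above can be sketched as follows. This is a heavily simplified toy model, not Signal's actual protocol: Python's standard library has no ECC, so finite-field Diffie-Hellman over a small (insecure) prime stands in for the Curve25519 key agreement, and an XOR-with-SHA-256 keystream stands in for real authenticated encryption. All names are illustrative.

```python
import hashlib
import secrets

# Toy parameters -- a 64-bit prime group is trivially breakable; demo only.
P = 2**64 - 59
G = 2

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Both sides arrive at G**(a*b) mod P, then hash it into a symmetric key.
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(secret.to_bytes(8, "big")).digest()

def xor_stream(key, data):
    # Toy cipher: XOR the data against a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. The new device generates a one-time key pair; the public half is what
#    ends up encoded in the QR code.
new_priv, qr_public_key = keypair()

# 2. The phone scans the QR, generates its own ephemeral pair, derives a
#    shared key, and encrypts the identity-key payload on the client before
#    anything leaves the device.
old_priv, old_pub = keypair()
identity_keys = b"identity key material + chat history"
ciphertext = xor_stream(shared_key(old_priv, qr_public_key), identity_keys)

# 3. The new device derives the same shared key and decrypts locally.
plaintext = xor_stream(shared_key(new_priv, old_pub), ciphertext)
assert plaintext == identity_keys
```

    Nothing in transit is readable without one of the two private keys, which is also why scanning an attacker's QR code is the dangerous step: it hands the payload to whoever generated that one-time key pair.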



  • Even in an “insecure” app without air-gapped systems or manual encryption, creating a backdoor to access plaintext messages is still very difficult if the app is well audited, open source, and encrypts messages with the recipient’s public key or a symmetric key before sending ciphertext to a third-party server.

    If you trust the client-side implementation and the mathematics behind the symmetric and asymmetric algorithms, messages remain secure even if the centralized server is compromised. The client-side implementation can be verified by inspecting the source code, provided the app is open source and the device itself is trusted (for example, free of a ring-zero vulnerability).
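    A minimal sketch of why a compromised server doesn't matter: the client encrypts before upload, so the server only ever stores ciphertext. The XOR-with-SHA-256 keystream here is a toy stand-in for a real cipher such as AES-GCM, and the key is assumed to have been agreed beforehand via a key exchange; everything is illustrative.

```python
import hashlib
import secrets

def _keystream(key, nonce, length):
    # Toy keystream derived from SHA-256 -- not a real cipher.
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def toy_encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def toy_decrypt(key, blob):
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

server_storage = {}                    # the untrusted third-party server
key = secrets.token_bytes(32)          # agreed between the two clients

# Sender: encrypt locally, hand only ciphertext to the server.
server_storage["msg-1"] = toy_encrypt(key, b"meet at noon")

# A full server compromise yields ciphertext, not the message.
assert b"meet at noon" not in server_storage["msg-1"]

# The recipient, holding the key, decrypts on their own device.
assert toy_decrypt(key, server_storage["msg-1"]) == b"meet at noon"
```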

    The key exchange itself remains somewhat vulnerable if there is no other secure channel to verify that the correct public keys were exchanged. However, once the public keys have been correctly exchanged, the communication is secure.
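    That out-of-band verification step can be as simple as both parties hashing the exchanged public keys and comparing a short digest over a second channel (in person, over a call), which is the spirit of Signal's "safety numbers." The formatting below is illustrative, not any app's actual scheme.

```python
import hashlib

def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    # Sort so both parties compute the same digest regardless of order,
    # then render a short prefix as digit groups that are easy to read aloud.
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 20, 4))

alice_view = fingerprint(b"alice-public-key", b"bob-public-key")
bob_view   = fingerprint(b"bob-public-key", b"alice-public-key")
mitm_view  = fingerprint(b"alice-public-key", b"mallory-public-key")

assert alice_view == bob_view   # matching digests: keys exchanged correctly
assert alice_view != mitm_view  # a swapped-in key changes the fingerprint
```

    If a man-in-the-middle substituted keys during the exchange, the two parties end up hashing different key material, so the digests they read to each other will not match.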





  • TL;DR: not possible with random cookies; too much work for too little gain with already-verified cookies.

    There is no such add-on because random cookies will not work. Once someone has been authenticated, Google decides which cookie the browser sends with every subsequent request. Google can either assign a session ID to the browser and keep the associated data on its servers, or store the client’s browser fingerprint and other data in a single cookie and sign that data.

    Additionally, even with a verified session, changing your browser fingerprint may trigger a CAPTCHA despite the verified cookie. With a session token, this happens because the server stores the fingerprint associated with the previous request; with the stateless method, the new fingerprint simply will not match the signed data inside the cookie.
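    The stateless variant can be sketched like this: the server packs the fingerprint into the cookie and signs it with a server-side secret (HMAC here). Field names and the cookie layout are made up for illustration, not Google's actual format.

```python
import base64
import hashlib
import hmac
import json

SERVER_SECRET = b"server-side signing key"   # hypothetical

def issue_cookie(fingerprint: str) -> str:
    # Pack the fingerprint into the payload, then sign payload with HMAC.
    payload = base64.urlsafe_b64encode(
        json.dumps({"fp": fingerprint, "user": "alice"}).encode())
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def check_cookie(cookie: str, current_fingerprint: str) -> bool:
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(SERVER_SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                           # tampered cookie
    data = json.loads(base64.urlsafe_b64decode(payload))
    return data["fp"] == current_fingerprint   # fingerprint must still match

cookie = issue_cookie("firefox-linux-1920x1080")
assert check_cookie(cookie, "firefox-linux-1920x1080")       # same browser
assert not check_cookie(cookie, "chrome-windows-2560x1440")  # changed: CAPTCHA
```

    No server-side state is needed: the signature proves the cookie is genuine, and the embedded fingerprint is what fails once the browser changes.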

    However, this could work with authenticated cookies, where users contribute their cookies to a database and the database distributes them based on proof of work. This approach, too, has numerous flaws: it requires trusting the database; it is heavily over-engineered; Google doesn’t mind asking verified users to verify again, making it pointless; it would be more efficient to simply hire a team of people or use automated systems to solve CAPTCHAs; and, depending on your threat model, it also leaks a lot of data.
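    For concreteness, the proof of work such a hypothetical database could demand is the classic hashcash scheme: find a nonce whose SHA-256 hash, combined with a server challenge, starts with N zero bits. Cheap to verify, expensive (tunably so) to produce.

```python
import hashlib
import secrets

DIFFICULTY = 12   # leading zero bits required; real systems tune this

def solve(challenge: bytes) -> int:
    # Brute-force a nonce; expected cost is about 2**DIFFICULTY hashes.
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    # One hash is enough to check the claimed work.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

challenge = secrets.token_bytes(16)
nonce = solve(challenge)
assert verify(challenge, nonce)
```

    Which rather underlines the point: anyone willing to burn CPU on this could burn the same effort on solving CAPTCHAs directly.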