Many people who focus on information security, including myself, have long considered Telegram suspicious and untrustworthy. Now, based on findings published by the investigative journalism outlet IStories (original in Russian; English version by OCCRP available here), and my own analysis, described below, of packet captures from Telegram for Android and of Telegram’s protocol, I consider Telegram to be indistinguishable from a surveillance honeypot.
This blogpost has been improved based on fedi discussions around it; you can find the Polish thread here, and the English thread here. I appreciate all this input!
I was recently asked by the Panoptykon Foundation whether it is possible to create an online age verification system that would not be a privacy nightmare. I highly recommend reading their piece, which dives into several issues around age verification.
I replied that yes, under certain assumptions, it is possible, and provided a rough sketch of such a system.
But before we dive into it, I have to be clear: I am not a fan of introducing online age verification systems. Privacy is just one of the many issues they raise; I get into some of the others later in this post.
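To give a flavor of what “under certain assumptions” can mean in practice, here is a minimal sketch of one well-known pattern: a trusted verifier, which already knows who you are, signs an anonymous “over 18” token, and the website checks the signature without ever learning your identity. This is an illustration of the general idea only; every name in it is hypothetical, and it deliberately omits the blinding step a complete design would need.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# --- The age verifier (a party that already knows who you are, e.g. a bank) ---
verifier_key = ed25519.Ed25519PrivateKey.generate()
verifier_pub = verifier_key.public_key()

def issue_token(user_is_over_18: bool) -> tuple[bytes, bytes] | None:
    """After checking the user's ID, sign a bare 'over 18' attestation
    bound to a fresh random token; the token itself carries no identity."""
    if not user_is_over_18:
        return None
    token = os.urandom(32)
    return token, verifier_key.sign(b"over-18:" + token)

# --- The website (never learns who you are) ---
def check_token(token: bytes, signature: bytes) -> bool:
    """All the site learns from a valid token is: 'the holder is over 18'."""
    try:
        verifier_pub.verify(signature, b"over-18:" + token)
        return True
    except InvalidSignature:
        return False

token, sig = issue_token(user_is_over_18=True)
assert check_token(token, sig)
# Caveat: in this naive version the verifier could collude with the site and
# link the random token back to the ID check. A real design would blind the
# token (blind signatures or zero-knowledge proofs) to break even that link.
```

The key design choice is the separation of duties: the verifier sees your identity but not your browsing, and the site sees your browsing but not your identity.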
Telegram is an internet messenger that is popular, especially in the East. It bills itself as “encrypted”, “private”, and “secure”. One of its creators (and the CEO of the company that operates the service), Pavel Durov, has for years suggested, more or less directly, that other internet messengers expose our conversations and endanger our privacy.
It’s a pretty crude, yet surprisingly effective strategy for distracting attention away from Telegram’s very real problems with security and privacy. And there are quite a few of those.
Automatically tagging or filtering child sexual exploitation materials (CSAM) cannot be effective and preserve privacy at the same time, regardless of what kind of tech one throws at it, because what is and what is not CSAM is highly dependent on context.
Literally the same photo, bit-by-bit identical, can be an innocent memento when sent between family members, and a case of CSAM when shared in a child porn group.
The information necessary to tell whether or not a file is CSAM is simply not present in the file being shared; no technical means can make that determination from the file alone. The current debate about filtering CSAM on end-to-end encrypted messaging services, like all previous such debates (and there have been many), mostly ignores this basic point.
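To make this concrete, here is a minimal sketch of a file-only detector, assuming the commonly proposed hash-matching approach (the hash database and names below are made up for illustration). Whatever the matching function, the verdict is computed from the bytes alone, so byte-identical files necessarily receive identical verdicts:

```python
import hashlib

# Hypothetical database of hashes of known flagged images
# (the hash value below is made up for illustration).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def scan(file_bytes: bytes) -> bool:
    """A file-only scanner: it flags a file iff its hash is in the database.
    It sees nothing but the bytes -- no sender, recipient, or conversation."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

photo = b"...the same JPEG bytes in both conversations..."

verdict_family = scan(photo)  # a parent sharing a photo with grandma
verdict_abuser = scan(photo)  # the identical bytes shared in an abuse group
assert verdict_family == verdict_abuser  # necessarily equal: context is invisible
```

Swapping SHA-256 for a perceptual hash or an on-device classifier changes the matching function, not the shape of the problem: the verdict remains a pure function of the file, and the context that actually distinguishes the two cases never enters the computation.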
Jessica Buxbaum investigates the partnership between X and an Israeli tech company run by former intelligence officials, highlighting its potential impact on surveillance, censorship, and digital rights.