WhatsApp “end-to-end encrypted” messages aren’t so private after all


WhatsApp logo
The security of the popular Facebook messaging application leaves several quite important devils in its details.

Yesterday, the independent newsroom ProPublica published a detailed piece examining the privacy claims of the popular WhatsApp messaging platform. The service famously offers "end-to-end encryption," which most users interpret to mean that Facebook, which has owned WhatsApp since 2014, can neither read their messages nor forward them to law enforcement.

This claim is contradicted by the simple fact that Facebook employs about 1,000 WhatsApp moderators whose entire job is, you guessed it, reviewing WhatsApp messages that have been flagged as "improper."

End-to-end encryption, but what is an “end”?

This snippet from WhatsApp's security and privacy page seems easy to misinterpret.

The loophole in WhatsApp's end-to-end encryption is simple: the recipient of any WhatsApp message can flag it. Once flagged, the message is copied on the recipient's device and sent as a separate message to Facebook for review.

Messages are generally flagged (and reviewed) for the same reasons they would be on Facebook itself, including reports of fraud, spam, child pornography, and other illegal activity. When the recipient of a message flags it for review, that message is batched with the four most recent prior messages in that thread and sent on to WhatsApp's review system as attachments to a ticket.
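Based on ProPublica's description, the client-side mechanics amount to something like the following conceptual sketch in Python. Every name here is hypothetical, since WhatsApp's actual client code is not public; the point is that the report is just another message sent by the app, so the conversation's encryption itself is never broken.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    plaintext: str  # already decrypted on the recipient's device

@dataclass
class AbuseReport:
    flagged: Message
    context: list  # up to four prior messages from the same thread

def build_report(thread, flagged_index):
    # Copy the reported message plus up to four preceding messages,
    # in plaintext, into a new outbound message addressed to Facebook.
    # No cryptography is defeated; the plaintext simply leaves via
    # one of the two "ends."
    start = max(0, flagged_index - 4)
    return AbuseReport(
        flagged=thread[flagged_index],
        context=thread[start:flagged_index],
    )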

Although nothing indicates that Facebook currently collects messages from users without manual intervention by the recipient, it's worth pointing out that there is no technical reason it could not do so. The security of "end-to-end" encryption depends on the endpoints themselves, and in the case of a mobile messaging application, that includes the app and its users.

An "end-to-end" encrypted messaging platform could choose to, for example, perform automated AI-based content scanning of all messages on a device, then automatically forward flagged messages to the platform's cloud for further action. Ultimately, privacy-focused users must rely on the platform's policies and trustworthiness just as heavily as they do on its technological bullet points.
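To make that concrete, here is a deliberately oversimplified sketch of what such hypothetical on-device scanning could look like. Nothing below describes WhatsApp's actual behavior; the classifier, threshold, and upload function are all invented for illustration.

def scan_and_forward(messages, classifier, upload, threshold=0.9):
    # Hypothetical endpoint-side scanning loop. The messages are
    # already decrypted on the device, so nothing cryptographic
    # needs to be broken for their contents to leave it.
    for msg in messages:
        if classifier(msg.plaintext) >= threshold:
            upload(msg)  # plaintext is forwarded to the platform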

Content moderation by any other name

Once a review ticket reaches WhatsApp's system, it is fed automatically into a "reactive" queue for human contract workers to assess. Artificial intelligence algorithms also feed tickets into "proactive" queues that process unencrypted metadata, including group names and profile pictures, the user's phone number, device fingerprints, related Facebook and Instagram accounts, and more.

WhatsApp's human reviewers process both types of queue (reactive and proactive) for reported or suspected policy violations. The reviewers have only three options for a ticket: ignore it, place the user account on "watch," or ban the user account entirely. (According to ProPublica, Facebook uses the limited set of actions as justification for saying that its reviewers do not "moderate content" on the platform.)
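As a rough mental model of the two-queue system, consider the sketch below. The routing logic and names are our own illustration based on ProPublica's reporting, not Facebook's code.

from enum import Enum, auto

class ReviewerAction(Enum):
    IGNORE = auto()  # take no action on the ticket
    WATCH = auto()   # place the user account on "watch"
    BAN = auto()     # ban the user account entirely

def route_ticket(ticket, reactive_queue, proactive_queue):
    # User reports land in the reactive queue; tickets raised by AI
    # scans of unencrypted metadata (group names, profile photos,
    # phone numbers, device fingerprints, linked Facebook and
    # Instagram accounts) land in a proactive queue.
    if ticket.source == "user_report":
        reactive_queue.append(ticket)
    else:
        proactive_queue.append(ticket)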

Although WhatsApp's moderators (excuse us, reviewers) have fewer options than their counterparts at Facebook or Instagram do, they face similar challenges and labor under similar obstacles. Accenture, the company Facebook contracts for moderation and review, hires workers who speak a variety of languages, but not all languages. When messages arrive in a language the moderators aren't familiar with, they must rely on Facebook's automatic language-translation tools.

"In the three years I've been there, it's always been horrible," one moderator told ProPublica. Facebook's translation tool offers little to no guidance on slang or local context, which is unsurprising given that the tool frequently has trouble even identifying the source language. A company selling razors may be mislabeled as "selling guns," while a bra manufacturer could be branded a "sexually oriented business."

WhatsApp's moderation standards can be as confusing as its machine-translation tools. For example, decisions about child pornography may require comparing the hip bones and pubic hair of a naked person to a medical index chart, while decisions about political violence may require guessing whether an apparently severed head in a video is real or fake.

Unsurprisingly, some WhatsApp users also weaponize the flagging system itself to attack other users. One moderator told ProPublica that "we had a couple of months where the AI was banning groups left and right" because users in Brazil and Mexico would change the name of a messaging group to something problematic and then report the message. "At worst," the moderator recalled, "we were probably getting tens of thousands of them. They discovered some words that the algorithm didn't like."

Unencrypted metadata

Although WhatsApp's "end-to-end" encryption of message contents can be subverted only via the sender's or recipient's own devices, a wealth of metadata associated with those messages is visible to Facebook, and to law enforcement or anyone else Facebook decides to share it with, with no such caveat.

ProPublica found more than a dozen instances of the Justice Department seeking WhatsApp metadata since 2017. These requests are known as "pen register orders," terminology dating from requests for connection metadata on landline telephone accounts. ProPublica correctly points out that this is an unknown fraction of the total requests in that period, since many such orders, and their results, are sealed by the courts.

Since pen register orders and their results are frequently sealed, it's also difficult to say exactly what metadata the company has turned over. Facebook refers to this data as "prospective message pairs" (PMPs), nomenclature given to ProPublica anonymously, which we were able to confirm in the announcement of a January 2020 course offered to employees of the Brazilian Department of Justice.

Although we don't know exactly what metadata is present in these PMPs, we do know it is highly valuable to law enforcement. In one particularly high-profile case from 2018, whistleblower and former Treasury Department official Natalie Edwards was convicted of leaking confidential banking reports to BuzzFeed via WhatsApp, which she mistakenly believed was "safe."

FBI Special Agent Emily Eckstut was able to detail that Edwards exchanged “approximately 70 messages” with a BuzzFeed reporter “between 12:33 am and 12:54 am” the day after the article was published; the data helped secure a conviction and a six-month prison sentence for conspiracy.
