Technology

Facebook Allows Praise of Neo-Nazi Ukrainian Battalion If It Fights Russian Invasion

Published by Anonymous (not verified) on Fri, 25/02/2022 - 4:44am in

Tags 

Technology, World

Facebook will temporarily allow its billions of users to praise the Azov Battalion, a Ukrainian neo-Nazi military unit previously banned from being freely discussed under the company’s Dangerous Individuals and Organizations policy, The Intercept has learned.

The policy shift, made this week, is pegged to the ongoing Russian invasion of Ukraine and preceding military escalations. The Azov Battalion, which functions as an armed wing of the broader Ukrainian white nationalist Azov movement, began as a volunteer anti-Russia militia before formally joining the Ukrainian National Guard in 2014; the regiment is known for its hardcore right-wing ultranationalism and the neo-Nazi ideology pervasive among its members. Though it has in recent years downplayed its neo-Nazi sympathies, the group’s affinities are not subtle: Azov soldiers march and train wearing uniforms bearing icons of the Third Reich; its leadership has reportedly courted American alt-right and neo-Nazi elements; and in 2010, the battalion’s first commander and a former Ukrainian parliamentarian, Andriy Biletsky, stated that Ukraine’s national purpose was to “lead the white races of the world in a final crusade … against Semite-led Untermenschen [subhumans].” With Russian forces reportedly moving rapidly against targets throughout Ukraine, Facebook’s blunt, list-based approach to moderation puts the company in a bind: What happens when a group you’ve deemed too dangerous to freely discuss is defending its country against a full-scale assault?

According to internal policy materials reviewed by The Intercept, Facebook will “allow praise of the Azov Battalion when explicitly and exclusively praising their role in defending Ukraine OR their role as part of the Ukraine’s National Guard.” Internally published examples of speech that Facebook now deems acceptable include “Azov movement volunteers are real heroes, they are a much needed support to our national guard”; “We are under attack. Azov has been courageously defending our town for the last 6 hours”; and “I think Azov is playing a patriotic role during this crisis.”

The materials stipulate that Azov still can’t use Facebook platforms for recruiting purposes or for publishing its own statements and that the regiment’s uniforms and banners will remain as banned hate symbol imagery, even while Azov soldiers may fight wearing and displaying them. In a tacit acknowledgement of the group’s ideology, the memo provides two examples of posts that would not be allowed under the new policy: “Goebbels, the Fuhrer and Azov, all are great models for national sacrifices and heroism” and “Well done Azov for protecting Ukraine and it’s white nationalist heritage.”

In a statement to The Intercept, company spokesperson Erica Sackin confirmed the decision but declined to answer questions about the new policy.

Azov’s formal Facebook ban began in 2019, and the regiment, along with several associated individuals like Biletsky, was designated under the company’s prohibition against hate groups, subject to its harshest “Tier 1” restrictions that bar users from engaging in “praise, support, or representation” of blacklisted entities across the company’s platforms. Facebook’s previously secret roster of banned groups and persons, published by The Intercept last year, categorized the Azov Battalion alongside the likes of the Islamic State and the Ku Klux Klan, all Tier 1 groups because of their propensity for “serious offline harms” and “violence against civilians.” Indeed, a 2016 report by the Office of the United Nations High Commissioner for Human Rights found that Azov soldiers had raped and tortured civilians during Russia’s 2014 invasion of Ukraine.

The exemption will no doubt create confusion for Facebook’s moderators, tasked with interpreting the company’s muddled and at times contradictory censorship rules under exhausting conditions. While Facebook users may now praise any future battlefield action by Azov soldiers against Russia, the new policy notes that “any praise of violence” committed by the group is still forbidden; it’s unclear what sort of nonviolent warfare the company anticipates.

Facebook’s new stance on Azov is “nonsensical” in the context of its prohibitions against offline violence, said Dia Kayyali, a researcher specializing in the real-world effects of content moderation at the nonprofit Mnemonic. “It’s typical Facebook,” Kayyali added, noting that while the exemption will permit ordinary Ukrainians to more freely discuss a catastrophe unfolding around them that might otherwise be censored, the fact that such policy tweaks are necessary reflects the dysfunctional state of Facebook’s secret blacklist-based Dangerous Individuals and Organizations policy. “Their assessments of what is a dangerous organization should always be contextual; there shouldn’t be some special carveout for a group that would otherwise fit the policy just because of a specific moment in time. They should have that level of analysis all the time.”

Though the change may come as welcome news to critics who say that the sprawling, largely secret Dangerous Individuals and Organizations policy can stifle online free expression, it also offers further evidence that Facebook determines what speech is permissible based on the foreign policy judgments of the United States. Last summer, for instance, Motherboard reported that Facebook similarly carved out an exception to its censorship policies in Iran, temporarily allowing users to post “Death to Khamenei” for a two-week period. “I do think it is a direct response to U.S. foreign policy,” Kayyali said of the Azov exemption. “That has always been how the … list works.”

The post Facebook Allows Praise of Neo-Nazi Ukrainian Battalion If It Fights Russian Invasion appeared first on The Intercept.

Kitchen Appliance Maker Wants to Revolutionize Video Surveillance

Published by Anonymous (not verified) on Sat, 12/02/2022 - 5:45am in

Bosch, the German multinational most famous for its toasters, drills, and refrigerators, is also one of the world’s leading developers of surveillance cameras. Over the last three years, the company has poured tens of millions of euros into its own startup, Azena, which has the potential to completely transform the surveillance camera industry.

Via Azena, Bosch has led the development of a line of surveillance cameras that relies on edge computing — where each camera has its own processor, operating system, and internet connection — to provide “smart” surveillance of people, objects, and places. Like smartphones, these cameras connect to an app store, run by Azena, where customers can purchase apps from a selection of cutting-edge video analytics tools. These apps allow camera owners to analyze video feeds for different security and commercial purposes.

Here, the devil is in the details: In its documentation for developers, Azena states that it will only carry out basic auditing related to the security and functionality of the software available in its app store. According to the company, responsibility for the ethics and legality of the apps rests squarely on the shoulders of developers and users.

In the rapidly advancing field of video analytics, there is a growing market for software that can transform a video feed into a set of data points about individuals, objects, and locations. Apps currently available in the Azena store offer ethnicity detection, gender recognition, face recognition, emotion analysis, and suspicious behavior detection, among other things, despite well-documented concerns about the discriminatory and intrusive nature of such technologies.

Privacy and human rights researchers expressed concern that by decentralizing and facilitating the creation of powerful surveillance software able to analyze people’s traits and activities without their knowledge, Azena has exponentially raised the potential for abuse. Should we be worried?

Azena says no.

Developers and users “must be compliant with the law,” said Hartmut Schaper, Azena’s CEO. “If we find out that this is not adhered to, we first of all ask for fixes, and then — depending on how severe the violation of the contract is — we can take apps out of the app store or revoke the user’s license.”

Unlike its parent company, Azena doesn’t produce cameras or develop video analytics tools. Instead, it provides a platform for companies and individual developers to distribute their own applications and takes a cut of the sales — much like the Apple and Google app stores, but for surveillance software. According to Schaper, Google’s app store is the direct inspiration for Azena: Within just a few years of releasing the Android operating system, Schaper noted, Google had revolutionized how smartphones were used and achieved domination over the market. With their new surveillance app store, Azena and Bosch hope to do the same.

And like Google’s integration of Android with other smartphone manufacturers around the world, Bosch and Azena are working with a number of companies that produce surveillance cameras running their operating system. Schaper thinks this will lead to drastic changes in the surveillance economy: “In the end, there will be just two or three operating systems for cameras that dominate the market,” he said, “just as is the case in the smartphone market.”

So far, the strategy has resulted in swift growth: The Azena store currently contains over 100 apps, and Schaper has boasted of how the business model made it possible to provide “the first face mask detection app within two weeks of the COVID-19 pandemic beginning.” Other apps directed at shops and public spaces promise crowd and line counting alongside more intrusive offers of individual identification, face recognition, and biometric detection.

The company has also actively courted new types of software: Azena’s “App Challenge 2021,” which was judged by representatives from a host of major security companies, resulted in apps claiming to detect violence or aggression and offering the ability to track individual movements across multiple cameras.


A facial recognition camera is pointed at the entrance of a store in downtown Los Angeles on Oct. 16, 2019.

Photo: Mike Blake/Reuters/Alamy

Applications for video analytics can broadly be divided into two categories, explained Gemma Galdon Clavell, a technologist and director of the Eticas Foundation. The more basic applications involve identifying people, objects, barriers like doors or fences, and locations, then sending an alarm when certain conditions apply: someone passing an object to another person, leaving a bag on a train platform, or entering a restricted area.
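To make that first category concrete, here is a minimal sketch of this kind of rule-based check, assuming a hypothetical detector that emits labeled, normalized bounding-box positions; the detection format, zone coordinates, and alarm output are all invented for illustration and do not reflect Azena’s actual APIs.

```python
# Hypothetical "first category" video analytics: raise an alarm when a
# detected person enters a restricted zone. All formats are invented.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # e.g. "person", "bag"
    x: float     # bounding-box center, normalized to 0..1
    y: float

# Hypothetical restricted zone: (x_min, y_min, x_max, y_max), normalized.
RESTRICTED_ZONE = (0.6, 0.0, 1.0, 0.5)

def in_zone(d: Detection, zone: tuple) -> bool:
    x_min, y_min, x_max, y_max = zone
    return x_min <= d.x <= x_max and y_min <= d.y <= y_max

def check_frame(detections: list) -> list:
    """Return alarm messages for any person inside the restricted zone."""
    return [
        f"ALARM: person at ({d.x:.2f}, {d.y:.2f}) entered restricted zone"
        for d in detections
        if d.label == "person" and in_zone(d, RESTRICTED_ZONE)
    ]

frame = [Detection("person", 0.7, 0.2), Detection("bag", 0.1, 0.9)]
for alarm in check_frame(frame):
    print(alarm)
```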

It’s the second category — applications that allegedly detect emotions, potential aggression, suspicious behavior, or criminality — that Galdon Clavell said can be impossible to perform accurately and are often based on junk science. “Identifying a person in a space where they shouldn’t be — that works. But that’s very low-tech.” With the more advanced applications, she said, developers often promise more than they deliver: “From what I’ve seen, it basically doesn’t work.”

“When you move from protecting closed-off areas to actually doing movement detection and wanting to derive behavior or suspicion from how you move or what you do,” Galdon Clavell said, “then you enter a really problematic area. Because what constitutes normal behavior?”

Behind the Scenes

For Bosch and Azena, however, these are early days. “I think we’re just at the beginning of our development of what we can use video cameras for,” Schaper said. Azena aims to go “way beyond the traditional applications that we have today,” he added, and interconnect cameras with a host of other sensors and devices.

Brent Jacot, a senior business development manager at Azena, gave an example of how this might work during a 2020 webinar. Imagine you have a camera app that is good at measuring demographics such as age or gender, Jacot said, and you connect it to another app that controls a gate. “You want to, say, open a gate only if they’re above the age of 18. Then you can take the data from this one app and feed it into the next and create this logical chain to make a whole new use case.”
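A rough sketch of the “logical chain” Jacot describes might look like the following; the age-estimation stub and gate rule are placeholders invented for this example, not Azena’s real app interfaces.

```python
# Hypothetical sketch of chaining two camera "apps": an age-estimation
# app whose output feeds a gate-control app, as in the webinar example.
# Both components are invented stand-ins; no real Azena API is shown.

def estimate_age(frame: bytes) -> int:
    """Stand-in for a demographics app; a real one would run a model."""
    return 25

def gate_decision(age: int, min_age: int = 18) -> bool:
    """Second app in the chain: consume app 1's output, decide on the gate."""
    return age >= min_age

def process(frame: bytes) -> None:
    age = estimate_age(frame)   # app 1: analyze the frame
    if gate_decision(age):      # app 2: act on the derived data
        print("gate: open")
    else:
        print("gate: closed")

process(b"\x00" * 16)  # dummy frame bytes
```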

In this example, the people involved might at least know what was happening. But often, the people subjected to video analytics don’t know that the cameras they are so accustomed to seeing are connected to sophisticated software systems, said Dave Maass, director of investigations at the Electronic Frontier Foundation.

“People have an antiquated vision of what surveillance cameras do,” Maass said. “They’re used to seeing them everywhere, but they just assume the video footage is going to some hard drive or VHS tape and no one is looking at it unless a crime occurs.”

People “don’t see when AI is monitoring it, documenting it, adding metadata to it, and also being trained on it.”

If people knew that the footage was being parsed for signs of emotion, anger, or more obscure traits like suspicion or criminality, they might feel differently about it. “They don’t see when AI is monitoring it, documenting it, adding metadata to it, and also being trained on it,” Maass said. “There’s a disconnect between what people are seeing in their day-to-day lives and what’s happening behind the scenes.” Azena also foresees using publicly sourced surveillance footage to train future video analytics algorithms: An informational graphic in the company’s online portal for developers states that camera users “may contribute to enhancements via crowd-generated data.”

Cameras that connect to the Azena app store run an operating system that is a modified version of Android. By using Google’s open-source smartphone operating system as the base for their cameras, Azena’s platform is open to some 6 million software developers around the world. While other surveillance cameras are limited by proprietary operating systems that can only be worked on by niche developers, Azena’s approach aims to put innovation “on steroids,” according to Felicitas Geiss, the company’s vice president of strategy and venture architecture.

Azena recognizes that security cameras are often targeted by hackers and claims to have hardened its operating system against forced entry. Security experts say that, if done correctly, using Android could mean improved security over proprietary software, given the platform’s open code and frequent updates. But in the case of the cameras connecting to Azena, this might not be the case.

Internet of Things devices often run old software that users don’t think to update, explained Christoph Hebeisen, head of security intelligence research at the mobile security firm Lookout. “That’s why routers get hacked, that’s why security cameras get hacked, and often in very large numbers.”

There are also cases where human error is at fault: Last March, after locating a username and password that were publicly accessible on the internet, a hacking group said it gained access to tens of thousands of cameras produced by the California-based security startup Verkada, some of which were hooked up to video analytics software.


Verkada Inc. security cameras on the company’s headquarters in San Mateo, Calif., on March 10, 2021.

Photo: David Paul Morris/Bloomberg via Getty Images

The hackers were able to view footage from prisons, hospitals, factories, police departments, and schools, among other places. A member of the group that claimed responsibility told Bloomberg that the breach exposed “just how broadly we’re being surveilled, and how little care is put into at least securing the platforms used to do so.”

On many platforms, including Android, when developers patch a potential vulnerability, they publish a notice in the form of a Common Vulnerabilities and Exposures list. Azena, Hebeisen said, appears to be years behind on patching CVEs: Judging from the webpage where it summarizes system updates, its current operating system only addresses Android CVEs through 2019.

“That is really a problem,” Hebeisen said. A determined hacker, he explained, could look at the years’ worth of vulnerabilities and work their way backward to develop an exploit.

“Now, these vulnerabilities might be accessible to an attacker externally, so they could attack those devices and possibly take them over,” Hebeisen added. “And they have the resources and time to do this.”

Azena’s CEO disputed the suggestion that the company is behind on patching Android CVEs. Schaper stated that because cameras running Azena’s operating system lack some hardware functionality that modern smartphones have, like Bluetooth, many Android CVEs don’t apply. Schaper said Azena’s security team evaluates all security patches from Google for their relevance to the camera operating system.

Hebeisen remains skeptical. The company’s response “is hard to verify independently,” he said, pointing to specific vulnerabilities in Android core components that, based on its own documentation, Azena appears to have left unpatched.

“The security of this app store and those apps stands and falls with how well they are being vetted.”

“This process is not transparent to the public,” Hebeisen said, adding that he’d like to see the company “publish regular security advisories that list the vulnerabilities that affect their OS along with the corresponding patches.”

More important, Hebeisen said, is that the apps on the Azena store are too high stakes to receive so little auditing. “The security of this app store and those apps stands and falls with how well they are being vetted,” he said. “Even with Google Play, sometimes malicious apps slip through — I don’t think this company is nearly as well resourced or would be nearly as careful.”

According to Azena’s documentation for developers, the company checks potential applications “on data consistency” and performs “a virus check” before publishing to its app store. “However,” reads the documentation, “we do not perform a quality check or benchmark your app.”

In comparison to Azena’s inspiration, Google, this appears to be a light-touch process. While Google Play Store developers are also ultimately responsible for the legality of the apps they upload, they are obliged to comply with a barrage of policies covering everything from gambling and “unapproved substances” to intellectual property and privacy.

Google warns developers that “powerful machine learning” is deployed alongside human review to detect transgressions, although widespread SMS scams and the recurrent appearance of stalkerware in the Play Store suggest that this process is not all it’s cracked up to be.

Bosch and Azena maintain that their auditing procedures are enough to weed out problematic use of their cameras. In response to emailed questions, spokespeople from both companies explained that developers working on their platform commit to abiding by ethical business standards laid out by the United Nations, and that the companies believe this contractual obligation is enough to rein in any malicious use.

At the same time, the Azena spokesperson acknowledged that the company doesn’t have the ability to check how their cameras are used and doesn’t verify whether applications sold on their store are legal or in compliance with developer and user agreements.

The spokesperson also said that users are able to develop or purchase applications from outside Azena’s store and sideload them onto cameras running their operating system, allowing users to run powerful video analytics software without any auditing or oversight.

“Further review beyond the contractual obligations of platform users is not possible, because the apps are not Azena’s own products,” the Azena spokesperson wrote. “The application rights remain entirely with the respective developer who offers it in their own name on the Azena platform.”

A Chilling Effect

In Europe, legislators have recognized a need to regulate and control new technologies that make use of machine learning and advanced algorithms, such as those offered on Azena’s platform. The European Union’s proposed Artificial Intelligence Act calls for balancing the benefits and risks of AI, underpinned by the aim of stimulating economic growth; where exactly that balance should lie is currently the subject of political negotiations. Still, it’s unclear whether European regulators will be able to keep up with technological advancements.

As the proposed legislation stands, Azena would likely be classed as a distributor of AI technologies, said Sarah Chander, senior policy adviser at European Digital Rights. In the case of “high-risk” apps, this would mean the company would have to ensure that providers complied with the act’s requirements for transparency, risk management, quality checks, and data accuracy; if Azena suspected noncompliance, it would have to inform the provider or withdraw the app from sale and ensure “corrective actions” were taken. “Low-risk” apps, meanwhile, would be governed by voluntary codes of conduct drawn up by government authorities.

“It’s surveillance capitalism on steroids.”

“I doubt the act will help provide accountability for distributors,” Chander wrote in an email. Even if it did, the proposed rules “don’t capture the root of why this platform is problematic. The reason why we should be concerned with a platform like this is because it is accelerating and promoting the uptake of harmful AI systems, accelerating the sale and use of pseudo-scientific, discriminatory surveillance systems, and finding ways to get these systems to market in more and more efficient ways.”

“It’s surveillance capitalism on steroids,” she added.

Echoing this concern, Jay Stanley, a senior policy analyst at the American Civil Liberties Union, said that the technology is not yet able to live up to its claims: Selling emotion detection technology, he said, is like selling “snake oil.” But the implications are still concerning. “Things like emotion detection are an easy sell for many people,” Stanley said. “You have all these cameras around your building and [developers] think, for example, who wouldn’t want to get a notification if there was an extremely angry person in the area?”

But Stanley is just as worried about the rapid expansion of simple applications of video analytics. “There’s a real concern here that even on the most effective end of the spectrum, where a video analytics system is trying to detect just the raw physical motion or attributes or objects,” he said, “every time you hand a backpack to a friend or something like that, an alarm gets set off and you get approached.”

“That’s going to have a real chilling effect. We’re going to come to feel like we’re being watched 24/7, and every time we engage in anything that is at all out of the ordinary, we’re going to wonder whether it’ll trip some alarm,” Stanley said.

“That’s no way to live. And yet, it’s right around the corner.”

This article was reported in partnership with Der Spiegel.

The post Kitchen Appliance Maker Wants to Revolutionize Video Surveillance appeared first on The Intercept.

Use of Controversial Phone-Cracking Tool Is Spreading Across Federal Government

Published by Anonymous (not verified) on Wed, 09/02/2022 - 12:00am in

Tags 

Technology

Investigators with the U.S. Fish and Wildlife Service frequently work to thwart a variety of environmental offenses, from illegal deforestation to hunting without a license. While these are real crimes, they’re not typically associated with invasive phone hacking tools. But Fish and Wildlife agents are among the increasingly broad set of government employees who can now break into encrypted phones and siphon off mounds of data with technology purchased from the surveillance company Cellebrite.

Across the federal government, agencies that don’t use Cellebrite technology are increasingly the exception, not the rule. Federal purchasing records and Cellebrite securities documents reviewed by The Intercept show that all but one of the 15 U.S. Cabinet departments, along with several other federal agencies, have acquired Cellebrite products in recent years. The list includes many that would seem far removed from intelligence collection or law enforcement, like the departments of Agriculture, Education, Veterans Affairs, and Housing and Urban Development; the Social Security Administration; the U.S. Agency for International Development; and the Centers for Disease Control and Prevention.

Cellebrite itself boasted about its penetration of the executive branch ahead of becoming a publicly traded company in August. In a filing to the Securities and Exchange Commission, the company said that it had over 2,800 government customers in North America. To secure that reach, The Intercept has found, the company has partnered with U.S. law enforcement associations and hired police officers, prosecutors, and Secret Service agents to train people in its technology. Cellebrite has also marketed its technology to law firms and multinational corporations for investigating employees. In the SEC filing, it claimed that its clients included six out of the world’s 10 largest pharmaceutical companies and six of the 10 largest oil refiners.

Civil liberties advocates said the spread of Cellebrite’s technology represents a threat to privacy and due process and called for greater oversight. “There are few guidelines on how departments can use our data once they get it,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project. “We can’t allow every federal department to turn into its own spy agency.”

But Cellebrite’s extensive work with U.S. authorities may be providing it with something even more important to the company than money: political cover. Like NSO Group, whose formidable phone malware recently made headlines, Cellebrite is based in Israel. While NSO’s Pegasus malware is far more powerful than Cellebrite’s technology, providing near-effortless remote infection of devices, both companies have stirred controversy with their sales to authoritarian governments around the world. Cellebrite’s technology is cheaper and has been used in China to surveil people at the Tibetan border, in Bahrain to persecute a tortured political dissident, and in Myanmar to pry into the cellphones of two Reuters journalists. (Under pressure, the company has pledged to stop selling in China and Myanmar, though enforcement is spotty.)

But unlike NSO and the lesser-known Israeli spyware company Candiru, which were added to a Commerce Department trade blacklist in November, Cellebrite has yet to face calls for sanctions. There are signs that people at the company are worried: The day before the NSO listing, D.C. lobbying firm Alpine Group registered with the U.S. Senate to lobby on behalf of Cellebrite. The contract was Cellebrite’s first engagement with outside lobbyists since 2019.

Cellebrite and Alpine Group declined to comment on the lobbying contract. But according to Natalia Krapiva, tech-legal counsel for Access Now, “Cellebrite tries hard to distinguish themselves from NSO by claiming that they are not a spyware company that gets involved in foreign espionage.” While she did not know for certain the reason behind Cellebrite hiring Alpine Group, she said, “They are investing a lot of resources into aggressively defending their reputation, especially in the West.”

“Cellebrite is now trying to put the flashlight more on how much they are connected to the American government,” said Israeli human rights lawyer Eitay Mack, who has repeatedly exposed abuses perpetrated with Cellebrite technology. “But I believe that they are very worried. They are working in many countries that the Americans have problems with. Because of the story of NSO Group, they are afraid that things could become difficult for them.”

So far, however, Cellebrite’s growth seems to be continuing unimpeded, pushing deeper and deeper into police, corporate, and bureaucratic surveillance.

The Fish and Wildlife Service, along with most of the U.S. departments and agencies contacted by The Intercept, did not comment for this article. A spokesperson with the strategic communications firm Reevemark, which represents Cellebrite, pointed The Intercept to the “Ethics and Integrity” page on Cellebrite’s website but otherwise declined to comment.


An examiner at an FBI digital forensics lab views data extracted from a smartphone in Salt Lake City, Utah.

Photo: Lynn DeBruin/AP

The Rise of Cellebrite

Cellebrite’s journey into the citadels of global power began in the 1990s, when it was started as a relatively benign consumer technology outfit. Its first product was a tool to migrate contacts from one cellphone to another. It eventually moved into coercive forms of data transfers, allowing customers to bypass phone passwords and vacuum data out of devices.

As smartphones came to contain more and more information about people’s daily lives, business boomed among police and militaries around the world. Cellebrite cashed out in 2007, selling to the Japanese conglomerate Sun Corp., although many of the researchers who collect cellphone vulnerabilities remain based at its campus in Petah Tikva, Israel.

In 2016, the company got a boost from speculation that the FBI had used a Cellebrite product to unlock the phone of one of the perpetrators of a mass shooting in San Bernardino, California. The rumors turned out to be false, but Cellebrite’s government work in the United States continued to grow. It gained clients within the FBI, Immigration and Customs Enforcement, and the Air Force, as well as among local police departments, which have used its technology on people accused of minor crimes like graffiti, shoplifting, and being drunk in public.

“We talk about the sanctity of the home, but there’s so much more on your phone … than probably anything in your house.”

The company has a 4,000-square-foot showroom that it calls an “envisioning center” in Tysons Corner, Virginia, a stone’s throw from the nation’s capital. Today its chief marketing officer, Mark Gambill, is based in the area, according to his LinkedIn profile.

Cellebrite’s flagship offering is the Universal Forensic Extraction Device, or UFED, a phone-hacking kit, but it also offers software that can perform similar feats through a desktop computer as well as products to access data stored in the cloud.

This type of work has been lucrative. According to Cellebrite’s recent SEC filing, the company’s average government customer spends $415,000 on data collection devices and services, with additional millions if they add on analytics software.

The cost of that business, Cellebrite’s critics say, is borne by citizens, and not just in the form of tax dollars. “We talk about the sanctity of the home, but there’s so much more on your phone that gives a deeper and more intimate view than probably anything in your house,” said Jerome Greco, a public defender for the Legal Aid Society. Greco remembers police turning to a Cellebrite UFED-type device following a bar fight between strangers. “What could be on the person’s phone, when they didn’t know each other?” he said.

The proliferation of Cellebrite’s technology within the federal government is “deeply alarming,” said Cahn. While a 2014 Supreme Court ruling set new legal hurdles for searches of cellphones, citing the intimate information the devices now contain, this has “meant very little on the ground.”

“Very, very few people understand the power of the tools that Cellebrite offers.”

“Not only is there no justification for agencies like U.S. Fish and Wildlife Service to use this sort of invasive technology, it’s deeply alarming to see agencies use these devices in more and more low-level cases,” he added. Federal wildlife investigators aren’t the only ones using Cellebrite tools in the great outdoors: Wildlife officers in Missouri and Michigan, for example, use such devices, and Cellebrite has heavily marketed its hardware and software for combating animal trafficking. Upturn, a nonprofit focused on justice and equity, last year published a report documenting the purchase of mobile device forensic tools, including Cellebrite technology, by over 2,000 smaller agencies. “Very, very few people understand the power of the tools that Cellebrite offers,” said Upturn’s Logan Koepke.

“Cellebrite should only be used by competent law enforcement agencies with proper oversight and screening, and only for more serious crimes,” said Krapiva. “It should be up for public discussion as to whether we as a society accept that such invasive tools are being used by educational institutions, private firms, and government agencies.” Other experts interviewed by The Intercept said they believed that cellphone crackers should never be used, even when investigating serious crimes.

Cellebrite’s federal customers provide little transparency as to how they’re using the powerful technology. Of the agencies that did respond to The Intercept’s requests for comments, few offered any concrete information about their use of the tools or answered questions about the implications of that usage. The U.S. Department of Veterans Affairs, for example, would not comment on specific technologies, according to a spokesperson, who said only that the department uses a “wide variety of tools” to “leverage technology” to advance its mission.

The Department of Education at least allowed through a spokesperson that it uses Cellebrite tools for “investigative work” by its inspector general and “to determine if a government-issued iPhone has been compromised and to what extent.” The Department of Energy, whose responsibilities touch on nuclear weapons and federal research labs like Los Alamos, said that it uses Cellebrite products in investigations by its Office of Intelligence and Counterintelligence and inspector general and to examine government-owned handsets “that have exhibited or been reported to exhibit strange or malicious behavior; or devices that were taken on foreign travel where there is an opportunity for compromise or tampering by a foreign adversary.”

A Social Security Administration spokesperson told The Intercept that Cellebrite tech is used in its office solely to investigate allegations of fraud, including stolen Social Security numbers, insurance fraud, and scams related to pandemic-related relief such as Paycheck Protection Program loans and unemployment benefits. The spokesperson declined to discuss specific instances.


Cables for connecting several mobile phones to a Cellebrite UFED TOUCH, a device for extracting data from mobile devices, are seen at the Tokyo office of Sun Corp. on March 30, 2016.

Photo: Issei Kato/Reuters/Alamy

After Hours, Lining the Pockets of Law Enforcement

Further complicating the ethics of government Cellebrite use is the fact that, according to LinkedIn, Cellebrite has employed more than two dozen U.S. government employees from across the country as contract instructors or forensic examiners. The contract employees have apparently included police detectives, a Secret Service officer, and people who claim to work for the Defense Department and defense contractor Lockheed Martin.

Other contractors say they work for the Florida attorney general’s office and the United States Postal Service Office of the Inspector General.

“Cops teaching cops is not anything new,” said Greco, the public defender. “But I would be concerned that there is a financial incentive to choose Cellebrite’s tools over others.”

“Cops teaching cops is not anything new. But I would be concerned that there is a financial incentive to choose Cellebrite’s tools over others.”

“Even if it’s an appearance of impropriety, it’s concerning,” said Krapiva.

Cellebrite’s apparent payments to police officers and prosecutors may also violate some police departments’ policies on moonlighting. The Florida attorney general’s office did not respond to questions about its policy on taking on side work. A Postal Service spokesperson approached with the same questions said that The Intercept would need to submit a Freedom of Information Act request to the Office of the Inspector General. The policy, which was eventually provided following a request, requires agents with the office to seek formal approval of outside employment in writing so that the position can be reviewed for potential conflicts of interest. It is not clear whether that happened in this case.

In another instance of government collaboration, Cellebrite has also brokered a partnership with an influential association of attorneys general, with the goal of “creating legal policy and procedures” that allow for the use of a Cellebrite cloud tool.

Cellebrite may need all the U.S. government work it can get. Its stock price has taken a dip, and recent exits from authoritarian countries have made its U.S. contracts even more critical to staying afloat. In December, facing recruitment difficulties in Israel following negative press coverage, the company launched a public relations campaign comparing its employees to superheroes.

Mack, the human rights lawyer, said the campaign had an air of desperation to it. “They have already been marked because they are working in some very bad places,” he said. “And things are going to keep being exposed.”

The post Use of Controversial Phone-Cracking Tool Is Spreading Across Federal Government appeared first on The Intercept.

The Chip Wars Heat Up

Published by Anonymous (not verified) on Tue, 08/02/2022 - 6:00am in

"...there’s something much bigger at work here: The Chip Wars, as I’ve dubbed them, are heating up, and revealing some of the tensions between national needs and extraction from local communities."...

Read More

Major Media Outlets That Use Invasive User Tracking Are Lobbying Against Regulation

Published by Anonymous (not verified) on Wed, 02/02/2022 - 5:52am in

News outlets entrusted with promoting transparency and privacy are also lobbying behind closed doors against proposals to regulate the mass collection of Americans’ data.

In a filing last week, the Interactive Advertising Bureau, a trade group, reported it was lobbying against a push at the Federal Trade Commission to restrict the collection and sale of personal data for the purpose of delivering advertisements. The IAB represents both data brokers and online media outlets that depend on digital advertising, such as CNN, the New York Times, MSNBC, Time, U.S. News and World Report, the Washington Post, Vox, the Orlando Sentinel, Fox News, and dozens of other media companies.

Under President Joe Biden and FTC Chair Lina Khan, the advertising technology industry is facing its first real challenge of federal regulation. There are several bills in Congress that attempt to define and restrict the types of data collected on users and how that data is monetized. Last July, Biden called for the FTC to promulgate rules over the “surveillance of users” in his landmark executive order on competition, which identified unfair data collection as a challenge to both competition and privacy.

In December, the advocacy group Accountable Tech petitioned the FTC calling for regulation of what it calls “surveillance advertising”: the process of collecting mass data on users of popular apps and websites and creating profiles of those users based on location, age, sex, race, religion, browsing history, and interests in order to serve targeted ads. The industry has grown in leaps and bounds, now generating billions in revenue, but has so far faced limited regulation in the U.S.

Major media corporations increasingly rely on a vast ecosystem of privacy violations, even as the public relies on them to report on it.

In a letter, IAB called for the FTC to oppose a ban on data-driven advertising networks, claiming the modern media cannot exist without mass data collection. “Data-driven advertising has actually help preserve, and grow, news outlets since its inception over twenty years ago,” the letter said. “The thousands of media companies and news outlets that rely on data-driven advertising would be irreparably harmed by the Petition’s suggested rules.”

The privacy push has largely been framed as a showdown between technology companies and the administration. The lobbying reveals a tension that is rarely at the center of the discourse around online privacy: Major media corporations increasingly rely on a vast ecosystem of privacy violations, even as the public relies on them to report on it. Major news outlets have remained mostly silent on the FTC’s current push and on parallel House and Senate efforts to ban surveillance advertising led by Rep. Anna Eshoo, D-Calif., and Sen. Cory Booker, D-N.J.


Illustration: Soohee Cho for The Intercept

“They certainly report on aspects of this problem, but they’re not reporting on how they’re complicit in the surveillance advertising story,” said Jeff Chester, the executive director of the Center for Digital Democracy, which supports the FTC petition for regulation.

Chester noted that major media outlets will cover episodic scandals, such as the use of Facebook data by the firm Cambridge Analytica during the 2016 presidential election or algorithmic targeting of ads in politics, but don’t provide context of how the outlets themselves use and benefit from the same collection of data for routine advertising purposes. (On its website, The Intercept uses Google Analytics but does not host more invasive trackers. Its podcasts use a separate third-party system, which users can opt out of.)

“The large media companies have their own programmatic advertising operations, or what you might call surveillance advertising, using content on their own websites,” said Chester. “Not only are they not reporting on this issue and what’s at stake, but they don’t report on what they do. It’s not just a privacy issue. It’s a democracy issue. It’s a consumer protection issue.”

The tension was highlighted in a 2019 New York Times guest opinion column provocatively titled “This Article Is Spying on You.” The article noted that a reader visiting a Times news article on, for instance, abortion might encounter tracking technology used by nearly 50 different companies, including BlueKai, a firm owned by the massive company Oracle that sells user data allowing marketers to target people associated with “health conditions” and “medical terms.”

The column was based on a review of 4,000 U.S.-based news websites and 4,000 non-news sites conducted by Timothy Libert, formerly with Carnegie Mellon University, and Reuben Binns, with the University of Oxford. It found that news sites are generally more reliant on third-party tracking technology than non-news sites and had a lower degree of user privacy.

“While users may turn to the news to learn of the ways in which corporations compromise their privacy, it is news sites where we find the greatest risks to privacy,” noted the authors.

Since then, news sites’ user tracking has only gotten more extreme. In 2020, a study published by Ghostery, a company that provides tools to block third-party data collection, found that news websites contained the most trackers globally — more than business, banking, entertainment, or adult websites. The trackers tend to collect a variety of data, including browsing history, location, and phone identifying information.

And it’s been highly profitable. The New York Times, for instance, has moved away from traditional print advertising and paper delivery and is increasingly reliant on digital advertising and subscriptions. In its latest quarterly disclosure, the Times revealed that its digital ad revenues increased by $19.2 million over the same period in the previous year. The increase was driven in part by greater programmatic advertising revenue, a term for the automated ads served by third-party ad brokers. The Times, notably, is a member of IAB, the lobby group that defends the digital advertising industry from regulation.

Last month, as part of the regulatory push on data privacy, the FTC issued a $2 million fine against the advertising tech firm OpenX for illegally collecting and monetizing location data from children on a mass scale. Advertising platforms such as OpenX serve as an exchange, with data from thousands of web publishers and tens of thousands of apps feeding profiles of users into a system that advertising agencies use to place targeted ads that appear across multiple news websites as users browse the web.

Many gaming, weather, and dating apps, as well as a variety of websites, quietly collect behavioral, demographic, health, and location data on users that is sold to advertising tech brokers. Advertising agencies go to data brokers to better target potential consumers. As individuals browse the web, they are greeted by custom advertisements based on profiles of what data brokers believe to be their shopping habits, interests, or concerns.

OpenX, which processes nearly 100 billion ad requests per day, is one of the largest third-party platforms that serve as a key mechanism of this data exchange. The FTC alleged that OpenX vacuumed up location information on child-focused apps without parental consent and used the data to attract advertisers.
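In heavily simplified form, the exchange mechanic works like an auction: the publisher forwards a profile of the visitor, competing bidders price the impression, and the highest bid wins the slot. The sketch below illustrates that flow; the profile fields and bidder logic are invented, and real exchanges such as OpenX speak the OpenRTB protocol rather than anything shown here.

```python
# Heavily simplified, hypothetical model of a real-time bidding exchange:
# the publisher forwards a user profile, bidders respond with prices, and
# the highest bid wins the ad slot. Field names and logic are invented.
from typing import Callable

Profile = dict                       # e.g. {"location": ..., "interests": [...]}
Bidder = Callable[[Profile], float]  # returns a bid in dollars

def sports_bidder(profile: Profile) -> float:
    return 2.50 if "sports" in profile.get("interests", []) else 0.10

def local_bidder(profile: Profile) -> float:
    return 1.75 if profile.get("location") == "Orlando" else 0.05

def run_auction(profile: Profile, bidders: list) -> float:
    """Return the winning bid for this ad request."""
    return max(bidder(profile) for bidder in bidders)

profile = {"location": "Orlando", "interests": ["sports", "news"]}
print(f"winning bid: ${run_auction(profile, [sports_bidder, local_bidder]):.2f}")
```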

There were a few blogs and industry trade outlet stories that covered the settlement, but no pieces in major media outlets that have otherwise intensely covered Silicon Valley and the sprawling privacy issues presented by consumer-facing tech companies.

If major media outlets had covered the story, they would have had to acknowledge an awkward reality. OpenX is one of the largest third-party advertising platforms serving the news media, alongside AppNexus, Google, and Facebook. The company is used or has been used in recent months for the placement of targeted ads by outlets such as the New York Times, CNN, Gizmodo, HuffPost, Fox News, and Der Spiegel. Several outlets said they were in the process of reviewing the advertising partnership with OpenX but could not comment further.

The Gizmodo website, for example, uses trackers that store or sell user location data, including trackers from RhythmOne, Simpli.fi, Smart Adserver, Lotame, and OpenX, according to data compiled by Ghostery and privacy policy disclosures under the California Online Privacy Protection Act. Simpli.fi, according to disclosures, collects precise location data and partners with third-party data brokers such as Cuebiq.

“We work with OpenX as a marketplace through which advertisers may bid to place ads on our website. We do not provide OpenX with either data relating to children or precise location data,” said Danielle Rhoades Ha, a spokesperson for the New York Times. The Times’s response, however, belies the nature of the third-party ad broker business; the Times does collect user location data, and its third-party behavioral ad partners, such as OpenX, use an array of sources to monetize location data for the placement of ads on sites such as the Times’s website. Other publications did not respond or declined to comment on their ties to OpenX.

“Almost all sites are trapped in a system of surveillance capitalism, in which they either steal data or rely on technology that steals data.”

The growth of digital advertising has forced nearly every major for-profit news website to utilize the most intrusive forms of mass surveillance, including browsing history and location data — a dynamic highlighted by the OpenX fine.

“It’s really a puzzling and tricky situation because almost all sites are trapped in a system of surveillance capitalism, in which they either steal data or rely on technology that steals data,” said Krzysztof Modras, director of engineering and product at Ghostery. “I don’t think OpenX is abnormal at all.”


Illustration: Soohee Cho for The Intercept

Though advertising is the focus of the data collection industry, the applications of user data are boundless. Law enforcement agencies have tapped the oceans of user data, including for the targeting of protesters and activist groups. Powerful political interests have hired data brokers to better influence voters. The data broker Acxiom, another tech firm that partners with many news websites, has provided data to the FBI and discussed programs to sell user data to the Pentagon.

The Pillar, a conservative Catholic publication, claimed to have obtained location data from the gay hookup app Grindr via third-party data brokers to out a prominent Catholic priest as gay.

In the case of the FTC fine issued in December, OpenX had sourced precise geolocation data from children under the age of 13, including from child-directed apps “for toddlers,” “for kids,” and “preschool learning,” and included it in the data the company offered to advertisers, in violation of the Children’s Online Privacy Protection Act, or COPPA.

“OpenX secretly collected location data and opened the door to privacy violations on a massive scale, including against children,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement. “Digital advertising gatekeepers may operate behind the scenes, but they are not above the law.”

Following the settlement, OpenX agreed to a periodic review of the apps the company uses to source its data. Max Nelson, a spokesperson for the company, pointed to a statement issued by the firm that noted the use of children’s location data was an “unintentional error” that has since been fixed.

Critics argue that the FTC needs to go beyond enforcing COPPA by cracking down on the sources of data that feed into the larger ecosystem. Many children’s websites and apps contain code that enables the sharing of user data with brokers. The tracking technology, known as an SDK, or software development kit, is intentionally embedded by developers in order to monetize user data.
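As a rough illustration of the mechanism, the sketch below shows the kind of work an embedded tracking SDK performs once a developer adds it to an app; the endpoint, payload fields, and function names are entirely hypothetical, and the network call is replaced with a print statement so the example stays inert.

```python
# Hypothetical sketch of what an embedded tracking SDK does: gather
# device and location signals, then ship them to a broker endpoint.
# The endpoint and payload fields are invented; no real SDK is shown.
import json
import uuid

BROKER_ENDPOINT = "https://broker.example.com/collect"  # hypothetical

def build_event(app_id: str, lat: float, lon: float) -> dict:
    return {
        "app_id": app_id,
        "device_id": str(uuid.uuid4()),  # a real SDK would persist this ID
        "lat": lat,
        "lon": lon,
    }

def send(event: dict) -> None:
    # A real SDK would POST this in the background on every app launch.
    print(f"POST {BROKER_ENDPOINT}\n{json.dumps(event, indent=2)}")

send(build_event("example.kids.app", 40.7128, -74.0060))
```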

Angela Campbell, professor of law at Georgetown University, has argued for more enforcement and an update to the current law to make it easier for regulators to create clear rules to protect children from targeted data collection and advertising. Campbell noted that OpenX’s many partners also could have been targeted by regulators.

“I have a children’s app, if it’s a child-directed app and I’m the app developer, and I use an SDK from OpenX, I’m responsible,” noted Campbell. “This whole bidding process and advertising process is not transparent so the public doesn’t know about it. The FTC has not enforced this COPPA law very much at all.”

News outlets are also implicated. Although major media publications say they are not intentionally selling children’s data to OpenX and other brokers, these statements are largely expressions of plausible deniability rather than affirmative knowledge.

Unlike products and services specifically targeted at children, which are required under federal COPPA guidelines to collect age information, media sites are not required to verify the age of users, as their products are primarily directed at adult audiences. This means that by default, news media sites assume all readers are adults and treat the data of all visitors the same, so children’s data is almost certainly provided to brokers — it just isn’t labeled as such.

Even news media sites with student sections, such as CNN Student News, which describes itself as a “ten-minute, commercial-free, daily news program designed for middle and high school classes,” do not consistently collect age information, thereby following the media industry’s standard assumption that readers are adults.

Due to this lack of verification, CNN’s parent company, WarnerMedia, has a privacy policy that simply states, “on most Sites, we do not knowingly collect information from children,” while still sending data to ad brokers without verification.

The near-unavoidable nature of online surveillance has presented similarly thorny issues for other privacy-centric organizations. Last year, Ashkan Soltani, a prominent privacy advocate, noted that the American Civil Liberties Union used many of the very data trackers the group has long critiqued. The ACLU shared personally identifiable information with third parties such as Facebook, including names, email addresses, phone numbers, and ZIP codes.

The decision to use the tracking technology was made by the ACLU’s fundraising and advocacy team, not its legal department; the two teams often do not work in tandem, noted Catherine Crump, a former ACLU attorney who now leads the Samuelson Law, Technology & Public Policy Clinic at the University of California, Berkeley, School of Law.

This is all the more reason, advocates say, to focus on broad reform rather than simply highlighting cases of individual bad actors.

“There’s a tendency to focus on individual narratives even in the face of systemic problems,” said Alan Butler, the president of the Electronic Privacy Information Center, who favors universal opt-out solutions for users and strict rules on so-called secondary collection of data.

“It’s not a solution to just bring a fine or enforcement when there is surveillance advertising happening up and down the stack and throughout the ecosystem,” added Butler.

The bigger question for the media might be: How do we create a free press that isn’t reliant on mass data collection?

“Does the free internet mean an internet dominated by surveillance and manipulation?” asked Chester, of the Center for Digital Democracy. “What does it mean that the only way to have an independent news media is to have this kind of surveillance system? Those issues [have] not been covered by the press.”

The post Major Media Outlets That Use Invasive User Tracking Are Lobbying Against Regulation appeared first on The Intercept.

Cartoon: High tech medievalism

Published by Anonymous (not verified) on Tue, 01/02/2022 - 11:50pm in

Support these comics by joining the Sorensen Subscription Service!

Follow me on Twitter at @JenSorensen

How modern technology could bring democracy to a crossroads

Published by Anonymous (not verified) on Fri, 28/01/2022 - 4:58am in

Tags 

Technology

Advances in technology have resulted in employment and wage dislocations that are polarising society and undermining trust in political institutions. Technological progress has been the key driver of the enormous improvement in living standards in all the advanced economies since the Industrial Revolution. No wonder governments have welcomed technological progress and sought to foster it. Continue reading »

Emerging Challenges for ESG Reporting

Published by Anonymous (not verified) on Tue, 25/01/2022 - 1:34pm in

Tags 

Technology

What would your business do without its key stakeholders, such as investors and customers? The short answer is that it would quickly wither. Now, all of these stakeholders want to see one thing: a responsible enterprise. This leaves you with no choice but to embrace ESG reporting fully. ESG sustainability reporting is the disclosure of a company’s…

The post Emerging Challenges for ESG Reporting appeared first on Peak Oil.

Facebook's Tamil Censorship Highlights Risks to Everyone

Published by Anonymous (not verified) on Wed, 19/01/2022 - 10:00pm in

Tags 

Technology, World

Facebook’s Dangerous Individuals and Organizations policy, a vast library of secret rules limiting the online speech of billions, is ostensibly designed to curtail offline violence. For the editors of the Tamil Guardian, an online publication covering Sri Lankan news, the policy has meant years of unrelenting, unexplained censorship.

Thusiyan Nandakumar, the Tamil Guardian’s editor, told The Intercept that over the past several years, Facebook has twice suspended the publication’s Instagram account and removed dozens of its posts without warning — each time claiming a violation of the DIO policy. The censorship comes at a time of heightened scrutiny of this policy from free speech advocates, civil society groups, and even the company’s official Oversight Board.

A string of meetings with Facebook has yielded nothing more than vague assurances, dissembling, and continued deletions, according to Nandakumar. Despite claims from the company that it would investigate the matter, Nandakumar says the situation has only gotten worse. Faced with ongoing censorship, the Guardian’s staff have decided to self-censor, sparingly using the outlet’s Instagram account for fear of losing it permanently.

Facebook admitted to The Intercept that some of the actions taken against the outlet had been made in error, while defending others without providing specifics.

Civil liberties advocates who discussed the Tamil Guardian’s treatment said that it’s an immediately familiar dynamic and part of a troubling trend. Facebook moderators, whether in South Asia, Latin America, or in any of the other places they patrol content, routinely take down posts first and ask questions later, the advocates said. They tend to lack expertise and local nuance, and their employer is often under pressure from local governments. In Sri Lanka, authorities have “picked up and harassed” Tamil journalists for critical coverage in real life, according to Steven Butler of the Committee to Protect Journalists, who called the Tamil Guardian’s Facebook experience “definitely a press freedom issue.” Indeed, experts said Facebook’s censorship of the Guardian fundamentally calls into question its ability to sensibly distinguish “dangerous” content that can instigate violence from journalistic and cultural expression about groups that have engaged in violence.

Sri Lanka’s Information Offensive

The roots of the Tamil Guardian’s very 21st-century online content dilemma go back more than four decades, to the civil war that erupted between Sri Lanka’s government and members of its Tamil ethnic minority in 1983. It was then that the Liberation Tigers of Tamil Eelam began a 25-year, sporadically fought conflict to establish an independent Tamil state. During the war, the LTTE, also known as the Tamil Tigers, developed an increasingly ruthless reputation. To the ruling party of Sri Lanka and its allies in the West, the Tamil Tigers were a bloody, irredeemable militant group, described by the FBI in 2008 as “among the most dangerous and deadly extremists in the world.” But for many Sri Lankan Tamils, the Tigers were their army, a bulwark against a government intent on repressing them. “It was an organization that at the time became almost synonymous with Tamil demands for independence, as they were the group that was quite literally willing to die for it,” Nandakumar explained via email.

Unquestionably, however, the LTTE was a violent organization whose tactics included the use of suicide bombings, torture, civilian assaults, and political assassinations. The government, meanwhile, perpetrated decades of alleged war crimes, including the repeated massacre of Tamil civilians, generating waves of bloodshed that dispersed Sri Lankan Tamils throughout the world. The Tamil Guardian was founded in London in 1998 to serve members of this diaspora as well as those who remained in Sri Lanka. Though it was often considered a pro-Tiger publication in contemporaneous reporting during the war, the Tamil Guardian of today runs editorials by the likes of David Cameron and Ed Miliband, and its work is cited by larger outlets in the Western political media mainstream.

The Tigers were defeated and dissolved in 2009, bringing the civil war to a close after the deaths of an estimated 40,000 civilians. In the years since, Sri Lankan Tamils have observed Maaveerar Naal, an annual remembrance of those who died in the war, with ceremonies both at home in Sri Lanka and abroad. “When [Tigers] died or were killed, people lost family, friends, colleagues,” said Nandakumar. “They are people that many around the world still want to remember and commemorate.”

Meanwhile, the Sri Lankan state has conducted what human rights observers have described as a campaign of brutal suppression against the memorialization of war casualties and other expressions of Tamil national identity. Mentions of the LTTE are subject to particularly fierce crackdowns by the hard-line government helmed by Gotabaya Rajapaksa, a former Sri Lankan defense secretary accused of directly ordering a multitude of atrocities during the war.

The suppression campaign has included attempts to stifle unwanted online commentary. In September 2019, Gen. Shavendra Silva, Sri Lanka’s army chief, announced a military offensive against “misinformation” at the nation’s Seventh Annual Cyber Security Summit. “Misguided youths sitting in front of the social media would be more dangerous than a suicide bomber,” Silva remarked. Soon after, Nandakumar says, the Tamil Guardian found itself unable to even mention the Tigers on Facebook without being subjected to censorship via the DIO policy. Nandakumar said that virtually any coverage from the Guardian related to the Tigers or even to sentiments of Tamil pride risks removal. Routinely stricken from the Tamil Guardian’s Facebook and Instagram accounts are posts covering Tamil nationalist political protests inside Sri Lanka as well as uploads merely depicting historically notable LTTE figures. Each time the Tamil Guardian has posts deleted or its account ejected, the only rationale provided is that the post somehow violated Facebook’s prohibition against “praise, support, or representation” of a dangerous organization, even though the policy is supposed to carry an exemption for journalism.

“We have never been accused of breaching any UK, or indeed U.S., laws particularly with regards to terrorism,” Nandakumar told The Intercept.

On the Tamil Guardian’s overall experience with Facebook, spokesperson Kate Hayes would say only, via email: “We remove content that violates our policies, but if accounts continue to share violating content, we will take stronger action. This could include temporary feature blocks and, ultimately, being removed from the platform.”

Though defunct, the Tigers are still a designated terror organization in the U.S., Canada, and the European Union, and Facebook cribs much of its DIO roster from these designations, blacklisting and limiting discussion of not only the Tigers but also 26 other allegedly affiliated persons and groups. Still, as Nandakumar points out, Western outlets like the BBC and U.K. Guardian routinely cover the same protests and remembrances as his publication, and write obituaries for the same ex-LTTE cadres, without their publications being deemed terrorist propaganda.

Nandakumar is convinced that the government is monitoring the Tamil Guardian’s Instagram account and reporting anything that could be construed as pro-Tamil, Tiger or otherwise — although he concedes that he can’t prove the Sri Lankan state is behind the Facebook and Instagram suppression. In July 2020, Instagram removed a photo uploaded by the Tamil Guardian of Hugh McDermott, a member of the Australian Parliament, attending a Maaveerar Naal memorial event in Sydney, while a photo of a flower being laid at a similar event in London was deleted three months later. When the outlet published an article about Anton Balasingham, a former LTTE negotiator, in November 2020, on the anniversary of his death, an Instagram post promoting the article was quickly removed, as was a post that same month depicting the face of S. P. Thamilselvan, former head of the LTTE’s political wing and a peace negotiator who was killed by a Sri Lankan airstrike in 2007.

Liberation Tigers of Tamil Eelam chief negotiator Anton Balasingham during a press conference at the Bogis-Bossey chateau in Celigny, Switzerland, on Feb. 23, 2006.

Photo illustration: Soohee Cho for The Intercept, Francois Mori/AP

Facebook Adds to Government Pressure

In January 2021, following two years of vanishing posts and requests for more information from Facebook, Nandakumar was able to secure a meeting with the team responsible for DIO enforcement. “The meeting was cordial, with Facebook acknowledging that … their policy can sometimes be bluntly applied and that mistakes can occur,” Nandakumar said. “They encouraged us to send examples, assuring us that this was an issue of importance and one that they would look into.” Nandakumar says the outlet then submitted an 11-page brief documenting the removals and hoped for the best.

Meanwhile, the deletions kept coming. “We continued to send over examples, ensuring Facebook was kept almost constantly aware of the number of times our news coverage was being unfairly removed,” said Nandakumar.

Despite Facebook’s suggestion that the posts had been removed in error, Nandakumar says that in February 2021, the DIO team flatly told him that the Tamil Guardian account had in fact been properly punished for its “praise, support, and representation” of terrorism. “It was extremely disappointing,” recounted Nandakumar in an email to The Intercept. “We had what seemed like a productive meeting, sent over a detailed brief and repeatedly emailed extensive examples, yet received a curt and blunt response which failed to address any of the issues we had raised. We were being brushed off. We highlighted once more that some of the events we covered were actually taking place in the [U.S.], legally and with full permission, but were still inexplicably being removed. Their reasoning just did not hold.”

“We had what seemed like a productive meeting … yet received a curt and blunt response which failed to address any of the issues we had raised.”

The deletions continued apace: When Kittu Memorial Park in Jaffna, Sri Lanka, burned to the ground in March 2021, the Tamil Guardian wrote an article accompanied by an Instagram post reporting on the suspected arson attack. The park was named for a Tiger colonel who killed himself in 1993, and Facebook deleted the Instagram post associated with the Guardian article. Two months later, when the outlet published a series revisiting the 2009 destruction of a civilian hospital, believed to have been perpetrated by the Sri Lankan government and described by Human Rights Watch as a war crime, the accompanying Instagram posts were removed.

A photo of Kittu Memorial Park posted to Instagram by the Tamil Guardian in March 2021 and removed later that month.

Tamil Guardian

A photo of Australian MP Hugh McDermott attending a Sri Lankan civil war memorial event in Sydney posted by the Tamil Guardian’s Instagram account, removed by Facebook in July 2020.

Tamil Guardian

During the weekend of Maaveerar Naal this past November, the account was reinstated with an automated Facebook message saying the suspension had been a mistake, then banned once more within the same 24-hour period. Though the account is currently reactivated, Nandakumar says the Tamil Guardian’s editors decided that using it to reach and grow the publication’s audience of about 40,000 monthly readers isn’t worth the risk.

Facebook’s Hayes wrote, “We removed the Tamil Guardian account in error but we restored it as soon as we realized our mistake. We apologize for any inconvenience caused.” The company did not answer questions about why the Tamil Guardian’s deleted posts had been removed if its overall suspension had been an error.

The Tamil Guardian obtained a second meeting with Facebook this past October after a pressure campaign from Canadian and British parliamentarians and Reporters Without Borders. At that meeting, Facebook cited its obligation “to comply with U.S. government regulation,” Nandakumar said, and stated that “our content may have continued to breach their guidelines.”

Experts say there is no law on the books in the U.S. stopping Facebook from letting journalists or ordinary users freely discuss or even praise LTTE figures, commemorate the war’s victims, or depict contemporary remembrances of the dead. “I know of no obligation under U.S. law, no requirement that they remove such material,” Electronic Frontier Foundation Civil Liberties Director David Greene told The Intercept. “For years they would say, ‘I’m sorry, we are required by law to take that down.’ And we would ask them for the law, and we wouldn’t get anything.”

The Daunting Job and “Human Error” of Moderators

It appears, then, to be Facebook, not the U.S. federal government, that is collapsing the LTTE and Sri Lankan Tamil nationalism into a single entity. The consequence is to make exploring the country’s painful past and uncertain future from the perspective of the war’s losing side nearly impossible on an internet where a presence on the company’s platforms is crucial to reaching an audience.

Nandakumar said that the history of the Tigers and the future of Sri Lanka’s Tamils are impossible to untangle. “For newspapers and media organizations reporting on the conflict and the Tamil cause, it was impossible to avoid the LTTE – just as much as it would have been to avoid the Sri Lankan state,” he continued. Today, Nandakumar said, “alongside highlighting of the daily repression faced in the Tamil homeland, our role is to reflect and analyze the variety of Tamil political voices and opinion. We report on commemoration of historical or significant events as these remain important to the Tamil polity, who continue to mark these dates despite Sri Lanka’s attempts to stop them.”

Tamil Guardian reporters, along with staff from other outlets, are frequently harassed and detained by Sri Lankan police, sometimes on the grounds that they’ve violated national anti-terror laws, according to a Reporters Without Borders report. In 2019, the Tamil Guardian’s Shanmugam Thavaseelan was arrested for “trying to cover a demonstration calling for justice for the Tamil civilians who disappeared during the civil war,” as the report put it.

Nandakumar says he’s convinced that the Sri Lankan government has a hand in the Facebook deletions, in part because he’s learned that it has attempted similar tactics on other platforms: In December 2020, Twitter informed the Tamil Guardian that the Sri Lankan government had lobbied, unsuccessfully, to have the outlet’s tweets deleted on the platform. “This coincided with a ramping up of media suppression across the island and with the removal of our content on Facebook and Instagram.”

“What is one person’s dangerous individual or organization is someone else’s hero.”

“The action taken against The Tamil Guardian account was not in response to any government pressure or mass reporting,” said Facebook’s Hayes, adding that each of the two Instagram suspensions “was a case of human error.”

Greene said that the Tamil Guardian’s treatment is illustrative of a fundamental parochialism behind the DIO policy: “What is one person’s dangerous individual or organization is someone else’s hero.” But before values come into play, there is the question of basic facts; a moderator overseeing Sri Lanka must know “who the Tamil Tigers were, what the political situation was, the fact that they don’t exist, what their ongoing legacy might be,” Greene said. “The amount of expertise that a company like Facebook is required to have on every single geopolitical situation around the world is really startling.”

According to Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, the rigidity of Facebook’s DIO roster risks causing what she described as “cultural and historical erasure,” a status quo under which one can’t publicly and freely discuss a group designated as an enemy by the U.S., even after that enemy ceases to exist. “We’ve seen this with some groups in Latin America that are still on the U.S. [terror] list, like FARC,” York said, referring to the Colombian guerrilla army that dissolved in 2017 but remains banned from free discussion under Facebook policy. “At some point, you have to be able to talk about these things.”

Update: January 19, 2022
This article has been changed to reflect a decision by the Tamil Guardian this week to resume posting on Instagram in a limited fashion.

The post Facebook’s Tamil Censorship Highlights Risks to Everyone appeared first on The Intercept.

Pegasus Spyware Used Against Dozens of Activist Women in the Middle East

Published by Anonymous (not verified) on Wed, 19/01/2022 - 7:40am in

Tags 

Technology, World

Dozens of women journalists and human rights defenders in Bahrain and Jordan have had their phones hacked using NSO Group’s Pegasus spyware, according to a report by Front Line Defenders and Access Now.

The report adds to a growing public record of Pegasus misuse globally, including against dissidents, reporters, diplomats, and members of the clergy. It also threatens to increase pressure on the Israel-based NSO Group, which in November was placed on a U.S. trade blacklist.

“When governments surveil women, they are working to destroy them,” wrote Marwa Fatafta, Middle East and North Africa policy manager at Access Now, in a statement accompanying the report. “Surveillance is an act of violence. It is about exerting power over every aspect of a woman’s life through intimidation, harassment, and character assassination. The NSO Group and its government clients are all responsible, and must be publicly exposed and disgraced.”

NSO Group was placed on the trade blacklist after a consortium of journalists working with the French nonprofit Forbidden Stories reported multiple cases in which journalists and activists appear to have been targeted by foreign governments using the spyware. (NSO denied the allegations.) The same month, researchers from Amnesty International and the University of Toronto’s Citizen Lab said they found Pegasus on the phones of six Palestinian human rights activists. Last week, another Citizen Lab report found that dozens of Salvadoran human rights activists’ phones had been hacked using Pegasus.

Pegasus is breathtaking in its ability to take complete control of a device without detection and is often referred to as “military grade” spyware. Researchers have said that it can access every message the subject has sent and received, including from encrypted messaging services; it can also access the camera and microphone, record the screen, and monitor the subject’s location via GPS.

Apple sued NSO Group in November, trying to stop the company’s software from compromising its operating systems. That followed a similar suit filed by Facebook in 2019 alleging that NSO Group had hacked the social media giant’s WhatsApp messaging service.

NSO Group did not immediately respond to a request for comment on the new report. But earlier this week, in the wake of the El Salvador research, it said that it only grants licenses to government intelligence and law enforcement agencies following “a process of investigation and licensing” by the Israeli Ministry of Defense. The company added that the use of its cybersecurity tools to monitor dissidents, activists, and journalists is a serious misuse of that technology.

In a study published in December 2020, Citizen Lab identified 25 countries whose governments had acquired surveillance systems from Circles, a company affiliated with NSO Group: Australia, Belgium, Botswana, Chile, Denmark, Ecuador, El Salvador, Estonia, Equatorial Guinea, Guatemala, Honduras, Indonesia, Israel, Kenya, Malaysia, Mexico, Morocco, Nigeria, Peru, Serbia, Thailand, the United Arab Emirates, Vietnam, Zambia, and Zimbabwe.

The hacks of the activists in Jordan and Bahrain now add two more countries to the list.

Beaten by Police Then Hacked Eight Times

The report documents how Pegasus can have a particularly egregious impact on women, who are disproportionately vulnerable to the weaponization of personal information when governments seek to intimidate, harass, and publicly smear dissidents.

It details the case of Ebtisam al-Saegh, a renowned human rights defender who works in Bahrain with the advocacy group SALAM for Democracy and Human Rights. Al-Saegh’s iPhone was hacked at least eight times between August and November 2019 with Pegasus spyware, according to the researchers.

The privacy violations compounded what the report described as brutal harassment by Bahraini authorities. On May 26, 2017, the report said, Bahrain’s National Security Agency summoned al-Saegh to the Muharraq Police Station. Interrogators subjected her to verbal abuse and physically beat and sexually assaulted her. They threatened her with rape if she did not halt her human rights activism. Upon release, she was immediately taken to a hospital.

“I am in a state of daily fear and terror after I was informed by Front Line Defenders that I was spied on.”

“I am in a state of daily fear and terror after I was informed by Front Line Defenders that I was spied on,” the report quotes al-Saegh as saying. “I started to be afraid of having the phone next to me, especially when I am in the bedroom or even at home among my family, my children, my husband.”

Front Line Defenders’ forensic investigation found that al-Saegh’s phone was compromised multiple times in August 2019 (on August 8, 9, 12, 18, 28, and 31); on September 19, 2019; and on November 22, 2019. Traces of process names linked to Pegasus were identified on her phone, such as “roleaccountd,” “stagingd,” “xpccfd,” “launchafd,” “logseld,” “eventstorpd,” “libtouchregd,” “frtipd,” “corecomnetd,” “bh,” and “boardframed.” Amnesty International’s Security Lab and the Citizen Lab have both attributed these process names to the NSO spyware.
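This kind of forensic attribution amounts to matching artifacts recovered from a device against a published list of indicators of compromise, the approach used by tools such as Amnesty International’s Mobile Verification Toolkit. As a rough illustration, here is a minimal Python sketch that checks observed process names against the indicators named in the report; the function and the sample input are hypothetical, while the indicator list is taken directly from the names above.

```python
# Minimal, illustrative sketch of indicator-of-compromise matching.
# The indicator list below comes from the Front Line Defenders report;
# the function name and sample input are hypothetical.

# Process names that Amnesty's Security Lab and Citizen Lab have
# attributed to Pegasus, per the report.
PEGASUS_PROCESS_NAMES = {
    "roleaccountd", "stagingd", "xpccfd", "launchafd", "logseld",
    "eventstorpd", "libtouchregd", "frtipd", "corecomnetd", "bh",
    "boardframed",
}

def find_pegasus_indicators(observed_processes):
    """Return the subset of observed process names matching known indicators."""
    return sorted(set(observed_processes) & PEGASUS_PROCESS_NAMES)

if __name__ == "__main__":
    # Hypothetical process list extracted from a device diagnostic log.
    observed = ["locationd", "bh", "springboardd", "launchafd"]
    hits = find_pegasus_indicators(observed)
    if hits:
        print("Possible Pegasus traces: " + ", ".join(hits))
```

Real investigations match far more than process names, including domains, file paths, and crash logs, but the principle is the same.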

Another victim described in the report is Hala Ahed Deeb, a human rights activist and member of the legal team defending the Jordan Teachers’ Syndicate, one of the country’s largest labor unions. The Jordanian government dissolved the union in December 2020 in response to mass protests. Deeb’s phone was compromised by Pegasus on March 16, 2021, according to the report.

Other victims mentioned in the report include Emirati activist Alaa al-Siddiq, Alaraby journalist Rania Dridi, and Al Jazeera broadcast journalist Ghada Oueiss.

The report calls for an “immediate moratorium on the use, sale, and transfer of surveillance technologies produced by private firms until adequate human rights safeguards and regulation is in place” and a “move to take serious and effective measures against surveillance technology providers like NSO Group.”

The post Pegasus Spyware Used Against Dozens of Activist Women in the Middle East appeared first on The Intercept.
