Technology

Google AI Tech Will Be Used for Virtual Border Wall, CBP Contract Shows

Published by Anonymous (not verified) on Thu, 22/10/2020 - 6:06am in

After years of backlash over controversial government work, Google technology will be used to aid the Trump administration’s efforts to fortify the U.S.-Mexico border, according to documents related to a federal contract.

In August, Customs and Border Protection accepted a proposal to use Google Cloud technology to facilitate the use of artificial intelligence deployed by the CBP Innovation Team, known as INVNT. Among other projects, INVNT is working on technologies for a new “virtual” wall along the southern border that combines surveillance towers and drones, blanketing an area with sensors to detect unauthorized entry into the country.

In 2018, Google faced internal turmoil over a contract with the Pentagon to deploy AI-enhanced drone image recognition; employees worried that Google was becoming embroiled in work that could be used for lethal purposes and entangled in other human rights concerns. In response to the controversy, Google ended its involvement with the initiative, known as Project Maven, and established a new set of AI principles to govern future government contracts.

The employees also protested the company’s deceptive claims about the project and attempts to shroud the military work in secrecy. Google’s involvement with Project Maven had been concealed through a third-party contractor known as ECS Federal.

Contracting documents indicate that CBP’s new work with Google is being done through a third-party federal contracting firm, Virginia-based Thundercat Technology. Thundercat is a reseller that bills itself as a premier information technology provider for federal contracts.

The contract was obtained through a FOIA request filed by Tech Inquiry, a new research group that explores technology and corporate power founded by Jack Poulson, a former research scientist at Google who left the company over ethical concerns.

Not only is Google becoming involved in implementing the Trump administration’s border policy, but the contract also brings the company into the orbit of one of President Donald Trump’s biggest boosters among tech executives.

Documents show that Google’s technology for CBP will be used in conjunction with work done by Anduril Industries, a controversial defense technology startup founded by Palmer Luckey. The brash 28-year-old executive — also the founder of Oculus VR, acquired by Facebook for over $2 billion in 2014 — is an open supporter of and fundraiser for hard-line conservative politics; he has been one of the most vocal critics of Google’s decision to drop its military contract. Anduril operates sentry towers along the U.S.-Mexico border that are used by CBP for surveillance and apprehension of people entering the country, streamlining the process of putting migrants in DHS custody.

CBP’s Autonomous Surveillance Towers program calls for automated surveillance operations “24 hours per day, 365 days per year” to help the agency “identify items of interest, such as people or vehicles.” The program has been touted as a “true force multiplier for CBP, enabling Border Patrol agents to remain focused on their interdiction mission rather than operating surveillance systems.”

It’s unclear how exactly CBP plans to use Google Cloud in conjunction with Anduril or for any of the “mission needs” alluded to in the contract document. Google spokesperson Jane Khodos declined to comment on or discuss the contract. CBP, Anduril, and Thundercat Technology did not return requests for comment.

However, Google does advertise powerful cloud-based image recognition technology through its Vision AI product, which can rapidly detect and categorize people and objects in an image or video file — an obvious boon for a government agency planning to string human-spotting surveillance towers across a vast border region.
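To give a concrete sense of how readily that capability can be wired up, here is a minimal sketch using the publicly documented google-cloud-vision Python client to run object localization on a single image. The file name and the surveillance framing are illustrative assumptions, not details drawn from the contract.

from google.cloud import vision

# Minimal sketch: detect and label objects (including people) in one image
# with Google Cloud Vision. Assumes GOOGLE_APPLICATION_CREDENTIALS points
# at a valid service-account key; "tower_frame.jpg" is a hypothetical file.
client = vision.ImageAnnotatorClient()

with open("tower_frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.object_localization(image=image)
for obj in response.localized_object_annotations:
    # Each annotation carries a label such as "Person" or "Car" plus a confidence score.
    print(f"{obj.name}: {obj.score:.2f}")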

According to a “statement of work” document outlining INVNT’s use of Google, “Google Cloud Platform (GCP) will be utilized for doing innovation projects for C1’s INVNT team like next generation IoT, NLP (Natural Language Processing), Language Translation and Andril [sic] image camera and any other future looking project for CBP. The GCP has unique product features which will help to execute on the mission needs.” (CBP confirmed that “Andril” is a misspelling of Anduril.)

The document lists several such “unique product features” offered through Google Cloud, namely the company’s powerful machine-learning and artificial intelligence capabilities. Using Google’s “AI Platform” would allow CBP to leverage the company’s immense computer processing power to train an algorithm on a given set of data so that it can make educated inferences and predictions about similar data in the future.
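The “train, then infer” pattern described above is the basic supervised-learning loop. The toy example below, which uses scikit-learn on synthetic numbers rather than anything resembling CBP data, shows the shape of it: fit a model on labeled examples, then ask it to classify new, similar data.

from sklearn.linear_model import LogisticRegression

# Toy illustration of the train-then-infer pattern on synthetic data:
# fit a classifier on labeled examples, then predict labels for unseen inputs.
X_train = [[0.1], [0.2], [0.8], [0.9]]  # features of four labeled examples
y_train = [0, 0, 1, 1]                  # their labels

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[0.85]]))  # an educated inference about similar data: [1]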

Google’s Natural Language product uses the company’s machine learning resources “to reveal the structure and meaning of text … [and] extract information about people, places, and events,” according to company marketing materials, a technology that can be paired with Google’s speech-to-text transcription software “to extract insights from audio conversations.”
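That entity-extraction capability is also exposed through a public client library. Here is a minimal sketch, assuming the standard google-cloud-language Python package and an invented placeholder sentence.

from google.cloud import language_v1

# Minimal sketch: extract entities (people, places, events) from text with
# the Google Cloud Natural Language API. The sample sentence is a placeholder.
client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="Two vehicles stopped near the river crossing at dawn.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_entities(document=document)
for entity in response.entities:
    # type_ is an enum such as PERSON or LOCATION; salience ranks prominence.
    print(entity.name, language_v1.Entity.Type(entity.type_).name, entity.salience)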

[Embedded document: www.documentcloud.org/documents/7273640]

Although it presents no physical obstacle, Anduril’s “virtual wall” system works by rapidly identifying anyone approaching or attempting to cross the border (or any other perimeter) and relaying their exact location to border authorities on the ground, offering a relatively cheap, technocratic, and less politically fraught means of thwarting would-be migrants.

Proponents of a virtual wall have long argued that such a solution would be a cost-effective way to increase border security. The last major effort, known as SBInet, was awarded to Boeing during the George W. Bush administration, and resulted in multibillion-dollar cost overruns and technical failures. In recent years, both leading Democrats and Republicans in Congress have favored a renewed look at technological solutions as an alternative to a physical barrier along the border.

Anduril surveillance offerings consist of its “Ghost” line of autonomous helicopter drones operated in conjunction with Anduril “Sentry Towers,” which bundle cameras, radar antennae, lasers, and other sophisticated sensors atop an 80-foot pole. Surveillance imagery from both the camera-toting drones and sensor towers is ingested into “Lattice,” Anduril’s artificial intelligence software platform, where the system automatically flags suspicious objects in the vicinity, like cars or people.

INVNT’s collaboration with Anduril is described in a 2019 presentation by Chris Pietrzak, deputy director of CBP’s Innovation Team, which listed “Anduril towers” among the technologies being tested by the division that “will enable CBP operators to execute the mission more safely and effectively.”

[Embedded document: www.documentcloud.org/documents/7273652]

And a 2018 Wired profile of Anduril noted that one sentry tower test site alone “helped agents catch 55 people and seize 982 pounds of marijuana” in a 10-week span, though “for 39 of those individuals, drugs were not involved, suggesting they were just looking for a better life.” The version of Lattice shown off for Wired’s Steven Levy appeared to already implement some AI-based object recognition similar to what Google provides through the Cloud AI system cited in the CBP contract.

The documents do not spell out how, exactly, Google’s object recognition tech would interact with Anduril’s technology. But Google has excelled in the increasingly competitive artificial intelligence field; creating a computer system from scratch capable of quickly and accurately interpreting complex image data without human intervention requires an immense investment of time, money, and computer power to “train” a given algorithm on vast volumes of instructional data.

“We see these smaller companies who don’t have their own computational resources licensing them from those who do, whether it be Anduril with Google or Palantir with Amazon,” Meredith Whittaker, a former Google AI researcher who previously helped organize employee protests against Project Maven and went on to co-found NYU’s AI Now Institute, told The Intercept.

“This cannot be viewed as a neutral business relationship. Big Tech is providing core infrastructure for racist and harmful border regimes,” Whittaker added. “Without these infrastructures, Palantir and Anduril couldn’t operate as they do now, and thus neither could ICE or CBP. It’s extremely important that we track these enabling relationships, and push back against the large players enabling the rise of fascist technology, whether or not this tech is explicitly branded ‘Google.’”

Anduril is something of an outlier in the American tech sector, as it loudly and proudly courts controversial contracts that other larger, more established companies have shied away from. The company also recruited heavily from Palantir, another tech company with both controversial anti-immigration government contracts and ambitions of being the next Raytheon. Both Palantir and Anduril share a mutual investor in Peter Thiel, a venture capitalist with an overtly nationalist agenda and a cozy relationship with the Trump White House. Thiel has donated over $2 million to the Free Forever PAC, a political action group whose self-professed mission includes, per its website, working to “elect candidates who will fight to secure our border [and] create an America First immigration policy.”

Luckey has repeatedly excoriated Google for abandoning the Pentagon, a decision he has argued was driven by “a fringe inside of their own company” that risks empowering foreign adversaries in the race to adopt superior AI military capabilities. In comments last year, he dismissed any concern that the U.S. government could abuse advanced technology and criticized Google employees who signed a letter protesting the company’s involvement in Project Maven over ethical and moral concerns.

“You have Chinese nationals working in the Google London office signing this letter, of course they don’t mind if the United States has good military technology,” said Luckey, speaking at the University of California, Irvine. “Of course they don’t mind if China has better technology. They’re Chinese.”

As The Intercept previously reported, even as Luckey publicly campaigned against Google’s withdrawal from Project Maven, his company quietly secured a contract for the very same initiative.

Anduril’s advanced line of battlefield drones and surveillance towers — along with its eagerness to take defense contracts now viewed as too toxic to touch by rival firms — has earned it lucrative contracts with the Marine Corps and Air Force, in addition to its Homeland Security work. In a 2019 interview with Bloomberg, Anduril chair Trae Stephens, also a partner at Thiel’s venture capital firm, dismissed the concerns of American engineers who complain about weapons work. “They said, ‘We didn’t sign up to develop weapons,’” Stephens said, explaining, “That’s literally the opposite of Anduril. We will tell candidates when they walk in the door, ‘You are signing up to build weapons.’”

Palmer Luckey has not only campaigned for more Silicon Valley integration with the military and security state but has also pushed hard to influence the political system. The Anduril founder, records show, has personally donated at least $1.7 million to Republican candidates this cycle. On Sunday, he hosted President Donald Trump at his home in Orange County, Calif., for a high-dollar fundraiser, along with former U.S. ambassador to Germany Richard Grenell, Kimberly Guilfoyle, and other Trump campaign luminaries.

Anduril’s lobbyists in Congress also pressed lawmakers to include increased funding for the CBP Autonomous Surveillance Tower program in the DHS budget this year, a request that was approved and signed into law. In July, around the time the program funding was secured, the Washington Post reported that the Trump administration deemed Anduril’s virtual wall system a “program of record,” a “technology so essential it will be a dedicated item in the homeland security budget,” reportedly worth “several hundred million dollars.”

The autonomous tower project awarded to Anduril and funded through CBP is reportedly worth $250 million. Records show that $35 million for the project was disbursed in September by the Air and Marine division, which also operates drones.

Anduril’s approach contrasts sharply with Google’s. In 2018, Google tried to quell concerns over how its increasingly powerful AI business could be literally weaponized by publishing a list of “AI Principles” with the imprimatur of CEO Sundar Pichai.

“We recognize that such powerful technology raises equally powerful questions about its use,” wrote Pichai, adding that the new principles “are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.” Chief among the new principles were directives to “Be socially beneficial,” “Avoid creating or reinforcing unfair bias,” and a mandate to “continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.”

The principles include a somewhat vague list of “AI applications we will not pursue,” such as “Technologies that cause or are likely to cause overall harm,” “weapons,” “surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

It’s difficult to square these commitments to peaceful, nonsurveillance AI humanitarianism with a contract that places Google’s AI power behind both a military surveillance contractor and a government agency internationally condemned for human rights violations. Indeed, in 2019, over 1,000 Google employees signed a petition demanding that the company abstain from providing its cloud services to U.S. immigration and border patrol authorities, arguing that “by any interpretation, CBP and ICE are in grave violation of international human rights law.”

“This is a beautiful lesson in just how insufficient this kind of corporate self-governance really is,” Whittaker told The Intercept. “Yes, they’re subject to these AI principles, but what does subject to a principle mean? What does it mean when you have an ethics review process that’s almost entirely non-transparent to workers, let alone the public? Who’s actually making these decisions? And what does it mean that these principles allow collaboration with an agency currently engaged in human rights abuses, including forced sterilization?”

“This reporting shows that Google is comfortable with Anduril and CBP surveilling migrants through their Cloud AI, despite their AI Principles claims to not causing harm or violating human rights,” said Poulson, the founder of Tech Inquiry.

“Their clear strategy is to enjoy the high profit margin of cloud services while avoiding any accountability for the impacts,” he added.

Twitter Surveillance Startup Targets Communities of Color for Police

Published by Anonymous (not verified) on Thu, 22/10/2020 - 3:55am in

Tags 

Technology

New York startup Dataminr aggressively markets itself as a tool for public safety, giving institutions from local police to the Pentagon the ability to scan the entirety of Twitter using sophisticated machine-learning algorithms. But company insiders say their surveillance efforts were often nothing more than garden-variety racial profiling, powered not primarily by artificial intelligence but by a small army of human analysts conducting endless keyword searches.

In July, The Intercept reported that Dataminr, leveraging its status as an official “Twitter Partner,” surveilled the Black Lives Matter protests that surged across the country in the wake of the police killing of George Floyd. Dataminr’s services were initially designed to help hedge funds turn the first glimmers of breaking news on social media into market-beating trades, enabling something like a supercharged version of professional Twitter dashboard TweetDeck. They have since been adopted by media outlets, the military, police departments, and various other organizations seeking real-time alerts on chaos and strife.

Dataminr’s early backers included Twitter and the CIA, and it’s not hard to see why the startup looked so promising to investors. Modern American policing hungers for vast quantities of data — leads to chase and intelligence to aggregate — and the entirety of online social media is now considered fodder. In a 2019 pitch to the FBI, Dataminr said its goal was “to integrate all publicly available data signals to create the dominant information discovery platform.” In addition to the bureau, the company has entered test programs and contracts with local and state police forces across the country.

But despite promises of advanced crime-sniffing technology, conversations with four sources directly familiar with Dataminr’s work, who asked to remain anonymous because they were not permitted to speak to the press about their employment, suggest that the company has at times relied on prejudice-prone tropes and hunches to determine who, where, and what looks dangerous. Through First Alert, its app for public sector clients, Dataminr has offered a bespoke, scariest possible version of the web: a never-ending stream of notifications of imminent or breaking catastrophes to investigate. But First Alert’s streams were assembled in ways prone to racial bias, sources said, by teams of “Domain Experts” assigned to rounding up as many “threats” as possible. Hunting social media for danger and writing alerts for cops’ iPhones and laptop screens, these staffers brought their prejudices and preconceptions along with their expertise, and were pressed to search specific neighborhoods, streets, and even housing complexes for crime, sources said.

Dataminr said in a written comment, provided by Kerry McGee of public relations firm KWT Global, that it “rejects in the strongest possible terms the suggestion that its news alerts are in any way related to the race or ethnicity of social media users,” and claimed, as Dataminr has in the past, that the firm’s practice of monitoring the speech and activities of individuals without their knowledge, on behalf of the police, does not constitute surveillance. McGee added that “97% of our alerts are generated purely by AI without any human involvement.” McGee did not provide clarification about how many of Dataminr’s police-bound alerts — as opposed to other Dataminr alerts, like those created for news organizations and corporate clients — are created purely through “AI,” and sources contacted for this article were befuddled by the 97 percent figure.

Hunting for “Possible Gang Members” on Twitter

One significant part of Dataminr’s work for police, the sources said, has been helping flag potential gang members. Police gang databases are typically poorly regulated and have become notorious vehicles for discriminatory policing, unjust sentencing, and the criminalization of children; they’re filled with the names of thousands and thousands of young people never actually accused of any crime. Dataminr sources who spoke to The Intercept didn’t know exactly how allegedly “gang-related” tweets and other social media posts flagged via Dataminr were ultimately used by the company’s police customers. But in recent years, social media monitoring has become an important way to fill gang databases.

As part of a broader effort to feed information about crime to police under the general rubric of public “threats,” Dataminr staffers attempted to flag potential violent gang activity without the aid of any special algorithms or fancy software, sources said; instead they pored over thousands and thousands of tweets, posts, and pictures, looking for armed individuals who appeared to be affiliated with a gang. It’s an approach that was neither an art nor a science and, according to experts in the field, is also a surefire way of putting vulnerable men and women of color under police scrutiny or worse.

“It wasn’t specific,” said one Dataminr source with direct knowledge of the company’s anti-gang work. “Anything that could be tangentially described as a [gang-related] threat” could get sucked into Dataminr’s platform.

With no formal training provided on how to identify or verify gang membership, Dataminr’s army of “Domain Experts” were essentially left to use their best judgment, or to defer to ex-cops on staff. If Dataminr analysts came across, say, a tweet depicting a man with a gun and some text that appeared to be gang-related, that could be enough to put the posting in a police-bound stream as containing a “possible gang member,” this source said, adding that there was little if any attempt to ever check whether such a weapon was legally possessed or obtained.

In practice, Dataminr’s anti-gang activity amounted to “white people, tasked with interpreting language from communities that we were not familiar with” coached by predominantly white former law enforcement officials who themselves “had no experience from these communities where gangs might be prevalent,” per a second source. “The only thing we were using to identify them was hashtags, possibly showing gang signs, and if there was any kind of weapon in the photo,” according to the first source. There was “no institutional definition of ‘potential gang member,’ it was open to interpretation.” All that really mattered, these sources say, was finding as much danger as possible, real or perceived, and transmitting it to the police.

In its written comments, Dataminr stated that “First Alert does not identify indicators of violent gang association or identify whether an event is a crime.” Asked whether the company acknowledges providing any gang-related alerts or comments to customers, McGee did not directly respond, saying only that “there is no alert topic for crime or gang-related events.” Dataminr did not respond to a question about the race of former law enforcement personnel it employs.

A Dataminr source said that there never appeared to be any minimum age on who was flagged as a potential gang affiliate: “I can definitely recall kids of school-age nature, late middle school to high school” being ingested into Dataminr’s streams. Unlike Dataminr’s work identifying emerging threats in Europe or the Middle East, the company’s counter-gang violence monitoring felt slapdash by comparison, two Dataminr sources said. “There’s a great deal of latitude in determining [gang membership], it wasn’t like other kind of content, it was far more nebulous,” said the first source, who added that Dataminr staff were at times concerned that the pictures they were flagging as evidence of violent gang affiliation could be mere adolescent tough-guy posturing, completely out of context, or simply dated: “We had no idea how old they were,” the source added. “People save [and repost] photos. It was completely open to interpretation.”

While any image depicting a “possible gang member” with a weapon would immediately be flagged and transmitted to the police, Dataminr employees, tasked with finding “threats” nationwide, worried why some armed men were subject to software surveillance while others were not. “The majority of the focus stayed toward gangs that are historically black and Latino,” said one source. “More effort was put into inner-city Chicago gangs than the Three Percenters or things related to Aryan Brotherhood,” this source continued, adding that they recalled worried conversations with colleagues about why the company spent so much time finding online images of armed black and brown people — who may have owned or possessed such a weapon legally — but not white people with guns.

Two Dataminr sources directly familiar with these operations told The Intercept that although the company’s teams of Domain Experts were untrained and generally uninformed on the subject of American street gangs, the company employed ex-law enforcement agents as in-house “gang experts” to help scan social media.

Human Stereotypes Instead of Machine Intelligence

Although Dataminr has touted itself as an “AI” firm, two company sources told The Intercept this overstated matters, and that most of the actual monitoring at the company was done by humans scrolling, endlessly, through streams of tweets. “They kept saying ‘the algorithm’ was doing everything,” said a Dataminr source, but “it was actually mostly humans.” But this large staff of human analysts was still expected to deliver the superhuman output of an actual product based on some sort of “artificial intelligence” or sophisticated machine learning. Inadequate training combined with strong pressure to crank out content to meet internal quotas and impress police clientele dazzled by “artificial intelligence” presentations led to predictable problems, the two sources said. The company approach to crime fighting began to resemble “creating content in their heads that isn’t there,” said the second source, “thinking Dataminr can predict the future.”

As Dataminr can’t in fact predict crime before it occurs, these sources say that analysts often fell back on stereotyped assumptions, with the company going so far as providing specific guidance to seek crime in certain areas, with the apparent assumption that the areas were rife with criminality. Neighborhoods with large communities of color, for example, were often singled out for social media surveillance in order to drum up more threat fodder for police.

“It was never targeted towards other areas in the city, it was poor, minority-populated areas,” explained one source. “Minneapolis was more focused on urban areas downtown, but weren’t focusing on Paisley Park — always ‘downtown areas,’ areas with projects.”

The two sources told The Intercept that Dataminr had at times asked analysts to create information feeds specific to certain housing projects populated predominantly by people of color, seeming to contradict the company’s 2016 claim that it does not provide any form of “geospatial analysis.” “Any sort of housing project, bad neighborhood, bad intersection, we would definitely put those in the streams,” explained one source. “Any sort of assumed place that was dangerous. It was up to the Domain Experts. It was just trial and error to see what [keywords] brought things up. Dataminr obviously didn’t care about unconscious bias, they just wanted to get the crimes before anyone else.”

Two Dataminr sources familiar with the company’s Twitter search methodology explained that although Dataminr isn’t able to provide its clients with direct access to the locational coordinates sometimes included in tweet metadata, the company itself still uses location metadata embedded in tweets, and is able to provide workarounds when asked, offering de facto geospatial analysis. At times this was accomplished using a simple keyword search through the company’s access to the Twitter “firehose,” a data stream containing every public tweet from the moment it’s published. Keyword-based trawling would immediately alert Dataminr anytime anyone tweeted publicly about a particular place. “Any time that Malcolm X Boulevard was mentioned, we would be able to see it” in a given city, explained one source by way of a hypothetical.
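Mechanically, this kind of trawling requires nothing more sophisticated than a substring match applied to each post as it arrives. The sketch below is a hypothetical illustration of that logic, not Dataminr’s code; the keyword set, the stream format, and the alerting step are all invented stand-ins, with the street name borrowed from the source’s own hypothetical.

# Hypothetical illustration of keyword-based trawling over a stream of public
# posts. The keywords and data shapes are invented stand-ins, not Dataminr's.
PLACE_KEYWORDS = {"malcolm x boulevard"}

def matching_posts(stream, keywords):
    """Yield every post whose text mentions any watched keyword."""
    for post in stream:
        text = post["text"].lower()
        if any(keyword in text for keyword in keywords):
            yield post

# Every public mention of the watched place becomes a candidate alert.
sample_stream = [{"text": "Traffic stopped on Malcolm X Boulevard this morning."}]
for hit in matching_posts(sample_stream, PLACE_KEYWORDS):
    print("candidate alert:", hit["text"])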

Dataminr wrote in its statement to The Intercept that “First Alert identifies breaking news events without any regard to the racial or ethnic composition of an area where a breaking news event occurs. … Race, ethnicity, or any other demographic characteristic of the people posting public social media posts about events is never part of determining whether a breaking news alert is sent to First Alert clients.” It also said that “First Alert does not enable any type of geospatial analysis. First Alert provides no feature or function that allows a user to analyze the locations of specific social media posts, social media users or plot social media posts on a map.”

Asked if Dataminr domain experts look for social media leads specific to certain geographic areas, McGee did not deny that they do, writing only, “Dataminr detects events across the entire world wherever they geographically occur.”

On other occasions, according to one source, Dataminr employed a “pseudo-predictive algorithm” that scrapes a user’s past tweets for clues about their location, though they emphasized this tool functioned with “not necessarily any degree of accuracy.” This allows Dataminr to build, for example, bespoke in-house surveillance streams of potential “threats” pegged to areas police wish to monitor (for instance, if a police department wanted more alerts about threatening tweets from or about Malcolm X Boulevard, or a public housing complex). These sources stressed that Dataminr would try to provide these customized “threat” feeds whenever asked by police clients, even as staff worried it amounted to blatant racial profiling and the propagation of law enforcement biases about where crimes were likely to be committed.
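The sources gave no detail on how that “pseudo-predictive algorithm” works. One crude way to approximate the behavior they describe, and to see why it is unreliable, is to count place-name mentions across a user’s past posts, as in this hypothetical sketch; the place list and inference rule are invented for illustration.

from collections import Counter

# Hypothetical sketch: guess a user's likely area by counting place-name
# mentions in past posts. Illustrates the unreliability the sources describe:
# it conflates talking about a place with actually being there.
KNOWN_PLACES = ["downtown", "malcolm x boulevard", "south side"]

def guess_location(past_posts):
    counts = Counter(
        place
        for post in past_posts
        for place in KNOWN_PLACES
        if place in post.lower()
    )
    # Return the most-mentioned place, or None if nothing matched.
    return counts.most_common(1)[0][0] if counts else None

print(guess_location(["Heading downtown later", "Downtown traffic is awful"]))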

Dataminr told The Intercept in response that “First Alert provides no custom solutions for any government organizations, and the same First Alert product is used by all government organizations. All First Alert customers have access to the same breaking news alerts.”

Even if public sector customers use the same version of the First Alert app, the company itself has indicated that the alerts provided to customers could be customized: Its 2019 presentation to the FBI includes a slide stating that clients can adjust “user-defined criteria” like “topic selection” and “geographic filters” prior to “alert delivery.” Shown the below slide from the presentation, Dataminr said it was consistent with its statement.

The specially crafted searches focused on areas of interest to police were done “mainly looking for criminal incidents in those areas,” one source explained. When asked by police departments to find criminality on social media, “areas that were predominantly considered more white” were routinely overlooked, while poorer neighborhoods of color were mined for crime content.

Another source told The Intercept of an internal project they were placed on as part of a trial relationship with the city government of Chicago, for which they were instructed to scan Twitter for “Entertainment news from the North Side, crime news from the South Side.” (It is not clear if these instructions came from the city of Chicago; the Chicago Police Department did not respond to a request for comment.)

This source explained that through its efforts to live up to the self-created image as an engine of bleeding-edge “intelligence” about breaking events, “Dataminr is in a lot of ways regurgitating whatever the Domain Experts believe people want to see or hear” — those people in this case being the police. This can foster a sort of feedback loop of racial prejudice: stereotyped assumptions of what sort of keyword searches and locales might yield evidence of criminality are then used to bolster the stereotyped assumptions of American police. “In a way, Dataminr and law enforcement were perpetuating each other’s biases,” the source said, forming a sort of Twitter-based perpetual motion machine of racial confirmation bias: “We would make keyword-based streams [for police] with biased keywords, then law enforcement would tweet about the crimes, then we would pick up those tweets.”

Experts Alarmed by Techniques

Experts on criminal justice, gang violence, and social media approached for this story expressed concern that Dataminr’s surveillance services have carried racially prejudiced policing methods onto the internet. “I thought there was enough info out there to tell people to not do this,” Desmond Patton, a professor and researcher on gang violence and the web at Columbia University’s School of Social Work, told The Intercept. Social media surveillance-based counter-gang efforts routinely miss any degree of nuance or private meaning, explained Patton, instead relying on the often racist presumption that “if something looks a certain way it must mean something,” an approach that attempts “no contextual understanding of how emoji are used, how hashtags are used, [which] misses whole swaths of deep trauma and pain” in policed communities.

Babe Howell, a professor at CUNY School of Law and a criminal justice scholar, shared this concern over context-flattening Twitter surveillance and the lopsided assessment of who looks dangerous. “Most adolescents experiment with different kinds of personalities,” said Howell, explaining that using “the artistic expression, the musical expression, the posturing and bragging and representations of masculinities in marginalized communities” as a proxy for possible criminality is far worse than useless. “For better or worse we have the right to bear arms, and using photos including images of weapons to collect information about people based on speech and associations just imposes one wrong on the next and two wrongs do not make a right.”

Howell said the potential damage caused by labeling someone a “possible gang member,” whether in a formal database or not, is very real. Labeling someone as gang-affiliated leads to what Howell described as “two systems of justice that are separate and unequal,” because “if someone is accused of being a gang member on the street they will be policed with heightened levels of tension, often resulting in excessive force. In the criminal justice system they’ll be denied bail, speedy trial rights, typical due process rights, because they’re seen as more of a threat. Gang allegations carry this level of prejudicial bad character evidence that would not normally be admissible.”

All of this reflects crises of American overpolicing that far predate computers, let alone Twitter. But systematized social media surveillance will only accelerate these inequities, said Ángel Díaz, a lawyer and researcher at the Brennan Center for Justice. “Communities of color use social media in ways that are readily misunderstood by outsiders,” explained Díaz. “People also digitally brand themselves in ways that can be disconnected from reality. Online puffery about gang affiliation can be done for a variety of reasons, from chasing notoriety to deterring real-world violence. For example, a person might take photos with a borrowed gun and later post them to social media over the course of a week to create a fake persona and intimidate rivals.” Similarly fraught was Dataminr’s practice of homing in on certain geographical areas: “Geo-fencing around poor neighborhoods and communities of color only aggravates this potential by selectively looking for suspicious behavior in places they’re least equipped to understand.”

Of course, both Twitter and Dataminr vehemently maintain that the service they offer — monitoring many different social networks simultaneously for any information that might be of interest to police, including protests — does not constitute surveillance, pointing to Twitter’s strict prohibitions against surveillance by partners. “First Alert does not provide any government customers with the ability to target, monitor or profile social media users, perform geospatial, link or network analysis, or conduct any form of surveillance,” Dataminr wrote to The Intercept.

But it’s difficult to wrap one’s head around these denials, given that Twitter’s anti-surveillance policy reads like a dry, technical description of exactly what Dataminr is said to have engaged in. Twitter’s developer terms of service — which govern the use of the firehose — expressly prohibit using tweets for “conducting or providing surveillance or gathering intelligence,” and order developers to “Never derive or infer, or store derived or inferred, information about a Twitter user’s … [a]lleged or actual commission of a crime.”

Twitter spokesperson Lindsay McCallum declined to answer any questions about Dataminr’s surveillance practices, but stated “Twitter prohibits the use of our developer services for surveillance purposes. Period.” McCallum added that Twitter has “done extensive auditing of Dataminr’s tools, including First Alert, and have not seen any evidence that they’re in violation of our policies,” but declined to discuss this audit on the record.

“Twitter’s policy does not line up with its actions,” according to Díaz. “Dataminr is clearly using the Twitter API to conduct surveillance on behalf of police departments, and passing along what it finds in the form of ‘news alerts.’ This is a distinction without difference. Conducting searches of Twitter for leads about potential gang activity, much like its monitoring of Black Lives Matter protests, is surveillance. Having Dataminr analysts run searches and summarize their findings before passing it along to police doesn’t change this reality.”

Dataminr’s use of the Twitter firehose to infer gang affiliation is “totally terrifying,” said Forrest Stuart, a sociologist and head of the Stanford Ethnography Lab, who explained that even for an academic specialist with a career of research and field work spent understanding the way communities express themselves on social media, grasping the intricacies of someone else’s self-expression can be fraught. “There are neighborhoods that are less than a mile away from the neighborhoods where I have intimate knowledge, where if I open up their Twitter accounts, I trust myself to get a pretty decent sense of what their hashtags and their phrases mean,” Stuart said. “But I know that I am still inaccurate because I’m not there in that community. So, if I am concerned, as a researcher who specializes in this stuff … then you can imagine my concern in hearing that police officers are using this.”

Stuart added that “research has long shown that police officers really lack the kind of cultural competencies and knowledge that’s required for understanding the kinds of behavioral and discursive practices, aesthetic practices, taken up by urban black and brown youth,” but that “here in this Dataminr example, you’re not talking about cops, you’re now talking about private individuals [who] lack the even basic knowledge that officers are coming from, some knowledge of criminal behavior or some knowledge of gang behavior.”

Stuart believes Twitter owes its over 100 million active users, at the very least, a warning that their tweets might become fodder for a semi-automated crime dragnet, explaining that he himself uses the Twitter firehose for his ethnographic research, but had to first consent to a substantial data usage agreement aimed at minimizing harm to the people whose tweets he might study — guidelines that Dataminr doesn’t appear to have been held to. “If it doesn’t violate Twitter’s conditions by letter, doesn’t it violate them at least in the essence of what Twitter’s trying to do?” he asked. “Aren’t the terms and conditions set up so that Twitter isn’t leading to negative impacts or negative treatment of people? At minimum, if they’re gonna continue feeding stuff to Dataminr and stuff to police, don’t they have some kind of responsibility, at least an ethical obligation, to let [users] know that ‘Hey, some of your information is going to cops’?” When asked whether Twitter would ever provide such a notice to users, spokesperson McCallum provided a link to a section of the Twitter terms of service that makes no mention of police or law enforcement.

No Flesh Is Spared in Richard Stanley’s H.P. Lovecraft Adaptation.

Well, almost none. There is one survivor. Warning: Contains spoilers.

Color out of Space, directed by Richard Stanley, script by Richard Stanley and Scarlett Amaris. Starring

Nicolas Cage … Nathan Gardner,

Joely Richardson… Theresa Gardner,

Madeleine Arthur… Lavinia Gardner

Brendan Meyer… Benny Gardner

Julian Meyer… Jack Gardner

Elliot Knight… Ward

Tommy Chong… Ezra

Josh C. Waller… Sheriff Pierce

Q’orianka Kilcher… Mayor Tooma

This is a welcome return to big-screen cinema for South African director Richard Stanley. Stanley was responsible for the cult SF cyberpunk flick Hardware, about a killer war robot running amok in an apartment block in a future devastated by nuclear war and industrial pollution. It’s a great film, but its striking similarities to a story in 2000AD resulted in him being successfully sued by the comic for plagiarism. Unfortunately, he hasn’t made a major film for the cinema since he was sacked as director during the filming of the ’90s adaptation of The Island of Doctor Moreau. The film came close to collapse and was eventually completed by John Frankenheimer. A large part of the chaos was due to the bizarre, irresponsible and completely unprofessional behaviour of the two main stars, Marlon Brando and Val Kilmer.

Previous Lovecraft Adaptations

Stanley’s been a fan of Lovecraft ever since he was a child, when his mother read him the short stories. There have been many attempts to translate old Howard Phillips’ tales of cosmic horror to the big screen, but few have been successful. The notable exceptions include Stuart Gordon’s Re-Animator, From Beyond and Dagon. Re-Animator and From Beyond were ’80s pieces of gleeful splatter, based very roughly – and that is very roughly – on the short stories Herbert West – Reanimator and From Beyond. These eschewed the atmosphere of eerie, unnatural terror of the original stories for over-the-top special effects, with zombies and predatory creatures from other realities running out of control. Dagon came out in the early years of this century. It was a more straightforward adaptation of The Shadow Over Innsmouth, transplanted to Spain. It generally followed the plot of the original short story, though at the climax there was a piece of nudity and gore that certainly wasn’t in Lovecraft.

Plot

Color out of Space is based on the short story of the same name. It takes some liberties, as do most movie adaptations, but tries to preserve the genuinely eerie atmosphere of otherworldly horror of the original, as well as include some of the other quintessential elements of Lovecraft’s horror from his other works. The original short story is told by a surveyor, come to that part of the American backwoods in preparation for the construction of a new reservoir. The land is blasted and blighted, poisoned by a meteorite that came down years before. The surveyor recounts what he has been told about this by Ammi Pierce, an old man. The meteorite landed on the farm of Nahum Gardner and his family, slowly poisoning them and twisting their minds and bodies, as it poisons and twists the land around them.

In Stanley’s film, the surveyor is Ward, a Black hydrologist from Lovecraft’s Miskatonic University. He also investigates the meteorite, which in the story is done by three scientists from the university. The movie begins with shots of the deep American forest accompanied by a soliloquy by Ward, which is a direct quote from the story’s beginning. It ends with a similar soliloquy, which is largely the invention of the scriptwriters, but which also contains a quote from the story’s ending about the meteorite coming from unknown realms. Lovecraft was, if not the creator of cosmic horror, then certainly its foremost practitioner. Lovecraftian horror is centred around the horrifying idea that humanity is an insignificant, transient creature in a vast, incomprehensible and utterly uncaring if not actively hostile cosmos. Lovecraft was also something of an enthusiast for the history of New England, and the opening shots of the terrible grandeur of the American wilderness put the film in the tradition of America’s Puritan settlers. These saw themselves as Godly exiles, like the Old Testament Israelites, in a wilderness of supernatural threat.

The film centres on the gradual destruction of Nathan Gardner and his family – his wife, Theresa, daughter Lavinia, and sons Benny and Jack – as their minds and bodies are poisoned and mutated by the strange meteorite and its otherworldly inhabitant, the mysterious Color of the title. Which is a kind of fuchsia. Its rich colour recalls the deep reds Stanley uses to paint the poisoned landscape of Hardware. Credit is due to the director of photography, Steve Annis, as the film and its opening vista of the forest look beautiful. The film’s eerie, electronic score, composed by Colin Stetson, suits the movie’s tone exactly.

Other Tales of Alien Visitors Warping and Mutating People and Environment

Color out of Space comes after a number of other SF tales based on the similar idea of an extraterrestrial object or invader that twists and mutates the environment and its human victims. This includes the TV series The Expanse, in which humanity is confronted by the threat of a protomolecule sent into the solar system by unknown aliens. Then there was the film Annihilation, about a group of women soldiers sent into the zone of mutated beauty and terrible danger created by an unknown object that has crashed to Earth and now threatens to overwhelm it. It also recalls John Carpenter’s cult horror movie, The Thing, in the twisting mutations and fusing of animal and human bodies. In the original story, Gardner and his family are reduced to emaciated, ashen creatures. It could be a straightforward description of radiation poisoning, and indeed that is how some of the mutated animal victims of the Color are described in the film. But the film’s mutation and amalgamation of the Color’s victims is much more like that of Carpenter’s Thing as it infects its victims. The scene in which Gardner discovers the fused mass of his alpacas out in the barn recalls the scene in Carpenter’s earlier flick where the members of an American Antarctic base discover their infected dogs in the kennel. In another moment of terror, the Color blasts Theresa as she clutches Jack, fusing them together. It’s a piece of body horror like the split-faced corpse in Carpenter’s The Thing, the merged mother and daughter in Yuzna’s Society, and the fused humans in The Thing’s 2011 prequel. But it’s made Lovecraftian by the whimpering and gibbering noises the fused couple make, noises that appear in much Lovecraftian fiction.

Elements from Other Lovecraft Fiction

In the film, Nathan Gardner is a painter, who has taken his family back to live on his father’s farm. This is a trope from other Lovecraft short stories, in which the hero goes back to his ancestral home, such as the narrator of The Rats in the Walls. The other characters are also updated to give a modern, or postmodern, twist. Gardner’s wife, Theresa, is a high-powered financial advisor, speaking to her clients from the farm over the internet. The daughter, Lavinia, is a practicing witch of the Wiccan variety. She is entirely benign, however, casting spells to save her mother from cancer, and to get her away from the family. In Lovecraft, magic and its practitioners are an active threat, using their occult powers to summon the ancient and immeasurably evil gods they worship, the Great Old Ones. This is a positive twist for the New Age/Goth generations.

There’s a similar, positive view of the local squatter. In Lovecraft, the squatters are barely human White trash heading slowly back down the evolutionary ladder through poverty and inbreeding. The film’s squatter, Ezra, is a tech-savvy former electrician using solar power to live off-grid. But there’s another touch here which recalls another of Lovecraft’s classic stories. Investigating what may have become of Ezra, Ward and Pierce discover him motionless, possessed by the Color. However, he speaks to them about the Color and the threat it presents through a tape recorder. This is similar to the voices of the disembodied human brains preserved in jars by the Fungi from Yuggoth, speaking through electronic apparatus in Lovecraft’s The Whisperer in Darkness. Visiting Ezra earlier in the film, Ward finds him listening intently to the aliens from the meteorite that have now taken up residence under the Earth. This also seems to be a touch taken from Lovecraft’s fiction, which features mysterious noises and cracking sounds from under the ground. Near the climax Ward catches a glimpse, through an enraptured Lavinia, of the alien, malign beauty of the Color’s homeworld. This follows the logic of the story, but also seems to hark back to the alien vistas glimpsed by the narrator in The Music of Erich Zann. And of course it wouldn’t be a Lovecraft movie without the appearance of the abhorred Necronomicon. It is not, however, the Olaus Wormius edition, but a modern paperback, used by Lavinia as she desperately invokes the supernatural for protection.

Fairy Tale and Ghost Story Elements

Other elements in the movie seem to come from other literary sources. The Color takes up residence in the farm’s well, from which it speaks to the younger son, Jack. Later, Benny, the elder son, tries to climb down it in an attempt to rescue their dog, Sam, during which he is also blasted by the Color. When Ward asks Gardner what has happened to them all, he is simply told that they’re all present, except Benny, who lives in the well now. This episode is similar to the creepy atmosphere of children’s fairy tales, the ghost stories of M.R. James and Walter de la Mare’s poems, which feature ghostly entities tied to specific locales.

Oh yes, and there’s also a reference to Stanley’s own classic film, Hardware. When they enter Benny’s room, the phrase ‘No flesh shall be spared’ can be glimpsed on his wall. This is a quote from Mark’s Gospel, which was used as the opening text and slogan in the earlier movie.

The film is notable for its relatively slow start, taking care to introduce the characters and build up atmosphere. This is in stark contrast to the frenzied action in other, recent SF flicks, such as J.J. Abrams’ Star Trek reboots and Michael Bay’s Transformers. The Color first begins having its malign effects by driving the family slowly mad. Theresa accidentally cuts off the ends of her fingers slicing vegetables in the kitchen as she falls into a trance. Later on, Lavinia starts cutting herself as she performs her desperate ritual calling for protection. And Jack, and later Gardner, sit enraptured looking at the television, vacant except for snow, behind which is just the hint of something. That seems to go back to Spielberg’s movie Poltergeist, but it’s also somewhat like the hallucinatory scene in Hardware in which the robot attacks the hero from behind a television showing fractal graphics.

Finally, the Color destroys the farm and its environs completely, blasting it and its human victims to ash. The film ends with Ward contemplating the new reservoir, hoping the waters will bury it all very deep. But even then, he will not drink its water.

Lovecraft and Racism

I really enjoyed the movie. I think it does an excellent job of preserving the tone and some of the characteristic motifs of Lovecraft’s work, while updating them for a modern audience. Despite his immense popularity, Lovecraft is a controversial figure because of his racism. There were objections within the last year or so to him being given an award at the Hugos by the very ostentatiously, sanctimoniously anti-racist. And a games company announced that it was going to release a series of games based on his Cthulhu mythos, but not drawing on any of his characters or stories because of this racism. Now the character of an artist does not necessarily invalidate their work, in the same way that the second-best bed Shakespeare bequeathed to his wife doesn’t make Hamlet any the less a towering piece of English literature. But while Lovecraft was racist, he also had black friends and writing partners. His wife was Jewish, and at the end of his life he bitterly regretted his earlier racism. Also, when Lovecraft was writing, from the 1920s until his death in 1937, American and western society in general was much more racist. This was the era of segregation and Jim Crow. It may be that Lovecraft actually wasn’t any more racist than many others. He was just more open about it. And it hasn’t stopped HBO producing Lovecraft Country, about a Black hero and his family during segregation encountering eldritch horrors from beyond.

I don’t know if Stanley’s adaptation will be to everyone’s taste, though the film does credit the H.P. Lovecraft Historical Society among the organisations and individuals who have rendered their assistance. If you’re interested, I recommend that you give it a look. I wanted to see it at the cinema, but this has been impossible due to the lockdown. It is, however, out on DVD released by Studio Canal. Stanley has also said that if this film is a success, he intends to make an adaptation of Lovecraft’s The Dunwich Horror. I hope it is, despite present circumstances, and that we can look forward to that piece of classic horror coming to our screens. But this might be too much to expect, given the current crisis and the difficulties of filming while social distancing.

Days After Returning to Office, Facebook Content Moderator Contracts Coronavirus

Published by Anonymous (not verified) on Tue, 20/10/2020 - 3:21pm in

Tags 

Technology

Just days after Facebook and one of its contractors, Accenture, sent teams responsible for content moderation back to their offices amid concerns about the coronavirus pandemic, one worker at the office tested positive for Covid-19, according to an internal email viewed by The Intercept.

According to a notification email sent to contractors working out of Accenture’s Facebook facility in Austin, Texas — where hourly contractors deal with the social media giant’s most graphic forms of violence and sexual abuse — the office has already been hit with a positive case. “We have learned that one of our people working at Facebook Domain 8 on the 12th floor has tested positive for COVID-19,” the email reads. “This individual was last in the office on 10/13, became symptomatic on 10/14 and received a positive test result on 10/16. Currently, this person is in self-quarantine.”

After months spent working from home since the onset of the coronavirus pandemic, hourly contractors assigned to Facebook’s most sensitive, traumatizing content moderation teams were informed at the start of this month they would return to in-office work on October 12. The decision triggered immediate protest among the moderators, who feared they were being put at greater risk of contracting the coronavirus, while Facebook’s salaried full-time staffers were told they could continue working from home through at least June 2021.

Accenture, the global outsourcing firm that staffs and manages the moderation teams on Facebook’s behalf, told workers the company was taking special precautions to minimize the transmission of the virus, including mandatory face masks, increased cleaning, and reduced seating capacity.

According to audio obtained by The Intercept, an Accenture human resources executive told the affected contractors that the risk of Covid-19 infection from a coworker was “not necessarily something to worry about.” Both companies have argued that the graphic, disturbing, and generally illegal nature of the content in question makes remote work impossible.

Workers at the Facebook moderation office now fear that an outbreak could be on the way. According to Harvard Medical School, the incubation period for the coronavirus is considered to be three to 14 days, with those infected potentially contagious for up to 72 hours before symptoms occur.

The message to workers at the office included a note about the next steps Accenture will be taking. “We have followed up with this person, and any people who might have come in close contact with this individual have been contacted already and asked to self-quarantine,” the email said. “We also are continuing our protocol of thoroughly sanitizing our offices per the recommendations from public health experts and our own protocols.”

In response to an inquiry, Facebook spokesperson Drew Pusateri said, “We’re confident in the health and safety measures we’ve created for any in-office work. They include social distancing, mandating mask usage, daily deep cleanings and a contact tracing program in the event of a positive case.”

In a statement, Accenture spokesperson Rachel Frey said, “We have contact tracing protocols in place so that any of our people who come in close contact with a team member who has tested positive for COVID-19 are immediately notified and asked to self-quarantine.” The statement went on, “We prioritize the safety and well-being of our people, and only invite our people to return to offices in cases where there is a critical need to do so, and only when we are comfortable that the right safety measures are in place, in compliance with local orders.”

Workers remain concerned. “I’ve got friends I care about who are literally putting their lives on the line because Facebook says this work can’t be done from home,” a Facebook moderator who works in the Austin office told The Intercept. “If the content is too graphic to be worked from home, [then] they need to do better not allowing it on their platform to begin with.”

Update: October 20, 2020, 12:38 p.m.
This story has been updated to include a statement from Facebook received after publication.

Update: October 20, 2020, 7:03 p.m.
This story has been updated to include a statement from Accenture made after publication.

The post Days After Returning to Office, Facebook Content Moderator Contracts Coronavirus appeared first on The Intercept.

Can We Trust Monopolies to Play Fair?

Published by Anonymous (not verified) on Mon, 19/10/2020 - 10:00pm in

For the anti-monopoly movement, the past three months have been exciting but sobering. In late...


Facebook Contractor Downplays Coronavirus Risk for Content Moderators

Published by Anonymous (not verified) on Wed, 14/10/2020 - 8:08am in

Tags 

Technology

Facebook contractors tasked with sifting through some of the most heinous and traumatizing content on the internet faced a new hurdle this week when they were told to return to company offices to do their work in person as a pandemic runs rampant around them. Audio obtained by The Intercept suggests that their employer, Accenture, is downplaying the risk of indoor exposure to Covid-19.

When the United States began a patchwork national lockdown in March, Facebook contractors, paid a relatively low hourly wage with few of the generous perks afforded to the company's full-time staffers, began to feel even more acutely dispensable to the $750 billion company. Beginning this week, as first reported by The Verge, these contractors must resume working in the same facilities that Facebook's full-time employees can safely avoid, the latter having been told that they'll be permitted to work from home through July 2021. "Based on guidance from health and government experts, as well as decisions drawn from our internal discussions about these matters, we are allowing employees to continue voluntarily working from home until July 2021," a Facebook spokesperson explained to Business Insider.

Facebook has said that the contractors in question, who must wade through so-called priority zero content encompassing the worst of child sexual abuse and graphic violence, can’t safely do this work from home. Three Facebook moderators employed through Accenture who spoke to The Intercept on the condition of anonymity, because they are not permitted to speak with the press, expressed a profound worry that the company, and their ultimate bosses at Facebook HQ, are once again ignoring their safety in the name of keeping the social network running smoothly.

An October 2 virtual meeting, a recording of which was obtained by The Intercept, did little to lessen moderators’ dread over resuming indoor work at previously shuttered Facebook offices in Texas and California. Accenture moderators were told that the company considers them “essential workers” and therefore not subject to any state or local “stay at home” orders in effect. After providing an overview of coronavirus precautions Accenture would be taking — including reducing the number of workers allowed in the office, mandatory use of masks, and entry temperature checks — an Accenture manager began to address questions submitted by the contractors.

“Some of the questions we’re getting are what happens when I get sick, or what happens when somebody in the office gets sick,” the manager said. “So now I’m going to dive in to, you know, how Accenture handles these situations. Some of you have been in buildings where there have been notifications sent that somebody has tested positive, and that is a reality of where we’re at today, and that will happen as people test positive, and it’s not necessarily something to worry about” — audio cuts out briefly — “been in direct contact.” The executive then described the steps Accenture would take to contact and “take care of” any infected contractor, as well as conduct contact tracing to determine further exposure.

The workers in question were left less than reassured. "They're getting talking points straight from Trump," one Accenture moderator told The Intercept. Although Accenture said that returning moderators will be spaced out at least six feet from one another and required to wear masks when not eating or drinking, such steps minimize but do not eliminate the risk of contracting the virus. Accenture managers on the call touted the use of thermal fever-scanning cameras, as well as Facebook's ability to track employee movements via their ID badges as a means of contact tracing to alert workers who are potentially exposed. They made no mention of any added air filtration efforts that could remove viral particles as they accumulate and spread indoors. Also unmentioned was the potential use of plexiglass barriers, increasingly popular as a barrier against virus-carrying droplets in indoor environments.


Accenture spokesperson Rachel Frey declined to answer specific questions about moderator concerns, but told The Intercept, “We prioritize the safety and well-being of our people, and we’ll continue to proactively communicate with them about these measures and answer any questions they have.” In a written statement, Facebook spokesperson Drew Pusateri told The Intercept, “Since March, we’ve increased our use of technology and enabled an overwhelming majority of our reviewers to work from home. But considering some of the most sensitive content can’t be reviewed from home, we’ve begun allowing reviewers back into some of our sites as government guidance has permitted. Our focus on reopening any office is on how it can be done in a way that prioritizes people’s health and safety. We are putting strict measures in place, making sure they’re followed, and addressing any confirmed cases of illness.”

Moderators who asked if they could use building stairs to avoid a logjam of people waiting for an elevator were told that the answer was probably no. Accenture similarly demurred on the subject of routine testing; when asked if moderators would need a negative test result to enter the office, a company manager replied, "It's a great question. We don't require you to test negative when you come to the office. What we do ask is that everyone does that internal check before they show up: How do I feel? Have I been in touch or in contact with somebody who is Covid-positive or exhibiting symptoms?"

The call provided moderators with few concrete specifics about what would happen if they or a co-worker contracted Covid-19, or about what they ought to do if their personal or household health made in-office work too great a risk. "We will work with you," the moderators were told repeatedly, a claim met with skepticism given what contractors say is a yearslong record of deception and neglect by Accenture and its HR teams, specifically when it comes to personal health. "If one person tests positive, they get assigned an HR case manager," explained an Accenture manager on the call. "I'm actually one of the HR case managers, you and I would be buddies. … You would get assigned to us and we would work with you to determine proper quarantine, health concerns, etc."

When asked what would happen in case of a genuine outbreak among the Facebook moderators, an Accenture manager said only that “there are different protocols set up for that,” but clarified that anyone asked to quarantine in case of such an event would not have to use their vacation days. There was no indication from the call that Accenture or Facebook would actually shutter these work sites entirely in case of infections, instead relying on notifying people determined to have been at risk of exposure. “It’s absolutely sloppy,” one Accenture moderator told The Intercept of the company’s pandemic outreach so far. “They didn’t give us anything in writing.”

Accenture also declined to answer employee questions about hazard pay for those forced back into offices, a galling omission to many of these contractors who for years have complained of second-class treatment and inadequate compensation given the psychologically brutalizing nature of their work. An employee petition, first reported by Motherboard, called for a 50 percent increase in hourly wages for Facebook contract workers risking exposure by working on-site. By comparison, Facebook’s full-time employees, not only assured they can work from home through the middle of next year, will receive an additional $1,000 “for home office needs,” Business Insider reported. Per the meeting recording, Accenture said it will reimburse workers for the cost of taking an Uber or Lyft into the office, though that of course carries its own additional risks of virus transmission.

Some moderators said this friendly, informal pledge of accommodation from Accenture is already showing cracks, telling The Intercept that high-risk workers who’ve presented the company with doctors’ notes requesting to continue safely teleworking from home have been denied, with HR saying that the contractors in question must formally consent to releasing their medical records to the company so that they can be vetted. “There are people with immediate health concerns and they were told to contact HR,” said one moderator. The relationship between moderators and Accenture HR has been fraught, particularly with regards to sensitive health information; in August 2019, The Intercept reported that Accenture moderators alleged the company had pressured in-office therapists to divulge patient data. “I’m angry that they think so little of our lives,” the moderator added. “They couldn’t even bother to give us hazard pay in a pandemic.”

The post Facebook Contractor Downplays Coronavirus Risk for Content Moderators appeared first on The Intercept.

A Common Sense Exorcism from a Sceptical Medieval Monk

Published by Anonymous (not verified) on Tue, 13/10/2020 - 6:27am in

The view most of us have grown up with about the Middle Ages is that it was 'the age of faith'. Or, to put it more negatively, an age of credulity and superstition. The scientific knowledge of the Greco-Roman world had been lost, and the Roman Catholic church retained its hold on the European masses through strict control of, if not an outright ban on, scientific research, and by fostering superstitious credulity through fake miracles and tales of the supernatural.

More recently scholars have challenged this image. They've pointed out that from the 9th century onwards, western Christian scholars were extremely keen to recover the scientific knowledge of the ancients, as well as to learn from Muslim scholarship obtained through the translation of scientific and mathematical texts from areas conquered from Islam, such as Muslim Spain and Sicily. Medieval churchmen had to master natural philosophy as part of the theology course, and scholars frequently digressed into questions of what we would call natural science for its own sake during examinations of theological issues. It was an age of invention which saw the creation of the mechanical clock and spectacles, and the application of watermills to draining marshland and sawing wood. There were also advances in medicine and maths.

At the same time, it was also an age of scepticism towards the supernatural. Agobard, a medieval Visigothic bishop in what is now France, laughed when he was told how ordinary people believed that storms were caused by people from Magonia in flying ships. The early medieval manual for bishops listing superstitions and heresies they were required to combat in their dioceses, the Canon Episcopi, condemns the belief of certain women that they rode out at night with Diana or Herodias in the company of other spirits. Scholars of the history of witchcraft, such as Jeffrey Burton Russell, argue that this belief is the ancestor of the later belief that witches flew through the air with demons on their way to meet Satan at the black mass. But at this stage, there was no suggestion that this really occurred. What the Canon Episcopi condemns is the belief that it really happens.

The twelfth-century French scholar William of Auvergne considered that the demonic visitations in which sleepers felt a supernatural presence pressing on their chest or body were due to indigestion. Rather than a witch or demon trying to have sex with their sleeping victim, the incubus or succubus, it was the result of the sleeper having eaten rather too well during the day. Their full stomach was pressing on the body's nerves, and so preventing the proper circulation of the fluids responsible for correct mental functioning. There were books of spells for the conjuration of demons produced during the Middle Ages, but by and large the real age of belief in witches and the mass witch hunts came in the later middle ages and especially the 16th and 17th centuries. And it's from the 17th century that many of the best-known spell books date.

One of the books I've been reading recently is G.G. Coulton's Life in the Middle Ages. According to Wikipedia, Coulton was a professor of medieval history who had originally studied for the Anglican church but did not pursue a vocation. The book's a collection of medieval texts describing contemporary life and events. Coulton obviously still retained an acute interest in religion and the church, as the majority of these are about the church. Very many of the texts are descriptions of supernatural events of one kind or another – miracles, encounters with demons, apparitions of the dead and lists of superstitions condemned by the church. There's ample material there to support the view that the middle ages were an age of superstitious fear and credulity.

But he also includes an account from the Dutch/German monk and chronicler Johann Busch, who describes how, through simple common sense and folk medicine and without any recourse to the supernatural, he cured a woman who was convinced she was demonically possessed. Busch wrote

Once as I went from Halle to Calbe, a man who was ploughing ran forth from the field and said that his wife was possessed with a devil, beseeching me most instantly that I would enter his house (for it was not far out of our way) and liberate her from this demon. At last, touched by his prayers, I granted his request, coming down from my chariot and following him to his house. When therefore I had looked into the woman's state, I found that she had many fantasies, for that she was wont to sleep and eat too little, when she fell into feebleness of brain and thought herself possessed by a demon; yet there was no such thing in her case. So I told her husband to see that she kept a good diet, that is, good meat and drink, especially in the evening when she would go to sleep. "For then" (said I) "when all her work is over, she should drink what is called in the vulgar tongue een warme iaute, that is a quart of hot ale, as hot as she can stand, without bread but with a little butter of the bigness of a hazel-nut. And when she hath drunken it to the end, let her go forthwith to bed; thus she will soon get a whole brain again." G.G. Coulton, translator and annotator, Life in the Middle Ages (Cambridge: Cambridge University Press, 1967), pp. 231-2.

The medieval worldview was vastly different from ours. By and large it completely accepted the reality of the supernatural and the truth of the Christian religion, although there were also scientific sceptics, who were condemned by the church. But this also did not stop them from considering rational, scientific explanations for supernatural phenomena when they believed they were valid. As one contemporary French historian of medieval magic has written, ‘no-one is more sceptical of miracles than a theologian’. Sometimes their scepticism towards the supernatural was religious, rather than scientific. For example, demons couldn’t really work miracles, as only God could do so. But nevertheless, that scepticism was also there.

The middle ages were indeed an age of faith, but they were also an age of science and rationality. These were sometimes in conflict, but often united to provide medieval intellectuals with a stimulating and satisfying worldview.

Crumbling Case Against Assange Shows Weakness of "Hacking" Charges Related to Whistleblowing

Published by Anonymous (not verified) on Thu, 01/10/2020 - 1:37am in

By 2013, the Obama administration had concluded that it could not charge WikiLeaks or Julian Assange with crimes related to publishing classified documents — documents that showed, among other things, evidence of U.S. war crimes in Iraq and Afghanistan — without criminalizing investigative journalism itself. President Barack Obama’s Justice Department called this the “New York Times problem,” because if WikiLeaks and Assange were criminals for publishing classified information, the New York Times would be just as guilty.

Five years later, in 2018, the Trump administration indicted Assange anyway. But, rather than charging him with espionage for publishing classified information, they charged him with a computer crime, later adding 17 counts of espionage in a superseding May 2019 indictment.


The computer charges claimed that, in 2010, Assange conspired with his source, Chelsea Manning, to crack an account on a Windows computer in her military base, and that the “primary purpose of the conspiracy was to facilitate Manning’s acquisition and transmission of classified information.” The account enabled internet file transfers using a protocol known as FTP.

New testimony from the third week of Assange’s extradition trial makes it increasingly clear that this hacking charge is incredibly flimsy. The alleged hacking not only didn’t happen, according to expert testimony at Manning’s court martial hearing in 2013 and again at Assange’s extradition trial last week, but it also couldn’t have happened.

The new testimony, reported earlier this week by investigative news site Shadowproof, also shows that Manning already had authorized access to, and the ability to exfiltrate, all of the documents that she was accused of leaking — without receiving any technical help from WikiLeaks.

The government’s hacking case appears to be rooted entirely in a few offhand remarks in what it says are chat logs between Manning and Assange discussing password cracking — a topic that other soldiers at Forward Operating Base Hammer in Iraq, where Manning was stationed, were also actively interested in.

The indictment claims that around March 8, 2010, after Manning had already downloaded everything she leaked to WikiLeaks other than the State Department cables, the whistleblower provided Assange with part of a "password hash" for the FTP account and Assange agreed to try to help crack it. A password hash is a scrambled, one-way representation of a password from which, in some cases, it's possible to recover the original.
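To make the cracking idea concrete, here is a minimal, hypothetical sketch in Python of an offline dictionary attack against a stolen hash. SHA-256 stands in for the Windows-style NT hash reportedly at issue, and the password and wordlist are invented for illustration:

```python
import hashlib

def hash_password(password: str) -> str:
    # Stand-in one-way hash. Windows actually stores an "NT hash"
    # (MD4 over the UTF-16LE password); SHA-256 is used here only
    # because it ships with every Python install.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def dictionary_attack(target_hash: str, wordlist: list[str]) -> str | None:
    # Hash each candidate and compare it against the complete digest.
    # This comparison is the step that breaks down if the attacker
    # holds only a fragment of the hash rather than the whole thing.
    for candidate in wordlist:
        if hash_password(candidate) == target_hash:
            return candidate
    return None

stolen = hash_password("hunter2")  # hypothetical stolen hash
print(dictionary_attack(stolen, ["letmein", "password", "hunter2"]))
# -> hunter2
```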

Manning already had authorized access to all of the documents she was planning to leak to WikiLeaks, including the State Department cables, and cracking this password would not have given her any more access or otherwise helped her with her whistleblowing activities. At most, it might have helped her hide her tracks, but even that is not very likely. I suspect she was just interested in password cracking.

Assange, however, never cracked the password.

That’s it. That’s what the government’s entire computer crime case against Assange is based on: a brief discussion about cracking a password, which never actually happened, between a publisher and his source.

Therefore, the charge is not actually about hacking — it’s about establishing legal precedent to charge publishers with conspiring with their sources, something that so far the U.S. government has failed to do because of the First Amendment.

As Shadowproof points out: In June 2013, at Manning’s court martial hearing, David Shaver, a special agent for the Army Computer Crimes Investigating Unit, testified that Manning only provided Assange with part of the password hash and that, with only that part, it’s not possible to recover the original password. It would be like trying to make a cappuccino without any espresso; Assange was missing a key ingredient.

Last week at Assange’s extradition trial, Patrick Eller, a former Command Digital Forensics Examiner at the U.S. Army Criminal Investigation Command, further discredited the computer crime charge, according to Shadowproof.

Eller confirmed Shaver’s 2013 testimony that Manning didn’t provide Assange with enough information to crack the password. He pointed out, “The only set of documents named in the indictment that Manning sent after the alleged password cracking attempt were the State Department cables,” and that “Manning had authorized access to these documents.”

Eller also said that other soldiers at Manning’s Army base in Iraq were regularly trying to crack administrator passwords on military computers in order to install programs that they weren’t authorized to install. “While she” — Manning — “was discussing rainbow tables and password hashes in the Jabber chat” — with Assange — “she was also discussing the same topics with her colleagues. This, and the other factors previously highlighted, may indicate that the hash cracking topic was unrelated to leaking documents.”
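For context on the rainbow tables Eller mentions: they are precomputed lookup structures for reversing password hashes. Real rainbow tables compress the precomputed data with hash chains and reduction functions so they fit on disk; this toy Python sketch, using an invented wordlist, shows only the simpler time-memory trade-off at their core:

```python
import hashlib

def h(password: str) -> str:
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Spend time up front building a hash -> password table once...
WORDLIST = ["letmein", "password", "hunter2", "trustno1"]
TABLE = {h(word): word for word in WORDLIST}

# ...then reversing any hash of a listed password is a single lookup,
# no matter how many stolen hashes you later want to crack.
def lookup(stolen_hash: str) -> str | None:
    return TABLE.get(stolen_hash)

print(lookup(h("trustno1")))  # -> trustno1
```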


I'm not a fan of Julian Assange, particularly given his unethical actions and the lies he has told since the 2016 U.S. election. But I am a proponent of a strong free press, and his case is critically important for the future of journalism in this country.

Journalists have relationships with their sources. These relationships are not criminal conspiracies. Even if a source ends up breaking a law by providing the journalist with classified information, the journalist did not commit a felony by receiving it and publishing it.

Whether or not you believe Assange is a journalist is beside the point. The New York Times just published groundbreaking revelations from two decades of Donald Trump’s taxes showing obscene tax avoidance, massive fraud, and hundreds of millions of dollars of debt.

Trump would like nothing more than to charge the New York Times itself, and the individual journalists who reported that story, with felonies for conspiring with their source. This is why the precedent in Assange's case is so important: If Assange loses, the Justice Department will have established new legal tactics with which to go after publishers for conspiring with their sources.

The post Crumbling Case Against Assange Shows Weakness of “Hacking” Charges Related to Whistleblowing appeared first on The Intercept.

So an ancient TV set can bring down the mighty broadband? Good | David Mitchell

Published by Anonymous (not verified) on Sun, 27/09/2020 - 7:00pm in

As one who resists technological change, I think we should defend the telly that took out a Welsh village’s internet

The mystery of the disappearing Welsh broadband has been solved. I don’t know what you’d expect the broadband signal to be like in the isolated village of Aberhosan in Powys. Personally, I’d expect it to be terrible. And it really was terrible. But it seems the villagers didn’t expect that. To them, this was a mystery.

My low expectations of data flow to rural areas will doubtless offend some. I apologise: it may be outdated but I mean it nicely. It’s not a slur on the countryside. Not being able to access the internet is a plus as far as I’m concerned. I look back fondly on the afternoon in 2009 on the Isle of Skye that I spent waving a Samsung flip phone around my head in the hope of it coinciding with a big enough blob of reception to get a text to send. I was significantly more likely to catch a flying splat of seagull shit. But the inconvenience makes you feel remote and, for me, that was the point of going there. Nowadays, I could probably get streaming HD. Which sounds like a disease. And maybe it is.

We don’t have to pretend to like things just because they’re inevitable


Feds Are Tapping Protesters' Phones. Here's How To Stop Them.

Published by Anonymous (not verified) on Sat, 26/09/2020 - 3:21am in

Tags 

Technology

Federal agents from the Department of Homeland Security and the Justice Department used “a sophisticated cell phone cloning attack—the details of which remain classified—to intercept protesters’ phone communications” in Portland this summer, Ken Klippenstein reported this week in The Nation. Put aside for the moment that, if the report is true, federal agents conducted sophisticated electronic surveillance against American protesters, an alarming breach of constitutional rights. Do ordinary people have any hope of defending their privacy and freedom of assembly against threats like this?

Yes, they do. Here are two simple things you can do to help mitigate this type of threat:

  • As much as possible, and especially in the context of activism, use an encrypted messaging app like Signal — and get everyone you work with to use it too — to protect your SMS text messages, texting groups, and voice and video calls.
  • Prevent other people from using your SIM card by setting a SIM PIN on your phone. There are instructions on how to do this below.

How SIM Cloning Works

Without more details, it’s hard to be entirely sure what type of surveillance was used, but The Nation’s mention of “cell phone cloning” makes me think it was a SIM cloning attack. This involves duplicating a small chip used by virtually every cellphone to link itself to its owner’s phone number and account; this small chip is the subscriber identity module, more commonly known as SIM.

Here’s how SIM cloning would work:

  • First, the feds would need physical access to their target’s phone; for example, they could arrest their target at a protest, temporarily confiscating their phone.
  • Then they would pop out the SIM card from the phone, a process designed to be easy, since end users often have reasons to replace the card (such as traveling abroad and needing a local SIM card to access the local cellular network, or when switching cellular providers).
  • The feds would then copy their target’s SIM card data onto a blank SIM card (this presents some challenges, as I explain below), and then put the original SIM card back without their target knowing.

SIM cards contain a secret encryption key that is used to encrypt data between the phone and cellphone towers. They’re designed so that this key can be used (like when you receive a text or call someone) but so the key itself can’t be extracted.
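As a rough illustration of how a key can be used without being extractable: in GSM-style authentication, the network sends the SIM a random challenge, and the SIM answers with values derived from its secret key, never the key itself. Here is a minimal sketch, with HMAC-SHA256 standing in for the operator-specific A3/A8 algorithms (such as COMP128) that real SIMs run:

```python
import hashlib
import hmac
import os

# Ki is the per-subscriber secret burned into the SIM at manufacture.
# The card exposes "compute with Ki" operations but never Ki itself.
Ki = os.urandom(16)

def sim_run_gsm_algorithm(rand: bytes) -> tuple[bytes, bytes]:
    # Stand-in for A3/A8; HMAC-SHA256 here is illustrative only.
    digest = hmac.new(Ki, rand, hashlib.sha256).digest()
    sres = digest[:4]   # 32-bit response proving the SIM knows Ki
    kc = digest[4:12]   # 64-bit session key used to encrypt radio traffic
    return sres, kc

# The tower issues a random challenge; the SIM answers without ever
# revealing Ki. Cloning a SIM means recovering Ki so that a second
# card can answer the same challenges and derive the same keys.
rand = os.urandom(16)
sres, kc = sim_run_gsm_algorithm(rand)
print(sres.hex(), kc.hex())
```

This is why extracting the key is the whole game: whoever holds it can impersonate the SIM and derive every session key.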

But it’s still possible to extract the key from the SIM card, by cracking it. Older SIM cards used a weaker encryption algorithm and could be cracked quickly and easily, but newer SIM cards use stronger encryption and might take days or significantly longer to crack. It’s possible that this is why the details of the type of surveillance used in Portland “remain classified.” Do federal agencies know of a way to quickly extract encryption keys from SIM cards? (On the other hand, it’s also possible that “cell phone cloning” doesn’t describe SIM cloning at all but something else instead, like extracting files from the phone itself instead of data from the SIM card.)

Assuming the feds were able to extract the encryption key from their target’s SIM card, they could give the phone back to their target and then spy on all their target’s SMS text messages and voice calls going forward. To do this, they would have to be physically close to their target, monitoring the radio waves for traffic between their target’s phone and a cell tower. When they see it, they can decrypt this traffic using the key they stole from the SIM card. This would also fit with what the anonymous former intelligence officials told The Nation; they said the surveillance was part of a “Low Level Voice Intercept” operation, a military term describing audio surveillance by monitoring radio waves.

If you were arrested in Portland and you’re worried that federal agents may have cloned your SIM card while you were in custody, it would be prudent to get a new SIM card.

Temporarily Taking Over a Phone Number

Even if law enforcement agencies don’t clone a target’s SIM card, they could gather quite a bit of information after temporarily confiscating the target’s phone.

They could power off the phone, pop out the SIM card, put it in a separate phone, and then power that phone on. If someone sends the target an SMS message (or texts a group that the target is in), the feds’ phone would receive that message instead of the target’s phone. And if someone called the target’s phone number, the feds’ phone would ring instead. They could also hack their target’s online accounts, so long as those accounts support resetting the password using a phone number.

But, in order to remain stealthy, they would need to power off their phone, put the SIM card back in their target's phone, and power that phone on again before returning it, which would restore the original phone's access to the target's phone number, and the feds would lose access.

Abandon SMS and Switch to Encrypted Messaging Apps

The simplest and best way to protect against SIM cloning attacks, as well as eavesdropping by stingrays, controversial phone surveillance devices that law enforcement has a history of using against protesters, is to stop using SMS and normal phone calls as much as possible. These are not and have never been secure.

Instead, you can avoid most communication surveillance by using an end-to-end encrypted messaging app. The Signal app is a great choice. It’s easy to use and designed to hold as little information about its users as possible. It also lets Android users securely talk with their iPhone compatriots. You can use it for secure text messages, texting groups, and voice and video calls. Here’s a detailed guide to securing Signal.

Signal requires sharing your phone number with others to use it. If you’d rather use usernames instead of phone numbers, Wire and Keybase are both good options.

If you use an iPhone and want to securely talk to other iPhone users, the built-in Messages and FaceTime apps are also encrypted. WhatsApp texts and calls are encrypted too. Though keep in mind that if you use Messages or WhatsApp, your phone may be configured to save unencrypted backups of your text messages to the cloud where law enforcement could access them.

You can’t use an encrypted messaging app all by yourself, so it’s important to get all of your friends and fellow activists to use the same app. The more people you can get to use an encrypted messaging app instead of insecure SMS and voice calls, the better privacy everyone has. (For example, I use Signal to text with my parents, and you should too.)

None of these encrypted messaging apps send data over insecure SMS messages or voice calls, so SIM cloning and stingrays can’t spy on them. Instead they send end-to-end encrypted data over the internet. This also means that the companies that run these services can’t hand over your message history to the cops even if they want to; police would instead need to extract those messages directly from a phone that sent or received them.
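To see concretely why those companies have nothing useful to hand over, here is a minimal sketch of the end-to-end idea using the PyNaCl library. Signal's actual protocol layers X3DH key agreement and the Double Ratchet on top of primitives like these, so treat this only as a picture of where the keys live:

```python
# pip install pynacl
from nacl.public import Box, PrivateKey

# Each user generates a keypair; private keys never leave the phone.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using only Bob's *public* key. A server
# relaying this message sees ciphertext and nothing else, so even a
# subpoena can produce only ciphertext.
alice_to_bob = Box(alice_key, bob_key.public_key)
ciphertext = alice_to_bob.encrypt(b"meet at the north entrance")

# Only Bob's private key (plus Alice's public key) can open it.
bob_from_alice = Box(bob_key, alice_key.public_key)
print(bob_from_alice.decrypt(ciphertext))  # b'meet at the north entrance'
```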

Another important consideration is preventing cops from copying messages directly off your phone. To prevent this, make sure your phone is locked with a strong passcode and avoid biometrics (unlocking your phone with your face or fingerprint) — or at least disable biometrics on your phone before you go to a protest. You also might consider bringing a cheap burner phone to a protest and leaving your main phone at home.

Lock Your SIM Card With a PIN

Another way to protect against certain forms of mobile phone spying is to lock your SIM card by setting a four- to eight-digit passcode known as a SIM PIN. Each time your phone reboots, you’ll need to enter this PIN if you want SMS, voice calls, and mobile data to work.

[Image: An iPhone's SIM unlocking screen. Photo: Micah Lee]

If you type the wrong PIN three times, your SIM card will get blocked, and you’ll need to call your phone carrier to receive a Personal Unblocking Key (PUK) to unblock it. If you enter the wrong PUK eight times, the SIM card will permanently disable itself.

With a locked SIM, you’ll still be able to use apps and Wi-Fi but not mobile data or cellphone service. So make sure that you securely record your SIM PIN somewhere safe, such as a password manager like Bitwarden, 1Password, or LastPass, and never try to guess it if you can’t remember it. (You can always click “Cancel” to get into your phone without unlocking your SIM card. From there, open a password manager app to look up your PIN, and then reboot your phone again to enter it correctly. I’ve done this numerous times myself just to be sure.)

If you want to lock your SIM card, first you’ll need to know the default SIM PIN for your cellphone company. For AT&T, Verizon, and Google Fi, it’s 1111; for T-Mobile, Sprint, and Metro, it’s 1234. If you use a different phone carrier, you should be able to search the internet to find it. (I would avoid guessing — if you type the wrong default PIN three times, your SIM card will get blocked.)

Once you know your default PIN, here's how to set a new one:

  • If you have an iPhone, go to Settings, then Cellular, then SIM PIN, and from there you can set your PIN. See here for more information.
  • If you have an Android phone, go to Settings, then Security, then “SIM card lock,” and from there you can set your PIN. If your Android phone doesn’t have these exact settings, you should be able to search the internet for your phone model and “SIM PIN” to find instructions for your phone.

Now if law enforcement gets physical access to your phone, they shouldn’t be able to use your locked SIM card without your PIN. If they guess your PIN incorrectly three times, the SIM card will block itself, and they’d need to convince your cellphone company to hand over the PUK for your SIM card in order to use it. If they guess the wrong PUK too many times, the SIM will permanently disable itself.

The post Feds Are Tapping Protesters’ Phones. Here’s How To Stop Them. appeared first on The Intercept.
