Trauma Counselors Were Pressured to Divulge Confidential Information About Facebook Moderators, Internal Letter Claims

Published by Anonymous (not verified) on Sat, 17/08/2019 - 5:12am in



Nearly 1,500 miles from the Menlo Park headquarters of Facebook, at a company outpost in Austin, Texas, moderators toil around the clock to screen and scrub some of the most gruesome, hateful, and heinous posts that make their way onto the social network and its photo-sharing subsidiary, Instagram. They are required to view as many as 800 pieces of disturbing content in a single shift, and routinely turn to on-site counselors to help cope with the procession of stomach-turning images, videos, and text. But some members of this invisible army have complained, in a statement widely circulated within Facebook, that the outsourcing giant that officially employs them, Accenture, has repeatedly attempted to violate the confidentiality of these therapy sessions.

The moderators work from within a special section for outsourced staffers at Facebook Austin. The Texas outpost is designed to mimic the look and feel of the company’s famously opulent Silicon Valley digs, but Accenture workers say they’re reminded daily of their secondary status and denied perks, prestige, and basic respect. This second-class tier at Facebook, a sort of international shadow workforce, has been well documented in the media, from Manila to Arizona, and it’s not clear whether the company has done anything to address it beyond issuing defensive PR statements. Moderators in Austin say their job is a brutalizing slog and that Facebook remains largely indifferent to their struggles. Access to on-site counseling is one of the few bright points for this workforce.

But now even this grim perk has been undermined by corporate prying, according to a letter drafted by a group of about a dozen Austin moderators who work across Facebook and Instagram. The letter alleges that, starting in early July, Accenture managers attempted to pressure multiple on-site counselors to share information relating to topics discussed in employee trauma sessions. This information was understood by both counselors and Accenture employees to be confidential, said several Accenture sources interviewed by The Intercept. It is not clear what specific information related to the sessions was sought by the managers.

Facebook moderators, who spoke to The Intercept on the condition of anonymity for fear of workplace reprisal, said a therapist — or “wellness coach,” as they’re known internally — refused to discuss a moderator’s session with Accenture management and later resigned over the incident.

Accenture’s Austin operation has a history of dissent: Its contractors have previously expressed workplace grievances on an internal company-wide Facebook message board known as Workplace. A May report from the Washington Post described how Austin moderators organized and published complaints over a starting wage of $16.50 an hour, which left some moderators working side jobs like driving for Uber “to make ends meet.” The article noted that thousands of employees had viewed or commented on posts on Workplace complaining over issues like “micromanagement, pay cuts and inadequate counseling support.”

Facebook’s Austin moderators spoke out again earlier this month, posting to Workplace a letter detailing the confidentiality concerns related to the Accenture counseling program, known as WeCare, which provides licensed “wellness coaches” to the company’s content screeners. The Workplace letter calls the alleged pressuring of workplace therapists “at best a careless breach of trust into the Wellness program and, at worst, an ethics and possible legal violation,” and “no longer an isolated incident but a systemic top-down problem plaguing Accenture management.”

The full letter, obtained by The Intercept, is below. We have removed specific references to Accenture managers who have not been contacted for comment.

Whistleblowers@ Complaint

I’m sharing the following on behalf of coworkers who wish to remain anonymous.

Please consider the following an official complaint to Whistleblowers@.

It has come to our attention that an Accenture [manager] pressured a WeCare licensed counselor to divulge the contents of their session with an Accenture employee. The counselor refused, stating confidentiality concerns, but the [manager] pressed on by stating that because this was not a clinical setting, confidentiality did not exist. The counselor again refused. This pressuring of a licensed counselor to divulge confidential information is at best a careless breach of trust into the Wellness program and, at worst, an ethics and possible legal violation.

Before we continue, we must unequivocally state that confidentiality does exist for these sessions. Because these counselors are licensed and required to keep confidentiality in their personal practices, there is an expectation of privacy prior to engagement. In order for that confidentiality to not exist, the patient must sign a confidentiality and HIPAA waiver prior to any sessions having taken place. The receiver of the care must be made fully aware that there is no confidentiality. Neither Facebook, Accenture, nor WeCare can remove confidentiality post facto from any previous session. If these entities wish for confidentiality to cease to exist in these sessions, they must have every single person utilizing these resources to sign a waiver. However, forcing us to sign away our confidentiality could open all counselors to losing their license due to ethics diligence set out by their governing boards. It could be very difficult for WeCare to run a multi-million dollar business contracting to Facebook if their counselors begin to fear losing their licenses AND workers stop utilizing this resource due to lack of confidentiality. Facebook, Accenture, and WeCare may try to feign ignorance or implement common liability limiting language in their response. We hope all parties do not succumb to these common and repeated trends, and instead do what is right instead of what you are legally allowed to get away with.

In order for workers to feel safe when divulging information to these counselors, we are requesting the following:

[Accenture] Manager: The manager who pressed the counselor for confidential medical information must be removed from the project immediately. To do any less would be Facebook, Accenture, and WeCare condoning breaches in medical confidentiality. Allowing the pressuring of a licensed counselor into committing an act [that] could strip the counselor of their credentials must be addressed swiftly.

Affirm Confidentiality: Facebook must affirm that wellness interactions with WeCare counselors and Wellness Champs have been and always will be confidential within the necessary safety reporting standards. To do any less would throw the validity of all wellness interactions into question, make it impossible for WeCare to deliver care, and most importantly would open all licensed counselors to losing their licenses and possible litigation for delivering counseling under false pretenses of confidentiality. Anything less than clinical confidentiality will lead to HIPAA violations by all parties.

Restructure Wellness Program: Any and all changes to the wellness program will be negotiated by and announced by WeCare and their [Facebook account manager] signing off on it. Any changes made to wellness procedures outside this chain of command will be unenforceable and seen as vendors overreaching their authority.

Before Facebook, Accenture, and WeCare launch their independent investigations into these claims against the [manager], we would like to thank everyone involved for their due diligence in this matter.

Since the beginning of writing this letter we became aware that [a different manager] that the above [manager] reports to is now pressuring these counselors to divulge more confidential information. This is no longer an isolated incident but a systemic top-down problem plaguing Accenture management. This must be addressed as soon as possible. Unless all entities involved address this issue properly and swiftly, they will open themselves up to a plethora of HIPAA violations that are incredibly financially punitive. Until FB affirms confidentiality has always and will always exist in those sessions we implore everyone to stop utilizing the licensed wellness counselors. If Accenture management is trying to use WeCare to gather information on workers, we as workers cannot in good faith trust that anything we say to a licensed counselor could not then be used to have us terminated.

If you would like to work with us in our efforts to ensure wellness confidentiality, the integrity of the wellness program, and the general wellbeing of [contingent workers], please send an email to [REDACTED].

Rolfe Lowe, an attorney of the firm Wachler & Associates who specializes in health care law and HIPAA compliance, told The Intercept that the incident as described likely didn’t constitute a HIPAA violation.

The letter, already viewed thousands of times, prompted a quick reply from an outsourcing manager at Facebook corporate, who claimed that an internal investigation had found “no violation or breach of trust between our licensed counselors and a contracted employee,” though he added that the company will “continue to address this with Accenture to ensure everyone is handling this appropriately,” and that the team’s “wellness coaches” will receive a “refresh” on what they “can and can’t share.”

A Facebook spokesperson didn’t answer specific questions posed about the allegations but provided a statement:

“All of our partners must provide a resiliency plan that is reviewed and approved by Facebook. This includes a holistic approach to wellbeing and resiliency that puts the needs of their employees first. All leaders and wellness coaches receive training on this employee resource and while we do not believe that there was a breach of privacy in this case, we have used this as an opportunity to reemphasize that training across the organization.”

Accenture provided this statement:

These allegations are inaccurate. Our people’s wellbeing is our top priority and our trust-and-safety teams in Austin have unrestricted access to wellness support. Additionally, our wellness program offers proactive and on-demand counseling and is backed by a strong employee assistance program. Our people are actively encouraged to raise wellness concerns through these programs. We also review, benchmark and invest in our wellness programs on an ongoing basis to create the most supportive workplace environment – regularly seeking input from industry experts, medical professionals and our people.

According to workers interviewed by The Intercept, hundreds of moderators at Facebook Austin sometimes share a single counselor for their shift. Some of them doubt that Facebook takes their well-being seriously: “We’re trash to them,” said one moderator. “We’re a body in a seat, and they don’t acknowledge the work we do.” Facebook is “largely responsible for any trauma reps experience, from a moral standpoint,” according to another moderator. “They just wanted to further remove themselves from responsibility for making our lives hell.”

One source familiar with the mental health situation in Austin, speaking on the condition of anonymity for fear of retaliation, described a “toxic environment” where traumas compound and multiply as contractors are exposed to deeply disturbing imagery day in and day out while being denied meaningful care: “People are afraid to take a wellness break for 10 minutes because they’re gonna have hell to pay.”

The same source said that Austin moderators had at one point been encouraged by WeCare counselors to talk among themselves when struggling with mental anguish — “to just turn to their neighbor, and just start connecting, talk, take a walk, do something just to connect and disconnect from the screen. And that worked really well for a lot of people.” But this practice was soon banned by Accenture, the source said, because it cut into the time that could be spent clearing the queue of disturbing content; Accenture, the source said, told moderators that they could stretch their legs in an adjacent parking garage, but not stray any further outside the office.

Similar cuts have been made to counselor access: Multiple Accenture sources told The Intercept that moderators could previously count on 45 minutes every week with a counselor, or two hours a day for those viewing images of child sexual abuse, with a minimum quota of one visit per quarter. Today, moderators find themselves barred from even this scant mental health care unless “their productivity was high enough for that day,” said one of the sources, regardless of whether they’d spend all day reviewing Ku Klux Klan memes or acts of rape. “Management’s idea of wellness is that it needs to be as minimal as possible,” added another Accenture source, “because any time not in production is seen as bad.”

Neither Facebook nor Accenture responded to questions about these allegations beyond their general denials.

All of this has led to what one source familiar with the situation described as an “abysmal” mental health climate in Austin, where moderators are subjected to psychological horrors and then left feeling disposable and vulnerable. In some cases, the moderators are “poor, they’re felons, they’re people that don’t have any other options,” said the source. “They’re uneducated folks. How are we supposed to assume that they know how to and when to ask for help? Or even that there’s a problem?” But even with some semblance of financial security and mental health-savvy, this source doubts that anyone stands a chance in the long term: “No one should have to consume high levels of content with graphic violence, hate, gore, sexual abuse, child abuse, brutality, animal abuse, porn, self-mutilation and more at these rates, without proper mental health resources and advocates, and be expected to function normally.”

The post Trauma Counselors Were Pressured to Divulge Confidential Information About Facebook Moderators, Internal Letter Claims appeared first on The Intercept.

Hegel on labor and freedom

Published by Anonymous (not verified) on Mon, 12/08/2019 - 3:23am in

Hegel provided a powerful conception of human beings in the world and a rich conception of freedom. Key to that conception is the idea of self-creation through labor. Hegel had an "aesthetic" conception of labor: human beings confront the raw given of nature and transform it through intelligent effort into things they imagine will satisfy their needs and desires.

Alexandre Kojève's reading of Hegel is especially clear on Hegel's conception of labor and freedom. This is provided in Kojève's analysis of the Master-Slave section of Hegel's Phenomenology in his Introduction to the Reading of Hegel. The key idea is expressed in these terms:

The product of work is the worker's production. It is the realization of his project, of his idea; hence, it is he that is realized in and by this product, and consequently he contemplates himself when he contemplates it.... Therefore, it is by work, and only by work, that man realizes himself objectively as man. (Kojève, Introduction to the Reading of Hegel)

It seems to me that this framework of thought provides an interesting basis for a philosophy of technology as well. We might think of technology as collective and distributed labor, the processes through which human beings collectively transform the world around themselves to better satisfy human needs. Through intelligence and initiative human beings and organizations transform the world around them to create new possibilities for human need satisfaction. Labor and technology are emancipating and self-creating. Labor and technology help to embody the conditions of freedom.

However, this assessment is only one side of the issue. Technologies are created for a range of reasons by a heterogeneous collection of actors: generating profits, buttressing power relations, serving corporate and political interests. It is true that new technologies often serve to extend the powers of the human beings who use them, or to satisfy their needs and wants more fully and efficiently. Profit motives and the market help to ensure that this is true to some extent; technologies and products need to be "desired" if they are to be sold and to generate profits for the businesses that produce them. But given the conflicts of interest that exist in human society, technologies also serve to extend the capacity of some individuals and groups to wield power over others.

This means that there is a dark side to labor and technology as well. There is the labor of un-freedom. Not all labor allows the worker to fulfill him- or herself through free exercise of talents. Instead the wage laborer is regulated by the time clock and the logic of cost reduction. This constitutes Marx's most fundamental critique of capitalism, as a system of alienation and exploitation of the worker as a human being. Here are a few paragraphs on alienated labor from Marx's Economic and Philosophical Manuscripts:

The worker becomes all the poorer the more wealth he produces, the more his production increases in power and size. The worker becomes an ever cheaper commodity the more commodities he creates. The devaluation of the world of men is in direct proportion to the increasing value of the world of things. Labor produces not only commodities; it produces itself and the worker as a commodity – and this at the same rate at which it produces commodities in general. 

This fact expresses merely that the object which labor produces – labor’s product – confronts it as something alien, as a power independent of the producer. The product of labor is labor which has been embodied in an object, which has become material: it is the objectification of labor. Labor’s realization is its objectification. Under these economic conditions this realization of labor appears as loss of realization for the workers; objectification as loss of the object and bondage to it; appropriation as estrangement, as alienation.

So much does labor’s realization appear as loss of realization that the worker loses realization to the point of starving to death. So much does objectification appear as loss of the object that the worker is robbed of the objects most necessary not only for his life but for his work. Indeed, labor itself becomes an object which he can obtain only with the greatest effort and with the most irregular interruptions. So much does the appropriation of the object appear as estrangement that the more objects the worker produces the less he can possess and the more he falls under the sway of his product, capital.

All these consequences are implied in the statement that the worker is related to the product of labor as to an alien object. For on this premise it is clear that the more the worker spends himself, the more powerful becomes the alien world of objects which he creates over and against himself, the poorer he himself – his inner world – becomes, the less belongs to him as his own. It is the same in religion. The more man puts into God, the less he retains in himself. The worker puts his life into the object; but now his life no longer belongs to him but to the object. Hence, the greater this activity, the more the worker lacks objects. Whatever the product of his labor is, he is not. Therefore, the greater this product, the less is he himself. The alienation of the worker in his product means not only that his labor becomes an object, an external existence, but that it exists outside him, independently, as something alien to him, and that it becomes a power on its own confronting him. It means that the life which he has conferred on the object confronts him as something hostile and alien.

So does labor fulfill freedom or create alienation? Likewise, does technology emancipate and fulfill us, or does it enthrall and disempower us? Marx's answer to the first question is that it does both, depending on the social relations within which it is defined, managed, and controlled.

It would seem that we can answer the second question for ourselves, in much the same terms. Technology both extends freedom and constricts it. It is indeed true that technology can extend human freedom and realize human capacities. The use of technology and science in agriculture means that only a small percentage of people in advanced countries are farmers, and those who are enjoy a high standard of living compared to peasants of the past. Communication and transportation technologies create new possibilities for education, personal development, and self-expression. The enhancements to economic productivity created by technological advances have permitted a huge increase in the wellbeing of ordinary people in the past century -- a fact that permits us to pursue the things we care about more freely. But new technologies also can be used to control people, to monitor their thoughts and actions, and to wage war against them. More insidiously, new technologies may "alienate" us in new ways -- make us less social, less creative, and less independent of mind and thought.

So it seems clear on its face that technology is both favorable to the expansion of freedom and the exercise of human capacities, and unfavorable. It is the social relations through which technology is exercised and controlled that make the primary difference in which effect is more prominent.

A New App Allows Readers in China to Bypass Censorship of The Intercept

Published by Anonymous (not verified) on Fri, 09/08/2019 - 3:42am in



Since June, people in China have been unable to read The Intercept, after the country’s government apparently banned our website, along with those of several other media organizations. Today, we are happy to announce a workaround that will allow people in China to circumvent the restrictions, access our full site, and continue to read our award-winning journalism.

In partnership with Psiphon, an anti-censorship organization based in Canada, we are launching a custom app for Android and Windows devices that bypasses China’s so-called Great Firewall and will allow our readers there to visit once again. (The app is not currently available for iOS in China because Apple has removed it from the app store there, citing local regulations.)

To get the app, readers in China and other countries where The Intercept is not accessible can send a blank email to, and they will receive an automated response from Psiphon containing a download link.

The Psiphon app encrypts all data that it carries across networks and uses proxy technology to defeat censorship, transmitting traffic between a network of secure servers. The app does not log any personally identifying information, and the software is open-source. You can read more about Psiphon’s technology here.

“Internet users in China face some of the most pervasive and technically sophisticated online censorship in the world,” said a spokesperson for Psiphon. “Psiphon is designed to provide robust, reliable access to the open Internet in the most difficult circumstances. Through our tools and technology, we support millions of people worldwide in their right to freely access information, and the organizations that stand for it.”

For years, China has blocked thousands of websites. But under the rule of President Xi Jinping, attempts to stifle the free flow of information have dramatically increased. The ruling Communist Party government often adds Western news organizations to its banned list after they have published stories exposing corruption within the regime or that otherwise reflect negatively on the country’s officials.

In June, coinciding with the 30th anniversary of the Tiananmen Square massacre, The Intercept’s website was blocked, as were the websites of The Guardian, the Washington Post, HuffPost, NBC News, the Christian Science Monitor, the Toronto Star, and Breitbart News. The New York Times, Bloomberg, the Wall Street Journal, and Reuters have all previously been censored in China.

Charlie Smith, co-founder of, an organization that monitors Chinese government internet censorship, told The Intercept following the crackdown in June that the country’s authorities appeared to be “accelerating their push to sever the link between Chinese citizens and any news source that falls outside of the influence” of the ruling Communist Party regime. The Chinese government has not responded to requests for comment on the matter.

The post A New App Allows Readers in China to Bypass Censorship of The Intercept appeared first on The Intercept.

Crowdfunded Solar Sail Spacecraft Makes Successful Flight

Published by Anonymous (not verified) on Wed, 07/08/2019 - 5:19am in

Bit of science news now. Last Friday’s I for 2nd August 2019 reported that a satellite developed by the Planetary Society and funded through internet fundraising had successfully climbed to a higher orbit using a solar sail. This propels spacecraft using only the pressure of light, just like an ordinary sail uses the force given by the wind to propel a ship on Earth, or drive a windmill.

The article on this by Joey Roulette on page 23 ran

A small crowdfunded satellite promoted by a TV host in the United States has been propelled into a higher orbit using only the force of sunlight.

The Lightsail 2 spacecraft, which is about the size of a loaf of bread, was launched into orbit in June. 

It then unfurled a tin foil-like solar sail designed to steer and push the spacecraft – using the momentum of tiny particles of light, called photons, emanating from the Sun – into a higher orbit. The satellite was developed by the California-based research and education group, the Planetary Society, whose chief executive is the television personality popularly known as Bill Nye the Science Guy.

The technology could potentially lead to an inexhaustible source of space propulsion as a substitute for finite supplies of rocket fuels that most spacecraft rely on for in-flight manoeuvres.

“We are thrilled to declare mission success for Lightsail 2,” said its programme manager Bruce Betts.

Flight by light, or “sailing on sunbeams”, as Mr Nye called it, could best be used for missions carrying cargo in space.

The technology could also reduce the need for expensive, cumbersome rocket propellants.

“We strongly feel that missions like Lightsail 2 will democratise space, enabling more people to send spacecraft to remarkable destinations in the solar system”, Mr Nye said.

This is very optimistic. The momentum given to a spacecraft by the Sun’s light is very small. But, like ion propulsion, it’s constant, and so enormous speeds can be built up over time. It may be through solar sail craft that we one day send probes to some of the extrasolar planets now being discovered by astronomers.

In the 1990s, American scientists designed a solar sail spacecraft, Star Wisp, which would take a 50 kg instrument package to Alpha Centauri. The star’s four light years away. The ship would, however, reach a speed of 1/3 that of light, meaning that, at a very rough calculation, it would reach its destination in 12 years. The journey time for a conventional spacecraft propelled by liquid oxygen and hydrogen is tens of thousands of years.
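The arithmetic here is easy to check. A minimal Python sketch, using the textbook radiation-pressure formula for a perfectly reflecting sail; the 32 m² sail area and 5 kg craft mass are assumed round figures for a LightSail 2-class satellite, not official specifications:

```python
# Sunlight intensity at Earth's orbit and the speed of light
SOLAR_CONSTANT = 1361.0   # W/m^2
C = 2.998e8               # m/s

# Assumed ballpark figures for a LightSail 2-class craft
SAIL_AREA = 32.0          # m^2
CRAFT_MASS = 5.0          # kg

# A perfectly reflecting sail feels pressure P = 2I/c (each photon
# bounces back, doubling the momentum it hands over), so F = 2*I*A/c.
force = 2 * SOLAR_CONSTANT * SAIL_AREA / C      # newtons
accel = force / CRAFT_MASS                      # m/s^2
print(f"thrust ~ {force * 1e3:.2f} mN")         # a fraction of a millinewton
print(f"accel  ~ {accel * 1e3:.3f} mm/s^2")

# The article's travel-time estimate: 4 light years cruised at c/3
LIGHT_YEAR = 9.461e15     # metres
SECONDS_PER_YEAR = 3.156e7
years = (4 * LIGHT_YEAR) / (C / 3) / SECONDS_PER_YEAR
print(f"transit ~ {years:.0f} years")
```

The thrust is tiny, well under a millinewton, but because it is applied continuously it compounds, which is the whole appeal of the technique; and 4 light years cruised at a third of light speed does indeed work out to roughly 12 years, ignoring the time spent accelerating.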

Although the idea has been around since the 1970s, NASA’s attempt to launch a solar sail-propelled satellite a few years ago failed. If we are ever to reach the stars, it may well be through solar sail craft and other highly advanced, unconventional spacecraft, like interstellar ramjets. I therefore applaud Nye and the Planetary Society on their great success.

The Trump Administration Is Using the Full Power of the U.S. Surveillance State Against Whistleblowers

Published by Anonymous (not verified) on Sun, 04/08/2019 - 9:00pm in

Government whistleblowers are increasingly being charged under laws such as the Espionage Act, but they aren’t spies.

They’re ordinary Americans and, like most of us, they carry smartphones that automatically get backed up to the cloud. When they want to talk to someone, they send them a text or call them on the phone. They use Gmail and share memes and talk politics on Facebook. Sometimes they even log in to these accounts from their work computers.

Then, during the course of their work, they see something disturbing. Maybe it’s that the government often has no idea if the people it kills in drone strikes are civilians. Or that the NSA witnessed a cyberattack against local election officials in 2016 that U.S. intelligence believes was orchestrated by Russia, even though the president is always on TV saying the opposite. Or that the FBI uses hidden loopholes to bypass its own rules against infiltrating political and religious groups. Or that Donald Trump’s associates are implicated in sketchy financial transactions.

So they search government databases for more information and maybe print some of the documents they find. They search for related information using Google. Maybe they even send a text message to a friend about how insane this is while they consider possible next steps. Should they contact a journalist? They look up the tips pages of news organizations they like and start researching how to use Tor Browser. All of this happens before they’ve reached out to a journalist for the first time.

Most people aren’t very aware of it, but we’re all under surveillance. Telecom companies and tech giants have access to nearly all of our private data, from our exact physical locations at any given time to the content of our text messages and emails. Even when our private data doesn’t get sent directly to tech companies, our devices are still recording it locally. Do you know exactly what you were doing on your computer two months ago today at 3:05 p.m.? Your web browser probably does.

Yet while we all live under extensive surveillance, for government employees and contractors — especially those with a security clearance — privacy is virtually nonexistent. Everything they do on their work computers is monitored. Every time they search a database, their search term and the exact moment they searched for it is logged and associated with them personally. The same is true when they access a secret document, or when they print anything, or when they plug a USB stick into their work computer. There might be logs of exactly when an employee takes screenshots or copies and pastes something. Even when they try to outsmart their work computer by taking photos directly of their screen, video cameras in their workplace might be recording their every move.

Government workers with security clearance promise “never [to] divulge classified information to anyone” who is not authorized to receive it. But for many whistleblowers, the decision to go public results from troubling insights into government activity, coupled with the belief that as long as that activity remains secret, the system will not change. While there are some protections for whistleblowers who raise their concerns internally or complain to Congress, there is also a long history of those same people being punished for speaking out.

The growing use of the Espionage Act, a 1917 law that criminalizes the release of “national defense” information by anyone “with intent or reason to believe that it is to be used to the injury of the United States or to the advantage of a foreign nation,” shows how the system is rigged against whistleblowers. Government insiders charged under the law are not allowed to defend themselves by arguing that their decision to share what they know was prompted by an impulse to help Americans confront and end government abuses. “The act is blind to the possibility that the public’s interest in learning of government incompetence, corruption, or criminality might outweigh the government’s interest in protecting a given secret,” Jameel Jaffer, head of the Knight First Amendment Institute, wrote recently. “It is blind to the difference between whistle-blowers and spies.”

While we all live under extensive surveillance, for government employees and contractors — especially those with a security clearance — privacy is virtually nonexistent.

Of the four Espionage Act cases based on alleged leaks in the Trump era, the most unusual concerned Joshua Schulte, a former CIA software developer accused of leaking CIA documents and hacking tools to WikiLeaks in what became known as the Vault 7 disclosures. Schulte’s case is different from the others because, after the FBI confiscated his desktop computer, phone, and other devices in a March 2017 raid, the government allegedly discovered over 10,000 images depicting child sexual abuse on his computer, as well as a file and chat server he ran that included logs of him discussing child sexual abuse images and screenshots of him using racist slurs. Prosecutors initially charged Schulte with several counts related to child pornography and later with sexual assault in a separate case, based on evidence from his phone. Only in June 2018, in a superseding indictment, did the government finally charge him under the Espionage Act for leaking the hacking tools. He has pleaded not guilty to all charges.

The other three Espionage Act cases related to alleged leaks of government secrets have involved people who are said to have been sources for The Intercept. The Intercept does not comment on its anonymous sources, although it has acknowledged falling short of its own editorial standards in one case. It is not surprising that a publication founded as a result of the Snowden leaks, and one that has specialized in publishing secret government documents whose disclosure serves the public interest, has been an appealing target for the Trump administration’s war on whistleblowers.

The government comes to this war armed with laws like the Espionage Act that are ripe for abuse, and with the overwhelming firepower of surveillance technology that has almost no limits when applied to its own workers and contractors. But journalists also have tools at their disposal, including the First Amendment and the ability to educate ourselves about the methods the government uses to track and spy on its employees. We’ve mined the court filings in all seven leak cases filed by Trump’s Justice Department to identify the methods the government uses to unmask confidential sources. 

When a government worker becomes a whistleblower, the FBI gets access to reams of data describing exactly what happened on government computers and who searched for what in government databases, which helps narrow down the list of suspects. How many people accessed this document? How many people printed it? Can any of their work emails be used against them? What evidence can be extracted from their work computers?
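The winnowing process those questions describe can be sketched as simple set operations over audit-log entries. Everything in this sketch — names, fields, and the shape of the log — is hypothetical, not drawn from any real system:

```python
# Hypothetical audit log: each entry records who did what to a document.
# Intersecting the set of users who viewed a leaked document with the
# set who printed it shrinks the suspect pool to a handful of people.
access_log = [
    {"user": "alice", "doc": "report-17", "action": "viewed"},
    {"user": "bob",   "doc": "report-17", "action": "viewed"},
    {"user": "bob",   "doc": "report-17", "action": "printed"},
    {"user": "carol", "doc": "report-17", "action": "viewed"},
    {"user": "carol", "doc": "report-17", "action": "printed"},
]

viewed  = {e["user"] for e in access_log if e["action"] == "viewed"}
printed = {e["user"] for e in access_log if e["action"] == "printed"}

# Printing is the rarer, more incriminating event, so the intersection
# is the short list investigators start from.
suspects = viewed & printed
print(sorted(suspects))  # ['bob', 'carol']
```

From there, each remaining suspect’s other activity — work email, personal accounts — can be checked against the publication, as the cases below show.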

Once the FBI has a list of suspects based on the vast amount of data the government itself has collected, it uses court orders and search warrants to access even more information about the targets of its investigation. It compels tech companies, whose business models often rely on collecting as much information on their users as possible, to hand over everything, including personal emails, text messages, phone call metadata, smartphone backups, location data, files stored in Dropbox, and much more. FBI agents raid the houses and search the vehicles of these suspects, extracting whatever they can from any phones, computers, and hard drives they find. Sometimes, this includes files the suspects thought they had deleted or text messages and documents sent through encrypted messaging services like Signal or WhatsApp. The encryption these apps use protects messages while they’re sent over the internet so that the services themselves can’t spy on the content or hand it over to the government, but this encryption doesn’t protect messages stored on a phone or other device that is seized and searched.

Because whistleblowers aren’t spies, they normally don’t know how to avoid this kind of surveillance. One whistleblower who knew what he was up against, former CIA and National Security Agency contractor Edward Snowden, didn’t see any way to get secret government information into the public domain while retaining his anonymity.

“I appreciate your concern for my safety,” Snowden wrote in an encrypted email, from an anonymous address not associated with his real identity that he only accessed over the Tor network, to filmmaker Laura Poitras in the spring of 2013, “but I already know how this will end for me and I accept the risk.” In the documentary film “Citizenfour,” Snowden explains that the security measures he took while reaching out to journalists were only designed to buy him enough time to get information about the NSA’s overwhelming invasions of privacy to the American public. “I don’t think there’s a case that I’m not going to be discovered in the fullness of time,” he said from a hotel room in Hong Kong before he publicly came forward as the source.

If we want to live in a world where it’s safer for people to speak out when they see something disturbing, we need technology that protects everyone’s privacy, and it needs to be enabled by default. Such technology would also protect the privacy of whistleblowers before they decide to become sources.

In 2017, in the first indictment of an alleged whistleblower since Trump became president, the Justice Department charged Reality Leigh Winner under the Espionage Act for leaking a top-secret NSA document to a news organization that was widely reported to be The Intercept. At the time, Winner was a 25-year-old decorated U.S. Air Force veteran, who was also a dedicated CrossFit trainer with a passion for slowing the climate crisis. The document was an NSA intelligence report describing a cyberattack: Russian military intelligence officers hacked a U.S. company that provides election support in swing states and then, days before the 2016 election, sent local election officials — who were customers of this company — over 100 malware-infected emails, hoping to hack them next.

Government insiders charged under the Espionage Act are not allowed to defend themselves by arguing that their decision was in the public interest.

According to court documents, Winner was one of only six people who had printed the document she was accused of leaking (she had searched for, accessed, and printed the document on May 9, 2017). After searching all six of those employees’ work computers, investigators found that Winner was the only one who also had email contact with the news organization that published the document. (Using her private Gmail account, she had asked the news organization for a transcript of a podcast episode.) At the time, those who accused The Intercept of having revealed Winner’s identity said that the online publication, in an attempt to authenticate a document that had been sent anonymously, shared a copy with the government that contained a crease, suggesting that it had been printed. But Winner’s email and printing history alone would have made her the prime suspect.

FBI agents then raided her house and interrogated her without a lawyer present and without telling her she had a right to remain silent, leading to defense accusations that the government violated her Miranda rights. In her house, they found handwritten notes about how to use a burner phone and Tor Browser. They also seized her Android smartphone and her laptop and extracted evidence from both devices.

The FBI also ordered several tech companies to hand over information from Winner’s accounts. Facebook provided data from her Facebook and Instagram accounts, Google provided data from two separate Gmail accounts she used, Twitter provided data on her account, and AT&T turned over records as well.

We don’t know exactly what these companies turned over, but we do know that they were ordered to disclose all information associated with her accounts, including:

  • Usernames, email addresses, physical addresses, phone numbers, and credit card numbers
  • A history of every time she logged on, for how long, and from which IP addresses
  • Metadata about every instance of communication she ever had over these services, including the type of communication, the source and destination, and the file size or duration of the communication

The FBI also requested records of accounts that were linked to her Facebook, Instagram, Google, Twitter, and AT&T accounts — those that were created using the same email address, accessed from the same IP address, or logged into from the same web browser. (If users don’t take extra steps to remain anonymous, service providers can trivially link different accounts accessed from the same computer.)
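The account-linking described in that parenthetical is, at its simplest, a group-by on login metadata. A minimal sketch, with invented account names and IP addresses:

```python
# Group login events by source IP: any IP seen logging into more than
# one account "trivially" links those accounts, as the text notes.
# Account names and addresses here are invented for illustration.
from collections import defaultdict

logins = [
    ("acct_a", "198.51.100.7"),
    ("acct_b", "203.0.113.4"),
    ("acct_c", "198.51.100.7"),
]

by_ip = defaultdict(set)
for account, ip in logins:
    by_ip[ip].add(account)

linked = {ip: accts for ip, accts in by_ip.items() if len(accts) > 1}
print({ip: sorted(a) for ip, a in linked.items()})
# {'198.51.100.7': ['acct_a', 'acct_c']}
```

Real providers can do the same join on browser cookies or device identifiers, which is why anonymity requires never touching separate accounts from the same machine or connection.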

The FBI also extracted everything it could from Winner’s phone:

  • Her photos, including one that was taken on February 7, 2017, of a webpage that listed eight SecureDrop servers run by different media organizations.
  • Data extracted from her smartphone apps like the Facebook app, which contained private messages she had exchanged with her sister, which were later used against her.
  • Her phone’s browser history: On March 7, she visited a website that contained a list of “dark web email providers,” and she searched the internet for “tor email.” On May 9, at “approximately 7:29:49 p.m. (EST),” Winner searched for and viewed the tips page of the news outlet to which she was accused of leaking the NSA document, as well as the tips page of a second news outlet; later that night, she logged into her Dropbox account, and three minutes after that, she viewed the first media organization’s tips page again.

So the FBI got a search warrant issued to Dropbox, demanding all the files and other information stored in Winner’s account, as well as “any messages, records, files, logs, or information that have been deleted but are still available to Dropbox, Inc.” Dropbox gave the FBI a thumb drive containing that data.

They also got a search warrant issued to Google, demanding nearly everything stored in Winner’s account, including:

  • All the messages in her Gmail account.
  • Her Google search history.
  • Her location history.
  • All of her web browser activity that could be identified based on web browser cookies (this could possibly include a list of every webpage she visited that used the Google Analytics service).
  • Backups of her Android phone.

Based on metadata that the FBI got from its previous court order to Google, the bureau learned about a new, separate Google account that it suspected Winner used, which it hadn’t previously known about. The search warrant demanded data from this other account as well. Google gave the FBI “electronic files in excess of 809mb (compressed)” of data from Winner’s two Google accounts.

The FBI also extracted data from her laptop. It discovered that she had downloaded Tor Browser on February 1, 2017, and had used it in February and March. The FBI also discovered a note saved to her desktop that contained the username and password for a small email company called VFEmail, and so it got another search warrant demanding a copy of everything in the VFEmail account as well.

Winner was found guilty and sentenced to five years in prison, the longest sentence ever given to an alleged journalistic source by a federal court. The Intercept’s parent company, First Look Media, contributed to Winner’s legal defense through the Press Freedom Defense Fund.


Illustration: Owen Freeman for The Intercept

During Terry Albury’s distinguished 16-year counterterrorism career at the FBI, he “often observed or experienced racism and discrimination within the Bureau,” according to court documents. The only black FBI special agent in the Minneapolis field office, he was especially disturbed by what he saw as “systemic biases” within the bureau, particularly when it came to the FBI’s mistreatment of informants. In 2018, the Justice Department charged Albury with espionage for leaking secret documents to a news organization, reportedly The Intercept, which in early 2017 published a series of revelations based on confidential FBI guidelines, including details about controversial tactics for investigating minorities and spying on journalists.

Even though the FBI did not know whether the documents had been printed before being shared, it was not hard to track down who had accessed them. The FBI identified 16 people who had accessed one of the 27 documents that the media organization published on its website. They searched all 16 of those people’s work computers, including Albury’s, and found that his computer had also accessed “over two-thirds” of the documents that were made public.

According to court documents, the FBI used a variety of activities on Albury’s computer as evidence against him: exactly which documents he accessed and when, when he took screenshots, when he copied and pasted these screenshots into unsaved documents, and when he printed them. For example, on May 10, 2016, between 12:34 p.m. and 12:50 p.m., Albury accessed two classified documents. Nineteen minutes later, he pasted two screenshots into an unsaved Microsoft Word document, and over the following 45 minutes, he pasted 11 more screenshots into an unsaved Excel document. Throughout the day, he accessed more secret documents, pasting more screenshots into the Excel document. At 5:29 p.m., he printed it and then closed the document without saving it.

And it wasn’t just his work computer that was under surveillance. Using a closed-circuit video surveillance system in his workplace, the FBI captured video of Albury. On June 16, August 23, and August 24, 2017, the system recorded Albury holding a silver digital camera, inserting “what appeared to be a digital memory stick” into it, and taking photos of his screen. On all three days, the court documents say, Albury was viewing documents on his computer screen.

“It became a human rights thing for him,” Albury’s wife said in a court document requesting a lenient sentence, “the mistreatment and tactics that were used by FBI and how he was a part of it.” Albury, who is 40 years old, pleaded guilty and was sentenced to four years in prison and three years of supervised release.

Services like Signal and WhatsApp have made it simple for journalists to communicate securely with their sources by encrypting messages so that only the phones on either side of the conversation can access them and not the service itself. (This isn’t true when using non-encrypted messaging services like Skype and Slack, direct messengers on Twitter and Facebook, or normal text messages and phone calls.) However, encrypted services don’t protect messages when a phone gets physically searched and the user hasn’t deleted their message history. This was made exceedingly clear on June 7, 2018, when the Justice Department indicted former Senate Intelligence Committee aide James Wolfe for making false statements to the FBI.

According to court documents, Wolfe had told FBI leak investigators that he had not been in contact with journalists. But the indictment against Wolfe quoted the content of Signal conversations he’d had with journalists. It doesn’t mention how the FBI obtained these messages, but the only reasonable conclusion is that agents found them when they searched his phone.

“I don’t think there’s a case that I’m not going to be discovered in the fullness of time,” Edward Snowden said from a hotel room in Hong Kong before he publicly came forward as the source.

In addition to obtaining his Signal messages, the FBI searched Wolfe’s work email and found messages he’d traded with a journalist. The bureau knew about physical meetings he’d had with journalists and where they had occurred. Court documents mention hundreds of text messages he’d exchanged with journalists, which journalists he’d talked to on the phone, and for how long.

During the same investigation, the Justice Department sent court orders to Google and Verizon to seize years’ worth of phone and email records belonging to New York Times national security reporter Ali Watkins, who had previously worked for BuzzFeed News and Politico. The FBI was investigating Watkins’s source for a BuzzFeed article about a Russian spy trying to recruit Trump adviser Carter Page. The seized records went all the way back to when Watkins was in college. This was the first known case in which the Trump administration went after a reporter’s communications.

Wolfe pleaded guilty to lying to investigators about contacting the media and was sentenced to two months in prison and a $7,500 fine.

Even without physically searching a phone, the FBI can obtain real-time metadata (who sends messages to whom, and when) for at least one encrypted messaging app. This happened in the case of Natalie Mayflower Sours Edwards, a senior official with the Treasury Department’s Financial Crimes Enforcement Network, or FinCEN. At the end of 2018, the Justice Department indicted Edwards for allegedly providing a journalist, widely reported to be BuzzFeed News’s Jason Leopold, with details about suspicious financial transactions involving GOP operatives, senior members of Trump’s campaign, and a Kremlin-connected Russian agent and Russian oligarchs.

According to court documents, the FBI got a “judicially-authorized pen register and trap and trace order” for Edwards’s personal cellphone. This is a court order that allows the FBI to collect various types of communication metadata from the phone using a range of techniques — ordering third parties to hand over this metadata, for instance, or using a device such as a StingRay, which simulates a cellphone tower in order to trick phones into connecting to it so they can be spied on.

Using this court order, the FBI was apparently able to gather real-time metadata from an encrypted messaging app on Edwards’s phone. For example, on August 1, 2018, at 12:33 a.m., six hours after the pen register order “became operative” and the day after BuzzFeed News published one of the articles, Edwards allegedly exchanged 70 encrypted messages with the journalist. The following day, a week before BuzzFeed News published another story, Edwards allegedly exchanged 541 encrypted messages with the journalist.
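Even without any message content, metadata of this kind is revealing. A sketch using the message counts reported in the court documents, but an invented log format:

```python
# Pen-register-style metadata: each record is just (date, counterparty),
# no content. A daily tally alone shows a spike in contact with one
# person around publication dates — the pattern described above.
# The record format is invented; the counts come from the court filings.
from collections import Counter

records = [("2018-08-01", "journalist")] * 70 + \
          [("2018-08-02", "journalist")] * 541

per_day = Counter(date for date, _ in records)
print(per_day["2018-08-01"], per_day["2018-08-02"])  # 70 541
```

This is why metadata minimization by the service itself — discussed later in the piece — matters as much as content encryption.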

The court documents don’t name the messaging app that was used, and it’s not clear how the government obtained the metadata. However, it could not have gotten the metadata by directly monitoring the internet traffic coming from Edwards’s phone, so it is most likely that the government ordered a messaging service to supply real-time metadata, and the service complied.

Moxie Marlinspike, the founder of Signal, said his app wasn’t responsible. “Signal is designed to be privacy-preserving and collects as little information as possible,” Marlinspike told The Intercept. “In addition to end-to-end encryption for every message, Signal does not have any record of a user’s contacts, the groups they are in, the titles or avatars of any group, or the profile names or avatars of users. Even GIF searches are protected. Most of the time, Signal’s new Sealed Sender technology means that we don’t even know who is messaging who. Every government request we’ve ever responded to is listed on our website along with our response, in which it’s possible to see that the data we’re capable of providing a third party is practically nothing.”

A spokesperson for WhatsApp said that they can’t comment on individual cases and pointed to a section of its frequently asked questions about responding to law enforcement requests. The document states that WhatsApp “may collect, use, preserve, and share user information if we have a good-faith belief that it is reasonably necessary” to “respond to legal process, or to government requests.” According to Facebook’s transparency report, which includes requests for WhatsApp user data, during the last half of 2018, which was when the pen register order against Edwards’s phone became operational, Facebook received 4,904 “Pen Register / Trap & Trace” requests, asking for data from 6,193 users, and responded with “some data” to 92 percent of the requests.

A spokesperson for Apple declined to comment but referenced the section of its legal process guidelines about the type of data related to iMessage that Apple can provide to law enforcement. “iMessage communications are end-to-end encrypted and Apple has no way to decrypt iMessage data when it is in transit between devices,” the guidelines state. “Apple cannot intercept iMessage communications and Apple does not have iMessage communication logs.” Apple does, however, acknowledge having “iMessage capability query logs,” which indicate that an app on one user’s Apple device has begun the process of sending a message to another user’s iMessage account. “iMessage capability query logs do not indicate that any communication between users actually took place,” the guidelines say. “iMessage capability query logs are retained up to 30 days. iMessage capability query logs, if available, may be obtained with an order under 18 U.S.C. §2703(d) or court order with the equivalent legal standard or search warrant.”

The FBI also ordered Edwards’s personal cellphone carrier to hand over her phone records; the bureau did the same with a colleague of hers, whom it referred to as a “co-conspirator.” The FBI obtained a search warrant for Edwards’s personal email account, most likely Gmail, and from that, accessed her “internet search history” records (she is accused of searching for multiple articles based on her alleged leaks shortly after they were published). The FBI got a search warrant to physically search her person, and it seized a USB flash drive, as well as her personal cellphone. According to the criminal complaint, the flash drive contained 24,000 files, including thousands of documents describing suspicious financial transactions. The bureau extracted the messaging app data from her phone, allowing agents to read the content of the messages she allegedly exchanged with the journalist.

Edwards faces up to 10 years in prison. She has pleaded not guilty.


Illustration: Owen Freeman for The Intercept

Government workers are often able to access restricted documents using internal databases that they log into and search, including databases run by private companies like defense contractor Palantir. These databases track what each user does: which terms they search for, which documents they click on, which ones they download to their computers, and exactly when. IRS official John Fry had access to multiple law enforcement databases, including one run by Palantir, as well as FinCEN’s database — the same one from which Edwards is accused of leaking suspicious activity reports.

This past February, the Justice Department indicted Fry for allegedly providing details about suspicious financial transactions involving Trump’s former attorney and fixer Michael Cohen to prominent attorney Michael Avenatti and at least one journalist, the New Yorker’s Ronan Farrow. In one of these transactions, Cohen had paid $130,000 of hush money shortly before the 2016 election to an adult film actress in exchange for her silence about an affair she says she had with Trump.

On May 4, 2018, at 2:54 p.m., Fry allegedly searched the Palantir database for information related to Cohen and downloaded five suspicious activity reports, according to court documents. The same day, Fry allegedly conducted several searches for specific documents in the FinCEN database.

The FBI obtained Fry’s phone records from his personal cellphone carrier. After downloading suspicious activity reports related to Cohen, Fry allegedly called Avenatti on the phone. Later, he allegedly called a journalist and spoke for 42 minutes. The FBI then obtained a search warrant for Fry’s phone. Between May 12 and June 8, 2018, Fry allegedly exchanged 57 WhatsApp messages with the journalist. After the article was published, he allegedly texted, “Beautifully written, as I suspected it would be.” The journalist’s cellphone number was allegedly in Fry’s cellphone contact list.

Fry faces up to five years in prison. He has pleaded not guilty.

Daniel Hale was ideologically opposed to war before he joined the military in 2009, when he was 21 years old, but he felt he had no choice. “I was homeless, I was desperate, I had nowhere else to go. I was on my last leg, and the Air Force was ready to accept me,” he said in “National Bird,” a 2016 documentary about drone warfare whistleblowers.

He spent the next five years working in the drone program, first for the NSA and the Joint Special Operations Task Force in Afghanistan, and then as a defense contractor assigned to the National Geospatial-Intelligence Agency. His job included helping identify targets to be assassinated.

Hale is also an outspoken activist. “The most disturbing thing about my involvement in drones is the uncertainty if anybody that I was involved in kill[ing] or captur[ing] was a civilian or not,” he said in the film. “There’s no way of knowing.”

In May, the Justice Department charged Hale with espionage for allegedly leaking classified documents related to drone warfare to a news organization identified by Trump administration officials as The Intercept, which published a series of stories in 2015 that provided the most detail ever made public about the U.S. government’s assassination program.

“The most disturbing thing about my involvement in drones is the uncertainty if anybody that I was involved in kill[ing] or captur[ing] was a civilian or not. There’s no way of knowing.”

“In an indictment unsealed on May 9, the government alleges that documents on the U.S. drone program were leaked to a news organization,” Intercept Editor-in-Chief Betsy Reed said in a statement about Hale’s indictment. “These documents detailed a secret, unaccountable process for targeting and killing people around the world, including U.S. citizens, through drone strikes. They are of vital public importance, and activity related to their disclosure is protected by the First Amendment. The alleged whistleblower faces up to 50 years in prison. No one has ever been held accountable for killing civilians in drone strikes.”

On August 8, 2014, dozens of FBI agents raided Hale’s house with guns drawn and searched his computer and flash drives. This all happened during the Obama administration, which declined to file charges. Five years later, Trump’s Justice Department revived the case.

According to court documents, investigators could see exactly which search terms Hale allegedly typed into the two computers he used — one for unclassified work, the other for classified work — and exactly when he typed them. The evidence against him includes quotes from text messages that Hale allegedly sent to his friends and quotes from text and email conversations he allegedly had with a journalist who media outlets have identified as The Intercept’s Jeremy Scahill. It describes his phone call metadata. It alleges that he went to an event at a bookstore and sat next to the journalist. All of these things occurred before he had allegedly sent any documents to the media.

Between September 2013 and February 2014, according to the indictment, Hale and the journalist allegedly “had at least three encrypted conversations via Jabber,” a type of online chat service. It’s unclear where the government got this information; it could have been from internet surveillance, from the Jabber chat service provider, or from analyzing Hale’s computer. And as in the Winner and Albury cases, the FBI knew exactly which documents Hale had allegedly printed and when. Hale allegedly printed 32 documents, at least 17 of which were later published by the news organization “in whole or in part.”

When the FBI raided Hale’s house, agents allegedly found an unclassified document on his computer and a secret document on a USB stick that Hale had “attempted to delete.” They also found another USB stick that contained Tails, an operating system designed to keep data and internet activity private and anonymous, which can be booted from a USB stick, though it does not appear that the FBI gathered any data from it. In Hale’s cellphone contacts, agents allegedly found the journalist’s phone number.

Hale, who is now 32, faces a maximum of 50 years in prison. He has pleaded not guilty.

Even though the odds are stacked against sources who want to remain anonymous, it’s not hopeless. Different sources face wildly different risks. If you work for a company like Google, Facebook, or Goldman Sachs, you might be under intense scrutiny on your work devices while your personal devices remain outside the reach of your employer’s surveillance (so long as you don’t rely on services it controls to communicate with journalists). And some government sources may have ways of accessing secret documents whose disclosure is in the public interest that don’t involve generating a log entry with a time stamp and associating their username with that access.

It’s increasingly clear that the primary evidence used against whistleblowers comes from events that happened before they contacted the media, or even before they made the decision to blow the whistle. But it’s still critical that journalists are prepared to protect their sources as best as they can in case a whistleblower reaches out to them. This includes running systems like SecureDrop, which gives sources secure, metadata-free ways to make first contact with journalists and minimizes traces of the contact on their devices.

Journalists should also take steps to reduce the amount of information about their communication with sources that tech companies can access, and that ends up on their sources’ devices, by always using encrypted messaging apps instead of insecure text messages and always using the disappearing messages feature in those apps. They should also encourage their sources not to add them to the contacts in their phone, which might get synced to Google or Apple servers.

The journalistic process of verifying the authenticity of documents also carries risk to anonymous sources, but that process is essential to establish that the material has not been falsified or altered, and to maintain credibility with readers. Authentication, which often involves sharing information about the contents of a forthcoming story with the government, is a common journalistic practice that allows the government to weigh in on any risks involved in publishing the material of which the journalist may not be aware. By turning that process into a trap for journalists and sources, the government is sacrificing an opportunity to safeguard its legitimate interests and tell its side of the story.

News organizations also need to make hard decisions about what to publish. Sometimes, they may decide that it is safer to not publish documents if the story can be reported by describing the contents of the documents and leaving it ambiguous where the revelations came from. However, these approaches diminish transparency with readers and can also limit the impact of a story, which is important to both journalists and whistleblowers. In an era when the label “fake news” is used to discredit serious investigative journalism, original source documents serve as powerful evidence to refute such charges.

Encrypted messaging apps have made significant progress in securing conversations online, but they still have major issues when it comes to protecting sources. Many, including WhatsApp and Signal, encourage users to add the phone numbers of people they message to their contacts, which often get synced to the cloud, and WhatsApp encourages users to back up their text message history to the cloud. Although Facebook, which owns WhatsApp, doesn’t have access to the content of those backed-up messages, Google and Apple do.

It’s not enough that these apps encrypt messages. They also need to do better at promptly deleting data that’s no longer needed. End-to-end encryption protects messages as they travel from one phone to another, but each phone still has a copy of the plain text of all these messages, leaving them vulnerable to physical device searches. Disappearing messages features are a great start, but they need to be improved. Users should have the option to automatically have all their chats disappear without having to remember to set disappearing messages each time they start a conversation, and they should be asked if they’d like to enable this when they first set up the app. And when all messages in a conversation disappear, all forensic traces that a conversation with that person happened should disappear too.
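The deletion behavior described above — expire messages automatically, and leave no trace that a conversation ever happened once it empties — can be sketched in a few lines. This is a toy illustration under my own assumptions, not how Signal, WhatsApp, or any real app implements disappearing messages:

```python
import time

class DisappearingStore:
    """Toy message store whose messages auto-expire after a fixed TTL."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._messages = {}  # peer -> list of (timestamp, text)

    def send(self, peer, text, now=None):
        now = time.time() if now is None else now
        self._messages.setdefault(peer, []).append((now, text))

    def purge(self, now=None):
        """Drop expired messages; once a conversation is empty, delete the
        peer's entry entirely, so no record remains that a chat existed."""
        now = time.time() if now is None else now
        for peer in list(self._messages):
            kept = [(t, m) for t, m in self._messages[peer] if now - t < self.ttl]
            if kept:
                self._messages[peer] = kept
            else:
                del self._messages[peer]

    def conversation(self, peer, now=None):
        self.purge(now)
        return [m for _, m in self._messages.get(peer, [])]
```

The design point is the last branch of `purge`: when the final message in a conversation expires, the entry for that peer is removed too, which is the no-forensic-traces property argued for above.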


There is also much more work to be done on protecting metadata. Signal’s “sealed sender” feature, which encrypts much of the metadata that the Signal service has access to, goes further than any other popular messaging app, but it’s still not perfect. Messaging apps need to engineer their services so that they cannot access any metadata about their users, including IP addresses. If services don’t have access to that metadata, then they can’t be compelled to hand it over to the FBI during a leak investigation.

By default, web browsers keep a detailed history of every webpage you ever visit. They should really stop doing this. Why not only retain a month of browser history by default, and allow power users to change a setting if they want more?
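A 30-day default retention policy like the one proposed here amounts to a periodic pruning job over the history database. A minimal sketch, using SQLite and a hypothetical one-table schema of my own invention (real browsers such as Firefox keep history in a more complex database, places.sqlite):

```python
import sqlite3
import time

RETENTION_DAYS = 30

def prune_history(conn, now=None):
    """Delete history rows with a visit timestamp older than the cutoff."""
    now = time.time() if now is None else now
    cutoff = now - RETENTION_DAYS * 86400
    with conn:  # commits the DELETE on success
        conn.execute("DELETE FROM history WHERE visited_at < ?", (cutoff,))

# Demo against an in-memory database with the assumed schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (url TEXT, visited_at REAL)")
now = time.time()
conn.executemany(
    "INSERT INTO history VALUES (?, ?)",
    [("", now - 90 * 86400),        # 90 days old: pruned
     ("", now - 5 * 86400)],  # 5 days old: kept
)
prune_history(conn, now=now)
```

Run on a schedule, a job like this would give ordinary users the one-month default while power users could raise `RETENTION_DAYS`.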

At the moment, Tor Browser is the best web browser for protecting user privacy. Not only does it never keep a history of anything that happens in it, but it also routes all internet traffic through an anonymity network and uses technology to combat a tracking technique called “browser fingerprinting,” so that the websites you visit don’t know anything about you either. Unfortunately, simply having Tor Browser or other privacy-specific tools installed on a computer has been used as evidence against alleged whistleblowers. This is one reason I’m excited about Mozilla’s plan to integrate Tor directly into Firefox as a “super private browsing” mode. In the future, instead of downloading Tor Browser, sources could simply use a feature built into Firefox to get the same level of protection. Maybe Google Chrome, Apple Safari, and Microsoft Edge should follow Mozilla’s lead here. (The privacy-oriented browser Brave already supports private Tor windows.)

Finally, tech giants that amass our private data through services like Gmail, Microsoft Outlook, Google Drive, iCloud, Facebook, and Dropbox should store less information about everyone to begin with, and encrypt more of the data they do store in ways that they themselves can’t access and therefore can’t hand to the FBI. Some companies do this for certain categories of data — Apple doesn’t have the ability to access the passwords stored in your iCloud Keychain, and Google cannot access your synced Chrome profiles — but it’s not nearly enough. I’m not holding my breath.

Correction: August 8, 2019
An earlier version of this story misstated Daniel Hale’s age. He is 32, not 31.

The post The Trump Administration Is Using the Full Power of the U.S. Surveillance State Against Whistleblowers appeared first on The Intercept.

Academic Freedom and the LMS

Published by Anonymous (not verified) on Fri, 02/08/2019 - 9:48am in

This morning, I delivered this paper in the Academic Freedom session at the West Coast Division of the American Historical Association’s Conference in Las Vegas.  Thanks to my friend Hank Reichman for inviting me to participate.  I don’t usually write out my papers anymore, but I did this time so that I didn’t get tongue-tied by edtech-induced rage.  Being able to post it here on my largely inactive blog is a nice additional benefit.

I just started teaching a new course, filling in for a colleague who has left our university for greener pastures.  It’s a mostly online course, and one of the restrictions I faced when accepting it was that it had to be delivered through Blackboard, the learning management system (or LMS) on our campus.  In my usual online courses, I use the free version of Canvas, a Blackboard competitor.  Nevertheless, I accepted the rationale behind that requirement: that a group of incoming Freshmen needed to get used to the system that they would encounter most often once they started for real in the fall.  That system would definitely be Blackboard. 

I first encountered Blackboard around fifteen years ago.  I decided to go to a couple of training sessions just to see what this new online tool could do for me.  I decided quickly that whatever it offered wasn’t worth the trouble.  It was badly organized, hard to learn and didn’t offer anything besides a grade book that I didn’t use already.  Having used a competing learning management system for a few years now, I’m in a much better position to critique Blackboard than I was back then.  However, unless you too are burdened by having this particular LMS on your campus, that critique would not be very useful.  Instead, I want to offer a broader critique of LMSs in general as a threat to academic freedom because even if you don’t use whatever LMS your campus offers, their misuse is a threat to your freedom to teach your classes however you happen to see fit. 

Learning Management Systems first arrived on the scene during the mid-1990s as a way for universities to speed the offering of online classes. Your faculty can’t program? We’ll set up this shell course for them and teach them how to populate it with no coding necessary. It was kind of an AOL for the academic set, except you couldn’t pick up a disk at your nearest convenience store and your university paid the bill.

Somewhere in the first decade of this century, learning management systems evolved from what was then generally known as “distance ed” into ordinary face-to-face classrooms. Store your syllabus here. Upload your handouts here. Let your students see how they’re doing in the course at any time by uploading your grades into the LMS grade book. For people who wanted to quickly modernize their courses without building their own web sites, this proved tempting. For contingent faculty or faculty at community colleges, the use of the LMS quickly became an expectation for online and face-to-face courses alike. Indeed, as I’ve documented in the pages of the journal Academe, mandatory LMS usage is now fairly common at community colleges across the United States and even in other private and public institutions where faculty do not have the protection of tenure.

The American Association of University Professors has issued many statements concerning the relationship between academic freedom and teaching. For example, the 1999 Statement on Online and Distance Education reads, in part, “Teachers should have the same responsibility for selecting and presenting materials in courses offered through distance-education technologies as they have in those offered in traditional classroom settings.” What I want to argue here is that this statement should go a little further: Academic freedom should cover not only what professors teach, but how they choose to teach it. If you use a learning management system in an online or a face-to-face setting, all sorts of important choices about how you teach are made by actors far outside any one faculty member’s control. No wonder so many faculty with academic freedom resist using their LMS, or at least refuse to do much with it beyond employing its online grade book.

Here I’m sorely tempted to start complaining about Blackboard again, and I will do a little bit of that in what follows. However, before I talk about any LMS mechanics, I need to emphasize that there are a lot more people involved in your campus learning management system than the people who created that system. In my case, it has been difficult to tell which of the things I don’t like about Blackboard originate with Blackboard itself and which are a function of how our IT department wanted Blackboard customized for our campus.

For example, when I was first figuring out Blackboard, I called our IT help desk and asked whether there was any way to message individual students other than through their university e-mail accounts, which in my experience very few of them ever check. The answer was no, because someone in our administration building had determined that any other means of communication was a potential FERPA violation. On the other hand, I had heard about how awful Blackboard discussion forums were long before I returned to Blackboard a few weeks ago. Therefore, I’m almost certain that the fact that the comments there barely nest is entirely Blackboard’s fault. With many other complaints it’s impossible to tell who exactly is responsible because I wasn’t there when the decision got made.

If I’m teaching a face-to-face course, I can hand back papers with grades on them, ask a student to visit me in office hours or – and sadly this is the most appropriate analogy to my first complaint above – ask students to give me an e-mail address that they actually check. With respect to class discussions, as long as I’m there to lead, I can make sure that nobody’s points get lost in the back and forth of a large class by emphasizing their importance or requesting direct follow-up. By teaching with Blackboard at CSU-Pueblo, I’m giving up both these prerogatives.

My usual workaround for the awfulness of all LMS discussion forums is to use Slack, the free office messaging program. Not only do the comments nest well, students can actually message each other without me seeing, which encourages them to be frank with one another, which is especially important if they’re doing group work. We can also use emojis and GIFs in Slack if we are so inclined. Perhaps most importantly for me at least, the smartphone app is really, really good so when I make an announcement it goes right to the notifications on everyone’s phone, so I can be reasonably certain that nobody will miss it.

Unfortunately, if the principle behind the Blackboard installation that only allows e-mail messaging ever gets applied to my class, I am in deep, deep trouble. I recently confessed my heresy to an administrator in the hopes of finding an early solution to the problem and I realized that this kind of inherent conservatism extends well beyond FERPA. His argument was that if our accrediting body ever asked for the documentation from my class and the university couldn’t produce it because they didn’t control it, we might have a problem on our hands. I argued that hundreds of faculty all over this country are using Slack in their classes and so far no university has lost their accreditation as a result. Besides this, that kind of risk aversion will inevitably stifle pedagogical creativity, either by faculty all using the same bad online tools or by eschewing online tools and classes altogether.

At present, I’m working toward a happy compromise with which both faculty and my administration can live. While we’re not quite there yet, what I have learned is how important it is that faculty not let key decisions about their online tools be made by other people. If you do, expectations will change while you cover your ears and hum loudly. Mandatory LMS usage will come not as a command, but in the name of your students or in the name of “efficiency” at your university, and you will be swept up by change nonetheless.

I believe it is far better for faculty to be proactive. Ride the wave to save your prerogatives rather than just hold on for dear life. Technology will set expectations for the classrooms of the future, and if there’s no faculty representation in those discussions everything will change – probably for the worse – because of our lack of input.

The most important standard I would bring to any discussion about what technology should be employed on campus and the faculty role in how it should be employed is that faculty deserve the same prerogatives when they use an online tool as they do when they are teaching in an entirely conventional face-to-face classroom. To suggest anything else defeats the purpose of moving any part of a class online in the first place. I fear that administrations tend to favor contingent faculty for online teaching precisely because they don’t expect them to utilize their traditional prerogatives in any classroom setting because they are too worried about their continuing employment.

The second standard I would bring to any discussion of how technology like the LMS should be employed on campus is that faculty should be offered as many technological choices as possible and that they should be the ones who make the final decision about which ones they use. My co-author Jonathan Poritz and I compare the ideal edtech situation to a buffet in our 2016 book Education is Not an App: The future of university teaching in the Internet age. Everyone eats what they want or perhaps chooses not to eat at all. It is the administration’s job to lay out the table rather than to force the available offerings down anyone’s throat.

The final standard I would bring to a discussion of the LMS is that the result should be as close to the open Internet as humanly possible. That means faculty have to be able to employ tools that exist entirely outside their LMS if they so choose, like Slack or an open source web annotation program. The best LMSs available will play well with programs like these, as Canvas has tried to do – and I think the most recent versions of Blackboard do too – so that faculty can run them inside their campus shells with no extra logins and little trouble. To do otherwise is to go back to the days of Internet walled gardens, like America Online. And after all, college campuses are the kinds of places that are supposed to be on the cutting edge of technology since they have so many smart people on them. Treat those smart people like the average corporate peon when it comes to how they teach – the action at the center of their job descriptions – and you are going to have a lot of very unhappy smart people on your hands.

Japanese Scientist Obtains Permission for Animal-Human Hybrids

Published by Anonymous (not verified) on Fri, 02/08/2019 - 2:49am in

This is very ominous. A Japanese scientist has been granted permission to create animal-human hybrids, according to yesterday’s I. He intends to use them in research toward the possible creation of organs in animals that could be used for transplantation into humans. There are limits to his research, however. He states that at the moment he will not keep them alive for longer than 15 and a half days, so it isn’t as if he’s going to produce complete animal-human hybrids, like the chimpanzee-human creature developed by rogue scientists as a new slave animal in the 1990s ITV SF thriller, Chimera. But it is a step in that direction.

The article, ‘Human-animal hybrid research is approved’, by Colin Drury, on page 22, runs

Human-animal hybrids are to be developed in embryo form in Japan after the government approved controversial stem-cell research.

Human cells will be grown in rat and mouse embryos, then brought to term in a surrogate animal, as part of experiments to be carried out at the University of Tokyo.

Supporters say the work – led by the renowned geneticist Hiromitsu Nakauchi – could be a vital first step towards eventually growing organs that can then be transplanted into people in need.

But opponents have raised concerns that scientists are playing God. Critics worry the human cells could stray beyond the targeted organs into other areas of the animal, creating a creature that is part animal, part person.

For that reason, such prolonged experimentation has been banned or not been financed across the world in recent years.

In Japan, scientists were forbidden from going beyond a 14-day growth period. But those laws were relaxed in March when the country’s education and science ministry issued new guidelines saying such creations could now be brought to term.

Now, Dr. Nakauchi’s application to experiment is the first to be approved under that new framework.

Human-animal hybrid embryos have been made in countries such as the United States, but were never brought to term. The US National Institutes of Health has had a moratorium on funding such work since 2015.

“We don’t expect to create human organs immediately, but this allows us to advance our research based upon the know-how we have gained up to this point,” Dr. Nakauchi told the Asahi Shimbun newspaper.

He added that he planned to proceed slowly, and will not attempt to bring any hybrid embryos to term for several years, instead growing the hybrid mouse embryos to 14.5 days, when the animal’s organs are mostly formed, and the hybrid rat embryos to 15.5 days.

Such caution was welcomed by bioethicists in the country.

There was also a little capsule, containing the comment that

Some bioethicists are concerned about the possibility that human cells might stray, travelling to the developing animal’s brain and potentially altering its cognition.

Which seems to be a concern that this research could unintentionally also result in animals acquiring some form of human intelligence accidentally.

The British philosopher Mary Midgley attacked that part of the biotech industry and those scientists, who looked forward to bioengineers being able to redesign whole new forms of humans in her book, The Myths We Live By (London: Routledge 2004). She writes

That ideology is what really disturbs me, and I think it is what disturbs the public. This proposed new way of looking at nature is not scientific. It is not something that biology has shown to be necessary. Far from that, it is scientifically muddled. It rests on bad genetics and dubious evolutionary biology. Though it uses science, it is not itself a piece of science but a powerful myth expressing a determination to put ourselves  in a relation of control to the non-human world around us, to be in the driving seat at all costs rather than attending to that world and trying to understand how it works. It is a myth that repeats, in a grotesquely simple sense, Marx’s rather rash suggestion that the important thing is not to understand the world, but to change it. Its imagery is a Brocken spectre, a huge shadow projected on to a cloudy background by the shape of a few recent technological achievements.

The debate then is not between Feeling, in the blue corner, objecting to the new developments, and Reason in the red corner, defending them. Rhetoric such as that of Stock and Sinsheimer and Eisner is not addressed to Reason. It is itself an exuberant power fantasy, very much like the songs sung in the 1950s during the brief period of belief in an atomic free lunch, and also like those in the early days of artificial intelligence. The euphoria is the same. It is, of course, also motivated by the same hope of attracting grant money, just as the earlier alchemists needed to persuade powerful persons that they were going to produce real, coinable gold. (p. 166).

She goes on to argue that such scientific hubris comes from the gradual advance of atheism with the victory of the mechanistic model of the universe introduced by Newton in the 17th century. As God receded, scientists have stepped in to take His place.

On the clockwork model the world thus became amazingly intelligible. God, however, gradually withdrew from the scene, leaving a rather unsettling imaginative vacuum. The imagery of machinery survived. But where there is no designer the whole idea of mechanism begins to grow incoherent. Natural Selection is supposed to fill the gap, but it is a thin idea, not very satisfying to the imagination.

That is how the gap that hopeful biotechnicians now elect themselves to fill arose. They see that mechanistic thinking calls for a designer, and they feel well qualified to volunteer for that vacant position. Their confidence about this stands out clearly from the words I have emphasised in Sinsheimer’s proposal that ‘the horizons of the new eugenics are in principle boundless – for we should have the potential to create new genes and new qualities yet undreamed of … For the first time in all time a living creature understands its origin and can undertake to design its future.’

Which living creature? It cannot be human beings in general, they wouldn’t know how to do it. It has to be the elite, the biotechnologists who are the only people able to make these changes. So it emerges that members of the public who complain that biotechnological projects involve playing God have in fact understood this claim correctly. That phrase, which defenders of the projects dismiss as mere mumbo jumbo, is actually a quite exact term for the sort of claim to omniscience and omnipotence on these matters that is being put forward.

One of the most profound artistic comments I have found about the implications of this new biotechnology is the sculpture ‘The Young Family’ by the Australian artist Patricia Piccinini. This shows a hybrid mother creature, bred for organ transplantation, surrounded by her young. Curled up like an animal, her human eyes peer back plaintively at the spectator. It’s a deeply disturbing work, although Piccinini states that she is not opposed to scientific progress but optimistic about it. She says

In terms of the real world, these are some of the key issues that I am trying to question and discuss with my work. I’m not pessimistic about developments in biotechnology. We are living in a great time with a lot of opportunities, but opportunities don’t always turn out for the best. I just think we should discuss the full implications of these opportunities.

So if we look at The Young Family we see a mother creature with her babies. Her facial expression is very thoughtful. I imagine this creature to be bred for organ transplants. At the moment we are trying to do such a thing with pigs, so I gave her some pig-like features. That is the purpose humanity has chosen for her. Yet she has children of her own that she nurtures and loves. That is a side-effect beyond our control, as there will always be.

That is what makes the question of breeding animals purely for organ-transfer so difficult to answer. On one hand we need organs to help people in need, on the other hand we are looking at an animal that wants to exist for the sake of itself. I can’t help but feel an enormous empathy for this creature. And, to be very honest, if it would save the life of one of my children, I would be willing to take one of these organs. I know it is probably not ethically right but sometimes honesty, emotions, empathy and ethics don’t always line up.

I am not nearly so optimistic. For me, this sculpture is a deeply moving, deeply disturbing comment on the direction this new technology can go. And I fear that this latest advance is taking us there.

On (Lippmann and) Ignorant Voters

Published by Anonymous (not verified) on Thu, 01/08/2019 - 8:33pm in

The hypothesis, which seems to me the most fertile, is that news and truth are not the same thing, and must be clearly distinguished. The function of news is to signalize an event, the function of truth is to bring to light the hidden facts, to set them into relation with each other, and make a picture of reality on which men can act....For the troubles of the press, like the troubles of representative government, be it territorial or functional, like the troubles of industry, be it capitalist, cooperative, or communist, go back to a common source: to the failure of self-governing people to transcend their casual experience and their prejudice, by inventing, creating, and organizing a machinery of knowledge. It is because they are compelled to act without a reliable picture of the world, that governments, schools, newspapers and churches make such small headway against the more obvious failings of democracy, against violent prejudice, apathy, preference for the curious trivial as against the dull important, and the hunger for sideshows and three legged calves. This is the primary defect of popular government, a defect inherent in its traditions, and all its other defects can, I believe, be traced to this one.--Walter Lippmann (1922) Public Opinion (Chapter 24, 228-230)

As regular readers know, I think reflection on Walter Lippmann is important in present circumstances. He is one of the intellectual architects (recall) of what I call the second wave of liberalism (1945-2008), which has been imploding during the last decade, and which was even the proximate cause of the development of what came to be known as neo-liberalism. More subtly, his conception of the good society, which hearkens back to the (attractive) kind of liberalism of Adam Smith and Sophie de Grouchy, rejects both state neutrality and state-craft as soul-craft; the good society is one that has many morally salient characteristics (the people are flourishing and can lead lives of their own choosing without fear or coercion). To attain the good society is, not unlike building a medieval cathedral, a multi-generational, collective project. There is a sense in which this is a religious project. But, probably unlike the cathedral, the path to and the outcome of the good society are, due to the uncertainty-generating conditions of modern life, full of surprises.

One reason to return to Lippmann (not in order to agree – he also has (recall) serious weaknesses) is that, as John Dewey emphasizes, he understands the problem of voter ignorance not, as anti-democrats do, as a problem of other (not-so-smart or gripped-by-ideology) voters, but as a structural feature of the human condition. One need not agree with all of Lippmann’s ways of articulating the situation to see that the (Platonic) cave is the human condition in mass society. And to overcome such ignorance – here he anticipates Hannah Arendt – is a rare achievement to be found in, or (perhaps it’s better to say) produced by, a few institutions: the law and science.

As an aside, Lippmann, who knew the press from within, is adamant that the press, even during the best of times and under the best ownership structures, is not the kind of institution capable of reliably generating truth.* And, in fact, he claims that, when the press is truth-conducive, it is often relying on the rule-following features of the administrative state. Importantly, his skepticism about the press’ truth-conducive-ness does not entail that the press has no positive contributions to make to democratic life. (If he thought that, his life would be self-undermining in a very obvious way.)

I do not think Lippmann's answer to the problem of voter ignorance is successful, but it was (judging by what happened next) very influential; and if not influential, at least prophetic. It is possible that by Lippmann's own standards his solution would be deemed successful, because Lippmann's aspirations (as, again, Dewey noted) were more ameliorative than utopian. Lippmann's proposal has three central features: first, to advocate for an expansion of the (meritocratic) bureaucratic state and infuse it with what he calls 'technical knowledge' that he associates with "statisticians, accountants, auditors, industrial counsellors, engineers of many species, scientific managers, personnel administrators, research men, "scientists," and sometimes just as plain private secretaries." (234) He thinks such technical knowledge can also be turned into an "experimental social science." (237)  The point, for Lippmann, is not that such technicians will run the show, but rather that they become embedded (as a "permanent intelligence section") in all government apparatuses (nationally and locally). For, by definition, a technician is not the decision maker.+

A key second feature of his proposal is a permanent circulation between universities and government: college graduates, perhaps from a privileged 'national university' (247) recruited in part from government staff, flowing into government; and technical government bureaucrats returning regularly to train and teach at universities. This intellectual flow of human capital would itself (in part) constitute "political science" and, in turn, would be "associated with politics in America."+ (In 1922 economics was not yet seen as the future queen of the social sciences.) Lippmann optimistically assumes that all of this can be done transparently and openly (246).  The third key feature is to insist that any political decision must be channeled through a procedure that will involve the bringing to bear of technical expertise on a problem (e.g., 255).

As an aside, Lippmann's account is clearly distinct from the (later) Hayekian narrative, which sees the rise of the bureaucratic, technocratic state as a consequence of the experience of (mobilization and) war-planning during WWI. Lippmann, by contrast, thinks that the rise of the technician in the great society is itself the consequence "of blind natural selection" (234); complex organizations faced with complex decision environments started to require technical expertise merely to help integrate the flow of data and to structure decision-making for the executive. One discerns in Lippmann that the problem of big data is a nineteenth century problem, and that to overcome it even in competitive markets, requires the rise of expertise within organizations.

Now, Lippmann is not unaware that there is a real risk that the technocrats will quickly run the show and displace elected officials and the electorate. His response to this is to insist on a stark distinction between executive and technical functions and to aim to align incentives between them properly. [242]+ Only this can keep the technician's disinterestedness in place. If such a distinction were maintained in practice, and an esprit de corps among the technical class could be cultivated, then his hope that there will be circumstances in which the presence of a class of technicians can even abolish partisanship would not be comical. I don't mean to suggest that Lippmann's proposal is impractical: it very much reflects reality inside important elements of the governance of the EU and the US Federal government. As Dewey discerned, Lippmann proposed what became a tool, even an effective technique of governance, one that has undoubtedly helped reduce the catastrophic mistakes all political agents can make.**

I used the word 'comical.' Perhaps that's unfortunate. For Lippmann's weakness is the weakness of much of twentieth-century liberalism. We can discern in Lippmann a tendency toward consensus and de-politicization in thinking about politics. In his (1937) Good Society, he demands that politicians combine, in spirit, the temperament of a legislator and a judge (who is impartial, fair, temperate, etc.). As the earlier Public Opinion teaches, that aspiration would only be possible in reality if political life were not governed by opinion. But if that were possible, we wouldn't (recall) need democracy at all.



*He is also surprisingly astute on the incentives that prevent the press from becoming one.

+Much of this is naive, but he notes that in corporations it would be a good thing if accountants could be made independent from "directors and shareholders." (242) Sometimes one cannot help but feel that some of the solutions of the problems of social life are long known. 

**Regular readers know that I also argue that technocratic expertise can be the source of catastrophic political mistakes. In my view liberalism can only be revived when it comes to terms with this. 

We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive

Published by Anonymous (not verified) on Fri, 26/07/2019 - 7:00pm in

They call it the Silent Talker. It is a virtual policeman designed to strengthen Europe’s borders, subjecting travelers to a lie detector test before they are allowed to pass through customs.

Prior to your arrival at the airport, using your own computer, you log on to a website, upload an image of your passport, and are greeted by an avatar of a brown-haired man wearing a navy blue uniform.

“What is your surname?” he asks. “What is your citizenship and the purpose of your trip?” You provide your answers verbally to those and other questions, and the virtual policeman uses your webcam to scan your face and eye movements for signs of lying.

At the end of the interview, the system provides you with a QR code that you have to show to a guard when you arrive at the border. The guard scans the code using a handheld tablet device, takes your fingerprints, and reviews the facial image captured by the avatar to check if it corresponds with your passport. The guard’s tablet displays a score out of 100, telling him whether the machine has judged you to be truthful or not.

A person judged to have tried to deceive the system is categorized as “high risk” or “medium risk,” depending on the number of questions they are found to have falsely answered. Our reporter — the first journalist to test the system before crossing the Serbian-Hungarian border earlier this year — provided honest responses to all questions but was deemed to be a liar by the machine, with four false answers out of 16 and a score of 48. The Hungarian policeman who assessed our reporter’s lie detector results said the system suggested that she should be subject to further checks, though these were not carried out.

Travelers who are deemed dangerous can be denied entry, though in most cases they would never know if the avatar test had contributed to such a decision. The results of the test are not usually disclosed to the traveler; The Intercept obtained a copy of our reporter’s test only after filing a data access request under European privacy laws.


The iBorderCtrl project’s virtual policeman.

Image: iBorderCtrl

The virtual policeman is the product of a project called iBorderCtrl, which involves security agencies in Hungary, Latvia, and Greece. Currently, the lie detector test is voluntary, and the pilot scheme is due to end in August. If it is a success, however, it may be rolled out in other European Union countries, a potential development that has attracted controversy and media coverage across the continent.

IBorderCtrl’s lie detection system was developed in England by researchers at Manchester Metropolitan University, who say that the technology can pick up on “micro gestures” a person makes while answering questions on their computer, analyzing their facial expressions, gaze, and posture.

An EU research program has pumped some 4.5 million euros into the project, which is being managed by a consortium of 13 partners, including Greece’s Center for Security Studies, Germany’s Leibniz University Hannover, and technology and security companies like Hungary’s BioSec, Spain’s Everis, and Poland’s JAS.

The researchers at Manchester Metropolitan University believe that the system could represent the future of border security. In an academic paper published in June 2018, they stated that avatars like their virtual policeman “will be suitable for detecting deception in border crossing interviews, as they are effective extractors of information from humans.”

However, some academics are questioning the value of the system, which they say relies on pseudoscience to make its decisions about travelers’ honesty.

Ray Bull, professor of criminal investigation at the University of Derby, has assisted British police with interview techniques and specializes in methods of detecting deception. He told The Intercept that the iBorderCtrl project was “not credible” because there is no evidence that monitoring microgestures on people’s faces is an accurate way to measure lying.

“They are deceiving themselves into thinking it will ever be substantially effective and they are wasting a lot of money,” said Bull. “The technology is based on a fundamental misunderstanding of what humans do when being truthful and deceptive.”

In recent years, following the refugee crisis and a spate of terrorist attacks in France, Belgium, Spain, and Germany, police and security agencies in Europe have come under increasing political pressure to more effectively track the movements of migrants. Border security officials on the continent say they are trying to find faster and more efficient new ways, using artificial intelligence, to check the travel documents and biometrics of the more than 700 million people who annually enter the EU.
The European Commission — the EU’s executive branch — has set aside a proposed €34.9 billion for border control and migration management between 2021 and 2027. Meanwhile, in September last year, European lawmakers agreed to establish a new automated system that will screen nationals from visa-free third countries — including the United States — to establish whether or not they should be allowed to enter the EU.

In the future, a visa-free traveler who, for whatever reason, has not been able to submit an application in advance will not be granted entry into the Schengen zone, an area covering 26 countries in Europe where travelers can move freely across borders without any passport checks.

IBorderCtrl is one technology designed to strengthen the prescreening process. But transparency activists say that the project should not be rolled out until more information is made available about the technology — such as the algorithms it uses to make its decisions.

Earlier this year, researchers at the Milan-based Hermes Center for Transparency and Digital Human Rights used freedom of information laws to obtain internal documents about the system. They received hundreds of pages; however, they were heavily redacted, with many pages completely blacked out.

“The attempt to suppress debate by withholding the documents that address these issues is really frightening,” said Riccardo Coluccini, a researcher at the Hermes Center. “It is absolutely necessary to understand the reasoning behind the funding process. What is written in those documents? How does the consortium justify the use of such a pseudoscientific technology?”

A study produced by the researchers in Manchester tested iBorderCtrl on 32 people and said that their results showed the system had 75 percent accuracy. The researchers noted, however, that their participant group was unbalanced in terms of ethnicity and gender, as there were fewer Asian or Arabic participants than white Europeans, and fewer women than men.
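To put that 75 percent figure in perspective, a sample of 32 participants leaves a very wide margin of uncertainty. The sketch below computes a standard 95 percent Wilson score interval for a binomial proportion; the 24-of-32 success count is inferred from the stated 75 percent, and the code is purely illustrative, not part of the iBorderCtrl evaluation.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# 75% accuracy on 32 participants implies roughly 24 correct classifications.
lo, hi = wilson_interval(24, 32)
print(f"95% CI: {lo:.0%} to {hi:.0%}")
```

On these assumed numbers the interval runs from roughly the high 50s to the high 80s in percentage terms, which illustrates why a 32-person pilot cannot settle the accuracy question either way.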

Giovanni Buttarelli, head of the EU’s data protection watchdog, told The Intercept that he was concerned that the iBorderCtrl system might discriminate against people on the basis of their ethnic origin.

“Are we only evaluating possible lies about identity or we are also trying to analyze some of the person’s somatic traits, the edges of the face, the color of the skin, the cut of the eyes?” Buttarelli said. “Who sets the parameters to establish that a certain subject is lying or not lying?”

A spokesperson for iBorderCtrl declined to answer questions for this story. A website for the project acknowledges that the lie detection system will “have an impact on the fundamental rights of travellers” but says that, because the test is currently voluntary, “issues with respect to discrimination, human dignity, etc. therefore cannot occur.”

The reporting for this story was supported by the Investigative Journalism for Europe grant and the Otto Brenner Foundation.

The post We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive appeared first on The Intercept.

Google Continues Investments in Military and Police AI Technology Through Venture Capital Arm

Published by Anonymous (not verified) on Wed, 24/07/2019 - 6:09am in



Last year, Google faced internal revolt from many employees over its handling of Project Maven, a secretive contract between the company and the Department of Defense to use artificial intelligence to improve the military’s drone targeting capabilities. After a series of internal, worker-led protests and resignations following reporting by The Intercept and Gizmodo, the company said it would wind down the drone project and promised a more transparent approach to similar work in the future.

Now, a number of Google workers are voicing concerns that the Mountain View, California-based search giant is continuing to deploy cutting-edge AI technology to the Pentagon and law enforcement customers.

Rather than directly engage in controversial contracts, Google is providing financial, technological, and engineering support to a range of startups through Gradient Ventures, a venture capital arm that Google launched in 2017 to nurture companies deploying AI in a range of fields. Google promises interested firms access to its own AI training data and sometimes places Google engineers within the companies as a resource. The firms it supports include companies that provide AI technology to military and law enforcement.

Cogniac, one of the firms in the Gradient Ventures portfolio, is providing image-processing software to the U.S. Army to quickly analyze battlefield drone data and to an Arizona county sheriff’s department to help identify when individuals cross the U.S.-Mexico border.

CAPE Productions, another Gradient Ventures-backed startup, has established itself as a premier AI-powered software solution to provide law enforcement with the ability to fly fleets of drones to conduct aerial surveillance over American cities and respond to crimes and other emergency calls.

Google employees — who spoke anonymously, fearing reprisal — said the work embraced by Gradient Ventures startups appears to circumvent the commitment by their employer to carefully vet and disclose military and law enforcement applications of AI technology.

The startups receive more than financial support from Google. Google employees shared internal company emails with The Intercept stating that all firms backed by Gradient Ventures “will be able to access vast swaths of training data that Google has accumulated to train their own AI systems” and “will have the opportunity to receive advanced AI trainings from Google.”

Senior computer engineers from Google will rotate into firms backed by Gradient Ventures, the emails noted, to provide “the kind of hand-holding support that we think is helpful in growing an AI ecosystem.”
A spokesperson from Google downplayed the investments.

“Gradient Ventures is a venture fund within Google that makes minority investments (between $1-10M) in early-stage AI companies,” the spokesperson wrote. “In some cases, portfolio companies have the opportunity to work with Google employees who advise them on a variety of areas applicable to early startups, from machine learning techniques to website design.”

The spokesperson noted that Gradient Ventures portfolio firms receive routine access to publicly available data tools.

Cogniac, the spokesperson further noted, does not currently have a Google engineer assigned to the firm. The spokesperson further stated that while “we are not involved in the day-to-day operations of portfolio companies,” Google’s Gradient Ventures “adheres to Google’s AI Principles when making investments,” a reference to a statement of ethics released by CEO Sundar Pichai in July of last year.

Earlier this year, while formally announcing the end of Google’s involvement with Project Maven, Kent Walker, Google’s senior vice president for global affairs, had suggested that future military endeavors could still be part of the company’s future. “We continue to explore work across the public sector, including the military, in a wide range of areas, such as cybersecurity, search and rescue, training and health care, in ways consistent with our AI Principles,” Walker wrote.

Cogniac’s work on behalf of the Army was detailed two years ago in a story from UAS Weekly, a trade publication for the drone industry, which noted that the firm’s technology had been successfully used in combat exercises to analyze images from small battlefield drones and identify enemy combatants.

Cogniac is also one of several firms competing to provide a so-called virtual fence at the border to identify and apprehend individuals. The company has reportedly participated in trials with U.S. Customs and Border Protection to test its ability to detect individuals engaged in unauthorized border crossing.
Arizona’s Cochise County Sheriff’s Office has also contracted with Cogniac to analyze footage from a number of surveillance cameras focused on activity along the border. “Cogniac demonstrated the ability to train the system recognize threats and filter out false alarms to the level acceptable to the Department,” the county reported on its website explaining the contract.

CAPE similarly provides AI-powered software to analyze drone images. The California town of Chula Vista has used the company’s “Aerial Telepresence” platform to help the local police department respond to 911 calls. According to CAPE, the pilot drone program, using CAPE’s software, has helped lead to 21 arrests.

“Gradient Ventures is an investor in Cogniac. I can’t comment any other aspect or terms of their investment,” Bill Kish, CEO of Cogniac, wrote in an email. CAPE Productions did not respond to a request for comment.

CAPE was co-founded by Thomas Finsterbusch, a former software engineer for the Google research lab known as X, which researches breakthrough technologies. Finsterbusch is now a partner at Gradient Ventures. The venture capital firm’s entire advisory board is composed of Google executives and heads of Google subsidiaries.

The investment arm, however, provides at least the appearance of distance while leaving a pathway for the company to continue exploring military and national security uses for machine learning technology.

The post Google Continues Investments in Military and Police AI Technology Through Venture Capital Arm appeared first on The Intercept.