Google CEO Hammered by Members of Congress on China Censorship Plan

Published on Wed, 12/12/2018 - 9:25am

Google CEO Sundar Pichai came under fire from lawmakers on Tuesday over the company’s secretive plan to launch a censored search engine in China.

During a hearing held by the House Judiciary Committee, Pichai faced sustained questions over the China plan, known as Dragonfly, which would blacklist broad categories of information about democracy, human rights, and peaceful protest.

The hearing began with an opening statement from Rep. Kevin McCarthy, R-Calif., who said launching a censored search engine in China would “strengthen China’s system of surveillance and repression.” McCarthy questioned whether it was the role of American companies to be “instruments of freedom or instruments of control.”

Pichai read prepared remarks, stating “even as we expand into new markets, we never forget our American roots.” He added: “I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests.”

The lawmakers questioned Pichai on a broad variety of subjects. Several Republicans on the committee complained that Google displayed too many negative stories about them in its search results, and claimed that there was “bias against conservatives” on the platform. They also asked about recent revelations of data leaks affecting millions of Google users, Android location tracking, and Google’s work to combat white supremacist content on YouTube.

It was not until Pichai began to face questions on China that he looked, at times, uncomfortable.

Rep. David Cicilline, D-R.I., told Pichai that the Dragonfly plan seemed to be “completely inconsistent” with Google’s recently launched artificial intelligence principles, which state that the company will not “design or deploy” technologies whose purpose “contravenes widely accepted principles of international law and human rights.”

“It’s hard to imagine you could operate in the Chinese market under the current government framework and maintain a commitment to universal values, such as freedom of expression and personal privacy,” Cicilline said.

McCarthy questioned whether it was the role of American companies to be “instruments of freedom or instruments of control.”

Pichai repeatedly insisted that Dragonfly was an “internal effort” and that Google currently had “no plans to launch a search service in China.” Asked to confirm that the company would not launch “a tool for surveillance and censorship in China,” Pichai declined to answer, instead saying that he was committed to “providing users with information, and so we always — we think it’s ideal to explore possibilities. … We’ll be very thoughtful, and we will engage widely as we make progress.”

Pichai’s claim that the company has no plan to launch the search engine in China contradicted a leaked transcript from a private meeting inside the company, in which Google’s search chief Ben Gomes discussed an aim to roll out the service between January and April 2019. For Pichai’s statement to Congress to be truthful, the company would have to have put the brakes on Dragonfly since The Intercept first exposed the project in August.

During a separate exchange, Rep. Keith Rothfus, R-Pa., probed Pichai further on China. Rothfus asked Pichai how many months the company had been working to develop the censored search engine and how many employees were involved. Pichai seemed caught off guard and stumbled with his response. “We have had the project underway for a while,” he said, admitting that “at one point, we had over 100 people on it.” (According to sources who worked on Dragonfly, there have been closer to 300 people developing the plan.)

Rep. Tom Marino, R-Pa., quizzed Pichai on what user information the company would share with Chinese authorities. Pichai did not directly answer, stating, “We would look at what the conditions are to operate … [and we would] explore a wide range of possibilities.” Pichai said that he would be “transparent” with lawmakers on the company’s China plan going forward. He did not acknowledge that Dragonfly would still be secret — and he would not have been discussing it in Congress — had it not been for the whistleblowers inside the company who decided to leak information about the project.

At one point during the hearing, the proceedings were interrupted by a protester who entered the room carrying a placard that showed the Google logo altered to look like a Chinese flag. The man was swiftly removed by Capitol Police. A handful of Tibetan and Uighur activists gathered in the hall outside the hearing, where they held a banner that stated “stop Google censorship.”

“We are protesting Google CEO Sundar Pichai to express our grave concern over Google’s plan to launch Project Dragonfly, a censored search app in China which will help Chinese government’s brutal human right abuses,” said Dorjee Tseten, executive director of Students for a Free Tibet. “We strongly urge Google to immediately drop Project Dragonfly. With this project, Google is serving to legitimize the repressive regime of the Chinese government and authorities to engage in censorship and surveillance.”

Earlier on Tuesday, more than 60 leading human rights groups sent a letter to Pichai calling on him to cancel the Dragonfly project. If the plan proceeds, the groups wrote, “there is a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online, making the company complicit in human rights violations.”

Rights Groups Turn Up Pressure on Google Over China Censorship Ahead of Congressional Hearing

Published on Tue, 11/12/2018 - 11:00am

Google is facing a renewed wave of criticism from human rights groups over its controversial plan to launch a censored search engine in China.

A coalition of more than 60 leading groups from across the world has joined forces to blast the internet giant for failing to address concerns about the secretive China project, known as Dragonfly. They come from countries including China, the United States, the United Kingdom, Argentina, Bolivia, Chile, France, Kazakhstan, Mexico, Norway, Pakistan, Palestine, Romania, Syria, Tibet, and Vietnam.

A prototype for the censored search engine was designed to blacklist broad categories of information about human rights, democracy, and peaceful protest. It would link Chinese users’ searches to their personal cellphone number and store people’s search records inside the data centers of a Chinese company in Beijing or Shanghai, which would be accessible to China’s authoritarian Communist Party government.

If the plan proceeds, “there is a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online, making the company complicit in human rights violations,” the human rights groups wrote in a letter that will be sent to Google’s leadership on Tuesday.

The letter highlights mounting anger and frustration within the human rights community that Google has rebuffed concerns about Dragonfly, concerns that have been widely raised both inside and outside the company since The Intercept first revealed the plan in August. The groups say in their 900-word missive that Google’s China strategy is “reckless,” piling pressure on CEO Sundar Pichai, who is due to appear Tuesday before the House Judiciary Committee, where he will likely face questions on Dragonfly.

The groups behind the letter include Amnesty International, the Electronic Frontier Foundation, Access Now, Human Rights Watch, Reporters Without Borders, the Center for Democracy and Technology, Human Rights in China, the International Campaign for Tibet, and the World Uyghur Congress. They have been joined in their campaign by several high-profile individual signatories, such as former National Security Agency contractor Edward Snowden and Google’s former head of free expression in Asia, Lokman Tsui.

In late August, some of the same human rights groups had contacted Google demanding answers about the censored search plan. Google’s policy chief Kent Walker responded to them in October, the groups revealed on Monday. In a two-page reply, Walker appeared to make the case for launching the search engine, saying that “providing access to information to people around the world is central to our mission.”

Walker did not address specific human rights questions on Dragonfly, instead claiming that the company is “still not close to launching such a product and whether we would or could do so remains unclear.” That claim contradicted a leaked transcript of remarks by Google search chief Ben Gomes, who stated that the company aimed to launch the search engine between January and April 2019 and instructed employees to have it ready to be “brought off the shelf and quickly deployed.”

Walker agreed in his letter that Google would “confer” with human rights groups ahead of launching any search product in China, and said that the company would “carefully consider” feedback received. “While recognizing our obligations under the law in each jurisdiction in which we operate, we also remain committed to promoting access to information as well as protecting the rights to freedom of expression and privacy for our users globally,” Walker wrote.

“The company may knowingly compromise its commitments to human rights and freedom of expression.”

The human rights groups were left unsatisfied with Walker’s comments. They wrote in their new letter, to be sent Tuesday, that he “failed to address the serious concerns” they had raised. “Instead of addressing the substantive issues,” they wrote, Walker’s response “only heightens our fear that the company may knowingly compromise its commitments to human rights and freedom of expression, in exchange for access to the Chinese search market.”

The groups added: “We welcome that Google has confirmed the company ‘takes seriously’ its responsibility to respect human rights. However, the company has so far failed to explain how it reconciles that responsibility with the company’s decision to design a product purpose-built to undermine the rights to freedom of expression and privacy.”

Separately, former Google research scientist Jack Poulson, who quit the company in protest over Dragonfly, has teamed up with Chinese, Tibetan, and Uighur rights groups to launch an anti-Dragonfly campaign. In a press conference on Monday, Poulson said it was “time for Google to uphold its own principles and publicly end this regressive experiment.”

Teng Biao, a Chinese human rights lawyer who said he had been previously detained and tortured by the country’s authorities for his work, recalled how he had celebrated in 2010 when Google decided to pull its search services out of China, with the company citing concerns about the Communist Party’s censorship and targeting of activists. Teng said he had visited Google headquarters in Beijing and laid flowers outside the company’s doors to thank the internet giant for its decision. He was dismayed by the company’s apparent reversal on its anti-censorship stance, he said, and called on “every one of us to stop Google from being an accomplice in China’s digital totalitarianism.”

Lhadon Tethong, director of the Tibet Action Institute, said there is currently a “crisis of repression unfolding across China and territories it controls.” Considering this, “it is shocking to know that Google is planning to return to China and has been building a tool that will help the Chinese authorities engage in censorship and surveillance,” she said. “Google should be using its incredible wealth, talent, and resources to work with us to find solutions to lift people up and help ease their suffering — not assisting the Chinese government to keep people in chains.”

Google did not respond to a request for comment.

Democracy Now on the Crimes and Mass Murders of President George H.W. Bush

The Friday before last, former president George H.W. Bush, the father of former president George ‘Dubya’ Bush, finally fell off his perch at the age of 94. Like Monty Python’s parrot, he had shuffled off this mortal coil and joined the choir invisible. He was an ex-president, and well and truly. He was buried with due state honours last Wednesday.

And the press and media fell over themselves to praise him to the rafters. If you believed them, you would have thought that America had lost a statesman of the stature of the ancient Athenian politico, Pericles. Or that he combined in himself the wisdom of Thomas Jefferson, Madison and the rest of the Founding Fathers.

He wasn’t. He was the successor to Ronald Reagan and a former head of the CIA, and had been involved with shady dealings, dirty proxy wars, and invasions in Latin America and Iraq that had cost thousands their lives, while thousands of others were tortured by the dictators he supported. And domestically he was responsible for racist electioneering and a highly discriminatory drugs policy that has resulted in the massively disproportionate incarceration of Black American men.

Mehdi Hasan on George Bush Senior

He was a disgusting creature, and Mehdi Hasan wrote a piece in the Intercept describing just how disgusting and reprehensible he was. In the piece below, he also appeared on Democracy Now! to talk to host Amy Goodman about Bush senior and his legacy of corruption, murder and terror.

Bush was elected president in 1988. He was a former director of the CIA, and served from 1981-89 as Reagan’s vice-president. Despite calling for a kinder, gentler politics when he was vice-president, Bush refused to tackle climate change, saying that the American way of life was not up for negotiation, and defended future Supreme Court justice Clarence Thomas even after he was accused of sexual harassment. He was responsible for launching the first Gulf War in Iraq in 1991. During the war, the US air force deliberately bombed an air raid shelter in Baghdad, killing 408 civilians. The relatives of some of those killed tried to sue Bush and his defense secretary, Dick Cheney, for war crimes. The attack on Iraq continued after the end of the war with a devastating sanctions regime imposed by Bush, and then his son’s invasion in 2003.

The Invasion of Panama

In December 1989, Bush sent troops into Panama to arrest the country’s dictator, General Manuel Noriega, on charges of drug trafficking. Noriega had previously been a close ally, and had been on the CIA’s payroll. 24,000 troops were sent into the country to topple Noriega, whose military was smaller than the New York police department. 3,000 Panamanians died in the attack. In November 2018, the Inter-American Commission on Human Rights called on Washington to pay reparations for what it considered to be an illegal invasion.

Pardoning the Iran-Contra Conspirators

As one of his last acts in office, Bush also gave pardons to six officials involved in the Iran-Contra scandal. This was a secret operation in which Reagan sold arms to Iran in order to fund the Contras in Nicaragua, despite Congress banning the administration from funding them. Bush was never called to account for his part in it, claiming he was ‘out of the loop’, despite the testimony of others and a mass of documents suggesting otherwise.

The Collapse of Communism and Neoliberalism

Bush’s period in office coincided with the collapse of Communism. In the period afterwards, which Bush termed the New World Order, he was instrumental in spreading neoliberalism and in the establishment of NAFTA and the WTO as frameworks for international trade.

Hasan not only wrote for the Intercept, he also hosted their Deconstructed podcast, as well as a show, Up Front, on Al-Jazeera English.

The Media’s Praise of Bush

Goodman and Hasan state that there is a natural reluctance against speaking ill of the dead, but they aren’t going to speak ill of Bush, just critically examine his career and legacy. Hasan states that as a Brit living in Washington he’s amazed at the media hagiography of Bush. He recognizes that Bush had many creditable achievements, like standing up to the NRA and AIPAC, but condemns as absurd, and a dereliction of duty, the way the media ignored the rest of Bush’s legacy, especially where it involved the deaths of thousands of people. He states that Bush is being described as the ‘anti-Trump’, but he did many things that were similar to the Orange Buffoon, such as the pardoning of Caspar Weinberger on the eve of his trial, which the independent special counsel at the time said was misconduct that covered up the crime. And everyone’s upset when Trump says he might pardon Paul Manafort. Bush should be held to the same account. It doesn’t matter that he was nicer than Trump, and less aggressive than his son; he still has a lot to answer for.

The Iran-Contra Scandal

Goodman gets Hasan to explain the Iran-Contra scandal, in which Reagan sold arms to Iran, then an enemy state, to fund a proxy war against a ‘Communist’ state in Central America despite a congressional ban. He states that it was a huge scandal. Reagan left office without being punished for it, but there was a Special Counsel charged with looking into it, led by Lawrence Walsh, a deputy attorney general under Eisenhower. When he looked into it, he was met with resistance by Reagan’s successor, Bush. And now we’re being told how honest he was. But at the time Bush refused to hand over his diary, refused to cooperate with the Special Counsel or give interviews, and pardoned the six top neocons responsible. The Special Counsel’s report is online and can be read, and it says that Bush did not cooperate, and that this was the first time a president pardoned someone in a trial in which he himself would have to testify. He states that Bush and Trump were more similar in their obstruction of justice than some of the media would have us believe.

Iraq Invasion

They then move on to the Iraq invasion, and play the speech in which Bush states that he has begun bombing to remove Saddam Hussein’s nuclear bomb potential. It was done now because ‘the world could wait no longer’. Because of Bush’s attack on Iraq, his death was marked by flags at half-mast in Kuwait as well as Washington. Hasan states that Hussein invaded Kuwait illegally, and that it was a brutal occupation. But Bush told the country that the invasion came without any warning or provocation, even though it came after the American ambassador to Iraq, April Glaspie, had told Hussein that America had no opinion on any border dispute with Kuwait. Many historians believe Hussein interpreted this as a green light to invade.

Bush also told the world that America needed to go into Iraq to protect Saudi Arabia, as there were Iraqi troops massing on the border of that nation. This was another lie. One reporter bought satellite photographs of the border and found there were no troops there. It was a lie, just as his son lied when he invaded twelve years later. As for the bombing of the Amariyya air raid shelter, which was condemned by Human Rights Watch, this was a crime because the Americans had been told it contained civilians. Bush also bombed the civilian infrastructure, like power stations, food processing plants, and flour mills. This was done deliberately. Bush’s administration told the Washington Post that it was done so that after the war they would have leverage over the Iraqi government, which would have to go begging for international assistance. And this was succeeded by punitive sanctions that killed hundreds of thousands of Iraqi children. It all began on Bush’s watch.

Racism, Willie Horton and Bush’s Election Campaign

They then discuss his 1988 election campaign, and his advert attacking his opponent, Michael Dukakis. Dukakis was attacked for having given a weekend pass from prison to Willie Horton, a Black con serving time for murder, who then went and kidnapped a young couple, stabbing the man and repeatedly raping the woman. This was contrasted with Bush, who wanted the death penalty for first degree murder. The advert was created by Lee Atwater and Roger Ailes; Atwater later apologized for it on his deathbed. This advert is still studied in journalism classes, and until Trump’s ad featuring the migrant caravan appeared, it was considered the most racist advert in modern American political history. Atwater said that they were going to talk about Horton so much, people would think he was Dukakis’ running mate. Bush approved of this, and talked about Horton at press conferences. And unlike Atwater, he never apologized. Roger Stone, whom Hasan describes as one of the most vile political operatives of our time, an advisor to Donald Trump and Nixon, actually walked up to Atwater and told him he would regret it, as it was clearly a racist ad. When even Roger Stone says that it’s a bad idea, you know you’ve gone too far. But the press has been saying how decent Bush was. Hasan states he has only two words for that: Willie Horton.

In fact, weekend passes for prison inmates were a policy in many states, including California, where Ronald Reagan had signed one. Hasan calls the campaign what it was: an attempt to stoke up racial fears and division by telling the public that Dukakis was about to unleash a horde of Black murderers, who would kill and rape them. And ironically the people who were praising Bush after his death were the same people attacking Trump a week earlier for the migrant caravan fearmongering. It reminded everyone of the Willie Horton campaign, but for some reason people didn’t make the connection between the two.

Racism and the War on Drugs

Hasan also makes the point that just as Bush senior had no problem creating a racist advert, so he had no problem creating a racist drug war. They then move on to discussing Bush’s televised drugs speech, in which he waved a bag of crack cocaine he claimed had been bought in a park just a few metres from the White House. But the Washington Post later found out that it had all been staged. A drug dealer had been caught selling crack in Lafayette Square, but he had been lured there by undercover Federal agents, who told him to sell it there. The drug dealer even had to be told the address of the White House, so he could find it. It was a nasty, cynical stunt, which led to an increase in spending of $1.5 billion on more jails and prosecutors to combat the drugs problem. And this led to the mass incarceration of young Black men, and thousands of innocent lives lost at home and abroad in the drug wars. And today Republican politicians like Chris Christie will state that this is a failed and racist drug war.

This was the first in a series of programmes honouring the dead – which meant those killed by Bush, not Bush himself. The next programme in the series was on what Bush did in Panama.

Dark Rock and Bush: The Sisters of Mercy’s ‘Vision Thing’

I’ve a suspicion that the track ‘Vision Thing’ by the Sisters of Mercy is at least partly about George Bush senior. The Sisters are a dark rock band. Many of front man Andrew Eldritch’s lyrics are highly political, bitterly attacking American imperialism. ‘Dominion/Mother Russia’ was about acid rain, the fall of Communism, and American imperialism and its idiocy. Eldritch also wanted one of their pop videos to feature two American servicemen in a cage being taunted by Arabs, but this was naturally rejected because of the bombing of American servicemen in Lebanon. Another song on the same album, ‘Dr Jeep’, is about the Vietnam War.

‘Vision Thing’ seems to take its title from one of Bush’s lines, where he said, if I remember correctly, ‘I don’t have the vision thing.’ The song talks about ‘another black hole in the killing zone’, and ‘one million points of light’. It also has lines about ‘the prettiest s**t in Panama’ and ‘Take back what I paid/ to another M*****f****r in a motorcade’. These are vicious, bitter, angry lyrics. And if they are about Bush senior, then it’s no wonder.

Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”

Published on Fri, 07/12/2018 - 3:17am

Facial recognition has quickly shifted from techno-novelty to fact of life for many, with millions around the world at least willing to put up with having their faces scanned by software at the airport, on their iPhones, or in Facebook’s server farms. But researchers at New York University’s AI Now Institute have issued a strong warning against not only ubiquitous facial recognition, but its more sinister cousin: so-called affect recognition, technology that claims it can find hidden meaning in the shape of your nose, the contours of your mouth, and the way you smile. If that sounds like something dredged up from the 19th century, that’s because it sort of is.

AI Now’s 2018 report is a 56-page record of how “artificial intelligence” — an umbrella term that includes a myriad of both scientific attempts to simulate human judgment and marketing nonsense — continues to spread without oversight, regulation, or meaningful ethical scrutiny. The report covers a wide expanse of uses and abuses, including instances of racial discrimination, police surveillance, and how trade secrecy laws can hide biased code from an AI-surveilled public. But AI Now, which was established last year to grapple with the social implications of artificial intelligence, expresses in the document particular dread over affect recognition, “a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces.” The thought of your boss watching you through a camera that uses machine learning to constantly assess your mental state is bad enough, while the prospect of police using “affect recognition” to deduce your future criminality based on “micro-expressions” is exponentially worse.

“The ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims.”

That’s because “affect recognition,” the report explains, is little more than the computerization of physiognomy, a thoroughly disgraced and debunked strain of pseudoscience from another era that claimed a person’s character could be discerned from their bodies — and their faces, in particular. There was no reason to believe this was true in the 1880s, when figures like the discredited Italian criminologist Cesare Lombroso promoted the theory, and there’s even less reason to believe it today. Still, it’s an attractive idea, despite its lack of grounding in any science, and data-centric firms have leapt at the opportunity to not only put names to faces, but to ascribe entire behavior patterns and predictions to some invisible relationship between your eyebrow and nose that can only be deciphered through the eye of a computer. Two years ago, students at a Shanghai university published a report detailing what they claimed to be a machine learning method for determining criminality based on facial features alone. The paper was widely criticized, including by AI Now’s Kate Crawford, who told The Intercept it constituted “literal phrenology … just using modern tools of supervised machine learning instead of calipers.”
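
The statistical trap behind such claims is easy to reproduce. In the purely hypothetical sketch below (random vectors standing in for face images, coin-flip labels standing in for “criminality”; no real system or dataset is involved), a model with far more features than training samples “learns” labels that are pure noise, scoring perfectly on its training data while generalizing no better than chance:

```python
# With high-dimensional inputs and few samples, a classifier can fit
# labels that are pure noise -- the statistical failure mode behind
# "criminality from faces" claims. Entirely synthetic illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
faces = rng.random((100, 4096))    # 100 random 64x64 "face" feature vectors
labels = rng.integers(0, 2, 100)   # coin-flip "criminality" labels

model = LogisticRegression(max_iter=1000)
model.fit(faces, labels)
print("training accuracy:", model.score(faces, labels))       # ~1.0
print("cross-validated accuracy:",
      cross_val_score(LogisticRegression(max_iter=1000),
                      faces, labels, cv=5).mean())            # ~0.5
```

This is the sense in which machine vision can always “find correlations”: given enough dimensions, it will find them whether or not they exist.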

Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads. “The idea that AI systems might be able to tell us what a student, a customer, or a criminal suspect is really feeling or what type of person they intrinsically are is proving attractive to both corporations and governments, even though the scientific justifications for such claims are highly questionable, and the history of their discriminatory purposes well-documented.”

In an email to The Intercept, Crawford, AI Now’s co-founder and distinguished research professor at NYU, along with Meredith Whittaker, co-founder of AI Now and a distinguished research scientist at NYU, explained why affect recognition is more worrying today than ever, referring to two companies that use appearances to draw big conclusions about people. “From Faception claiming they can ‘detect’ if someone is a terrorist from their face to HireVue mass-recording job applicants to predict if they will be a good employee based on their facial ‘micro-expressions,’ the ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims,” said Crawford.

Faception has purported to determine from appearance if someone is “psychologically unbalanced,” anxious, or charismatic, while HireVue has ranked job applicants on the same basis.

As with any computerized system of automatic, invisible judgment and decision-making, the potential to be wrongly classified, flagged, or tagged is immense with affect recognition, particularly given its thin scientific basis: “How would a person profiled by these systems contest the result?” Crawford added. “What happens when we rely on black-boxed AI systems to judge the ‘interior life’ or worthiness of human beings? Some of these products cite deeply controversial theories that are long disputed in the psychological literature, but are being treated by AI startups as fact.”

What’s worse than bad science passing judgment on anyone within camera range is that the algorithms making these decisions are kept private by the firms that develop them, safe from rigorous scrutiny behind a veil of trade secrecy. AI Now’s Whittaker singles out corporate secrecy as confounding the already problematic practices of affect recognition: “Because most of these technologies are being developed by private companies, which operate under corporate secrecy laws, our report makes a strong recommendation for protections for ethical whistleblowers within these companies.” Such whistleblowing will continue to be crucial, wrote Whittaker, because so many data firms treat privacy and transparency as a liability, rather than a virtue: “The justifications vary, but mostly [AI developers] disclaim all responsibility and say it’s up to the customers to decide what to do with it.” Pseudoscience paired with state-of-the-art computer engineering and placed in a void of accountability. What could go wrong?

Here’s Facebook’s Former “Privacy Sherpa” Discussing How to Harm Your Facebook Privacy

Published on Thu, 06/12/2018 - 11:20am

In 2015, rising star, Stanford University graduate, winner of the 13th season of “Survivor,” and Facebook executive Yul Kwon was profiled by the news outlet Fusion, which described him as “the guy standing between Facebook and its next privacy disaster,” guiding the company’s engineers through the dicey territory of personal data collection. Kwon described himself in the piece as a “privacy sherpa.” But the day it was published, Kwon was apparently chatting with other Facebook staffers about how the company could vacuum up the call logs of its users without the Android operating system getting in the way by asking the user for specific permission, according to confidential Facebook documents released today by the British Parliament.

“This would allow us to upgrade users without subjecting them to an Android permissions dialog.”

The document, part of a larger 250-page parliamentary trove, shows what appears to be a copied-and-pasted recap of an internal chat conversation between various Facebook staffers and Kwon, who was then the company’s deputy chief privacy officer and is currently working as a product management director, according to his LinkedIn profile.

The conversation centered on an internal push to change which data Facebook’s Android app had access to: to grant the software the ability to record a user’s text messages and call history, to interact with Bluetooth beacons installed by physical stores, and to offer better customized friend suggestions and news feed rankings. This would be a momentous decision for any company, to say nothing of one with Facebook’s privacy track record and reputation, even in 2015, of sprinting through ethical minefields. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it,” Michael LeBeau, a Facebook product manager, is quoted in the document as saying of the change.

Crucially, LeBeau commented, according to the document, that such a privacy change would require Android users to essentially opt in: Android would present them with a permissions dialog soliciting their approval to share call logs when they upgraded to a version of the app that collected the logs and texts. Furthermore, the Facebook app itself would prompt users to opt in to the feature, through a notification referred to by LeBeau as “an in-app opt-in NUX,” or new user experience. The Android dialog was especially problematic; such permission dialogs “tank upgrade rates,” LeBeau stated.

But Kwon appeared to later suggest that the company’s engineers might be able to upgrade users to the log-collecting version of the app without any such nagging from the phone’s operating system. He also indicated that the plan to obtain text messages had been dropped, according to the document. “Based on [the growth team’s] initial testing, it seems this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,” he stated. Users would have to click to effect the upgrade, he added, but, he reiterated, “no permissions dialog screen.”

It’s not clear if Kwon’s comment about “no permissions dialog screen” applied to the opt-in notification within the Facebook app. But even if the Facebook app still sought permission to share call logs, such in-app notices are generally designed expressly to get the user to consent and are easy to miss or misinterpret. Android users rely on standard, clear dialogs from the operating system to inform them of serious changes in privacy. There’s good reason Facebook would want to avoid “subjecting” its users to a screen displaying exactly what they’re about to hand over to the company.

It’s not clear how this specific discussion was resolved, but Facebook did eventually begin obtaining call logs and text messages from users of its Messenger and Facebook Lite apps for Android. This proved highly controversial when revealed in press accounts and by individuals posting on Twitter after receiving data Facebook had collected on them; Facebook insisted it had obtained permission for the phone log and text message collection, but some users and journalists said it had not.

It’s Facebook’s corporate stance that the documents released by Parliament “are presented in a way that is very misleading without additional context.” The Intercept has asked both Facebook and Kwon personally about what context is missing here, if any, and will update with their response.

Imprisoned Hacktivist Jeremy Hammond Bumped a Guard With a Door — and Got Thrown in Solitary Confinement

Published on Wed, 05/12/2018 - 7:55am

Last month, a famed hacker who has been serving a 10-year prison sentence since 2012 was accused by a guard at a federal detention center of “minor assault,” landing the so-called hacktivist in solitary confinement, according to advocates. The guard at Michigan’s Federal Correctional Institution, Milan, made the accusation against Jeremy Hammond — the activist associated with hacking groups Anonymous and LulzSec and best known for hacking private intelligence firm Stratfor and leaking documents to WikiLeaks — on either November 19 or 20. Hammond has been held in solitary confinement ever since, according to the Jeremy Hammond Support Network.

The guard claims that Hammond hit him with a door, “stood his ground,” and pushed his shoulder into the guard. The head of Hammond’s support network said the prison guard’s account is overblown. “Jeremy says that he was exiting his unit through a door that has no windows and could not see the guard on the other side, and as he’s exiting, bumped the guard with the door,” Grace North told The Intercept. “The guard immediately grabbed Jeremy and threw him up against the wall and dragged him down to solitary, with no handcuffs, without calling for backup, which is against prison protocol, and Jeremy has been there ever since.”

North’s version of events also portrays the guard as overly aggressive: After the guard was hit with the door, North said, he asked Hammond if he “wanted to go.”

“It’s absurd to classify being bumped with a door as assault and to think that an appropriate response is to subject the person who bumped you to torture.”

Hammond, who pleaded guilty to violating one count of the Computer Fraud and Abuse Act in a noncooperating plea deal, had never been part of any physical altercation since his arrest in Chicago on March 5, 2012. In 2013, Hammond pleaded guilty to hacking the private intelligence firm Stratfor Global Intelligence and other targets. The Stratfor hack led to numerous revelations, including that the firm spied on activists for major corporations on several occasions.

Hammond’s run-in with the guard could have severe implications for his time in prison, disrupting his studies toward a higher-education degree and potentially precipitating a move from the minimum-security Milan facility to a medium-security prison.

“It’s absurd to classify being bumped with a door as assault and to think that an appropriate response is to subject the person who bumped you to torture,” said North. “This is yet another example of the wildly unchecked systems of power and abuse that are endemic to American prisons, and illustrate the need not just for reform, but the complete abolition of the entire prison-industrial complex.”

This week will mark the start of Hammond’s third week in a so-called segregated housing unit — more commonly known as solitary confinement. The United Nations has said that confinement of such length could be considered torture. “Considering the severe mental pain or suffering solitary confinement may cause,” U.N. Special Rapporteur on Torture Juan Méndez said in 2011, “it can amount to torture or cruel, inhuman, or degrading treatment or punishment.” He added that prolonged isolation for more than 15 days — around the length of Hammond’s current stint in solitary — should be absolutely prohibited because scientific studies have established that it can lead to lasting mental damage.

The charge that led to Hammond’s move to solitary confinement was upheld in a disciplinary hearing last week, which Hammond attended over the phone because he was barred from attending in person. North said that the “minor assault” charge against him is a disciplinary matter — as opposed to criminal — so Hammond was not allowed to have a lawyer. “He’s not entitled to representation of any kind,” North said. North added that Hammond was left unaware whether any evidence against him was presented at the hearing, such as video of the incident. “It’s a prison, obviously there’s video of every corner of the building,” North said. “So we’re not aware if there was video shown, or if it was just the word of the guard.” The recommendation from the hearing is to transfer Hammond from FCI Milan, a low-security federal prison in Michigan, to a medium-security federal prison, according to North. (A spokesperson for FCI Milan declined to comment, citing the Privacy Act of 1974, which prohibits the facility from releasing information about any incarcerated people without their written permission.)

The “minor assault” charge is severely disrupting Hammond’s life in prison. Hammond has been taking college classes through a local community college that has a prison education program and was expecting to earn an associate’s degree in general studies next semester, making him part of the first class of incarcerated people to receive a college degree through the program. Since he’s been in solitary confinement, however, he has missed his classes, been unable to turn in assignments, and is unable to take his finals. “He greatly enjoys his studies, he greatly enjoys the classes he’s been taking,” North said. “Most prisons don’t offer the prison education program. Milan is one of them. It would almost certainly be guaranteed that whatever prison he was transferred to would not offer the program that Milan offers.”

In 2004, while Hammond was a freshman at the University of Illinois at Chicago on a full scholarship, he hacked into the website of the computer science department, told the department about it, and offered to help fix the vulnerability. In the cybersecurity industry, this is called responsible disclosure, but university administrators expelled him for it, and he never finished his degree.

If he gets transferred to a medium-security prison, Hammond will enjoy fewer freedoms than he currently does at Milan. He’ll also be farther from friends and family who right now are able to visit him frequently.

In 2011, hacktivists affiliated with Anonymous and LulzSec, including Hammond and FBI informant Hector Monsegur, also known as “Sabu,” hacked Stratfor and leaked seven and a half years of the company’s emails to WikiLeaks. At the time, Stratfor — which describes itself as “the world’s leading geopolitical intelligence platform” — had clients ranging from military agencies and defense contractors to global corporations that wanted to spy on activists.

Among other things, the hack and leak exposed how Dow Chemical hired Stratfor to spy on the culture-jamming activist group the Yes Men; how Coca-Cola, a sponsor of the 2010 Winter Olympics in Vancouver, Canada, hired the firm to spy on activists associated with animal rights organization PETA, worried that they might be planning direct action against the corporation during the games; and how the American Petroleum Institute, the U.S. oil and gas industry lobby group, hired Stratfor to spy on Pulitzer Prize-winning investigative journalist outfit ProPublica, which in 2008 broke the first news stories about the environmental and health risks posed by fracking.

Monsegur, who was often referred to as the leader of LulzSec, was secretly arrested by the FBI on June 7, 2011. Immediately after his arrest, he began working closely with the FBI as an informant, building a case against Hammond and the other hackers associated with LulzSec and Anonymous. With Monsegur’s help, the FBI was aware of — and helped fund and participate in — the hacking of Stratfor and other targets. Monsegur provided Hammond with an FBI-owned server to which he exfiltrated emails and documents during the Stratfor hack.

In a statement during his sentencing hearing, Hammond referred to his hacking as “acts of civil disobedience and direct action,” describing “an obligation to use my skills to expose and confront injustice and to bring the truth to light.” He says he had never heard of Stratfor until Monsegur — who was already an FBI informant at the time — brought it to his attention. “Why the FBI would introduce us to the hacker who found the initial vulnerability and allow this hack to continue remains a mystery,” he said at the sentencing.

Hammond is currently scheduled for release in February 2020.

Homeland Security Will Let Computers Predict Who Might Be a Terrorist on Your Plane — Just Don’t Ask How It Works

Published on Tue, 04/12/2018 - 5:47am

You’re rarely allowed to know exactly what’s keeping you safe. When you fly, you’re subject to secret rules, secret watchlists, hidden cameras, and other trappings of a plump, thriving surveillance culture. The Department of Homeland Security is now complicating the picture further by paying a private Virginia firm to build a software algorithm with the power to flag you as someone who might try to blow up the plane.

The new DHS program will give foreign airports around the world free software that teaches itself who the bad guys are, continuing society’s relentless swapping of human judgment for machine learning. DataRobot, a northern Virginia-based automated machine learning firm, won a contract from the department to develop “predictive models to enhance identification of high risk passengers” in software that should “make real-time prediction[s] with a reasonable response time” of less than one second, according to a technical overview that was written for potential contractors and reviewed by The Intercept. The contract assumes the software will produce false positives and requires that the terrorist-predicting algorithm’s accuracy should increase when confronted with such mistakes. DataRobot is currently testing the software, according to a DHS news release.

The contract also stipulates that the software’s predictions must be able to function “solely” using data gleaned from ticket records and demographics — criteria like origin airport, name, birthday, gender, and citizenship. The software can also draw from slightly more complex inputs, like the name of the associated travel agent, seat number, credit card information, and broader travel itinerary. The overview document describes a situation in which the software could “predict if a passenger or a group of passengers is intended to join the terrorist groups overseas, by looking at age, domestic address, destination and/or transit airports, route information (one-way or round trip), duration of the stay, and luggage information, etc., and comparing with known instances.”
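
Neither DHS nor DataRobot has disclosed how the models are built, but the overview’s description of the task — categorical ticket-record fields in, a sub-second risk score out — maps onto a routine supervised learning pipeline. The sketch below is a minimal illustration of that shape, not the actual system: the rows are synthetic, the “confirmed positive” labels are invented, and scikit-learn stands in for DataRobot’s proprietary tooling.

```python
# Illustrative sketch only: a classifier over the kinds of ticket-record
# fields the DHS overview names. All data and labels below are invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Fields named in the overview: origin airport, demographics, route
# type (one-way or round trip), duration of stay.
records = pd.DataFrame({
    "origin_airport": ["IST", "LHR", "JFK", "CDG", "IST", "LHR"],
    "citizenship":    ["TR", "GB", "US", "FR", "TR", "GB"],
    "route":          ["one-way", "round-trip", "round-trip",
                       "one-way", "one-way", "round-trip"],
    "stay_days":      [90, 7, 14, 60, 120, 10],
})
labels = [1, 0, 0, 0, 1, 0]  # invented "confirmed positive" dispositions

model = Pipeline([
    ("encode", ColumnTransformer(
        [("categorical", OneHotEncoder(handle_unknown="ignore"),
          ["origin_airport", "citizenship", "route"])],
        remainder="passthrough")),
    ("classify", LogisticRegression()),
])
model.fit(records, labels)

# Score one new passenger; this takes milliseconds, comfortably inside
# the contract's sub-second response-time requirement.
passenger = pd.DataFrame([{"origin_airport": "IST", "citizenship": "TR",
                           "route": "one-way", "stay_days": 100}])
print(model.predict_proba(passenger)[0, 1])  # "high risk" probability
```

Once fitted, scoring a single record is trivially fast, which is why the one-second latency requirement is the easy part; the hard questions are about where the labels come from, not the speed.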

DataRobot’s bread and butter is turning vast troves of raw data, which all modern businesses accumulate, into predictions of future action, which all modern companies desire. Its clients include Monsanto and the CIA’s venture capital arm, In-Q-Tel. But not all of DataRobot’s clients are looking to pad their revenues; DHS plans to integrate the code into an existing DHS offering called the Global Travel Assessment System, or GTAS, a toolchain that has been released as open source software and which is designed to make it easy for other countries to quickly implement no-fly lists like those used by the U.S.

According to the technical overview, DHS’s predictive software contract would “complement the GTAS rule engine and watch list matching features with predictive models to enhance identification of high risk passengers.” In other words, the government has decided that it’s time for the world to move beyond simply putting names on a list of bad people and then checking passengers against that list. After all, an advanced computer program can identify risky fliers faster than humans could ever dream of and can also operate around the clock, requiring nothing more than electricity. The extent to which GTAS is monitored by humans is unclear. The overview document implies a degree of autonomy, listing as a requirement that the software should “automatically augment Watch List data with confirmed ‘positive’ high risk passengers.”

The document does make repeated references to “targeting analysts” reviewing what the system spits out, but the underlying data-crunching appears to be almost entirely the purview of software, and it’s unknown what ability said analysts would have to check or challenge these predictions. In an email to The Intercept, Daniel Kahn Gillmor, a senior technologist with the American Civil Liberties Union, expressed concern with this lack of human touch: “Aside from the software developers and system administrators themselves (which no one yet knows how to automate away), the things that GTAS aims to do look like they could be run mostly ‘on autopilot’ if the purchasers/deployers choose to operate it in that manner.” But Gillmor cautioned that even including a human in the loop could be a red herring when it comes to accountability: “Even if such a high-quality human oversight scheme were in place by design in the GTAS software and contributed modules (I see no indication that it is), it’s free software, so such a constraint could be removed. Countries where labor is expensive (or controversial, or potentially corrupt, etc) might be tempted to simply edit out any requirement for human intervention before deployment.”

“Countries where labor is expensive might be tempted to simply edit out any requirement for human intervention.”

For the surveillance-averse, consider the following: Would you rather a group of government administrators, who meet in secret and are exempt from disclosure, decide who is unfit to fly? Or would it be better for a computer, accountable only to its own code, to make that call? It’s hard to feel comfortable with the very concept of profiling, a practice that so easily collapses into prejudice rather than vigilance. But at least with uniformed government employees doing the eyeballing, we know who to blame when, say, a woman in a headscarf is needlessly hassled, or a man with dark skin is pulled aside for an extra pat-down.

If you ask DHS, this is a categorical win-win for all parties involved. Foreign governments are able to enjoy a higher standard of security screening; the United States gains some measure of confidence about the millions of foreigners who enter the country each year; and passengers can drink their complimentary beverage knowing that the person next to them wasn’t flagged as a terrorist by DataRobot’s algorithm. But watchlists, among the most notorious features of post-9/11 national security mania, are of questionable efficacy and dubious legality. A 2014 report by The Intercept pegged the U.S. Terrorist Screening Database, an FBI data set from which the no-fly list is excerpted, at roughly 680,000 entries, including some 280,000 individuals with “no recognized terrorist group affiliation.” That same year, a U.S. district court judge ruled in favor of an ACLU lawsuit, declaring the no-fly list unconstitutional. The list could only be used again if the government improved the mechanism through which people could challenge their inclusion on it — a process that, at the very least, involved human government employees, convening and deliberating in secret.

[Figure: Diagram from a Department of Homeland Security technical document illustrating how GTAS might visualize a potential terrorist onboard during the screening process. Document: DHS]

But what if you’re one of the inevitable false positives? Machine learning and behavioral prediction is already widespread; The Intercept reported earlier this year that Facebook is selling advertisers on its ability to forecast and pre-empt your actions. The consequences of botching consumer surveillance are generally pretty low: If a marketing algorithm mistakenly predicts your interest in fly fishing where there is none, the false positive is an annoying waste of time. The stakes at the airport are orders of magnitude higher.

What happens when DHS’s crystal ball gets it wrong — when the machine creates a prediction with no basis in reality and an innocent person with no plans to “join a terrorist group overseas” is essentially criminally defamed by a robot? Civil liberties advocates not only worry that such false positives are likely, possessing a great potential to upend lives, but also question whether such a profoundly damning prediction is even technologically possible. According to DHS itself, its predictive software would have relatively little information upon which to base a prognosis of impending terrorism.

Even from such mundane data inputs, privacy watchdogs cautioned that prejudice and biases always follow — something only worsened under the auspices of self-teaching artificial intelligence. Faiza Patel, co-director of the Brennan Center’s Liberty and National Security Program, told The Intercept that giving predictive abilities to watchlist software will present only the veneer of impartiality. “Algorithms will both replicate biases and produce biased results,” Patel said, drawing a parallel to situations in which police are algorithmically allocated to “risky” neighborhoods based on racially biased crime data, a process that results in racially biased arrests and a checkmark for the computer. In a self-perpetuating bias machine like this, said Patel, “you have all the data that’s then affirming what the algorithm told you in the first place,” which creates “a kind of cycle of reinforcement just through the data that comes back.” What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

Indeed, DHS’s system stands to deliver a computerized turbocharge to the bias that is already endemic to the American watchlist system. The overview document for the Delphic profiling tool made repeated references to the fact that it will create a feedback loop of sorts. The new system “shall automatically augment Watch List data with confirmed ‘positive’ high risk passengers,” one page read, with quotation marks doing some very real work. The software’s predictive abilities “shall be able to improve over time as the system feeds actual disposition results, such as true and false positives,” said another section. Given that the existing watchlist framework has ensnared countless thousands of innocent people, the notion of “feeding” such “positives” into a machine that will then search even harder for that sort of person is downright dangerous. It also becomes absurd: When the criteria for who is “risky” and who isn’t are kept secret, it’s quite literally impossible for anyone on the outside to tell what is a false positive and what isn’t. Even for those without civil libertarian leanings, the notion of an automatic “bad guy” detector that uses a secret definition of “bad guy” and will learn to better spot “bad guys” with every “bad guy” it catches would be comical were it not endorsed by the federal government.
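
How quickly that reinforcement hardens is easy to simulate. The toy loop below (synthetic passengers with a single arbitrary trait, an invented flagging threshold; nothing is drawn from the real GTAS code) feeds each round’s flagged passengers back in as confirmed “positives,” exactly the augmentation step the overview mandates, and the model’s confidence in its initial skew climbs accordingly:

```python
# Toy simulation of the mandated feedback loop: flagged passengers are
# fed back in as confirmed "positives." Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 1)).astype(float)  # one arbitrary binary trait
# Seed labels with a slight skew: some trait-holders start as "positives."
y = ((X[:, 0] == 1) & (rng.random(500) < 0.2)).astype(int)

model = LogisticRegression()
for generation in range(5):
    model.fit(X, y)
    scores = model.predict_proba(X)[:, 1]
    flagged = scores >= np.quantile(scores, 0.9)  # flag the "riskiest" decile
    y = np.maximum(y, flagged.astype(int))        # augment the watchlist
    print(f"generation {generation}: mean score for trait-holders "
          f"= {scores[X[:, 0] == 1].mean():.2f}")
```

After one round, carrying the trait alone is enough to be flagged, and every subsequent round confirms it: the loop manufactures its own ground truth.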

For those troubled by the fact that this system is not only real but currently being tested by an American company, the fact that neither the government nor DataRobot will reveal any details of the program is perhaps the most troubling of all. When asked where the predictive watchlist prototype is being tested, the DHS tech directorate spokesperson, John Verrico, told The Intercept, “I don’t believe that has been determined yet,” and stressed that the program was meant for use with foreigners. Verrico referred further questions about test location and which “risk criteria” the algorithm will be trained to look for back to DataRobot. Libby Botsford, a DataRobot spokesperson, initially told The Intercept that she had “been trying to track down the info you requested from the government but haven’t been successful,” and later added, “I’m not authorized to speak about this. Sorry!” Subsequent requests sent to both DHS and DataRobot were ignored.

Verrico’s assurance — that the watchlist software is an outward-aiming tool provided to foreign governments, not a means of domestic surveillance — is an interesting feint given that Americans fly through non-American airports in great numbers every single day. But it obscures ambitions much larger than GTAS itself: The export of opaque, American-style homeland security to the rest of the world and the hope of bringing every destination in every country under a single, uniform, interconnected surveillance framework. Why go through the trouble of sifting through the innumerable bodies entering the United States in search of “risky” ones when you can move the whole haystack to another country entirely? A global network of terrorist-scanning predictive robots at every airport would spare the U.S. a lot of heavy, politically ugly lifting.

Predictive screening further shifts responsibility. The ACLU’s Gillmor explained that making these tools available to other countries may mean that those external agencies will prevent people from flying so that they never encounter DHS at all, which makes DHS less accountable for any erroneous or damaging flagging, a system he described as “a quiet way of projecting U.S. power out beyond U.S. borders.” Even at this very early stage, DHS seems eager to wipe its hands of the system it’s trying to spread around the world: When Verrico brushed off questions of what the system would consider “risky” attributes in a person, he added in his email that “the risk criteria is being defined by other entities outside the U.S., not by us. I would imagine they don’t want to tell the bad guys what they are looking for anyway. ;-)” DHS did not answer when asked whether there were any plans to implement GTAS within the United States.

Then there’s the question of appeals. Those on DHS’s current watchlists may seek legal redress; though the appeals system is generally considered inadequate by civil libertarians, it offers at least a theoretical possibility of removal. The documents surrounding DataRobot’s predictive modeling contract make no mention of an appeals system for those deemed risky by an algorithm, nor is there any requirement in the DHS overview document that the software must be able to explain how it came to its conclusions. Accountability remains a fundamental problem in the fields of machine learning and computerized prediction, with some computer scientists adamant that an ethical algorithm must be able to show its work, and others objecting on the grounds that such transparency compromises the accuracy of the predictions.

Gadeir Abbas, an attorney with the Council on American-Islamic Relations, who has spent years fighting the U.S. government in court over watchlists, saw the DHS software as only more bad news for populations already unfairly surveilled. The U.S. government is so far “not able to generate a single set of rules that have any discernible level of effectiveness,” said Abbas, and so “the idea that they’re going to automate the process of evolving those rules is another example of the technology fetish that drives some amount of counterterrorism policy.”

The entire concept of making watchlist software capable of terrorist predictions is mathematically doomed, Abbas added, likening the system to a “crappy Minority Report. … Even if they make a really good robot, and it’s 99 percent accurate,” the fact that terror attacks are “exceedingly rare events” in terms of naked statistics means you’re still looking at “millions of false positives. … Automation will exacerbate all of the worst aspects of the watchlisting system.”

The ACLU’s Gillmor agreed that this mission is simply beyond what computers are even capable of:

For very-low-prevalence outcomes like terrorist activity, predictive systems are simply likely to get it wrong. When a disease is a one-in-a-million likelihood, the surest bet is a negative diagnosis. But that’s not what these systems are designed to do. They need to “diagnose” some instances positively to justify their existence. So, they’ll wrongly flag many passengers who have nothing to do with terrorism, and they’ll do it on the basis of whatever meager data happens to be available to them.
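
The arithmetic behind both objections takes only a few lines to check. Below is a back-of-the-envelope sketch using purely hypothetical figures, since DHS has disclosed none: a model with 99 percent sensitivity and specificity, a notional billion passenger-screenings per year, and a one-in-a-million prevalence of actual threats.

```python
# All figures are hypothetical, for illustration only; none come from
# DHS, DataRobot, or the GTAS documents.
screenings = 1_000_000_000  # notional passenger-screenings per year
prevalence = 1 / 1_000_000  # terror attacks are "exceedingly rare events"
sensitivity = 0.99          # P(flagged | actual threat)
specificity = 0.99          # P(not flagged | no threat)

actual_threats = screenings * prevalence                        # 1,000
true_positives = actual_threats * sensitivity                   # 990
false_positives = (screenings - actual_threats) * (1 - specificity)

# Of everyone the system flags, how many are actually threats?
precision = true_positives / (true_positives + false_positives)

print(f"passengers flagged:  {true_positives + false_positives:,.0f}")
print(f"false positives:     {false_positives:,.0f}")
print(f"P(threat | flagged): {precision:.5f}")  # about 0.0001, or 0.01%
```

Even granting the model accuracy far beyond anything demonstrated, the sketch yields roughly 10 million false positives a year, about ten thousand innocent travelers flagged for every real threat caught: Abbas’s “millions of false positives,” and the reason Gillmor notes that the surest bet is a negative diagnosis.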

Predictive software is not just the future, but the present. Its expansion into the way we shop, the way we’re policed, and the way we fly will soon be commonplace, even if we’re never aware of it. Designating enemies of the state based on a crystal ball locked inside a box represents a grave, fundamental leap in how societies appraise danger. The number of active, credible terrorists-in-waiting is an infinitesimal slice of the world’s population. The number of people placed on watchlists and blacklists is significant. Letting software do the sorting — no matter how smart and efficient we tell ourselves it will be — will likely do much to worsen this inequity.

I Quit Google Over Its Censored Chinese Search Engine. The Company Needs to Clarify Its Position on Human Rights.

Published by Anonymous (not verified) on Sat, 01/12/2018 - 11:00pm in

Tags 

Technology, World

A woman and her child play on a Google sign at the World Artificial Intelligence Conference in Shanghai on Sept. 26, 2018.

Photo: Johannes Eisele/AFP/Getty Images

John Hennessy, the chair of Google’s parent company, Alphabet Inc., was recently asked whether Google providing a search engine in China that censored results would provide a net benefit for Chinese users. “I don’t know the answer to that. I think it’s — I think it’s a legitimate question,” he responded. “Anybody who does business in China compromises some of their core values. Every single company, because the laws in China are quite a bit different than they are in our own country.”

Hennessy’s remarks were in relation to Project Dragonfly, a once-secret project within Google to build a version of its search engine that meets the demands of the ruling Chinese Communist Party — namely, that Google proactively censor “sensitive” speech and comply with China’s data provenance and surveillance laws.

I worked as a research scientist at Google when Dragonfly was revealed — including to most Google employees — and resigned in protest after a month of internally fighting for clarification. That’s part of why I object to this constant drift of conversations about Dragonfly from concrete, indefensible details toward the vague language of difficult compromise.

When news of Dragonfly first broke on August 1, a Google staff member who had secretly worked on Dragonfly took to the company-only Google Plus forum. The language was clear: “In my opinion it is just as bad as the leak mentions,” the staffer wrote, adding that they had asked to be removed from the project and another employee had left the company over their discomfort. At this point, my internal alarms went off, and I started pointedly asking my team and management if there was any official company response.

While employees were waiting for an official response at the next company-wide meeting, we were also sharing links to details about the project that we found through directly scouring Google’s source code, which is mostly available to all engineers. Even though much of Dragonfly had been kept from prying eyes, or “siloed,” the pieces that slipped through were disturbing. One of the Google-constructed blacklists for search terms contained numerous phrases, including “human rights” and “Nobel prize.” Code had been written to show only Chinese air quality data from an unnamed source in Beijing. And Dragonfly linked searches to the users’ phone numbers.

Having recently moved to Toronto to support my wife’s career, I was working remotely and was disconnected from any internal organizing efforts against Dragonfly. So when the company-wide meeting came and went without any substantive response to hundreds of impassioned appeals from employees, I exercised the strongest speech available to me and submitted my two weeks’ notice to my manager — and the rest of the company — in the form of a six-page document listing my objections to the project.

My final two weeks at Google were spent balancing between handing off my projects to other engineers and meeting with increasingly senior management about my letter; my penultimate evening was spent in a disappointing direct meeting with Jeff Dean, the head of artificial intelligence research and my interface to Google’s CEO. Dean argued that only a small number of queries would be censored and that China’s surveillance is analogous to the U.S.’s Foreign Intelligence Surveillance Act warrants, secret warrants purportedly issued for the purpose of rooting out foreign spies. The next day, I worked late to finish my last project handoff and anticlimactically turned in my company badge and laptop to an empty office.

Ironically, I had no intention of speaking with the press until I later read an interview Hennessy had done as part of a promotion for his recent book, “Leading Matters.” When asked about Google re-entering the Chinese market, he dismissively said, “There’s a shifting set of grounds of how you think about that problem, and how you think about the issue of censorship. The truth is, there are forms of censorship virtually everywhere around the world.”

Soon after, I went public with my resignation, and after a few more weeks of silence from Google, I detailed my objections in a letter to the Senate Commerce Committee ahead of a privacy hearing attended by Google Chief Privacy Officer Keith Enright. During the hearing, Sen. Ted Cruz, R-Texas, repeatedly pushed for answers on Dragonfly, but Enright pleaded ignorance, saying he was “not clear on the contours of what is in scope or out of scope for that project.” When asked whether China censors what its citizens can see, he dodged: “As the privacy representative of Google, I’m not sure that I have an informed opinion on that question.”

Google’s response was evasive enough that in the weeks after the hearing, Vice President Mike Pence gave a speech in which he demanded an end to Dragonfly. “Google should immediately end development of the ‘Dragonfly’ app that will strengthen Communist Party censorship and compromise the privacy of Chinese customers,” Pence said.

Yet, a little more than a week later, Google CEO Sundar Pichai attempted to invoke an engineering defense by arguing that Google would not need to censor “well over 99 percent” of queries. Such a framing is perhaps the most extreme example of a broad pattern of redirecting conversations away from Google’s concrete concessions to the Chinese government — which, again, literally involved blacklisting the phrase “human rights,” risking health by censoring air quality data, and allowing for easy surveillance by tying queries to phone numbers. Human rights and basic political speech are not an ignorable edge case.

It’s important to remember that Google’s 2010 withdrawal of its censored Chinese search engine was provoked by Beijing hacking the inner sanctum of Google’s software — its source code repository — to access the Gmail accounts of Chinese dissidents. Despite the obvious connection, Google’s leadership has entirely avoided clarifying Dragonfly’s surveillance concessions or addressing one of the main demands in a letter from a coalition of 14 human rights organizations. The letter implored Google to “[d]isclose its position on censorship in China and what steps, if any, Google is taking to safeguard against human rights violations linked to Project Dragonfly and its other Chinese mobile app offerings.”

I, for my part, would ask that Sundar Pichai honestly engage on what the chair of Google’s parent company has agreed is a compromise of some of Google’s “core values.” Google’s AI principles have committed the company to not “design or deploy … technologies whose purpose contravenes widely accepted principles of … human rights.”

Human rights organizations around the world, as well as Google’s own employees, have cried out. Google owes them all forthright answers.

Jon Pertwee ‘Dr. Who’ Strip on the Bronze Age of Blogs

Published by Anonymous (not verified) on Sat, 01/12/2018 - 9:59pm in

The Bronze Age of Blogs is a website dedicated to comics of the 1970s, though sometimes this is stretched to include strips from the late ’60s and ’80s. One of the strips it’s covered recently is a ‘Dr. Who’ strip from the comic Countdown/TV Action, which apparently ran from 1971 to 1973. The strip features the 3rd Doctor, as played by Jon Pertwee, and was written and drawn by Gerry Haylock. According to Pete Doree, the site’s author, the comic carried work by a number of great British comics artists, like Frank Bellamy, one of the artists on The Eagle’s Dan Dare, and Ron Embleton, whose name I recognize from 2000 AD.

I can vaguely remember TV 21 from my early childhood, including the Dr. Who strip. I can remember reading one such story, about an alien influence beaming in through a radio telescope and the TARDIS dematerializing just before we had a Hallowe’en party.

The Bronze Age of Blogs reproduces stories from the comics discussed, and so this post duly has one of the Doctor’s strips from the comic. To enlarge the images so that you can see them more clearly, and read the speech bubbles, simply click on them.

http://bronzeageofblogs.blogspot.com/2018/11/gerry-haylocks-dr-who.html

Frankie Boyle Jokes about Israel

I found this short video, about 5 1/2 minutes long, posted by Foam Chomsky on YouTube. It’s a series of jokes about Israel by Scots comedian Frankie Boyle, intercut with footage of Israeli soldiers beating, shooting and killing unarmed Palestinians.

As you would expect from Boyle, some of the jokes are coarse and nearly all of them savage. He starts by saying that he’s learning the Israeli military martial art, and now knows various ways to kick a Palestinian woman in the back. Female porn stars are now calling their pubic area a ‘Gaza Strip’, because it’s been viciously pummeled and there’s no hope of children getting out of there alive. The monarchy also gets a drubbing. On Prince William’s visit to Israel, Boyle quips that he’ll be the first British royal to go there who wasn’t leading a crusade. He shreds the Israeli claim that the Palestinians use people as human shields. He talks about how you have to visualize Palestine as a cake. It’s a cake that’s being pummeled by an angry Jew. Netanyahu’s name spoken with a Glaswegian accent sounds like a rubbish Scots internet provider. They should call comb-overs Netanyahus, because they’re an attempt to colonise territory they’ve no right to.

One of the jokes was cut by the Beeb from Boyle’s appearance on stage at the Palladium. There’s also a series of tweets by Boyle attacking Israel, including caustic replies to American comedian Bill Maher and former Tory prime minister David Cameron. The latter gets it for his hypocrisy in wishing Muslims a happy Eid al-Fitr at the same time as he was bombing their countries.

I don’t know if Boyle has been accused of anti-Semitism yet. It wouldn’t surprise me if he had. But there’s also an element of irony here, in that Boyle also appears to have swallowed the media lie that Jeremy Corbyn is anti-Semitic, even though Corbyn isn’t, and the people making the accusation are the same apologists for Israel that Boyle himself has torn into.
