Philosophy of technology?

Published Fri, 22/03/2019 | Tags: Technology

Is there such a thing as “philosophy of technology”? Is there a “philosophy of cooking” or a “philosophy of architecture”? All of these are practical activities – praxis – with large bodies of specialized knowledge and skill involved in their performance. But where does philosophy come in?

Most of us trained in analytic philosophy think of a philosophical topic as one that can be formulated in terms of a small number of familiar questions: what are the nature and limitations of knowledge in this area? What ethical or normative problems does this area raise? What kinds of conceptual issues need to be addressed before we can discuss problems in this area clearly and intelligently? Are there metaphysical issues raised by this area -- special kinds of things that need special philosophical attention? Does "technology" support this kind of analytical approach?

We might choose to pursue a philosophy of technology in an especially minimalist (and somewhat Aristotelian) way, along these lines:

  • Human beings have needs and desires that require material objects for their satisfaction. 
  • Human beings engage in practical activity to satisfy their needs and desires.
  • Intelligent beings often seek to utilize and modify their environments so as to satisfy their needs and desires. 
  • Physical bodies are capable of rudimentary environment modification, which may permit adequate satisfaction of needs and desires in propitious environments (dolphins).
  • Intelligent beings often seek to develop "tools" to extend the powers of their bodies to engage in environment modification.
  • The use of tools produces benefits and harms for self and others, which raises ethical issues.

Now we can introduce the idea of the accumulation of knowledge ("science"):

  • Human beings have the capacity to learn how the world around them works, and they can learn the causal properties of materials and natural entities. 
  • Knowledge of causal properties permits intelligent intervention in the world.
  • Gaining scientific knowledge of the world creates the possibility of the invention of knowledge-based artifacts (instruments, tools, weapons).

And history suggests we need to add a few Hobbesian premises:

  • Human beings often find themselves in conflict with other agents for resources supporting the satisfaction of their needs and desires.
  • Intelligent beings seek to develop tools (weapons) to extend the powers of their bodies to engage in successful conflict with other agents.

Finally, history seems to make it clear that tools, machines, and weapons are not purely individual products; rather, social circumstances and social conflict influence the development of the specific kinds of tools, machines, and weapons that are created in a particular historical setting.

The idea of technology can now be fitted into the premises identified here. Technology is the ensemble of tools, machines, and practical skills available at a given time in a given culture, through which needs and interests are satisfied and the dialectic of power and conflict furthered.

This treatment suggests several leading questions for a philosophy of technology:

  1. How does technology relate to human nature and human needs?
  2. How does technology relate to intelligence and creativity?
  3. How does technology relate to scientific knowledge?
  4. How does technology fit into the logic of warfare?
  5. How does technology fit into the dialectic of social control among groups?
  6. How does technology relate to the social, historical, and cultural environment?
  7. Is the process of technological change determined by the technical characteristics of the technology?
  8. How does technology relate to issues of justice and morality?

Here are a few important contributions to several of these topics.

Lynn White's Medieval Technology and Social Change illustrates almost all elements of this configuration. His classic book begins with the dynamics of medieval warfare (the impact of the development of the stirrup on mounted combat); proceeds to food production (the development and social impact of the heavy iron plough); and closes with medieval machines.

Charles Sabel's treatment of industrialization and the creation of powered machinery in Work and Politics: The Division of Labour in Industry addresses topic 5. Sabel demonstrates that industrialization, and the specific character of the mechanization that ensued, was a process substantially guided by conflicts of interest between workers and owners: owners selected technologies that reduced workers' powers of resistance. Sabel and Zeitlin make this argument in greater detail in World of Possibilities: Flexibility and Mass Production in Western Industrialization. One of their most basic arguments is that firms are strategic and adaptive as they deal with a current set of business challenges. Rather than an inevitable logic of new technologies and their organizational needs, we see a highly adaptive and selective process in which firms pick and choose among alternatives, carefully weighing the possible changes on the horizon and frequently hedging their bets by investing in both the old and the new technology. "Economic agents, we found again and again in the course of the seminar's work, do not maximize so much as they strategize" (5). (Here is a more extensive discussion of Sabel and Zeitlin; link.)

The logic underlying the idea of technological inevitability (topic 7) goes something like this: a new technology creates a set of reasonably accessible new possibilities for achieving new forms of value: new products, more productive farming techniques, or new ways of satisfying common human needs. Once the technology exists, agents or organizations in society will recognize those new opportunities and will attempt to take advantage of them by investing in the technology and developing it more fully. Some of these attempts will fail, but others will succeed. So over time, the inherent potential of the technology will be realized; the technology will be fully exploited and utilized. And, often enough, the technology will both require and force a new set of social institutions to permit its full utilization; here again, agents will recognize opportunities for gain in the creation of social innovations, and will work towards implementing these social changes.

This view of history doesn't stand up to scrutiny, however. There are many examples of technologies that failed to come to full development (the water mill in the ancient world, the Betamax in the contemporary world). There is nothing inevitable about the way a technology will develop, no trajectory imposed by the underlying scientific realities of the technology itself; and there are numerous illustrations of a more complex back-and-forth between social conditions and the development of a technology. So technological determinism is not a credible historical theory.

Thomas Hughes addresses topic 6 in his book Human-Built World: How to Think about Technology and Culture. Here Hughes considers how technology has affected our cultures in the past two centuries. The twentieth-century city, for example, could not have existed without electricity, steel-framed buildings, elevators, railroads, and modern waste-treatment technologies. So technology "created" the modern city. But it is also clear that life in the twentieth-century city was transformative for the several generations of rural people who migrated to it. And the literature, art, values, and social consciousness of people in the twentieth century have surely been affected by these new technology systems. Each part of this complex story involves processes that are highly contingent and highly intertwined with social, economic, and political relationships. And the ultimate shape of the technology is the result of decisions and pressures exerted throughout the web of relationships through which the technology took shape. But here is an important point: there is no moment in this story where it is possible to put "technology" on one side and "social context" on the other. Instead, the technology and the society develop together.

Peter Galison's treatment of the simultaneous discovery of the relativity of time measurement by Einstein and Poincaré in Einstein's Clocks and Poincaré's Maps: Empires of Time provides a valuable set of insights into topic 3. Galison shows that Einstein's thinking was very much influenced by practical issues in the measurement of time by mechanical devices. This has an interesting corollary: the scientific imagination is sometimes stimulated by technology issues, just as technology solutions are created through imaginative use of new scientific theories.

Topic 8 has produced an entire field of research of its own. The morality of the use of autonomous drones in warfare; the ethical issues raised by CRISPR technology in human embryos; the issues of justice and opportunity created by the digital divide between affluent people and poor people; privacy issues created by ubiquitous facial recognition technology -- all these topics raise important moral and social-justice issues. Here is an interesting thought piece by Michael Lynch in the Guardian on the topic of digital privacy (link). Lynch is the author of The Internet of Us: Knowing More and Understanding Less in the Age of Big Data.

So, yes, there is such a thing as the philosophy of technology. But to be a vibrant and intellectually creative field, it needs to be cross-disciplinary, and as interested in the social and historical context of technology as it is in the conceptual and normative issues raised by the field.

Inside the Video Surveillance Program IBM Built for Philippine Strongman Rodrigo Duterte

Published Thu, 21/03/2019 | Tags: Technology, World

Jaypee Larosa was standing in front of an internet cafe in Davao City, a metropolitan hub on the Philippine island of Mindanao, when three men in dark jackets pulled up on a motorcycle and opened fire. That summer evening, Larosa, 20, was killed. After the shooting, according to witnesses, one of the men reportedly removed Larosa’s baseball cap and said, “Son of a bitch. This is not the one.” Then they drove off.

Larosa’s murder, on July 17, 2008, was one of hundreds of extrajudicial killings carried out in Davao City, now a city of 1.6 million, while Rodrigo Duterte, now president of the Philippines, was mayor there. Years before launching his notorious, bloody “drug war” across the country, Duterte presided over similar tactics at the local level. During his tenure as mayor, according to a 2009 investigation by Human Rights Watch, death squads assassinated street children, drug dealers, and petty criminals; in some cases, researchers found evidence of the complicity or direct involvement of government officials and police.

Duterte has consistently denied any connection to this campaign of killings, but at times, his support for the violence was barely concealed. As mayor, Duterte would publicly announce the names or locations of “criminals,” and some of them would later be killed, according to human rights groups and local newspapers. Although it stopped short of accusing Duterte himself of misconduct or direct involvement, the Philippines’ Office of the Ombudsman partially acknowledged in 2012 the police’s role in tolerating the killings, finding that 21 Davao City police officials and officers were “remiss in their duty” for failing to solve them.

Children hold the coffin of 13-year-old Aldrin Pineda, who was shot by a police officer, during his funeral in Manila, Philippines, on March 14, 2018.

Photo: Ezra Acayan/NurPhoto via Getty Images

But this potential complicity in human rights violations did not stop IBM from agreeing to provide surveillance technology to law enforcement in Davao City. On June 27, 2012, three years after the devastating Human Rights Watch report, IBM issued a short news release announcing an agreement with Davao to upgrade its police command center in order to “further enhance public safety operations in the city.” IBM’s installation, known as the Intelligent Operations Center, promised to enhance authorities’ ability to monitor residents in real time with cutting-edge video analytics, multichannel communications technology, and GPS-enabled patrol vehicles. Less than two months later, the Philippine Commission on Human Rights published a resolution condemning Davao authorities for fostering a “climate of impunity” with regard to the killings, recommending that the National Bureau of Investigation undertake an impartial investigation into potential obstruction of justice by local police officials. (Duterte has recently condemned the commission, questioning its motives and suggesting that it should be abolished.)

The 2012 IBM deal was signed by Rodrigo Duterte’s daughter, Sara Duterte, who was Davao City’s nominal mayor at the time, while her term-limited father served as vice mayor; under Sara Duterte, the killings continued. The system, according to local news reports, was deployed in June 2013, just as Rodrigo Duterte was about to return to the mayoral seat he had already held for nearly two decades. The police command center, Sara Duterte told the Durian Post, “is now infused with IBM’s IOC technology,” allowing police to “shift from responding to critical events to anticipating and preventing them.”

While The Intercept and Type Investigations were unable to locate any reference to Davao’s death squads in IBM’s public corporate documents about the program, a 2014 company overview of the installation made clear that IBM knew “illegal drugs,” predictive policing, and crime suppression were among Davao City security forces’ “priority areas.” From 2013 through late 2016 (when, by one Davao security official’s estimate, the IBM program stopped being in active use), Filipino human rights activists who worked closely with the Commission on Human Rights claimed to have documented at least 213 extrajudicial killings carried out by Davao death squads.

Davao City officials did not respond to queries related to IBM’s video surveillance system or its potential role in extrajudicial killing operations during its run. But three police and city security officials interviewed in Davao City last year said the program had strengthened police video monitoring capabilities, which they said had proved useful in Davao’s controversial war on so-called drug syndicates. That war, human rights reports and former death squad participants have shown, often targeted low-level drug users and peddlers, rather than major traffickers.

Amado Picardal, a former spokesperson of the Coalition Against Summary Executions, a Davao-based human rights group, called IBM’s work “unethical,” given that some of the killings had been linked to Duterte’s police in the years before its deal with Davao City.

IBM declined to respond to queries about its human rights record in Davao City. IBM spokesperson Edward Barbini briefly noted that the company “no longer supplies technology to the Intelligent Operations Center in Davao, and has not done so since 2012,” though he declined to clarify whether IBM serviced the technology after that point, and IBM’s public filings mention the program as ongoing after that date. “The Philippines city of Davao’s 1.5 million citizens will be the first in Asia to benefit from an Intelligent Operations Center,” an April 3, 2013, IBM disclosure reads. “A new early warning system will monitor key risk indicators so agencies can take quick action before situations escalate.”

In the years since the IBM program was phased out, Philippine police interest in cutting-edge surveillance infrastructure has hardly waned. National authorities are now looking to deploy real-time facial recognition across the country, in a project called “Safe Philippines,” and have considered technology from a variety of international vendors, including the Chinese telecom Huawei.

In December, a local newspaper reported that the Philippines had secured a 20 billion-peso loan for the installation of thousands of surveillance cameras across Davao City and metro Manila in collaboration with a Chinese firm, an installation that would reportedly include a national command center and feature facial and vehicle recognition software. In a January interview on Filipino television, Epimaco Densing III, undersecretary of the Department of the Interior and Local Government, said that a goal of the project is to detect the faces of terrorist suspects and prevent crimes before they take place.

Filipino activists worry that such capabilities could facilitate human rights violations. Over the last three years, parts of the country have been under temporary declarations of martial law, and Duterte’s “war on drugs” has left at least 5,000 and possibly as many as 27,000 dead (police and human rights groups’ estimates vary widely). Those killed have included anti-Duterte activists, elected officials, and outspoken Catholic priests. Currently, Duterte is campaigning to modify the constitution, a move that could afford the executive powers to further suppress political opponents.

Surveillance Capabilities in Davao

In June 2012, Mayor Sara Duterte announced a 128 million-peso deal, worth just over $3 million at the time, with IBM to improve the city’s real-time monitoring capabilities. The announcement promised to “scale up” Davao’s Public Safety and Security Command Center, or PSSCC, with improved communications and surveillance technology.

Sayaji Shinde, a former IBM sales leader who says he was part of the team that secured the command center deal, recalls that his team was eager to partner with the Duterte administration. “If you look at the Dutertes as such, they focus a lot on public-sector security,” said Shinde. “And I think that is one of the drivers, for even us, to go and spend our time and advise them because we saw that they are really keen to ensure that the city become more safer.”

To seal the deal, Shinde said, IBM pointed to the international recognition that such a project would bring Davao. “That is precisely what we sold them: ‘You know if you do this, work with us, and it becomes first of its kind, then this will be highlighted globally.’”

In the initial phase of the project, IBM mapped Davao’s police cameras onto a geographic information system, allowing operators to quickly access camera feeds near locations of interest, Shinde said.

According to Shinde, the rollout also featured a multichannel communications system, allowing police, traffic, and defense personnel to communicate with one another. It also included video analytics technology that automatically tagged objects captured on camera, like cars and people, by their physical attributes. The tags included the objects’ size, speed, color, trajectory, and direction, according to a November 2014 IBM presentation to the Asian Development Bank, allowing command center operators to comb through camera footage in search of suspects by their descriptions. (IBM had refined these kinds of surveillance capabilities using secret access to New York Police Department camera footage, as The Intercept and Type Investigations reported in September.)
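
IBM has not published the data model behind this tagging system, but the general pattern the presentation describes, attaching attribute metadata to every detected object and then filtering recorded footage against a suspect’s description, is straightforward to sketch. Everything in the following Python sketch (field names, attribute values, the search function) is a hypothetical illustration of that pattern, not IBM’s actual software:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ObjectTag:
        """One detection event from a street camera (hypothetical schema)."""
        camera_id: str
        timestamp: datetime
        kind: str        # e.g. "person" or "car"
        color: str       # dominant color estimated from the pixels
        size: str        # e.g. "small", "medium", "large"
        speed: float     # apparent speed across the frame
        direction: str   # e.g. "northbound"

    def search(tags, **criteria):
        """Return every tag whose attributes match all supplied criteria."""
        return [t for t in tags
                if all(getattr(t, field) == value
                       for field, value in criteria.items())]

    # An operator hunting for "a red car heading north" would run:
    #   search(all_tags, kind="car", color="red", direction="northbound")

Once footage has been tagged this way, combing through hours of video reduces to a database query over the tags.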

“That was probably the first-ever video analytics surveillance that was done in Asia,” said Shinde, noting that the system could be used in the wake of robberies or murders to track a suspect’s car before and after a crime. The software was “very user-friendly,” he noted, so Davao security officials at the command center could easily have become competent in the program’s object search capabilities.

Davao City PSSCC video showcasing IBM’s “Face Capture” technology.

Screenshot: The Intercept

The 2014 IBM presentation on its Davao project also mentions a tool known as “Face Capture,” which boxes out images of faces in real time and stores them for retroactive analysis. In a recent interview, Emmanuel Jaldon, head of Davao City’s 911 Center, claimed that this functionality was planned but never formally deployed. Barbini also claims that IBM “never supplied facial recognition capability for the center.” And Shinde, who left IBM in 2014, said that Face Capture was not integrated while he was there during the first phase of the Davao project. But a February 2015 promotional video for the PSSCC, highlighting the command center’s monitoring capabilities and ability to “suppress all forms of criminalities,” features a clip of IBM’s Face Capture interface in action, gathering facial images from pedestrians on the streets of Davao City. Footage of what appears to be the IBM Davao City dashboard, pictured above, shows the software boxing out and collecting facial images as people walked past street cameras.
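
The 2015 video shows only Face Capture’s interface, and IBM’s implementation is proprietary. But the behavior described, boxing out faces in live video and storing the crops for retroactive analysis, is a standard computer-vision operation. A minimal sketch of the same behavior using the open-source OpenCV library (not IBM’s software, and purely illustrative) might look like this:

    import os
    import time

    import cv2  # OpenCV; a stock face detector stands in for IBM's model

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def capture_faces(video_source=0, out_dir="captures"):
        """Box out each face in the video stream and store the crop on disk."""
        os.makedirs(out_dir, exist_ok=True)
        cam = cv2.VideoCapture(video_source)
        try:
            while True:
                ok, frame = cam.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = detector.detectMultiScale(
                    gray, scaleFactor=1.1, minNeighbors=5)
                for i, (x, y, w, h) in enumerate(faces):
                    crop = frame[y:y + h, x:x + w]
                    # Retroactive analysis depends on keeping every capture.
                    cv2.imwrite(f"{out_dir}/face_{time.time():.0f}_{i}.jpg",
                                crop)
        finally:
            cam.release()

Even this toy version makes the retroactive-analysis point concrete: detection alone, with no recognition step, already accumulates a searchable archive of everyone who walks past the camera.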

The program also helped authorities monitor “crowd behavior” and instances of “loitering” — a crime that Duterte has cracked down on nationally as president — according to the 2014 IBM presentation.

IBM’s Technology in Duterte’s War on Crime

When asked what assurances he was given about how the surveillance program would be used, Shinde defended IBM’s sale, saying that it was intended for legitimate public safety activities, such as responding to fires. “That particular implementation was not meant to track people,” he said. “It was meant to track the incidents and faster responses to those incidences.”

But in interviews in the command center, the nearby 911 center, and other locations in Davao City, local law enforcement officials familiar with the IBM program told The Intercept and Type Investigations that the technology had assisted them in carrying out Duterte’s controversial anti-crime agenda.

Manuel Gaerlan, a former regional Philippine National Police chief superintendent, said the command center, which IBM substantially upgraded, functions as a force multiplier in counter-drug operations. “It records events so it’s easier to identify the perpetrators, then you can go after the member of the syndicates,” he said. “If you can see more areas, you can send patrol to respond. It’s like putting more men on the ground. And you can put more cameras in drug areas.”

Jaldon, the 911 chief, pointed to IBM’s object tagging and search feature as the most useful tool the program gave law enforcement in counter-drug operations, especially when it came to “backtracking,” or investigating incidents after the fact. “After an event, the system helps find them quickly, give you awareness,” he said. “It helps in investigations to slice and dice by time, color, type of physical feature.” Most significantly, he said the program’s real-time alerts could also increase authorities’ “awareness of suspects’ presence.”

Antonio Boquiren, a training and research officer at the Davao command center, said the video capabilities helped police crack down on low-level quality-of-life violations.

“Whether it’s criminality, smoking, or jaywalking, any violation of ordinance is a crime and a police is sent,” he said, laughing. “People who smoke complain, ‘How did you catch us before we even lit?’ The police officer will point to the CCTV.”

The targeting of petty criminals, gang members, and street children by Davao death squads figures prominently in the 2009 Human Rights Watch report. And a 2015 promotional video featuring IBM’s technology shows authorities aggressively going after low-level crimes. One clip highlights a young man, caught on CCTV, stealing a bag from a truck. Later, the narrator notes that the technology gives police faster response times and cuts to footage of police officers chasing after a group of people on the street. One then raises his baton as if to hit one of them.

A former Philippine Army security consultant with close ties to Philippine intelligence, who requested anonymity for fear of reprisal, claimed that IBM’s program assisted police not only in monitoring criminal activities, but also in gathering intelligence on the activities of the political opposition in Davao. Based on his dealings with Davao City law enforcement officials, he said he couldn’t rule out that the data feed was implicated in extrajudicial killings.

Activists commemorate International Human Rights Day by burning an effigy of Philippine President Rodrigo Duterte during a protest in Manila on Dec. 10, 2017.

Photo: Noel Celis/AFP/Getty Images

Even if IBM’s program was solely used to assist in legitimate police responses to crime and fires, as Shinde said it was designed to do, surveillance researchers point out that it could well have enabled extrajudicial killings, simply by helping police capture or monitor everyday criminal suspects. The government has long denied the existence of police death squads, but in the Dutertes’ Davao, victims of extrajudicial killing were sometimes targeted immediately after being released from police custody, and police frequently killed suspects during planned raids.

In October 2015, for example, Duterte warned a group of drug dealers on a street called Dewey Boulevard that they had 48 hours to leave the city or be killed. “If you are into drugs, I’m warning you,” he announced, according to local press reports. “I’m giving you 48 hours, 48 hours. If I see you there, I’ll have you killed.” Police reportedly monitored the area and relayed that some known dealers had left. But a day after the warning, police fatally shot Armanuel Atienza, a 38-year-old community leader, claiming that he had resisted arrest during a buy-and-bust operation and that they found a handgun and drugs on his person. Such claims are suspect. According to 2016 Senate testimony by Edgar Matobato, who allegedly served as a death squad member from 1988 to 2013, Davao police regularly planted guns and drugs on suspects after killing them. (Duterte has asserted that he does not know Matobato and has implied that he may have committed perjury in this testimony. The Duterte administration’s communications office did not respond to detailed queries related to the IBM program, or its potential role in human rights violations.)

IBM’s object-tagging capability, for example, could have been used to locate a suspect by their physical attributes, someone who may then have become a target of extrajudicial violence, explains Kade Crockford, a technologist with the American Civil Liberties Union of Massachusetts, whose research focuses on police surveillance. “Maybe the system identifies three to four people, then law enforcement are sent to find those people,” Crockford said. “Maybe that person isn’t executed on the spot by law enforcement, but police question him about him and his associates; now he and some of the people he named make their way on to a list which ends up in the hands of a death squad.”

Social media posts from a PSSCC department head, archived on a local blog, suggest that the center, using IBM’s technology, was effective at nabbing suspected criminals.

A policeman investigates the scene where the body of an alleged drug user lies dead at a slum area in Manila after unidentified assailants killed him on Dec. 8, 2017.

Photo: Noel Celis/AFP/Getty Images

In August 2014, that official claimed that police monitored and caught a group of street kids stealing from a cab driver “through the coordination” of the PSSCC and city police. That December, he claimed that the Intelligent Operations Center was a factor in the police surveillance and capture of a man cruising around Davao City with a gun.

IBM’s “Face Capture” feature, if deployed, also could have helped authorities locate wanted people in near real time — including residents on watchlists, according to Crockford. “Imagine a scenario in which someone in the police force, who has access to this system and works with the local death squad, producing lists of people to be killed,” she said. “This technology could help the police leader to ID a person on the kill list in real time and then have them deploy the death squads to go get them.”

The Davao command center, according to a local news report, did have facial recognition capabilities in place by 2014, though the technology was not identified with IBM. And according to the 2009 Human Rights Watch report, Davao’s death squads were known to rely in part on photos of targets on their watchlists.

In August 2016, Artemio Jimenez Jr., a neighborhood political leader and vocal supporter of Duterte’s war on drugs, turned himself in to Davao City police after apparently discovering that he was on a government watchlist of suspected drug users, offering to be tested for drugs in order to clear his name. Police tested his urine for methamphetamine and cannabinol, according to The Inquirer, tests that came up negative. Nonetheless, the next month, “unidentified gunmen” drove up to his car and fired repeatedly, killing him and wounding his driver and bodyguard. Police claimed that they were investigating, but never announced a suspect or motive in the shooting. Nor did they explain how the assassins knew Jimenez’s location.

IBM’s Public Human Rights Commitments

IBM publicly claims to be “committed to high standards of corporate responsibility” and to consider the “social concerns” of the communities in which it operates. IBM’s Human Rights Statement of Principles cites a number of international standards, including the U.N. Guiding Principles on Business and Human Rights, which call on corporations to perform due diligence on the “human rights context prior to a proposed business activity,” identify “who may be affected,” and project “how the proposed activity and associated business relationships could have adverse human rights impacts on those identified.” These standards also call on companies to proactively track potential human rights abuses related to their business activities and require “active engagement” in the remediation of any identified abuses.

IBM’s Securities and Exchange Commission documents and annual reports between 2012 and 2016 contain a few scattered mentions of its project in Davao, but no discussion of any potential human rights concerns or any preventative measures taken by the company. None of IBM’s corporate social responsibility reports have ever mentioned its collaboration with Duterte in Davao.

Despite reporting by Human Rights Watch and local papers, Shinde claimed that the human rights allegations against the Duterte regime were “not in the news at all during those days.” There was “nothing said like that about him at that time,” he continued, pointing out that IBM contracted with Sara Duterte, not her father, who, he said, “didn’t have such a kind of record.”

Yet when IBM agreed to work with the Duterte family’s administration in 2012, the regime’s support for extrajudicial killings in Davao City was already well established; as early as 2009, Duterte had described criminals as “a legitimate target for assassination.” In 2012, the year IBM signed the deal with Sara Duterte, local human rights activists claimed to have documented 61 death squad killings.

According to IBM documents and law enforcement officials, the Philippine National Police also received information from the surveillance command center. Before the IBM deal was signed, the Philippine National Police had also been criticized for failing to investigate death squad killings, and since Duterte became president, it has played a role in the deadly national “war on drugs.”

“If they had the technology then, I have no doubt that they used it and continue to use it to locate the targets for elimination,” said Picardal, formerly of the Coalition Against Summary Executions. “And not only drug users but human rights defenders, activists, and anyone they consider as enemies of the state.”

IBM had to have known about the Dutertes’ track record at the time, said a U.S. official who recalled being briefed by IBM about its Davao City project. “I can’t see how they wouldn’t have known about it. They have local people working for them,” said the official, who requested anonymity because he is not authorized to speak on U.S. government matters.

Joshua Franco, head of technology and human rights at Amnesty International, noted that Rodrigo Duterte’s record as mayor was so well-documented that any company engaging with the Davao police at that time would have had a responsibility to investigate and avoid potential complicity in human rights violations before signing any agreements.

“There is documentation of the killings, by persons believed to be linked to the police, that went on in Davao City while Duterte was mayor,” he said. “Human rights organizations have documented that, during this period, as many as 1,000 people were killed, including street children, people who used and sold drugs, as well as petty criminals. Without implementing a rigorous human rights due diligence process, companies supplying the local police forces suspected of having been involved in the killings with policing equipment and technology may have enabled or facilitated the commission of human rights violations.”

Asked about the human rights implications of the surveillance program, Philippine law enforcement officials familiar with the IBM system made light of such concerns.

“If police do some human rights abuses, who cares?” said one official, claiming that such tactics had resulted in significant crime reductions.

Gaerlan, the regional police superintendent, joked about the extrajudicial killing of alleged drug lord Melvin Odicta Sr., who was shot, according to police, by two “unidentified assailants.” Gaerlan’s agency, the Philippine National Police, officially speculated that he may have been killed by other drug dealers. But the commander waved off that version of events. “He was shot right off the ship,” he said, laughing. “He was trying to evade authorities by not coming here on a plane. He never holds drugs. You can’t catch him, but he was killed. Not by anyone in uniform! It was just some vigilantes, but they weren’t in uniform!”

Legal protections for the accused, such as due process, may be good in theory, argued Boquiren, the PSSCC officer, but they aren’t practical because of a court system he characterized as inefficient and corrupt. “Due process is good on the point of lawyers, but if we are talking about the criminal justice system, it’s weak. Even clear-cut cases of murder take years, witnesses die, so something is wrong,” he said.

“If people don’t have discipline, they don’t obey,” he continued. “But if there is fear, they will obey.”

Philippine President Rodrigo Duterte delivers a speech during the “Digong’s Day for Women” event on March 31, 2017.

Photo: Noel Celis/AFP/Getty Images

Duterte’s Mass Surveillance Plans

In November, Jaldon said that IBM’s surveillance program was no longer active in Davao. He said that authorities switched over to an in-house software system in 2016. Still, he and Boquiren said that the urban surveillance center model IBM helped build in Davao City has served as an inspiration for the Duterte administration. “Within the next few years, the president will have replicated our system everywhere,” Boquiren said last January. “Every time he goes somewhere, he keeps telling local leaders, go to Davao City and replicate the PSSCC.”

Duterte’s plan is to expand and unify public safety and emergency response centers at a regional and national level in the coming years, Jaldon said. “The hard part before was the budget costs, but that won’t be a problem anymore with the president prioritizing this.”

Jaldon and Boquiren said national authorities — including Duterte himself — are interested in expanding surveillance centers across the country and upgrading their video capabilities to include real-time facial recognition, which could compare the faces of suspects to facial images caught on CCTV.

In February 2018, a local news report cited anonymous sources indicating that Duterte was pursuing a partnership with Huawei, a Chinese telecom firm, to provide facial recognition technology, a development Boquiren confirmed at the time.

Then in December 2018, the Philippine legislature learned that a different Chinese firm, the state-owned China International Telecommunications and Construction Corp., had loaned the Philippines Department of the Interior and Local Government 20 billion pesos to install 12,000 surveillance cameras across Davao City and metro Manila. The “Safe Philippines” infrastructure, according to a report in the Philippine Star, will include a national command center and a backup data center, equipped with facial and vehicle recognition software. At a Senate hearing, Sen. Ralph Recto raised concerns about China’s involvement in the project, and officials from the national Department of Information and Communications Technology testified that they had not been consulted about the deal.

Several other Chinese firms had originally been proposed by the Chinese Embassy for the project, including Huawei. But according to a January 2019 Senate resolution introduced by Recto, Huawei was slated to participate only as a major subcontractor, serving as the project’s “primary equipment supplier.”

According to Boquiren, Huawei promised that its facial recognition product could capture someone “even with an image of the side of their face” and “store up to a million faces.” In a November 2018 call, Boquiren reiterated that unspecified police authorities were looking at Huawei technology, but declined to discuss any additional details, citing a lack of technical expertise. Jaldon cautioned that while the Chinese firm had “a good system,” authorities were still in the process of assessing a variety of facial recognition vendors as part of the implementation of the “safe city project” across the country.

The Philippines’ potential collaboration with Chinese firms, which resulted from an agreement reached during the visit of Chinese President Xi Jinping last November, reflects Duterte’s ongoing pivot to China and away from the United States. Huawei, in particular, is alleged to have such close ties to the Chinese state that it has been banned from U.S. government contracts and from providing some security products to Australia for fear of backdoor intrusions by Chinese intelligence actors.

The former consultant to the Philippine Army said his understanding is that the Safe Philippines installation will be modeled after Chinese facial recognition infrastructure, uniting CCTV installations and intelligence databases from security agencies across the country into one unified system. “The project aims to establish new CCTV networks and cascade them with all existing CCTV installations,” he said. “Patterned after the Chinese police state, the system is intended to tap databases from a variety of agencies of the government and integrate them with the data streams from the CCTV networks.”

In a more recent interview, the former consultant said that, given the scrutiny Huawei has drawn, the Department of the Interior and Local Government may opt for another technology equipment supplier, a claim that Densing, the Department of the Interior official, echoed in the January television interview.

Maya Wang, senior researcher on China at Human Rights Watch, said the potential adoption of a Chinese-style surveillance infrastructure, facilitated by Chinese companies, is very concerning given the “context of Duterte’s increasing abuses, drug war, and large-scale extrajudicial violence.” But Wang cautioned that the costs and expertise required for such systems are not easily replicable. The Philippine government could potentially “replicate one or some of the systems, but not all of the overlapping, multitiered mass surveillance systems seen in China,” she said.

Anti-Duterte activists worry that this planned consolidation of surveillance capabilities could further enable Duterte-aligned forces to stamp out pockets of political resistance. An integrated national system of real-time facial recognition technology, according to Picardal, the former spokesperson for the Coalition Against Summary Executions, would ensure the fulfillment of what he called Duterte’s “plan to exercise full authoritarian/dictatorial rule and repress dissent.” Picardal, who is currently in an undisclosed location, said such a system would also threaten him personally, as he believes that Duterte’s death squads want him dead. Since Duterte came to power nationally, several other dissident priests in the Philippines have been murdered. (Duterte has denied condoning extrajudicial killings as president. A presidential spokesperson said last year that Picardal should seek court protection if he feels threatened.)

Relatives of victims of extrajudicial killings light candles next to pictures of their loved ones during a vigil in Quezon city, Metro Manila, Philippines, on Dec. 1, 2017.

Photo: Ezra Acayan/NurPhoto via Getty Images

“I have transferred to a more secure location,” he said. “But with that technology, it would be more difficult for me to come out in the open and that will restrict my freedom of movement. That technology will be used not just to locate, arrest, and charge dissidents in court, but, worse, to inform the death squads of their whereabouts.” He warned that the technology will increase extrajudicial killings, “instill fear on those who oppose his rule,” and curtail citizens’ “right to free assembly and redress of grievances. This type of technology will weaken democracy and will advance authoritarian rule all over the country.”

Since taking power, the Duterte administration has attempted to shut down or mute critical news coverage, including, in January, the online news site Rappler; numerous political activists have been among those assassinated. Meanwhile, the president’s notorious “drug war” has left thousands more dead. “It’s not just the killing of thousands,” the former army security consultant warned. “It results in a killing organization, the police, that is easy to expand. The drug war is a mirror to the larger future.”

Gaerlan, the recently retired national police commander, scoffed at such concerns. “The human rights activists, Rappler, all of them act like this is a dictatorship, but if that is so, tell me how are they protesting and not being suppressed?” he said. “Obedience to the law, before what you think is right, above all else.”

This article was reported in partnership with Type Investigations.

Defense Tech Startup Founded by Trump’s Most Prominent Silicon Valley Supporters Wins Secretive Military AI Contract

Published Sun, 10/03/2019

A startup founded by a young and outspoken supporter of President Donald Trump is among the latest tech companies to quietly win a contract with the Pentagon as part of Project Maven, the secretive initiative to rapidly leverage artificial intelligence technology from the private sector for military purposes.

Anduril Industries is the latest venture of Palmer Luckey, the 26-year-old entrepreneur best known for having founded Oculus, the virtual reality firm behind the Rift headset. Luckey began work on Project Maven last year, along with efforts to support the Defense Department’s newly formed Joint Artificial Intelligence Center, according to documents viewed by The Intercept.

The previously unreported Project Maven contract could be a boon for Anduril’s bottom line. Founded in 2017, the company has said it seeks to remake the defense contracting industry by incorporating the latest innovations of Silicon Valley into warfighting technology.

Last year, Google’s involvement with Project Maven stirred a controversy inside the tech giant. The company had signed a contract with the Defense Department to develop artificial intelligence that could interpret video images in order to improve drone targeting. But after the contract’s disclosure sparked an internal rebellion among employees, Google allowed its contract to expire. The Google flap and the wider military drive to adopt commercial artificial intelligence technology unleashed a fierce debate among tech companies about their role in society and ethics around advanced computing.

Anduril Industries is developing virtual reality technology using Lattice, a product the firm offers that draws on ground-based sensors and autonomous helicopter drones to provide a three-dimensional view of terrain. The technology is designed to give soldiers a virtual view of the front lines, including the ability to identify potential targets and direct unmanned military vehicles into combat. The first phase of the research has been completed, according to the documents reviewed by The Intercept, with initial plans to deploy virtual reality battlefield-management systems for the war in Afghanistan. (Anduril and the Pentagon did not respond to requests for comment.)

Luckey dropped hints about Anduril’s involvement in the project last November in Lisbon, Portugal, at the Web Summit, a technology conference. “We’re deployed at several military bases. We’re deployed in multiple spots along the U.S. border,” Luckey said, cryptically adding: “We’re deployed around some other infrastructure I can’t talk about.” He also discussed how he hoped the military would apply Anduril’s technology.

“What we’re working on is taking data from lots of different sensors, putting it into an AI-powered sensor fusion platform so that you can build a perfect 3D model of everything that’s going on in a large area,” Luckey said. “Then we take that data and run predictive analytics on it, and tag everything with metadata, find what’s relevant, then push it to people who are out in the field.”

“Practically speaking, in the future, I think soldiers are going to be superheroes who have the power of perfect omniscience over their area of operations, where they know where every enemy is, every friend is, every asset is,” he said. Luckey said he thinks it is “unlikely” that soldiers of the future will directly carry weapons in the field; instead, they would remotely operate machines and weapons from far away.
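
Anduril has not published Lattice’s internals, so the quotes above are the only architecture on record here. The pipeline Luckey describes, many sensors reporting into one fused model of an area that is then tagged and pushed out to users, is in outline the generic sensor-fusion pattern. A deliberately simplified Python sketch of that pattern, with every name hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Detection:
        """A single report from one sensor (hypothetical format)."""
        sensor_id: str
        position: tuple   # (x, y, z) in shared map coordinates
        label: str        # e.g. "vehicle" or "person"
        confidence: float

    def _distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    @dataclass
    class WorldModel:
        """A fused picture of an area, built from many sensors' reports."""
        objects: list = field(default_factory=list)

        def ingest(self, det: Detection, radius: float = 5.0):
            # Reports that land near an already-tracked object are merged
            # into it; anything else becomes a newly tracked object.
            for obj in self.objects:
                if _distance(obj["position"], det.position) < radius:
                    obj["sightings"].append(det)
                    return
            self.objects.append({"position": det.position,
                                 "label": det.label,
                                 "sightings": [det]})

A real system would add tracking over time, coordinate registration between sensors, and the predictive analytics Luckey mentions; this sketch shows only the fusion step that turns many sensor feeds into one model of an area.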

Anduril previously garnered attention for its efforts to help U.S. Customs and Border Protection create a “virtual wall” at the U.S.-Mexico border. The initial 10-week demonstration used Anduril’s Lattice technology to monitor a stretch of land along the Rio Grande Valley. The system reportedly helped the government identify and apprehend 55 unauthorized individuals crossing the border.

The company has also publicly acknowledged work to develop perimeter defense monitoring around two U.S. Marine bases.

Anduril’s pitch deck, the presentation it provided to solicit investors, imagines a future of warfighting by means that might look like science fiction to the average observer. The company is pushing battlefield management technology capable of utilizing long-range bombers and swarms of military attack drones. The firm has reportedly rented a warehouse in Oakland, California, to develop at least one remote-control tank, designed for fighting California wildfires.

Palmer Luckey, founder of Anduril Industries, smiles during the Wall Street Journal D.Live global technology conference in Laguna Beach, Calif., on Nov. 12, 2018.

Photo: Patrick T. Fallon/Bloomberg via Getty Images

A Defense Contractor in Flip Flops

Palmer Luckey stands out among other defense industry executives. In contrast to the buttoned-down image of executives at Lockheed Martin or Northrop Grumman, he typically appears in public wearing flip-flops and a partially unbuttoned Hawaiian shirt. And he stands out in other ways: Unlike other tech industry leaders, Luckey is unabashedly partisan. Being an avowed supporter of the Republican Party has made him a lightning rod in Silicon Valley.

The son of a car salesman, Luckey was homeschooled by his mother in Long Beach, California. He got his start by parlaying a passion for tinkering with video game-optimized home computers into a virtual reality business. Using funds raised on Kickstarter, Luckey developed a new model of virtual reality headset at age 17. Four years later, he sold Oculus to Facebook for over $2 billion, earning him an estimated fortune of around $700 million.

During the 2016 election, Luckey posted on pro-Trump forums on Reddit, encouraging community members to develop memes critical of Hillary Clinton. He donated $10,000 to a group called Nimble America, which paid for a billboard stating that Clinton was “Too Big to Jail.”

Amid the ensuing controversy, Luckey lost his job at Facebook, which has adamantly denied that he was let go over his political views. According to the Wall Street Journal, however, Luckey retained an employment lawyer and negotiated a $100 million payout corresponding to the bonuses and stock options he would have received had he stayed on at Facebook.

In a detailed profile of Anduril, Wired magazine reported that Luckey was first inspired to develop a military technology company through an event hosted by billionaire investor Peter Thiel, another Silicon Valley conservative. At a 2016 retreat in Canada, Luckey met Trae Stephens, a former intelligence official who works at Thiel’s venture capital firm, Founders Fund.

The pair bonded over an interest in reshaping defense contracting using the incentives and structures of the tech startup scene. Stephens had previously worked at Palantir, the secretive data-crunching company backed by Thiel, which is known for its work on behalf of spy agencies and the military. (Both Palantir and Anduril are references to the classic fantasy trilogy “Lord of the Rings”; Anduril is the unbreakable sword used by one of the series’s protagonists.)

Thiel’s high-profile support for Trump during the 2016 election gave him influence with the new administration. Stephens, the Thiel deputy, was appointed to the group on Trump’s transition team that dealt with the new administration’s move into the Defense Department. By March 2017, Luckey had left Facebook and was ready to work with Stephens on launching a new company focused on weapons systems.

The Military-Tech Complex

Anduril needed talent, political connections, and an injection of capital. Enter the Founders Fund, Thiel’s venture capital firm. Several Palantir alumni quickly joined up. Stephens came on as Anduril’s chair and another Founders Fund partner provided seed funding. Other executives followed suit, including Brian Schimpf, the former director of engineering at Palantir, who now serves as chief executive at Anduril.

The timing of Anduril’s founding was fortuitous in many ways. Under the Obama administration, the government had begun massive efforts to swiftly incorporate commercial technology into its security efforts. In 2015, both the Department of Homeland Security and the Defense Department opened satellite offices in Silicon Valley as beachheads to coordinate partnerships with the private sector. The Defense Department opened a Defense Innovation Unit office in Mountain View, California, where Google is based. And the Department of Homeland Security opened its Silicon Valley Innovation Program in Menlo Park, California.

In 2017, as part of an initiative that had begun the previous year, the Defense Department also unveiled the Algorithmic Warfare Cross-Functional Team, known as Project Maven, to harness the latest artificial intelligence research into battlefield technology, starting with a project to improve image recognition for drones operating in the Middle East.

This wave of outreach from the government provided a unique entry point for Anduril, which partnered with the Department of Homeland Security’s satellite office to successfully pitch its test project for the virtual wall. As Luckey recently explained to Defense and Aerospace Report, a trade publication, he also worked closely with the Pentagon’s Defense Innovation Unit, crediting its former director, Raj Shah, with making his company possible.

The Defense Innovation Unit, said Luckey, proved “that people in Silicon Valley could actually get stuff into production, actually do work with the government.” He added, “I don’t think that I would have started this company if it wasn’t for the work of people like Raj Shah doing great work and proving that you actually could get into it.”

Lobbying and Political Donations

Building out major government contracts is an inherently political endeavor — something that appears not to be lost on Luckey. Publicly filed lobbying disclosures show that Anduril paid $290,000 last year to Invariant, a lobbying firm founded by Heather Podesta, a Democratic fundraiser known for her extensive relationships in Washington, D.C., including with Hillary Clinton. The lobbying effort focused on shaping the border security appropriations issued by Congress, as well as on educating lawmakers on “artificial intelligence and autonomous systems and their application to military force protection,” according to the filings.

Luckey also opened his wallet to the powers that be in Congress and the White House. He donated $100,000 to Trump’s inauguration through a company he founded and gave over $670,000 to congressional Republican campaign funds over the last two years.

Rep. Will Hurd, R-Texas, a former CIA officer turned moderate border-district lawmaker, received a $2,700 donation. Hurd helped Anduril find a landowner willing to volunteer property for testing its sensor technology along the border, according to Wired. Hurd later sponsored legislation to finance a virtual border wall, likely using Anduril’s technology.

Luckey’s political largesse extended to political action committees supporting Trump, to the senior lawmakers on the defense and appropriations committees, and to a number of controversial conservative lawmakers, including Rep. Steve King, R-Iowa, who has defended white supremacy and questioned the contributions of nonwhite people to society.

In previous interviews, Luckey has sharply criticized traditional defense contracting, noting that the iPhone and other commercial technology innovations were developed under massive market incentives rather than the “cost-plus” model preferred by the Pentagon. Under that approach, the Defense Department reimburses contractors for their expenses plus a negotiated profit, a model that, in Luckey’s telling, has limited the military’s ability to encourage the kind of breakthrough technologies needed for the future of war.

In a white paper filed with the Defense Department’s National AI Strategic Plan last year, Anduril urged officials to consider the Chinese government’s ambitious approach to AI technology. China, an Anduril employee wrote in the paper, has provided a “multibillion-dollar national investment initiative to support ‘moonshot’ projects, start-ups and academic research in A.I.”

Even as it seeks to shake up the model for contracts, though, Anduril is also embracing the traditional approach.

In November, the company announced its first major revolving-door hire. Anduril brought on Christian Brose, a former top staffer with the Senate Armed Services Committee, which oversees defense spending, as its head of strategy. Brose formerly worked under the late Sen. John McCain, R-Ariz., and served as a speechwriter to then-Secretary of State Condoleezza Rice. Two months later, Anduril formally joined the National Armaments Consortium, a nonprofit that facilitates bids by traditional defense contracting firms for business with the military.

Scott Sanders, head of operations for Anduril Industries, prepares a Lattice Modular Heli-Drone for a test flight at the Red Beach training area, Marine Corps Base Camp Pendleton, Calif., on Nov. 8, 2018.

Photo: Cpl. Dylan Chagnon/U.S. Marine Corps

No “Digital Geneva Convention”

As the military has worked to bring in leading Silicon Valley firms as contractors, the resulting relationships have sparked massive resistance from workers, many of whom have argued that they became engineers to make the world a better place, not a more violent one.

After The Intercept and other media outlets revealed that Google had been quietly tapped to work on Project Maven, applying its AI technology to help analysts identify drone targets on the battlefield, thousands of workers protested the contract.

The uprising led Google to announce that it would not renew its contract with the military on the initiative. Microsoft, too, faced internal opposition as the company prepared work on a $480 million contract with the Army to develop augmented reality headsets for soldiers.

The ethical debates that have rocked large technology companies — Amazon, Salesforce, and others have similarly faced worker protests over contracts on immigration enforcement — have presented Anduril with an opportunity.

Despite the various protests around Silicon Valley, Anduril’s brash attitude has not prevented it from recruiting top engineering talent. In its white paper filed with the Defense Department’s National AI Strategic Plan last year, Anduril boasted that it has recruited engineers from top tech firms like General Atomics, SpaceX, Tesla, and Google.

In an opinion column for the Washington Post, Luckey and Stephens sharply criticized Google for abandoning the U.S. government by rejecting Project Maven. “We understand that tech workers want to build things used to help, not harm,” the pair wrote. “We feel the same way. But ostracizing the U.S. military could have the opposite effect of what these protesters intend: If tech companies want to promote peace, they should stand with, not against, the United States’ defense community.”

What was left out of the column, however, was that, as the piece went to print, Anduril was beginning its own work on Project Maven.

In interviews and public appearances, Luckey slammed engineers for protesting government work, arguing that those claiming conscientious opposition to military work are among a “vocal minority” that empowers American adversaries abroad. Moreover, he said that the Defense Department has failed to connect with top tech talent because many engineers are “stuck in Silicon Valley at companies that don’t want to work on national security.”

In Anduril, Luckey is presenting a company that is unapologetic about its work capturing immigrants or killing people on the battlefield. The U.S., Luckey has argued in previous interviews, “has a really strong record of protecting human rights” and should be trusted to use AI without any ethical constraints.

“The biggest threats are not going to be Western democracies abusing these technologies,” he told the audience at the Web Summit in Lisbon. The real enemies, in Luckey’s telling, are China and Russia, both of which have invested in AI military technology.

China, Luckey argues, is not only investing in AI but also enjoys an unfair advantage in developing the technology: through mass surveillance, it can use its entire population as a training data set. In contrast, Luckey told Defense and Aerospace Report, the U.S. can train its AI software “in industry, in enterprise, in national security.” The U.S., Luckey went on, could test AI “using our current military advantage to train future AI developments and we need to start using our current military advantage.” He called for employing these technologies in ongoing “large-scale conflicts” around the world.

Asked in Lisbon about a digital Geneva Convention or another ethical rulebook to govern the use of AI weaponry, Luckey was forthright in his rejection of the idea.

“That’s not really going to solve the problem,” he said. “I have no hopes that a digital Geneva Convention, whatever it will be, will prevent China from using surveillance tools to watch every citizen in their country. I have very little confidence that it will prevent Russia from building autonomous systems that can acquire and fire on targets without any kind of human intervention whatsoever.”

Ethics experts have criticized the development of AI-based weapons, noting that lethal autonomous weapons could be hijacked by hackers, kill without clear explanation, or trigger catastrophic accidental conflict if they are used to escalate in response to an incident that merely appears to be an act of war. Moreover, as humans are removed from face-to-face combat, the dehumanization of lethal decisions could lead to more killing.

Luckey hasn’t proffered any direct answers to the questions being raised over the use of artificial intelligence in warfighting. Anduril, however, has stated that it will not sell to Russia or China, but would be willing to sell its products to U.S. allies. A request for comment about whether the company would sell to Saudi Arabia, the United Arab Emirates, or other undemocratic U.S. allies was not returned.

Among the many Palantir alumni who joined Anduril, one name sticks out to those concerned with abuse of civil liberties and human rights. In May of last year, the firm hired former Palantir executive Matthew Steckman.

Steckman took a lead role in the 2011 HBGary Federal scandal, in which a cache of hacked emails showed that Palantir and two other defense contractors had cooked up a plot to spy on journalists, trade unions, and activists on behalf of the U.S. Chamber of Commerce, the largest pro-business lobby in America. The plot included hacking target computers and using social media analysis to monitor a large set of left-leaning figures and journalists viewed as sympathetic to WikiLeaks, including The Intercept’s Glenn Greenwald. In negotiations with the Chamber’s law firm, Steckman wrote at the time that he and another Palantir executive were “spearheading this from the Palantir side.”

After the plan was revealed, Palantir briefly placed Steckman on leave. He now serves as head of corporate and government affairs at Anduril.

One thing is clear: Luckey wants to win — in every way imaginable. The U.S.’s goal, Luckey said at the Web Summit, should be dominance and beating other foreign adversaries to control the best artificial intelligence technology. “You have to be the leader,” he said. “Technological superiority is a prerequisite for ethical superiority.”

Nick Surgey contributed research.

The post Defense Tech Startup Founded by Trump’s Most Prominent Silicon Valley Supporters Wins Secretive Military AI Contract appeared first on The Intercept.

Elizabeth Warren’s Big Tech Beatdown Will Spark a Vital and Unprecedented Debate

Published by Anonymous (not verified) on Sat, 09/03/2019 - 6:29am in

Sen. Elizabeth Warren during a Senate Armed Services Committee hearing in Washington, D.C., on Feb. 29, 2019.

Photo: Carolyn Kaster/AP

It’s imperfect, it’s vague in parts, and it will face a conflagration of opposition from the tech lobbying freight train and congressional conservatives for whom antitrust efforts are anathema. But presidential candidate Elizabeth Warren’s new plan — well, for now it’s just a Medium post — to break up some of the world’s biggest tech firms provides the rarest sign that someone seeking power wants to use that power to weaken Silicon Valley.

Warren’s plan to “break up Big Tech,” as she described it, begins on a strong premise that is uncontroversial outside of Silicon Valley boardrooms: “Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

While it’s hard to choke up too much at the thought of “stifled innovation” at a time when we seem to be suffering from a glut of it, the rest rings true. Facebook, Google, and Amazon have become so large in terms of both revenue and their ability to collect and process data that they’ve come to resemble quasi-governmental entities — uncanny hybrids of private capital and public policy. In its current form, with the ability to control what information reaches over 2 billion people around the world, Facebook is too big to govern, from within or without. With an obvious monopoly on search and its own mammoth, opaque data-harvesting business, Google is also peerless and entirely out of the range of competition.

Warren’s solution is twofold. One component would essentially hit “undo” on various tech acquisitions that have helped Facebook, in particular, build an enormous moat between itself and potential competitors. Warren’s plan would see Instagram and WhatsApp spun off from the mothership, so that photo-sharing on Facebook proper would be forced to compete with photo-sharing on Instagram. This much seems cut and dried and obviously in the spirit of trust-busting, though the attack on Google is more muddled: Warren says she’d have Google divest DoubleClick, the online advertising company it acquired over a decade ago, even though it essentially no longer functions as an independent entity.

The other component of Warren’s plan would be “passing legislation that requires large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform” — meaning that “these companies would be prohibited from owning both the platform utility and any participants on that platform.” “Platform utilities would be required to meet a standard of fair, reasonable, and nondiscriminatory dealing with users,” Warren wrote. “A company found to violate these requirements would also have to pay a fine of 5 percent of annual revenue.” Finally, some teeth.

Warren says this move would require Google’s entire search business to be spun off from the rest of the company, but further industry implications are unclear. Would Apple have to divest iTunes and the App Store? Would Android phones still come bundled with apps that beam data back to Google? How is Warren going to define “a platform for connecting third parties”? There are countless other questions and quibbles. Using Medium posts to develop far-reaching policy proposals has its limits, I guess.

Warren’s plan also falls short in tackling these companies’ ability to unilaterally control the dissemination of information. Facebook can decide who, out of over 2 billion souls, sees, reads, and hears what. The argument that no company, tech or otherwise, should ever have that capacity remains unaddressed. But this is a start — a very hopeful start — and it’s been a very long time since we’ve seen anything that suggests that wresting power from Facebook et al. is an idea being taken seriously outside of advocacy groups, academia, and opinion columns.

Whether or not Warren wins the Democratic nomination or the presidency, we can expect to see powerful people hoping to become infinitely more powerful forced to discuss whether Facebook should be required to divest itself of Instagram. That’s more than we, the humble data-mined, have ever known.

The post Elizabeth Warren’s Big Tech Beatdown Will Spark a Vital and Unprecedented Debate appeared first on The Intercept.

Should We Trust Artificial Intelligence Regulation by Congress If Facebook Supports It?

Published by Anonymous (not verified) on Thu, 07/03/2019 - 11:00pm in

Photo illustration: Soohee Cho/The Intercept, Getty Images

Try to imagine for a moment a declaration from Congress to the effect that safeguarding the environment is important, that the effects of pollution on the environment ought to be monitored, and that special care should be taken to protect particularly vulnerable and marginalized communities from toxic waste. So far, so good! Now imagine this resolution is enthusiastically endorsed by ExxonMobil and the American Coal Council. You would have good reason to be suspicious. Keep that in mind while you consider the newly announced House Resolution 153.

Last week, several members of Congress began pushing the resolution with the aim of “supporting the development of guidelines for ethical development of artificial intelligence.” It was introduced by Reps. Brenda Lawrence and Ro Khanna — the latter of whom, crucially, represents Silicon Valley, which is to the ethical development of software what West Virginia is to the rollout of clean energy. This has helped make Khanna a national figure, in part because, far from being a tech industry cheerleader, he’s publicly supported cracking down on the data Wild West his home district helped create. For example, he has criticized the wrist-slaps Google and Facebook receive in the wakes of their regular privacy scandals and called for congressional action against Amazon’s labor practices.

The resolution, co-sponsored by seven other representatives, has some strange fans. Its starting premises are unimpeachable: “Whereas the far-reaching societal impacts of AI necessitates its safe, responsible, and democratic development,” the resolution “supports the development of guidelines for the ethical development of artificial intelligence (AI), in consultation with diverse stakeholders.” It also supports adherence to a list of crucial values in the development of any kind of machine or algorithmic intelligence, including “[i]nformation privacy and the protection of one’s personal data”; “[a]ccountability and oversight for all automated decision making”; and “[s]afety, security, and control of AI systems now and in the future.”

These are laudable goals, if a little inexact: Key terms like “control” and “oversight” are left entirely undefined. Are we talking about self-regulation here — which algorithmic software companies want because of its ineffectiveness — or real, governmental regulation? When the resolution mentions accountability, are Khanna and company envisioning harsh penalties for AI mishaps, or is this a call for more public relations mea culpas after the fact?

Details in the press release that accompanied the resolution might explain the wiggle room — or make one question the whole spiel. H.R. 153 “has been endorsed by the Future of Life Institute, BSA | The Software Alliance, IBM, and Facebook,” the release says.

The Future of Life Institute is a loose organization of concerned academics, as well as Elon Musk and, inexplicably, actors Alan Alda and Morgan Freeman. Those guys aren’t the problem, though. The real cause for concern is not that a resolution expresses a desire to rein in artificial intelligence, but that it does so with endorsements from Facebook and IBM — two fantastic examples of why such reining in is crucial. It’s hard to square the track records of either company with many of the values listed in the resolution.

Facebook — the world’s largest advertising network that happens to include social sharing features — is already leveraging artificial intelligence in earnest, and not just to track and purge extremist content, as touted by CEO Mark Zuckerberg. According to a confidential Facebook document obtained and reported on last year by The Intercept, the company is courting corporate partners with a new machine learning ability that makes explicit the goal of all marketing: to predict the future choices of consumers and invisibly change their decisions without any forewarning. Using a technology called FBLearner Flow, the company boasts of its ability to “predict future behavior”; this allows it to offer corporations the ability to target advertisements at users who are “at risk” of making choices that are considered unfavorable to such and such brand, ideally changing users’ decisions before they even know they are going to make them. The company is also facing a class-action lawsuit over its controversial facial tagging feature, which uses machine intelligence to automatically identify and pair a Facebook user’s likeness with the company’s existing trove of personal information. The feature was rolled out without notice or anything resembling informed consent.
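
What the document describes is, at its core, ordinary supervised prediction: a model trained on past activity scores each user’s likelihood of a future choice, so that high scorers can be singled out for intervention. The sketch below illustrates that general technique with synthetic data and scikit-learn; it is not FBLearner Flow, whose internals are not public, and every feature name here is invented.

```python
# A minimal sketch of behavioral prediction on synthetic data.
# Illustrates the general technique only; Facebook's FBLearner Flow
# internals are not public, and these features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-user activity features: visits, dwell time, clicks.
X = rng.normal(size=(1000, 3))
# Synthetic ground truth: did the user make the "unfavorable" choice?
y = (X @ np.array([1.5, -2.0, 0.7]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score every user; those above a threshold are "at risk" and become
# candidates for intervention (in the ad case, targeted advertising).
risk = model.predict_proba(X)[:, 1]
at_risk_users = np.flatnonzero(risk > 0.8)
print(f"{len(at_risk_users)} users flagged as at risk")
```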

IBM’s machine intelligence adventures so far have been arguably more disquieting. Watson, the firm’s flagship AI product best known for its “Jeopardy!” victories, was found last year to have “often spit out erroneous cancer treatment advice,” according to a report in Stat. Last year, The Intercept revealed that the New York Police Department was sharing troves of surveillance camera footage with IBM to develop software that would allow other police departments to search for people by hair color, facial hair, and skin tone. Another 2018 Intercept report revealed that IBM was one of several tech firms lining up for a crack at aiding the Trump administration’s algorithmic “extreme vetting” program for immigrants — perhaps unsurprising, given that IBM CEO Ginni Rometty personally offered the company’s services to Trump following his election and later sat on a private-sector advisory board supporting the White House.

Although it’s true that full artificial intelligence has yet to be developed, and perhaps never will be, its precursors — lesser machine-learning or self-training algorithms — are already powerful instruments and growing more so every day. It’s hard to imagine two firms that should be kept further from the oversight of such wide-reaching technology. For Facebook, a company that keeps the functionality of its intelligent software secret with a fervor rarely seen outside of the Pentagon, to endorse a resolution that calls for “[a]ccountability and oversight for all automated decision making” is absurd. That Facebook co-signed a resolution that hailed “[i]nformation privacy and the protection of one’s personal data” is something worse than absurd. So, too, is the fact that IBM, which sought the opportunity to build software to support the Trump administration’s immigration policies, would endorse a resolution to “empower … underrepresented or marginalized populations” through technology.

In a phone interview with The Intercept, Khanna defended the endorsements as being little more than the proverbial thumbs-up, and insisted that Facebook and IBM should have a seat at the table if and when Congress tackles meaningful federal regulation of AI. Such legislation, he thinks, must be “crafted by experts,” if not outright drafted by them. “I think the leaders of Silicon Valley are very concerned about an ethical framework for artificial intelligence,” Khanna said, “whether it’s Facebook or Sheryl Sandberg. That doesn’t mean they’ve been perfect actors.” 

Khanna was careful to reject the notion of “self-regulation,” which tech firms have favored for its total meaninglessness. “The past few years have showed self-regulation doesn’t work,” said Khanna. Although he rejected the idea that tech firms could help directly shape future AI regulation, Khanna added, “It would be foolish to not involve some of the leading thinkers who happen to be at these companies.”

Asked if he imagined future AI “oversight,” as mentioned in the resolution, including independent audits of corporate black-box algorithms, Khanna replied that it “depends for what” — as long as it doesn’t mean that Facebook has to run every one of its algorithms before a regulatory agency, which would “stifle innovation.” Khanna, however, suggested that there are scenarios where government involvement would be necessary, if “it were periodic checks on algorithms.” He said, “If, for example, the FTC” — the Federal Trade Commission — “received a complaint that an algorithm was systematically showing bias and there was some standard of probable cause, that should trigger an audit.”
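
As a concrete illustration of what such a “periodic check” might compute, here is a minimal sketch of one common fairness metric: the demographic parity gap between two groups’ positive-outcome rates. The data, groups, and threshold are all hypothetical, and no actual FTC audit standard of this kind currently exists.

```python
# A minimal sketch of a demographic parity check on model decisions.
# All data and the 0.2 threshold are hypothetical illustrations; no
# actual FTC audit standard is being described here.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favorable outcome
groups    = np.array(list("aaabbbabba"))               # protected attribute

rate_a = decisions[groups == "a"].mean()
rate_b = decisions[groups == "b"].mean()
parity_gap = abs(rate_a - rate_b)

# A persistent gap above some agreed threshold could serve as the
# "probable cause" that triggers the fuller audit Khanna describes.
if parity_gap > 0.2:
    print(f"parity gap {parity_gap:.2f} exceeds threshold: flag for audit")
else:
    print(f"parity gap {parity_gap:.2f} within threshold")
```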

Yet hashing out these and countless other specifics on the how, when, and who of algorithmic oversight will be a long slog, with or without Facebook’s endorsement.

The post Should We Trust Artificial Intelligence Regulation by Congress If Facebook Supports It? appeared first on The Intercept.

UBI Taiwan to discuss ‘key trends’ at international summit

Published by Anonymous (not verified) on Thu, 07/03/2019 - 9:01pm in

The third annual UBI Taiwan international summit will be held in Taipei this month. This year’s theme is “Key Trends of the Next Generation,” focusing on technological development as well as growing income inequality.

Mark Zuckerberg Is Trying to Play You — Again

Published by Anonymous (not verified) on Thu, 07/03/2019 - 8:45am in

Tags 

Technology

Mark Zuckerberg watches a demonstration during the Oculus Connect 5 product launch event in San Jose, Calif., on Sept. 26, 2018.

Photo: David Paul Morris/Bloomberg via Getty Images

If you click enough times through the website of Saudi Aramco, the largest oil producer in the world, you’ll reach a quiet section called “Addressing the climate challenge.” In this part of the website, the fossil fuel monolith claims, “Our contributions to the climate challenge are tangible expressions of our ethos, supported by company policies, of conducting our business in a way that addresses the climate challenge.” This is meaningless, of course — as is the announcement Mark Zuckerberg made today about his newfound “privacy-focused vision for social networking.” Don’t be fooled by either.

Like Saudi Aramco, Facebook inhabits a world in which it is constantly screamed at, with good reason, for being a contributor to the world’s worsening state. Writing a vague blog post, however, is far easier than completely restructuring the way your enormous corporation does business and reckoning with the damage it’s caused.

And so here we are: “As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Zuckerberg writes in his road-to-Damascus revelation about personal privacy. The roughly 3,000-word manifesto reads as though Facebook is fundamentally realigning itself as a privacy champion — a company that will no longer track what you read, buy, see, watch, and hear in order to sell companies the opportunity to intervene in your future acts. But, it turns out, the new “privacy-focused” Facebook involves only one change: the enabling of end-to-end encryption across the company’s instant messaging services. Such a shift would prevent anyone outside of a chat’s participants, even Facebook itself, from reading your messages.

That’s it.
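
For readers wondering what end-to-end encryption actually buys, the sketch below uses the PyNaCl library to show the core property: only the two keyholders in a conversation can decrypt a message, so a platform relaying the ciphertext learns nothing about its contents. This is a generic illustration, not Facebook’s actual protocol; WhatsApp uses the considerably more elaborate Signal protocol.

```python
# A generic sketch of end-to-end encryption using PyNaCl (libsodium).
# Not Facebook's actual protocol; it shows only the core property that
# the endpoints hold the keys, so the relay sees only ciphertext.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts to Bob using her private key and his public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# The platform relays `ciphertext` but cannot decrypt it: it never
# holds either private key.

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```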

Although the move is laudable — and will be a boon for dissident Facebook chatters in countries where government surveillance is a real, perpetual risk — promising to someday soon forfeit your ability to eavesdrop on over 2 billion people doesn’t exactly make you eligible for sainthood in 2019. It doesn’t help that Zuckerberg’s post is completely devoid of details beyond a plan to implement these encryption changes “over the next few years” — which is particularly silly considering that Facebook has yet to implement privacy features promised in the wake of its previous mega-scandals.

“I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform,” reads Zuckerberg’s awakening. Count me into “many people,” just like I’m a skeptic of Saudi Aramco’s attempt to pre-empt criticism: “For some, the idea of an oil and gas company positively contributing to the climate challenge is a contradiction. We don’t think so.”

The skepticism of Facebook is warranted. To pick just one of many examples, the company, as The Intercept recently reported, is involved behind the scenes in fighting attempts to pass more stringent privacy laws in California.

What’s more, this is a dramatic 3,000-word opus, but only about one new privacy feature, to be released at some unknown future point. On the other hand, Facebook has a long history to consider: It’s a company whose business model relies entirely on worldwide data mining. Facebook may someday offer end-to-end chats between WhatsApp and Messenger users — which would be great! — but there’s no sign the company would ever expand such encryption beyond instant messages, because it would destroy the company. For everything Facebook protects with end-to-end encryption, that’s one less thing Facebook can comb for behavioral data, consumer preferences, and so forth.

Your chats may be secure, but that will do virtually nothing to change how Facebook follows and monitors your life, on and offline. Facebook could, say, encrypt the contents of your profile or your photo albums so that no one but your friends could decrypt that information — but then how would they sell ads against it?

The unblogged truth, which Zuckerberg knows as well as anyone else, is that a “privacy-focused vision for social networking” looks nothing like Facebook; more to the point, it would resemble Facebook’s negative image. The company will wave its arms around this “announcement” and point to it whenever its next privacy screw-up occurs — likely sometime later today.

Don’t mistake this attempt at pantomiming contrition and techno-progress as anything more than theater. And don’t mistake a long blog post about privacy for anything more than many, many words from a man who knows he’s in trouble.

The post Mark Zuckerberg Is Trying to Play You — Again appeared first on The Intercept.

How not to Ruin Everything: Futures Thinking Launch

Published by Anonymous (not verified) on Tue, 05/03/2019 - 11:56pm in

Launch event for Futures Thinking, a new research group looking into future problems and opportunities created by advances in technology and artificial intelligence. In literature, in popular media, in scientific research, and in public consciousness, discourse about the future, machine learning, and the human elements of digital technologies proliferates more now than ever before. Thanks to developments in artificial intelligence (AI), we are able to speculate about how our fundamentally social species might interact with performatively human-like machines of our own making. Television shows like Black Mirror and The Handmaid’s Tale, and novels like The Circle or Never Let Me Go speculate about dystopian futures that reflect political realities not unlike those that are currently unfolding in the Global North.

The ethics of AI are much debated in science fiction. However, scholars in the fields of AI and those in literature, history, and gender studies seldom interact to discuss the realities and probabilities of the future of a technologically advanced mankind. Crucially important to our network is the recognition of how narrative informs and shapes the future. Bringing scholars of historical and literary narratives into conversation with ethicists and developers of digital AI technologies is of paramount importance to futures thinking.

Discussion on AI and global governance is thriving at Oxford, while speculative fiction is an important emerging field in literary studies. This network brings these fields into conversation. Our interests extend from speculative fiction research and questions about the robustness of machine learning, through the future trade-offs between privacy and security, to thinking about how we might use historical feminist consciousness-raising methods to engage in interdisciplinary collaboration.

We are keen for interested parties to join our group so if you work on or are interested in any aspect of futures thinking, be it in science or the humanities, in any of the University’s divisions, please contact us and come along to our events!

We are a network founded on principles of access and inclusion, and strive to host events that consider the lifestyle ethics and carer-responsibilities of our members and attendees, as well as their access needs, pronouns, and other inclusion needs. Please do contact us for further information on our manifesto.

Chelsea Haith, Futures Thinking Founder, DPhil in Contemporary Literature

Prof Robert Iliffe, Professor of History of Science

Dr Gretta Corporaal, Sociologist of Work and Organisations in the OII

Dr Alexandra Paddock, Editorial Lead on LitHits, Postdoctoral Fellow in the Faculty of English

Prof Kirsten Shepherd-Barr, LitHits Founder, Professor of English and Theatre Studies

Alice Billington, Futures Thinking Co-Convenor, DPhil in Modern History

#1463; On the Information Frontage Road

Published by Anonymous (not verified) on Tue, 05/03/2019 - 4:00pm in

Tags 

comic, Technology

Now I know how Grandpa felt about fire!


Google Employees Uncover Ongoing Work on Censored China Search

Published by Anonymous (not verified) on Tue, 05/03/2019 - 5:28am in

Tags 

Technology

Google employees have carried out their own investigation into the company’s plan to launch a censored search engine for China and say they are concerned that development of the project remains ongoing, The Intercept can reveal.

Late last year, bosses moved engineers away from working on the controversial project, known as Dragonfly, and said that there were no current plans to launch it. However, a group of employees at the company was dissatisfied with the lack of information from leadership on the issue — and took matters into their own hands.

The group has identified ongoing work on a batch of code that is associated with the China search engine, according to three Google sources. The development has stoked anger inside Google offices, where many of the company’s 88,000 employees previously protested against plans to launch the search engine, which was designed to censor broad categories of information associated with human rights, democracy, religion, and peaceful protest.

In December, The Intercept reported that an internal dispute and political pressure on Google had stopped development of Dragonfly. Google bosses had originally planned to launch it between January and April of this year. But they changed course after the outcry over the plan and indicated to employees who were working on the project that it was being shelved.

Google’s Caesar Sengupta, an executive with a leadership role on Dragonfly, told engineers and others who were working on the censored search engine in mid-December that they would be allocated new projects funded by different “cost centers” of the company’s budget. In a message marked “confidential – do not forward,” which has been newly obtained by The Intercept, Sengupta told the Dragonfly workers:

Over the past few quarters, we have tackled different aspects of what search would look like in China. While we’ve made progress in our understanding of the market and user needs, many unknowns remain and currently we have no plans to launch.

Back in July we said at our all hands that we did not feel we could make much progress right now. Since then, many people have effectively rolled off the project while others have been working on adjacent areas such as improving our Chinese language capabilities that also benefit users globally. Thank you for all of your hard work here.

As we finalize business planning for 2019, our priority is for you to be productive and have clear objectives, so we have started to align cost centers to better reflect what people are actually working on.

Thanks again — and your leads will follow up with you on next steps.

Sources with knowledge of Dragonfly said staff who were working on the project were not told to immediately cease their efforts. Rather, they were instructed to finish up the jobs they were doing and then they would be allocated new work on other teams. Some of those who were working on Dragonfly were moved into different areas, focusing on projects related to Google’s search services in India, Indonesia, Russia, the Middle East, and Brazil.

But Google executives, including CEO Sundar Pichai, refused both publicly and privately to completely rule out launching the censored search engine in the future. This led a group of concerned employees — who were themselves not directly involved with Dragonfly — to closely monitor the company’s internal systems for information about the project and circulate their findings on an internal messaging list.

The employees have been keeping tabs on repositories of code that are stored on Google’s computers and that they say are linked to Dragonfly. The code was created for two smartphone search apps — named Maotai and Longfei — that Google planned to roll out in China for users of Android and iOS mobile devices.

The employees identified about 500 changes to the code in December, and more than 400 changes between January and February of this year, which they believe indicates continued development of aspects of Dragonfly. (Since August 2017, the number of code changes has varied between about 150 and 500 each month, one source said.) The employees say some 100 workers are still allocated to the “cost center” associated with Dragonfly, meaning that the company is maintaining a budget for potential ongoing work on the plan.
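
For a sense of how engineers might track this kind of activity, here is a minimal sketch that tallies monthly commits touching given directories in a Git repository. Everything in it, the repository path and directory names included, is hypothetical, and Google’s internal version control system differs from plain Git; this only illustrates the general approach.

```python
# A hypothetical sketch of counting monthly changes to project code.
# Repo path and directories are invented; Google's internal systems
# are not plain Git, so this only illustrates the general approach.
import subprocess

REPO = "/path/to/checkout"               # hypothetical local checkout
PATHS = ["apps/maotai", "apps/longfei"]  # hypothetical project dirs

def count_changes(since: str, until: str) -> int:
    """Count commits touching PATHS between two ISO dates."""
    result = subprocess.run(
        ["git", "-C", REPO, "rev-list", "--count",
         f"--since={since}", f"--until={until}", "HEAD", "--", *PATHS],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

# e.g., the figure reported for December was roughly 500 changes.
print(count_changes("2018-12-01", "2019-01-01"))
```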

Google sources with knowledge of Dragonfly said that the code changes could possibly be attributed to employees who have continued this year to wrap up aspects of the work they were doing to develop the Chinese search platform.

“I still believe the project is dead, but we’re still waiting for a declaration from Google that censorship is unacceptable and that they will not collaborate with governments in the oppression of their people,” said one source familiar with Dragonfly.

The lack of clarity from management has resulted in Google losing skilled engineers and developers. In recent months, several Google employees have resigned in part due to Dragonfly and leadership’s handling of the project. The Intercept knows of six staff at the company, including two in senior positions, who have quit since December, and three others who are planning to follow them out the door.

Colin McMillen, who worked as a software engineer at Google for nine years, quit the company in early February. He told The Intercept that he had been concerned about Dragonfly and other “ethically dubious” decisions, such as Google’s multimillion-dollar severance packages for executives accused of sexual harassment.

Prior to leaving the company, McMillen said he and his colleagues had “strong indications that something is still happening” with Google search in China. But they were left confused about the status of the China plan because upper management would not discuss it.

“I just don’t know where the leadership is coming from anymore,” he said. “They have really closed down communication and become significantly less transparent.”

In 2006, Google launched a censored search engine in China, but stopped operating the service in the country in 2010, taking a clear anti-censorship position. At the time, Google co-founder Sergey Brin declared that he wanted to show that the company was “opposing censorship and speaking out for the freedom of political dissent.”

Pichai, Google’s CEO since 2015, has taken a different position. He has a strong desire to launch search again in China — viewing the censorship as a worthwhile trade-off to gain access to the country’s more than 800 million internet users — and he may now be waiting for the controversy around Dragonfly to die down before quietly resurrecting the plan.

“Right now it feels unlaunchable, but I don’t think they are canceling outright,” McMillen said. “I think they are putting it on the back burner and are going to try it again in a year or two with a different code name or approach.”

Anna Bacciarelli, a technology researcher at Amnesty International, called on Google “to publicly confirm that it has dropped Dragonfly for good, not just ‘for now.’” Bacciarelli told The Intercept that Amnesty’s Secretary General Kumi Naidoo had visited Google’s Mountain View headquarters in California last week to reiterate concerns over Dragonfly and “the apparent disregard for transparency and accountability around the project.”

If Google is still developing the censored search engine, Bacciarelli said, “it’s not only failing on its human rights responsibilities but ignoring the hundreds of Google employees, more than 70 human rights organizations, and hundreds of thousands of campaign supporters around the world who have all called on the company to respect human rights and drop Dragonfly.”

Google did not respond to a request for comment.

The post Google Employees Uncover Ongoing Work on Censored China Search appeared first on The Intercept.
