Information Technology

Democracy as an information system

Published by Anonymous on Wed, 28/11/2018 - 12:45am

Democracy is an information system.

That’s the starting place of our new paper: “Common-Knowledge Attacks on Democracy.” In it, we look at democracy through the lens of information security, trying to understand the current waves of Internet disinformation attacks. Specifically, we wanted to explain why the same disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.

The answer revolves around the different ways autocracies and democracies work as information systems. We start by differentiating between two types of knowledge that societies use in their political systems. The first is common political knowledge, which is the body of information that people in a society broadly agree on. People agree on who the rulers are and what their claim to legitimacy is. People agree broadly on how their government works, even if they don't like it. In a democracy, people agree about how elections work: how districts are created and defined, how candidates are chosen, and that their votes count, even if only roughly and imperfectly.

We contrast this with a very different form of knowledge that we call contested political knowledge, which is, broadly, things that people in society disagree about. Examples are easy to bring to mind: how much of a role the government should play in the economy, what the tax rules should be, what sorts of regulations are beneficial and what sorts are harmful, and so on.

This seems basic, but it gets interesting when we contrast both of these forms of knowledge across autocracies and democracies. These two forms of government have incompatible needs for common and contested political knowledge.

For example, democracies draw upon the disagreements within their population to solve problems. Different political groups have different ideas of how to govern, and those groups vie for political influence by persuading voters. There is also long-term uncertainty about who will be in charge and able to set policy goals. Ideally, this is the mechanism through which a polity can harness the diversity of perspectives of its members to better solve complex policy problems. When no one knows who is going to be in charge after the next election, different parties and candidates will vie to persuade voters of the benefits of different policy proposals.

But in order for this to work, there needs to be common knowledge both of how government functions and how political leaders are chosen. There also needs to be common knowledge of who the political actors are, what they and their parties stand for, and how they clash with each other. Furthermore, this knowledge is decentralized across a wide variety of actors, an essential element, since ordinary citizens play a significant role in political decision making.

Contrast this with an autocracy. There, common political knowledge about who is in charge over the long term and what their policy goals are is a basic condition of stability. Autocracies do not require common political knowledge about the efficacy and fairness of elections, and strive to maintain a monopoly on other forms of common political knowledge. They actively suppress common political knowledge about potential groupings within their society, their levels of popular support, and how they might form coalitions with each other. On the other hand, they benefit from contested political knowledge about nongovernmental groups and actors in society. If no one really knows which other political parties might form, what they might stand for, and what support they might get, that itself is a significant barrier to those parties ever forming.

This difference has important consequences for security. Authoritarian regimes are vulnerable to information attacks that challenge their monopoly on common political knowledge. They are vulnerable to outside information that demonstrates that the government is manipulating common political knowledge to its own benefit. And they are vulnerable to attacks that turn contested political knowledge (uncertainty about potential adversaries of the ruling regime, their levels of popular support, and their ability to form coalitions) into common political knowledge. As such, they are vulnerable to tools that allow people to communicate and organize more easily, as well as tools that provide citizens with outside information and perspectives.

For example, before the first stirrings of the Arab Spring, the Tunisian government had extensive control over common knowledge. It required everyone to publicly support the regime, making it hard for citizens to know how many other people hated it, and it prevented potential anti-regime coalitions from organizing. However, it didn’t pay attention in time to Facebook, which allowed citizens to talk more easily about how much they detested their rulers, and, when an initial incident sparked a protest, to rapidly organize mass demonstrations against the regime. The Arab Spring faltered in many countries, but it is no surprise that countries like Russia see the Internet openness agenda as a knife at their throats.

Democracies, in contrast, are vulnerable to information attacks that turn common political knowledge into contested political knowledge. If people disagree on the results of an election, or whether a census process is accurate, then democracy suffers. Similarly, if people lose any sense of what the other perspectives in society are, who is real and who is not real, then the debate and argument that democracy thrives on will be degraded. This appears to be the aim of Russia's information campaigns against the US: to weaken our collective trust in the institutions and systems that hold our country together. This is also the situation that writers like Adrian Chen and Peter Pomerantsev describe in today's Russia, where no one knows which parties or voices are genuine and which are puppets of the regime, creating general paranoia and despair.

This difference explains how the same policy measure can increase the stability of one form of regime and decrease the stability of the other. We have already seen that open information flows have benefited democracies while at the same time threatening autocracies. In our language, they transform regime-supporting contested political knowledge into regime-undermining common political knowledge. And much more recently, we have seen other uses of the same information flows undermining democracies by turning regime-supporting common political knowledge into regime-undermining contested political knowledge.

In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society.

This framework helps us understand not only how different political systems are vulnerable and how they can be attacked, but also how to bolster security in democracies. First, we need to better defend the common political knowledge that democracies need to function. That is, we need to bolster public confidence in the institutions and systems that maintain a democracy. Second, we need to make it harder for outside political groups to cooperate with inside political groups and organize disinformation attacks, through measures like transparency in political funding and spending. And finally, we need to treat attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners.

There’s a lot more in the paper.

[This short piece was co-authored with Bruce Schneier, and originally appeared at Lawfare.]

Never Mind the Privacy: The Great Web 2.0 Swindle

Published by Matthew Davidson on Wed, 01/03/2017 - 1:43pm

The sermon today comes from this six-minute video from comedian Adam Conover: The Terrifying Cost of “Free” Websites

I don't go along with the implication here that the only conceivable reason to run a website is to directly make money by doing so, and that therefore it is our expectation of zero-cost web services that is the fundamental problem. But from a technical point of view the sketch's analogy holds up pretty well. Data-mining commercially useful information about users is the business model of Software as a Service (SaaS) — or Service as a Software Substitute (SaaSS), as it's alternatively known.

You as the user of these services — for example social networking services such as Facebook or Twitter, content delivery services such as YouTube or Flickr, and so on — provide the "content", and the service provider provides data storage and processing functionality. There are two problems with this arrangement:

  1. You are effectively doing your computing using a computer and software you don't control, and whose workings are completely opaque to you.
  2. As is anybody who wants to access anything you make available using those services.

Even people who don't have user accounts with these services can be tracked, because they can be identified via browser fingerprinting, and you can be tracked as you browse beyond the tracking organisation's website. Third-party JavaScript "widgets" embedded in many, if not most, websites silently deliver executable code to users' browsers, allowing them to be tracked as they go from site to site. Common examples of such widgets include syndicated advertising, "Like" buttons, social login services (e.g. Facebook Login), and comment hosting services.

Less transparent are third-party services marketed to the site owner, such as web analytics. These provide data on a site's users in the form of the graphs and charts so beloved by middle management, with the service provider of course hanging on to a copy of all the data for their own purposes. My university invites no fewer than three organisations to surveil its students in this way (New Relic, Crazy Egg, and of course Google Analytics).

Thanks to Edward Snowden, we know that government intelligence agencies are secondary beneficiaries of this data collection in the case of companies such as Google, Facebook, Apple, and Microsoft. For companies not named in these leaks, all we can say is that we do not — because as users we cannot — know whether they are passing on information about us as well. To understand how things might be different, one must look at the original vision for the Internet and the World Wide Web.
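Before turning to that history, it's worth making the fingerprinting point concrete. Below is a minimal sketch of the kind of signals a third-party script can quietly collect and combine into a cross-site identifier. The signal set, the toy hash, and the tracker.example endpoint are illustrative assumptions on my part, not any particular tracker's actual code:

```typescript
// A minimal browser-fingerprinting sketch. The signal set and the naive
// 32-bit hash are illustrative assumptions, not any real tracker's code.

interface Fingerprint {
  userAgent: string;
  language: string;
  timezoneOffset: number; // minutes from UTC
  screen: string;
  colorDepth: number;
  plugins: string;
}

function collectFingerprint(): Fingerprint {
  return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    timezoneOffset: new Date().getTimezoneOffset(),
    screen: `${window.screen.width}x${window.screen.height}`,
    colorDepth: window.screen.colorDepth,
    // Modern browsers have largely neutered this list, but it was long a rich signal.
    plugins: Array.from(navigator.plugins).map(p => p.name).join(","),
  };
}

// Collapse the signals into one identifier. Real trackers add far more
// entropy (canvas, audio, fonts) and stabler hashing; a toy hash makes the point.
function hash(fp: Fingerprint): string {
  const s = JSON.stringify(fp);
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;
  }
  return (h >>> 0).toString(16);
}

// The same identifier emerges on every site that embeds this widget, so the
// third party can join a user's visits together without setting any cookie.
const visitorId = hash(collectFingerprint());
new Image().src =
  `https://tracker.example/collect?id=${visitorId}&page=${encodeURIComponent(location.href)}`;
```

No user account, no login, and no cookie is involved; the browser itself is distinctive enough to serve as the identifier.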

The Web was a victim of its own early success. The Internet was designed to be "peer-to-peer", with every connected computer considered equal, and the network which connected them completely oblivious to the nature of the data it was handling. You requested data from somebody else on the network, and your computer then manipulated and transformed that data in useful ways. It was a "World of Ends"; the network was dumb, and the machines at each end of a data transfer were smart. Unfortunately the Web took off when easy-to-use Web browsers were available, but before easy-to-use Web servers were available. Moreover, Web browsers were initially intended to be tools to both read and write Web documents, but the second goal soon fell away. You could easily consume data from elsewhere, but not easily produce and make it available yourself.

The Web soon succumbed to the client-server model, familiar from corporate computer networks — the bread and butter of tech firms like IBM and Microsoft. Servers occupy a privileged position in this model. The value is assumed to be at the centre of the network, while at the ends are mere consumers. This translates into social and economic privilege for the operators of servers, and a role for users shaped by the requirements of service providers. This was, breathless media commentary aside, the substance of the "Web 2.0" transformation.

Consider how the ideal Facebook user engages with their Facebook friends. They share an amusing video clip. They upload photos of themselves and others, while in the process providing the machine learning algorithm of Facebook's facial recognition surveillance system with useful feedback. They talk about where they've been and what they've bought. They like and they LOL. What do you do with a news story that provokes outrage, say the construction of a new concentration camp for refugees from the endless war on terror? Do you click the like button? The system is optimised, on the users' side, for face-work, and de-optimised for intellectual or political substance. On the provider's side it is optimised for exposing social relationships and consumer preferences; anything else is noise to be minimised.

In 2014 there was a minor scandal when it was revealed that Facebook allowed a team of researchers to tamper with Facebook's news feed algorithm in order to measure the effects of different kinds of news stories on users' subsequent posts. The scandal missed the big story: Facebook has a news feed algorithm.  Friending somebody on Facebook doesn't mean you will see everything they post in your news feed, only those posts that Facebook's algorithm selects for you, along with posts that you never asked to see. Facebook, in its regular day-to-day operation, is one vast, ongoing, uncontrolled experiment in behaviour modification. Did Facebook swing the 2016 US election for Trump? Possibly, but that wasn't their intention. The fracturing of Facebook's user base into insular cantons of groupthink, increasingly divorced from reality, is a predictable side-effect of a system which regulates user interactions based on tribal affiliations and shared consumer tastes, while marginalising information which might threaten users' ontological security.
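To make "has an algorithm" concrete: a feed ranker doesn't need sinister inputs to produce these effects. Here is a toy sketch of engagement-driven ranking; the fields and weights are illustrative assumptions, not Facebook's actual system:

```typescript
// A toy engagement-ranked news feed. Fields and weights are illustrative
// assumptions, not Facebook's actual algorithm.

interface Post {
  author: string;
  text: string;
  likes: number;
  comments: number;
  ageHours: number;
}

// "Affinity": how often this viewer has engaged with each author before.
type AffinityMap = Map<string, number>;

function score(post: Post, affinity: AffinityMap): number {
  const tie = affinity.get(post.author) ?? 0;         // prior engagement with author
  const engagement = post.likes + 2 * post.comments;  // predicted interaction
  const decay = 1 / (1 + post.ageHours);              // favour recency
  return tie * engagement * decay;
}

// Nothing in the score measures accuracy, importance, or whether a post
// challenges the viewer. Optimising for predicted engagement alone is
// enough to favour tribal affirmation and filter out disconfirmation.
function rankFeed(posts: Post[], affinity: AffinityMap, limit = 20): Post[] {
  return [...posts]
    .sort((a, b) => score(b, affinity) - score(a, affinity))
    .slice(0, limit);
}
```

Multiply that by a couple of billion users, with the weights continuously tuned against measured behaviour, and "uncontrolled experiment in behaviour modification" is a fair description.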

Resistance to centralised, unaccountable, proprietary, user-subjugating systems can be mounted on two fronts: minimising current harms, and migrating back to an environment where the intelligence of the network is at the ends, under the user's control. You can opt out of pervasive surveillance with browser add-ons like the Electronic Frontier Foundation's Privacy Badger. You can run your own instances of software which provide federated, decentralised services equivalent to the problematic ones, such as:

  • GNU Social is a social networking service similar to Twitter (but with more features). I run my own instance and use it every day to keep in touch with people who also run their own, or have accounts on an instance run by people they trust.
  • Diaspora is another distributed social networking platform, more similar to Facebook than to Twitter.
  • OpenID is a standard for distributed authentication, replacing social login services from Facebook, Google, et al.
  • Piwik is a replacement for systems like Google Analytics. You can use it to gather statistics on the use of your own website(s), but it grants nobody the privacy-infringing capability to follow users as they browse around a large number of sites.

The fatal flaw in such software is that few people have the technical ability to set up a web server and install it. That problem is the motivation behind the FreedomBox project. Here's a two-and-a-half-minute news story on the launch of the project: Eben Moglen discusses the freedom box on CBS news

I also recommend this half-hour interview, pre-dating the Snowden leaks by a year, which covers much of the above with more conviction and panache than I can manage: Eben Moglen on Facebook, Google and Government Surveillance

Arguably the stakes are currently as high in many countries in the West as they were in the Arab Spring. Snowden has shown that for governments of the Five Eyes intelligence alliance there's no longer a requirement for painstaking spying and infiltration of activist groups in order to identify their key political opponents; it's just a database query. One can without too much difficulty imagine a Western despot taking to Twitter to blurt something like the following:

"Protesters love me. Some, unfortunately, are causing problems. Huge problems. Bad. :("

"Some leaders have used tough measures in the past. To keep our country safe, I'm willing to do much worse."

"We have some beautiful people looking into it. We're looking into a lot of things."

"Our country will be so safe, you won't believe it. ;)"

The Politics of Technology

Published by Matthew Davidson on Fri, 24/02/2017 - 4:03pm

"Technology is anything that doesn't quite work yet." - Danny Hillis, in a frustratingly difficult to source quote. I first heard it from Douglas Adams.

Here is, at minimum, who and what you need to know:

Organisations

Sites

  • Boing Boing — A blog/zine that posts a lot about technology and society, as well as, distressingly, advertorials aimed at Bay Area hipsters.

People

Reading

Viewing

[I'm aware of the hypocrisy in recommending videos of talks about freedom, privacy and security that are hosted on YouTube.]

Tuesday, 1 November 2016 - 1:12pm

Published by Matthew Davidson on Tue, 01/11/2016 - 2:00pm

Coffs Harbour company Janison has today launched a cloud-based enterprise learning solution, developed over several years working with organisations such as Westpac and Rio Tinto.

Really? In 2016 businesses are supposed to believe that a corporate MOOC (Massive Open Online Course; a misnomer from day one) will do for them what MOOCs didn't do for higher education? There are two issues here: quality and dependability.

In 2012, the "year of the MOOC", the ed-tech world was full of breathless excitement over a vision of higher education consisting of a handful of "superprofessors" recording lectures that would be seen by millions of students, with the rest of the functions of the university automated away. There was just one snag, noticed by MOOC pioneer, superprofessor, and founder of Udacity Sebastian Thrun. "We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don't educate people as others wished, or as I wished. We have a lousy product," he said. That is not to say that there isn't a market for lousy products. As the president of San Jose State University cheerfully admitted of their own MOOC program, "It could not be worse than what we do face to face." It's not hard to imagine a certain class of institution happy to rip off their students by outsourcing their instruction to a tech firm, but harder to see why a business would want to rip themselves off on an inferior mode of training. Technology-intensive modes of learning work best among tech-savvy, self-modivated learners, so-called "roaming autodidacts". Ask yourself how many of your employees fit into that category; they are a very small minority among the general population.

The other problem is gambling on a product that depends on multiple platforms which reside in the hands of multiple vendors, completely beyond your own control. The longevity of these vendors is not guaranteed, and application development platforms are discontinued on a regular basis. Sticking with large, successful, reputable vendors is no guarantee; Google, for instance, is notorious for euthanising their "Software-as-a-Service" (SaaS) offerings on a regular basis, regardless of the fanfare with which they were launched. You may be willing to trade quality for affordability in the short term, but future migration costs are a matter of "when", not "if".