Information Technology

Never Mind the Privacy: The Great Web 2.0 Swindle

Published by Matthew Davidson on Wed, 01/03/2017 - 1:43pm in

The sermon today comes from this six-minute video from comedian Adam Conover: The Terrifying Cost of "Free" Websites

I don't go along with the implication here that the only conceivable reason to run a website is to directly make money by doing so, and that therefore it is our expectation of zero-cost web services that is the fundamental problem. But from a technical point of view the sketch's analogy holds up pretty well. Data-mining commercially useful information about users is the business model of Software as a Service (SaaS) — or Service as a Software Substitute (SaaSS), as it's alternatively known.

You as the user of these services — for example social networking services such as Facebook or Twitter, content delivery services such as YouTube or Flickr, and so on — provide the "content", and the service provider provides data storage and processing functionality. There are two problems with this arrangement:

  1. You are effectively doing your computing using a computer and software you don't control, and whose workings are completely opaque to you.
  2. As is anybody who wants to access anything you make available using those services.

Even people who don't have user accounts with these services can be tracked, because they can be identified via browser fingerprinting, and they can be followed as they browse beyond the tracking organisation's own website. Third-party JavaScript "widgets" embedded in many, if not most, websites silently deliver executable code to users' browsers, allowing users to be tracked as they go from site to site. Common examples of such widgets include syndicated advertising, like buttons, social login services (eg. Facebook login), and comment hosting services.

Less transparent are third-party services marketed to the site owner, such as Web analytics. These provide data on a site's users in the form of graphs and charts so beloved by middle management, with the service provider of course hanging on to a copy of all the data for their own purposes. My university invites no fewer than three organisations to surveil its students in this way (New Relic, Crazy Egg, and of course Google Analytics). Thanks to Edward Snowden, we know that government intelligence agencies are secondary beneficiaries of this data collection in the case of companies such as Google, Facebook, Apple, and Microsoft. For companies not named in those leaks, all we can say is that we do not know, because as users we cannot know, whether they are passing on information about us as well.
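By way of illustration, here is a minimal sketch of browser fingerprinting in TypeScript. It is not any particular tracker's code: the attributes collected and the tracker.example endpoint are invented for this example, and real trackers combine many more signals (canvas rendering, installed fonts, WebGL, audio processing). The point is that any third-party script a site embeds can read these values without asking.

```typescript
// Illustrative sketch only: a handful of attributes that any embedded
// third-party script can read without permission. Real fingerprinting
// libraries collect many more signals.

interface Fingerprint {
  userAgent: string;
  language: string;
  screenSize: string;
  timezoneOffset: number;
  colorDepth: number;
}

function collectFingerprint(): Fingerprint {
  return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    screenSize: `${screen.width}x${screen.height}`,
    timezoneOffset: new Date().getTimezoneOffset(),
    colorDepth: screen.colorDepth,
  };
}

// Individually these values are innocuous; hashed together they are
// often distinctive enough to recognise the same browser across
// otherwise unrelated sites, with no cookies or login required.
async function fingerprintId(): Promise<string> {
  const data = new TextEncoder().encode(JSON.stringify(collectFingerprint()));
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((byte) => byte.toString(16).padStart(2, "0"))
    .join("");
}

// A tracking widget would phone the identifier home along with the page
// being read (tracker.example is a hypothetical endpoint):
// fingerprintId().then((id) =>
//   fetch(`https://tracker.example/hit?id=${id}&page=${location.href}`));
```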

To understand how things might be different, one must look at the original vision for the Internet and the World Wide Web.

The Web was a victim of its own early success. The Internet was designed to be "peer-to-peer", with every connected computer considered equal, and the network which connected them completely oblivious to the nature of the data it was handling. You requested data from somebody else on the network, and your computer then manipulated and transformed that data in useful ways. It was a "World of Ends"; the network was dumb, and the machines at each end of a data transfer were smart. Unfortunately, the Web took off when easy-to-use Web browsers were available, but before easy-to-use Web servers were available. Moreover, Web browsers were initially intended to be tools for both reading and writing Web documents, but the second goal soon fell away. You could easily consume data from elsewhere, but not easily produce and make your own available.

The Web soon succumbed to the client-server model, familiar from corporate computer networks — the bread and butter of tech firms like IBM and Microsoft. Servers occupy a privileged position in this model. The value is assumed to be at the centre of the network, while at the ends are mere consumers. This translates into social and economic privilege for the operators of servers, and a role for users shaped by the requirements of service providers. This was, breathless media commentary aside, the substance of the "Web 2.0" transformation.

Consider how the ideal Facebook user engages with their Facebook friends. They share an amusing video clip. They upload photos of themselves and others, while in the process providing the machine learning algorithm of Facebook's facial recognition surveillance system with useful feedback. They talk about where they've been and what they've bought. They like and they LOL. What do you do with a news story that provokes outrage, say the construction of a new concentration camp for refugees from the endless war on terror? Do you click the like button? The system is optimised, on the users' side, for face-work, and de-optimised for intellectual or political substance. On the provider's side it is optimised for exposing social relationships and consumer preferences; anything else is noise to be minimised.

In 2014 there was a minor scandal when it was revealed that Facebook allowed a team of researchers to tamper with Facebook's news feed algorithm in order to measure the effects of different kinds of news stories on users' subsequent posts. The scandal missed the big story: Facebook has a news feed algorithm. Friending somebody on Facebook doesn't mean you will see everything they post in your news feed, only those posts that Facebook's algorithm selects for you, along with posts that you never asked to see. Facebook, in its regular day-to-day operation, is one vast, ongoing, uncontrolled experiment in behaviour modification. Did Facebook swing the 2016 US election for Trump? Possibly, but that wasn't their intention. The fracturing of Facebook's user base into insular cantons of groupthink, increasingly divorced from reality, is a predictable side-effect of a system which regulates user interactions based on tribal affiliations and shared consumer tastes, while marginalising information which might threaten users' ontological security.
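To make "Facebook has a news feed algorithm" concrete, here is a toy ranking sketch in TypeScript. Every field name and weight below is invented for illustration; Facebook's actual model is proprietary and vastly more complex, but the general shape, scoring posts by predicted engagement and showing only the top of the list, is the point.

```typescript
// A toy feed-ranking sketch, illustrating the general shape of an
// engagement-optimised selection algorithm. The fields and weights are
// invented; they are not Facebook's actual model.

interface Post {
  author: string;
  likes: number;
  comments: number;
  shares: number;
  affinityToViewer: number; // how often the viewer interacts with the author
  isSponsored: boolean;
}

function score(post: Post): number {
  // Reward interaction signals, heavily weighted by prior affinity:
  // the more you already engage with someone, the more of them you see.
  const engagement = post.likes + 3 * post.comments + 5 * post.shares;
  return engagement * (1 + post.affinityToViewer) + (post.isSponsored ? 50 : 0);
}

// Note what the score never consults: whether a post is true, important,
// or challenging. Friends' posts that score poorly simply vanish, while
// sponsored posts you never asked for are boosted into the feed.
function buildFeed(candidates: Post[], size: number): Post[] {
  return [...candidates].sort((a, b) => score(b) - score(a)).slice(0, size);
}
```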

Resistance to centralised, unaccountable, proprietary, user-subjugating systems can be mounted on two fronts: minimising current harms, and migrating back to an environment where the intelligence of the network is at the ends, under the user's control. You can opt out of pervasive surveillance with browser add-ons like the Electronic Frontier Foundation's Privacy Badger. You can run your own instances of software which provide federated, decentralised services equivalent to the problematic ones, such as:

  • GNU Social is a social networking service similar to Twitter (but with more features). I run my own instance and use it every day to keep in touch with people who also run their own, or have accounts on an instance run by people they trust.
  • Diaspora is another distributed social networking platform more similar to Facebook.
  • OpenID is a standard for distributed authentication, replacing social login services from Facebook, Google, et al.
  • Piwik is a replacement for systems like Google Analytics. You can use it to gather statistics on the use of your own website(s), but it grants nobody the privacy-infringing capability to follow users as they browse around a large number of sites.

The fatal flaw in such software is that few people have the technical ability to set up a web server and install it. That problem is the motivation behind the FreedomBox project. Here's a two-and-a-half-minute news story on the launch of the project: Eben Moglen discusses the freedom box on CBS news

I also recommend this half-hour interview, pre-dating the Snowden leaks by a year, which covers much of the above with more conviction and panache than I can manage: Eben Moglen on Facebook, Google and Government Surveillance

Arguably the stakes are currently as high in many countries in the West as they were in the Arab Spring. Snowden has shown that for governments of the Five Eyes intelligence alliance there's no longer a requirement for painstaking spying and infiltration of activist groups in order to identify your key political opponents; it's just a database query. One can without too much difficulty imagine a Western despot taking to Twitter to blurt something like the following:

"Protesters love me. Some, unfortunately, are causing problems. Huge problems. Bad. :("

"Some leaders have used tough measures in the past. To keep our country safe, I'm willing to do much worse."

"We have some beautiful people looking into it. We're looking into a lot of things."

"Our country will be so safe, you won't believe it. ;)"

The Politics of Technology

Published by Matthew Davidson on Fri, 24/02/2017 - 4:03pm in

"Technology is anything that doesn't quite work yet." - Danny Hillis, in a frustratingly difficult to source quote. I first heard it from Douglas Adams.

Here is, at minimum, who and what you need to know:

Organisations

Sites

  • Boing Boing — A blog/zine that posts a lot about technology and society, as well as, distressingly, advertorials aimed at Bay Area hipsters.

People

Reading

Viewing

[I'm aware of the hypocrisy in recommending videos of talks about freedom, privacy and security that are hosted on YouTube.]


Algorithmic price fixing

Published by Anonymous (not verified) on Tue, 10/01/2017 - 3:43am in

This FT article is pretty interesting:

The classic example of industrial-era price fixing dates back to a series of dinners hosted amid the 1907 financial panic by Elbert Gary, then chairman of US Steel. In a narrow first-floor ballroom at New York’s Waldorf Astoria Hotel, men controlling 90 per cent of the nation’s steel output revealed to each other their respective wage rates, prices and “all information concerning their business”, one attendee recalled. Gary’s aim was to stabilise falling prices. The government later sued, saying that the dinner talks — the first of several over a four-year period — showed that US Steel was an illegal monopoly.

Algorithms render obsolete the need for such face-to-face plotting. Pricing tools scour the internet for competitors’ prices, prowl proprietary databases for relevant historical demand data, analyse digitised information and arrive at pricing solutions within milliseconds — far faster than any flesh-and-blood merchant could. That should, in theory, result in lower prices and wider consumer choice. Algorithms raise antitrust concerns only in certain circumstances, such as when they are designed explicitly to facilitate collusion or parallel pricing moves by competitors.

… a German software application that tracks petrol-pump prices. Preliminary results suggest that the app discourages price-cutting by retailers, keeping prices higher than they otherwise would have been. As the algorithm instantly detects a petrol station price cut, allowing competitors to match the new price before consumers can shift to the discounter, there is no incentive for any vendor to cut in the first place.

“Algorithms are sharing information so quickly that consumers are not aware of the competition,” says Mr Stucke. “Two gas stations that are across the street from each other are already familiar with this.” This episode suggests that the availability of perfect information, a hallmark of free market theory, might harm rather than empower consumers. If the concern is borne out, a central assumption of the digital economy — that technology lowers prices and expands choices — could be upended.

The argument here, if it is right, is twofold. One – that even without direct collusion, firms’ best strategy may be to act as if they are colluding by maintaining higher prices. Firms have a much weaker temptation to ‘defect’ from an entirely implicit bargain by lowering their prices so as to attract more customers, since there are unlikely to be significant gains from so doing, even in the short run. The plausible equilibrium is something that might be described as distributed oligopoly. Harrison White once defined a market as being a “tangible clique of producing firms, observing each other in the context of an aggregate set of buyers.” With super-cheap information, it doesn’t have to be a clique any more to be tangible.
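A minimal simulation sketch of that first argument, in TypeScript, may help. The two-seller model and all its numbers are invented for illustration; they are not drawn from the FT piece.

```typescript
// A minimal sketch of why instant price-matching removes the incentive
// to undercut. Two sellers start at the same high price; whenever one
// cuts, the other's monitoring algorithm matches before any customer
// can switch. All numbers are invented for illustration.

const DAILY_DEMAND = 1000; // customers per day, split between two sellers
const UNIT_COST = 1.0;     // per-litre cost to the seller

// Profit per day for one seller, given that the rival matches any cut
// instantly: effective prices are always equal, so demand always splits
// evenly and a price cut never wins extra customers.
function dailyProfit(myPrice: number, rivalPrice: number): number {
  const effectivePrice = Math.min(myPrice, rivalPrice);
  return (effectivePrice - UNIT_COST) * (DAILY_DEMAND / 2);
}

console.log(dailyProfit(1.5, 1.5)); // hold at 1.50: profit 250
console.log(dailyProfit(1.3, 1.5)); // cut to 1.30, rival matches: profit 150

// The payoff that normally destabilises a cartel, briefly serving the
// whole market at the lower price, never materialises; so nobody cuts.
```

Without instant detection, the price-cutter would briefly capture most of the thousand customers, and that temporary windfall is exactly the temptation that makes cartels unstable; the monitoring algorithms remove it.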

The second is that where there is direct collusion, the information burden on regulators is much higher. For example, one may plausibly imagine that oligopoly-type outcomes might emerge as a second-order outcome of the aggregated behavior of automated agents. One might also imagine that it might be possible artfully to tweak these agents’ behavior in such a way that this will indeed be the most likely result. However, proving ex post that this was indeed the intent will likely at best require a ton of forensic resources, and at worst may be effectively impossible.

NB that both of these can happen entirely independently of traditional arguments about concentration and monopoly/oligopoly – even if Amazon, Google, Facebook, Uber etc suddenly and miraculously disappeared, these kinds of distributed or occulted oligopoly problems would be untouched. If you take this set of claims seriously (the evidence presented in the FT piece still looks tentative), then the most fundamental problem that the Internet poses is not one of network advantage, increasing returns to scale and so on advantaging big players (since, with a non-supine anti-trust authority, these could in principle be addressed). It's the problem of how radically cheaper communication makes new forms of implicit and explicit collusion possible at scale, squeezing consumers.

Brian Stableford and David Langford on Automation, Unemployment and Retraining in the 21st Century

Over the past year there have been a number of warnings that within the next three decades two-thirds of all jobs could vanish due to mechanisation. The science fiction writers Brian Stableford and David Langford also cover this projected crisis in their fictitious history of the thousand years from the beginning of this century to the end of the 29th, The Third Millennium (London: Paladin Grafton Books 1988). They predict that governments and society will find a solution to this in lifelong learning and the direction of the unemployed into the construction industry for a massive programme of public works.

They write

Massive Unemployment in the West
By the year 2000 automation was having such a significant effect on manufacturing that unskilled and semi-skilled workers were being made redundant in large numbers. Less skilled holders of ‘white-collar jobs’ were also being displaced by information technology. There seemed no immediate prospect of redeploying these workers, and their increasing numbers were a source of embarrassment to many Western governments. In the Soviet countries, where employment was guaranteed, jobs were found, but it was becoming all too obvious that many of these were unnecessary. The communist countries had other problems too. The political power to redeploy labour easily was there, and the educational system was better equipped than in the West for practical training, but there were no economic incentives to motivate the workers.

In the West the real problem was partly economic and partly educational. Allowing market forces to govern patterns of employment was inefficient. It was not that there was no work – there were chronic housing problems in most of the affected nations, and the need for urban renewal was desperate. Unfortunately, there was no institutional apparatus to divert unused labour to these socially desirable but essentially unprofitable tasks. To pay workers to do such jobs, instead of doling out a pittance to compensate them for not having jobs, would have required massive and politically unacceptable increases in taxation. The educational part of the problem was the absence of effective retraining to allow people to switch easily from one semi-skilled task to another, thus allowing the movement of labour into the new areas of employment.

With hindsight, it is easy to see the pattern of changes that had to occur in both systems, and it may seem ridiculous that it was not obvious what had to be done. In fact, it probably was obvious to many, and the patterns of change were directed by common sense, but there was much superstitious resistance to the evolution of the economic system away from the capitalist and communist extremes.

Lifelong education
The educational reforms were easier to implement in the West than the economic reforms (though even education tended to be dominated by tradition, and was certainly not without its superstitions). It became accepted in the course of the early twenty-first century that the adaptability of labour was a priority. It was simply not sufficient for an individual to learn a skill while still at school, or during an apprenticeship, and then to expect his skill to remain in demand throughout his lifetime. By the year 2010, the idea that a man or woman ought to have a single ‘educational phase’ early in life was becoming obsolete in the developed nations, and educational institutions were being adapted to provide for people of all ages, who would visit and use them continually or periodically, by choice as well as by necessity. By 2050 there was an almost universally accepted opinion in the West that ‘an education’ was something that extended over an entire lifetime. The old familiar cliché ‘Jack of all trades, master of none’ was now beginning to take on a musty air, like something in Chaucerian English, approaching its near-incomprehensibility to the average citizen of today.

Enforced growth of the public sector
Despite the robotization of many manufacturing processes, the demand for manual labour did not decline markedly during the twenty-first century. To some extent, displaced factory-workers were shifted into various kinds of building work in the private sector. But it was the expansion of public sector construction and maintenance that kept the demand high. There were, of course, special opportunities created by the building of the information networks, and much manual work as a result of flooding, but there was a more fundamental reason for the state’s increased need for manual workers. As society became more highly technological, depending on an ever-increasing range of complicated artefacts, more and more work had to be put into reconstructing and repairing the artificial environment. Because maintenance work, unlike most manufacturing processes, is occasional and idiosyncratic rather than ceaseless and repetitive, it cannot – even to this day – be wholly turned over to machines. Machinery is vital to such work, but so are human agents. Governments employed more and more people to do centrally organized work, and collected the taxes they needed to do it.

There were no such redeployment prospects for the redundant white-collar workers. As their jobs disappeared, they had to undertake more radical retraining, and it was mostly these workers who moved into such new jobs as were being created by the spread of the information networks. Their skills had to be ‘upgraded’, but the same was true of the manual labourers, who had at least to become more versatile. The working population as a whole needed to be better educated, if only in the sense of being always able to learn new skills. Relatively few individuals lacked the capacity for this kind of education, and the vast majority adapted readily enough. (pp. 98-100)

I’m not sure how realistic the solutions Stableford and Langford propose are. Looking back, some of the book’s predictions now seem rather dated. For example, the book takes it for granted that the Communist bloc would continue to exist, whereas it collapsed in eastern Europe very swiftly in the years following the book’s publication.

I also think the idea of lifelong learning has similarly been abandoned. It was very popular in the late 1980s and the 1990s, when higher education was expanding rapidly. But there has certainly been a reaction against the massive expansion of university education, to the extent that half of the population are now expected to acquire degrees. Critics of the expansion of graduate education have pointed out that it has not brought the greater innovation and prosperity that was expected of it, and has served instead to take jobs away from those without an academic background, as graduates are forced to take unskilled jobs.

I also think that it's highly debatable whether the expansion of the construction industry on public works would compensate for the jobs lost through further mechanisation, even if the government were to accept the necessity of raising taxes to finance such 'make work' programmes. My guess is that they'd simply carry on with the 'workfare' policy of forcing the unemployed to work on such projects as were strictly necessary in return for their unemployment benefit.

As for the various retraining programmes, some schemes like this have been tried already. For example, back in the 1990s some councils ran programmes which gave free computer training to the unemployed. But I can see any further retraining schemes launched in the future being strictly limited in scope, and largely cosmetic. The point of such programmes would be to give the impression that the government was tackling the problem, whereas in fact the government would be only too eager for the situation to carry on as it is and keep labour cheap and cowed through massive unemployment.

I also don’t believe that the jobs created by the expansion of information technology will also be adequate to solve the problems. To be fair, the next paragraph from the passage above states that these solutions were only partly successful.

Of course, this situation could all change over the next three decades. But I can see no real solutions to the increasingly desperate problem of unemployment unless neoliberalism is completely discarded along with the Tories, Lib Dems and Blairite Labour, which support it.

Tuesday, 1 November 2016 - 1:12pm

Published by Matthew Davidson on Tue, 01/11/2016 - 2:00pm in

Coffs Harbour company Janison has today launched a cloud-based enterprise learning solution, developed over several years working with organisations such as Westpac and Rio Tinto.

Really? In 2016 businesses are supposed to believe that a corporate MOOC (Massive Open Online Course; a misnomer from day one) will do for them what MOOCs didn't do for higher education? There are two issues here: quality and dependability.

In 2012, the "year of the MOOC", the ed-tech world was full of breathless excitement over a vision of higher education consisting of a handful of "superprofessors" recording lectures that would be seen by millions of students, with the rest of the functions of the university automated away. There was just one snag, noticed by MOOC pioneer, superprofessor, and Udacity founder Sebastian Thrun. "We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don't educate people as others wished, or as I wished. We have a lousy product," he said.

That is not to say that there isn't a market for lousy products. As the president of San Jose State University cheerfully admitted of their own MOOC program, "It could not be worse than what we do face to face." It's not hard to imagine a certain class of institution happy to rip off their students by outsourcing their instruction to a tech firm, but harder to see why a business would want to rip itself off with an inferior mode of training. Technology-intensive modes of learning work best among tech-savvy, self-motivated learners, the so-called "roaming autodidacts". Ask yourself how many of your employees fit into that category; they are a very small minority among the general population.

The other problem is gambling on a product that depends on multiple platforms which reside in the hands of multiple vendors, completely beyond your own control. The longevity of these vendors is not guaranteed, and application development platforms are discontinued on a regular basis. Sticking with large, successful, reputable vendors is no guarantee; Google, for instance, is notorious for euthanising their "Software-as-a-Service" (SaaS) offerings on a regular basis, regardless of the fanfare with which they were launched. You may be willing to trade quality for affordability in the short term, but future migration costs are a matter of "when", not "if".