Information Technology

The Trinet

Published by Anonymous (not verified) on Thu, 02/11/2017 - 8:28pm in


Before the year 2014, there were many people using Google, Facebook, and Amazon. Today, there are still many people using services from those three tech giants (respectively, GOOG, FB, AMZN). Not much has changed on the surface, and the user interface and features on those sites have remained mostly untouched. However, the underlying dynamics of power on the Web have drastically changed, and those three companies are at the center of a fundamental transformation of the Web.

….

We forget how useful it has been to remain anonymous and control what we share, or how easy it was to start an internet startup with its own independent servers operating with the same rights GOOG servers have. On the Trinet, if you were permanently banned from GOOG or FB, you would have no alternative. You could even be restricted from creating a new account. As private businesses, GOOG, FB, and AMZN don’t need to guarantee you access to their networks. You do not have a legal right to an account on their servers, and as societies we aren’t demanding these rights as vehemently as we could, to counter the strategies the tech giants are putting forward.

The Web and the internet have represented freedom: efficient and unsupervised exchange of information between people of all nations. In the Trinet, we will have even more vivid exchange of information between people, but we will sacrifice freedom. Many of us will wake up to the tragedy of this tradeoff only once it is reality.

New SF Series Coming to Channel 4: Philip K. Dick’s Electric Dreams

Published by Anonymous (not verified) on Tue, 29/08/2017 - 5:04am in

Last Sunday I caught this trailer on Channel 4 for a new science fiction series, Philip K. Dick’s Electric Dreams.

The title is obviously an homage to Dick’s most famous work, Do Androids Dream of Electric Sheep?, which was adapted into one of the great, classic SF films of all time, Ridley Scott’s Blade Runner.

The series will consist of ten self-contained episodes, each based on a different Dick short story and starring some of film and TV’s top actors. These include Timothy Spall, Steve Buscemi, Jack Reynor, Benedict Wong, Bryan Cranston, Essie Davis, Greg Kinnear, Anna Paquin, Richard Madden, Holliday Grainger, Anneika Rose, Mel Rodriguez, Vera Farmiga, Annalise Basso, Maura Tierney, Juno Temple and Janelle Monáe.

One of the executive producers is Ronald D. Moore, who worked on the Star Trek series Star Trek: The Next Generation, Deep Space Nine and Voyager, as well as Battlestar Galactica and Outlander.

More information, including plot summaries, can be found on Channel 4’s website at http://www.channel4.com/info/press/news/philip-k-dicks-electric-dreams and at Den of Geek: http://www.denofgeek.com/uk/tv/philip-k-dick-s-electric-dreams/50380/philip-k-dicks-electric-dreams-7-reasons-to-get-excited.

This looks really promising. Den of Geek say in their article that the anthology format recalls Channel 4’s Black Mirror and The Twilight Zone. I have to say I wasn’t drawn to watch Black Mirror. Created by Charlie Brooker, it was an intelligent, dark examination of the dystopian elements of our media-saturated modern culture and its increasing reliance on information technology, but it just wasn’t weird enough for me. Near-future SF is great, but I also like spacecraft, aliens, ray guns and robots. And this promises to have at least some of them.

Channel 4 have also produced another intelligent, critically acclaimed SF series, Humans, based on the Swedish series Real Humans. With Black Mirror, it seems Channel 4 is one of the leading broadcasters for creating intelligent, mature science fiction.

Forthcoming Programme on the Destructive Consequence of IT

Next Sunday, the 6th August, BBC 2 is showing a documentary at 8.00 pm on the negative aspects of automation and information technology. Entitled Secrets of Silicon Valley, it’s the first part of a two-part series. The blurb for it in the Radio Times reads

The Tech Gods – who run the biggest technology companies – say they’re creating a better world. Their utopian visions sound persuasive: Uber say the app reduces car pollution and could transform how cities are designed; Airbnb believes its website empowers ordinary people. Some hope to reverse climate change or replace doctors with software.

In this doc, social media expert Jamie Bartlett investigates the consequences of “disruption” – replacing old industries with new ones. The Gods are optimistic about our automated future but one former Facebook exec is living off-grid because he fears the fallout from the tech revolution. (p. 54).

A bit more information is given on the listings page for the programmes on that evening. This gives the title of the episode – ‘The Disruptors’, and states

Jamie Bartlett uncovers the dark reality behind Silicon Valley’s glittering promise to build a better world. He visits Uber’s offices in San Francisco and hears how the company believes it is improving our cities. But in Hyderabad, India, Jamie sees for himself the apparent human consequences of Uber’s utopian vision and asks what the next wave of Silicon Valley’s global disruption – the automation of millions of jobs – will mean for us. He gets a stark warning from an artificial intelligence pioneer who is replacing doctors with software. Jamie’s journey ends in the remote island hideout of a former social media executive who fears this new industrial revolution could lead to social breakdown and the collapse of capitalism. (p. 56).

I find the critical tone of this documentary refreshing after the relentless optimism of last Wednesday’s first instalment of another two-part documentary on robotics, Hyper Evolution: The Rise of the Robots. This was broadcast at 9 o’clock on BBC 4, with the second part shown tomorrow – the second of August – in the same time slot.

This programme featured two scientists, the evolutionary biologist Dr. Ben Garrod and the electronics engineer Professor Danielle George, looking over the last century or so of robot development. Garrod stated that he was worried by how rapidly robots had evolved, and saw them as a possible threat to humanity. George, on the other hand, was massively enthusiastic. On visiting a car factory where the vehicles were being assembled by robots, she said it was slightly scary to be around these huge machines, moving like dinosaurs, but declared proudly, ‘I love it’. At the end of the programme she concluded that whatever view we had of robotic development, we should embrace it, as that way we would have control over it. Which prompts the opposing response that you could also control the technology, or its development, by rejecting it outright, minimising it or limiting its application.

At first I wondered if Garrod was there simply because Richard Dawkins was unavailable. Dawko was voted the nation’s favourite public intellectual by the readers of one of the technology or current affairs magazines a few years ago, and to many people he’s the face of scientific rationality, in the same way as the cosmologist Stephen Hawking. However, there was a solid scientific reason for Garrod’s involvement: robotics engineers have solved certain problems by copying animal and human physiology. For example, Japanese cyberneticists had studied the structure of the human body to create the first robots shown in the programme. These were two androids that looked and sounded extremely lifelike. One of them, the earlier model, was modelled on its creator to the point where it was at one time an identical likeness. When the man was asked how he felt about getting older and less like his creation, he replied that he was having plastic surgery so that he would continue to look as youthful and as like his robot as possible.

Japanese engineers had also studied the human hand in order to create a robot pianist that, when it was unveiled over a decade ago, could play faster than a human performer. They had also solved the problem of getting machines to walk as bipeds like humans by giving them a pelvis modelled on the human bone structure. But now the machines were going their own way. Instead of confining themselves to copying the human form, they were taking new shapes in order to fulfil specific functions. The programme makers wanted to leave you in no doubt that, although artificial, these machines were nevertheless living creatures. They were described as ‘a new species’. Actually, they aren’t, if you want to pursue the biological analogy. They aren’t a new species for the simple reason that there isn’t just one variety of them. Instead, they take a plethora of shapes according to their different functions. They’re far more like a phylum, or even a kingdom, like the plant and animal kingdoms. The metal kingdom, perhaps?

It’s also highly problematic comparing them to biological creatures in another way. So far, none of the robots created have been able to reproduce themselves, in the same way biological organisms from the most primitive bacteria through to far more complex organisms, not least ourselves, do. Robots are manufactured by humans in laboratories, and heavily dependent on their creators both for their existence and continued functioning. This may well change, but we haven’t yet got to that stage.

The programme raced through the development of robots, from Eric, the robot that greeted Americans at the World’s Fair – talking to one of the engineers who’d built it – and a similar metal man created by the Beeb in 1929. It also looked at the creation of walking robots, the robot pianist and other humanoid machines by the Japanese from the 1980s to today. It then hopped over the Atlantic to talk to one of the leading engineers at DARPA, the advanced research agency of the American defence establishment. Visiting the labs, George was thrilled; the organisation receives thousands of media requests, so she was exceptionally privileged. She was shown the latest humanoid robots, as well as ‘Big Dog’, the quadruped robot carrier, which does indeed look and act eerily like a large dog.

George was upbeat and enthusiastic. Any doubts you might have about robots taking people’s jobs were answered when she met a spokesman for the automated car factory. He stated that the human workers had been replaced by machines because, while machines weren’t better, they were more reliable. But the factory also employed 650 humans running around here and there to make sure that everything was running properly. So people were still being employed. And by using robots they’d cut the price on the cars, which was good for the consumer, so everyone benefits.

This was very different from some of the news reports I remember from my childhood, when computers and industrial robots were just coming in. There was shock at news reports of factories where the human workers had been laid off, except for a crew of six. These men spent all day playing cards. They weren’t employed because they were experts, but simply because it would have been more expensive to sack them than to keep them on with nothing to do.

Despite the answers given by the car plant’s spokesman, you’re still quite justified in questioning how beneficial the replacement of human workers with robots actually is. For example, before the staff were replaced with robots, how many people were employed at the factory? Clearly, financial savings had to be made by replacing skilled workers with machines in order to make it economic. At the same time, what skill level were the 650 or so people now running around behind the machines? It’s possible that they are less skilled than the former car assembly workers. If that’s the case, they’d be paid less.

As for the fear of robots, the documentary traced this from Karel Capek’s 1920s play, R.U.R., or Rossum’s Universal Robots, which gave the word ‘robot’ to the English language. The word means ‘serf, slave’ or ‘forced feudal labour’ in Czech. This was the first play to deal with a robot uprising. In Japan, however, the attitude was different. Workers were being taught to accept robots as one of themselves. This was because of the animist nature of traditional Japanese religion. Shinto, Japan’s indigenous religion alongside Buddhism, holds that there are kami – roughly, spirits or gods – throughout nature, even in inanimate objects. When asked what he thought the difference was between humans and robots, one of the engineers said there was none.

Geoff Simons also deals with the western fear of robots, compared to the Japanese acceptance of them, in his book Robots: The Quest for Living Machines. He felt that it came from the Judeo-Christian religious tradition, which is suspicious of robots because making them allows humans to usurp the Lord as the creator of living beings. See, for example, the subtitle of Mary Shelley’s Frankenstein – ‘The Modern Prometheus’. Prometheus was the Titan who stole fire from the gods to give to humanity. Victor Frankenstein was similarly stealing a divine secret through the manufacture of his creature.

I think the situation is rather more complex than this, however. Firstly, I don’t think the Japanese are as comfortable with robots as the programme tried to make out. One Japanese scientist, for example, has recommended that robots should not be made too humanlike, as too close a resemblance is deeply unsettling to the humans who have to work with them. Presumably the scientist was basing this on the experience of Japanese people as well as Europeans and Americans.

Much Japanese SF is also pretty much like its western counterpart in including robot heroes. One of the long-time comic favourites in Japan is Astro Boy, a robot boy with awesome abilities, gadgets and weapons. But over here, I can remember reading the Robot Archie strip in Valiant in the 1970s, along with the later Ro-Busters and A.B.C. Warriors strips in 2000 AD. R2-D2 and C-3PO are two of the central characters in Star Wars, while Doctor Who had K9 as his faithful robot dog.

And the idea of robot creatures goes all the way back to the ancient Greeks. Hephaestus, the ancient Greek god of fire, was a smith. Lame, he forged three metal girls to help him walk. Pioneering inventors like Hero of Alexandria created miniature theatres and other automata. After the fall of the Roman Empire, this technology was taken up by the Muslim Arabs. The Banu Musa brothers in the 9th century AD created a whole series of machines, which they simply called ‘ingenious devices’, and Baghdad had a water clock which included various automatic figures, like the sun and moon and the movement of the stars. This technology then passed to medieval Europe, so that by the end of the Middle Ages lords and ladies filled their pleasure gardens with mechanical animals. The 18th century saw the fascinating clockwork machines of Vaucanson, Jaquet-Droz and other European inventors. With the development of steam power, and then electricity, in the 19th century came stories about mechanical humans. One of the earliest was the ‘Steam Man’, about a steam-powered robot, which ran in one of the American magazines. This carried on into the early 20th century. One of the very earliest Italian films was about a ‘uomo macchina’, or ‘man machine’, and a seductive but evil female robot appears in Fritz Lang’s epic Metropolis. Neither film uses the term ‘robot’: Lang just calls his creation a ‘Maschinenmensch’ – a machine person.

It’s also very problematic whether robots will ever really take humans’ jobs, or even develop genuine consciousness and artificial intelligence. I’m going to have to deal with this topic in more detail later, but the questions posed by the programme prompted me to buy a copy of Hubert L. Dreyfus’ What Computers Still Can’t Do: A Critique of Artificial Reason. Initially published in the 1970s and then updated in the 1990s, this describes the repeated problems computer scientists and engineers have faced trying to develop Artificial Intelligence. Again and again, these scientists predicted that ‘next year’, ‘in five years’ time’, ‘in the next ten years’ or ‘soon’, robots would achieve human-level intelligence and make all of us unemployed. The last such prediction I recall reading was way back in 1999–2000, when we were all told that by 2025 robots would be as intelligent as cats. All these forecasts have proven wrong. But they’re still being made.

In tomorrow’s edition of Hyper Evolution, the programme asks whether robots will ever achieve consciousness. My guess is that they’ll conclude that they will. I think we need to be a little more sceptical.

Never Mind the Privacy: The Great Web 2.0 Swindle

Published by Matthew Davidson on Wed, 01/03/2017 - 1:43pm in

The sermon today comes from this six-minute video from comedian Adam Conover: The Terrifying Cost of "Free" Websites

I don't go along with the implication here that the only conceivable reason to run a website is to directly make money by doing so, and that therefore it is our expectation of zero-cost web services that is the fundamental problem. But from a technical point of view the sketch's analogy holds up pretty well. Data-mining commercially useful information about users is the business model of Software as a Service (SaaS) — or Service as a Software Substitute (SaaSS), as it's alternatively known.

You as the user of these services — for example social networking services such as Facebook or Twitter, content delivery services such as YouTube or Flickr, and so on — provide the "content", and the service provider provides data storage and processing functionality. There are two problems with this arrangement:

  1. You are effectively doing your computing using a computer and software you don't control, and whose workings are completely opaque to you.
  2. As is anybody who wants to access anything you make available using those services.

Even people who don't have user accounts with these services can be identified via browser fingerprinting, and tracked as they browse beyond the tracking organisation's own website. Third-party JavaScript "widgets" embedded in many, if not most, websites silently deliver executable code to users' browsers, allowing them to be tracked as they go from site to site. Common examples of such widgets include syndicated advertising, "like" buttons, social login services (e.g. Facebook login), and comment hosting services. Less transparent are third-party services marketed to the site owner, such as Web analytics. These provide data on a site's users in the form of the graphs and charts so beloved by middle management, with the service provider of course hanging on to a copy of all the data for their own purposes. My university invites no fewer than three organisations to surveil its students in this way (New Relic, Crazy Egg, and of course Google Analytics). Thanks to Edward Snowden, we know that government intelligence agencies are secondary beneficiaries of this data collection in the case of companies such as Google, Facebook, Apple, and Microsoft. For companies not named in these leaks, all we can say is that we do not — because as users we cannot — know if they are passing on information about us as well.
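To make the fingerprinting point concrete, here is a minimal, purely illustrative TypeScript sketch (not any vendor's actual code) of how a handful of ordinary browser properties, readable by any script a page embeds, with no account and no cookie required, can be boiled down to an identifier that a third-party widget could report home along with the address of the page you're reading:

```typescript
// A hypothetical, minimal browser fingerprint: ordinary, freely readable browser
// properties are combined and hashed into an identifier that is fairly stable for a
// given browser but different for most other people's browsers.
function naiveFingerprint(): string {
  const signals = [
    navigator.userAgent,                                      // browser and OS version string
    navigator.language,                                       // preferred language
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display characteristics
    String(new Date().getTimezoneOffset()),                   // rough geographic hint
    String(navigator.hardwareConcurrency),                    // number of CPU cores
  ].join('|');

  // Cheap non-cryptographic hash, purely for illustration.
  let hash = 0;
  for (let i = 0; i < signals.length; i++) {
    hash = (hash * 31 + signals.charCodeAt(i)) | 0;
  }
  return (hash >>> 0).toString(16);
}

// A third-party script embedded on many sites could report this identifier plus the
// current page to its operator, stitching one visitor's browsing together across
// every site that embeds the same script.
console.log(naiveFingerprint(), location.href);
```

Real fingerprinting scripts draw on many more signals (installed fonts, canvas and audio rendering quirks, plugin lists, and so on), which is why, in practice, most browsers turn out to be effectively unique.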

To understand how things might be different, one must look at the original vision for the Internet and the World Wide Web. The Web was a victim of its own early success. The Internet was designed to be "peer-to-peer", with every connected computer considered equal, and the network which connected them completely oblivious to the nature of the data it was handling. You requested data from somebody else on the network, and your computer then manipulated and transformed that data in useful ways. It was a "World of Ends"; the network was dumb, and the machines at each end of a data transfer were smart. Unfortunately the Web took off when easy-to-use Web browsers were available, but before easy-to-use Web servers were. Moreover, Web browsers were initially intended to be tools both to read and to write Web documents, but the second goal soon fell away. You could easily consume data from elsewhere, but not easily produce it and make it available yourself.

The Web soon succumbed to the client-server model, familiar from corporate computer networks — the bread and butter of tech firms like IBM and Microsoft. Servers occupy a privileged position in this model. The value is assumed to be at the centre of the network, while at the ends are mere consumers. This translates into social and economic privilege for the operators of servers, and a role for users shaped by the requirements of service providers. This was, breathless media commentary aside, the substance of the "Web 2.0" transformation.

Consider how the ideal Facebook user engages with their Facebook friends. They share an amusing video clip. They upload photos of themselves and others, while in the process providing the machine learning algorithm of Facebook's facial recognition surveillance system with useful feedback. They talk about where they've been and what they've bought. They like and they LOL. What do you do with a news story that provokes outrage, say the construction of a new concentration camp for refugees from the endless war on terror? Do you click the like button? The system is optimised, on the users' side, for face-work, and de-optimised for intellectual or political substance. On the provider's side it is optimised for exposing social relationships and consumer preferences; anything else is noise to be minimised.

In 2014 there was a minor scandal when it was revealed that Facebook allowed a team of researchers to tamper with Facebook's news feed algorithm in order to measure the effects of different kinds of news stories on users' subsequent posts. The scandal missed the big story: Facebook has a news feed algorithm.  Friending somebody on Facebook doesn't mean you will see everything they post in your news feed, only those posts that Facebook's algorithm selects for you, along with posts that you never asked to see. Facebook, in its regular day-to-day operation, is one vast, ongoing, uncontrolled experiment in behaviour modification. Did Facebook swing the 2016 US election for Trump? Possibly, but that wasn't their intention. The fracturing of Facebook's user base into insular cantons of groupthink, increasingly divorced from reality, is a predictable side-effect of a system which regulates user interactions based on tribal affiliations and shared consumer tastes, while marginalising information which might threaten users' ontological security.

Resistance to centralised, unaccountable, proprietary, user-subjugating systems can be mounted on two fronts: minimising current harms, and migrating back to an environment where the intelligence of the network is at the ends, under the user's control. You can opt out of pervasive surveillance with browser add-ons like the Electronic Frontier Foundation's Privacy Badger. You can run your own instances of software which provide federated, decentralised services equivalent to the problematic ones, such as:

  • GNU Social is a social networking service similar to Twitter (but with more features). I run my own instance and use it every day to keep in touch with people who also run their own, or have accounts on an instance run by people they trust.
  • Diaspora is another distributed social networking platform more similar to Facebook.
  • OpenID is a standard for distributed authentication, replacing social login services from Facebook, Google, et al.
  • Piwik is a replacement for systems like Google Analytics. You can use it to gather statistics on the use of your own website(s), but it grants nobody the privacy-infringing capability to follow users as they browse around a large number of sites. (A rough sketch of how it's embedded follows below.)
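To give a sense of what self-hosting your analytics looks like in practice, here is a rough sketch of the sort of tracking snippet a Piwik instance asks you to paste into your pages. The instance address (analytics.example.org) and the site ID are placeholders for your own installation, and the exact code varies between Piwik versions; the point is that the visit data goes only to a server you control.

```typescript
// Roughly the embedding snippet a self-hosted Piwik instance generates; details vary
// by version. 'https://analytics.example.org/' and site ID '1' are placeholder values.
const _paq: unknown[][] = ((window as any)._paq = (window as any)._paq || []);
_paq.push(['trackPageView']);       // record a visit to the current page
_paq.push(['enableLinkTracking']);  // record clicks on outbound links

(() => {
  const u = 'https://analytics.example.org/';
  _paq.push(['setTrackerUrl', u + 'piwik.php']); // data is submitted only to your own server
  _paq.push(['setSiteId', '1']);
  const g = document.createElement('script');
  g.async = true;
  g.src = u + 'piwik.js';                        // the tracker script, served from your instance
  document.head.appendChild(g);
})();
```

Mechanically this is the same trick as the third-party widgets described above; the difference is entirely in who operates the server on the receiving end.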

The fatal flaw in such software is that few people have the technical ability to set up a web server and install it. That problem is the motivation behind the FreedomBox project. Here's a two and a half minute news story on the launch of the project: Eben Moglen discusses the freedom box on CBS news

I also recommend this half-hour interview, pre-dating the Snowden leaks by a year, which covers much of the above with more conviction and panache than I can manage: Eben Moglen on Facebook, Google and Government Surveillance

Arguably the stakes are currently as high in many countries in the West as they were in the Arab Spring. Snowden has shown that for governments of the Five Eyes intelligence alliance there's no longer a requirement for painstaking spying and infiltration of activist groups in order to identify your key political opponents; it's just a database query. One can without too much difficulty imagine a Western despot taking to Twitter to blurt something like the following:

"Protesters love me. Some, unfortunately, are causing problems. Huge problems. Bad. :("

"Some leaders have used tough measures in the past. To keep our country safe, I'm willing to do much worse."

"We have some beautiful people looking into it. We're looking into a lot of things."

"Our country will be so safe, you won't believe it. ;)"

The Politics of Technology

Published by Matthew Davidson on Fri, 24/02/2017 - 4:03pm in

"Technology is anything that doesn't quite work yet." - Danny Hillis, in a frustratingly difficult to source quote. I first heard it from Douglas Adams.

Here is, at minimum, who and what you need to know:

Organisations

Sites

  • Boing Boing — A blog/zine that posts a lot about technology and society, as well as, distressingly, advertorials aimed at Bay Area hipsters.

People

Reading

Viewing

[I'm aware of the hypocrisy in recommending videos of talks about freedom, privacy and security that are hosted on YouTube.]


Tuesday, 1 November 2016 - 1:12pm

Published by Matthew Davidson on Tue, 01/11/2016 - 2:00pm in

COFFS Harbour company Janison has today launched a cloud-based enterprise learning solution, developed over several years working with organisations such as Westpac and Rio Tinto.

Really? In 2016 businesses are supposed to believe that a corporate MOOC (Massive Open Online Course; a misnomer from day one) will do for them what MOOCs didn't do for higher education? There are two issues here: quality and dependability.

In 2012, the "year of the MOOC", the ed-tech world was full of breathless excitement over a vision of higher education consisting of a handful of "superprofessors" recording lectures that would be seen by millions of students, with the rest of the functions of the university automated away. There was just one snag, noticed by MOOC pioneer, superprofessor, and founder of Udacity Sebastian Thrun. "We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don't educate people as others wished, or as I wished. We have a lousy product," he said. That is not to say that there isn't a market for lousy products. As the president of San Jose State University cheerfully admitted of their own MOOC program, "It could not be worse than what we do face to face." It's not hard to imagine a certain class of institution happy to rip off their students by outsourcing their instruction to a tech firm, but harder to see why a business would want to rip themselves off on an inferior mode of training. Technology-intensive modes of learning work best among tech-savvy, self-modivated learners, so-called "roaming autodidacts". Ask yourself how many of your employees fit into that category; they are a very small minority among the general population.

The other problem is gambling on a product that depends on multiple platforms which reside in the hands of multiple vendors, completely beyond your own control. The longevity of these vendors is not guaranteed, and application development platforms are discontinued on a regular basis. Sticking with large, successful, reputable vendors is no guarantee; Google, for instance, is notorious for euthanising its "Software-as-a-Service" (SaaS) offerings, regardless of the fanfare with which they were launched. You may be willing to trade quality for affordability in the short term, but future migration costs are a matter of "when", not "if".