Information Technology

Kevin Logan on Milo Yiannopoulos’ Editor’s Notes

I’ve been avoiding talking too much about politics this week as I simply haven’t had the strength to tackle the issues in as much detail as they deserve. The issues that have been raised in the media this week – the continuing running down of the NHS, the growth of food banks, homelessness and grinding poverty, all to make the poor poorer and inflate the already bloated incomes of the Tory elite – make me absolutely furious. And I’ve been feeling so under the weather that, quite simply, I couldn’t face blogging about them and making myself feel worse mentally as well as physically.

But this is slightly different.

Slate has published a piece about the guidance notes Alt-Right Trumpist cheerleader Milo Yiannopoulos has received from his publishers at Simon and Schuster. In this short video, Kevin Logan, scourge of anti-feminists, racists and general Nazis, goes through the notes, and it’s hilarious.

There are pages and pages of them. And the more you read, the funnier it gets.

You remember Milo Yiannopoulos? He was one of the rising stars of the Alt-Right. He’s anti-feminist, anti-immigration and, in many people’s eyes, racist, although he’s denied that he actually has any Nazi connections. All this despite the fact that he was filmed in a bar getting Hitler salutes from a party of Alt-Right fans.

He was the IT correspondent for Breitbart, many of whose founders, managers and leading staff are racists, and have been described as such by the anti-racism, anti-religious extremism organisation and site Hope Not Hate. Yiannopoulos has constantly denied that he’s racist or bigoted by playing the race and sexuality card. He’s half-Jewish, gay, and his partner is Black. And so he argues that he can’t possibly be prejudiced against people of different ethnicities and gays. Well, possibly. But he has made some extremely bigoted, racist and homophobic comments, quite apart from his anti-feminism.

He describes himself as ‘a virtuous troll’. Others just call him a troll. That’s all he is. He’s only good at writing deliberately offensive material, but is otherwise completely unremarkable. But he’s British public school elite, and so Americans, who should know much better, assume that somehow he’s more cultured, knowledgeable, better educated and insightful than he actually is. Sam Seder commented that if Yiannopoulos wasn’t British, nobody would take any notice of him. I think it’s a fair comment. But it does show the snobbery that goes with class and accent. Incidentally, when I was a kid reading comics, my favourite characters were the Thing in the Fantastic Four, and Powerman, in Powerman and Iron Fist. And it was partly because of their accents. Stan Lee has a terrible memory, and to help him remember which character said what, he used to give them different voices, sometimes based on who was in the media at the time. He made the Thing talk like Jimmy Durante. The Thing was a space pilot, but his speech was that of the New York working class. I liked him because he was kind of a blue-collar Joe, like my family.

The same with Powerman. He was a Black superhero, real name Luke Cage, who had been subjected to unethical medical experiments to create a superman by a corrupt prison governor after being wrongly convicted. I didn’t understand the racial politics around the strip, but liked the character because he was another lower class character with a working class voice. He also had the same direct approach as the Thing in dealing with supervillains. Whereas Mr. Fantastic, the leader of the Fantastic Four, and Cage’s martial artist partner in fighting crime, Iron Fist, would debate philosophically how to deal with the latest threat to the world and the cosmos (according to the demands of reason and science in Mr. Fantastic’s case, and ancient Chinese mystical traditions in Iron Fist’s), the Thing and Powerman simply saw another megalomaniac who needed to be hit hard until he cried for mercy and stopped trying to take over the world or the universe.

But I digress. Back to Milo. Milo was due to have a book published, but this fell through after he appeared on Joe Rogan’s show defending child abuse. Yiannopoulos had been sexually abused himself by a paedophile Roman Catholic priest, but believed that he had been the predator in that situation. From what I understand, the victims of sexual abuse often unfairly blame themselves for their assault, so I’m quite prepared to believe that something like that happened to Yiannopoulos. What was unusual – and revolting – was that Yiannopoulos appeared to feel no guilt or regret at all about the incident.

Very, very many people were rightly disgusted. He was sacked from Breitbart, dropped by a number of other companies, his speaking tour had to be cancelled, and the book deal he had managed to finagle fell through.

Well, as Sergeant Major Shut Up used to say on It Ain’t ‘Alf Hot, Mum, ‘Oh, dear. How sad. Never mind.’ It couldn’t happen to a nicer bloke, and Yiannopoulos got a taste of the kind of invective and vitriol he poured on the ‘SJWs’ and the Left.

He appeared later on to ‘clarify’ his statement – not an apology – saying that he now knew he was the victim of child abuse, and stating that he didn’t promote or approve of the sexual abuse of children. But the damage was done.

Now it seems Yiannopoulos’ book deal is back on, though Simon and Schuster really aren’t happy with the manuscript.

Comments include recommendations that he remove the jokes about Black men’s willies, stop calling people ‘cucks’, and stop sneering at ugly people. One of these is particularly hilarious, as his editor writes that you can’t claim that ugly people are attracted to the Left. ‘Have you seen the crowd at a Trump rally?’ Quite. I saw the front row of the crowd during the BBC’s coverage of the Tory party convention one year, and they were positively horrific. It seemed to be full of old school country squire types, as drawn by Gerald Scarfe at his most splenetic.

The guidance goes on with comments like ‘No, I will not tolerate you describing a whole class of people as mentally retarded’, and then factual corrections. Like ‘This never happened’. ‘This never happened too.’ ‘No, you’re repeating fake news. There was no Satanism, no blood and no semen’. At one point the editor demands that an entire chapter be excised because it’s just off-topic and offensive.

Here’s the video.

There probably isn’t anything unusual in the amount of editing that Simon and Schuster require. Mainstream publishing houses often request changes or alterations to the manuscript. It happens to the best writers and academics. Years ago I read an interview with the editors of some of the authors of the world’s most influential books. One of them was Germaine Greer’s editor. Greer had sent in a manuscript about cross-dressing in Shakespeare. A fair enough subject, as there are a lot of female characters disguising themselves as boys in the Bard’s plays. But the editor had the insight that Greer was far more interested in gender roles, and suggested she write about that instead. And the result was The Female Eunuch.

At a much lower level of literature, Private Eye had a good chortle about one of ‘Master Storyteller’ Jeffrey Archer’s tawdry epics. Apparently the gossip was that it went through seven rewrites. Ian Fleming’s editor for the Bond books, according to one TV documentary, was a gay man with a keen interest in dressing well. Which is why some of the sex in Bond was less explicit than Fleming intended, but also why Bond became a suave, stylish dresser, fighting supervillains in impeccably cut dinner suits.

No shame in any of this, then. But what makes it funny is that it’s happened to Yiannopoulos, who seems to have been too much of an egotist to think that anything like it could ever really happen to him. Looking through the comments, it’s also clear that the editor really doesn’t like his bigotry, and the invective he spews against racial minorities and the disadvantaged. I got the impression that he or she really didn’t want to have anything to do with the book, but has presumably been told they had to work with Yiannopoulos because the publishers were going to put it out anyway, no matter what anyone else in the company felt.

And the editor’s clear dislike of his bigotry is a problem for Yiannopoulos, because he’s a troll, and that’s just about all he does: pour out sneers, scorn and abuse, like a male version of Ann Coulter, another right-winger who’s far less intelligent than she thinks she is. And I know that grammatical standards are a bit looser now than they were a few years ago, but when you have the comment ‘This is not a sentence’, it’s clear that Yiannopoulos is failing at one of the basic demands made of any writer, from the editors of small press magazines to the biggest publishing houses, newspapers and magazines. They all insist that you should write properly, in grammatically correct sentences. But Yiannopoulos has shown that he can’t do that either.

As for the kind of literary snobbery that used to look down very hard on comics and graphic novels, while promoting opinionated bigots like Yiannopoulos as ‘serious’ writers, my recommendation is that if you’re given a choice between going to a comics convention or seeing Milo, go to the comics convention. You’ll be with nicer people, and the comics creators on the panels are very good speakers, themselves often very literate and cultured. I can remember seeing Charles Vess at the UKCAC Convention in Reading in 1990. Vess is a comics artist, but he’s also produced cover art for SF novels. He gave a fascinating talk, with slides, about the great artists who have influenced him. And one of the highlights was listening to the publisher of DC, Roy Kanigher, whose accent was very broad New York. Didn’t matter. He was genuinely funny, to the point where the interviewer lost control of the proceedings and Kanigher had the crowd behind him all the way.

Which shows what a lot of people really know already: having a British public school accent does not make someone a genius, or mean that they’re capable of producing anything worth reading. Comics at their best can be brilliant. They open up children’s and adults’ imaginations, the art can be frankly amazing, and quite often they deal with difficult, complex issues in imaginative ways. Think of Neil Gaiman, who started off as one of the writers at 2000 AD before writing the Sandman strip for DC. Or Alan Moore.

Yiannopoulos is the opposite. All he does is preach hate, trying to get us to hate our Black, Asian and Latin brothers and sisters, despise the poor, and tell women to know their place. He has no more right to be published, regardless of his notoriety, than anyone else. And the editor’s demands for amendments show it.

Oh, and as regards publishing fake news, he’d have had far less sympathy from Mike, if by some misfortune Mike had found himself as Yiannopoulos’ editor. Proper journalists are expected to check their facts, which Mike was always very keen on. It was why he was respected by the people he actually dealt with when he was working as a journalist. The problem often comes higher up, at the level of the newspaper editors and publishers. In the case of Rupert Murdoch, I’ve read accounts of his behaviour at meetings with his legal staff which show that Murdoch doesn’t actually care about publishing libellous material, if the fine will cost less than the revenue from the extra copies of the paper the fake news will sell. Fortunately it appears that Simon and Schuster’s editors don’t quite have that attitude. But who knows how long this will last under Trump. The man is determined to single-handedly destroy everything genuinely great and noble in American culture.

E-Com at MC11 is effort to hijack basic internet governance issues

Published by Anonymous (not verified) on Tue, 05/12/2017 - 7:22am

Chakravarthi Raghavan

As issues relating to the monopolistic/oligopolistic control over information and data by the Silicon Valley technology giants and their platforms are beginning to attract adverse public and political attention around the world, these technology platforms (Google, Facebook, Twitter) are attempting to hijack the issue of internet governance and democracy by writing trade rules at the WTO under the rubric of “e-commerce”.

Scholars and specialists in communication issues have been studying and focussing on this issue for a while, but some recent “incidents” and actions by these platforms have now brought the issue to the centre of political debate in various countries in relation to issues of Democracy, pluralism and democratic governance.

The latest example is that the “tweets” from The Hindu were not appearing in Twitter’s search results. The Hindu is a leading English language daily newspaper of India printed and published from several centres, and its Twitter handle has over 4.5 million verified followers. And when The Hindu’s attention was drawn to this, and its internet desk took up the matter with Twitter, its tweets began appearing again in the “search results”. (See article here by The Hindu’s Readers’ Editor A. S. Panneerselvan.)

Twitter admitted to The Hindu digital team that the @the_hindu handle got “inadvertently” caught in its spam filter. Funnily though, real spam seems to escape the “spam filters” of most email service enterprises/platforms, and floods the regular in-boxes of email users, often resulting in recipients’ mailboxes “becoming full, and unable to accept new genuine messages”.

So much for the ability of these tech giants and platforms (Google, Facebook, Twitter, Microsoft) to filter out spam!

In an email communication to this writer, Prof. Dean Baker, Co-Director of the Washington DC-based Center for Economic and Policy Research (CEPR), comments that it is an “amazing story” of The Hindu’s tweets not appearing on Twitter’s search results, and Twitter’s explanation that The Hindu’s tweets “inadvertently” got caught in its spam filter.

“There are a variety of different issues here,” Prof. Baker says. “But most immediately, these huge platforms (Google, Facebook, Twitter) need to be regulated in the same way the phone company was regulated when it had a monopoly.”

“The phone company could not ‘accidentally’ deny service to a political party or organization it didn’t like. We need similar rules for these platforms. They also should not be allowed to use their platforms as springboards to other lines of business. That isn’t the whole story of a democratic media, but it seems a simple first step.”

On The Hindu Twitter issue, Richard Hill, a civil society activist and independent consultant based in Geneva, Switzerland, and formerly a senior official at the International Telecommunication Union (ITU), notes that “many of us have noticed that much of the news we read is the same, no matter which newspaper or web site we consult: they all seem to be recycling the same agency feeds. To understand why this is happening, there are few better analyses than the one developed by media scholar Robert McChesney in his most recent book, Digital Disconnect.”

McChesney is a Professor in the Department of Communication at the University of Illinois at Urbana-Champaign, specializing in the history and political economy of communications. He is the author or co-author of more than 20 books.

In reviewing McChesney’s book, Richard Hill says (the review cited below in full was originally published online at “boundary2.org”, with the title “Internet vs Democracy”, and is reproduced here in full with permission):

“Many see the internet as a powerful force for improvement of human rights, living conditions, the economy, rights of minorities, etc. And indeed, like many communications technologies, the internet has the potential to facilitate social improvements. But in reality the internet has recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities.

One might have expected that democracies would have harnessed the internet to serve the interests of their citizens, as they largely did with other technologies such as roads, telegraphy, telephony, air transport, pharmaceuticals (even if they used these to serve only the interests of their own citizens and not the general interests of mankind).

But this does not appear to be the case with respect to the internet: it is used largely to serve the interests of a few very wealthy individuals, or certain geo-economic and geo-political interests.

As McChesney puts the matter: “It is supremely ironic that the internet, the much-ballyhooed champion of increased consumer power and cutthroat competition, has become one of the greatest generators of monopoly in economic history” (p131 in the print edition).

This trend to use technology to favor special interests, not the general interest, is not unique to the internet. As Josep Ramoneda puts the matter: “We expected that governments would submit markets to democracy and it turns out that what they do is adapt democracy to markets, that is, empty it little by little.”

McChesney’s book explains why this is the case: despite its great promise and potential to increase democracy, various factors have turned the internet into a force that is actually destructive to democracy, and that favors special interests.

McChesney reminds us what democracy is, citing Aristotle (p53): “Democracy [is] when the indigent, and not the men of property are the rulers. If liberty and equality … are chiefly to be found in democracy, they will be best attained when all persons alike share in the government to the utmost.”

He also cites US President Lincoln’s 1861 warning against despotism (p55): “the effort to place capital on an equal footing with, if not above, labor in the structure of government.” According to McChesney, it was imperative for Lincoln that the wealthy not be permitted to have undue influence over the government.

Yet what we see today in the internet is concentrated wealth in the form of large private companies that exert increasing influence over public policy matters, going so far as to call openly for governance systems in which they have equal decision-making rights with the elected representatives of the people. Current internet governance mechanisms are celebrated as paragons of success, whereas in fact they have not been successful in achieving the social promise of the internet. And it has even been said that such systems need not be democratic.

What sense does it make for the technology that was supposed to facilitate democracy to be governed in ways that are not democratic? It makes business sense, of course, in the sense of maximizing profits for shareholders.

McChesney explains how profit-maximization in the excessively laissez-faire regime that is commonly called neoliberalism has resulted in increasing concentration of power and wealth, social inequality and, worse, erosion of the press, leading to erosion of democracy. Nowhere is this more clearly seen than in the US, which is the focus of McChesney’s book. Not only has the internet eroded democracy in the US, it is used by the US to further its geo-political goals; and, adding insult to injury, it is promoted as a means of furthering democracy. Of course it could and should do so, but unfortunately it does not, as McChesney explains.

The book starts by noting the importance of the digital revolution and by summarizing the views of those who see it as an engine of good (the celebrants) versus those who point out its limitations and some of its negative effects (the skeptics). McChesney correctly notes that a proper analysis of the digital revolution must be grounded in political economy. Since the digital revolution is occurring in a capitalist system, it is necessarily conditioned by that system, and it necessarily influences that system.

A chapter is devoted to explaining how and why capitalism does not equal democracy: on the contrary, capitalism can well erode democracy, the contemporary United States being a good example. To dig deeper into the issues, McChesney approaches the internet from the perspective of the political economy of communication.

He shows how the internet has profoundly disrupted traditional media, and how, contrary to the rhetoric, it has reduced competition and choice – because the economies of scale and network effects of the new technologies inevitably favor concentration, to the point of creating natural monopolies (who is number two after Facebook? Or Twitter?).

The book then documents how the initially non-commercial, publicly-subsidized internet was transformed into an eminently commercial, privately-owned capitalist institution, in the worst sense of “capitalist”: domination by large corporations, monopolistic markets, endless advertising, intense lobbying, and cronyism bordering on corruption.

Having explained what happened in general, McChesney focuses on what happened to journalism and the media in particular. As we all know, it has been a disaster: nobody has yet found a viable business model for respectable online journalism.

As McChesney correctly notes, vibrant journalism is a pre-condition for democracy: how can people make informed choices if they do not have access to valid information? The internet was supposed to broaden our sources of information. Sadly, it has not, for the reasons explained in detail in the book. Yet there is hope: McChesney provides concrete suggestions for how to deal with the issue, drawing on actual experiences in well functioning democracies in Europe.

The book goes on to call for specific actions that would create a revolution in the digital revolution, bringing it back to its origins: by the people, for the people. McChesney’s proposed actions are consistent with those of certain civil society organizations, and will no doubt be taken up in the forthcoming Internet Social Forum, an initiative whose intent is precisely to revolutionize the digital revolution along the lines outlined by McChesney.

Anybody who is aware of the many issues threatening the free and open internet, and democracy itself, will find much to reflect upon in Digital Disconnect, not just because of its well-researched and incisive analysis, but also because it provides concrete suggestions for how to address the issues.”

Chakravarthi Raghavan is Editor Emeritus of the SUNS. This comment was originally published in SUNS #8580 dated 22 November 2017.


The Trinet

Published by Anonymous (not verified) on Thu, 02/11/2017 - 8:28pm

Discuss.

Before the year 2014, there were many people using Google, Facebook, and Amazon. Today, there are still many people using services from those three tech giants (respectively, GOOG, FB, AMZN). Not much has changed, and quite literally the user interface and features on those sites have remained mostly untouched. However, the underlying dynamics of power on the Web have drastically changed, and those three companies are at the center of a fundamental transformation of the Web.

….

We forget how useful it has been to remain anonymous and control what we share, or how easy it was to start an internet startup with its own independent servers operating with the same rights GOOG servers have. On the Trinet, if you are permanently banned from GOOG or FB, you would have no alternative. You could even be restricted from creating a new account. As private businesses, GOOG, FB, and AMZN don’t need to guarantee you access to their networks. You do not have a legal right to an account on their servers, and as societies we aren’t demanding these rights as vehemently as we could, to counter the strategies that tech giants are putting forward.

The Web and the internet have represented freedom: efficient and unsupervised exchange of information between people of all nations. In the Trinet, we will have even more vivid exchange of information between people, but we will sacrifice freedom. Many of us will wake up to the tragedy of this tradeoff only once it is reality.

New SF Series Coming to Channel 4: Philip K. Dick’s Electric Dreams

Published by Anonymous (not verified) on Tue, 29/08/2017 - 5:04am

Last Sunday I caught this trailer on Channel 4 for a new science fiction series, Philip K. Dick’s Electric Dreams.

The title is obviously an homage to Dick’s most famous work, Do Androids Dream of Electric Sheep?, which became one of the great, classic SF films of all time, Ridley Scott’s Blade Runner.

The series will consist of ten self-contained episodes, each based on a different Dick short story, starring some of film and TV’s top actors. These include Timothy Spall, Steve Buscemi, Jack Reynor, Benedict Wong, Bryan Cranston, Essie Davis, Greg Kinnear, Anna Paquin, Richard Madden, Holliday Grainger, Anneika Rose, Mel Rodriguez, Vera Farmiga, Annalise Basso, Maura Tierney, Juno Temple and Janelle Monae.

One of the executive producers is Ronald D. Moore, who worked on the Star Trek series Star Trek: The Next Generation, Deep Space 9 and Voyager, as well as Battlestar Galactica and Outlander.

More information, including plot summaries, can be found on Channel 4’s website at http://www.channel4.com/info/press/news/philip-k-dicks-electric-dreams and at Den of Geek: http://www.denofgeek.com/uk/tv/philip-k-dick-s-electric-dreams/50380/philip-k-dicks-electric-dreams-7-reasons-to-get-excited.

This looks really promising. Den of Geek say in their article that the anthology format already recalls Channel 4’s Black Mirror, and The Twilight Zone. I have to say I wasn’t drawn to watch Black Mirror. It was created by Charlie Brooker, and was an intelligent, dark examination of the dystopian elements of our media-saturated modern culture and its increasing reliance on information technology. However, it just wasn’t weird enough for me. Near future SF is great, but I also like spacecraft, aliens, ray guns and robots. And this promises to have some of them, at least.

Channel 4 have also produced another intelligent, critically acclaimed SF series, Humans, based on the Swedish series Real Humans. With Black Mirror, it seems Channel 4 is one of the leading broadcasters for creating intelligent, mature Science Fiction.

Forthcoming Programme on the Destructive Consequences of IT

Next Sunday, the 6th August, BBC 2 is showing a documentary at 8.00 pm on the negative aspects of automation and information technology. Entitled Secrets of Silicon Valley, it’s the first part of a two-part series. The blurb for it in the Radio Times reads

The Tech Gods – who run the biggest technology companies – say they’re creating a better world. Their utopian visions sound persuasive: Uber say the app reduces car pollution and could transform how cities are designed; Airbnb believes its website empowers ordinary people. Some hope to reverse climate change or replace doctors with software.

In this doc, social media expert Jamie Bartlett investigates the consequences of “disruption” – replacing old industries with new ones. The Gods are optimistic about our automated future but one former Facebook exec is living off-grid because he fears the fallout from the tech revolution. (p. 54).

A bit more information is given on the listings page for the programmes on that evening. This gives the title of the episode – ‘The Disruptors’, and states

Jamie Bartlett uncovers the dark reality behind Silicon Valley’s glittering promise to build a better world. He visits Uber’s offices in San Francisco and hears how the company believes it is improving our cities. But in Hyderabad, India, Jamie sees for himself the apparent human consequences of Uber’s utopian vision and asks what the next wave of Silicon Valley’s global disruption – the automation of millions of jobs – will mean for us. He gets a stark warning from an artificial intelligence pioneer who is replacing doctors with software. Jamie’s journey ends in the remote island hideout of a former social media executive who fears this new industrial revolution could lead to social breakdown and the collapse of capitalism. (p. 56).

I find the critical tone of this documentary refreshing after the relentless optimism of last Wednesday’s first instalment of another two-part documentary on robotics, Hyper Evolution: the Rise of the Robots. This was broadcast at 9 o’clock on BBC 4, with the second part shown tomorrow – the second of August – in the same time slot.

This programme featured two scientists, the evolutionary biologist Dr. Ben Garrod and the electronics engineer Professor Danielle George, looking over the last century or so of robot development. Garrod stated that he was worried by how rapidly robots had evolved, and saw them as a possible threat to humanity. George, on the other hand, was massively enthusiastic. On visiting a car factory, where the vehicles were being assembled by robots, she said it was slightly scary to be around these huge machines, moving like dinosaurs, but declared proudly, ‘I love it’. At the end of the programme she concluded that whatever view we had of robotic development, we should embrace it, as that way we would have control over it. Which prompts the opposing response that you could also control the technology, or its development, by rejecting it outright, minimizing it or limiting its application.

At first I wondered if Garrod was there simply because Richard Dawkins was unavailable. Dawko was voted the nation’s favourite public intellectual by the readers of one of the technology or current affairs magazines a few years ago, and to many people he’s the face of scientific rationality, in the same way as the cosmologist Stephen Hawking. However, there was a solid scientific reason for his involvement: robotics engineers have solved certain problems by copying animal and human physiology. For example, Japanese cyberneticists had studied the structure of the human body to create the first robots shown in the programme. These were two androids that looked and sounded extremely lifelike. One of them, the earlier model, was modelled on its creator to the point where it was at one time an identical likeness. When the man was asked how he felt about getting older and less like his creation, he replied that he was having plastic surgery so that he would continue to look as youthful, and as much like his robot, as possible.

Japanese engineers had also studied the human hand, in order to create a robot pianist that, when it was unveiled over a decade ago, could play faster than a human performer. They had also solved the problem of getting machines to walk as bipeds like humans by giving them a pelvis, modelled on the human bone structure. But now the machines were going their own way. Instead of confining themselves to copying the human form, they were taking new shapes in order to fulfil specific functions. The programme makers wanted to leave you in no doubt that, although artificial, these machines were nevertheless living creatures. They were described as ‘a new species’. Actually, they aren’t, if you want to pursue the biological analogy. They aren’t a new species for the simple reason that there isn’t simply one variety of them. Instead, they take a plethora of shapes according to their different functions. They’re far more like a phylum, or even a kingdom, like the plant and animal kingdoms. The metal kingdom, perhaps?

It’s also highly problematic comparing them to biological creatures in another way. So far, none of the robots created have been able to reproduce themselves, in the same way biological organisms from the most primitive bacteria through to far more complex organisms, not least ourselves, do. Robots are manufactured by humans in laboratories, and heavily dependent on their creators both for their existence and continued functioning. This may well change, but we haven’t yet got to that stage.

The programme raced through the development of robots, from Eric, the robot that greeted Americans at the World’s Fair, talking to one of the engineers who’d built it and a similar metal man created by the Beeb in 1929. It also looked at the creation of walking robots, the robot pianist and other humanoid machines by the Japanese from the 1980s to today. It then hopped over the Atlantic to talk to one of the leading engineers at DARPA, the research agency for the American defence establishment. Visiting the labs, George was thrilled; the organisation receives thousands of media requests, so she was exceptionally privileged. She was shown the latest humanoid robots, as well as ‘Big Dog’, the quadruped robot carrier, which does indeed look and act eerily like a large dog.

George was upbeat and enthusiastic. Any doubts you might have about robots taking people’s jobs were answered when she met a spokesman for the automated car factory. He stated that the human workers had been replaced by machines because, while machines weren’t better, they were more reliable. But the factory also employed 650 humans running around here and there to make sure that everything was running properly. So people were still being employed. And by using robots they’d cut the price on the cars, which was good for the consumer, so everyone benefits.

This was very different from some of the news reports I remember from my childhood, when computers and industrial robots were just coming in. There were shocking news reports of factories where the human workers had been laid off, except for a crew of six. These men spent all day playing cards. They weren’t employed because they were experts, but simply because it would have been more expensive to sack them than to keep them on with nothing to do.

Despite the answers given by the car plant’s spokesman, you’re still quite justified in questioning how beneficial the replacement of human workers with robots actually is. For example, before the staff were replaced with robots, how many people were employed at the factory? Clearly, financial savings had to be made by replacing skilled workers with machines in order to make it economic. At the same time, what is the skill level of the 650 or so people now running around behind the machines? It’s possible that they are less skilled than the former car assembly workers. If that’s the case, they’d be paid less.

As for the fear of robots, the documentary traced this from Karel Capek’s 1920s play, R.U.R., or Rossum’s Universal Robots, which gave the word ‘robot’ to the English language. The word ‘robot’ means ‘serf, slave’ or ‘forced feudal labour’ in Czech. This was the first play to deal with a robot uprising. In Japan, however, the attitude was different. Workers were being taught to accept robots as one of themselves. This was because of the animist nature of traditional Japanese religion. Shinto, the indigenous religion practised alongside Buddhism, considers that there are kami, roughly spirits or gods, throughout nature, even in inanimate objects. When asked what he thought the difference was between humans and robots, one of the engineers said there was none.

Geoff Simons also deals with the western fear of robots, compared to the Japanese acceptance of them, in his book, Robots: The Quest for Living Machines. He felt that it came from the Judeo-Christian religious tradition, which is suspicious of robots because making them allows humans to usurp the Lord as the creator of living beings. See, for example, the subtitle of Mary Shelley’s book, Frankenstein – ‘the Modern Prometheus’. Prometheus was the Titan who stole fire from the gods to give to humanity. Victor Frankenstein was similarly stealing a divine secret through the manufacture of his creature.

I think the situation is rather more complex than this, however. Firstly, I don’t think the Japanese are as comfortable with robots as the programme tried to make out. One Japanese scientist, for example, has recommended that robots should not be made too humanlike, as too close a resemblance is deeply unsettling to the humans who have to work with them. Presumably the scientist was basing this on the experience of Japanese people as well as Europeans and Americans.

Much Japanese SF is also pretty much like its western counterpart, and includes robot heroes. One of the long-time comic favourites in Japan is Astroboy, a robot boy with awesome abilities, gadgets and weapons. But over here, I can remember reading the Robot Archie strip in Valiant in the 1970s, along with the later Ro-Busters and A.B.C. Warriors strips in 2000 AD. R2D2 and C3PO are two of the central characters in Star Wars, while Doctor Who had K9 as his faithful robot dog.

And the idea of robot creatures goes all the way back to the ancient Greeks. Hephaestus, the ancient Greek god of fire, was a smith. Lame, he forged three metal girls to help him walk. Pioneering inventors like Hero of Alexandria created miniature theatres and other automata. After the fall of the Roman Empire, this technology was taken up by the Muslim Arabs. The Banu Musa brothers in the 9th century AD created a whole series of machines, which they simply called ‘ingenious devices’, and Baghdad had a water clock which included various automatic figures, like the sun and moon, and the movement of the stars. This technology then passed to medieval Europe, so that by the end of the Middle Ages, lords and ladies filled their pleasure gardens with mechanical animals. The 18th century saw the fascinating clockwork machines of Vaucanson, Droz and other European inventors. With the development of steam power, and then electricity, in the 19th century came stories about mechanical humans. One of the earliest was the ‘Steam Man’, about a steam-powered robot, which ran in one of the American magazines. This carried on into the early 20th century. One of the very earliest Italian films was about a ‘uomo macchina’, or ‘man machine’. A seductive but evil female robot also appears in Fritz Lang’s epic Metropolis. Neither film uses the term robot; Lang just calls his creation a ‘Maschinenmensch’ – a machine person.

It’s also very problematic whether robots will ever really take humans’ jobs, or even develop genuine consciousness and artificial intelligence. I’m going to have to deal with this topic in more detail later, but the questions posed by the programme prompted me to buy a copy of Hubert L. Dreyfus’ What Computers Still Can’t Do: A Critique of Artificial Reason. Initially published in the 1970s, and then updated in the 1990s, this describes the repeated problems computer scientists and engineers have faced trying to develop Artificial Intelligence. Again and again, these scientists predicted that ‘next year’, ‘in five years’ time’, ‘in the next ten years’ or ‘soon’, robots would achieve human-level intelligence, and would make all of us unemployed. The last such prediction I recall reading was way back in 1999 – 2000, when we were all told that by 2025 robots would be as intelligent as cats. All these forecasts have proven wrong. But they’re still being made.

In tomorrow’s edition of Hyper Evolution, the programme asks the question of whether robots will ever achieve consciousness. My guess is that they’ll conclude that they will. I think we need to be a little more sceptical.

Never Mind the Privacy: The Great Web 2.0 Swindle

Published by Matthew Davidson on Wed, 01/03/2017 - 1:43pm

The sermon today comes from this six-minute video from comedian Adam Conover: The Terrifying Cost of "Free" Websites

I don't go along with the implication here that the only conceivable reason to run a website is to directly make money by doing so, and that therefore it is our expectation of zero cost web services that is the fundamental problem. But from a technical point of view the sketch's analogy holds up pretty well. Data-mining commercially useful information about users is the business model of Software as a Service (SaaS) — or Service as a Software Substitute (SaaSS) as it's alternately known.

You as the user of these services — for example social networking services such as Facebook or Twitter, content delivery services such as YouTube or Flickr, and so on — provide the "content", and the service provider provides data storage and processing functionality. There are two problems with this arrangement:

  1. You are effectively doing your computing using a computer and software you don't control, and whose workings are completely opaque to you.
  2. As is anybody who wants to access anything you make available using those services.

Even people who don't have user accounts with these services can be tracked, because they can be identified via browser fingerprinting, and you can be tracked as you browse beyond the tracking organisation's website. Third party JavaScript "widgets" embedded in many, if not most, websites silently deliver executable code to users' browsers, allowing them to be tracked as they go from site to site. Common examples of such widgets include syndicated advertising, "like" buttons, social login services (e.g. Facebook login), and comment hosting services. Less transparent are third-party services marketed to the site owner, such as Web analytics. These provide data on a site's users in the form of graphs and charts so beloved by middle management, with the service provider of course hanging on to a copy of all the data for their own purposes. My university invites no fewer than three organisations to surveil its students in this way (New Relic, Crazy Egg, and of course Google Analytics). Thanks to Edward Snowden, we know that government intelligence agencies are secondary beneficiaries of this data collection in the case of companies such as Google, Facebook, Apple, and Microsoft. For companies not named in these leaks, all we can say is we do not — because as users we cannot — know if they are passing on information about us as well. To understand how things might be different, one must look at the original vision for the Internet and the World Wide Web.
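
To make the fingerprinting mechanism concrete, here is a minimal sketch of the idea, assuming nothing more than the standard attributes any page script can read. The attribute list, the hash, and the collection endpoint are illustrative inventions, not any real tracker's code:

```typescript
// Minimal illustration of browser fingerprinting: combine freely readable
// browser attributes into a reasonably stable identifier. Real trackers use
// many more signals (canvas rendering, installed fonts, audio stack, etc.).

function collectSignals(): string[] {
  return [
    navigator.userAgent,                        // browser and OS version
    navigator.language,                         // preferred language
    `${screen.width}x${screen.height}`,         // display size
    String(screen.colorDepth),
    String(new Date().getTimezoneOffset()),     // coarse location hint
    String(navigator.hardwareConcurrency ?? 0)  // CPU core count
  ];
}

// Tiny non-cryptographic hash (FNV-1a), just to turn the signals into a short ID.
function fnv1a(input: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0).toString(16);
}

const fingerprint = fnv1a(collectSignals().join("|"));

// A third-party widget embedded on thousands of sites can send this ID home on
// every page load, linking your visits across those sites without any cookies.
// The endpoint below is hypothetical.
void fetch("https://tracker.example/collect?id=" + fingerprint);
```

Blockers like the EFF's Privacy Badger, mentioned below, work by recognising and cutting off exactly this kind of third-party request.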

The Web was a victim of its own early success. The Internet was designed to be "peer-to-peer", with every connected computer considered equal, and the network which connected them completely oblivious to the nature of the data it was handling. You requested data from somebody else on the network, and your computer then manipulated and transformed that data in useful ways. It was a "World of Ends"; the network was dumb, and the machines at each end of a data transfer were smart. Unfortunately the Web took off when easy-to-use Web browsers were available, but before easy-to-use Web servers were available. Moreover, Web browsers were initially intended to be tools to both read and write Web documents, but the second goal soon fell away. You could easily consume data from elsewhere, but not easily produce and make it available yourself.

The Web soon succumbed to the client-server model, familiar from corporate computer networks — the bread and butter of tech firms like IBM and Microsoft. Servers occupy a privileged position in this model. The value is assumed to be at the centre of the network, while at the ends are mere consumers. This translates into social and economic privilege for the operators of servers, and a role for users shaped by the requirements of service providers. This was, breathless media commentary aside, the substance of the "Web 2.0" transformation.

Consider how the ideal Facebook user engages with their Facebook friends. They share an amusing video clip. They upload photos of themselves and others, while in the process providing the machine learning algorithm of Facebook's facial recognition surveillance system with useful feedback. They talk about where they've been and what they've bought. They like and they LOL. What do you do with a news story that provokes outrage, say the construction of a new concentration camp for refugees from the endless war on terror? Do you click the like button? The system is optimised, on the users' side, for face-work, and de-optimised for intellectual or political substance. On the provider's side it is optimised for exposing social relationships and consumer preferences; anything else is noise to be minimised.

In 2014 there was a minor scandal when it was revealed that Facebook allowed a team of researchers to tamper with Facebook's news feed algorithm in order to measure the effects of different kinds of news stories on users' subsequent posts. The scandal missed the big story: Facebook has a news feed algorithm.  Friending somebody on Facebook doesn't mean you will see everything they post in your news feed, only those posts that Facebook's algorithm selects for you, along with posts that you never asked to see. Facebook, in its regular day-to-day operation, is one vast, ongoing, uncontrolled experiment in behaviour modification. Did Facebook swing the 2016 US election for Trump? Possibly, but that wasn't their intention. The fracturing of Facebook's user base into insular cantons of groupthink, increasingly divorced from reality, is a predictable side-effect of a system which regulates user interactions based on tribal affiliations and shared consumer tastes, while marginalising information which might threaten users' ontological security.
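
To make that concrete, here is a deliberately toy sketch of what "having a news feed algorithm" means. The scoring formula is invented for illustration and bears no relation to Facebook's actual system; the structural point is simply that the feed is a ranked, filtered selection rather than a chronological record:

```typescript
// Toy feed ranker, purely illustrative: posts are scored by predicted
// engagement and affinity, then sorted, so material that flatters existing
// tastes and relationships rises while low-engagement substance sinks.

interface Post {
  id: string;
  authorAffinity: number;  // how often you interact with this friend (0..1)
  predictedLikes: number;  // engagement the platform expects the post to get
  ageHours: number;
}

function score(post: Post): number {
  const freshness = 1 / (1 + post.ageHours); // newer scores higher
  return post.authorAffinity * post.predictedLikes * freshness;
}

function buildFeed(candidates: Post[], limit: number): Post[] {
  // A ranked, filtered selection -- not everything your friends posted.
  return [...candidates].sort((a, b) => score(b) - score(a)).slice(0, limit);
}

// An amusing clip from a close friend outranks a substantive news story from
// someone you rarely interact with, regardless of which was posted first.
const feed = buildFeed(
  [
    { id: "funny-video", authorAffinity: 0.9, predictedLikes: 120, ageHours: 6 },
    { id: "news-report", authorAffinity: 0.2, predictedLikes: 15, ageHours: 1 }
  ],
  10
);
console.log(feed.map(p => p.id)); // ["funny-video", "news-report"]
```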

Resistance to centralised, unaccountable, proprietary, user-subjugating systems can be mounted on two fronts: minimising current harms, and migrating back to an environment where the intelligence of the network is at the ends, under the user's control. You can opt out of pervasive surveillance with browser add-ons like the Electronic Frontier Foundation's Privacy Badger. You can run your own instances of software which provide federated, decentralised services equivalent to the problematic ones, such as:

  • GNU Social is a social networking service similar to Twitter (but with more features). I run my own instance and use it every day to keep in touch with people who also run their own, or have accounts on an instance run by people they trust.
  • Diaspora is another distributed social networking platform more similar to Facebook.
  • OpenID is a standard for distributed authentication, replacing social login services from Facebook, Google, et al.
  • Piwik is a replacement for systems like Google Analytics. You can use it to gather statistics on the use of your own website(s), but it grants nobody the privacy-infringing capability to follow users as they browse around a large number of sites.
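
The design principle behind self-hosted analytics is simply that visitor data never leaves a server you control. As a rough illustration of that principle (this is a hypothetical first-party logger, not Piwik's actual code or API), a few lines of Node are enough to record visits to your own site:

```typescript
// Hypothetical first-party page-view logger: an illustration of the
// self-hosted analytics idea, not Piwik's actual code or API.
import { createServer } from "http";
import { appendFile } from "fs";

const server = createServer((req, res) => {
  // Each page on your own site requests /hit?page=/some/path (for example via
  // a tiny <img> tag or a fetch call), so only you ever see the resulting log.
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/hit") {
    const line = JSON.stringify({
      time: new Date().toISOString(),
      page: url.searchParams.get("page") ?? "unknown",
      referrer: req.headers.referer ?? ""
    }) + "\n";
    appendFile("pageviews.log", line, (err) => {
      if (err) console.error(err);
    });
  }
  res.writeHead(204);
  res.end();
});

server.listen(8080);
```

Because the log sits on your own machine, no third party acquires the cross-site view of your visitors that hosted analytics services accumulate.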

The fatal flaw in such software is that few people have the technical ability to set up a web server and install it. That problem is the motivation behind the FreedomBox project. Here's a two and a half minute news story on the launch of the project: Eben Moglen discusses the freedom box on CBS news

I also recommend this half-hour interview, pre-dating the Snowden leaks by a year, which covers much of the above with more conviction and panache than I can manage: Eben Moglen on Facebook, Google and Government Surveillance

Arguably the stakes are currently as high in many countries in the West as they were in the Arab Spring. Snowden has shown that for governments of the Five Eyes intelligence alliance there's no longer a requirement for painstaking spying and infiltration of activist groups in order to identify your key political opponents; it's just a database query. One can without too much difficulty imagine a Western despot taking to Twitter to blurt something like the following:

"Protesters love me. Some, unfortunately, are causing problems. Huge problems. Bad. :("

"Some leaders have used tough measures in the past. To keep our country safe, I'm willing to do much worse."

"We have some beautiful people looking into it. We're looking into a lot of things."

"Our country will be so safe, you won't believe it. ;)"

The Politics of Technology

Published by Matthew Davidson on Fri, 24/02/2017 - 4:03pm

"Technology is anything that doesn't quite work yet." - Danny Hillis, in a frustratingly difficult to source quote. I first heard it from Douglas Adams.

Here is, at minimum, who and what you need to know:

Organisations

Sites

  • Boing Boing — A blog/zine that posts a lot about technology and society, as well as, distressingly, advertorials aimed at Bay Area hipsters.

People

Reading

Viewing

[I'm aware of the hypocrisy in recommending videos of talks about freedom, privacy and security that are hosted on YouTube.]

 

 

Tuesday, 1 November 2016 - 1:12pm

Published by Matthew Davidson on Tue, 01/11/2016 - 2:00pm

COFFS Harbour company Janison has today launched a cloud-based enterprise learning solution, developed over several years working with organisations such as Westpac and Rio Tinto.

Really? In 2016 businesses are supposed to believe that a corporate MOOC (Massive Open Online Course; a misnomer from day one) will do for them what MOOCs didn't do for higher education? There are two issues here: quality and dependability.

In 2012, the "year of the MOOC", the ed-tech world was full of breathless excitement over a vision of higher education consisting of a handful of "superprofessors" recording lectures that would be seen by millions of students, with the rest of the functions of the university automated away. There was just one snag, noticed by MOOC pioneer, superprofessor, and founder of Udacity Sebastian Thrun. "We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don't educate people as others wished, or as I wished. We have a lousy product," he said. That is not to say that there isn't a market for lousy products. As the president of San Jose State University cheerfully admitted of their own MOOC program, "It could not be worse than what we do face to face." It's not hard to imagine a certain class of institution happy to rip off their students by outsourcing their instruction to a tech firm, but harder to see why a business would want to rip themselves off on an inferior mode of training. Technology-intensive modes of learning work best among tech-savvy, self-motivated learners, so-called "roaming autodidacts". Ask yourself how many of your employees fit into that category; they are a very small minority among the general population.

The other problem is gambling on a product that depends on multiple platforms which reside in the hands of multiple vendors, completely beyond your own control. The longevity of these vendors is not guaranteed, and application development platforms are discontinued on a regular basis. Sticking with large, successful, reputable vendors is no guarantee; Google, for instance, is notorious for euthanising their "Software-as-a-Service" (SaaS) offerings on a regular basis, regardless of the fanfare with which they were launched. You may be willing to trade quality for affordability in the short term, but future migration costs are a matter of "when", not "if".