Robots

Do We Need Ethical AI?

Amanda Sharkey has produced a very good paper on robot ethics which reviews recent research and considers the right way forward – it’s admirably clear, readable and quite wide-ranging, with a lot of pithy reportage. I found it comforting in one way, as it shows that the arguments have had a rather better airing to date than I had realised.

To cut to the chase, Sharkey ultimately suggests that there are two main ways we could respond to the issue of ethical robots (using the word loosely to cover all kinds of broadly autonomous AI). We could keep on trying to make robots that perform well, so that they can safely be entrusted with moral decisions; or we could decide that robots should be kept away from ethical decisions. She favours the latter course.

What is the problem with robots making ethical decisions? One point is that they lack the ability to understand the very complex background to human situations. At present they are certainly nowhere near a human level of understanding, and it can reasonably be argued that the prospects of their attaining that level of comprehension in the foreseeable future don’t look that good. This is certainly a valid and important consideration when it comes to, say, military kill-bots, which may be required to decide whether a human being is hostile, belligerent, and dangerous. That’s not something even humans find easy in all circumstances. However, while absolutely valid and important, it’s not clear that this is a truly ethical concern; it may be better seen as a safety issue, and Sharkey suggests that that applies to the questions examined by a number of current research projects.

A second objection is that robots are not, and may never be, ethical agents, and so lack the basic competence to make moral decisions. We saw recently that even Daniel Dennett thinks this is an important point. Robots are not agents because they lack true autonomy or free will and do not genuinely have moral responsibility for their decisions.

I agree, of course, that current robots lack real agency, but I don’t think that matters in the way suggested. We need here the basic distinction between good people and good acts. To be a good person you need good motives and good intentions; but good acts are good acts even if performed with no particular desire to do good, or indeed if done from evil but confused motives. Now current robots, lacking any real intentions, cannot be good or bad people, and do not deserve moral praise or blame; but that doesn’t mean they cannot do good or bad things. We will inevitably use moral language in talking about this aspect of robot behaviour just as we talk about strategy and motives when analysing the play of a chess-bot. Computers have no idea that they are playing chess; they have no real desire to win or any of the psychology that humans bring to the contest; but it would be tediously pedantic to deny that they do ‘really’ play chess and equally absurd to bar any discussion of whether their behaviour is good or bad.

I do give full weight to the objection here that using humanistic terms for the bloodless robot equivalents may tend to corrupt our attitude to humans. If we treat machines inappropriately as human, we may end up treating humans inappropriately as machines. Arguably we can see this already in the recent arguments against moral blame, usually framed as arguments against punishment. That framing sounds kindly, but it seems clear to me that such arguments might also undermine human rights and dignity. I take comfort from the fact that no-one is making this mistake in the case of chess-bots; no-one thinks they should keep the prize money or be set free from the labs where they were created. But there’s undoubtedly a legitimate concern here.

That legitimate concern perhaps needs to be distinguished from a certain irrational repugnance which I think clearly attaches to the idea of robots deciding the fate of humans, or having any control over them. To me this very noticeable moral disgust which arises when we talk of robots deciding to kill humans, punish them, or even constrain them for their own good, is not at all rational, but very much a fact about human nature which needs to be remembered.

The point about robots not being moral persons is interesting in connection with another point. Many current projects use extremely simple robots in very simple situations, and it can be argued that the very basic rule-following or harm prevention being examined is different in kind from real ethical issues. We’re handicapped here by the alarming background fact that there is no philosophical consensus about the basic nature of ethics. Clearly that’s too large a topic to deal with here, but I would argue that while we might disagree about the principles involved (I take a synthetic view myself, in which several basic principles work together) we can surely say that ethical judgements relate to very general considerations about acts. That’s not necessarily to claim that generality alone is in itself definitive of ethical content (it’s much more complicated than that), but I do think it’s a distinguishing feature. That carries the optimistic implication that ethical reasoning, at least in terms of cognitive tractability, might not otherwise be different in kind from ordinary practical reasoning, and that as robots become more capable of dealing with complex tasks they might naturally tend to acquire more genuine moral competence to go with it. One of the plausible arguments against this would be to point to agency as the key dividing line; ethical issues are qualitatively different because they require agency. It is probably evident from the foregoing that I think agency can be separated from the discussion for these purposes.

If robots are likely to acquire ethical competence as a natural by-product of increasing sophistication, then do we need to worry so much? Perhaps not, but the main reason for not worrying, in my eyes, is that truly ethical decisions are likely to be very rare anyway. The case of self-driving vehicles is often cited, but I think our expectations must have been tutored by all those tedious trolley problems; I’ve never encountered a situation in real life where a driver faced a clear-cut decision about saving a bus load of nuns at the price of killing one fat man. If a driver follows the rule: ‘try not to crash, and if crashing is unavoidable, try to minimise the impact’, I think almost all real cases will be adequately covered.
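
To make the point concrete, here is a minimal sketch of how such a rule could be followed quite mechanically, with no trolley-style ethical reasoning involved. It is purely illustrative Python under invented assumptions: the Manoeuvre class, the choose_manoeuvre function and all the numbers are hypothetical, not drawn from any real vehicle’s control stack.

# A minimal, purely illustrative sketch of the rule discussed above:
# "try not to crash, and if crashing is unavoidable, try to minimise
# the impact". All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    collision_probability: float  # estimated chance of a crash (0.0 to 1.0)
    expected_impact: float        # estimated severity if a crash happens

def choose_manoeuvre(options):
    """Apply the two-part rule: avoid crashing if any option allows it;
    otherwise pick the option that minimises the expected impact."""
    safe = [m for m in options if m.collision_probability == 0.0]
    if safe:
        # Rule 1: try not to crash.
        return safe[0]
    # Rule 2: crashing is unavoidable, so minimise the impact.
    return min(options, key=lambda m: m.expected_impact)

if __name__ == "__main__":
    options = [
        Manoeuvre("brake hard", 0.3, 2.0),
        Manoeuvre("swerve left", 0.0, 0.0),
        Manoeuvre("carry straight on", 0.9, 8.0),
    ]
    print(choose_manoeuvre(options).name)  # prints: swerve left

Nothing in the sketch is recognisably ‘ethical’: it is ordinary practical reasoning over estimates, which is just what the argument above suggests almost all real driving decisions amount to.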

A point to remember is that we actually do often make rules about this sort of thing, which a robot could follow without needing any ethical sense of its own, so long as its understanding of the general context was adequate. We don’t have explicit rules about how many fat men outweigh a coachload of nuns only because we’ve never really needed them; if it happened every day we’d have debated it and made laws that people would have to know in order to pass their driving test. While there are no laws, even humans are in doubt and no-one can say definitively what the right choice is; so we can hardly complain that the robot’s choice in such circumstances would be the wrong one.

I do nevertheless have some sympathy with Sharkey’s reservations. I don’t think we should hold off from trying to create ethical robots, though; we should go on, not because we want to use the resulting bots to make decisions, but because the research itself may illuminate ethical questions in ways that are interesting (a possibility Sharkey acknowledges). Since, on my view, we’re probably never really going to need robots with a real ethical sense – and if we did, there’s a good chance they would have developed the required competence naturally – this looks to me like a case where we can have our cake and eat it (if that isn’t itself unethical).

Scientists Told to Halt Development of War Robots

This week’s been an interesting one for robot news. A few days ago there was a piece about the creation of a robot that can draw and paint thanks to facial recognition software; the robot’s art has been sold commercially. This follows news of an artistic group in France that has also developed an art robot. I’ll see if I can fish that story out, as it sounds like one of the conceits of 2000AD is becoming science fact. The Galaxy’s Greatest Comic told its readers that all its strips were the work of robots, so that the credits read ‘Script Robot X’ and ‘Art Robot Y’. Of course it was all created by humans, just as it really wasn’t edited by a green alien from Betelgeuse called Tharg. But it was part of the fun.

Killer robots, however, are no fun at all. Despite having featured in Science Fiction for a very long time, autonomous military machines really are a very ominous threat to humanity. In today’s I for 15th February 2019 there was a report by Tom Bawden on page 11 about human rights campaigners telling scientists at an American symposium on the technology that these machines should be pre-emptively banned. The article, ‘Scientists warned over ‘killer robots’ in future wars’, runs

Killer robots pose a threat to humanity and should be pre-emptively banned under an international treaty, the world’s biggest gathering of scientists was told yesterday.

Lethal, autonomous weapons – military robots that can engage and kill targets without human control – do not yet exist.

But rapid advances in autonomy and artificial intelligence mean they are well on their way to becoming a reality, delegates attending the American Association for the Advancement of Science’s symposium on the technology were told in Washington DC.

A poll conducted in 26 countries found that 54 per cent of Britons – and 61 per cent of respondents overall – opposed the development of killer robots that would select and attack targets without human intervention.

“Killer robots should be banned in a similar way to anti-personnel landmines,” said Mary Wareham, of the arms division at the campaign group Human Rights Watch, who also co-ordinates the Campaign to Stop Killer Robots.

“The security of the world and future of humanity hinges on achieving a ban on killer robots,” she added. “Public sentiment is hardening against the prospect of fully autonomous weapons. Bold, political leadership is needed for a new treaty to pre-emptively ban these weapons systems”.

The article was accompanied by a picture of one of the robots from the film Terminator Genisys, with a caption stating that it was perhaps unsurprising that most Britons oppose the development of such robots, but they wouldn’t look quite like those in the film.

I’ve put up several pieces before about military robots and the threat they pose to humanity, including a piece from the popular science magazine Focus, published sometime in the 1990s, if I recall. Around about that time one state or company announced that it intended to develop such machines, and was immediately met with condemnation by scientists and campaigners. Their concern is that such machines don’t have the human quality of compassion. Once released, they could kill indiscriminately, cutting down civilians and soldiers alike. The scientists were also concerned that if truly intelligent killing machines are developed, they could turn on us and begin wiping us out or enslaving us. This was one of the threats to humanity’s future in the book Our Final Century by the British Astronomer Royal, Martin Rees. When I saw him speak at the Cheltenham Festival of Science about his book a few years ago, one of the audience said that perhaps it would be a good thing if humanity was overthrown by the robots, because they could be better for the environment. Well, they could, I suppose, but it’s still not something I’d like to see happen.

Kevin Warwick, the robotics professor at the University of Reading, is also very worried about the development of such machines. In his 1990s book, March of the Machines, he describes how, as far back as the 1950s, the Americans developed an autonomous military vehicle consisting of a jeep adapted with a machine gun. He also discusses how one of the robots currently at the university could be turned into a lethal killing machine. This is a firefighting robot: it has a fire extinguisher and instruments to detect fire, and when it sees one, it rushes towards it and puts it out using the extinguisher. Warwick states, however, that if you replaced the extinguisher with a gun, gave the robot a neural net and then trained it to shoot people with blue eyes, say, then it would do just that until it ran out of power.

This comes at the end of the book, but its introduction is also chilling. It foresees a future, around 2050, when the machines really will have taken over. Those humans that have not been exterminated by the robots are kept as slaves, to work in those parts of the world that are still inaccessible or inhospitable to the robots, and to hunt down and kill the very few surviving humans that remain free. Pretty much like the far future envisioned by the SF writer Gregory Benford in his ‘Galactic Centre’ novels.

Warwick was, however, very serious about the threat posed by these robots. I can remember seeing him speak in Cheltenham as well, and one of the audience asked whether he still believed that this was a real threat that could occur around that time. He said he did, but that he’d revised the time at which it could become a real possibility.

Warwick has also said that one reason he began to explore cyborgisation – the cybernetic enhancement of humans with robotic technology – was that he was so depressed by the threat robots cast over our future. Augmenting ourselves with high technology was a way we could compete with them, something Benford also explores in his novels through an alien race that has pursued just such a course. This, however, poses its own risks of loss of humanity, as depicted in Star Trek’s Borg and Dr. Who’s Cybermen.

This article sounds like something from Science Fiction, and I don’t think robots are anywhere near sophisticated enough to pose an existential threat to humanity right now. But killer robots are being developed, and very serious robotic scientists and engineers are very worried about them. Mary Wareham, Human Rights Watch and the Campaign to Stop Killer Robots are right. This technology needs to be halted now. Before it becomes a reality.

Book Review: Robot Rights by David J. Gunkel

In Robot Rights, David J. Gunkel explores the question of whether rights should be extended to robots, examining the philosophical foundations of four key positions and their implications. Gunkel’s interrogation of what has been seen as an ‘unthinkable’ idea offers a valuable and accessible contribution that will prompt reflection on the place of humans in the world and our relationship with other entities of our own making, recommends Ignas Kalpokas.

Robot Rights. David J. Gunkel. MIT Press. 2018.

The post-human turn in thinking about rights, privileges and agency has resulted in efforts to overturn anthropocentrism in considering both living and non-living things as well as machinic and algorithmic extensions of human beings (see here for a useful overview). However, discussing robot rights has remained, by author David J. Gunkel’s own admission, an ‘unthinkable’ idea, something that is susceptible to distrust at best and ridicule at worst. Hence, his new book Robot Rights is a crucial innovation in the way we think about our proper place in the world and relationships with entities of our own making. And while this questioning of the specificity and exclusivity of humanness is what connects the book to the wider post-humanist literature, Gunkel simultaneously engages with a broad spectrum of other literature spanning the domains of technology, law, communication, ethics and philosophy.

In trying to establish whether robots can and should have rights, Gunkel explores four main propositions, starting with an assertion that robots neither can nor should have rights. After all, robots are typically perceived to be mere tools or technological artefacts, designed and manufactured for the specific purpose of human use, i.e. as a means to an end rather than ends in themselves. As a result, the argument goes, there is simply no basis for a moral or legal status to arise, implying also that humans have no obligations to robots as independent entities. The only obligations towards robots would arise from them being somebody else’s property.

On the other hand, this mode of thinking opens up some fundamental questions that Gunkel is right to point out. Particularly, as robots get ever more sophisticated and autonomous, their influence on the social and psychological states of human beings is increasingly on a par with that of fellow humans, and significantly exceeds the influence that mere tools can exert. Hence, it would not be unreasonable to assume that such affective capacity, instead of the nature of the affecting object, should count as the main criterion for attribution of moral status and, therefore, rights and privileges. As a result, it would perhaps be wrong to reduce all technological artefacts to tools regardless of their design and operation.

But there is an even deeper point that Gunkel raises: we are postulating a very Eurocentric idea of what ‘human nature’ actually is by emphasising the distinctness of human beings from their surroundings, while other traditions embrace very different ways of imagining the same relationships. As a result, Gunkel asserts, simplistically denying robot rights on the basis of their different nature ‘is not just insensitive to others but risks a kind of cultural and intellectual imperialism’ (77).

The second proposition entertained by Gunkel asserts that robots both can and should have rights. It is a chiefly future-oriented proposition: although in their current stage of development robots are not yet capable of meriting rights, at some stage in the future (probably sooner rather than later) as they become more ‘human-like’, robots will cease to be mere things and will become moral subjects instead. Once that happens and robots, making use of proper artificial intelligence, become feeling and self-reflective conscious beings that enjoy autonomy and free will, it will become increasingly difficult (and morally unjustifiable) to deny robots the rights enjoyed by their fellow feeling and self-reflective conscious beings endowed with the capacity for autonomy and free will – humans. As a result, privileges, rights and immunities shall be granted. On the other hand, the same accusation of employing a Eurocentric anthropocentric standard of ‘human-like’ nature still applies, thus undermining the morality of such propositions.

The third proposition, even more anthropocentric than the previous one, stipulates that even though robots can have rights, they should not have them. The premise, as Gunkel emphasises, is deceptively simple: as far as law is concerned, the attribution of rights is a matter of fiat: once a legitimate authority following the right procedure passes a decision according rights to robots (or anything else), the latter immediately become endowed with such rights. In other words, rights do not depend on the qualities of their possessor (or possessor-to-be), as suggested by the two previous propositions, but merely on the will of the lawgiver: as such, even the current version of robots could legally be bearers of rights. However, as the proponents of this proposition suggest, the mere fact that something can be done does not mean that it should be done. Instead, this proposition is based on the assumption that we are, in fact, obliged ‘not to build things to which we would feel obligated’ (109), because the opposite would open the floodgates to uncontrollable social transformations. However, such a premise, Gunkel asserts, yet again necessitates accepting the Eurocentric and anthropocentric thesis of human exceptionality with all the intellectual imperialism that it involves, which is, obviously, not the most attractive option.

Finally, the fourth proposition stipulates that even though robots cannot have rights, they should have them nevertheless. This particular perspective finds its basis in our tendency to accord value and social standing to the things we hold dear, particularly if such artefacts exhibit a social presence of some sort, robots being the obvious candidates. In other words, we invest things with our love and/or affection and, by doing so, dignify and elevate them from mere things to something more. As a result, robot rights would inhere, once again, not in the robots themselves but in their human owners. This, however, appears to be one of the weaker propositions, and Gunkel quickly dismisses it on the grounds of ‘moral sentimentalism’, its focus on appearances and, as readers might have guessed already, anthropocentrism (the instrumentalisation of others for the purpose of our own sentimental wellbeing).

The alternative proposed by Gunkel himself involves turning to the philosopher Emmanuel Levinas and his thesis that an encounter with otherness lies at the heart of ethics. Hence, it is not some predefined set of substantive characteristics or necessary properties inherent in the encountered other that determines the latter’s status, but relationships that are extrinsic and empirically observable. To put it in a form more immediately applicable to the book’s subject:

As we encounter and interact with other entities – whether they are another human person, an animal, the natural environment, or a domestic robot – this other entity is first and foremost experienced in relationship to us (165).

The key question is, therefore, neither whether robots can have rights nor whether they should, but instead how that which I encounter ‘comes to appear or supervene before me and how I decide, in the “face of the other” […] to make a response to or to take responsibility for (and in the face of) another’ (166). On the one hand, the Levinas-inspired solution solves the problem of anthropocentric prescription of the necessary possession of quasi-human traits. On the other hand, however, despite somewhat defusing the sentimentalism for which Gunkel criticises the fourth proposition, this approach still retains anthropocentrism in another way – by implying the necessity of a human subject experiencing ‘the other’ and the encounter itself.

Although the issue of robot rights remains essentially unsolved in the book (which Gunkel openly acknowledges), the problematisation of the matter is itself a valuable and meaningful contribution, opening up the otherwise ‘unthinkable’ proposition for serious consideration. Moreover, being accessibly written, the book is likely to appeal far beyond an academic readership. The caveat is, of course, that any closure to this debate will have to be worked out independently. Hence, a prospective reader has to be adventurous enough to engage in some intellectual DIY.

Ignas Kalpokas is currently assistant professor at LCC International University and lecturer at Vytautas Magnus University (Lithuania). He received his PhD from the University of Nottingham. Ignas’s research and teaching covers the areas of international relations and international political theory, primarily with respect to sovereignty and globalisation of norms, identity and formation of political communities, the political use of social media, the political impact of digital innovations and information warfare. He is the author of Creativity and Limitation in Political Communities: Spinoza, Schmitt and Ordering (Routledge, 2018). Read more by Ignas Kalpokas.

Note: This review gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics. 


Zarjaz! Rebellion to Open Studio for 2000AD Films

Here’s a piece of good news for the Squaxx dek Thargo – the Friends of Tharg, editor of the Galaxy’s Greatest Comic. According to today’s I, 26th November 2018, Rebellion, the comic’s current owners, have bought a film studio and plan to make movies based on 2000AD characters. The article, on page 2, says

A disused printing factory in Oxfordshire is to be converted into a major film studio. The site in Didcot has been purchased by Judge Dredd publisher Rebellion to film adaptations from its 2000 AD comic strips. The media company based in Oxford hopes to create 500 jobs and attract outside contractors.

Judge Dredd, the toughest lawman of the dystopian nightmare of Megacity 1, has been filmed twice: once as Judge Dredd in the 1990s, starring Sylvester Stallone as Dredd, and then six years ago in 2012 as Dredd, with Karl Urban in the starring role. The Stallone version was a flop and widely criticized. The Dredd film was acclaimed by fans and critics, but still didn’t do very well. One possible reason is that Dredd is very much a British take on the weird absurdities of American culture, and so doesn’t appeal very much to an American audience. The other is that Dredd is very much an ambiguous hero. He’s a comment on Fascism, and was initially suggested by co-creator Pat Mills as a satire of American Fascistic policing. The strip has a very strong satirical element, but it nevertheless means that the reader is expected to identify at least partly with a Fascist, while recognizing just how dreadful Megacity 1 and its justice system are. That requires some intellectual tightrope-walking, though it’s something Dredd fans have shown themselves more than capable of doing. Except some of the really hardcore fans, who see Dredd as a role model. In interviews Mills has wondered where these people live. Do they have their own weird chapterhouse somewhere?

Other 2000AD strips that looked like they were going to make the transition from the printed page to the screen, albeit the small one of television, were Strontium Dog and Dan Dare. Dare, of course, was the Pilot of the Future, created for Marcus Morris’s Eagle and superbly drawn by Frank Hampson and Frank Bellamy. He was revived for 2000AD when it was launched in the 1970s, where his strip was intended to be the lead before losing that spot to Dredd. The strip was then revived again for the Eagle when it was relaunched in the 1980s. As I remember, Edward Norton was to star as Dare.

Strontium Dog came from 2000AD’s companion SF comic, StarLord, and was the tale of Johnny Alpha, a mutant bounty hunter, his norm partner, the Viking Wulf, and the Gronk, a cowardly alien that suffered from a lisp and a serious heart condition, but who could eat metal. It was set in a future where the Earth had been devastated by nuclear war. Mutants were a barely tolerated minority, forced to live in ghettos after rising in rebellion against the extermination campaign waged against them by Alpha’s bigoted father, Nelson Bunker Kreelman. Alpha and his fellow muties worked as bounty hunters, the only job they could legally do, hunting down the galaxy’s crims and villains.

Back in the 1990s the comic’s then publishers tried to negotiate a series of deals with Hollywood to bring their heroes to the big screen. These were largely unsuccessful, and intensely controversial. In one deal, the rights to one character were sold for only a pound, over the heads of the creators. They weren’t consulted, and naturally felt very angry and bitter about the deal.

This time, it all looks a lot more optimistic. I’d like to see more 2000AD characters come to life, on either the big screen or TV. Apart from Dredd, it’d be good to see Strontium Dog and Dare realized for the screen at last. Other strips I think should be adapted are Slaine, the ABC Warriors and The Ballad of Halo Jones. Slaine, a Celtic warrior strip based on Celtic myths, legends and folklore, and set in the period before rising sea levels separated Britain, Ireland and Europe, is very much rooted in Britain and Ireland. It could therefore be filmed using some of the megalithic remains, hillforts and ancient barrows as locations, in both the UK and Eire. The ABC Warriors, robotic soldiers fighting injustice, as well as the Volgan Republic, on Earth and Mars, would possibly be a little more difficult to make, as it would take both CGI and robotics engineering to create the Warriors. But it could nevertheless be done. There was a very good recreation of an ABC Warrior in the 1990s Judge Dredd movie, although this didn’t do much more than run amok killing the judges. It was a genuine machine, however, rather than a man in a costume or an animation, whether with a model or computer graphics. And the 1990 SF movie Hardware, which ripped off the ‘Shock!’ tale from 2000AD, showed that it was possible to create a very convincing robot character on a low budget.

The Ballad of Halo Jones might be more problematic, but for different reasons. The strip told the story of a young woman who managed to escape the floating slum of an ocean colony to go to New York. She then signed on as a waitress aboard a space liner, before joining the army to fight in a galactic war. It was one of the comic’s favourite strips in the 1980s, and for some of its male readers it was their first exposure to something with a feminist message. According to Neil Gaiman, the strip’s creator, Alan Moore, had Jones’ whole life plotted out, but the story ended with her killing of the Terran leader, General Cannibal, on the high-gravity planet Moab. There was a dispute between Moore and IPC over the ownership of the strip and over pay. Moore felt he was treated badly by the comics company, and left for DC, never to return to 2000AD’s pages. Halo Jones was turned into a stage play by one of the northern theatres, and I don’t doubt that even thirty years after she first appeared, Jones would still be very popular. But for it to be properly adapted for film or television, it would have to be done with the involvement of the character’s creators, Moore and Ian Gibson, just as the cinematic treatment of the other characters should involve their creators. And this might be difficult, given that Moore understandably feels cheated of the ownership of his characters after the film treatments of Watchmen and V For Vendetta.

I hope that there will be no problems getting the other 2000AD creators on board, and that we can soon look forward to some of the comic’s many great strips finally getting onto the big screen.

Splundig vur thrig, as the Mighty One would say.

Thoughts and Prayers for Pittsburgh after Nazi Shooting Outrage

The people of Britain, as I’m sure others were across the world, were shocked by the news of the terrible shooting yesterday at the Tree of Life synagogue in Pittsburgh. BBC News this evening reported that the gunman killed eleven people. He walked into this place of worship carrying a semi-automatic rifle and two handguns. I think the dead included two police officers. One of his victims was 97. What makes this even more heinous is that, as I understand it, it was done during a service for babies.

This, unfortunately, is only the latest mass shooting in the Land of the Free. There have been many, many, indeed too many others – at schools, nightclubs, sports events and other places of worship. A little while ago another racist shooter gunned down the folks in a black church. Another maniac attacked a Sikh gurdwara. And the Beeb’s reporter also stated that there had been another shooting in Kentucky, which had been overshadowed by the Pittsburgh shooting.

The alleged killer, Robert Bowers, is reported to have a history of posting on anti-Semitic websites. Yesterday, 27th October 2018, Hope Not Hate published on their website a piece about Bowers by their researcher Patrik Hermansson. Bowers had been a frequent poster on Gab, a social network associated with the Alt-Right. Hermansson states that Bowers’ profile was removed from the network after the piece was published, but that archived material he had posted was retained. His profile banners in recent months included the number 1488. This is a White Supremacist code. The 14 refers to the infamous 14 Words of one particular neo-Nazi. I can’t quite remember the exact quote, but it’s something about creating a White homeland and securing ‘a future for White children’. The ‘88’ bit is simply a numerical code: 8 stands for the 8th letter of the alphabet, which is H, so 88 = HH, which stands for ‘Heil Hitler’. On the 21st June 2018 he posted this prayer:

Lord,

Make me fast and accurate. Let my aim be true and my hand faster than those who would seek to destroy me. Grant me victory over my foes and those that wish to do harm to me and mine. Let not my last thought be “If only I had my gun” and Lord if today is truly the day that You call me home, let me die in a pile of brass.

He also watched videos by Colin McCarthy, a far-right author of books claiming that Whites now suffer more racial violence at the hands of Blacks than the reverse, and that racial discrimination against Blacks is a hoax.

He has also posted messages expressing his disappointment that George Soros hasn’t been assassinated, presumably a reference to the far-right QAnon conspiracy movement. Another message supported the Rise Above Movement, a far-right ‘Fight Club’.

He also posted that ‘HIAS likes to bring in invaders that kill our people. I can’t sit by and let my people get slaughtered.’ The Beeb tonight said that he was angry at a Jewish charity for bringing Jews into the country. This sounds like the post they were referring to.

https://www.hopenothate.org.uk/2018/10/27/exclusive-pittsburgh-shooters-social-media-profile-reveals-white-supremacist-views/

From this it appears that Bowers believed in all the stupid conspiracy theories about the Jews secretly plotting to destroy the White race. It’s foul nonsense which has been disproved again and again, but there are still people who believe and are determined to promote it.

The Beeb in their report also discussed whether anything would be done about the availability of firearms in America after this. Their reporter said ‘No’. On the other hand, he said that while Trump couldn’t be blamed for this outrage – he condemned it – there would now be pressure on him to retreat from some of his incendiary rhetoric, which appears to have played a role in encouraging another right-winger to post letter bombs to Barack Obama and Hillary Clinton. This would, however, be a problem for Trump, as it has all occurred in the run-up to the mid-term elections, when the contrary pressure is to step up verbal attacks on political opponents.

Kevin Logan, a vlogger who attacks and refutes the misogynist Men’s Rights movement and various racist and Fascistic individuals on YouTube, posted a video last night arguing that Trump really is anti-Semitic. This was based on some of Trump’s comments, which appear to be dog-whistle remarks about the Jews. To everyone not a Nazi, they appear perfectly innocuous; but to the members of the Alt-Right, they’re clear expressions of his own racial hatred. And then there’s his support amongst the Alt-Right, and his foul statement equivocating about the moral responsibility for the violence at the ‘Unite the Right’ racist rally in Charlottesville last year. Remember, he claimed that there were ‘good people on both sides’. No: the violence was done by the Nazis and Klansmen who turned up, and it goes without saying that people chanting ‘the Jews will not replace us’ are not good people. Trump also took a suspiciously long time distancing himself from, and condemning, a nasty comment from David Duke, a leader of the Klan down in Louisiana, who is, needless to say, bitterly anti-Semitic. Logan’s an atheist, so he doesn’t offer prayers, but he states very clearly that he stands in solidarity and sympathy with the victims of the Pittsburgh shooter. And he ends his piece with the Spanish anti-Fascist slogan ‘No Pasaran!’ – ‘They shall not pass’.

I don’t think Trump is an anti-Semite, as his son-in-law, Jared Kushner, is Jewish and his daughter converted to Judaism to marry him. But his supporters include Fascists, and Trump does seem to have far-right sympathies. And while he has rightly condemned the shooting, he doesn’t condemn the type of people who support and commit these actions – people like the Alt-Right and Richard Spencer, or Sebastian Gorka and Steve Bannon, the far-right politicos who served in his cabinet.

And the problem isn’t confined to Trump by any means. Some of the rhetoric coming out of the Republican party is extraordinarily venomous. I can remember one Republican pastor denouncing Hillary Clinton back in the 1990s as ‘the type of woman who turns to lesbianism, leaves her husband, worships Satan and sacrifices her children’. Which is not only poisonous, but stark staring bonkers. Secular Talk a few years ago commented on the two hosts of a church radio station who declared that Barack Obama was full of a genocidal hatred towards Whites and was planning to kill everyone with a White skin in the US. He would, they blithely announced, kill more people than Mao and Stalin combined. No, he didn’t. As for the conspiracy theorist Alex Jones, he claimed that Barack Obama was possessed by Satan, and that Hillary was having a lesbian affair with one of her aides, and was a robot, at least from the waist down. Or she was an alien, or possessed by aliens. He also said something about her having sex with goblins. Oh yes, and she was also a Satanic witch. Back to Barack Obama: Jones claimed that he was planning to have a state of emergency declared in order to force people into FEMA camps and take their guns away. He also said on his programme that the Democrats were running a paedophile ring out of a Washington DC pizza parlour. He also denied that the various school shootings that have tragically occurred were real. Instead, they were government fakes, intended to produce an outcry against guns so that, once again, the government could take the public’s guns away – leaving them vulnerable and ready to be slaughtered by the globalists.

I don’t know whether Jones is a charlatan or a nutter. It’s unclear whether he really believes this bilge, or just spouts it because it’s a money-maker and gets him noticed. Either way, YouTube and a slew of other internet sites and networks have refused to carry his material because of its inflammatory and libellous nature. Someone walked into the pizza parlour he had named as the centre of the Democrats’ paedophile ring, carrying a gun and demanding to free the kids he claimed were kept in the basement. There were, obviously, no children, and no basement. Fortunately, the incident ended without anyone being killed. The grieving parents of kids murdered in some of the school shootings he falsely claimed were fake took legal action against him, because they had people turning up accusing them of being ‘crisis actors’ sent in by the government as part of a staged event. The shootings weren’t staged, and understandably the parents were angry.

Trump’s rhetoric is part of the problem, but it’s not the whole problem. It’s not just the political rhetoric that needs to be curtailed, but also the vicious demonization of those of other races, and the encouragement of Fascist organisations. In the meantime, my thoughts and prayers are with the victims, relatives and first responders of this latest killing. May they be comforted, and no more have to suffer as they and so many before them have.

Video of Three Military Robots

This is another video I found on YouTube about robots currently under development, put up by the channel Inventions World. Of the three robots shown, one is Russian and the other two are American.

The first robot shown is the Russian one, Fyodor, now being developed by Rogozin. It’s anthropomorphic, and is shown firing two guns simultaneously from its hands on a shooting range, driving a car and performing a variety of very human-style exercises, like press-ups. The company says that it was taught to fire guns to give it instant decision-making skills, and to drive a car to make it autonomous. Although it can move and act on its own, it can also mirror the movements of a human operator wearing a mechanical suit. The company states that people shouldn’t be alarmed, as it is building AI, not the Terminator.

The next is CART, a tracked robot which looks like nothing so much as a gun and other equipment, possibly sensors, mounted on a tank’s chassis and caterpillar tracks. It seems to be one of a series of such robots designed for the American Marine Corps. The explanatory text flashes up on screen a little too quickly to read everything, but the machine seems intended to support the human troopers by supplying extra power and by carrying their equipment for them. Among the other, similar robots which appear is a much smaller unit, about the size of a human foot, seen trundling about.

The final robot is another designed by Boston Dynamics, which has already built a man-like robot and a series of very dog-like, four-legged robots, if I remember correctly. This machine is roughly humanoid. Very roughly. It has four limbs, roughly corresponding to arms and legs, except that the legs end in wheels and the arms in rubber grips, or end effectors. Instead of a head, it has a square box, and the limbs look like they’ve been put on backwards. It’s shown picking up a crate in a way which reminds me of a human doing it backwards – bending over to pick it up behind him, but as if his legs were also put on back to front. It’s also shown spinning around, leaping into the air and scooting across the test area with one wheel on the ground and the other going up a ramp.

Actually, what the Fyodor robot brings to my mind isn’t so much Schwarzenegger and the Terminator movies, but Hammerstein and his military robots from 2000AD’s ‘ABC Warriors’ strip. The operation of the machine by a human wearing a special suit also reminds me of a story in the ‘Hulk’ comic strip waaaay back in the 1970s. In this story, the Hulk’s alter ego, Banner, found himself inside a secret military base in which robots very similar to Fyodor were being developed. They too were controlled by human operators. Masquerading as the base’s psychiatrist, Banner meets one squaddie, who comes in for a session. The man is a robot operator, and tells Banner how he feels dehumanized by operating the robot. Banner’s appalled, and decides to sabotage the robots to prevent further psychological damage. He’s discovered, of course, threatened or attacked, made angry, and the Hulk and mayhem inevitably follow.

That story is very definitely a product of the ’70s, and of the period of liberal self-doubt and criticism following the Vietnam War, Nixon and possibly the CIA’s murky actions around the world, like the coup against Salvador Allende in Chile. The Hulk always was something of a countercultural hero. He was born when Banner, a nuclear scientist, caught the full force of the gamma radiation coming off a nuclear test while saving Rick, a teenager who had strayed into the test zone. Rick was an alienated, nihilistic youth, who seems to have been modelled on James Dean in Rebel Without A Cause. Banner pulls him out of his car and throws him into the safety trench, but gets caught by the explosion before he can get in himself. Banner himself was very much a square: he was one of the scientists running the nuclear tests, and his girlfriend was the daughter of the army commander in charge of them. But the Hulk was very firmly in the commander’s sights, and the strip was based around Banner trying to run away from him while searching for a cure for his new condition. Thus the Hulk would find himself fighting a series of running battles against the army, complete with tanks. The Ang Lee film of the Hulk that came out in 2003 was a flop, and it did take liberties with the Hulk’s origin, as big-screen adaptations often do with their source material. But it did get right the antagonism between the great green one and the army: the battles between the two reminded me very much of their depictions in the strip. The battle between the Hulk and his father, who now had the power to take on the properties of whatever he was in contact with, was also staged and shot very much like similar fights in the comic, so that watching the film I felt once again a bit like I had when I was a boy reading it.

As for CART and the related robots, they remind me of the tracked robot the army sends in to defuse bombs. And research on autonomous killing vehicles like them began a very long time ago. In the Second World War the Germans developed small, remotely operated robots which also moved on caterpillar tracks. These carried bombs, and their operators were supposed to send them against Allied troops, who would be killed when they exploded. Also, according to the robotics scientist Kevin Warwick of Reading University, the Americans developed an automatic killer robot consisting of a jeep with a machine gun in the 1950s. See his book, March of the Machines.

Despite the Russians’ assurances that they aren’t building the Terminator, Warwick is genuinely afraid that the robots will eventually take over and subjugate humanity. And he’s not alone. When one company announced a few years ago that it was considering making war robots, there was an outcry from scientists around the world, deeply concerned about the immense dangers of such machines.

Hammerstein and his metallic mates in ‘ABC Warriors’ have personalities and a conscience, with the exception of two: Blackblood and Mekquake. The robots shown here have none of the intelligence and humanity of their fictional counterparts. And without those qualities, the fears of the opponents of such machines are entirely justified. Critics have made the point that humans are needed on the battlefield to make the ethical decisions that robots can’t make, or find difficult – like not killing civilians, although you wouldn’t guess that from the horrific atrocities committed by real, biological, flesh-and-blood troopers.

The robots shown here are very impressive technologically, but I’d rather have their fictional counterparts created by Mills and O’Neill. They were fighting machines, but they had a higher purpose behind their violence and havoc:

Increase the peace!

What Does it Mean to be Human in the Digital Age?

A librarian, literary scholar, museum director and digital commentator explore how the digital age has shaped, and will continue to shape, the human experience and the humanities.

The TORCH Humanities and the Digital Age series will explore the relationship between the Humanities and the digital. It will consider the digital’s at once disruptive and creative potential, and imagine future territory to be prospected. Underpinning this is perhaps the most important question of all: what does it mean to be human in the digital age? How might it reshape the way we create meaning and values?

In this opening event we bring together a panel of experts from across the Humanities and the cultural sector to examine how the digital age has shaped, and will continue to shape, the human experience and the Humanities. We are joined by Tom Chatfield (author and broadcaster), Chris Fletcher (Professorial Fellow at Exeter College, Member of the English Faculty and Keeper of Special Collections at the Bodleian Library), Diane Lees (Director-General of Imperial War Museum Group) and Emma Smith (Fellow and Tutor in English, University of Oxford). The discussion is chaired by Dame Lynne Brindley (Master, Pembroke College).