Technology


‘I’ Report of Successful Test of Virgin Hyperloop Maglev Train

Published by Anonymous (not verified) on Sat, 14/11/2020 - 1:55am in

Here’s an interesting piece of science/technology news. Tuesday’s I, for 10th November 2020, carried a piece by Rhiannon Williams, ‘New tube: Hyperloop carries first passengers in 100 mph test run’, which reported that Virgin Hyperloop had successfully tested their proposed maglev transport system. This is a type of magnetically levitated train running in a sealed tunnel from which the air has been removed so that there is no atmospheric resistance. The article ran

Two passengers have become the first to use Hyperloop, a technology which claims to be the future of ultra-fast ground transport.

The demonstration took place on a 500-metre test track in the Nevada desert outside Las Vegas on Sunday.

Josh Giegel, Virgin Hyperloop’s chief technology officer and co-founder, and Sara Luchian, the company’s head of passenger experience, climbed into a Virgin Hyperloop pod before it entered an airlock inside an enclosed vacuum tube.

Footage showed the pod taking about 15 seconds to complete the journey as the air inside the tube was removed, accelerating the pod to 100 mph before it slowed to a halt.

The futuristic system is intended eventually to allow journeys of up to 670 mph using electric propulsion, and magnetic levitation in a tube, which is in near-vacuum conditions.

The Shanghai Maglev, the fastest commercial bullet train, which also uses magnetic levitation, is capable of top speeds of 300 mph, meaning it could end up being considered slow by the Hyperloop’s theoretical future standards. The fastest speed achieved by a maglev train was 375 mph on a test run in Japan.

Virgin Hyperloop was founded in 2014 and builds on a proposal by Tesla and SpaceX founder Elon Musk.

The technology could allow passengers to travel between Heathrow and Gatwick airports, which are 45 miles apart, in just four minutes, the company’s previous chief executive, Rob Lloyd, told the BBC in 2018.

Ms Luchian described the experience as “exhilarating”. It had, she added, been smooth, and “not at all like a rollercoaster”.

The business hopes to seat up to 23 passengers in a pod and make its technology “a reality in years, not decades”. Jay Walder, the current chief executive, said: “I can’t tell you how often I get asked, ‘is hyperloop safe?’ With today’s passenger testing, we have successfully answered this question, demonstrating that not only can Virgin Hyperloop safely put a person in a pod in a vacuum environment but that the company has a thoughtful approach to safety.”

The article was accompanied by this handy explanatory diagram.

The text’s blurry, but should read:

How it works

Hyperloop is a new mode of long-distance transportation that uses electromagnetic levitation and propulsion to glide a vehicle at airline speeds through a low-pressure tube.

Electromagnetic coils along the tube are supplied with an alternating current, causing them to rapidly switch polarity. Permanent magnets beneath the pod are attracted then repelled, creating forward motion and magnetic levitation.

It then shows a diagram of various other high speed vehicles with the proposed Hyperloop system for comparison. These are

Virgin Hyperloop …. 670 mph.

Boeing 787 Dreamliner …. 593 mph.

Maglev (Japan) …. 375 mph.

Javelin (UK) … 140 mph.
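
Those headline figures are easy to sanity-check. A quick back-of-the-envelope sketch in Python, using only the distances and speeds quoted above, shows the claimed four-minute Heathrow–Gatwick journey and the 15-second Nevada run are at least arithmetically consistent:

def minutes(distance_miles, speed_mph):
    # Time in minutes to cover a distance at a constant speed.
    return distance_miles / speed_mph * 60

# Heathrow to Gatwick: 45 miles at the claimed 670 mph top speed.
print(round(minutes(45, 670), 1))  # 4.0 -- matches the four-minute claim,
                                   # though only if the pod cruises at top
                                   # speed the whole way, so it is a best case

# The Nevada test: 500 metres covered in about 15 seconds.
avg_mph = (500 / 15) * 3600 / 1609.34
print(round(avg_mph, 1))  # 74.6 -- an average speed consistent with
                          # accelerating to 100 mph and braking to a halt
                          # within the 500-metre track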

Well, colour me sceptical about all this. The ‘Virgin’ part of the company’s name makes me wonder if it’s part of Beardie Branson’s empire of tat. In which case, we’re justified in wondering whether this will ever, ever actually be put into operation. After all, for the past 25 years or so Branson has been telling the good peeps who’ve bought tickets for his Virgin Galactic journeys into space that everything’s nearly complete and they’ll be going up next year. I don’t believe that his proposed Spaceship 1 or whatever it’s called will ever fly, and suspect that the whole business is being run at a loss so he can legally avoid paying tax. I don’t know how much it would cost to set up a full-scale Hyperloop line running between two real towns, or between several stops within a single city like a subway, but I’d imagine it’d cost tens, if not hundreds of millions. I think it’s too expensive for any government, whether national or local authority, to afford, at least in the present economic situation.

And on a more humorous level, it also reminds me of the rapid transit system in the 2000 AD ‘Nemesis the Warlock’ strip. This was set in a far future in which humanity cowered underground, ruled over by the Terminators, a kind of futuristic medieval crusading order dedicated to the extermination of all intelligent alien life and led by their ruthless leader, Torquemada. Earth was now called Termight, and humanity lived in vast underground cities linked by rapid transit tunnels. A system similar to the Hyperloop, the Overground, ran across Termight’s surface, which had been devastated not by aliens but by strange creatures from Earth’s future that had appeared during the construction of a system of artificial Black and White Holes linking Earth to the rest of the galaxy. These creatures included the Gooney Bird, a giant predatory bird that looked like it had evolved from Concorde, and which swept down from its nest in an abandoned city to attack the Overground trains and feed them to its young.

From: Nemesis the Warlock: Volume One, by Pat Mills, Kevin O’Neill and Jesus Redondo (Hachette Partworks Ltd: 2017)

The Hyperloop’s too close to the fictional Overground system for comfort. Will the company’s insurance cover attacks by giant rampaging carnivorous mechanical birds? The comparison’s particularly close as Termight’s surface is a desert waste, and the system was tested out in the Nevada desert.

I realise that ‘Nemesis the Warlock’ is Science Fiction, and that even with Sunday’s successful test run it’ll be years before the hyperloop system ever becomes a reality, but I think it might be wise to avoid it if it ever does. After all, you wouldn’t want to be on it when the metal claws and beak start tearing through the tunnel.

Robot Takeover Comes Nearer as Britain Intends to Employ 30,000 Robot Soldiers

Published by Anonymous (not verified) on Wed, 11/11/2020 - 9:31pm in

If this is true, then the robot revolution that’s been haunting the imagination of Science Fiction writers ever since Frankenstein and Karel Capek’s Rossum’s Universal Robots just got that bit nearer. Monday’s edition of the I for 9th November 2020 carried this chilling snippet:

Robot soldiers will fight for Britain

Thirty thousand “robot soldiers” could form a key part of the British army within two decades. General Sir Nick Carter, head of the armed forces, told Sky News that “an armed forces that’s designed for the 2030s” could include large numbers of autonomous or remotely controlled machines.

This has been worrying many roboticists and computer scientists for decades. Kevin Warwick, professor of cybernetics at Reading University, begins his 1990s book, March of the Machines, with a terrifying prediction of what the world could be like in 2050. The robots have taken over, decimating humanity. The few humans that remain are desexed slaves, used by the machines to fight against the free humans who have found refuge in parts of the world difficult or impossible for robots to operate in. Warwick is absolutely serious about the threat from intelligent robots. So serious, in fact, that he became a supporter of cyborgisation because he felt that it would only be by augmenting themselves with artificial intelligence and robotics that humans could survive. I went to see Warwick speak at the Cheltenham Festival of Science years ago. When the time came for him to answer questions from the audience, he was naturally asked whether he still believed that robots could take over, and whether this could happen as soon as 2050. He replied that he did, and that developments in robotics had brought the threat forward by several decades.

There have been a series of controversies going back decades whenever a country has announced that it intends to use robot soldiers. When this happened a few years ago, it was met with denunciations by horrified scientists. Apart from the threat of an eventual robot revolution and the enslavement of humanity, a la the Matrix, there are severe moral questions about the operation of such machines. Robots don’t have consciences, unlike humans. A machine that’s created to kill without proper constraints will carry on killing indiscriminately, regardless of whether its targets are soldiers or innocent civilians. Warwick showed this possibility in his book with a description of one of the machines his department had on its top floor. It’s a firefighting robot, equipped with sensors and a fire extinguisher. If there’s a fire, it’s programmed to run towards it and put it out. All well and good. But Warwick points out that it could easily be adapted for killing. If you replaced the fire extinguisher with a gun and gave it a neural net, you could programme it to kill people of a certain type, like those with blonde hair and blue eyes. Set free, it would continue killing such people until it ran out of bullets.

Less important, but possibly also a critical factor in the deployment of such war machines, is popular reaction to their use against human soldiers. It’s been suggested that their use in war would cause people to turn against the side deploying them, viewing them as cowards hiding behind such machines instead of facing their enemies personally, human to human, in real combat. While not as important as the moral arguments against their deployment, public opinion is an important factor. It’s why, since the Vietnam War, the western media has been extensively manipulated by the military-industrial-political complex so that it presents almost wholly positive views of our wars, like presenting the Iraq invasion as the liberation of Iraq from an evil dictator instead of a cynical attempt to grab the country’s oil reserves and state industries for the American-Saudi oil industry and western multinationals. Mass outrage at home and around the world was one of the reasons America had to pull out of Vietnam, and it’s a major factor in opposition to the current western occupation of Iraq and Afghanistan. Popular outrage and disgust at the use of robots in combat could similarly lead to Britain, and anyone else using such machines, losing the battle to win hearts and minds, and thus popular support.

But I also wonder if this isn’t the robotics companies researching these machines trying to find a market for their wares. DARPA, the American defence research agency funding much of this work, has backed some truly impressive machines. Boston Dynamics produced the ‘Big Dog’ robot, which looks somewhat like a headless robotic dog, hence its name, as a kind of robotic pack animal for the American army. It all looked very impressive, until the army complained that they couldn’t use it. Soldiers need to move silently on their enemy, but the noise produced by the robot’s engine was too loud. Hence the contract was cancelled. It could be that there are similar problems with some of the other robots being developed, and so their makers are waging some kind of PR battle to get other countries interested in them as well as America.

I’m a big fan of the 2000 AD strip, ‘ABC Warriors’, about a band of former war robots, led by Hammerstein, who are now employed fighting interplanetary threats and cosmic bad guys when not remembering the horrors they experienced in the Volgan War. These are truly intelligent machines with their own personalities and, in the case of Hammerstein and his crude, vulgar mate Rojaws, a moral conscience, which is absent in another member of the team, Blackblood, a former Volgan war robot and ruthless war criminal. I really believe that they should be turned into a movie, along with other great 2000 AD characters, like Judge Dredd. But I don’t believe that they will ever be real, because the difficulties in recreating human-type intelligence are too great, at least for the foreseeable future. Perhaps in a century’s time there might be genuinely intelligent machines like C-3PO and R2D2, but I doubt it.

The war robots now being touted are ruthless, mindlessly efficient machines, which scientists are worried could easily get out of control. I’ve blogged about this before, but the threat is real even if at present their promotion is so much PR hype by the manufacturers.

It looks to me that General Carter’s statement about using 30,000 of them is highly speculative, and probably won’t happen. But in any case, the armed forces shouldn’t even be considering using them.

Because the threat to the human race everywhere through their use is too high.

Why Google Is Facing Serious Accusations of Monopoly Practices

Published by Anonymous (not verified) on Wed, 11/11/2020 - 3:00am in

The U.S. Department of Justice filed a lawsuit against Google-Alphabet (Google’s parent company) on October 20 for a range...


The divided citizen: Robo-debt was just the beginning

Published by Anonymous (not verified) on Thu, 05/11/2020 - 3:00am in

As governments around the world digitise their services, citizens are being subjected to conflicting forces of identity consolidation and fragmentation. In the name of ‘customer focus’, governments and their agencies are constructing systems that aggregate personal data from multiple sources. The ostensible reason is to provide convenient, personalised and accessible service on par with that of Facebook or Amazon. Through using corporations as models, governments are developing systems of surveillance and control based on commercial marketing technologies. As a result, citizens are left to reconcile the fragmented personas constructed from data created for incommensurate program goals.

While the recent Australian government robo-debt offensive provides a few insights into this new online retail mode of governance, it is just a sneak preview of the directions in which digital government is heading. This episode has taught us more broadly about the perils of data sharing, in particular the careless use of incommensurate data. It has also provided insights into the willingness of bureaucrats to prioritise compliance over ‘customer service’ and, despite internal and external warnings, to (illegally) force citizens to explain discrepancies in the data. 

As demonstrated by James Scott, when a government department seeks to administer an aspect of the natural or social environment, it works from a stripped-down map of the entities involved. The field of attention is abstracted and simplified within the goals of a program. The real world, whether a forest or a city, must be reduced and ordered to a point that makes it manageable from above. Moreover, any given community is subject to disparate programs that have been developed and legislated in a piecemeal way, each addressing the perceived circumstances in the time and place of their conception. Traditionally, program data has been service-focused and transactional. As a result, government information about a single person has been scattered between programs and the individual has been represented in the context of the program, for instance as a student, a passenger, a patient or a taxpayer. This means that each of us has to deal with government such that both parties assume a separate identity defined by each program. This sometimes leads to frustration in dealing with ‘the government’ when people assume it to be a single entity, but it has the advantage of limiting the ability of a government to have a singular view of a citizen.

Digital-government advocates propose to overcome these problems by delivering ‘seamless’ or ‘joined-up’ government, in which a government acts as a single entity in a relationship with a joined-up ‘customer’. Digital government combines web and app-based interfaces with large-scale back-room social and technical infrastructures. Digitising and automating large programs such as those operated by Services Australia can involve a comprehensive organisational ‘transformation’. The online retail mode of governance that pulls this picture together is based on computing infrastructure developed for corporate marketing systems that is designed to address analogous bureaucratic trends in the private sector. The origin of these customer-relationship-management (CRM) systems was a general move by many corporations from transactional or product-based marketing to relationship marketing. This connects with the widespread availability of modern computing and communications technologies that can collect, exchange and analyse large quantities of information at a micro level. CRM systems consolidate all information about a corporation’s customer across all its products and services, enabling precise tracking, prediction and influencing of customer intentions, preferences and behaviour over time. Analysing data over time opens up the possibility of predicting customer behaviours and identifying the opportunities and risks they present to the corporation. Consultants, systems integrators and software vendors have promoted this capability to government bureaucrats, such as those in Services Australia, who are bent on removing the uncertainties created by their old transactional systems and the unreliable behaviour of their ‘customers’. With such weapons in its arsenal, Services Australia has foreshadowed its ability to profile citizens who present a risk, thus enabling it to take pre-emptive action to prevent fraud and error and tighten compliance.

Services Australia delivers programs on behalf of thirty-four federal agencies. It is currently undertaking a $1.5-billion, seven-year program to fundamentally transform the delivery of social-services payments and services. It aims to use CRM and other large modular corporate systems to consolidate the data it has on each citizen across those programs. In doing so it wants to create ‘a single view of the Customer’, built up from thirty years of existing data and drawn from real-time links to other government and non-government sources. Services Australia wants to minimise the amount of information provided by customers in favour of data obtained through linkages to the systems of external organisations. It also wants to become a (digital) platform for the services of other federal, state and local governments.

However, this process of consolidation collides with the service-specific renditions of customers and their circumstances. This was amply illustrated in the case of robo-debt, which foundered in part on the differing definitions of income between Services Australia and the Australian Taxation Office. It was up to customers to reconcile the differences. This is just one simple example of a problem identified ten years ago by Paul Henman, who pointed out that it is customers who must navigate the incommensurate requirements of multiple policy areas when they encounter joined-up interactions with government. The continuation of this trend clearly highlights the entrenched power imbalance between the public and private system designers and a disparate and individualised citizenry.  
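
The income mismatch at the heart of robo-debt is easy to illustrate. The sketch below uses deliberately simplified, hypothetical figures rather than the real benefit rules, but it shows how averaging an annual tax-office income total across fortnights can manufacture a ‘discrepancy’ for a casual worker who truthfully reported no income while on benefits:

FORTNIGHTS = 26

# A casual worker earns $1,300 a fortnight for 10 fortnights, then nothing.
# They claim benefits only in the 16 workless fortnights, truthfully
# reporting $0 income for each of them.
actual_income = [1300.0] * 10 + [0.0] * 16
reported_on_benefits = actual_income[10:]  # sixteen truthful zeros

# A robo-debt-style check averages the annual total over all 26 fortnights,
# as if the income had been earned evenly across the year.
average = sum(actual_income) / FORTNIGHTS  # 500.0 per fortnight

# The averaged figure now "contradicts" every truthful zero report,
# conjuring $500 of apparently undeclared income per benefit fortnight.
phantom_income = sum(average - r for r in reported_on_benefits)
print(phantom_income)  # 8000.0 -- the raw material of a phantom debt

It was then left to the ‘customer’ to prove that this phantom income never existed.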

There is another dimension to this accumulation of customer-centric data. Michel Foucault described a ‘disciplinary government of each and all’, where, on the one hand, individuals are rendered visible to the state in fine detail and, on the other hand, populations can be sliced and diced according to life circumstances or current and future risk to the state. In the documentation for its transformation program, Services Australia envisages ‘a circumstance based approach’ to managing customers. It explains that:

Customer Circumstance data [is] the data that relates to the events that have or will occur in a Customer’s life, as disclosed to the department by the Customer or authorised third party… As a Customer moves through their life, their Circumstances (such as marriage and other relationships, residence, employment, birth, death and disability status) change and in turn these changes affect the Customer’s Eligibility and Entitlement.

Cutting across this consolidated view, according to Services Australia, customers can also be:

identified as part of a micro-segment according to their level of complexity, access needs and preferences, and level of risk. This is measured through indicators and Circumstances (both reactive and proactive) that can be observed by the department. [This] will enable the department to target service delivery based on an assessment of the individual risk, access needs and complexity of Customers, tailoring service offers to match customer circumstances.

This may be interpreted to have a benign meaning:  that Services Australia could be more inclined to ensure that every eligible person receives their due benefit. But the robo-debt experience demonstrated that it might be more prudent to interpret ‘target’ in a negative sense: to mean that only eligible people receive their benefits. Such an interpretation is supported by the predictive-risk frame engaged by the department. Robert Castel observed that such preventive techniques ‘promote a new mode of surveillance: that of systematic predetection’, which can ‘dissolve the notion of a subject or a concrete individual, and put in its place a combinatory of factors, the factors of risk’. In either interpretation, predictive risk assessment is a licence for an entity to surveil all its clientele all the time in an effort to assert control.

What does a citizen look like in the eyes of the state when she is constructed from multiple databases and how does she respond to the resultant kaleidoscopic rendition of her? Let us suppose that she has had multiple contacts with government agencies and programs over time. She could ask to see all the information that a government has on her. (Of course, despite current freedom-of-information laws, it would be almost impossible to get all her data. The citizen-customer would be obliged to seek her data separately from each program and agency, not always with success.) She would get something like a bunch of spreadsheets with data points (and, hopefully, field names) that have been stripped of the context of the agency applications that extracted the data from and about her. Unlike an old-fashioned paper dossier, with structured documents in chronological order, she would get the raw data but not the ‘business rules’ that give it meaning and that link the data elements together into a narrative. In a scenario of networked databases, her identities are constructed on the fly within the policy logic of each circumstance and the generic global logic of customer management embedded in the CRM system.

Like other major centres of surveillance in the Australian government, such as the Department of Home Affairs, Services Australia seems to assume that multiple incommensurate views of the subject can be reconciled without difficulty. But this approach assumes that the subject could be rendered in a coherent way that would serve all the programs drawing on that single rendition—a variant of the ‘view from nowhere’. This demands that all participating programs agree on the same definition of shared entities and their characteristics. Even if such a cultural mind-melding were possible, it would bring about a power struggle to agree on a dominant world view and the meanings of its language. What tends to happen in this situation is that interested parties, especially in bureaucracies, will only push so far, leaving inconsistencies to be resolved through informal arrangements or—such as occurred with robo-debt—other less powerful parties. Thus, inconsistencies must necessarily persist and the task of resolving them will generally fall to the isolated neoliberal subject; it will be their task to prove that their lived reality is more complex than the bureaucrat’s model.



Site Maintenance

Published by Anonymous (not verified) on Thu, 05/11/2020 - 2:04am in


Daily Nous will be undergoing some maintenance this week which may result in some pages being unavailable and fewer new posts and links than usual.

Your patience is appreciated. 

The post Site Maintenance appeared first on Daily Nous.

Thunderfoot Attacks Black South African Student Who Claims Western Science Is ‘Racist’

Thunderfoot is another YouTube personality, like Carl Benjamin aka Sargon of Akkad, the Sage of Swindon, whose views I categorically don’t share. He’s a militant atheist of the same stripe as Richard Dawkins, and a scientist who shares Peter Atkins’ view that science can explain everything and leaves no room for religion or mysticism. He’s also very right wing, sneering at SJWs (Social Justice Warriors) and attacking feminism, so he’s like Sargon on that score too. But in this video he does make valid points and does an important job of defending science against the glib accusation that it’s racist.

Thunderfoot put up this video in 2016, and it seems to be his response to a video then circulating of part of a student debate at the University of Cape Town. The speaker in that video, clips of which Thunderfoot uses in his, is a Black female student who argues that western science is racist and colonialist: it arose in the context of western modernity, excludes indigenous African beliefs, and, if she had her way, would be ‘scratched out’. One of the African beliefs it excludes is the fact, as she sees it, that sangomas – African shamans – can call lightning down to strike people. She challenges her debating opponent to decolonise their mind and explain scientifically how the sangoma is able to do that. Her interlocutor is not impressed and laughs out loud at the assertion, drawing a sharp response from the moderator, who says that the debate is supposed to be a circle of respect and that they should apologise or leave. The anti-science student states that western science is totalising, urges her opponent to decolonise their mind, and calls for an African science. She also rejects gravity because Isaac Newton merely sat under a tree and saw an apple fall.

Thunderfoot answers these assertions by pointing out, quite rightly, that science is about forming models of reality with ‘predictive utility’. It is the ability of a scientific model to make useful predictions which shows that the model is an accurate description of reality. Science’s discoveries are true for everyone, regardless of whether they are male or female, Black or White. He shows a clip of militant atheist Richard Dawkins talking to another group of students, explaining that the proof that science works is that planes and rockets fly. The equations and scientific models describing them have to work, otherwise the machines don’t. Dawkins is another personality whose views I don’t share, and this blog was started partly to refute his atheist polemics, but the quote from Dawkins is absolutely right. Thunderfoot goes on to say that if African shamans really could call lightning down on people, then surely someone would have used it for military purposes. And to demonstrate, he shows a clip of Thor getting hit with a lightning bolt from an Avengers movie.

As for African science, he then hands over to another YouTuber, who talks about an attempted scam in Mugabe’s Zimbabwe. A woman claimed that she had a rock which produced refined diesel oil, and called on the government to see for themselves. Which they did. If the woman’s claim had been genuine, then Zimbabwe would have been entirely self-sufficient in diesel. However, such hopes were dashed when it was revealed that the rock had a hole bored into it, from which diesel was being pumped.

The video goes on to make the point that such ‘science denialism’ is dangerous by pointing to the claim of the former South African president, Thabo Mbeki, that HIV didn’t cause AIDS. He tried to stop people using the antiretroviral drugs used to treat HIV in favour of herbal cures that didn’t work. As a result, 300,000 people may have lost their lives to the disease.

Thunderfoot concludes that this is the situation this student would like to create: an African science which rejects gravity, asserts shamans can strike people with lightning, and in which hundreds of thousands of people die unnecessarily from AIDS. Here’s the video.

Racism and the Rejection of Conventional Science

Thunderfoot is right in that one current view in the philosophy of science is that science is about forming models of reality which can make predictions. This is the view I hold. He is also correct in that science’s findings are valid regardless of where they are made and who makes them. And I’d also argue that, rather than science, it is this young Black woman who is racist. She rejects science on the racist grounds that it was created by White Europeans. This is also the genetic fallacy, the logical mistake of judging a statement to be wrong because of the nature of the person who makes it. The Nazis, for example, made the same mistake when they rejected Einstein’s Theory of Relativity because Einstein was Jewish. They also believed that science should reflect racial identity, and so sacked Jewish mathematicians and scientists in an attempt to create a racially pure ‘Aryan’ science.

Science and the Paranormal

I don’t believe, however, that science automatically excludes the supernatural. There are very many scientists who are people of faith. Although it’s very much a fringe science – some would say pseudoscience – there is the discipline of parapsychology, which is the scientific investigation of the paranormal. Organisations like the Society for Psychical Research, founded in the 19th century, and ASSAP exist to carry out such investigations, and their members include scientists and medical professionals. I don’t think it would be at all unreasonable for parapsychologists to investigate such alleged powers by indigenous shamans, just as they investigate appearances of ghosts, psychic powers and mediumship in the west. And if it could be demonstrably proved that such shamans had the powers they claim, then science would have to accommodate that, whether it could explain it or not.

On the other hand is the argument that science shouldn’t investigate the paranormal or supernatural, not because the paranormal doesn’t exist, but because it lies outside the scope of scientific methodology, being a different field altogether. Thus science can ignore the general question of whether tribal shamans are able to conjure up lightning bolts as outside its purview and more properly the subject of metaphysics or theology. In which case, it’s left up to the individual to decide for themselves whether these shamans are able to perform such miracles.

Muti Witchcraft and Murder

Thunderfoot and his fellow YouTuber are also right to point out the harm that bad and fraudulent science can do. And there are very serious issues surrounding the promotion of indigenous African magic. Years ago a South African anthropologist defended African muti at an academic conference here in Britain. Muti is a form of magic in which someone tries to gain success and good luck through acquiring amulets made of human body parts, including fingers and genitals. It’s believed they are particularly powerful if they are cut off the victim while they’re still alive. There’s a whole black market in such body parts and amulets in South Africa, with prices varying according to the desired body part. Way back in 2004-5 the police found a human torso in the Thames. It had been wrapped in cloth of particular colours, and it was believed that it had belonged to a boy who’d been killed as part of such a ritual.

Indigenous Beliefs and the Politics of Apartheid

Years ago the small-press sceptical UFO magazine Magonia reviewed a book by the South African shaman Credo Mutwa. This was supposed to be full of ancient African spiritual wisdom. In fact it seems to have been a mixture of South African indigenous beliefs and western New Age ideas. The Magonians weren’t impressed, and one of the reasons they weren’t impressed was Mutwa himself and the political use made of him and other African shamans by the apartheid government.

Before it fell, apartheid South Africa had a policy of ‘re-tribalisation’: the promotion of the separate identities and cultures of the various indigenous peoples over whom the White minority ruled, including their traditional religious and spiritual beliefs. These peoples had intermarried and mixed to such an extent that by the 1950s they had formed a Black working class, and it was to prevent that working class becoming united that the apartheid government promoted their cultural differences in a policy of divide and rule. Mutwa was allegedly part of that policy as a government stooge.

Attacks on Science and Maths for Racism Dangerous

I’ve put up several videos now from Sargon attacking the assertion that western education, and in particular mathematics, is racist and somehow oppresses Blacks. I’m putting up this video because it does the same for the assertion that western science is also racist.

Not only are science and maths not racist, it is also very definitely not racist to reject some forms of African magic. Killing and mutilating people for good luck is absolutely abhorrent and should be condemned and banned, and those who practise it punished, regardless of its status as an African tradition. At the same time, it does need to be realised that the South African government did try to keep Black Africans down and powerless partly through the promotion of indigenous spiritual beliefs. It’s ironic that the young woman shown arguing against science does so in the apparent belief that its rejection will somehow be liberating and empowering for Black Africans. And Thunderfoot has a chuckle to himself about the irony of her arguing against science while reaching for her iPad, one of its products.

Belief in the supernatural and in the alleged powers of indigenous shamans should be a matter of personal belief. Disbelieving in them doesn’t automatically make someone a racist bigot. But this young woman’s rejection of science is racist and potentially extremely dangerous, because it threatens to deprive Black South Africans like her of science’s undoubted benefits. Just like Mbeki’s rejection of the link between HIV and AIDS led to the unnecessary deaths of hundreds of thousands of desperately ill men, women and children.

Conclusion

What is particularly irritating is that this young woman and her fellow students are affluent and, as students, highly educated. If she were poor and uneducated, her views would be understandable. But she isn’t. Instead, she uses the language and rhetoric of postmodernism and contemporary anti-colonialism. It does make you wonder what is being taught in the world’s universities, arguments about academic freedom notwithstanding.

In the past, there has been racism in science. Eugenics and the hierarchy of races devised by 19th-century anthropologists, as well as the Nazis’ attempts to create an Aryan science, are examples. But attacks on conventional science and mathematics as racist, based on no more than the fact that modern science and maths have their origins in contemporary western culture, are also racist and destructive.

Glib attacks on science by people like the young student in the above video not only threaten its integrity, but will also harm the very people who most stand to benefit. They should be thoroughly rejected.

Zoom Censors Online Session on Zoom Censorship

Published by Anonymous (not verified) on Mon, 26/10/2020 - 7:00pm in


“We Will Not Be Silenced,” an academic webinar about Zoom’s decision to cancel an earlier academic webinar, was canceled by Zoom.

The earlier webinar, scheduled to take place in September, was a talk by Leila Khaled to be hosted by San Francisco State University. Khaled is a Palestinian rights advocate who is known for taking part in the hijackings of two planes 50 years ago. According to The Verge,

The webinar was cancelled after pressure from Israeli and Jewish lobby groups including the Lawfare Project. They noted that the US government has designated the PFLP a terrorist organization, and claimed that by hosting Khaled on its service, Zoom was exposing itself to criminal liability for providing “material support or resources” to a terrorist group.

Reportedly, YouTube and Facebook also were involved in efforts to stop the talk.

In response to the cancellation of the September webinar, the New York University (NYU) chapter of the American Association of University Professors (AAUP) and NYU’s Hagop Kevorkian Center for Near Eastern Studies planned a session, “We Will not Be Silenced: Against the Censorship and Criminalization of Academic Political Speech.” It was scheduled to take place this past Friday; Zoom canceled it, too.

According to Buzzfeed, a Zoom spokesperson said:

Zoom is committed to supporting the open exchange of ideas and conversations and does not have any policy preventing users from criticizing Zoom. Zoom does not monitor events and will only take action if we receive reports about possible violations of our Terms of Service, Acceptable Use Policy, and Community Standards. Similar to the event held by San Francisco State University, we determined that this event was in violation of one or more of these policies and let the host know that they were not permitted to use Zoom for this particular event.

Zoom did not specify which policy had been violated, or how. Buzzfeed reports that “The NYU event eventually went on with Google Meet, but the effort was intercepted by ‘politically-motivated trolls,’… and the organizers had to hold it privately.”

The NYU-AAUP Executive Committee released a statement about the incident. Among other things, they write:

The shutdown of a campus event is a clear violation of the principle of academic freedom that universities are obliged to observe. Allowing Zoom to override this bedrock principle, at the behest of organized, politically motivated groups, is a grave error for any university administration to make, and it should not escape censure from faculty and students.

They urged that if Zoom continues to cancel academic webinars, particularly those “featuring Palestinian speech and advocacy,” universities should stop using the service and break any existing agreements they have with the company.

The post Zoom Censors Online Session on Zoom Censorship appeared first on Daily Nous.

Google AI Tech Will Be Used for Virtual Border Wall, CBP Contract Shows

Published by Anonymous (not verified) on Thu, 22/10/2020 - 6:06am in

After years of backlash over controversial government work, Google technology will be used to aid the Trump administration’s efforts to fortify the U.S.-Mexico border, according to documents related to a federal contract.

In August, Customs and Border Protection accepted a proposal to use Google Cloud technology to facilitate the use of artificial intelligence deployed by the CBP Innovation Team, known as INVNT. Among other projects, INVNT is working on technologies for a new “virtual” wall along the southern border that combines surveillance towers and drones, blanketing an area with sensors to detect unauthorized entry into the country.

In 2018, Google faced internal turmoil over a contract with the Pentagon to deploy AI-enhanced drone image recognition; the capability sparked employee concern that Google was becoming embroiled in work that could be used for lethal purposes and that raised other human rights concerns. In response to the controversy, Google ended its involvement with the initiative, known as Project Maven, and established a new set of AI principles to govern future government contracts.

The employees also protested the company’s deceptive claims about the project and attempts to shroud the military work in secrecy. Google’s involvement with Project Maven had been concealed through a third-party contractor known as ECS Federal.

Contracting documents indicate that CBP’s new work with Google is being done through a third-party federal contracting firm, Virginia-based Thundercat Technology. Thundercat is a reseller that bills itself as a premier information technology provider for federal contracts.

The contract was obtained through a FOIA request filed by Tech Inquiry, a new research group that explores technology and corporate power founded by Jack Poulson, a former research scientist at Google who left the company over ethical concerns.

Not only is Google becoming involved in implementing the Trump administration’s border policy, but the contract also brings the company into the orbit of one of President Donald Trump’s biggest boosters among tech executives.

Documents show that Google’s technology for CBP will be used in conjunction with work done by Anduril Industries, a controversial defense technology startup founded by Palmer Luckey. The brash 28-year-old executive — also the founder of Oculus VR, acquired by Facebook for over $2 billion in 2014 — is an open supporter of and fundraiser for hard-line conservative politics; he has been one of the most vocal critics of Google’s decision to drop its military contract. Anduril operates sentry towers along the U.S.-Mexico border that are used by CBP for surveillance and apprehension of people entering the country, streamlining the process of putting migrants in DHS custody.

CBP’s Autonomous Surveillance Towers program calls for automated surveillance operations “24 hours per day, 365 days per year” to help the agency “identify items of interest, such as people or vehicles.” The program has been touted as a “true force multiplier for CBP, enabling Border Patrol agents to remain focused on their interdiction mission rather than operating surveillance systems.”

It’s unclear how exactly CBP plans to use Google Cloud in conjunction with Anduril or for any of the “mission needs” alluded to in the contract document. Google spokesperson Jane Khodos declined to comment on or discuss the contract. CBP, Anduril, and Thundercat Technology did not return requests for comment.

However, Google does advertise powerful cloud-based image recognition technology through its Vision AI product, which can rapidly detect and categorize people and objects in an image or video file — an obvious boon for a government agency planning to string human-spotting surveillance towers across a vast border region.
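
For a sense of what that product exposes, here is a minimal sketch using the google-cloud-vision Python client’s object localization call. The image file name and the configured credentials are assumptions for illustration, and nothing in this snippet is drawn from the contract documents:

from google.cloud import vision

# Assumes the google-cloud-vision package is installed and application
# default credentials are configured; "frame.jpg" is a hypothetical file.
client = vision.ImageAnnotatorClient()

with open("frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.object_localization(image=image)
for obj in response.localized_object_annotations:
    # Each annotation carries a label such as "Person" or "Car", a
    # confidence score, and a normalised bounding polygon in the frame.
    print(obj.name, round(obj.score, 2))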

According to a “statement of work” document outlining INVNT’s use of Google, “Google Cloud Platform (GCP) will be utilized for doing innovation projects for C1’s INVNT team like next generation IoT, NLP (Natural Language Processing), Language Translation and Andril [sic] image camera and any other future looking project for CBP. The GCP has unique product features which will help to execute on the mission needs.” (CBP confirmed that “Andril” is a misspelling of Anduril.)

The document lists several such “unique product features” offered through Google Cloud, namely the company’s powerful machine-learning and artificial intelligence capabilities. Using Google’s “AI Platform” would allow CBP to leverage the company’s immense computer processing power to train an algorithm on a given set of data so that it can make educated inferences and predictions about similar data in the future.

Google’s Natural Language product uses the company’s machine learning resources “to reveal the structure and meaning of text … [and] extract information about people, places, and events,” according to company marketing materials, a technology that can be paired with Google’s speech-to-text transcription software “to extract insights from audio conversations.”
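
In client-library terms, the entity extraction that marketing copy describes looks something like the sketch below. This is the generic google-cloud-language API with an invented sample sentence, not anything specific to the CBP contract:

from google.cloud import language_v1

# Assumes the google-cloud-language package is installed and credentials
# are configured; the sentence is invented for illustration.
client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Maria called Ahmed from El Paso on Tuesday morning.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# analyze_entities returns typed entities (PERSON, LOCATION, EVENT and so
# on), each with a salience score indicating its prominence in the text.
response = client.analyze_entities(document=document)
for entity in response.entities:
    print(entity.name, language_v1.Entity.Type(entity.type_).name)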


Although it presents no physical obstacle, Anduril’s “virtual wall” system works by rapidly identifying anyone approaching or attempting to cross the border (or any other perimeter), relaying their exact location to border authorities on the ground, offering a relatively cheap, technocratic, and less politically fraught means of thwarting would-be migrants.

Proponents of a virtual wall have long argued that such a solution would be a cost-effective way to increase border security. The last major effort, known as SBInet, was awarded to Boeing during the George W. Bush administration, and resulted in multibillion-dollar cost overruns and technical failures. In recent years, both leading Democrats and Republicans in Congress have favored a renewed look at technological solutions as an alternative to a physical barrier along the border.

Anduril surveillance offerings consist of its “Ghost” line of autonomous helicopter drones operated in conjunction with Anduril “Sentry Towers,” which bundle cameras, radar antennae, lasers, and other sophisticated sensors atop an 80-foot pole. Surveillance imagery from both the camera-toting drones and sensor towers is ingested into “Lattice,” Anduril’s artificial intelligence software platform, where the system automatically flags suspicious objects in the vicinity, like cars or people.

INVNT’s collaboration with Anduril is described in a 2019 presentation by Chris Pietrzak, deputy director of CBP’s Innovation Team, which listed “Anduril towers” among the technologies being tested by the division that “will enable CBP operators to execute the mission more safely and effectively.”


And a 2018 Wired profile of Anduril noted that one sentry tower test site alone “helped agents catch 55 people and seize 982 pounds of marijuana” in a 10-week span, though “for 39 of those individuals, drugs were not involved, suggesting they were just looking for a better life.” The version of Lattice shown off for Wired’s Steven Levy appeared to already implement some AI-based object recognition similar to what Google provides through the Cloud AI system cited in the CBP contract.

The documents do not spell out how, exactly, Google’s object recognition tech would interact with Anduril’s technology. But Google has excelled in the increasingly competitive artificial intelligence field; creating a computer system from scratch capable of quickly and accurately interpreting complex image data without human intervention requires an immense investment of time, money, and computer power to “train” a given algorithm on vast volumes of instructional data.

“We see these smaller companies who don’t have their own computational resources licensing them from those who do, whether it be Anduril with Google or Palantir with Amazon,” Meredith Whittaker, a former Google AI researcher who previously helped organize employee protests against Project Maven and went on to co-found NYU’s AI Now Institute, told The Intercept.

“This cannot be viewed as a neutral business relationship. Big Tech is providing core infrastructure for racist and harmful border regimes,” Whittaker added. “Without these infrastructures, Palantir and Anduril couldn’t operate as they do now, and thus neither could ICE or CBP. It’s extremely important that we track these enabling relationships, and push back against the large players enabling the rise of fascist technology, whether or not this tech is explicitly branded ‘Google.’”

Anduril is something of an outlier in the American tech sector, as it loudly and proudly courts controversial contracts that other larger, more established companies have shied away from. The company also recruited heavily from Palantir, another tech company with both controversial anti-immigration government contracts and ambitions of being the next Raytheon. Both Palantir and Anduril share a mutual investor in Peter Thiel, a venture capitalist with an overtly nationalist agenda and a cozy relationship with the Trump White House. Thiel has donated over $2 million to the Free Forever PAC, a political action group whose self-professed mission includes, per its website, working to “elect candidates who will fight to secure our border [and] create an America First immigration policy.”

Luckey has repeatedly excoriated Google for abandoning the Pentagon, a decision he has argued was driven by “a fringe inside of their own company” that risks empowering foreign adversaries in the race to adopt superior AI military capabilities. In comments last year, he dismissed any concern that the U.S. government could abuse advanced technology and criticized Google employees who signed a letter protesting the company’s involvement in Project Maven over ethical and moral concerns.

“You have Chinese nationals working in the Google London office signing this letter, of course they don’t mind if the United States has good military technology,” said Luckey, speaking at the University of California, Irvine. “Of course they don’t mind if China has better technology. They’re Chinese.”

As The Intercept previously reported, even as Luckey publicly campaigned against Google’s withdrawal from Project Maven, his company quietly secured a contract for the very same initiative.

Anduril’s advanced line of battlefield drones and surveillance towers — along with its eagerness to take defense contracts now viewed as too toxic to touch by rival firms — has earned it lucrative contracts with the Marine Corps and Air Force, in addition to its Homeland Security work. In a 2019 interview with Bloomberg, Anduril chair Trae Stephens, also a partner at Thiel’s venture capital firm, dismissed the concerns of American engineers who complain. “They said, ‘We didn’t sign up to develop weapons,’” Stephens said, explaining, “That’s literally the opposite of Anduril. We will tell candidates when they walk in the door, ‘You are signing up to build weapons.’”

Palmer Luckey has not only campaigned for more Silicon Valley integration with the military and security state, he has pushed hard to influence the political system. The Anduril founder, records show, has personally donated at least $1.7 million to Republican candidates this cycle. On Sunday, he hosted President Donald Trump at his home in Orange County, Calif., for a high-dollar fundraiser, along with former German ambassador Richard Grenell, Kimberly Guilfoyle, and other Trump campaign luminaries.

Anduril’s lobbyists in Congress also pressed lawmakers to include increased funding for the CBP Autonomous Surveillance Tower program in the DHS budget this year, a request that was approved and signed into law. In July, around the time the program funding was secured, the Washington Post reported that the Trump administration deemed Anduril’s virtual wall system a “program of record,” a “technology so essential it will be a dedicated item in the homeland security budget,” reportedly worth “several hundred million dollars.”

The autonomous tower project awarded to Anduril and funded through CBP is reportedly worth $250 million. Records show that $35 million for the project was disbursed in September by the Air and Marine division, which also operates drones.

Anduril’s approach contrasts sharply with Google’s. In 2018, Google tried to quell concerns over how its increasingly powerful AI business could be literally weaponized by publishing a list of “AI Principles” with the imprimatur of CEO Sundar Pichai.

“We recognize that such powerful technology raises equally powerful questions about its use,” wrote Pichai, adding that the new principles “are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.” Chief among the new principles were directives to “Be socially beneficial,” “Avoid creating or reinforcing unfair bias,” and a mandate to “continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.”

The principles include a somewhat vague list of “AI applications we will not pursue,” such as “Technologies that cause or are likely to cause overall harm,” “weapons,” “surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

It’s difficult to square these commitments to peaceful, nonsurveillance AI humanitarianism with a contract that places Google’s AI power behind both a military surveillance contractor and a government agency internationally condemned for human rights violations. Indeed, in 2019, over 1,000 Google employees signed a petition demanding that the company abstain from providing its cloud services to U.S. immigration and border patrol authorities, arguing that “by any interpretation, CBP and ICE are in grave violation of international human rights law.”

“This is a beautiful lesson in just how insufficient this kind of corporate self-governance really is,” Whittaker told The Intercept. “Yes, they’re subject to these AI principles, but what does subject to a principle mean? What does it mean when you have an ethics review process that’s almost entirely non-transparent to workers, let alone the public? Who’s actually making these decisions? And what does it mean that these principles allow collaboration with an agency currently engaged in human rights abuses, including forced sterilization?”

“This reporting shows that Google is comfortable with Anduril and CBP surveilling migrants through their Cloud AI, despite their AI Principles claims to not causing harm or violating human rights,” said Poulson, the founder of Tech Inquiry.

“Their clear strategy is to enjoy the high profit margin of cloud services while avoiding any accountability for the impacts,” he added.

The post Google AI Tech Will Be Used for Virtual Border Wall, CBP Contract Shows appeared first on The Intercept.

Twitter Surveillance Startup Targets Communities of Color for Police

Published by Anonymous (not verified) on Thu, 22/10/2020 - 3:55am in


New York startup Dataminr aggressively markets itself as a tool for public safety, giving institutions from local police to the Pentagon the ability to scan the entirety of Twitter using sophisticated machine-learning algorithms. But company insiders say their surveillance efforts were often nothing more than garden-variety racial profiling, powered not primarily by artificial intelligence but by a small army of human analysts conducting endless keyword searches.

In July, The Intercept reported that Dataminr, leveraging its status as an official “Twitter Partner,” surveilled the Black Lives Matter protests that surged across the country in the wake of the police killing of George Floyd. Dataminr’s services were initially designed to help hedge funds turn the first glimmers of breaking news on social media into market-beating trades, enabling something like a supercharged version of professional Twitter dashboard TweetDeck. They have since been adopted by media outlets, the military, police departments, and various other organizations seeking real-time alerts on chaos and strife.

Dataminr’s early backers included Twitter and the CIA, and it’s not hard to see why the startup looked so promising to investors. Modern American policing hungers for vast quantities of data — leads to chase and intelligence to aggregate — and the entirety of online social media is now considered fodder. In a 2019 pitch to the FBI, Dataminr said its goal was “to integrate all publicly available data signals to create the dominant information discovery platform.” In addition to the bureau, the company has entered test programs and contracts with local and state police forces across the country.

But despite promises of advanced crime-sniffing technology, conversations with four sources directly familiar with Dataminr’s work, who asked to remain anonymous because they were not permitted to speak to the press about their employment, suggest that the company has at times relied on prejudice-prone tropes and hunches to determine who, where, and what looks dangerous. Through First Alert, its app for public sector clients, Dataminr has offered a bespoke, scariest possible version of the web: a never-ending stream of notifications of imminent or breaking catastrophes to investigate. But First Alert’s streams were assembled in ways prone to racial bias, sources said, by teams of “Domain Experts” assigned to rounding up as many “threats” as possible. Hunting social media for danger and writing alerts for cops’ iPhones and laptop screens, these staffers brought their prejudices and preconceptions along with their expertise, and were pressed to search specific neighborhoods, streets, and even housing complexes for crime, sources said.

Dataminr said in a written comment, provided by Kerry McGee of public relations firm KWT Global, that it “rejects in the strongest possible terms the suggestion that its news alerts are in any way related to the race or ethnicity of social media users,” and claimed, as Dataminr has in the past, that the firm’s practice of monitoring the speech and activities of individuals without their knowledge, on behalf of the police, does not constitute surveillance. McGee added that “97% of our alerts are generated purely by AI without any human involvement.” McGee did not clarify what share of Dataminr’s police-bound alerts — as opposed to other Dataminr alerts, like those created for news organizations and corporate clients — are created purely through “AI,” and sources contacted for this article were befuddled by the 97 percent figure.

Hunting for “Possible Gang Members” on Twitter

One significant part of Dataminr’s work for police, the sources said, has been helping flag potential gang members. Police gang databases are typically poorly regulated and have become notorious vehicles for discriminatory policing, unjust sentencing, and the criminalization of children; they’re filled with the names of thousands and thousands of young people never actually accused of any crime. Dataminr sources who spoke to The Intercept didn’t know exactly how allegedly “gang-related” tweets and other social media posts flagged via Dataminr were ultimately used by the company’s police customers. But in recent years, social media monitoring has become an important way to fill gang databases.


As part of a broader effort to feed information about crime to police under the general rubric of public “threats,” Dataminr staffers attempted to flag potential violent gang activity without the aid of any special algorithms or fancy software, sources said; instead they pored over thousands and thousands of tweets, posts, and pictures, looking for armed individuals who appeared to be affiliated with a gang. It was an approach that was neither art nor science and, according to experts in the field, a surefire way of putting vulnerable men and women of color under police scrutiny or worse.

“It wasn’t specific,” said one Dataminr source with direct knowledge of the company’s anti-gang work. “Anything that could be tangentially described as a [gang-related] threat” could get sucked into Dataminr’s platform.

With no formal training provided on how to identify or verify gang membership, Dataminr’s army of “Domain Experts” were essentially left to use their best judgment, or to defer to ex-cops on staff. If Dataminr analysts came across, say, a tweet depicting a man with a gun and some text that appeared to be gang-related, that could be enough to put the posting in a police-bound stream as containing a “possible gang member,” this source said, adding that there was little if any attempt to ever check whether such a weapon was legally possessed or obtained.

In practice, Dataminr’s anti-gang activity amounted to “white people, tasked with interpreting language from communities that we were not familiar with” coached by predominantly white former law enforcement officials who themselves “had no experience from these communities where gangs might be prevalent,” per a second source. “The only thing we were using to identify them was hashtags, possibly showing gang signs, and if there was any kind of weapon in the photo,” according to the first source. There was “no institutional definition of ‘potential gang member,’ it was open to interpretation.” All that really mattered, these sources say, was finding as much danger as possible, real or perceived, and transmitting it to the police.

In its written comments, Dataminr stated that “First Alert does not identify indicators of violent gang association or identify whether an event is a crime.” Asked whether the company acknowledges providing any gang-related alerts or comments to customers, McGee did not directly respond, saying only that “there is no alert topic for crime or gang-related events.” Dataminr did not respond to a question about the race of former law enforcement personnel it employs.


A Dataminr source said that there never appeared to be any minimum age on who was flagged as a potential gang affiliate: “I can definitely recall kids of school-age nature, late middle school to high school” being ingested into Dataminr’s streams. Compared with Dataminr’s work identifying emerging threats in Europe or the Middle East, the company’s counter-gang monitoring felt slapdash, two Dataminr sources said. “There’s a great deal of latitude in determining [gang membership], it wasn’t like other kind of content, it was far more nebulous,” said the first source, who added that Dataminr staff were at times concerned that the pictures they were flagging as evidence of violent gang affiliation could be mere adolescent tough-guy posturing, completely out of context, or simply dated: “We had no idea how old they were,” the source added. “People save [and repost] photos. It was completely open to interpretation.”

While any image depicting a “possible gang member” with a weapon would immediately be flagged and transmitted to the police, Dataminr employees, tasked with finding “threats” nationwide, questioned why some armed men were subject to software surveillance while others were not. “The majority of the focus stayed toward gangs that are historically black and Latino,” said one source. “More effort was put into inner-city Chicago gangs than the Three Percenters or things related to Aryan Brotherhood,” this source continued, adding that they recalled worried conversations with colleagues about why the company spent so much time finding online images of armed black and brown people — who may have owned or possessed such a weapon legally — but not white people with guns.

Two Dataminr sources directly familiar with these operations told The Intercept that although the company’s teams of Domain Experts were untrained and generally uninformed on the subject of American street gangs, the company employed ex-law enforcement agents as in-house “gang experts” to help scan social media.

Human Stereotypes Instead of Machine Intelligence

Although Dataminr has touted itself as an “AI” firm, two company sources told The Intercept that this overstated matters, and that most of the actual monitoring at the company was done by humans scrolling, endlessly, through streams of tweets. “They kept saying ‘the algorithm’ was doing everything,” said a Dataminr source, but “it was actually mostly humans.” Yet this large staff of human analysts was still expected to deliver the superhuman output of an actual product built on some sort of “artificial intelligence” or sophisticated machine learning. Inadequate training, combined with strong pressure to crank out content to meet internal quotas and to impress police clientele dazzled by “artificial intelligence” presentations, led to predictable problems, the two sources said. The company’s approach to crime fighting began to resemble “creating content in their heads that isn’t there,” said the second source, “thinking Dataminr can predict the future.”

As Dataminr can’t in fact predict crime before it occurs, these sources say that analysts often fell back on stereotyped assumptions, with the company going so far as to provide specific guidance to seek crime in certain areas, on the apparent assumption that those areas were rife with criminality. Neighborhoods with large communities of color, for example, were often singled out for social media surveillance in order to drum up more threat fodder for police.


“It was never targeted towards other areas in the city, it was poor, minority-populated areas,” explained one source. “Minneapolis was more focused on urban areas downtown, but weren’t focusing on Paisley Park — always ‘downtown areas,’ areas with projects.”

The two sources told The Intercept that Dataminr had at times asked analysts to create information feeds specific to certain housing projects populated predominantly by people of color, seeming to contradict the company’s 2016 claim that it does not provide any form of “geospatial analysis.” “Any sort of housing project, bad neighborhood, bad intersection, we would definitely put those in the streams,” explained one source. “Any sort of assumed place that was dangerous. It was up to the Domain Experts. It was just trial and error to see what [keywords] brought things up. Dataminr obviously didn’t care about unconscious bias, they just wanted to get the crimes before anyone else.”

Two Dataminr sources familiar with the company’s Twitter search methodology explained that although Dataminr isn’t able to provide its clients with direct access to the locational coordinates sometimes included in tweet metadata, the company itself still uses location metadata embedded in tweets, and is able to provide workarounds when asked, offering de facto geospatial analysis. At times this was accomplished using a simple keyword search through the company’s access to the Twitter “firehose,” a data stream containing every public tweet from the moment it’s published. Keyword-based trawling would immediately alert Dataminr anytime anyone tweeted publicly about a particular place. “Any time that Malcolm X Boulevard was mentioned, we would be able to see it” in a given city, explained one source by way of a hypothetical.

Dataminr wrote in its statement to The Intercept that “First Alert identifies breaking news events without any regard to the racial or ethnic composition of an area where a breaking news event occurs. … Race, ethnicity, or any other demographic characteristic of the people posting public social media posts about events is never part of determining whether a breaking news alert is sent to First Alert clients.” It also said that “First Alert does not enable any type of geospatial analysis. First Alert provides no feature or function that allows a user to analyze the locations of specific social media posts, social media users or plot social media posts on a map.”

Asked if Dataminr domain experts look for social media leads specific to certain geographic areas, McGee did not deny that they do, writing only, “Dataminr detects events across the entire world wherever they geographically occur.”


On other occasions, according to one source, Dataminr employed a “pseudo-predictive algorithm” that scrapes a user’s past tweets for clues about their location, though they emphasized this tool functioned with “not necessarily any degree of accuracy.” This allows Dataminr to build, for example, bespoke in-house surveillance streams of potential “threats” pegged to areas police wish to monitor (for instance, if a police department wanted more alerts about threatening tweets from or about Malcolm X Boulevard, or a public housing complex). These sources stressed that Dataminr would try to provide these customized “threat” feeds whenever asked by police clients, even as staff worried it amounted to blatant racial profiling and the propagation of law enforcement biases about where crimes were likely to be committed.

Dataminr told The Intercept in response that “First Alert provides no custom solutions for any government organizations, and the same First Alert product is used by all government organizations. All First Alert customers have access to the same breaking news alerts.”

Even if public sector customers use the same version of the First Alert app, the company itself has indicated that the alerts provided to customers can be customized: Its 2019 presentation to the FBI includes a slide stating that clients can adjust “user-defined criteria” like “topic selection” and “geographic filters” prior to “alert delivery.” Shown that slide from the presentation, Dataminr said it was consistent with its statement.

The specially crafted searches focused on areas of interest to police were done “mainly looking for criminal incidents in those areas,” one source explained. When asked by police departments to find criminality on social media, “areas that were predominantly considered more white” were routinely overlooked, while poorer neighborhoods of color were mined for crime content.

Another source told The Intercept of an internal project they were placed on as part of a trial relationship with the city government of Chicago, for which they were instructed to scan Twitter for “Entertainment news from the North Side, crime news from the South Side.” (It is not clear if these instructions came from the city of Chicago; the Chicago Police Department did not respond to a request for comment.)

This source explained that, through its efforts to live up to its self-created image as an engine of bleeding-edge “intelligence” about breaking events, “Dataminr is in a lot of ways regurgitating whatever the Domain Experts believe people want to see or hear” — those people in this case being the police. This can foster a feedback loop of racial prejudice: stereotyped assumptions about which keyword searches and locales might yield evidence of criminality are used to bolster the stereotyped assumptions of American police. “In a way, Dataminr and law enforcement were perpetuating each other’s biases,” the source said, forming a sort of Twitter-based perpetual motion machine of racial confirmation bias: “We would make keyword-based streams [for police] with biased keywords, then law enforcement would tweet about the crimes, then we would pick up those tweets.”

Experts Alarmed by Techniques

Experts on criminal justice, gang violence, and social media approached for this story expressed concern that Dataminr’s surveillance services have carried racially prejudiced policing methods onto the internet. “I thought there was enough info out there to tell people to not do this,” Desmond Patton, a professor and researcher on gang violence and the web at Columbia University’s School of Social Work, told The Intercept. Social media surveillance-based counter-gang efforts routinely miss any degree of nuance or private meaning, explained Patton, instead relying on the often racist presumption that “if something looks a certain way it must mean something,” an approach that attempts “no contextual understanding of how emoji are used, how hashtags are used, [which] misses whole swaths of deep trauma and pain” in policed communities.


Babe Howell, a professor at CUNY School of Law and a criminal justice scholar, shared this concern over context-flattening Twitter surveillance and the lopsided assessment of who looks dangerous. “Most adolescents experiment with different kinds of personalities,” said Howell, explaining that using “the artistic expression, the musical expression, the posturing and bragging and representations of masculinities in marginalized communities” as a proxy for possible criminality is far worse than useless. “For better or worse we have the right to bear arms, and using photos including images of weapons to collect information about people based on speech and associations just imposes one wrong on the next and two wrongs do not make a right.”

Howell said the potential damage caused by labeling someone a “possible gang member,” whether in a formal database or not, is very real. Labeling someone as gang-affiliated leads to what Howell described as “two systems of justice that are separate and unequal,” because “if someone is accused of being a gang member on the street they will be policed with heightened levels of tension, often resulting in excessive force. In the criminal justice system they’ll be denied bail, speedy trial rights, typical due process rights, because they’re seen as more of a threat. Gang allegations carry this level of prejudicial bad character evidence that would not normally be admissible.”

All of this reflects crises of American overpolicing that far predate computers, let alone Twitter. But systematized social media surveillance will only accelerate these inequities, said Ángel Díaz, a lawyer and researcher at the Brennan Center for Justice. “Communities of color use social media in ways that are readily misunderstood by outsiders,” explained Díaz. “People also digitally brand themselves in ways that can be disconnected from reality. Online puffery about gang affiliation can be done for a variety of reasons, from chasing notoriety to deterring real-world violence. For example, a person might take photos with a borrowed gun and later post them to social media over the course of a week to create a fake persona and intimidate rivals.” Similarly fraught was Dataminr’s practice of homing in on certain geographical areas: “Geo-fencing around poor neighborhoods and communities of color only aggravates this potential by selectively looking for suspicious behavior in places they’re least equipped to understand.”

Of course, both Twitter and Dataminr vehemently maintain that the service they offer — monitoring many different social networks simultaneously for any information that might be of interest to police, including protests — does not constitute surveillance, pointing to Twitter’s strict prohibitions against surveillance by partners. “First Alert does not provide any government customers with the ability to target, monitor or profile social media users, perform geospatial, link or network analysis, or conduct any form of surveillance,” Dataminr wrote to The Intercept.

But it’s difficult to wrap one’s head around these denials, given that Twitter’s anti-surveillance policy reads like a dry, technical description of exactly what Dataminr is said to have engaged in. Twitter’s developer terms of service — which govern the use of the firehose — expressly prohibit using tweets for “conducting or providing surveillance or gathering intelligence,” and order developers to “Never derive or infer, or store derived or inferred, information about a Twitter user’s … [a]lleged or actual commission of a crime.”

Twitter spokesperson Lindsay McCallum declined to answer any questions about Dataminr’s surveillance practices, but stated, “Twitter prohibits the use of our developer services for surveillance purposes. Period.” McCallum added that Twitter has “done extensive auditing of Dataminr’s tools, including First Alert, and have not seen any evidence that they’re in violation of our policies,” but declined to discuss this audit on the record.

“Twitter’s policy does not line up with its actions,” according to Díaz. “Dataminr is clearly using the Twitter API to conduct surveillance on behalf of police departments, and passing along what it finds in the form of ‘news alerts.’ This is a distinction without difference. Conducting searches of Twitter for leads about potential gang activity, much like its monitoring of Black Lives Matter protests, is surveillance. Having Dataminr analysts run searches and summarize their findings before passing it along to police doesn’t change this reality.”


Dataminr’s use of the Twitter firehose to infer gang affiliation is “totally terrifying,” said Forrest Stuart, a sociologist and head of the Stanford Ethnography Lab, who explained that even for an academic specialist with a career of research and field work spent understanding the way communities express themselves on social media, grasping the intricacies of someone else’s self-expression can be fraught. “There are neighborhoods that are less than a mile away from the neighborhoods where I have intimate knowledge, where if I open up their Twitter accounts, I trust myself to get a pretty decent sense of what their hashtags and their phrases mean,” Stuart said. “But I know that I am still inaccurate because I’m not there in that community. So, if I am concerned, as a researcher who specializes in this stuff … then you can imagine my concern on hearing that police officers are using this.”

Stuart added that “research has long shown that police officers really lack the kind of cultural competencies and knowledge that’s required for understanding the kinds of behavioral and discursive practices, aesthetic practices, taken up by urban black and brown youth,” but that “here in this Dataminr example, you’re not talking about cops, you’re now talking about private individuals [who] lack the even basic knowledge that officers are coming from, some knowledge of criminal behavior or some knowledge of gang behavior.”

Stuart believes Twitter owes its over 100 million active users, at the very least, a warning that their tweets might become fodder for a semi-automated crime dragnet, explaining that he himself uses the Twitter firehose for his ethnographic research, but had to first consent to a substantial data usage agreement aimed at minimizing harm to the people whose tweets he might study — guidelines that Dataminr doesn’t appear to have been held to. “If it doesn’t violate Twitter’s conditions by letter, doesn’t it violate them at least in the essence of what Twitter’s trying to do?” he asked. “Aren’t the terms and conditions set up so that Twitter isn’t leading to negative impacts or negative treatment of people? At minimum, if they’re gonna continue feeding stuff to Dataminr and stuff to police, don’t they have some kind of responsibility, at least an ethical obligation, to let [users] know that ‘Hey, some of your information is going to cops’?” When asked whether Twitter would ever provide such a notice to users, spokesperson McCallum provided a link to a section of the Twitter terms of service that makes no mention of police or law enforcement.

The post Twitter Surveillance Startup Targets Communities of Color for Police appeared first on The Intercept.

No Flesh Is Spared in Richard Stanley’s H.P. Lovecraft Adaptation.

Well, almost none. There is one survivor. Warning: Contains spoilers.

Color out of Space, directed by Richard Stanley, script by Richard Stanley and Scarlett Amaris. Starring

Nicolas Cage … Nathan Gardner,

Joely Richardson… Theresa Gardner,

Madeleine Arthur… Lavinia Gardner

Brendan Meyer… Benny Gardner

Julian Hilliard… Jack Gardner

Elliot Knight… Ward

Tommy Chong… Ezra

Josh C. Waller… Sheriff Pierce

Q’orianka Kilcher… Mayor Tooma

This is a welcome return to big-screen cinema for South African director Richard Stanley. Stanley was responsible for the cult SF cyberpunk flick Hardware, about a killer war robot running amok in an apartment block in a future devastated by nuclear war and industrial pollution. It’s a great film, but its striking similarities to a story in 2000AD resulted in him being successfully sued by the comic for plagiarism. Unfortunately, he hadn’t made a major film for the cinema since he was sacked as director during the filming of the ’90s adaptation of The Island of Doctor Moreau. The film came close to collapse and was eventually completed by John Frankenheimer. A large part of the chaos was due to the bizarre, irresponsible and completely unprofessional behaviour of the two main stars, Marlon Brando and Val Kilmer.

Previous Lovecraft Adaptations

Stanley’s been a fan of Lovecraft ever since he was a child, when his mother read him the short stories. There have been many attempts to translate old Howard Phillips’ tales of cosmic horror to the big screen, but few have been successful. The notable exceptions include Stuart Gordon’s Re-Animator, From Beyond and Dagon. Re-Animator and From Beyond were ’80s pieces of gleeful splatter, based very roughly – and that is very roughly – on the short stories Herbert West–Reanimator and From Beyond. These eschewed the atmosphere of eerie, unnatural terror of the original stories for over-the-top special effects, with zombies and predatory creatures from other realities running out of control. Dagon came out in the early years of this century. It was a more straightforward adaptation of The Shadow Over Innsmouth, transplanted to Spain. It generally followed the plot of the original short story, though at the climax there was a piece of nudity and gore that certainly wasn’t in Lovecraft.

Plot

Color out of Space is based on the short story of the same name. It takes some liberties, as do most movie adaptations, but tries to preserve the genuinely eerie atmosphere of otherworldly horror of the original, as well as include some of the other quintessential elements of Lovecraft’s horror from his other works. The original short story is told by a surveyor, come to that part of the American backwoods in preparation for the construction of a new reservoir. The land is blasted and blighted, poisoned by a meteorite that came down years before. The surveyor recounts what he has been told about this by Ammi Pierce, an old man. The meteorite landed on the farm of Nahum Gardner and his family, slowly poisoning them and twisting their minds and bodies, as it poisons and twists the land around them.

In Stanley’s film, the surveyor is Ward, a Black hydrologist from Lovecraft’s Miskatonic University. He also investigates the meteorite, which in the story is done by three scientists from the university. The movie begins with shots of the deep American forest accompanied by a soliloquy by Ward, which is a direct quote from the story’s beginning. It ends with a similar soliloquy, which is largely the invention of the scriptwriters, but which also contains a quote from the story’s ending about the meteorite coming from unknown realms. Lovecraft was, if not the creator of cosmic horror, then certainly its foremost practitioner. Lovecraftian horror is centred around the horrifying idea that humanity is an insignificant, transient creature in a vast, incomprehensible and utterly uncaring, if not actively hostile, cosmos. Lovecraft was also something of an enthusiast for the history of New England, and the opening shots of the terrible grandeur of the American wilderness put the film in the tradition of America’s Puritan settlers, who saw themselves as Godly exiles, like the Old Testament Israelites, in a wilderness of supernatural threat.

The film centres on the gradual destruction of Nathan Gardner and his family – his wife, Theresa, daughter Lavinia, and sons Benny and Jack – as their minds and bodies are poisoned and mutated by the strange meteorite and its otherworldly inhabitant, the mysterious Color of the title, which is a kind of fuchsia. Its rich colour recalls the deep reds Stanley used to paint the poisoned landscape of Hardware. Credit is due to the director of photography, Steve Annis, as the film and its opening vista of the forest look beautiful. The film’s eerie, electronic score, composed by Colin Stetson, also suits the movie’s tone exactly.

Other Tales of Alien Visitors Warping and Mutating People and Environment

Color out of Space comes after a number of other SF tales based on the similar idea of an extraterrestrial object or invader that twists and mutates the environment and its human victims. These include the TV series The Expanse, in which humanity is confronted by the threat of a protomolecule sent into the solar system by unknown aliens. Then there was the film Annihilation, about a group of women soldiers sent into a zone of mutated beauty and terrible danger created by an unknown object that has crashed to Earth and now threatens to overwhelm it. It also recalls John Carpenter’s cult horror movie The Thing in its twisting mutations and fusing of animal and human bodies. In the original story, Gardner and his family are reduced to emaciated, ashen creatures. It could be a straightforward description of radiation poisoning, and indeed that is how some of the mutated animal victims of the Color are described in the film. But the film’s mutation and amalgamation of the Color’s victims is much more like that of Carpenter’s Thing as it infects its victims. The scene in which Gardner discovers the fused mass of his alpacas out in the barn recalls the scene in Carpenter’s earlier flick where the members of an American Antarctic base discover their infected dogs in the kennel. In another moment of terror, the Color blasts Theresa as she clutches Jack, fusing them together. It’s a piece of body horror like the split-faced corpse in Carpenter’s The Thing, the merged mother and daughter in Yuzna’s Society, and the fused humans in The Thing’s 2011 prequel. But it’s made Lovecraftian by the whimpering and gibbering noises the fused couple make, noises that appear in much Lovecraftian fiction.

Elements from Other Lovecraft Fiction

In the film, Nathan Gardner is a painter who has taken his family back to live on his father’s farm. This is a trope from other Lovecraft short stories, in which the hero goes back to his ancestral home, such as the narrator of The Rats in the Walls. The other characters are also updated to give a modern, or postmodern, twist. Gardner’s wife, Theresa, is a high-powered financial advisor, speaking to her clients from the farm over the internet. The daughter, Lavinia, is a practicing witch of the Wiccan variety. She is entirely benign, however, casting spells to save her mother from cancer and to get herself away from the family. In Lovecraft, magic and its practitioners are an active threat, using their occult powers to summon the ancient and immeasurably evil gods they worship, the Great Old Ones. This is a positive twist for the New Age/Goth generations.

There’s a similar, positive view of the local squatter. In Lovecraft, the squatters are barely human White trash heading slowly back down the evolutionary ladder through poverty and inbreeding. The film’s squatter, Ezra, is a tech-savvy former electrician using solar power to live off-grid. But there’s another touch here which recalls another of Lovecraft’s classic stories. Investigating what may have become of Ezra, Ward and Pierce discover him motionless, possessed by the Color. However, he is speaking to them about the Color and the threat it presents from a tape recorder. This is similar to the voices of the disembodied human brains preserved in jars by the Fungi from Yuggoth, speaking through electronic apparatus in Lovecraft’s The Whisperer in Darkness. Visiting Ezra earlier in the film, Ward finds him listening intently to the aliens from the meteorite that have now taken up residence under the Earth. This also seems to be a touch taken from Lovecraft’s fiction, which features mysterious noises and cracking sounds from under the ground. Near the climax Ward catches a glimpse, through an enraptured Lavinia, of the alien, malign beauty of the Color’s homeworld. This follows the logic of the story, but also seems to hark back to the alien vistas glimpsed by the narrator in The Music of Erich Zann. And of course it wouldn’t be a Lovecraft movie without the appearance of the abhorred Necronomicon. It is not, however, the Olaus Wormius edition, but a modern paperback, used by Lavinia as she desperately invokes the supernatural for protection.

Fairy Tale and Ghost Story Elements

Other elements in the movie seem to come from other literary sources. The Color takes up residence in the farm’s well, from which it speaks to the younger son, Jack. Later Benny, the elder son, tries to climb down it in an attempt to rescue their dog, Sam, and is also blasted by the Color. When Ward asks Gardner what has happened to them all, he is simply told that they’re all present, except Benny, who lives in the well now. This episode recalls the creepy atmosphere of children’s fairy tales, the ghost stories of M.R. James and Walter de la Mare’s poems, which feature ghostly entities tied to specific locales.

Oh yes, and there’s also a reference to Stanley’s own classic film, Hardware. When they enter Benny’s room, the phrase ‘No flesh shall be spared’ can be glimpsed on his wall. This is a quote from Mark’s Gospel, which was used as the opening text and slogan in the earlier movie.

The film is notable for its relatively slow start, taking care to introduce the characters and build up atmosphere. This is in stark contrast to the frenzied action in other recent SF flicks, such as J.J. Abrams’ Star Trek reboots and Michael Bay’s Transformers. The Color first begins having its malign effects by driving the family slowly mad. Theresa accidentally cuts off the ends of her fingers slicing vegetables in the kitchen as she falls into a trance. Later on, Lavinia starts cutting herself as she performs her desperate ritual calling for protection. And Jack, and later Gardner, sit enraptured in front of the television, vacant except for snow, behind which is just the hint of something. That seems to go back to Spielberg’s movie Poltergeist, but it’s also somewhat like the hallucinatory scenes in Hardware, where the robot attacks the hero from behind a television showing fractal graphics.

Finally, the Color destroys the farm and its environs completely, blasting it and its human victims to ash. The film ends with Ward contemplating the new reservoir, hoping the waters will bury it all very deep. But even then, he will not drink its water.

Lovecraft and Racism

I really enjoyed the movie. I think it does an excellent job of preserving the tone and some of the characteristic motifs of Lovecraft’s work, while updating them for a modern audience. Despite his immense popularity, Lovecraft is a controversial figure because of his racism. There were objections a year or so ago to him being given an award at the Hugos by the very ostentatiously, sanctimoniously anti-racist. And a games company announced that they were going to release a series of games based on his Cthulhu mythos, but not drawing on any of his characters or stories, because of this racism. Now the character of an artist does not necessarily invalidate their work, in the same way that the second-best bed Shakespeare bequeathed to his wife doesn’t make Hamlet any the less a towering piece of English literature. But while Lovecraft was racist, he also had Black friends and writing partners. His wife was Jewish, and at the end of his life he bitterly regretted his earlier racism. Also, when Lovecraft was writing, from the 1920s to the 1930s, American and western society in general was much more racist. This was the era of segregation and Jim Crow. It may be that Lovecraft actually wasn’t any more racist than many others; he was just more open about it. And it hasn’t stopped one of the internet movie companies producing Lovecraft Country, about a Black hero and his family during segregation encountering eldritch horrors from beyond.

I don’t know if Stanley’s adaptation will be to everyone’s taste, though the film does credit the H.P. Lovecraft Historical Society among the organisations and individuals who rendered their assistance. If you’re interested, I recommend that you give it a look. I wanted to see it at the cinema, but this has been impossible due to the lockdown. It is, however, out on DVD, released by Studio Canal. Stanley has also said that if this film is a success, he intends to make an adaptation of Lovecraft’s The Dunwich Horror. I hope it is, despite present circumstances, and that we can look forward to that piece of classic horror coming to our screens. But this might be too much to expect, given the current crisis and the difficulties of filming while social distancing.
