‘The First Cry of a Newborn World’: The Trinity Test at 75

Published by Anonymous (not verified) on Tue, 28/07/2020 - 3:03am in

Doctor has just returned most enthusiastic and confident that the little boy is as husky as his big brother. The light in his eyes discernible from here to High Hold and I could have heard his screams from here to my farm.

Coded message describing the successful Trinity Test from George L. Harrison to US Secretary of War Henry L. Stimson, 18 July 1945

The footage is black and white, and silent, but it still has the power to shock: the sudden violent flash of light, so bright that for a second or two the horizon is invisible; the massive pyrocumulus cloud rising up over the arid valley; the way the night sky seems to quiver and throb as the light from the explosion fades. ‘Mushroom cloud’ is the noun phrase of choice, but that scarcely does justice to the scale of the thing, let alone the immensity of the event. Both figuratively and literally, the Trinity Test was earth-shattering.

It was, perhaps, to shield themselves from the existential implications of their work that the scientists of the Manhattan Project nicknamed the first plutonium bomb The Gadget. To place such a device in the same category of objects as, say, an electric can-opener is to rob it of its power to harm; it’s the psychological equivalent of donning a protective mask and gloves before handling hazardous material. But whatever coping mechanisms were in place on that morning in 1945, they must have taken a battering as the blast ripped through the desert air, turning the surrounding sand to glass and sending a cloud of incandescent gas some 12 kilometres into the sky. What did they expect, the scientists of base camp, as they lay down in their shallow trenches in preparation for the Trinity Test? The possibility had occurred to some of them that the explosion would ignite the atmosphere. No doubt a prayer or two was muttered in the forty seconds it took for the sound of the ‘atom bomb’ to reach their ears.

It’s significant that the Trinity Test, conducted seventy-five years ago this month, is often mooted as the starting point of the Anthropocene, or Age of Humans, an unofficial geological designation that recognises the decisive influence humanity has had on planet Earth. The suggestion makes scientific and aesthetic sense. Scientific sense because radioactive fallout from the testing and use of nuclear weaponry now mingles with particles of plastic and concrete, soot from power stations, chemicals from fertilisers, and trillions of animal bones as evidence of that influence. And aesthetic sense because the case for a new epoch is infused with a growing feeling of alarm. The events in New Mexico ushered in a period of existential anxiety. Today we stand in the shadow of a future in which rising sea levels, coastal flooding, devastating wildfire seasons, pollution, the spread of infectious diseases, and disruption to food and water supplies will transform our world to such a degree that even the recent Australian bushfires will seem like the lull before the storm. Our rise to biospheric dominance is inseparable from our talent for destruction. The mushroom cloud is the symbol of that.

But there is an even deeper sense in which the Trinity Test marks a rupture with the past – one that many of Arena’s editors and regular contributors have written about extensively in the last four decades. For the team at Los Alamos was engaged in something very different from its scientific forebears. For hundreds of years the thrust of science had been towards an understanding of nature, while the thrust of technology, which took its lead from science, was towards nature’s conquest and utilisation. By contrast, the Manhattan Project scientists moved beyond conquest to reconstitution. In his seminal essay ‘From Here to Eternity’, Arena’s founder Geoff Sharp wrote of the need to see nuclear power as central to the emergence of an ethos of ‘transformation’ in science and technology, and of the related need to ‘acknowledge the significance of the break in continuity concealed by the belief that technological change is simply more of that same progress that has defined modernity’. Nuclear power is not like wood or coal—a given attribute of the natural world—and even Einstein, who theorised the equivalence of mass and energy in his iconic equation, did not believe that it was a practical possibility. Wrenching that equation from the theoretical realm, the Los Alamos scientists proved him wrong and, in so doing, moved scientific endeavour from understanding to authorship. As William Laurence, the only journalist present at the test, intuited, this represented a radical break. The blast, he wrote, was ‘the first fire ever made on Earth that did not have its origin in the Sun’.

No doubt it’s for this reason that descriptions of the test were frequently couched in religious language. For Laurence, the explosion was ‘the first cry of a newborn world’, while for Major General Thomas Farrell, who observed the test alongside the Manhattan Project’s chief scientist J. Robert Oppenheimer, the ‘sustained, awesome roar…made us feel that we puny things were blasphemous to dare tamper with the forces heretofore reserved to the Almighty’. ‘We knew the world would not be the same’, said Oppenheimer himself in 1965. ‘I remembered the line from the Hindu scripture the Bhagavad-Gita: Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, “Now I am become Death, the destroyer of worlds”.’ With the detonation of the first atomic bomb—at Jornada del Muerto (Journey of the Dead)—science attained the power of a god.

It is this that makes the Trinity Test so relevant to the contemporary world. For the powers evinced in the New Mexico desert three quarters of a century ago raised the curtain on a new era of ‘techno-science’, in which nature was taken as a thing to be remade and not merely ‘harnessed’ or ‘tamed’ or ‘conquered’. From the ‘editing’ of DNA in agriculture and medicine to the suggestion that machine intelligence may become more powerful than the ‘intelligence’ of all human beings combined, we have entered an era in which science and technology have the power to rewrite the book of nature, and to renegotiate the fundamental terms of our existence. Such technologies are properly Promethean, in the sense that they unlock (or unleash) new powers, and with them radical new potentialities: the prospect of a world without work, for example, or of a social life without physical presence, or even of a life without death. The bright young things of Silicon Valley, with their dreams of direct democracy on Mars and digital immortality, are often difficult to take seriously. But their hubris is only the gaudy version of a broader cultural and political belief in the power of science and technology to edit, alter and override the very stuff from which our world is made—in other words, to ‘play God’.


Of course, humanity has at least as much to fear from the rejection of science as it does from its advance. There is no doubt, for example, that climate-change ‘scepticism’ has seriously undermined international efforts to lower greenhouse-gas emissions. But it is important to recognise nonetheless that the techno-sciences carry dangers of their own, and that their spectacular rise to prominence under the rubrics of ‘innovation’ and ‘progress’ has often been at the expense of the planet and the majority of its human inhabitants, not to mention almost all of its non-human inhabitants. Even determined action on climate change might conceivably do more harm than good. Attempts to reengineer the climate through techniques such as ‘solar radiation management’ (SRM), which would involve pumping large quantities of sulphur particles into the stratosphere in an effort to deflect radiation from the sun, could have devastating consequences, from drought in Africa to monsoonal changes to increased acidification of the oceans. Moreover, and crucially, it would fundamentally change the relationship between humanity and the planet, bringing in its wake a perilous new sense of power and possibility. Those who are against SRM and its analogues will often talk about what could go wrong, but surely we should also think about the consequences of such a thing going right. Are we happy to live on a ‘designer planet’? If so, in whose interests should it be designed? And what kinds of moral hazard might emerge as a consequence of such an enterprise?

What goes for so-called ‘geo-engineering’ goes for other technologies as well. Take biotechnology, for example. The discovery of the double-helix structure of DNA in 1953—biochemistry’s ‘atom-splitting’ moment—has transformed our understanding of genetic inheritance, and revolutionised agriculture, forensic science and medicine. But the development of new capacities—from the use of ‘test-tube babies’ in IVF to the artificial cloning of organisms to the ‘editing’ of genomes using CRISPR technology—also raises ethical and moral issues about the role of science in human affairs, as well as political questions about who or what should own the techniques or products derived from that science. Should private companies be allowed to develop and patent new seed and livestock varieties? If so, should they also be permitted to enhance or augment or even design human beings? In 2018 the young Chinese scientist He Jiankui announced that he’d created the first genetically edited human babies, declaring ‘Society will decide what to do next’. But surely the take-up of such techniques will reflect the existing inequalities within and between societies, as has happened in the case of the trade in live organs—a trade whose routes invariably run from the impoverished South to the more affluent North.

More broadly, we should ask how new technologies change our sense of what life is, and the purposes for which it is lived. Perhaps the most controversial technique to have emerged from the discovery of the double helix – one discussed many times in the pages of Arena, by Simon Cooper, Kate Cregan and others – is the use of stem cells to engineer replacement tissues for transplant into humans, a technique that often involves the use of unspecialised cells from aborted foetuses or from embryos grown in laboratories. One doesn’t need to be religious to see that this process marks a fundamental shift in humanity’s view of what human life is, and thus a shift, or a potential shift, in how human beings view each other. What might be the effects, then, of such a technique becoming more widespread? Could an idea of human life as something ‘special’ survive such a transformative change? Or will we come to see each other as essentially no different from other materials that can be grown, augmented and modified?

A comparable set of questions attends the emergence of computer and information technologies, a process catalysed by the rapid development of microprocessors in the 1970s, as well as by the commercialisation of the World Wide Web in the 1990s. Following the rough trajectory of ‘Moore’s Law’—the observation that the processing power of computers doubles about every two years—the personal computer and internet technology (now combined in the form of the smartphone) have transformed not only the relationship between human beings and information but also the relationship of human beings to each other. As the recent pandemic has reminded us, human beings are social creatures whose sociality evolved in conditions of presence. But infotech makes possible a sociality based on absence, such that we are now able to move through the world enclosed in our own atomised spaces. Increasingly intimate with our PCs and phones, in thrall to social-media platforms, we have social lives that are privatised, ephemeral and performative in ways that may engender intolerance and other antisocial behaviours. We talk about the ‘filter bubble’ in the context of political discourse to describe the tendency to privilege information and opinions that support our own world-pictures. But the metaphor of the bubble can be applied more broadly, implying as it does both delicacy and distortion. The subjectivities of a social animal predisposed to see a human face in a random distribution of wood knots (or, indeed, a mushroom cloud) do not remain unchanged in the event that their social basis changes its character. Cut off from the physical presence of others, the self becomes fragile, defensive and anxious. Connectivity engenders disconnection; and disconnection makes us miserable.

Again, and as with biotechnologies, the increasing dominance of the algorithm invites us to think in different ways about the nature of the human animal. One fear common to much science fiction is that humanity will one day create thinking machines that develop anti-human behaviours: the HAL 9000 computer in 2001 and Skynet in Terminator are different versions of this anxiety. But a more immediate possibility is that human beings come to see themselves as mere flesh-and-blood automata—elaborate systems to be augmented or improved. Implicit in much discussion about artificial intelligence is a view of human intelligence as in some sense machine-like or algorithmic. For Yuval Noah Harari, for example, there is no essential difference between a machine that makes a cup of tea and the person who, by pressing the relevant buttons, sets the tea-making process in motion; both are computers, albeit computers fashioned from radically different stuff. Thus it becomes permissible to think of human beings as meat algorithms whose minds are in some sense detachable from their bodies. Google’s Director of Engineering Ray Kurzweil looks forward to the day when human beings will ‘upload’ their minds to powerful computers. And while such ‘transhumanism’ will strike many as perverse, there is a sense in which we already regard the human brain as a wet computer that can be reconfigured at will, whether through mood-altering pharmaceuticals or so-called ‘deep-brain stimulation’ or even psychological therapies that promise to ‘rewire’ the mind. It is less the danger of algorithmic machines attaining full consciousness that should worry us than the social and political implications of regarding ourselves as no different in kind from algorithmic machines.

In so many ways, then, humanity stands at a crucial point in its development—a point, indeed, at which its development is in its own, uniquely nimble hands. The techno-sciences have made us masters of our own destiny, or made an elite few the masters of such, and so now we must evolve a capacity for reflection equal to that mastery. How will augmented reality and the creation of anthropomorphic robots affect our sexuality, already half in flight from ‘meatspace’ as a consequence of ubiquitous pornography? How will the use of autonomous weapons transform a police and military ethos still notionally attached to personal sacrifice and the cultivation of ‘hearts and minds’? Does the widespread acceptance of antidepressants signal a new techno-political dispensation—one in which it is the individual and not the society that is to be made ‘better’? These are not questions about The Future. They are questions about the kind of creatures we are.


On the morning of 6 August 1945, less than a month after the Trinity Test in New Mexico, three B-29s appeared in the sky above the Japanese city of Hiroshima. The city’s residents paid them little attention: the sight of US reconnaissance planes was not unusual at this stage of the war. Mindful of the firestorms that had ripped through Tokyo and other cities as a result of US aerial attacks, the authorities had set some eight thousand schoolchildren to work preparing firebreaks. No siren sounded, and they returned to work. Only a few of them saw a large parachute unfurl beneath one of the planes as it turned around and pointed its propellers away from the city.

Debate still rages about President Harry S. Truman’s motivation for dropping the atomic bomb on Hiroshima and (three days later) Nagasaki. But the argument that it was militarily necessary has not fared well historically. Even many of the US scientists who accepted the case for the Hiroshima bombing—that such an act was necessary in order to save American lives and demonstrate to the Japanese that further resistance would be suicidal—could not forgive the US president for ordering the second attack. Many historians now take the view that Truman was making a longer-term calculation, attempting to demonstrate the power and ruthlessness of the United States to the Soviet Union, which had just declared war against Japan and was looking to gain a foothold in the East. Others stress the role of anti-Japanese racism or, more broadly, the moral degeneration that necessarily occurs in war.

Frankly, we will never know Truman’s precise rationale. But what we can say is that this transformative event was the confluence of two kinds of power: the power of the bombs themselves, which was like no other power on Earth, and the power to deploy them. As thermal energy from the massive blast—several million degrees centigrade at its hottest point—travelled outwards at the speed of light, turning human beings within a half-mile radius into instant carbon statues of themselves, the human species was itself transformed, psychologically and politically. It became the only species on the planet with the ability to end all life on it—potentially, the destroyer of worlds.

It is clear that such Promethean capacities are now a general characteristic of humankind, and that alongside the existential threats of anthropogenic climate change and thermonuclear weaponry other threats are taking shape that deserve much more prominence than they are given. As the global situation grows ever more volatile, it is necessary to name those threats, and to frame the overarching one of which they are a part: that in seeking to transform itself, humanity ceases to be human at all.

Technocratic Urban Governance and the Need to Localise Computing Infrastructure (Part II)

Published by Anonymous (not verified) on Tue, 28/07/2020 - 3:00am in

Part I of this article is available here.

Questions about local governments and the computing technologies they employ need to be thought through using integrated approaches. This article draws on both infrastructure studies — which focuses on socio-technical systems, analysing networks and computer-based infrastructure — and platform studies, a branch of media studies that analyses computer devices and software environments and their effects on different social actors and socio-economic structures. Combined, they allow examination of the ‘smart city’ approach to urban development, its ownership and governance, and the business models that drive it. Computer-based infrastructures are designed, programmed and controlled by governments and large corporations. They develop in phases and become part of networks, and because of the diversity of interacting stakeholders multiple social commitments arise. Infrastructures also create dependencies: deeply embedded in our social system, they have the power to affect culture, and people come to rely on them because they endure through time. This is the case with the internet and—to take a longer-lasting example—with railroads.

Platforms are system architectures programmed to serve a specific purpose, provide user connections through an interface and allow data flows. They are mostly profit oriented, and through default settings and affordances they connect communicative practices with a market logic. Their structure, purpose and use are determined by different economic and social motivations. ‘Smart city’ platforms and service-oriented platforms are not exceptions; they program the city, reflecting strategic choices to manage it in an efficient neoliberal way, while downplaying their commercial interests and promoting the illusion of stakeholder empowerment and participation. ‘Smart city’ platforms aim to centralise the management of city functioning through integrating sensors and communication technology. Like other platforms, they are developed and managed under largely central corporate control, which creates data lock-ins, built-in obsolescence, compulsory upgrades and limited interoperability, and more importantly removes citizens’ democratic freedom to decide.

Whoever controls the computer-based infrastructure of the city can determine the type of future the city has. By analogy with Ganaele Langlois and Greg Elmer’s example of Facebook locking in developers and users and controlling their ‘walled’ environment, ‘smart city’ vendors and their platforms will lock in governments and citizens and limit their development, shaping the way we relate to the city while programming citizens’ behaviour. Their governance power becomes strong in any negotiation process about the future of the urban environment. Awarding the management of the ‘smart city’ to corporations means that they become entitled to program and adjust city processes through the management of sensors and data flows. Public administration becomes an application, and citizens’ relationship with it is mediated by privatised technology infrastructure. The ‘city as a platform’ is a market simplification of urban life, in which citizens are connected to extractive, for-profit services.

Besides the technological infrastructure that supports them, platforms are composed of several technical elements. In the case of ‘smart cities’, data and metadata are obtained by fitting the city with sensors; this provides information about the urban environment but also includes public and personal data, with social, political and legal implications. Algorithms then process the data and metadata: they are the programmed instructions that produce feedback and outputs. Protocols are the rules for programming and the governing instructions users must obey; they operate behind interfaces. Interfaces have a front end for users to interact with and a back end that connects the platform with the infrastructure and the data sources. Interfaces also have predetermined settings, or defaults, that channel users’ behaviour. Deconstructing the platform in this way is an important step towards understanding the logic of extraction and identifying the infrastructure supporting it.

The primary idea behind ‘smart cities’ is that they need a network of interconnected devices that includes sensors (data sources), cloud computing and analytical power, and data sets. Platforms are then programmed to provide ‘solutions’ (services) to cities and citizens. These service-oriented platforms are mostly private, and access to them is obtained in return for the surrender of information. One way to approach the datafication of the urban environment and its implications is to bring the infrastructure involved in collecting and processing data to the foreground of the debate about technology and the city and make its politics noticeable. It can be useful to analyse the practices behind data harvesting: what is being measured, how is it being categorised, and what are the calculations behind it? Although platforms could be a medium to make data legible to different groups of people and they may help socialise the ‘fruit of the harvest’, for this to be a citizen-oriented endeavour, technology infrastructure must be secured as a public good first; otherwise, platforms will only make visible what the owners want them to make visible, and they will continue to obscure the key infrastructures and the politics behind them.

The cryptic ways in which platforms are programmed mean that we will never fully know how many feedback loops they contain or how perverse their nudging is, and critical citizens will be needed to hold algorithms accountable over time. The alarming trend is that public services are being privatised and automated, and co-created knowledge is being locked into profit-oriented corporate ecosystems. Privatised computing-technology infrastructure deployed across the city will continue to harvest information under specific business agendas, feeding multiple types of platform that access that data under different types of transaction arrangement. Data is constantly being traded and monetised. To halt this ghastly scenario, one interesting alternative is to municipalise urban computer-based infrastructure. It can be a fruitful investment for cities, providing them with democratic governance over the infrastructure and allowing them to explore the creation of commons agreements with the local community and its groups and movements.

In the Australian case, a municipalised computer operating system (platform) would consist of open-source software running on top of technology infrastructure that allows management, analytics and controlled access to the municipal data repository. This would provide the conditions for a technology and data governance regime that is citizen oriented rather than market centred. Similarly, the ‘Internet of Things’ platform should be open source, and sensors should collect democratically agreed data about city operations. Infrastructure is a ‘common good’, but more importantly it should be the object of a commoning process in the practice and use of socio-technical assemblages. Cities are places where deep mediatisation is taking place, and digital infrastructure is where cities must recover some control if they are to shape a democratic vision of the urban environment—one that is not programmed to serve capitalism. A commons approach to technology might be a favourable proposition for cities to explore. And although not all city challenges need technological interventions, cities could benefit from securing their computer-based infrastructure. In this way, their technological agenda could be oriented to securing civil liberties and enhancing the resilience of the city and the quality of life of its citizens. In Australia, councils provide a good scale at which to pursue this non-neoliberal endeavour. Ideally, once proven successful, it could be replicated and scaled up to the national level.

A fully referenced PDF version can be downloaded here.

Yes, There’s Still Time to Design an Excellent Fall Course (guest post by Paul Blaschko)

Published by Anonymous (not verified) on Tue, 28/07/2020 - 12:14am in

It’s almost August (sorry!). Do you know what you are doing in your courses this fall? Don’t panic. Paul Blaschko is back with another guest post* to explain how you still have time to put together a great course. 

Dr. Blaschko is assistant teaching professor in philosophy at the University of Notre Dame and assistant director at the Notre Dame Institute for Advanced Study. He is digital curriculum lead for God and the Good Life and, as of this summer, is heading up a digital curriculum redesign program for the Mellon-funded “Philosophy as a Way of Life” project.

If you haven’t yet, see his previous post “Six Ways to Use Tech to Design Flexible, Student-Centered Philosophy Courses.”

Yes, There’s Still Time to Design an Excellent Fall Course
by Paul Blaschko

There’s so much uncertainty about the fall (and beyond) that simple planning tasks have become incredibly difficult. The problem is especially acute in professions—like teaching—where a huge part of the model requires having at least a rough plan for what a group of people is going to be doing over the next two to six months. One option we have is to throw our hands up and claim that planning a course under pandemic conditions is impossible. How can we start thinking about units or lesson plans if we don’t know which room our course will be held in, when it’s scheduled, or if it will even be held in one particular physical space at one particular time? Still, the semester marches unstoppably toward us. At Notre Dame we’re going to be teaching real students in 14 days (!). So taking a knee at this point would be tantamount to parking on the train tracks.

The good news is that this isn’t our only option, and it’s certainly not our best.

An excellent online course—or one that is fully capable of being taught in person, online, or flexibly transitioning between these two formats—takes time to create. It took us eighteen months to make the pilot version of ours, and it’s taken five additional years to get us to where we’d hoped to be. This summer I’ve co-led a staff of twelve student workers in an effort to help two philosophy departments design intro courses that were more digitally flexible, and with much blood, sweat, and tears we’ve gotten this startup timeline down to about three months. But I’m convinced that you can design an excellent, flexible course under pandemic conditions with very little technical expertise in exactly two weeks. You can even take off weekends. Here’s how to do it.


Monday, Week 1: Write Your Learning Goals

It can feel like a waste to spend a whole day thinking about your course-level learning goals, but do not skip this step. Your learning goals will determine every other facet of the design process. When you start thinking about your course’s big final assignment, the learning goals will determine what you need to assess your students on, and will help you narrow down the potentially infinite number of things you could have them do. When you need to pick between putting your course readings on a publicly accessible website or your school’s “Learning Management System” (like Sakai or Canvas), your learning goals will help you determine which of these options will better serve your students. And the only two things you need to consider in crafting (or updating) learning goals for a digitally flexible course are: (1) are these goals I’m committed to helping students meet (regardless of the mode of instruction), and (2) are these goals that the real humans I’m instructing will find meaningful (regardless of the mode of instruction)? Some of your old goals might have to go. We decided not to promise students that we’d help them improve their verbal communication skills (which we usually do through a classwide debate tournament or ethics bowl), because this goal would not survive the transition to Zoom, at least given our skills, strengths, and expertise.


Tuesday, Week 1: Planning Assessment

With goals in hand, it’s time to start thinking about assessment. We’re tempted, in higher-ed especially, to think about assessment as grading, and grading as a mere certification mechanism. But the purpose of well-designed assessment is to provide a flow of information about student progress toward course learning goals to instructors and students alike. Really well-designed assessment uses this information to create tight feedback loops that shape and direct the learning process. I’ve already spent a week thinking about assessment with one of the schools we’re working with this summer, and plan to spend one more, even though the course won’t ultimately issue students any grades. The relevant question for you to answer, then, is: how am I going to gather and use information about student performance in my course as inputs in a learning process that ends in students achieving the course’s learning goals?

In my course, we decided that students would only be assessed for grades in coaching conversations with some member of the teaching team (we have professors, graduate TAs, and undergraduate peer mentors on that team). In addition, we’ll provide qualitative feedback on written work in-line with Google docs, but will not assign grades as part of that process. The reason for this is that personal connection — what researchers call “social presence” — is a crucial predictor of student success in online courses. We care far more about using assessment to establish effective feedback loops than we do about using it as a certificatory measure, so we’re skewing toward personalized, coaching-style feedback wherever possible in our course.


Wednesday, Week 1: Planning Assignments

This is a fun day! Take a good hard look at your learning goals. Ask yourself: what sort of work product would provide me with evidence that my students have accomplished one of these goals? Because one of our goals is for students to learn enough philosophy to have rigorous and meaningful conversations about philosophical problems they are already naturally curious about, we decided that we’d need 10 minutes of discussion with them in order to see if they’re conversant. We figure that in 10 minutes you can tell whether a student has acquired the skills and knowledge to apply Aristotle to their career discernment, in much the same way that you’d be able to tell if a student had been following along adequately in your foreign language class. To make this a more personal, more social, and more enjoyable experience, we decided to have each student submit three philosophical questions to a member of the teaching team. Then, in groups of three, the students will have conversations in which that teaching team member poses one of their own questions back to them, and the whole group gets to weigh in with follow-ups. As simple as this sounds, it’s going to take us three weeks just to teach students enough philosophy to make their question topics interesting, another three to help them understand how to construct a strong philosophical question, and at least two weeks to facilitate all of these group conversations. This is one of three big assignments that we’re giving our students, and assignment planning day is where you get all the big-picture pieces of your assignment plan in place.


Thursday, Week 1: Writing Assignment Documents

Once you’ve got your big pieces in place, it’s time to get concrete. What exactly are you asking your students to do, and why? What are the steps, and how long will each step take? Is there any way to streamline assignments? Is there a particular order they’ll need to come in, so that students acquire the skills and knowledge from assignment A before taking on an assignment B that presupposes them? The best way to iron out all these wrinkles is just to take a shot at writing assignment descriptions. These can be short, just a single page, and you can fill in the details later. But generating the documents will reveal inconsistencies, and will make other decisions about the structure of the course (e.g. when to introduce certain skills or content) fall into place.


Friday, Week 1: Assessment Workday

It’s almost the weekend, but don’t rest yet! You’ve got to create documents that will guide your assessment of each assignment, and that will help you turn mere “grading” into a knowledge-generative feedback loop. These should be rubrics, but don’t forget that excellent rubrics can be informal and holistic. You’ll also want to spend some time thinking about how you’ll be communicating your assessment to students. One option is to simply fill out and return rubrics to them with their written work. In a small seminar, this can be an efficient way to start conversations about room for improvement, but, from some hard-won experience, we’ve found that this method is a disaster at scale (and probably enough to sink student satisfaction in a course that’s both large and online). This is why we provide instructions, along with our rubrics, to every member of our teaching team, and why all assessment is communicated primarily in direct, face-to-face conversations with our students. However you decide to implement your assessment, spend some time thinking about how you’ll communicate it, and write up some copy to put on your website (or send an email) when it comes time to start engaging students directly.



Take the weekend off—you deserve it!


Monday, Week 2: The Tech Decider

Alright. It was going to happen sooner or later. It’s time to talk tech. But before we even start looking at any particular tool or platform, get out your assignment documents. Make a list of any element of any assignment that will require a technological component. If you’re having students turn in a paper, you’ll need some sort of online drop box. If you’re asking them to schedule a ten-minute conversation or oral exam, you’ll need some sort of video streaming platform.

After you’ve gone through your assignments, think about the course more broadly. Do you plan to communicate with the class weekly? You’ll need a message board or an email list. Are you requiring students to interact with each other? You’ll need a discussion board or some sort of chat application. What about in class? Are you going to ask them to vote on things during your lecture? You’ll need polling software, and preferably software that integrates with the platform you use to create your slides (PowerPoint, Google Slides, or whatever).

Once you’ve got your list, group items in terms of how similar their functionality is (a group communication tool is similar to a discussion board and a chat application). This will allow you to research whether a single tool can serve multiple purposes, and will reduce the overall number of tools and platforms you’ll need. Once you’ve finalized your list, it’s time to explore. There’s no shortage of ed-tech tools out there, and you’ll certainly find representatives who are more than willing to demonstrate their products. Invest your time wisely here, and aim to minimize the number of different tools you’ll be using while maximizing the social presence students will experience when using them.


Tuesday, Week 2: Plan Content and Daily Learning Goals

As you continue to settle on the tech tools you’ll be using, you’ll want to sketch out your course content plan in a more fine-grained way. Decide, for each day, what you’ll be asking students to read and do before class, and what you’ll be covering during the instructional time. Make a spreadsheet and start collecting links to PDFs. One huge advantage of digitally flexible courses is that you can utilize media in various formats, and there are plenty of high-quality resources out there. We have our students read a short text, watch a professionally produced video, and take an ungraded quiz to test their knowledge before each class. Each of these elements is included on our webpage and is accessible with just one click.


Wednesday, Week 2: Put it All Online (Part 1) 

You’ve got the tech, you’ve got the tools, you’ve planned the content and the course. Now it’s time to spend two solid days building, posting, and hyperlinking. Your eyes and your soul might be tired by the end of this process, but just remember: you’re saving yourself, and your students, a massive headache by doing this now. A well-designed digital infrastructure can promote student learning by reducing the cognitive load otherwise imposed by having students hunt down and download content on their own, follow a breadcrumb trail of unstable links, or send forty emails back and forth asking how to submit any particular assignment. Once this ship leaves the bay, you barely have to touch the steering wheel…


Thursday, Week 2: Put it All Online (Part 2)

Keep plugging!


Friday, Week 2: Test the Site, Give some Demos, Send a Welcome Email

The final day. This should only take a few hours. Log out of whatever platform, site, or tools you’re having students use. Create a fake student account and click around. Make sure your hyperlinks are working. Make sure it’d be obvious to you what to do if you were a student in your own class. Show your work to some colleagues or family members. Maybe ask them to click around. Post it on Facebook. Take feedback graciously. And then?

It’s time to involve the people this whole process is actually about. Send a warm email to your class welcoming them to your course. Tell them how excited you are to meet (or “meet”) them. Include a picture of yourself and offer multiple ways for them to get in touch. Remember that the people involved here are infinitely more important than any of the tech. Now you’re ready to do what you do best. Go forth and make some learning happen!

The post Yes, There’s Still Time to Design an Excellent Fall Course (guest post by Paul Blaschko) appeared first on Daily Nous.

Four Levels of Real World Home Classrooms

Published by Anonymous (not verified) on Wed, 22/07/2020 - 4:30am in



Online learning has shown significant growth over the last decade as the Internet and other communication technologies are used to provide learners with the opportunity to gain new skills. Since the COVID-19 outbreak, online learning has become even more of a reality in people’s lives.

As our classrooms and learning spaces shift from public buildings to our homes, it can be a challenge to consider how best to connect digitally. This post will share some of the tools and profiles involved in online learning to help you stay connected.

High tech means missed connections

As virtual classrooms and online learning proliferate, a connection to these digital spaces is essential to making sure that learners are not left behind.

The challenge is that not all schools, educators, parents, or children are equipped to learn effectively in digital spaces. Many of these challenges disproportionately impact low-income students and those with special needs.

Each of the profiles that I’ll identify below begins with a laptop or desktop computer, connection to the Internet, and a space to conduct work. Many of our students and instructors do not have those basic connections and as a result are abandoned as we make this transition.

Identify your home base

All of the classrooms begin with a space to work. This can be a desk, or a kitchen or dining room table. I understand that your situation at home may not make it easy to carve out a space for a classroom. In this event, find a small area to identify as your work space that you can go to…and leave each day. If needed, you can set things up, and take them apart at the end of the day.

It is important that you identify a space for work that becomes your home base for work. This helps you create some balance as you work from home. Just because you can work all of the time from home, doesn’t mean that you should.

You should identify days and times in your schedule when you will enter the home classroom, and the time when you’ll leave. Just like you would go to a physical classroom space…either driving there, or going into a classroom…you should take this same mindset as you think about your home classroom space.

With that caveat, here are the four different levels of real world home classrooms.

Level One

Level one starts with the use of a laptop or desktop computer and an Internet connection. Yes, you could use a tablet or mobile device…but you do not want to. I understand that not everyone has access to a laptop/desktop computer and an Internet connection, but this is mandatory for a home classroom.

The computer pictured below is an older MacBook Pro. For most home classrooms, a Chromebook is a much better option than a more expensive computer: very good Chromebooks are often quite inexpensive, and they keep themselves updated.

Most of the time, we connect to the Internet using wifi. Most people don’t think twice about the way they connect to the Internet; we just expect things to work. In the photo below, I include an ethernet cable. This is a hard-wired connection to the modem or router that brings the Internet into your home, and it will usually give you a far better connection than wifi. If possible…use an ethernet cable.

Headphones are a mandatory tool to use as you connect online. They help remove/reduce feedback as you connect to a video conferencing (Zoom, Skype, WebEx) meeting. Many times they’ll include a microphone that is better (or at least closer) than the mic on your computer.

Your cell phone is not necessary, but it may offer a good option for a secondary screen if you have one. The computer in the picture below has a 13-inch screen, which can be a challenge to use if you have multiple spaces open. Using a cell phone, even an older used phone, can be helpful as you keep an eye on text chats, or listen to YouTube videos or podcasts.

Power cables and a surge protector are important as you want to remain plugged in all day while you work.

Lastly, I recommend several other elements to make your home classroom more user-friendly. Notebooks and printed copies of worksheets or teaching materials are helpful; they create an offline space for you to quickly jot down notes or complete activities. A desk lamp behind the webcam or screen of your laptop helps light your face and makes you look more human in a video chat. Hydration is also important. Drink your water.

Level Two

The second level of classroom contains all of the components of the first, but adds a tablet. A tablet is helpful because you can use it as a second screen in your workflow, and it obviously gives you more screen real estate than a mobile device.

Many tablets also can be used in conjunction with your computer to give you a second monitor. This means that you can connect the tablet to your laptop with a wire or wirelessly, and you can drag materials across both screens. Depending on the laptop and tablet you’re using, there are multiple opportunities to connect a tablet as a second display. Search online to get a better sense of the options.

If you cannot connect the tablet as a second display, I usually set the tablet up like an easel next to the laptop and use the tablet to monitor text chats, review webpages or documents, or listen to content. This allows me to use the laptop for typing, video conferencing, or more intensive activities.

Level Three

Most of what we’ve shown up to this point details items in a home classroom that are a bit easier to obtain, set up, and take down each day. Level three includes all of the elements in levels one and two, but adds in another monitor. This is the setup I’ve used for my home office for years.

I brought my laptop to work, school, and/or home. While at home, I used a spare monitor as a second screen. Most laptops (Mac, Windows, Chromebooks) will allow you to connect to a monitor using a VGA or HDMI cable, the same sort of connection you would use to hook up a projector for a presentation. In this use case, you can use the secondary monitor as an extended display.

The second monitor might be an old television or a computer monitor that is no longer in use. The monitors I’ve used in the past were ones I obtained for free from neighbors who left them out for trash collection. Garage sales and online swap services like Facebook Marketplace or Craigslist are great places to find inexpensive or free monitors.

Level Four

Level four includes all of the elements shared in the previous levels, but switches out the laptop for a desktop computer. This is my current real world home classroom. The setup in level four also includes a microphone for use in recording webinars, podcasts, or joining video chats.

I bought the desktop computer in pieces on Facebook Marketplace and rebuilt it for this setup. Instead of the laptop screen, I’m using an ultrawide display mounted to an arm that also supports the old monitor I used as a secondary display with my laptop. The ultrawide screen is basically the equivalent of having two 24-inch monitors side by side.

The desklamp is positioned in the corner to flood light behind the webcam (which is mounted in the middle of everything). This creates some lighting to assist in video chats and webinars.

This workspace is overkill for some, but it is something that I’ve built up over the years as I work and learn from home. Review the video below for some guidance on my home office/classroom, and to see how things are set up.

Hopefully this post helps you think about developing a good real world home classroom. Please send in pictures of your spaces as well.

If you like this content, you should subscribe to my weekly newsletter to become digitally literate.


Photo by Phil Goodwin on Unsplash

The post Four Levels of Real World Home Classrooms first appeared on W. Ian O'Byrne.

Hybrid & Online Teaching: Four Helpful Workshops

Published by Anonymous (not verified) on Tue, 21/07/2020 - 11:39pm in

Julia Staffel, assistant professor of philosophy at the University of Colorado, Boulder, and Zak Kopeikin, a new graduate of the PhD program there, recently conducted four online workshops on hybrid and online teaching, sharing what they know about online teaching strategies and technology to save others the time and trouble of researching and figuring out various options.

The sessions were recorded and are now available for anyone to view.

Staffel and Kopeikin write:

We designed these four teaching workshops with the intention to give people an efficient overview of the knowledge they might need to teach hybrid and online classes. We didn’t want to give a lot of basic teaching advice, but instead we assume that people already have ideas for what they want to cover and are familiar with designing lessons. We’re showing them how to use the relevant technology to make it happen. We also try to be mindful of people’s time, so we selected topics and applications that people can incorporate in their teaching without an enormous time commitment.

For each app or functionality we discuss, we give a short tutorial that discusses how to set it up and what it looks like from the student’s perspective. We also offer some ideas for how to use them in class and how some of these technologies can be used to reduce cheating. Of course there are many more possibilities out there, but we believe that we’re offering people who don’t want to spend a lot of time researching teaching technology a good overview of some common and useful tools for online teaching.

Below are the sessions and links to the slides used in them.


Workshop 1: Canvas for Online Teaching

Topics: Labeling Strategies for your Canvas page  ·  Giving students audio- and video-feedback (12:35)  ·  Using discussion boards effectively (18:35) · Using to-do lists for students (30:39)  ·  Scheduling office hours with Canvas calendar (38:10)  ·  Canvas quizzes (47:10)  ·  Recording name pronunciation (1:12:37)  ·  Accessing Canvas support and tutorials (1:15:00)

 Slides here.

Workshop 2: Zoom for Synchronous Online Teaching

Topics:  Creating personal connections in online classes by using name games  ·  An overview of Zoom settings and different ways of allowing students to speak up (6:10, 14:27)  ·  Recording Zoom meetings (21:20)  ·  Using Zoom’s polling function (33:45)  ·  Using a tablet and stylus to draw on a Zoom whiteboard (45:50)  ·  Using breakout rooms (57:50)  ·  Taking attendance in Zoom (1:09:42)

 Slides here.

Workshop 3: Recording Content for Asynchronous Teaching

Topics: Best practices for recorded lectures  ·  Using different programs to record lectures  ·  Zoom (4:47)  ·  PP Slides with Voice over (12:50)  ·  Audacity for audio (19:15) ·  Snag it (46:40)  ·  Using Play Posit to embed quizzes in recorded lectures (52:23)  ·  Recording in-person classes (1:13:25)

 Slides here.

Workshop 4: Further Apps, Methods and Resources

Topics: Using podcasts and videos (by others) as teaching tools  ·  Podcasts (0:48)  ·  Videos (10:30)  ·  Using Perusall for jointly annotating text (18:15)  ·  Using Piazza for communication with students (30:10)  ·  Using Flipgrid for recording short video comments (49:10)

Slides here.

Related: “Six Ways to Use Tech to Design Flexible, Student-Centered Philosophy Courses”, “Hybrid or Blended Classes: How Can They Be Done Well?”

The post Hybrid & Online Teaching: Four Helpful Workshops appeared first on Daily Nous.

Technocratic Urban Governance and the Need to Localise Computing Infrastructure (Part I)

Published by Anonymous (not verified) on Tue, 21/07/2020 - 3:01am in

Computer-based technologies are increasingly being deployed in cities, and new theoretical frameworks are needed to deal with the predominant and always neoliberal ‘smart’ agenda. This two-part essay intends to describe the adoption of technologies in cities and envision alternative ways of approaching urban technology in Australia, arguing that councils, particularly in Victoria, are structured on a scale that allows more progressive technological regimes to be explored. The municipalisation of computer-based infrastructure is a way of reclaiming control of the future of the city, sovereignty over technology, and the liberty to decide the philosophical stance and guiding paradigm of a resilient community-oriented city. The second part of this essay discusses the urban technology required to pinpoint what needs to be municipalised in order to guarantee community-oriented endeavours, approaching the use of technology in a way that can enhance quality of life for its citizens.

Cities and Technological Regimes (Part I)

In the past four decades, cybernetic capitalism, driven by neoliberal policies, has benefited oligopolies that now control the economy, with damaging effects for all sectors of society. They increase poverty and inequality, and they weaken the capacity of governments to provide public services. The provision of these services has become a very attractive business opportunity for technology corporations as they emerge as the spearhead of the digital economy, benefiting from loose regulation and tax exemptions, and treated by governments as allies in the senseless path of perpetual growth. Corporations operating in the current platform economy operate under a model of data extractivism, enabling surveillance practices to control and manipulate behaviour, and promote the never-ending production/consumption of services developed from the extracted data of commodified citizens. They also aim to automate processes throughout society while promoting the rhetoric of ‘efficiency’, understood in monetary and decidedly not ecological terms. Urban environments are attractive strongholds for corporations—particularly those dedicated to urban and citizen data extraction—for their connecting functions in the cybernetic network. People’s data has become a valuable asset in this era of capitalism, a time of deep mediatisation, and cities are important nodes in this hyper-commercialised network.

Today, urban data is being conceived, extracted and managed under different agendas, among them the well-received and marketable ‘smart city’ agenda, which in its original form is a vendor-driven technocratic approach to urban governance. Through flexible agreements made with governments at different scales, privately owned service-oriented technology corporations infiltrate the provision of public services by deploying computer-based urban infrastructure. They operate in a highly deregulated environment that has allowed them to manage city systems, roll out technologies, collect and analyse data, and develop pervasive surveillance services and platforms, strengthening their governance power in cities across the globe. Acquiring this power allows ‘California ideology’ technology corporations to determine city problems and propose fixes, bypassing deliberation by citizens and local governments. The promoters of the ‘smart’ narrative as a type of urban development are big technological corporations such as Google, IBM, Microsoft, CISCO and Siemens. Arguably the largest and most researched example is Google’s Sidewalk Labs project in Toronto’s Waterfront area, which was dropped in May after facing staunch resistance from the citizens of Toronto. Still, it seems a natural progression in technological urbanism that corporations now not only extract value through the provision of platform-based services but gain proprietary rights over city space, extending their power and control over the urban environment.

The ‘smart city’ agenda platformises cities and their infrastructure, making urban technologies, and their operations and politics, invisible. For city governments, the ‘smart’ tag is being used to denote the desired attributes of a corporatised and technocratic city-governance style, such as efficiency and sustainability. The reality is that these aspirations are often unmet; worse, when they are used as the solution to all problems, computer-based technologies have an enormous impact on the environment, from the rare metals and conflict minerals used in their development to the large amounts of energy expended in their extraction and functioning. For ‘smart’ provisioners, the vendors of technology, it is about monetising urban and citizen data flows—behavioural data that will inform the development of services that will then be sold to optimise the operations of a socially constructed environment for profit and growth. Cities have become sensed and s-censored environments that are being datafied through the procurement processes of private-vendor technology provision; this is a public issue that requires digital policy in order to regulate it for the public good.

Currently, data is being used as a fundamental resource to inform policy and urban interventions, and it has been commonly assumed that this data is ‘objective’ and provides a neutral insight into cities and their processes, untainted by politics or partisan interests. This assumption is damaging for the development of citizen-oriented urban futures and resilient cities. Data always carries a particular vision; when generated by corporations it tends to commodify and colonise life and space in cities for profit. The ‘smart city’ agenda follows a logic of neoliberal platformisation of the city and its urban infrastructures, where wealth is transferred to private corporations that structurally cannot prioritise public benefit or citizens’ well-being above their own profit-maximising drive, or work to strengthen democratic governments and their institutions. This appropriation of urban and public life has great social, political and ethical implications. It might be time for local governments to become more proactive in their role of managing their technology and data assets.

The Australian approach to digital infrastructure has been oriented to efficiency, treating technology as unbiased and capable of being adjusted for different purposes and outcomes. Numerous cities and city councils are transitioning towards smart-city agendas with this mentality, and many of them have already partnered with corporations such as IBM, Cisco and Microsoft. A government body, the Clean Energy Finance Corporation, finances a program to convert one hundred councils into ‘smart councils’ via an initiative called Thinxtra. An Internet of Things (IoT) platform aims to deploy ‘smartness’ across the councils using technology provided by SigFox, a French corporation that offers a computer-based ‘network to listen to billions of objects broadcasting data, without the need to establish and maintain network connection’. The initiative will first be used for rubbish collection, but it is expected to eventually include sensors in public spaces that will extract and control a wide range of urban data. For that purpose, a network of computer-based infrastructure is already being built in different places to, in the words of client Yarra City Council, provide ‘a robust platform to move information quickly and efficiently’. This initiative is a clear example of the top-down ‘smart city’ approach, a technological fix to tackle councils’ perceived challenges. In all cases the strategy is usually the same: corporations offer money to governments using a ‘philanthropic’ approach, taking advantage of budget cuts, locking cities in and rapidly becoming governance actors. The technologies they bring are then publicised as a panacea for urban management. In reality, the partnerships are very rigid and limiting to cities, which are bound to highly confidential procurement contracts, not to mention the abyss of ‘terms and conditions’ that citizens are compelled to agree to.

Under the privatised ‘smart city’ ideology, the city is being programmed: the software and code give instructions and predetermine the information that will be harvested and the intended outputs. What is blurred by the ‘smart’ rhetoric and practice is who decides what data is being collected, the purposes to which it will be put, who will contribute, who will benefit and who will be left out. Key computer-based infrastructure should be secured and treated as a public good, and municipalised, allowing the creation of public-commons agreements for the infrastructure that will empower local governments and their citizens, and enable the promotion of a healthier digital urban environment and a response to complex city challenges. When treated as such, the platformisation of infrastructure for corporate benefit is harder to achieve, or at least it must be negotiated under different terms. Because cities represent a powerful scale at which to enforce or break ideologies, they are places where counter-movements arise. In Australia, bringing urban computer-based infrastructure to the front of the digital-policy debate could help to recover public control over it, and to discuss its ownership, governance arrangements and regulation and the social agreement around it.

Part II of this article can be found here.

A fully referenced PDF version can be downloaded here.

Britain and Russia Nearly Cooperated to Develop HOTOL Spaceplane

Published by Anonymous (not verified) on Sat, 18/07/2020 - 12:28am in

The news today has been partly dominated by reports that the Russians have been trying to steal secrets of a possible vaccine for Coronavirus. Unfortunately, it wouldn’t remotely surprise me if this was true. Way back in the 1990s the popular science magazine, Focus, did a feature on espionage which stated that most of it was industrial with competing corporations trying to steal each other’s secrets. And I think that during the Cold War the Russians were spying on British companies trying to steal technology. But during the ’90s there was a period when it seemed that Britain and Russia would work together to develop the British spaceplane, HOTOL.

HOTOL would have used a mixture of air-breathing and conventional rocket engines to get into orbit, taking off and landing on ordinary airstrips. It would have been initially unmanned, designed to carry payloads of 7,000-8,000 kilos into low Earth orbit. Crewed missions would be carried out by converting the cargo compartment into a pressurized cabin. The project was cancelled because there were problems developing the air-breathing engines. It was hoped that the Europeans would be interested in supporting it, but they refused on the grounds of the possible cost. However, this resulted in plans for the plane to be adapted to take off from a Russian transport plane. The entry for ‘Spaceplanes’ in the book Space Exploration in the series of Chambers Encyclopedic Guides (Edinburgh: W&R Chambers 1992) says of this

In response to this, an interim HOTOL, using conventional rockets instead of the original air-breathing engines, has been proposed. The interim HOTOL, which has a shorter, fatter fuselage, would be carried to high altitude on the back of the huge Soviet-developed Antonov An-225 transport aircraft and then released. Once clear of the aircraft, HOTOL would fire its rocket motor to climb the rest of the way into orbit. If development were to be authorized it is believed that the first flight of the interim HOTOL could be in 2005. (p. 207).

This entry also contained the following artist’s impression of the HOTOL spaceplane taking off from the back of the Soviet transport plane.

I think this would have been an eminently practical project. The American X-15 rocket plane, which reached the edge of space, was launched from a specially equipped conventional aircraft, as were the various lifting bodies that led to the development of the Space Shuttle. If Britain and Russia had cooperated on it, then we and the Russians would have been boldly going into space together 15 years ago. Obviously politics and doubtless costs intervened. I dare say there were also concerns about technology transfer and the Russians acquiring British aerospace secrets.

But it’s an example of yet another opportunity to expand onto the High Frontier being missed. Nevertheless, HOTOL has now been superseded by Skylon, which is almost complete and should fly if it gets the backing it needs to put Britain back in orbit.

Homeland Security Worries Covid-19 Masks Are Breaking Facial Recognition, Leaked Document Shows

Published by Anonymous (not verified) on Fri, 17/07/2020 - 5:10am in

While doctors and politicians still struggle to convince Americans to take the barest of precautions against Covid-19 by wearing a mask, the Department of Homeland Security has an opposite concern, according to an “intelligence note” found among the BlueLeaks trove of law enforcement documents: Masks are breaking police facial recognition.

The rapid global spread and persistent threat of the coronavirus has presented an obvious roadblock to facial recognition’s similar global expansion. Suddenly everyone is covering their faces. Even in ideal conditions, facial recognition technologies often struggle with accuracy and have a particularly dismal track record when it comes to identifying faces that aren’t white or male. Some municipalities, startled by the civil liberties implications of inaccurate and opaque software in the hands of unaccountable and overly aggressive police, have begun banning facial recognition software outright. But the global pandemic may have inadvertently provided a privacy fix of its own — or for police, a brand new crisis.

A Homeland Security intelligence note dated May 22 expresses this law enforcement anxiety, as public health wisdom clashes with the prerogatives of local and federal police who increasingly rely on artificial intelligence tools. The bulletin, drafted by the DHS Intelligence Enterprise Counterterrorism Mission Center in conjunction with a variety of other agencies, including Customs and Border Protection and Immigration and Customs Enforcement, “examines the potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems — such as video cameras, image processing hardware and software, and image recognition algorithms — to monitor public spaces during the ongoing Covid-19 public health emergency and in the months after the pandemic subsides.”

The Minnesota Fusion Center, a post-9/11 intelligence agency that is part of a controversial national network, distributed the notice on May 26, as protests were forming over the killing of George Floyd. In the weeks that followed, the center actively monitored the protests and pushed the narrative that law enforcement was under attack. Email logs included in the BlueLeaks archive show that the note was also sent to city and state government officials and private security officers in Colorado and, inexplicably, to a hospital and a community college.

The new public health status quo represents a clear threat to algorithmic policing.

Curiously, the bulletin fixates on a strange scenario: “violent adversaries” of U.S. law enforcement evading facial recognition by cynically exploiting the current public health guidelines about mask usage. “We assess violent extremists and other criminals who have historically maintained an interest in avoiding face recognition,” the bulletin reads, “are likely to opportunistically seize upon public safety measures recommending the wearing of face masks to hinder the effectiveness of face recognition systems in public spaces by security partners.” The notice concedes that “while we have no specific information that violent extremists or other criminals in the United States are using protective face coverings to conduct attacks, some of these entities have previously expressed interest in avoiding face recognition and promulgated simple instructions to conceal one’s identity, both prior to and during the current Covid-19 pandemic.” This claim is supported by a single reference to a member of an unnamed “white supremacist extremist online forum” who suggested attacks on critical infrastructure sites “while wearing a breathing mask to hide a perpetrators [sic] identity.” The only other evidence given is internet chatter from before the pandemic.

But the bulletin also reflects a broader surveillance angst: “Face Recognition Systems Likely to be Less Effective as Widespread Wear of Face Coverings for Public Safety Purposes Continue,” reads another header. Even if Homeland Security seems focused on hypothetical instances of violent terrorists using cloth masks to dodge smart cameras, the new public health status quo represents a clear threat to algorithmic policing: “We assess face recognition systems used to support security operations in public spaces will be less effective while widespread public use of facemasks, including partial and full face covering, is practiced by the public to limit the spread of Covid-19.” Even after mandatory mask orders are lifted, the bulletin frets, the newly epidemiologically aware American public is likely to keep wearing them, which would “continue to impact the effectiveness of face recognition systems.”

The battle over masks predates the pandemic. During the 2011 Occupy Wall Street protests, the New York City Police Department performed legal gymnastics to arrest demonstrators donning grinning Guy Fawkes masks popularized by the hacktivist group Anonymous, citing an 1845 law that bans groups of two or more people from covering their faces in public except at “a masquerade party or like entertainment.” Bans in states around the country followed, often in response to protest movements. In 2017, for example, North Dakota banned masks amid protests over the Dakota Access pipeline. Other anti-mask laws were designed to prevent Ku Klux Klan gatherings, though often with the primary aim of protecting white elites.

The Homeland Security document cites as cause for concern tactics used in recent pro-democracy demonstrations in Hong Kong. In that movement, which coincided with the emergence of China’s sophisticated surveillance state, police carried around cameras attached to poles, presumably to capture the faces of protesters. Demonstrators responded by shining laser pointers at police, sawing down lampposts mounted with cameras, and masking up. “At first only the militants wore masks,” Chit Wai John Mok, a Ph.D. student in sociology who studies social movements, told The Intercept. “But later on when even peaceful assemblies or marches were also banned, most protesters, moderates or militants, wore them.”

In Hong Kong, too, authorities saw masks as a problem. Last October, Chief Executive Carrie Lam banned face coverings at protests, further enraging protesters. In January, as Covid-19 spread, the government abruptly reversed its policy and began encouraging people to wear masks in public places.

In the past few months, companies around the world have scrambled to adapt their systems to facial coverings, with a few claiming that they can identify masked faces. So far, there is little evidence to support these claims. Some companies appear to have updated their algorithms by photoshopping masks onto images from existing datasets, which could lead to significant errors. To use facial recognition to identify individuals on the street, “it would be best to have lots of real life examples showing the many ways people wear masks and the different angles they get captured,” Charles Rollet, an analyst with IPVM, an independent group that tracks surveillance technology, told The Intercept. Without such images, he added, “There’s a risk of a substantially higher false positive rate, which, in a law enforcement setting, could lead to wrongful arrests or worse.” IPVM tested four facial recognition systems in February and found that their performance was drastically reduced with masked faces.

Homeland Security’s Customs and Border Protection, which uses facial recognition screening on international travelers, has also claimed that its technology works on masked faces. In that scenario, however, travelers look straight into the camera — an angle that makes it easier to identify them, even with masks.

Homeland Security has recently come under fire for efforts to expand the use of facial recognition by CBP. In December, following public outcry, department officials walked back plans to make facial recognition of U.S. citizens mandatory in airports when they fly to or from international destinations. Current protocols allow citizens to opt out of facial recognition screening.

Even as Homeland Security warned in the document of the ostensible risks posed by masked “violent adversaries,” the agency cautioned about violence perpetrated by anti-maskers. The same day that the Minnesota Fusion Center distributed the intelligence note, it circulated a second one warning that some people viewed mask orders as “government overreach” — and would sooner fight than cover their faces. “There have been multiple incidents across the United States,” the second document read, “of individuals engaging in assaults on law enforcement, a park ranger, and essential business employees in response to requests to wear face masks and to abide by social distancing policies.”

The post Homeland Security Worries Covid-19 Masks Are Breaking Facial Recognition, Leaked Document Shows appeared first on The Intercept.

Hack of 251 Law Enforcement Websites Exposes Personal Data of 700,000 Cops

Published by Anonymous (not verified) on Thu, 16/07/2020 - 1:00am in

After failing to prevent the terrorist attacks of September 11, 2001, the U.S. government realized it had an information sharing problem. Local, state and federal law enforcement agencies had their own separate surveillance databases that possibly could have prevented the attacks, but they didn’t communicate any of this information with each other. So Congress directed the newly formed Department of Homeland Security to form “fusion centers” across the country, collaborations between federal agencies like DHS and the FBI with state and local police departments, to share intelligence and prevent future terrorist attacks.

Yet in 2012 the Senate found that fusion centers have “not produced useful intelligence to support Federal counterterrorism efforts,” that the majority of the reports fusion centers produced had no connection to terrorism at all, and that the reports were low quality and often not about illegal activity. Fusion centers have also been criticized for privacy and civil liberties violations such as infiltrating and spying on anti-war activists.

Last month, the transparency collective Distributed Denial of Secrets published 269 gigabytes of law enforcement data on its website and using the peer-to-peer file sharing technology BitTorrent. The data, stolen from 251 different law enforcement websites by the hacktivist collective Anonymous, was mostly taken from fusion center websites (including many of those listed on DHS’s website), though some of the hacked websites were for local police departments, police training organizations, members-only associations for cops or retired FBI agents, and law enforcement groups specifically dedicated to investigating organized retail crime, drug trafficking, and working with industry.

After the BlueLeaks data was published, Twitter permanently suspended the DDoSecrets Twitter account, citing a policy against distributing hacked material. Twitter has also taken the unprecedented step of blocking all links to the site, falsely claiming to users who click that the website may be malicious. Twitter is implementing these policies arbitrarily; for example, the WikiLeaks Twitter account and links to its website are still accessible despite the large amount of hacked material that WikiLeaks has published. Following Twitter’s example, Reddit banned the r/blueleaks forum — citing its policy against posting personal information — where users discussed articles based on leaked documents and their own findings from digging through the BlueLeaks data. German authorities have seized a server belonging to DDoSecrets that was hosting BlueLeaks data, leaving BitTorrent as the only way the data is currently being distributed by the organization. (For the record, I’m a member of DDoSecrets’ advisory board.)

“I think the bans are simple attempts to slow or stop the spread of the information and news,” Emma Best, a co-founder of DDoSecrets, told The Intercept. “The fact that the server was seized without a warrant or judicial order and now sits idle while the Germans debate whether or not to let FBI have it simply emphasizes the conclusion that censorship and retaliation, not just investigation, are the driving forces,” they added.

All of the hacked websites were hosted and built by the Texas web development firm Netsential on Windows servers located in Houston. They were all running the same custom (and insecure) content management system, developed using Microsoft’s ASP.NET framework in the programming language VBScript, using Microsoft Access databases. Because they all run the same software, if a hacker could find a vulnerability in one of the websites that allowed them to download all the data from it, they could use that vulnerability to hack the rest of the websites without much additional effort.

The hacked data includes a massive trove of law enforcement documents, most of which dates from 2007 until June 14, 2020, well into the wave of anti-police brutality protests triggered by the police murder of George Floyd in Minneapolis. The data also includes the source code for Netsential’s custom CMS — while analyzing it for this story, I discovered a vulnerability myself — and the content of the databases that these websites used.

“Netsential can confirm its web servers were recently compromised,” the company said in a statement on its website, which itself runs this same CMS. “We are working with the appropriate law enforcement authorities regarding the breach, and we are fully cooperating with the ongoing investigation. We have enhanced our systems and will continue to work with law enforcement to mitigate future threats. Netsential will continue to work with clients impacted by the intrusion. Inasmuch as this is an ongoing investigation, and due to the sensitivity of client information, Netsential will provide no further statement while the matter is pending.”

“It’s a disaster for law enforcement from a PR perspective,” Phillip Atiba Goff — CEO and co-founder of Center for Policing Equity, an organization that uses data science to combat racial bias within U.S. police departments — told me in an encrypted phone call. “That there is worse stuff than what we’re seeing, that it’s not just individual [police] Facebook accounts but it’s part of the culture of the department — that doesn’t surprise me. That shouldn’t surprise anyone.”

700,000 Law Enforcement Officers Exposed

The vast majority of people who have logins on these hacked websites are law enforcement officers, and Netsential’s CMS stores quite a lot of personal information about each account.

For example, the Northern California Regional Intelligence Center has 29,114 accounts, and each one includes a full name; rank; police department or agency; email address; home address; cellphone number; supervisor’s name, rank, and email address; the IP address used to create the account; and a password hash — a cryptographic representation of the user’s password (hashed with 1,000 iterations of PBKDF2 and a 24-byte salt, if you’re that kind of nerd). If a user’s password is weak, hackers with access to its hash could crack it to recover the original password, potentially leading to a giant list of all the weak passwords used by U.S. law enforcement.
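The risk described above can be sketched in a few lines of Python. This is a hypothetical illustration of PBKDF2 with the reported parameters (1,000 iterations, 24-byte salt), not Netsential’s actual code; the SHA-1 digest choice and all function names here are assumptions.

```python
import hashlib
import os

# Illustrative parameters matching the scheme described in the article.
ITERATIONS = 1000
SALT_BYTES = 24

def hash_password(password, salt=None):
    """Return (salt, derived_key) for a password using PBKDF2-HMAC."""
    if salt is None:
        salt = os.urandom(SALT_BYTES)
    dk = hashlib.pbkdf2_hmac("sha1", password.encode(), salt, ITERATIONS)
    return salt, dk

def crack(salt, target_hash, wordlist):
    """Dictionary attack: each guess costs only 1,000 PBKDF2 rounds,
    far below modern hardening guidance, so weak passwords fall fast."""
    for candidate in wordlist:
        _, dk = hash_password(candidate, salt)
        if dk == target_hash:
            return candidate
    return None

salt, h = hash_password("password1")
print(crack(salt, h, ["letmein", "123456", "password1"]))
```

The salt prevents precomputed lookup tables, but the low iteration count is the weak point: an attacker can test millions of dictionary words per hash on commodity hardware.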

This is from a single fusion center. The BlueLeaks data contains similar information for 137 separate websites, though most have fewer accounts and not every website contains all of these pieces of information. Some don’t contain password hashes.

The two largest account databases come from the National Guard’s counterdrug training program website, with more than 200,000 accounts exposed, and the Los Angeles High Intensity Drug Trafficking Area training program website with nearly 150,000 accounts exposed. In total, the hacked data includes private details for over 711,000 accounts.

“I get that there’s a community concern that there’s not accountability for law enforcement, and there’s a desire among a nontrivial portion of the population for something like not justice but vengeance, and there’s a feeling that the entire population of law enforcement is to blame for what we’ve seen in the streets,” Goff said. “I really pray that no officer is hurt because of this. Even more I pray that no officer’s family is hurt because of this.”

Hacked Websites

Many of the websites belonged to traditional fusion centers, such as Minnesota’s fusion center called ICEFISHX, the Alabama Fusion Center, and even the Mariana Regional Fusion Center based in the Mariana Islands, a U.S. commonwealth in the North Pacific.

But a number of the hacked websites belong to organizations in which law enforcement agencies partner with industry, such as:

  • Energy Security Council, a nonprofit where law enforcement collaborates with oil companies. Its board of directors includes executives from companies like Chevron and Exxon Mobil.
  • Chicagoland Financial Security Group, a “crime watch”-type website that Chicago law enforcement uses to communicate with the financial industry (presumably, white-collar crime isn’t included in their definition of “crime”). Partner organizations include Bank of America, Chase, U.S. Bank, and several other financial institutions.
  • Chicago Hospitality Entertainment and Tourism Security Association (Chicago HEAT), a nonprofit where the DHS, FBI, DEA, and Chicago Police collaborate with Illinois Hotel & Lodging Association.
  • Law Enforcement and Private Security Los Angeles, which organized annual symposiums between law enforcement and private security companies.
  • Organized retail crime alliances (ORCAs), partnerships between law enforcement and local retail industries that investigate organized shoplifting rings. These include Alert Mid-South (Tennessee, Mississippi, Alabama), CAL ORCA (California), Central New York ORCA, and many others.

Many of the hacked websites belong to high intensity drug trafficking area programs, or HIDTAs, essentially fusion centers focused solely on the war on drugs. These include Atlanta-Carolinas HIDTA, New Mexico HIDTA, and Puerto Rico-U.S. Virgin Islands HIDTA, as well as many others.

Some of the hacked websites belong to local police departments, such as the Jersey Village Police Department in Texas, which prominently displays a link to request a “vacation house watch.” In this case, partners who log in to the website appear to be individuals who live or own property in Jersey Village. Websites belonging to the Lamar University Police Department (also in Texas), the Burlingame Police Briefing Board (in California), and several other local police departments were among those hacked.

Many of the hacked websites belonged to training academies for law enforcement, such as the Iowa Law Enforcement Academy, the Amarillo College Panhandle Regional Law Enforcement Academy, and many others. The Los Angeles Police Department Detective Training Unit website, which was taken offline after the data breach, offers courses taught by billionaire Peter Thiel’s private surveillance company Palantir.


Screenshot of, which is now offline.

Screenshot: The Intercept

Finally, several of the hacked websites belong to members-only associations like the Houston Police Retired Officers Association, the Southeastern Michigan Association Chiefs of Police, and associations for various chapters of the FBI National Academy Associates.

Suspicious Activity Reports

A week after Derek Chauvin, a Minneapolis police officer, knelt on George Floyd’s neck for eight minutes while he lay handcuffed in the street until he died, triggering massive nationwide protests, a young political science major in Oregon was contacting lawyers. “I am a long time activist and ally of the Black Lives Matter movement,” she wrote to a Bay Area law firm. “Is there anyway[sic] that I could add your firm, or consenting lawyers under your firm, to a list of resources who will represent protesters pro bono if they were/are to be arrested? Thank you very much for your time.”


PDF attachment in a Suspicious Activity Report from the NCRIC BlueLeaks data. Some personal information redacted in the original. Additional redactions by The Intercept.

Screenshot: The Intercept

A lawyer who read this message was infuriated and anonymously reported the student to the authorities. “PLEASE SEE THE ATTACHED SOLICITATION I RECEIVED FROM AN ANTIFA TERRORIST WANTING MY HELP TO BAIL HER AND HER FRIENDS OUT OF JAIL, IF ARRESTED FOR RIOTING,” he typed into an unhinged letter, in all-caps, that he mailed to the Marin County District Attorney’s office, just north of San Francisco.



PDF attachment in a Suspicious Activity Report from the NCRIC BlueLeaks data.

Screenshot: The Intercept

An investigator in the Marin County DA’s office considered this useful intelligence. She logged into the Northern California Regional Intelligence Center’s CMS and created a new Suspicious Activity Report, or SAR, under the category “Radicalization/Extremism” and typed the student’s name as the subject. “The attached letter was received via US Postal Service this morning,” she wrote in the summary field. The student “appears to be a member of the Antifa group and is assisting in planning protesting efforts in the Bay Area despite living in Oregon.”

She uploaded a scanned PDF of the letter to the fusion center. The return address on the envelope was the address of the San Francisco District Attorney’s office. The Intercept could not confirm if the attorney who reported this student works with the San Francisco DA or not.


PDF attachment in a Suspicious Activity Report from the NCRIC BlueLeaks data.

Screenshot: The Intercept

This is one example from over 1,200 community-submitted SARs in the BlueLeaks data, the bulk of which are included in the data of 10 different fusion sites. Here are a few others.

A probation officer posted a SAR to the Orange County Intelligence Assessment Center stating, “3 young females with hijabs were videotaping the LJC [juvenile court] building,” and adding: “Although it is their constitutional right to video tape it did make me very concerned.”

Google reports threatening YouTube comments to the Northern California fusion center in the form of SARs, regardless of where the abusive YouTube user is located. For example, in June, Google reported a series of comments that a user from Michigan posted to different videos. Here is an example of one of his comments:

He was only a nigger. Who cares. With the years of unprovoked anti white attacks and contribution to white genocide. I feel nothing for nigger deaths. Or view them as human. Way they act. Trump 2020 and why I still refuse to serve niggers in my diner to the point I have history of pointing my gun at any who still enter despite the sign outside. That includes ones in police uniform who could be fake cops in stolen uniforms. Would be like letting in the Devil.

It’s unclear what the fusion center does with this information, if anything. But this SAR reported by Google is the only place in all of the BlueLeaks data where this YouTube user’s display name or email address appears.

Scratching the Surface

In all, the BlueLeaks archive contains more than 16 million rows of data from hundreds of thousands of hacked database tables: not just personal information of officers, but the content of bulk emails and newsletters, descriptions of alleged crimes with geolocation coordinates, internal survey results, website logs, and so much more. It also contains hundreds of thousands of PDFs and Microsoft Office documents, thousands of videos, and millions of images.

“I think that law enforcement can be better if [evidence of police crimes and racial bias] can be made more public,” Goff said. “The emails and records that I’ve seen could absolutely take down the entire profession.”

The post Hack of 251 Law Enforcement Websites Exposes Personal Data of 700,000 Cops appeared first on The Intercept.

The Microsoft Police State: Mass Surveillance, Facial Recognition, and the Azure Cloud

Published by Anonymous (not verified) on Wed, 15/07/2020 - 5:42am in

Nationwide protests against racist policing have brought new scrutiny onto big tech companies like Facebook, which is under boycott by advertisers over hate speech directed at people of color, and Amazon, called out for aiding police surveillance. But Microsoft, which has largely escaped criticism, is knee-deep in services for law enforcement, fostering an ecosystem of companies that provide police with software using Microsoft’s cloud and other platforms. The full story of these ties highlights how the tech sector is increasingly entangled in intimate, ongoing relationships with police departments.

Microsoft’s links to law enforcement agencies have been obscured by the company, whose public response to the outrage that followed the murder of George Floyd has focused on facial recognition software. This misdirects attention away from Microsoft’s own mass surveillance platform for cops, the Domain Awareness System, built for the New York Police Department and later expanded to Atlanta, Brazil, and Singapore. It also obscures that Microsoft has partnered with scores of police surveillance vendors who run their products on a “Government Cloud” supplied by the company’s Azure division and that it is pushing platforms to wire police field operations, including drones, robots, and other devices.

With partnership, support, and critical infrastructure provided by Microsoft, a shadow industry of smaller corporations provide mass surveillance to law enforcement agencies. Genetec offers cloud-based CCTV and big data analytics for mass surveillance in major U.S. cities. Veritone provides facial recognition services to law enforcement agencies. And a wide range of partners provide high-tech policing equipment for the Microsoft Advanced Patrol Platform, which turns cop cars into all-seeing surveillance patrols. All of this is conducted together with Microsoft and hosted on the Azure Government Cloud.

Last month, hundreds of Microsoft employees petitioned their CEO, Satya Nadella, to cancel contracts with law enforcement agencies, support Black Lives Matter, and endorse defunding the police. In response, Microsoft ignored the complaint and instead banned sales of its own facial recognition software to police in the United States, directing eyes away from Microsoft’s other contributions to police surveillance. The strategy worked: The press and activists alike praised the move, reinforcing Microsoft’s claimed position as a moral leader in tech.

Yet it’s not clear how long Microsoft will escape major scrutiny. Policing is increasingly done with active cooperation from tech companies, and Microsoft, along with Amazon and other cloud providers, is one of the major players in this space.

Because partnerships and services hosting third-party vendors on the Azure cloud do not have to be announced to the public, it is impossible to know the full extent of Microsoft’s involvement in the policing domain, or the status of publicly announced third-party services, potentially including some of the previously announced relationships mentioned below.

Microsoft declined to comment.

Microsoft: From Police Intelligence to the Azure Cloud

In the wake of 9/11, Microsoft made major contributions to centralized intelligence centers for law enforcement agencies. Around 2009, it began working on a surveillance platform for the NYPD called the Domain Awareness System, or DAS, which was unveiled to the public in 2012. The system was built with leadership from Microsoft along with NYPD officers.

While some details about the DAS have been disclosed to the public, many are still missing. The most comprehensive account to date appeared in a 2017 paper by NYPD officers.

The DAS integrates disparate sources of information to perform three core functions: real-time alerting, investigations, and police analytics.

Through the DAS, the NYPD watches the personal movements of the entire city. In its early days, the system ingested information from closed-circuit TV cameras, environmental sensors (to detect radiation and dangerous chemicals), and automatic license plate readers, or ALPRs. By 2010, it began adding geocoded NYPD records of complaints, arrests, 911 calls, and warrants “to give context to the sensor data.” Thereafter, it added video analytics, automatic pattern recognition, predictive policing, and a mobile app for cops.

By 2016, the system had ingested 2 billion license plate images from ALPR cameras (3 million reads per day, archived for five years), 15 million complaints, more than 33 billion public records, over 9,000 NYPD and privately operated camera feeds, videos from 20,000-plus body cameras, and more. To make sense of it all, analytics algorithms pick out relevant data, including for predictive policing.


A snapshot of the Microsoft Domain Awareness System — also called Microsoft Aware — desktop interface. Photo taken from Microsoft presentation titled “Always Aware,” by John Manning and Kirk Arthur.

Image: Microsoft presentation

The NYPD has a history of police abuse, and civil rights and liberties advocates like Urban Justice Center’s Surveillance Technology Oversight Project have protested the system out of constitutional concerns, with little success to date.

While the DAS has received some attention from the press — and is fairly well-known among activists — there is more to the story of Microsoft policing services.

Over the years, Microsoft has grown its business through the expansion of its cloud services, in which storage capacity, servers, and software running on servers are rented out on a metered basis. One of its offerings, Azure Government, provides dedicated data hosting in exclusively domestic cloud centers so that the data never physically leaves the host country. In the U.S., Microsoft has built several Azure Government cloud centers for use by local, state, and federal organizations.

Unbeknownst to most people, Microsoft has a “Public Safety and Justice” division with staff who formerly worked in law enforcement. This is the true heart of the company’s policing services, though it has operated for years away from public view.

Microsoft’s police surveillance services are often opaque because the company sells little in the way of its own policing products. It instead offers an array of “general purpose” Azure cloud services, such as machine learning and predictive analytics tools like Power BI (business intelligence) and Cognitive Services, which can be used by law enforcement agencies and surveillance vendors to build their own software or solutions.

Microsoft’s Surveillance-Based IoT Patrol Car

A rich array of Microsoft’s cloud-based offerings is on full display with a concept called “The Connected Officer.” Microsoft situates this concept as part of the Internet of Things, or IoT, in which gadgets are connected to online servers and thus made more useful. “The Connected Officer,” Microsoft has written, will “bring IoT to policing.”

With the Internet of Things, physical objects are assigned unique identifiers and transfer data over networks in an automated fashion. If a police officer draws a gun from its holster, for example, a notification can be sent over the network to alert other officers that there may be danger. Real Time Crime Centers could then locate the officer on a map and monitor the situation from a command and control center.
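The event flow described above can be sketched in a few lines. This is a hypothetical simplification, not Microsoft's actual implementation; the field names and routing logic are illustrative only.

```python
import json
import time

def holster_event(officer_id, lat, lon):
    """Build a hypothetical holster-draw event message.

    The schema here is invented for illustration; a real IoT
    deployment would follow the vendor's own message format.
    """
    return {
        "event_type": "weapon_drawn",
        "officer_id": officer_id,
        "location": {"lat": lat, "lon": lon},
        "timestamp": time.time(),
    }

def route_event(event, alert_queue):
    """Push high-priority events onto a command-center alert queue."""
    if event["event_type"] == "weapon_drawn":
        alert_queue.append(json.dumps(event))

# A sensor in the holster fires; the command center is notified.
alerts = []
route_event(holster_event("unit-42", 40.7128, -74.0060), alerts)
```

The point is simply that a physical action (drawing a weapon) becomes a network message that a command center can map and act on in real time.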


Microsoft’s Connected Officer simulation demo for IoT surveillance and data integration for real-time situational awareness and centralized police analytics. Photo taken from Microsoft presentation, “The Connected Officer: Bringing IoT to Policing,” by Jeff King and Brandon Rohrer.

Image: Microsoft presentation

According to this concept, a multitude of surveillance and IoT sensor data is sent onto a “hot path” for fast use in command centers and onto a “cold path” to be used later by intelligence analysts looking for patterns. The data is streamed through Microsoft’s Azure Stream Analytics product, stored on the Azure cloud, and enhanced by Microsoft analytics solutions like Power BI — providing a number of points at which Microsoft can make money.
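The hot-path/cold-path split works roughly like this sketch, where an in-memory queue stands in for the real-time command-center feed and a plain list stands in for cold storage. The event types and routing rule are hypothetical; in the architecture described above, Azure Stream Analytics and Azure storage would play these roles.

```python
from collections import deque

hot_path = deque(maxlen=1000)   # recent urgent events for the command center
cold_path = []                  # full archive for analysts to mine later

# Hypothetical set of event types deemed urgent enough for the hot path.
URGENT = {"gunshot_detected", "weapon_drawn", "plate_hit"}

def ingest(event):
    """Route one sensor event: archive everything, surface the urgent."""
    cold_path.append(event)       # every event is retained for analysis
    if event["type"] in URGENT:
        hot_path.append(event)    # only urgent events reach operators now

for e in [{"type": "gps_ping"}, {"type": "gunshot_detected"}]:
    ingest(e)
```

The asymmetry is the point: the cold path keeps everything for pattern-hunting later, while the hot path filters for what an operator should see immediately.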

While the “Connected Officer” was a conceptual exercise, the company’s real-world patrol solution is the Microsoft Advanced Patrol Platform, or MAPP. MAPP is an IoT platform for police patrol vehicles that integrates surveillance sensors and database records on the Azure cloud, including “dispatch information, driving directions, suspect history, a voice-activated license plate reader, a missing persons list, location-based crime bulletins, shift reports, and more.”


A demo of the Microsoft Advanced Patrol Platform, or MAPP, IoT surveillance vehicle for police. An Aeryon Labs SkyRanger is perched on top. Photo taken from Microsoft Azure blog, “Microsoft hosts Justice & Public Safety leaders at the 2nd annual CJIS Summit,” by Rochelle Eichner.

Photo: Microsoft Azure blog

The MAPP vehicle is outfitted with gear from third-party vendors that streams surveillance data into the Azure cloud for law enforcement agencies. Mounted to the roof, a 360-degree high-resolution camera streams live video to Azure and the laptop inside the vehicle, with access also available on a mobile phone or remote computer. The vehicle also sports an automatic license plate reader that can read 5,000 plates per minute — whether the car is stationary or on the move — and cross-check them against a database hosted in Azure and managed by Genetec’s license plate reader solution, AutoVu. A proximity camera on the vehicle is designed to alert the officers when their vehicle is being approached.
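At its core, the plate cross-check described above is a lookup against a hot list, as in this minimal sketch. In the real system the list lives in an Azure-hosted database managed by Genetec AutoVu; here a plain set and invented plate numbers stand in for it.

```python
# Hypothetical hot list of wanted plates (illustrative values only).
hot_list = {"ABC1234", "XYZ9876"}

def check_plates(scanned):
    """Return the subset of scanned plates that hit the hot list."""
    return [plate for plate in scanned if plate in hot_list]

# Plates read by the roof-mounted reader as the car drives past traffic.
hits = check_plates(["QWE5555", "ABC1234"])
```

At 5,000 plates per minute, even a simple lookup like this lets one patrol car passively run every vehicle it passes against police databases.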

Patrolling the skies is the SkyRanger, a drone from Microsoft partner Aeryon Labs that streams real-time video. (Aeryon Labs is now part of surveillance giant FLIR Systems.) According to Nathan Beckham of Microsoft Public Safety and Justice, the vehicle’s drones “follow it around and see a bigger view of it.” The drones, writes DroneLife, can “provide aerial views to the integrated data platform, allowing officers to assess ongoing situations in real time, or to gather forensic evidence from a crime scene.”

Police robots are also part of the MAPP platform. Products from ReconRobotics, for example, “integrat[ed] with Microsoft’s Patrol Car of the Future Program” in 2016. Microsoft says ReconRobotics provides its MAPP vehicle with a “small, lightweight but powerful robot” that “can be easily deployed and remotely controlled by patrol officers to provide real-time information to decision-makers.”

Another Microsoft partner, SuperDroid Robots, has also announced it will provide the Microsoft MAPP vehicle with two compact remote-controlled surveillance robots, the MLT “Jack Russell” and the LT2-F “Bloodhound,” the latter of which can climb stairs and obstacles.

Although it sports a Microsoft insignia on the hood and door, the physical vehicle the company uses to promote MAPP isn’t for sale by Microsoft, and you probably won’t see Microsoft-labeled cars driving around. Rather, Microsoft provides MAPP as a platform through which to transform existing cop cars into IoT surveillance vehicles: “It’s really about being able to take all this data and put it up in the cloud, being able to source that data with their data, and start making relevant information out of it,” said Beckham.

Indeed, Microsoft says “the car is becoming the nerve center for law enforcement.” According to Beckham, the information collected and stored in the Azure cloud will help officers “identify bad actors” and “let the officers be aware of the environment that is going on around them.” As an example, he said, “We’re hoping with machine learning and AI in the future, we can start pattern matching” with MAPP vehicles providing data to help find “bad actors.”


The MLT “Jack Russell” and the LT2-F “Bloodhound” on display at an event showcasing the Microsoft MAPP police vehicle solution during the FBI National Academy Associates’ 2015 Annual Training Conference in Seattle. Photo taken from SuperDroid Robots blog post, “SuperDroid Robots Partners with Microsoft.”

Photo: SuperDroid Robots

Last October, South African police announced that Microsoft had partnered with the city of Durban for “21st century” smart policing. Durban’s version of the MAPP solution includes a 360-degree automatic license plate reader to scan plates and a facial recognition camera from Chinese video surveillance firm Hikvision for use when the vehicle is stationary (e.g., parked at an event).

According to South African news outlet ITWeb, the metro police will use the MAPP solution “to deter criminal activities based on data analysis through predictive modeling and machine learning algorithms.” The vehicle has already been rolled out in Cape Town, where Microsoft recently opened a new Azure data center — an extension of the digital colonialism I wrote about in 2018.

Much like the U.S. (albeit with some different dynamics), South Africa faces the scourge of police brutality that disproportionately impacts people of color. The country had its own George Floyd moment during the recent Covid-19 lockdown when the military and police brutally beat 40-year-old Collins Khosa in the poor Alexandra township, leading to his death — over a cup of beer. (A military inquiry found that Khosa’s death was not linked to his injuries at the hands of authorities; Khosa’s family and many others in South Africa have rejected the review as a whitewash.)

The MAPP solution will be used for “zero tolerance” policing. For example, Durban Metro Police spokesperson Parboo Sewpersad said the rollout aims to punish “littering, drinking and driving, and drinking and walking” during summer festivities.

It is difficult to determine where else the MAPP vehicle may be deployed. The rollout in South Africa suggests Microsoft sees Africa as a place to experiment with its police surveillance technologies.

Microsoft: Powering CCTV and Police Intelligence in the City

Beyond wiring police vehicles, video surveillance provides another lucrative source of profits for Microsoft, as it is loaded with data packets to transmit, store, and process — earning fees each step of the way.

When building a CCTV network packed with cameras, cities and businesses typically use a video management system, or VMS, to do things like display multiple camera feeds on a video wall or offer the option to search through footage. A leading VMS provider, Genetec, supplies the core VMS integrated into Microsoft’s Domain Awareness System. Genetec has been a close Microsoft partner for over 20 years, and the two companies work together to integrate surveillance services on the Azure cloud.

Some of the most high-profile city police forces are using Genetec and Microsoft for video surveillance and analytics.

Through a public-private partnership called Operation Shield, Atlanta’s camera network has grown from 17 downtown cameras to a wide net of 10,600 cameras that officials hope will soon cover all city quadrants. Genetec and Microsoft Azure power the CCTV network.

On June 14, Atlanta’s Chief of Police, Erika Shields, resigned after APD cops shot and killed a 27-year-old Black man, Rayshard Brooks. Last month, six Atlanta police officers were charged for using excessive force against protesters of police violence.

In 2019, Atlanta Police Foundation COO Marshall Freeman told me the foundation had just completed a “department-wide rollout” for Microsoft Aware (Domain Awareness System). Freeman said the Atlanta Police Department uses Microsoft machine learning to correlate data, and plans to add Microsoft’s video analytics. “We can always continue to go back to Microsoft and have the builders expand on the technology and continue to build out the platform,” he added.

In Chicago, 35,000 cameras cover the city with a plug-in surveillance network. The back-end currently uses Genetec Stratocast and Genetec’s Federation service, which manages access to cameras across a federated network of CCTV cameras — a network of camera networks, so to speak.


Genetec Citigraf on Microsoft Azure. Data ingested for correlations, monitoring, and alerts includes Computer Aided Dispatch data, gunshot detection from ShotSpotter, automatic license plate reader cameras, American Community Survey (census) data, CCTVs on Genetec’s VMS, various communications and intrusion alerts, Geographic Information Systems data, and database components like incidents and arrests. Photo taken from webinar, “How Chicago Integrated Data and Reduced Crime by 24%,” presented by Otto Doll (Center for Digital Government), Jonathan Lewin (Chicago Police Department), and Bob Carter (Genetec).

Image: Government Technology/Center for Digital Government webinar

In 2017, Genetec custom-built their Citigraf platform for the Chicago Police Department — the second-largest police force in the country — as a way to make sense of the department’s vast array of data. Powered by Microsoft Azure, Citigraf ingests information from surveillance sensors and database records. Using real-time and historical data, it performs calculations and generates visualizations, alerts, and other outputs to create “deep situational awareness” for the CPD. Microsoft is partnering with Genetec to build a “correlation engine” to make sense of the surveillance data.

Chicago’s police force has a brutal history of racism, corruption, and even two decades’ worth of torturing suspects. During police violence protests following Floyd’s murder, the CPD attacked and beat protesters, including five Black protesters to the point of hospitalization.

The city of Detroit uses Genetec Stratocast and Microsoft Azure to power their controversial Project Green Light. Launched in 2016 in tandem with a new Real Time Crime Center, the project allows local businesses — or other participating entities, such as churches and public housing — to install video cameras on their premises and stream surveillance feeds to the Detroit Police Department. Participants can place a “green light” next to the cameras to warn the public — which is 80 percent Black — that “you are being watched by the police.”

In 2015, the DPD stated, “the day is coming where police will have access to cameras everywhere allowing the police to virtually patrol nearly any area of the city without ever stepping foot.”

DPD Assistant Chief David LeValley explained to me that prior to creating the new command center, the department sent a team of people to several other U.S. cities, including New York, Chicago, Atlanta, Boston, and the Drug Enforcement Administration center in El Paso, Texas, to scope out their intel centers. “Our Real Time Crime Center is an all-encompassing intelligence center, it’s not just Project Green Light,” he explained.

The expansion of police surveillance in Detroit has been swift. Today, Project Green Light has around 2,800 cameras installed across over 700 locations, and two smaller Real Time Crime Centers are being added, a trend also seen in cities like Chicago. LeValley told me those RTCCs will do things like “pattern recognition” and “reports for critical incidents.”

In the wake of George Floyd’s murder, activists in Detroit have recharged their efforts to abolish Project Green Light in the fight against police surveillance, which local community advocates like Tawana Petty and Eric Williams deem racist. This year, two Black men, Robert Julian-Borchak Williams and Michael Oliver, were wrongfully arrested after being misidentified by the DPD’s facial recognition technology.

Nakia Wallace, a co-organizer of Detroit Will Breathe, told me Project Green Light “pre-criminalizes” people and “gives the police the right to keep tabs on you if they think you are guilty” and “harass Black and brown communities.” “Linking together cameras” across wide areas is “hyper-surveillance” and “has to be stopped,” she added.

The “function that the [DPD] serve,” Wallace said, is “the protection of property and white supremacy.” “They’re hyper-militarized, and even in the wake of that, people are still dying in the city” because “they have no interest in the livelihood of Detroit citizens.” Instead of militarizing, we need to “stop pretending like poor Black people are inherently criminals, and start looking at social services and things that prevent people from going into a life of crime.”

In a 2017 blog post, Microsoft boasted about the partnership with Genetec for the DPD, stating that Project Green Light is “a great example of how cities can improve public safety, citizens’ quality of life, and economic growth with today’s technologies.”

Microsoft Actually Does Supply Facial Recognition Technology

While Microsoft has been powering intelligence centers and CCTV networks in the shadows, the company has publicly focused on facial recognition regulations. On June 11, Microsoft joined Amazon and IBM in saying it will not sell its facial recognition technology to police until there are regulations in place.

This is a PR stunt that obscures how Microsoft’s relationship to policing works, both technically and ethically, in a number of ways.

First, while the press occasionally criticizes Microsoft’s Domain Awareness System, most attention to Microsoft policing focuses on facial recognition. This is mistaken: Microsoft is providing software to power a variety of policing technologies that undermine civil rights and liberties — even without facial recognition.

Second, facial recognition is a notable feature of many video surveillance systems and Real Time Crime Centers that Microsoft powers. The cities of New York, Atlanta, Chicago, and Detroit are among those utilizing Microsoft services to collect, store, and process the visual surveillance data used for facial recognition. Microsoft services are part and parcel of many police facial recognition surveillance systems.

Third, at least one facial recognition company, Veritone, has been left out of the conversation. A Microsoft partner, the Southern California artificial intelligence outfit offers cloud-based software called IDentify, which runs on Microsoft’s cloud and helps law enforcement agencies flag the faces of potential suspects.


Veritone’s aiWARE solution on the Microsoft Azure cloud; Veritone is also on Amazon’s AWS GovCloud. Photo taken from Veritone webinar, “Artificial Intelligence: Machine Learning and Mission,” presented by Chad Steelberg (Veritone), Richard Zak (Microsoft), Ryan Jannise (Oracle), and Patrick McCollah (Deloitte).

Image: Veritone webinar

In a 2020 keynote at the Consumer Electronics Show, speaking alongside executives from Microsoft, Deloitte, and Oracle, Veritone CEO Chad Steelberg claimed that thanks to Veritone’s IDentify software on Azure, cops have helped catch “hundreds and hundreds of suspects and violent offenders.” Veritone’s Redact product expedites prosecutions, and Illuminate allows investigators to “cull down evidence” and obtain anomaly “detection insights.”

In a recent webinar, Veritone explained how IDentify leverages data police already have, such as arrest records. If a person is detected and has no known match, the IDentify software can profile suspects by creating a “person of interest database” that “will allow you to simply save unknown faces to this database and continuously monitor for those faces over time.”
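The workflow Veritone describes — saving unknown faces and continuously monitoring for them — can be sketched as a nearest-match lookup over stored embeddings. Everything below is hypothetical: IDentify’s actual models, representations, and thresholds are not public, and the toy three-number vectors here merely stand in for real face embeddings.

```python
import math

poi_db = {}  # "person of interest database": label -> face embedding

def observe(face_id, embedding, threshold=0.5):
    """Match a detection against saved faces, or save it as a new unknown.

    Returns the label of a prior sighting if one is close enough,
    otherwise stores the embedding for continuous monitoring.
    """
    for label, saved in poi_db.items():
        if math.dist(embedding, saved) < threshold:
            return label               # re-hit on a previously saved face
    poi_db[face_id] = embedding        # unknown face: save and keep watching
    return None

observe("unknown-1", (0.1, 0.2, 0.3))           # first sighting, saved
match = observe("unknown-2", (0.1, 0.2, 0.31))  # later sighting, re-identified
```

The civil liberties concern follows directly from the structure: once an unmatched face enters the database, every subsequent camera detection becomes a chance to re-identify and track that person, with no arrest record required.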

Veritone claims to deploy services in “about 150 locations,” but does not name which ones use IDentify. It launched a pilot test with the Anaheim Police Department in 2019.

Microsoft lists Veritone IDentify as a facial recognition law enforcement product offering in its app repository online. The promotional video on the Microsoft website advertises IDentify’s ability to:

… compare your known offender and person of interest databases with video evidence to quickly and automatically identify suspects for investigation. Simply upload evidence from surveillance systems, body cameras, and more. … But best of all, you’re not chained to your desk! Snap a picture and identify suspects while out on patrol, to verify statements, and preserve ongoing investigations.

Veritone has been a staunch defender of its facial recognition technology, including in a May 2019 tweet promoting it.

In a promotional video featuring Microsoft, Veritone’s Jon Gacek said, “You can see why at Veritone we’re excited to be tightly partnered with Microsoft Azure team. Their vision and our vision is very common.”

Smoke, Mirrors, and Misdirection

Despite claims to the contrary, Microsoft is providing facial recognition services to law enforcement through partnerships and services to companies like Veritone and Genetec, and through its Domain Awareness System.

Microsoft’s public relations strategy is designed to mislead the public by steering attention away from its wide-ranging services to police. Instead, Microsoft president and chief legal officer Brad Smith urges the public to focus on facial recognition regulation and the issue of Microsoft’s own facial recognition software, as if their other software and service offerings, partnerships, concepts, and marketing are not integral to a whole ecosystem of facial recognition and mass surveillance systems offered by smaller companies.

Esteemed Microsoft scholars, such as Kate Crawford, co-founder of the Microsoft-funded think tank, AI Now Institute, have followed this playbook. Crawford recently praised Microsoft’s facial recognition PR and criticized companies like Clearview AI and Palantir, while ignoring the Microsoft Domain Awareness System, Microsoft’s surveillance partnerships, and Microsoft’s role as a cloud provider for facial recognition services.

Crawford and AI Now co-founder Meredith Whittaker have condemned predictive policing but haven’t addressed the fact that Microsoft plays a central role in providing predictive policing services to police. Crawford did not respond to a request for comment.


Microsoft and its advocates may claim that it is a “neutral” cloud provider and that it’s up to other companies and police departments to decide how they use Microsoft software. Yet these companies are partnering with Microsoft, and Microsoft is getting paid to run their mass surveillance and facial recognition services on the Azure cloud — services that disproportionately affect people of color.

If these Microsoft clients were offering sex trafficking services on the Azure cloud, Microsoft would surely close their accounts. And because law enforcement agencies purchase surveillance technologies using taxpayer dollars, the public is actually paying Microsoft for its own police surveillance.

If activists force corporations like Microsoft, Amazon, Google, IBM, and Oracle to terminate partnerships and infrastructure services for third parties conducting police surveillance, then cloud providers would have to acknowledge they are accountable for what is done on their clouds. Moving forward, activists could press to replace corporate ownership of digital infrastructure and data with community ownership at the local level.

There is a lot at stake in this moment.

The post The Microsoft Police State: Mass Surveillance, Facial Recognition, and the Azure Cloud appeared first on The Intercept.