Privacy


EFA Complains to OAIC About Retailers Use of Facial Surveillance

Published by Anonymous (not verified) on Wed, 22/06/2022 - 8:17pm

Electronic Frontiers Australia has joined with Digital Rights Watch and the Australian Privacy Foundation to file an official complaint with the Office of the Australian Information Commissioner about the conduct of Bunnings, Kmart, and The Good Guys as originally reported by CHOICE.

Our full complaint is available below.


Complaint – Australian Retailers Using Facial Recognition

Reporting by Choice1 has revealed that major Australian retailers including Bunnings, Kmart and The Good Guys (the Retailers) are using facial recognition technology in some of their retail outlets.

Bunnings has advised Electronic Frontiers Australia (EFA) that the alleged purpose of the use of this technology is, in combination with CCTV surveillance, to “support the safety of our team and customers against repeat violent or threatening behaviour, and to prevent unlawful behaviour in stores”.2

Representations from Bunnings to the EFA3 suggest that Bunnings is scanning all customers in stores using this technology, and using facial recognition to compare customer ‘faceprints’ to a list of faceprints of banned customers to enable ‘action’ to be taken. It is unclear what actions may be taken.

We believe that the current use of facial recognition systems by the Retailers constitutes a breach of the Australian Privacy Principles. Our reasoning is set out below.

Collection of ‘sensitive information’

According to Choice’s report, the Retailers are using facial recognition technology to create ‘faceprints’ from CCTV footage.

We note that the following are included in the definition of ‘sensitive information’ in s6 of the Privacy Act 1988:

(d) biometric information that is to be used for the purpose of automated biometric verification or biometric identification; or

(e) biometric templates.

We note the Information Commissioner’s finding in Commissioner initiated investigation into 7-Eleven Stores Pty Ltd [2021] AICmr 50 (the 7-Eleven matter) that faceprints are ‘personal information’—and that facial images and faceprints are ‘sensitive information’—within the meaning of s 6(1) of the Privacy Act.4

Lack of valid consent

APP 3.3 provides that APP entities must not collect sensitive information unless:

  • they have obtained consent to the collection and the collection is ‘reasonably necessary for their functions or activities’, or
  • an exception listed in APP 3.4 applies.

The APP Guidelines provide that, for consent to be valid, it may be express or implied, but:

  • the individual must be adequately informed before giving consent,
  • the individual must give consent voluntarily,
  • the consent must be current and specific, and
  • the individual must have the capacity to understand and communicate their consent.5

We note that Bunnings stated to Choice:

“We let customers know about our use of CCTV and facial recognition technology through signage at our store entrances and also in our privacy policy, which is available on our website”.

Providing notice of collection is not the same as obtaining consent. In Commissioner initiated investigation into Clearview AI, Inc (Privacy) [2021] AICmr 54 (the Clearview matter), Commissioner Falk stated:

A privacy policy is a transparency mechanism that, in accordance with APP 1.4, must include information about an entity’s personal information handling practices including how an individual may complain and how any complaints will be dealt with. It is not generally a way of providing notice and obtaining consent. Any such consent would not be current and specific to the context in which that information is being collected, and bundles together different uses and disclosures of personal information.6

We consider that the Retailers have failed to obtain valid consent. We contend that the above approach does not meet the requirements that consent be adequately informed, that it be current and specific, or that the affected individuals have the capacity to understand and communicate their consent.

In addition, the fact that the technology operates on every person entering a store means that minors, who cannot legally consent, are also having their biometric information collected and used by the Retailers.

We further contend that the collection of biometric information is covert collection.

Notices insufficient

Firstly, we contend that it is highly unlikely that customers will notice or read signage at store entrances. Such signs are often not prominent or noticeable, and drafted in vague terms.

Choice’s reporting included the following example:

[Image: Signage at the Kmart store in Marrickville, New South Wales. Source: CHOICE]

This signage does not state the purpose of collection, or how the faceprints will be handled. We recognise that different signage may be used by Bunnings or the Good Guys. We note that in the 7-Eleven matter, similar signage was not considered sufficient to enable informed consent, as:

  • customers were not adequately informed about what they were being asked to consent to,
  • the signage did not clearly state what information was being collected and how it would be handled by 7-Eleven, and
  • without being given this information, customers were not in a position to understand the implications of providing or withholding consent [at 94].

We note that such signage is unlikely to satisfy the notification requirements for APP 5 outlined in the APP Guidelines,7 which include:

  • the APP entity’s identity and contact details
  • the fact and circumstances of collection
  • whether the collection is required or authorised by law
  • the purposes of collection
  • the consequences if personal information is not collected
  • the entity’s usual disclosures of personal information of the kind collected by the entity
  • information about the entity’s APP Privacy Policy
  • whether the entity is likely to disclose personal information to overseas recipients, and if practicable, the countries where they are located

As such, even if customers do notice and read the signage, we contend that signage similar to the Choice example would not be sufficient to support ‘adequately informed’ consent.

Lack of informed consent

We observe that the Bunnings and Kmart Privacy Policies do mention that images may be collected from CCTV and facial recognition software, for the purposes of “loss prevention or store safety purposes”. The Good Guys policy includes similar language, stating that they use “facial and feature recognition technology to capture an image of an individual’s face, features and clothing and to track an individual through the store… strictly for the purposes of security and theft prevention and managing/improving customer experience”.

However, in our opinion, it is not reasonable to expect customers to read a privacy policy published on a website before attending a brick and mortar store. In that respect, we note the Information Commissioner’s findings in the 7-Eleven matter:

…an APP entity cannot infer consent simply because it has published a policy about its personal information handling practices. A privacy policy is a transparency mechanism that, in accordance with APP 1.4, must include information about an entity’s personal information handling practices, including how an individual may complain and how any complaints will be dealt with. It is not generally a way of providing notice and obtaining consent. Any consent inferred from the existence of a privacy policy would not be current and specific to the circumstances in which the information is being collected.8

As a side note, we observe that the Bunnings and Kmart Privacy Policies (which are substantially similar) are over 3,600 words long. The Good Guys policy is over 5,800 words long. A readability analysis suggests that a reader would need tertiary education to fully understand the policies (they all score roughly 34 on the Flesch Reading Ease test9). In practice, these policies are therefore not accessible to the two-thirds of Australians10 who do not have a university degree.
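
For readers who wish to reproduce a readability figure of this kind, the Flesch Reading Ease score can be approximated with a short script. The sketch below uses a naive vowel-group syllable count, so its output will differ slightly from dedicated readability tools; the file name in the usage comment is hypothetical.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Scores around 30-50 are generally considered college-level reading."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0

    def syllables(word: str) -> int:
        # Naive estimate: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (total_syllables / len(words)))

# Hypothetical usage: score the text of a saved privacy policy.
# with open("privacy_policy.txt") as f:
#     print(round(flesch_reading_ease(f.read()), 1))
```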

Collection is covert

Given the inadequate notification discussed above, we consider that the collection of biometric information is covert.

In the Clearview matter, Commissioner Falk found [at 172-173] that, in the circumstances and “in the absence of specific and timely information about the respondent’s collection practices”, Clearview AI, Inc had engaged in covert collection. We consider that the activities of the Retailers, as far as we are able to determine given the limited information available about their activities, are very similar to the activities of Clearview AI. Specifically:

  • The Retailers do not adequately notify individuals that their image is captured and used to create a faceprint.
  • The Retailers’ publicly available notices and privacy policies provide limited information about their information handling practices. For example, they do not explain:
    • that the Retailers generate biometric templates for matching purposes
    • how the Retailers’ algorithms analyse captured images to generate faceprints (biometric vectors)
    • how faceprints derived from captured images are used to identify sufficiently similar faceprints
    • which third parties may be shown Matched Images, and the countries those third parties are located in.

We consider the behaviour of the Retailers is sufficiently similar to that described by the Commissioner in the Clearview matter that the Retailers are likely performing covert collection of biometric information.

The Commissioner noted that there are significant risks of harm to individuals from such collection:

The covert collection of biometric information in these circumstances carries significant risk of harm to individuals. This includes harms arising from misidentification of a person of interest by law enforcement (such as loss of rights and freedoms and reputational damage), as well as the risk of identity fraud that may flow from a data breach involving immutable biometric information.11

The Commissioner also noted that privacy harms from indiscriminate surveillance are not merely individual, but collective, affecting everyone in society:

More broadly, the indiscriminate scraping of facial images may adversely impact all Australians who perceive themselves to be under the respondent’s surveillance, by impacting their personal freedoms.12

Reasonableness and Proportionality

We recognise that the Retailers have a legitimate need to manage customer and staff safety and mitigate theft. However, it is not self-evident that a system that conducts facial recognition on all customers is a reasonable, necessary, or proportionate response to those risks, or that the resulting impact on customer and community privacy is justified.

Other implementations of facial recognition have been observed to produce false positives; organisations that rely too heavily on these technologies for enforcement decisions have made incorrect decisions that unnecessarily and negatively impacted individuals. For example, Robert Williams was arrested in Detroit, Michigan, for an alleged 2018 theft on the basis of a false positive facial recognition identification13.

As such, it is critical that the risks of this technology be well understood and mitigated. In this instance, there is no indication that the Retailers have done so; indeed, Bunnings’ representations to EFA14 suggest they do not fully understand that the technology that they have implemented requires surveillance of all customers.

Conclusion

Facial recognition is not a risk-free technology, nor is it magic. While it is becoming more accessible to organisations, the increasing ubiquity of this technology does not obviate the requirement to comply with the Privacy Act and the APPs.

There are numerous concerns over this use of the technology that have not been addressed by the Retailers in their public communications.

  • It is unclear whether the Retailers undertook any proactive risk assessment process, such as the preparation of a Privacy Impact Assessment, or otherwise made any attempt to limit the impacts on individual customers. Arguably, the failure to do so could constitute a failure to take reasonable steps to keep personal information secure, per APP 11.1.
  • It is unclear whether the Retailers have assessed their respective facial recognition systems for possible bias, or how Retailers will handle false positives.
  • It is unclear how long the Retailers will retain the faceprints they create, whether they are stored locally or shared between stores, or whether they are part of a broader database system (for instance, with Wesfarmers, the parent company of Bunnings and Kmart).
  • It is unclear whether the faceprints and the facial recognition technology will be used for any other purpose, such as tracking or targeted marketing, or whether biometric information will be on-sold. While some Retailers have claimed that they will not use the collected biometric information for these purposes, their demonstrated ignorance of how the technology actually functions means we must give little weight to such claims.

In the light of the ongoing review of the Privacy Act15, this matter underlines the need for additional and specific regulation to govern the use of facial recognition technology, in particular to prevent disproportionately intrusive levels of biometric surveillance. The Retailers have claimed that their use of this technology complies with the Privacy Act, while we believe (as do many privacy professionals with whom we have discussed the matter) that it manifestly does not. Any lingering ambiguity risks permitting others to commit future privacy harms while claiming that their activities are legitimate under cover of that ambiguity.

We note that the recalcitrance of the Retailers in response to widespread public outcry on this matter indicates that individual rights of action are also likely to be needed so that the outsized power of major retailers can be effectively countered with a similar level of individual and collective power. We note that while the Commissioner considers this matter, the facial surveillance systems remain in place and continue to harm Australians’ individual and collective privacy. Justice delayed is justice denied.

1Ibid.

2Electronic Frontiers Australia, Inc, ‘Australian Retailers Using Face Surveillance’ (16 June 2022) <https://www.efa.org.au/2022/06/16/australian-retailers-using-face-survei....

3Ibid.

4Commissioner initiated investigation into 7-Eleven Stores Pty Ltd [2021] AICmr 50, 80.

5Office of the Australian Information Commissioner, ‘Australian Privacy Principles Guidelines’ B.35 <https://www.oaic.gov.au/privacy/australian-privacy-principles-guidelines>.

6Commissioner initiated investigation into Clearview AI, Inc (Privacy) [2021] AICmr 54, 154.

7Office of the Australian Information Commissioner (n 6) 5.

8Commissioner initiated investigation into 7-Eleven Stores Pty Ltd (n 5) 95.

9Wikipedia (online at 18 June 2022) ‘Flesch–Kincaid readability tests’

10‘Education and Work, Australia, May 2021 | Australian Bureau of Statistics’ (9 November 2021) <https://www.abs.gov.au/statistics/people/education/education-and-work-au....

11Commissioner initiated investigation into Clearview AI, Inc. (Privacy) (n 7) 174.

12Ibid 176.

13Adi Robertson, ‘Detroit Man Sues Police for Wrongfully Arresting Him Based on Facial Recognition’ The Verge (13 April 2021) <https://www.theverge.com/2021/4/13/22382398/robert-williams-detroit-poli....

14Electronic Frontiers Australia, Inc (n 3).

15‘Review of the Privacy Act 1988’ Attorney-General’s Department (5 November 2020) <https://www.ag.gov.au/integrity/consultations/review-privacy-act-1988>.


Poor ABC coverage of facial surveillance

Published by Anonymous (not verified) on Mon, 20/06/2022 - 11:01am

On Sunday, the ABC posted an article about facial surveillance that, we feel, was poorly written, edited, and generally did a poor job of informing readers about the issue. EFA and Digital Rights Watch submitted a joint complaint to the ABC about the article, and we are sharing our correspondence publicly so that others can see the details of our concerns.

If you would like to complain to the ABC as well, their complaint form is here: https://www.abc.net.au/contact/complain.htm

ABC Online Facial Surveillance Article Complaint

Dear ABC Editorial Team,

We are writing to submit a complaint regarding the analysis piece titled “That selfie you posted can be used to train machines — which might not be a bad thing”, published on the morning of Sunday 19 June 2022 at:  https://www.abc.net.au/news/2022-06-19/why-many-people-arent-comfortable-with-facial-recognition/101157518

While we appreciate the ABC’s commitment to providing a variety of viewpoints, we are concerned that this piece is actively harmful, misleading, and overlooks essential legal and ethical points. Given that this is an analysis piece, rather than an op-ed, we would expect to see a more balanced approach to weighing up the arguments, and a greater use of readily available facts.

Existing practices are unlawful

The piece starts by referring to “tech companies who used [photographs on social media] to train their artificial intelligence systems”. This practice, as used by Clearview AI, Inc, was found to be illegal in Australia under the Privacy Act, with the Office of the Australian Information Commissioner (OAIC) ordering the company to stop collecting facial biometrics and to destroy all existing images and templates that it holds.

“Although Meta, the global behemoth that owns Facebook and Instagram, stopped using the tech last year, that doesn’t mean your pictures aren’t still harvested by companies who build searchable databases of faces.”

Holding up the practices of companies such as Clearview AI or similar products that have been found to be in breach of the Privacy Act as a reason people shouldn’t care about the covert use of facial recognition technology in major retailers is misleading.

It is also disingenuous to suggest that the use of facial recognition technology to surveil shoppers is in any way analogous to using social media. 

“Sharing selfies on social media platforms, using a streaming service or loyalty card all divulge more personal information than the facial recognition technology CHOICE was probing.”

Surveillance capitalism is certainly a serious digital rights issue, but an individual does not give up their right to privacy in a physical store because their privacy is already compromised online.

Existing bad practices are not a justification for increased surveillance

The piece repeatedly uses the argument that mass data collection is already happening as a justification for increased surveillance.

“Sharing selfies on social media platforms, using a streaming service or loyalty card all divulge more personal information than the facial recognition technology CHOICE was probing.”

“But the reality is, this kind of data is already available through our online activities, which have been harvested and sold for years.”

This is a lazy, status-quo argument that relies on and encourages disempowerment and apathy in ABC readers. It suggests to readers that “better things aren’t possible, so give up now”. It also overlooks the immense harm that is caused by the current trajectory of ubiquitous use of surveillance-based technologies.

Current harmful—and frequently illegal—practices are not a justification for continued or increased harmful and illegal practices.

Harms are clear

As Commissioner Falk found in their Clearview AI decision, at [176]:

[T]he indiscriminate scraping of facial images may adversely impact all Australians who perceive themselves to be under the respondent’s surveillance, by impacting their personal freedoms.

Further, the activities referred to—harvesting data online, the data broker industry, invasive practices of loyalty cards—are all practices that continue to be subject to both legal and ethical debate. For example, the OAIC and the ACCC have both recently investigated privacy-invasive practices of loyalty card programs. They should not be held up as justification as to why the use of facial recognition in major Australian retailers is less concerning, and encouraging readers to dismiss their valid concerns regarding any of these harmful applications of surveillance-based technologies is dangerous.

Motivated sources

The use of a former FBI Agent [update: and also former “Senior Intelligence Officer with the Defense[sic] Intelligence Agency”] to advocate for giving up privacy and accepting increased surveillance is deeply troubling. They were presented as an independent expert without disclosing their law-enforcement affiliation up-front. The potential for motivated reasoning is clear, but this affiliation was only disclosed to readers in the picture caption, not in the text of the article itself.

Their claims were presented as fact, without any supporting evidence. Specifically:

Retailers rely on it to reduce shoplifting. They can be notified if someone who has stolen from the store before enters it again.

Law enforcement agencies across Australia use it to disrupt serious and violent crime, as well as identity theft.

These statements were not presented as quotations, so it is difficult to determine if these are the views of a clearly pro-surveillance source, or of the author themselves. No evidence is provided to support the claims made.

“Dr Desmond said many people don’t understand what they can obtain in exchange for giving up a certain level of privacy.”

It was disappointing to see a largely pro-surveillance article that gave little space to the abundance of evidence of known harms. It was particularly disturbing to see no discussion of the harms caused by surveillance technology in law enforcement contexts when the primary source for the article was clearly linked to law enforcement.

For example:

It is unacceptable to promote the use of facial recognition technology for the purpose of law enforcement without acknowledging the known harms or risks.

Misleading readers 

Overall, we were dismayed to read such an article published by the ABC. Most of these concerns should have been detected and corrected as part of the editing process.

It was disappointing to read an article so dismissive of people’s valid concerns regarding their right to privacy and agency over their own biometric information. It is exceptionally disappointing to see the ABC publish arguments in favour of collection and use of Australians’ personal and sensitive information through such controversial technology as facial recognition, based on the existence of other harmful, unethical, and in some cases specifically illegal practices. 

We understand that the ABC Editorial Guidelines regarding impartiality do not require that every perspective receives equal time, nor that every facet of every argument is presented; however, we are concerned that this piece is likely to mislead readers. Facial recognition surveillance poses a fundamental threat to Australians’ human rights; however, the article inaccurately presents it as legal, benign, and widely accepted. Providing “both sides” on an issue with well-documented human rights violations goes against the ethos of the ABC.

To counter the article, we propose to provide the ABC with a balancing piece that lays out the evidence for the known facial surveillance risks and harms, and better explains the legal context for the technology in Australia.

We look forward to hearing from you,

Justin Warren, Chair, Electronic Frontiers Australia 

Samantha Floreani, Program Lead, Digital Rights Watch

Related Items:

Australian Retailers Using Face Surveillance

Published by Anonymous (not verified) on Thu, 16/06/2022 - 12:58pm


Choice revealed on Tuesday that several major Australian retailers are using face surveillance technology in their stores, to the great surprise of the vast majority of customers.

Face surveillance is an inherently dangerous technology and it has no place in retail stores.

EFA does not believe the use of this technology is necessary or proportionate to address the issues the retailers have used to justify its use. EFA also does not believe the retailers have a lawful basis to be using this technology due to a fundamental misunderstanding of both privacy law and the way facial surveillance technology works.

Until organisations suffer consequences for these kinds of privacy violations, EFA believes the situation will not improve. We endorse the recommendation made by the Australian Law Reform Commission in 2014 that there should be a private right of action (a tort) for serious breach of privacy. We should not have to wait for an underfunded and overwhelmed regulator to one day perhaps consider taking action. We should also be able to take steps individually, and collectively, to address the abuses of power by organisations.

More robust privacy protections are long overdue and it is well past time for the Australian government to act.

Contacting Bunnings

EFA Chair Justin Warren contacted Bunnings to learn more about what they were doing and why.

He initially called their contact centre, but they were unable to provide more information than prepared talking points. He learned that Bunnings had been receiving many calls from concerned Australians after the media coverage of the issue. He then sent an email to the Bunnings privacy team:

Hello!

It seems we’re not alone in being surprised by the news today that Bunnings is using facial surveillance on its customers.

I have some questions, on behalf of our members:

– Is Bunnings aware of the 7-Eleven case in which the OAIC found 7-Eleven stores’ use of facial surveillance was unlawful? (https://www8.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/AICmr/2021/50.html)

– On what basis does Bunnings believe the use of facial surveillance in its stores is lawful?

– Did Bunnings seek legal advice regarding the legality of using facial surveillance before implementing the system? If so, would Bunnings be able to publicly share that advice?

– On what basis does Bunnings believe members of the public have provided informed consent for the use of facial surveillance?

I would also like to request access to my personal information held by Bunnings, as required by Australian Privacy Principle 12. I wish to receive access via electronic means, by email.

Bunnings Reply

Bunnings replied a day later and failed to answer any of our questions.

Hi Justin,

Thanks for your feedback. We have also seen the recent media reporting regarding the use of facial recognition in our stores and are separately raising our concerns regarding the accuracy of this reporting directly with the outlets involved. In particular, this reporting suggests that facial recognition is applied to all customers. This is simply not the case, as detailed below.

The safety of our team and customers is at the core of what we do and we have several measures in place in our stores to help keep our team and customers safe.

At selected stores, our CCTV systems utilise facial recognition technology, which is used to support the safety of our team and customers against repeat violent or threatening behaviour, and to prevent unlawful behaviour in our stores. Images are only uploaded to this system following a particular individual being formally banned from one of our stores, or after them being suspected of engaging in unlawful or threatening conduct in our stores. The facial recognition technology checks for matches against these uploaded images, and where there isn’t a match then no action occurs. No data relating to anyone other than these uploaded images are stored in the system.

In recent years, we’ve seen an increase in the number of challenging interactions our team have had to handle in our stores and this technology is an important tool in helping us to prevent repeat abuse of team and customers.

We let customers know if the technology is in use through signage at our store entrances and also in our privacy policy, which is available on our website. Our use is solely for the purpose of preventing threatening situations and theft, which is consistent with the Privacy Act. This technology is not used for any other purpose, including marketing or behavioural insights.

It’s really important to us that we do everything we can to ensure a safe and supportive environment for our team and customers in our stores, and we believe this technology is an important measure that helps us achieve this outcome.

Kind Regards
Bunnings Privacy Team

A Fundamental Misunderstanding

The Bunnings reply highlights a fundamental misunderstanding about an important aspect of how facial surveillance technology works.

If Bunnings wishes to check whether a customer is, or is not, on its list of people who have been “formally banned from one of our stores” or are “suspected of engaging in unlawful or threatening conduct in our stores”, it must:

  1. Take a picture of each customer.
  2. Compute a ‘faceprint’ from the image.
  3. Compare the faceprint with the faceprints recorded in the system.

It is not possible to not surveil people with one of these systems; they are designed specifically to perform mass surveillance.
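
To illustrate the point, below is a minimal sketch of how this class of system typically works; the function names, similarity metric, and threshold are illustrative assumptions rather than details of Bunnings’ actual system. Even when a shopper is not on the banned list, their face must still be detected and converted into a faceprint before the system can conclude that there is no match.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # illustrative value; real systems tune this

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_frame(frame, banned_faceprints, detect_faces, compute_faceprint):
    """Return faceprints in `frame` that match a banned faceprint.

    `detect_faces` and `compute_faceprint` stand in for whatever face-detection
    and embedding model a vendor supplies; they are assumptions for this sketch.
    """
    matches = []
    for face in detect_faces(frame):          # every visible customer is detected...
        faceprint = compute_faceprint(face)   # ...and a biometric faceprint is computed
        for banned in banned_faceprints:      # ...before any comparison can happen.
            if cosine_similarity(faceprint, banned) >= SIMILARITY_THRESHOLD:
                matches.append(faceprint)
                break
    return matches
```

The loop over detected faces is the surveillance: every face in view is biometrically processed, regardless of the outcome of the comparison.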

It appears that Bunnings has fundamentally misunderstood both privacy law and the technology it is using. These mistakes place all of us at risk. Privacy harms from mass surveillance are not merely individual; they are also communal.

We know this kind of technology is embedded with serious algorithmic bias, especially when identifying or misidentifying non-white faces. So people with darker skin tones would be more likely to be misidentified – this is racial bias. In an environment where Aboriginal and Torres Strait Islander peoples are over-policed and over-represented in prison systems, we can only expect that racial bias to be amplified by the use of facial recognition technology.

Further Back and Forth

Justin replied to Bunnings, pointing out that they had not actually answered any of his questions.

Firstly, you have not answered my questions.

Secondly, you appear to have fundamentally misunderstood the technology you are using.

In order to check if my face matches, or does not match, the faceprint of a person you have recorded in your system as someone “formally banned from one of our stores, or after them being suspected of engaging in unlawful or threatening conduct” you must a) capture an image of my face, b) generate a faceprint from the image, c) check to see if it matches any of the faceprints in your system.

It is possible that this is a mistake based on ignorance, but it might also be a deliberate falsehood, and based on the reaction to the revelation you’re doing this, I am inclined towards the latter.

Finally, I shall restate my questions:

  1. Is Bunnings aware of the 7-Eleven case in which the OAIC found 7-Eleven stores’ use of facial surveillance was unlawful? (https://www8.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/AICmr/2021/50.html)
  2. On what basis does Bunnings believe the use of facial surveillance in its stores is lawful?
  3. Did Bunnings seek legal advice regarding the legality of using facial surveillance before implementing the system? If so, would Bunnings be able to publicly share that advice?
  4. On what basis does Bunnings believe members of the public have provided informed consent for the use of facial surveillance?

Bunnings responded:

Hi Justin

We’re sorry we weren’t able to resolve this for you in the first instance.

We can clarify that faceprint matches only occur for uploaded images, as you have queried. No faceprints for anyone other than these uploaded images are retained.

As to the balance of your questions, Bunnings has carefully considered this particular application of facial recognition technology and is comfortable that this use is undertaken in accordance with the requirements of the Privacy Act.  This consideration has extended to reviewing the OAIC’s previous determinations regarding other businesses’ use of facial recognition technology for other purposes.

Kind Regards

Bunnings Privacy Team

Missing the Point

Bunnings has, once again, missed the point here.

The issue is not so much that faceprints are not retained, but that they are made at all.

What Bunnings is doing is akin to fingerprinting every customer, then taking the piece of paper the fingerprints are on and checking it against a list of fingerprints held in a file. The problem is not that they then throw away that piece of paper; it is that they took the customer’s fingerprints in the first place.

In fact, what Bunnings is doing is worse because the faceprints are created surreptitiously and without explicit customer consent.

Bunnings does not appear to understand the way this technology works. Or they do and are being deliberately misleading about it in order to attempt to justify this outrageous behaviour. EFA also believes Bunnings has fundamentally misunderstood privacy law and the lessons to be learned from the OAIC decision against 7-Eleven.

It appears that it will take an external intervention to resolve this matter, so EFA looks forward to seeing the results of an OAIC investigation.

That’s the best we can hope for until Australia passes more rigorous privacy laws that would allow us to individually take action against organisations that do this.

EFA will continue to follow up this issue with the retailers and with the privacy regulator.


What to keep in mind this election

Published by Anonymous (not verified) on Tue, 17/05/2022 - 5:00pm

EFA’s policy positions don’t change from election to election, as we prefer to take a longer view, grounded in human rights. However, this election, we’d like to highlight a couple of issues we think are particularly important.

Surveillance is not safety

There is a prevailing attitude among many policy makers that greater surveillance is a prerequisite for greater safety. This is fundamentally untrue. In fact, surveillance is dangerous.

When the authorities want it to, the law can be changed very quickly. Activities that were once legal can suddenly become illegal — and then all of the surveillance data collected about our behaviour suddenly makes us an easily located target. We’re seeing that now in the United States with the threats to Roe v Wade and related legislation.

If you’re thinking that it couldn’t happen here in Australia, remember that abortion only became fully decriminalised in South Australia in 2021 — and 15 months later, those laws have yet to be put into action. Same-sex marriage only became legal in 2017. We’ve seen recent attempts to provide exemptions to laws on religious grounds. We’ve seen attempts to hunt down whistleblowers and those who speak truth to power. We’ve seen how defamation law is used as a cudgel, not a shield.

Unlike in decades past, those with power now have access to a lot of data about our everyday activities. They don’t need to expend resources to create it. It already exists. It’s sitting in innumerable databases both public and private.

Waiting.

Your phone apps log your health data because you want to track your fitness or be able to predict when you’ll get your next period. Your kid plays Pokemon Go and their school collects and stores attendance data electronically. Your phone uses algorithms to sort your photos into categories. These are all legitimate use cases — and private information.

Until it’s not.

If a government, corporation, or religious organisation decides it wants to use your data against you, there is very little to stop them.

EFA thinks that needs to change.

Make privacy a priority

Australians have been asking for better privacy protections for decades. This election, consider who you can trust to give you the power to protect yourself, rather than to amass yet more power for themselves.

We don’t need another regulator; we don’t need more police. We don’t need to give more power to the authorities we already have. We need fundamental protections that every one of us can access so that we can all, together, resist those who want to use our private information against us.

The Privacy Act is currently under review. Unlike the raft of legislation that has been rushed through with bipartisan support, increasing privacy protections is a policy that has languished through multiple terms of government.

It is time to act.

POSIWID

The purpose of a system is what it does.

We’re asking you to remember that intent doesn’t matter that much. Outcomes do. Focus on what has actually happened, not what people said they wanted to happen, expected to happen, or claimed would happen. What the law says can and can’t happen doesn’t usually reflect what actually goes on in real life.

We’re asking you to remember to pay attention to the margins, and listen to the people who are trying to warn you. Listen to those whose warnings proved correct, not those who have arrogantly dismissed these warnings while being regularly and consistently wrong.

We’re asking you to remember that the best predictor of future behaviour is past behaviour.

People who have broken their promises before can’t be trusted to keep them in future; and all we have to protect our data right now is promises.

We need a government that will pass robust privacy legislation that puts power into people’s hands so we can take individual and collective action to protect our privacy, both as individuals and as a society.

What will change?

The composition of the Australian Parliament determines not only who will govern, but what changes will occur.

Laws will change. The nature of those changes will depend on who we give the power to make those changes.

If you value autonomy, privacy, and information security, who do you trust to make changes that will help you, not hurt you?

Choose wisely.


Student Privacy and Pandemics: Understanding and Reducing Privacy and Security Risks

Published by Anonymous (not verified) on Fri, 06/05/2022 - 4:20am

I’m presenting several times at TLTCon 2022 later in May. As part of the conference, they’re having some “live” or synchronous sessions where participants and presenters can interact. One of my sessions was accepted and identified as a virtual, or asynchronous, session. This means that I put all of my materials together and make them available for consumption at the leisure of the participant.

In this post, I’ll share some of my materials and thoughts about the virtual session.

Description

This session will briefly review data security and privacy protection regimes as they apply to institutions of higher education. Data security involves everything you need to know and do to secure the data you have and produce. Data privacy is framed by policies that may be handled by an institution’s legal or compliance office to ensure that people are aware of the laws and risks associated with the handling and dissemination of personal data. Data can be a powerful tool for parents, educators, students, and administrators. This includes not only student data, but also employee, alumni, donor, and vendor information.
   
The session will discuss the different types of student data, how that data is used, and the key policies, practices, and procedures that schools and districts should implement to create a culture of privacy. I outline some potential best practices to establish trust and promote transparency. Tips will be shared for talking with students about privacy including the new challenges posed by online learning.

Slide Deck

Video Recording

References


Research or Push-Poll? Troubling Developments at UNSW

Published by Anonymous (not verified) on Mon, 28/02/2022 - 1:26pm

A recent survey from UNSW provides us with a cautionary illustration of what can happen when a student is not provided with the support they need from their supervisors to design good research. If you’re not familiar with how universities and student/supervisor relationships work (many people outside of academia are not), it’s important to remember this as we continue discussion of this survey: it is ultimately the responsibility of the student’s supervisor(s) to ensure that student work is of sufficient quality to be worth the time of the student, any participants they recruit, and the eventual examiners. The student is, by definition, a student and still learning what good research is — it is the function of supervision to support and guide that learning.

Designing good research studies is hard. Students need to learn how to test theories rigorously without their pre-existing guesses and assumptions getting in the way. In the social sciences, it's important to understand how to frame a survey impartially, to the best of your abilities, to avoid tilting the outcome in the direction you wanted, rather than the direction the survey respondents chose. Research should advance genuine inquiry, not push a predetermined perspective. That is propaganda, not research. It is the opposite of what a student should be learning. Nobody does this perfectly – avoiding personal bias is an ideal that scientists should strive for, not a perfect state that we attain by the time we graduate.

Sadly, neither poor supervision nor bad survey design is rare in academia, so why does EFA care about this particular survey? The survey in question purports to be an examination of community expectations regarding the responsibilities of online service providers for child safety on their platforms. In reality, it pushes a particular anti-encryption, carceral surveillance approach to the Internet that has nothing to do with safety and everything to do with authoritarian control over people.

Have you ever seen a political party or the government advance a claim that a particular surveillance law or expanded power is good, actually, because “research shows” that “a majority of Australians support…” whatever it is? Where that research comes from, and how that support was acquired is worth digging into a bit.

Let’s get started.

Learning From Mistakes

Mistakes provide a good opportunity for learning. Let’s use this survey to examine how survey design can be used to manipulate people into answering in a particular, biased way. A word of caution: once you learn how to design a push poll, you’ll start to see just how often they’re used everywhere: newspapers, lobby groups, and yes, even in academia.

The survey is here if you’d like to follow along: https://unsw.au1.qualtrics.com/jfe/form/SV_1X4K65UYCLsPTim

We’ll step through each question, but our main concern is not the individual questions per se but the framing of the survey and the absence of context or alternative solutions that respondents might have opted for instead.

Question 1

"Children younger than 16 should not be able to access adult pornography sites."

Note that we haven't had any discussion of what pornography is, who identifies it, or even how "adult pornography" differs from other kinds of pornography. Is there a whole genre of children's pornography? Pornography for pets, perhaps?

Question 2

"Adult pornography sites should check the age of their users to stop children accessing pornography."

Note how this question follows on immediately from the previous one, to which the vast majority of respondents have presumably answered yes, and leads them to consider only one solution. Kids shouldn't access porn? Well then the porn sites should verify age, obviously! Presented this way, it seems like the only logical solution.

But what if a series of alternative solutions had been presented instead? "To discourage children from accessing porn, which of the following techniques do you think would be most effective:

  • age verification by the website
  • education programs in schools
  • better access to accurate and respectful age-appropriate information about sex
  • an anonymous government-organised age-attestation system that didn't give ID information to the porn site
  • etc.

A survey that genuinely wanted to elicit people's opinions would have offered them a list of possibilities, including an "other (please specify)" option, and would have asked them to think about which, if any, of the options were going to be effective.

Question 3

"Many phones currently scan for viruses and spam. I think they should also scan for child sexual abuse material."

Again, consider the framing. One could instead have framed the same question by saying "Many authoritarian states routinely scan the private photographs and messages of their citizens. I think Australia should do so too, but only for child sexual abuse material." Same question, but the different framing would elicit a very different set of responses.

A well-designed survey would have deliberately avoided framing it in a way that tilted it one way or the other. (Incidentally, one of EFA’s board members wrote a whole paper about why the virus-scanning-as-CSAM-scanning argument is a false equivalence – it's all about whose interest the scanner runs in: https://arxiv.org/abs/2110.07450)

Question 4

"When I am using a website or an app, it is important to me that my data can never be accessed by my government or the police."

Again, obviously, framing. Some Australians greatly distrust their own government(s), but many don't. If the question instead had asked "When I am using a website or app, it is important to me that my data can never be sold to advertisers or accessed by a foreign government" we'd get different answers. A neutral framing would simply have said "never be accessed by anyone other than the intended recipient." But no effort at a neutral framing has been made.

Question 5

"It is important to me that the websites and apps that I use are not being used by child sexual abusers to harm children."

Are we supposed to be less concerned about children being abused if it’s done with apps we don’t happen to use? It’s somehow more abhorrent if it’s done using the good, clean apps that normal, decent, law-abiding people use?

This question attempts to paint child abuse as something that only ever happens over there by strange people we never meet, who are nothing like ourselves, when the reality is that it’s often committed by close family friends of the victim-survivors.

Question 6

"I am comfortable with government or police accessing my data, if it helps websites to be safer for children."

This is a particularly sneaky question. It is tacitly asking that everyone should constantly prove to the government that they are innocent. Constant authoritarian surveillance is not a feature of a liberal democracy. Asking “I am comfortable with full-blown fascism if it helps websites be safer for children” would be ludicrous, but we’re expected to accept that this question is somehow less absurd.

What if the question was "I am comfortable with the government or the police accessing all of my photos if it helps websites to be safer for children”? All of them, including the ones parents take of their children in the bath because they pulled a hilarious face. The intimate ones shared with their partner with full consent and enthusiastic participation. Because that’s what “my data” implies: full access to everything on your phone.

Question 7

"If scanning for child sexual abuse material is done, I would be worried that this information might be used later for different purposes."

This is a better question. It acknowledges that there are risks, not just benefits, to the proposals. An even better question would seek to discover what other risks might exist, and attempt to gauge how worried people are about them. For an example of how this could be done, see the OAIC’s Australian Community Attitudes to Privacy Survey 2020.

Question 8

"Some online messenger apps are considering using “encryption” (messages are locked away from everyone except the sender and receiver, and even the tech company and the police can’t see them). 

This would make it very difficult to detect child abusers grooming children online. 

Do you agree encryption should be used for online messaging?"

Leaving aside the failure to distinguish end-to-end encryption from ordinary client-server encryption, the issue again is framing. The survey could instead have asked "There have been incidents in Australia of intermediaries harvesting health information and using it for financial gain. How important is it to ensure that online messaging is encrypted so that only the intended receiver can decode the information?" or even "Some teens, and many adults, voluntarily send sexualised images to a partner. How important is it that those images cannot be viewed by intermediaries on the Internet?"

The truth is that there are security risks associated with exposure, too. Encryption also makes it difficult for abusers to stalk their victims when they’re trying to escape.

The survey should have asked about risks, or at least refrained from framing the question only in terms of one example of why security would be bad. Security is good!
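
To make the distinction mentioned above concrete, here is a minimal end-to-end encryption sketch, assuming the PyNaCl library; it illustrates the general technique, not any particular messaging app. Only the sender and receiver hold the private keys, so an intermediary that merely relays the ciphertext cannot read the message.

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only public keys are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 7pm")

# The messaging provider only ever sees `ciphertext`; without Alice's or
# Bob's private key it cannot recover the plaintext.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at 7pm"
```

With ordinary client-server encryption, by contrast, the provider holds the keys for its side of the connection and can read, scan, or disclose the plaintext of messages.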

Question 9

"If it helped reduce online child sexual abuse, I would be comfortable with proving my identity to the social media sites that I use, such as by providing a driver’s license number (or similar)."

Again the same question: why would handing your ID to Twitter do anything to deter child sexual abuse? It's not just that this question completely fails to examine any of the downsides (for example, the risk of identity theft, particularly for low-budget, insecure, or untrustworthy platforms); it asserts, without asking the respondent to examine it, a completely false link between undermining the privacy of the survey respondent and making children safer.

Question 10

"Social media and other companies know that some adults groom children online.

I would support social media companies scanning online messaging for signs of grooming, and if detected, warning children who are being groomed online."

Telephone companies know that some adults groom children on the phone. Should phone companies listen in to everyone’s conversations for signs of grooming? Car companies also know that some adults groom children in cars. Should car companies scan conversations in private vehicles for signs of grooming?

The framing of this question assumes that there are no other ways to address grooming of children, implies benefits, and ignores any potential risks from online message scanning.

We could have asked: "Social media companies know that children use online communities to explore identities that abusive and controlling adults in their lives object to, often violently. Sometimes, those abusive individuals and communities call learning about other ways of living 'grooming'.

Should social media companies alert parents if their children are viewing queer or trans educational material?"

Question 11

"What do you expect social media and technology companies to do to make sure that their service is free from child sexual abuse material (CSAM)? Select as many as you want..." There follows a list of options, including "other (please specify)"

This is the first question that comes across as a genuine effort to find out what people think, but it is not without its challenges. It implies that making a service completely free of child abuse material is possible in a society that creates CSAM.

We are not asked “what do you expect society to do to ensure it is free from child sexual abuse material?” We are not asked what we expect of other companies. The question frames social media and technology companies as uniquely responsible for CSAM in ways that are unreasonable.

It’s also unclear whether it’s even possible for companies to take action of this nature on their platforms without significant undesirable consequences. Yet again we’re asked to assess options with only the benefits presented to us, and none of the risks. That’s an unfair trade-off to expect us to make, and it biases the research significantly.

Question 12

"Do you think that the Australian government should pass laws making tech companies take action on their platforms against child sexual abuse material?"

Who could say no? This is a “when did you stop embezzling money” question. It’s a trap.

The question also ignores that Australia already has laws that require companies of all kinds to respond to evidence of child sexual abuse material. No effort is made to assess what legislative gaps exist that necessitate new laws. Are the existing laws not being enforced?

A better question would have included a list of options, or even a specific proposed law. As it is, this question provides no useful information but does push a particular, narrow perspective: Something must be done. This is something. Therefore we must do this.

Question 13

"What do you think should happen to an internet company that repeatedly allows child sexual abuse material to be shared on its platform?

Select as many as you want:" 

What a surprise: when we're not being asked about Australian laws, we are allowed to select the options we want.

Summary

Overall this survey asked us how much personal privacy and security we were willing to sacrifice in return for an ill-defined promise to “make children safer”. It didn't ask us to examine whether any of this sacrifice actually would make children safer, and it didn't ask whether exposing children's personal messages or adults’ identity documents might actually make them a lot less safe.

When it asked for our support for new laws, it didn't even tell us what those laws would be. It didn't ask whether police or ASIO officers should need a warrant, whether powers to "make children safer" would need judicial oversight and would need to be necessary and proportionate, or whether they would actually make the slightest bit of difference.

The occasional real question merely serves to underline the coercive and misleading nature of the others.

This really isn't the student's fault; it's the supervisor's fault for not explaining what scholarship is supposed to be. The trouble is that it deprives Australians of the trustworthy scientific contributions that public debate depends upon, and undermines trust in scientific research more generally.

Now you know a bit more about how poorly designed (or actively malicious) surveys can be used to manipulate research outcomes and public opinion. Keep a careful eye out for more, and let us know if you spot one!

Related Items:

Book Review: Life After Privacy: Reclaiming Democracy in a Surveillance Society by Firmin DeBrabander

Published by Anonymous (not verified) on Sat, 26/02/2022 - 9:00pm

In Life After Privacy: Reclaiming Democracy in a Surveillance Society, Firmin DeBrabander argues that rather than seeking to safeguard and revive privacy in the digital age, we should instead focus on becoming engaged citizens who contribute to a democratic public sphere. This lucid book is public philosophy at its best, writes Paul Showler, though he questions whether there might … Continued

Biometrics: The New Tool of the Hostile Environment

Published by Anonymous (not verified) on Fri, 18/02/2022 - 12:04am


Sam Bright and Sian Norris explore the Home Office’s plan to vastly extend the scope of its immigration data collection


The Home Office gave a presentation late last year that offered a glimpse into the future of immigration policy, and the monitoring of migrants.

Speaking to the Biometrics and Forensics Ethics Group – an advisory non-departmental public body – the Home Office said that “future biometrics policy would require facial images and fingerprints to be captured from all foreign nationals subject to immigration controls coming to the UK”. This would come into force, the Home Office said, once the technology was ready.

This is a vast undertaking – the wholesale monitoring of migrants entering the UK – and one that has profound implications for privacy and individual rights.

Experts at Privacy International told Byline Times that the Home Office’s policy – part of its aspiration to “become digital by design” – is a “reflection of a global trend which raises significant concerns”.

In particular, “there is a risk that this data could not only be used to monitor people but, integrated with other data, used to check against other information gathered via different means and shared both internally and externally,” Privacy International said.

Indeed, the mass gathering and sharing of data has become commonplace among public authorities in the UK and beyond – often used to clamp down on perceived offenders.

In 2014, the big data firm Palantir was awarded a $41 million contract by US Immigration and Customs Enforcement (ICE) to build and maintain an intelligence system to identify and deport undocumented immigrants. The system collates data from multiple federal and private law enforcement agencies, each of which might have fragments of information on these individuals.

Forbes consequently reported in January that Palantir had “helped turbocharge the Trump administration’s crackdown on immigrants”.



A similar, draconian system is feared in the UK, as experts suggest that migrants could be pushed to the margins of society – afraid of using state services – because they may be subject to mass data gathering by Government authorities.

In fact, the Home Office explicitly says that its biometrics data may be used “to check whether an individual has applied for or obtained a service or product which they are not legally entitled to receive… this could include access to a public benefit or service, such as local authority housing and housing benefits”.

Therefore, its biometrics proposals must be seen firmly in the context of the “hostile environment for migrants”, Privacy International experts told Byline Times. “Data collection should be justified and subject to oversight and strict protections to limit how it is used”, they added.

The equality impact assessment on the digitisation of the immigration system has yet to be published.

While serving as Home Secretary from 2010 to 2016, Theresa May implemented a raft of immigration policies designed to create what she called a “hostile environment” for people migrating to the UK. This culminated in the wrongful detention and deportation of dozens of British citizens who belonged to the post-war ‘Windrush’ generation. This abuse of state power, and the persecution of people who had done nothing wrong, still causes institutional distrust among migrant communities – an instinct that is crucial to understanding the Government’s data reforms.

For example, the hostile environment requires employers, landlords, private sector workers, NHS staff, and other public servants to check a person’s immigration status before offering them a job, housing, healthcare, or other forms of state support. It also allows for data-sharing between institutions – for example, a rape victim who sought help from a sexual assault centre was referred to the Home Office due to her irregular immigration status. 

Digital Dangers

In a written submission to the Parliamentary Committee on Human Rights, the Joint Council for the Welfare of Immigrants (JCWI) expressed concern about increasing the powers to obtain biometric information, saying that the proposals “significantly [expand] the class of individual from whom information may be taken without reference to the purpose of information gathering”. This, it said, could be incompatible with human rights legislation.

The digitisation of the hostile environment has been trialled through the EU Settlement Scheme, with people holding a ‘digital only’ status – meaning that they can be verified online but they do not have any physical documentation proving their migration status. The digitisation process will be expanded to Hong Kong BNO passport holders entering through the new Hong Kong visa route, who have a biometric passport, and will gradually be phased in for the entire migrant population.

According to the JCWI, “where digital border surveillance systems, and technologies being applied to welfare, housing, and other services, come together, there is a serious risk for all migrants as regards privacy, data security and safe access to vital amenities”. 

In fact, fears around data-sharing have resulted in people with insecure or irregular immigration status, such as undocumented migrant people, avoiding accessing healthcare – even during the pandemic. The result, according to a report into Filipino migrants’ experiences of Coronavirus, was people dying from the virus at home, too scared to seek treatment.

The hostile environment also pushes people into insecure and exploitative work – in the dark economy – dangerous at the best of times but deadly during a pandemic when workers were prevented from isolating and taking time off sick. 



The digitisation of the hostile environment, the JCWI explains, will “make it easier for the Government to increase surveillance – and subsequently criminalise and punish – migrants, who have no choice but to interact with public services on a daily basis”.

There are also concerns about the barriers that may prevent people from opting out of the biometrics system. The Home Office says that fingerprints are retained for 15 years, unless the individual is granted citizenship (in which case the records are deleted sooner), or they are considered an immigration ‘risk’ (in which case the data may be retained for longer). Facial images, meanwhile, are only deleted when “retention is no longer necessary for use in connection with a function under the Immigration Acts or in relation to nationality”.

Individuals can request that their data is deleted, but they must prove that they satisfy the above requirements – submitting relevant documents and chasing officials. And, even at the end of the process, the Government may decide that it has a “legitimate need to continue to keep or use their data”.

This route is highly unlikely to be pursued by individuals at the margins of society, especially those who have an instinctive distrust of public authorities, potentially leading to few people challenging the Government’s data dominance.

Concerns have also been raised about the Home Office’s track record of managing sensitive information, with the Public Accounts Committee already criticising the department for presiding over a “litany of failure” in its digital border programme. Errors can even lead to people being wrongfully denied access to healthcare or the legal right to work.

“It is a matter of fact that the Home Office has an abysmal record on delivering on IT projects despite its huge budget,” Privacy International experts told Byline Times. “We have estimated that annual expenditure exceeds £2 billion. There is a risk that this will be yet another example of the department throwing money at arms and tech companies for shiny new features which never materialise.”

The Home Office has been approached for comment.


EFA's Submission on Electronic Surveillance Reform

Published by Anonymous (not verified) on Fri, 11/02/2022 - 3:42pm

EFA welcomes the government’s intention to reform Australia’s electronic surveillance framework. The existing framework is in parts complex, archaic, confusing, and not fit for purpose. The work to replace it will be complex, but necessary, and EFA welcomes the government’s commitment to undertaking this task with the required dedication of time and resources to do it well.

Australia’s future as a liberal democracy depends in no small part on our ability to get these reforms right. A global trend towards authoritarianism, secrecy, and fear must be resisted. We are encouraged by the long-term trend towards greater transparency and oversight of surveillance powers, and the understanding of many agencies that their social licence to operate depends on Australians’ continued support for their work.

We have no desire to repeat the embarrassing failures of the past that have left Australia vulnerable and damaged the reputations of the agencies whose very existence relies on Australians’ continuing to believe they are necessary. The extraordinary powers these agencies are granted must always be used, and be seen to be used, in service of the best version of Australia we can imagine.

EFA is pleased to participate in the process of restoring trust in Australia’s surveillance powers.

Read our submission on the Reform of Australia’s electronic surveillance framework discussion paper.

Download: 2022-02-EFA-Electronic-Surveillance-Reform

Related Items:

Book Review: Life After Privacy: Reclaiming Democracy in a Surveillance Society by Firmin DeBrabander

Published by Anonymous (not verified) on Thu, 03/02/2022 - 1:34am

In Life After Privacy: Reclaiming Democracy in a Surveillance Society, Firmin DeBrabander argues that rather than seeking to safeguard and revive privacy in the digital age, we should instead focus on becoming engaged citizens who contribute to a democratic public sphere. This lucid book is public philosophy at its best, writes Paul Showler, though he questions whether there might be ways to envision new and better forms of privacy for our present times.

Life After Privacy: Reclaiming Democracy in a Surveillance Society. Firmin DeBrabander. Cambridge University Press. 2020.


There is something paradoxical about our preoccupation with privacy. Most of us chafe at the thought of our data being collected, analysed and sold. Nobody is celebrating the rise of government and corporate surveillance. And yet the outrage we feel at the latest data breach or uncanny use of machine learning is typically short-lived. We continue to relinquish our smartphone data — sometimes in exchange for something as trivial as a coupon — and to cheerfully divulge our most intimate details over social media.

We say that we care about privacy. But our actions suggest otherwise.

Firmin DeBrabander’s Life After Privacy issues a bold challenge to theorists and reformers seeking to resuscitate privacy in the digital age. At best, these hopes are practically implausible. It’s not just that we have become too reliant on information technology (we have), or that Big Tech has become too powerful (it has); it’s that the advantages of using big data to solve our problems have become undeniable. At worst, however, an overly myopic focus on privacy may be politically self-destructive and a barrier to salvaging our imperilled democratic institutions. Instead of advising individuals to ‘rebuff, resist, or elude surveillance, or loosen their devotion to digital technology’, DeBrabander calls for us to ‘empower people politically in the face of their many spies’ (74). Rather than safeguard privacy, we should focus on becoming (and raising) engaged citizens capable of contributing to the public sphere.

One of the book’s central claims is that privacy is far less integral to democracy than is typically thought. For many of its advocates, privacy is a ‘universal aspiration and an enduring, consistent value or institution’ (76), without which political freedom or human flourishing would be impossible. DeBrabander challenges this presupposition on historical grounds. Privacy is far from monolithic and has meant different things across time and space. In the United States, the current paradigm of privacy — the suburbanite seclusion sought within a detached single-family home — is a recent invention, which for most of human history would have been unfathomable.

[Image: surveillance cameras on a wall. Photo by Scott Webb on Unsplash]

Privacy, as we currently know it, is the upshot of a complicated history whose roots DeBrabander traces to ancient Stoic practices of introspection (and their associated virtues of emotional self-control) and early Christian ideas of spiritual purification. It wasn’t until the early-modern period that privacy became a distinctively political concern. Its connection to democracy emerged gradually as the result of legal struggles over property, the ascendency of a centralised bureaucratic state and elevated prosperity. But this suggests that there is no reason to regard privacy as ‘inherent and essential to democracy’ (75). Moreover, if privacy has prevailed (at least in some form) long before our current legal systems and suburban infrastructure, then whatever advantages it is supposed to confer — for instance, autonomy or freedom of expression — may be secured through other means.

Someone might grant these historical claims but still regard privacy as an indispensable feature of our present political landscape. Because it affords individuals the space necessary for self-determination, the argument goes, privacy is a precondition for political autonomy. For DeBrabander, this is just ‘magical thinking’ (113). Why should anyone think that, when left to their own devices, people will develop the capacities required for making valuable political contributions? Who is to say that privacy won’t breed sadism, or docility, or apathy? This argument is premised on one of classical liberalism’s most implausible ideas: that we are isolated subjects who are unaffected by and through our social relations with others. Life After Privacy devotes considerable space to debunking this atomistic view, which has shaped much of our public discourse on the value of privacy.

These illuminating discussions of privacy’s history and its connection to liberal political theory set up DeBrabander’s own positive proposal: forget about privacy. Or, at the very least, stop rallying behind a fraught individualistic conception of it. The power imbalances between Big Tech and individual consumers are too great, and the potential abuses by governments have become too insidious. Preserving and bolstering democratic institutions in a world where privacy is threatened beyond repair requires a robust and concerted revival of the public sphere. And this demands a renewal of virtues, capacities and values instilled through democratic association in its various forms.

Drawing on the American philosopher John Dewey, DeBrabander believes people are empowered politically when they possess the skills and dispositions necessary for active participation in public life. Democracy is not something that can be legislated into existence, but must be ‘practiced, nourished, and taught in the family, the school, the church, social clubs, or professional or political advocacy groups’ (125). Notably, DeBrabander is skeptical that online associations will instill democratic habits. Social media presents too many anti-social temptations through its promise of anonymity.

There is much to admire about Life After Privacy. One of the book’s virtues is its political realism. It takes seriously the possibility that even the best theoretical arguments for privacy may leave our digital habits unchanged, or that regulation (though valuable) may be no match for authoritarianism. Moreover, DeBrabander is remarkably charitable to those theoretical perspectives with which he disagrees, and his lucid treatment of a broad range of historical and contemporary political philosophers is nothing short of impressive. This is public philosophy at its best.

Although sympathetic to the book’s historical arguments and its calls for a renewed public sphere, I sometimes found it hard to discern the author’s considered stance towards privacy. Some passages seem to lament its erosion in a world of mass surveillance, while directing our efforts elsewhere on pragmatic grounds. It would be nice if we could salvage our privacy — but good luck convincing people to forgo the convenience of a smartphone! As the book progresses, however, one gets the sense that privacy — at least as presently conceived — is doing more harm than good, and that we are better off without it. While these stances aren’t necessarily contradictory, the tension between them remains unaddressed. Moreover, whatever its shortcomings, one wonders whether privacy would still serve an important role in a world — as DeBrabander would have it — where the democratic ethos has been renewed and the public sphere refortified. Perhaps certain forms of privacy are (or should be) dead. But should that stop us from envisioning new (and better) ones to take their place?

Note: This review gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science.