This paper captures the findings of our 2019 research looking at conceptual barriers to safeguarding New Zealand children's data privacy. It explores how parents and teenagers conceptualise corporations and corporate surveillance of their internet activities. It offers a fresh approach to studies of online privacy, which have to date focused on teenagers' practices within social media sites, by examining how parents and teenagers understand and conceptualise privacy in the digital context. The paper challenges existing neoliberal policy frameworks that presume educators, parents, and even teenagers themselves will, through education, develop the motivation to protect children's data privacy. This research found that their notions of privacy risk and harm remain embedded within social rather than technological contexts, precluding concerns about corporate surveillance.
Summary sheet
Several key findings emerged that remain important today as we think about children's privacy not just as consumers of digital technologies at home, but also as students now having to engage with digital technologies in the classroom, often without choice or without being informed that their data is being collected.
In talking with parents and children about how they understand privacy risks when using digital technology and services online, we found four significant barriers to motivating parents, children, and young people to mitigate corporate surveillance and mining of their personal data.
The first important finding was that both parents and children were highly motivated to control who has access to personal information: they had a high awareness of the immediate physical and reputational harms that may result from personal information being shared online without their consent. As such, both parents and children actively managed what they shared with others online to protect themselves from a range of privacy risks, such as online predators, threats to physical location and safety, and cyberbullying.
Secondly, the study found that parents and children did not associate privacy risk with companies and their collection of information about them through the use of those companies' services. While parents and children were aware of the physical/spatial, interpersonal, and to a degree informational privacy risks in the digital environment, they were unaware of the decisional privacy risks posed by institutional and commercial data collection practices.
Thirdly, the study found that parents and children shared limited personal information with institutions like schools on a ‘need to know’ basis: they believed that schools kept sensitive information secure and did not share it with children’s peers or with other institutions without their consent. Parents were concerned that any negative accounts of their child’s behaviour should not be permanently recorded or shared with other schools or teachers, as this might prejudice future teacher-student relations.
Fourth, parents and children were unaware of the threat that commercial and institutional collection of personal data poses to children’s decisional privacy and access to future opportunities as adults. In particular, they were unaware of the longer-term potential for discrimination by authority figures (employers, universities, banks, insurers, etc.) who, by potentially having access to commercially generated sensitive personal data, may decline their applications or access to services, thereby excluding them from life opportunities.
Why is this work important now?
This work takes a critical approach to the problem of consumer apathy toward corporations collecting personal data in the digital environment. It specifically addresses the problems of a self-management model in relation to children's exposure to global tech data mining practices. This paper, published in 2020, remains highly topical, relevant, and timely, as during the Covid-19 pandemic children's exposure to corporate surveillance and data mining escalated through the exponential growth of EdTech within schools. The paper's main finding is that we need a shift in the way we understand 'decisional privacy', bringing it into line with the digital environment and specifically with the data practices of institutions and companies. Recent research conducted by the LSE, the Digital Futures Commission, the 5Rights Foundation, and Human Rights Watch has exposed how extensive corporate collection of children's personal information is, highlighting the need for continued education and debate about how best to protect children's personal data and rights.
The commercial aggregation, use, and sale of children’s data over time will result in their sensitive personal data profiles being used in the future by those in authority to make decisions about their eligibility, for example when applying for a job or university placement, or when accessing insurance and mortgages. Today’s children will have no way of knowing the nature of the personal data against which their applications, as adults, are being assessed, and they will be unable to dispute these decisions. This breach of their decisional privacy is likely to decrease their agency in accessing future life opportunities.
The key contribution of this paper is to highlight to policymakers, academics, schools, parents, and children that commercial data practices in the digital environment put children’s future decisional privacy at risk, a situation that will result in discrimination for many.
Commercial interests pose a significant risk to children's future decisional privacy. However, the way that parents and children think about privacy online does not recognise corporate collection of their personal data as problematic. We need a shift in the way we think about privacy harm, to recognise and understand the longer-term consequences of corporate surveillance, both at home and within schools, on children’s access to life opportunities in their adult lives. We need all stakeholders — Government, schools, teachers, parents, and students — to be part of a conversation around the issues of EdTech in schools.