Still Spoken
The Other End of Identity: Digital Gestation and Children's Data
Usually, Still Spoken is about how your data lives on through technology after you die. But how do you live in technology before you're able to form your own digital footprint, perhaps before you're even born?
When I was writing one of the final chapters of All the Ghosts in the Machine: The Digital Afterlife of Your Personal Data, I ended up in an unexpected place: thinking about how I'd created a digital reflection for my own child that would eventually form part of her digital legacy. These days, there's data generated, mined, and monetised about us from digital gestation to digital afterlife and everywhere in between. What are some of the ways that happens?
So in this episode we're looking at the other end of identity, and there's no one better to do that with than Tama Leaver, a Professor of Internet Studies at Curtin University in Perth, Western Australia; the President of the Association of Internet Researchers (AoIR); a regular media commentator; and a Chief Investigator in the ARC Centre of Excellence for the Digital Child.
This episode is a preview of host Elaine Kasket's upcoming book, provisionally titled This is Your Life on Tech (2023): a technopsychosocial exploration of the human life span, looking at how technology is the third force in almost all the relationships we have across our lifetimes.
Written and produced by Elaine Kasket; recorded in April 2021. I do this podcast ALL BY MYSELF with no production team, editors, or help from anyone other than my wonderful guests. If you want a simple, easy start to your own podcast, you can do what I did: get a great podcasting platform (see the link for mine below) and easily add music and sound effects with an affordable subscription to Epidemic Sound.
All music and any sound effects in this episode were from Epidemic Sound:
Royal Lullaby (All in the Family)
Snooper's Paradise (Jon Björk)
Computer Wiz (Marten Moses)
Get to know Elaine's writing on Substack and Medium.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Still Spoken S2 E3: Digital Gestation and Children's Data, with Tama Leaver
Tama: My name's Tama Leaver. I'm a Professor of Internet Studies. How did I get into this specific area where we're talking about especially very young people and their data? Um, quite a convoluted route, but I guess I was interested in the fact that all of the writing and theory around social media, and all of the stuff written by platforms, was about the user of social media being this very active, purposeful adult agent who, if there was a problem, the answer was: give them more tools, give them more settings. They can make informed decisions for themselves. That's how we fix everything. And I guess initially I was just really interested in that. What about the whole spectrum of human life where people really can't make those informed choices, especially the beginnings of life and the ends of life? Who's responsible then? And do we have any mechanisms for that? And the original answer was no.
As my children were born, I guess I became more and more interested, looking at all of the practices around that: the sharing practices, the rituals of sharing ultrasounds, but even the evolution of the technology. Just in the space of a decade, I think we were probably the last people in the entire world to get the offer of a VHS cassette tape of an ultrasound.
Elaine: That’s Tama Leaver. He's a professor of Internet Studies at Curtin University in Perth, Western Australia. And he's been through the ultrasound drill with four kids.
Tama: So I think it's part of that ritual. There's always been a sense of: we know you wanna hold onto this memory; on what platform do we provide it?
The option was a VHS tape. Um, and by the time we got to the final ultrasound, they'd already upgraded, so it was the burnt CD. Then it was a USB stick, I think, for the middle two. And then by the time my fourth was born, it was like, here's basically a bespoke social media platform for your ultrasound images.
And yeah, it was 100% pushed by the clinic that was doing it, and that was the only option. Yes, they can still print one or two hard copies, but really they're like, ah, you don't really wanna do that. We'll send you everything. Um, and they really do: you got the full sort of 15-minute video, and it's like, do I really want this much? I'm not sure I can do much with that. But it was all there, and it was also set up for sharing. It wasn't just, you can get this thing; it was, send this on to other people. And I think that's what was interesting about that sort of architectural change.
Elaine: I'm Elaine Kasket, and this is Still Spoken, a podcast about how the dead live on through us, our stories and through technology…except this time we're talking about the other end of life.
I first met Tama because of our mutual interest, which is what happens to your data when you die and how the dead live on online. When I wrote my book, All the Ghosts in the Machine, one of the final chapters ended up in a place that I didn't expect when I started the project: I found myself talking about the digital footprint of my own child, the one that I'd constructed, and how, by sharing information about her online during her childhood, I was assuming a really powerful role in not just shaping her personality but determining, at least partly, what her eventual digital legacy was gonna be. And that's where Tama's research really comes in. As the title of one of his papers says, he's interested in visualizing the ends of identity. So, in this special edition of Still Spoken, we're going to zoom right to the other side of the human life cycle and look at the data of infants, kids, and the as-yet unborn.
Now I recorded this interview with Tama back in the spring of 2021. Sorry about the delay, Tama. If anything, things have probably got even weirder since. Um, but let's pick up our conversation from where we left off in that introduction.
You know, Tama, I remember my own interest in this being awakened quite a lot when I was a new parent, observing what other people were doing on social media. But it's so interesting: even though I was, at the time, an expert on digital stuff and privacy at the other end of life, as a besotted new parent, and also an expat who was very far from family and friends, I was very unreflective, un-self-critical, about sharing my infant's information and my early experience of parenthood, because it had significant rewards and reinforcement for me. I don't know if I would've used it in the same way if I'd still been close to my family of origin at that time.
Tama: Mm-hmm. I mean, I think that that point in life is exactly the time when you most want to share, and you are least likely to think through the consequences of that, you know, years down the track. You're very much in the moment. And I think that's one of the challenges in this space: getting to people so that they've front-loaded that thought process before they see that first ultrasound is actually really tricky. There might be better opportunities after, say, a child's born; that's the point where you might go, okay.
In Australia, at least, everybody gets handed this ridiculously large book, which is like, here's all the things about childhood. Take it home and have a read; you might need it. And you might be able to slip a page into that about social media or something like that. But to some extent it's those earlier bits, the ultrasound sharing, the dreaded gender reveal party if you are so inclined, things like that, where the initial parameters of sharing are all sort of set up. And once you start sharing that stuff, it's really hard to turn around and go, oh, now my beautiful baby is born, I'm not gonna show you. You just don't really do that. So I think establishing those norms is really tricky to do in advance. And it's usually parents who have got a kid or two kids who are like, oh, I wish I'd thought about this earlier. And I was exactly the same. When my oldest was born, I was still blogging back then, and I actually wrote a blog post with, here's a picture and here's his full name.
And I'm like, oh my God, I can't believe I gave that all away when he was, you know, a couple of days old. In retrospect, I can see why that was a terrible idea.
Elaine: Oh, absolutely. And we'll get to some of the reasons why that might be a terrible idea in a minute, but you just mentioned one of the two major ways, I think, that kids get profiles online before they even emerge into the world. We were talking about a sonogram a minute ago, but then yes, as you say, the dreaded gender reveal party, about which I know you have some opinions. And so do I. But maybe explain to the uninitiated, if there is any uninitiated person left who doesn't know what a gender reveal party is, what that is.
Tama: Ah, there would definitely be uninitiated. They're still relatively rare in Australia; I don't think they're the norm here at all. But a gender reveal party is basically when expectant parents gather friends and, at that party, discover, hopefully not already knowing, although sometimes they do, the gender of the child that they're gonna have.
And usually it's in some fairly contrived way. It might be a white cake that you cut into, and you discover, is it blue or is it pink in the middle; or you pop balloons; or you do something. And in that moment, surrounded by friends and family, you suddenly discover the gender of your child and share that moment in this sort of ritual where the very first identity marker is set: this new person will be gendered this way, and identity therefore can begin. And I mean, it's a very strange ritual, one that seems to have weird connotations now. I think there was a massive bushfire on the back of somebody's gender reveal party in the US or something. I don't really understand.
Elaine: Tama is correct. There was a massive bushfire somewhere in America. Not just the Sawmill Fire, but also the lethal El Dorado blaze and an assortment of other tragedies: an inadvertent pipe bomb, a novelty signal cannon, a plane that pitched into the sea with its doomed cargo of two pilots and a sign reading 'It's a girl!' One expectant father never made it to the party, because the unspecified contraption he was preparing detonated as he tinkered with it in the garage. As the trend burgeoned, fuelled by clips shared on social media, people found increasingly creative and occasionally fatal ways not to settle for cake.
In one video, a group of people aim their smartphones at a curious trio: the expectant dad, his heavily pregnant partner, and an elderly pet alligator called Sally. Someone hands over an apparently intact watermelon, and the man expertly tilts Sally's head upwards, tossing the melon into her gaping mouth. The beast's jaws snap reflexively shut, spewing arcs of blue goo towards the delighted onlookers.
Tama: It does seem to be sort of like escalating warfare: my gender reveal party was so good, we took out a third of California!...which may or may not be a claim to fame. But yeah, it is one of those weird rituals. And I guess gender is one of those challenging things where, even though we sort of accept that gender isn't a binary, it's a spectrum and all sorts of things, the absoluteness of 'is it a boy or is it a girl' is still the first question people will sort of need an answer to, to start thinking of this entity as a person.
Elaine: Yeah, because of course what's technically being revealed is what the scans or the tests say about the child's sex. But in the whole performance of the gender reveal party, conducted within the community of family and friends and so forth, the gendering process, the stamping of a particular gender identity on this yet-to-be-born person, starts to take place. And I think what's interesting to me is the reactions of the wider community on social media. For example, those people who might not have been at the party, but they're commenting and responding to the video of the gender reveal party, and they really pile on to that gendering process.
I remember seeing a friend from high school who was an NFL football player, very sporty guy. A guy's guy, jock sort of person. And when those balloons exploded and the blue ticker tape things floated down, this man, who'd had two daughters in the past, went absolutely mental. And he posted this video very proudly on social media.
And it was really very cute, but everybody started talking about how the boy was gonna like football, like his dad; everybody started talking about the sports that he was gonna play. And I watched this unfolding, just thinking again: what if the kid's not into that at all when he grows up? Of course it was always the case that you're aware of your parents' hopes or expectations or preferences for you, but there is something about having it enshrined and developed in such an elaborate way on social media that made me think about that much more.
Tama: Yeah, that kid, when they fail to get into the high school football team, will be sitting in the shed, watching that video from 14 years ago, terrified of telling their parents what actually happened.
Elaine: Yeah. So these are really early, potentially, the kinds of relational difficulties within a family, or identity issues for a particular child who's been gendered in this huge, celebratory, performative way. What we're not talking about yet, and what I'd love for you to talk about, is: okay, so you post a sonogram online, and maybe you do have the baby's full name on there. Where's the problem here? Because a lot of people, maybe people who don't really understand how social media and the business and the economics of it all work under the hood, might not grasp the difficulty here with what seems like innocuous, joyful sharing of a fantastic event about to happen. So problematize it for us.
Tama: Okay. So, I mean, the two things are not separate; it can be a joyful thing at the same time. But let's just use the example of, say, posting to your Instagram. Let's say you share the 12-week sonogram on Instagram. Depending on who you are, you may have just pulled out the phone and taken a picture of the screen at the clinic. A bog-standard sonogram has the mother's name, the date of the scan, the estimated date of birth, the location of the clinic that you're in, and a bunch of other things, all on the screen.
It has probably the first picture of this sort of entity, but all of that other information too. One, even if it's in image form, it's trivial to pull that information back out as text. All social media platforms process images by firstly asking: is there text in here? Can we pull this out? Secondly, the platform will look for metadata.
So it'll ask: where was this taken? When was this taken? What other information can we extract from that? I guess one of the things people don't really understand about something like Instagram, which is owned by Facebook and so sort of does the same backend work, is that as soon as it detects another person, or even a person not quite yet born, it will start a profile.
So Facebook doesn't just profile the people who've elected to have profiles. It has a profile of anyone connected to them, and starts to gather information and build the idea of: who is this? What do we know about them? Now, if you can start that profile from even before they're born, then by the time they, say, turn 13 and can officially sign up for Instagram or Facebook or WhatsApp, all of which are connected because they're all owned by the same company, that profile will be so much richer. So in terms of what the platform can then do, selling information to advertisers to allow them to do targeted advertising, all of the hard work, to some extent, is done before you even sign up.
So I guess that's some of the stuff that's happening for young people that we kind of forget is going on. And that's why Facebook's relational map of families and children is so important to the company: it's so valuable because it's not just what's happening today, it's building those really rich profiles into the future.
And it's very contextual right now, because there's a lot of pushback: does Facebook hoover up too many cues? Does it look in too many places? Has it got too much reach on the internet? Should we stop sharing as much stuff? And to some extent Facebook is fighting some of those battles, but what it's also admitting is: we've got so much information now that we actually need less and less information to reach the same conclusions about those people. And one of those really rich data sources that is not going away is parents sharing lots of stuff about their kids; and the value of that in pre-formatting the profile of that young person, when they eventually end up using one of these services, should not be underestimated. I think that process is completely invisible to most discussions of Facebook, because those discussions are about the active user. It's about, well, I could choose to leave Facebook. That's true, but Facebook doesn't stop knowing stuff about me or anyone related to me, especially my kids. And I think it's that process that's happening largely invisibly. It's not completely invisible: if you do that thing where you download everything Facebook knows about you, there are a whole lot of flags. It knows you're a parent, it knows the ages of your children, things like that. But they're very, very well hidden in terms of the everyday experience of the platform.
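(A note for the technically curious: here's a minimal Python sketch of how trivially a platform can pull information back out of an uploaded photo, along the lines Tama describes. The filename is hypothetical, and real platform pipelines are far more sophisticated; this just illustrates the principle.)

```python
# A minimal sketch of reading the metadata embedded in an uploaded photo.
# "sonogram.jpg" is a hypothetical file; requires the Pillow library.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("sonogram.jpg")
exif = img.getexif()  # EXIF block written by the phone or camera

for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)   # map numeric tag IDs to readable names
    print(f"{name}: {value}")         # e.g. DateTime, Make, Model

# GPS coordinates, if present, sit in a sub-directory of the EXIF block.
gps = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo tag
if gps:
    print("Location data present:", dict(gps))

# Text on the image itself (names, dates, clinic details) is just as easy
# to recover with off-the-shelf OCR, e.g. pytesseract.image_to_string(img).
```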
Elaine: It occurs to me, and maybe you can confirm this: when you download what Facebook knows about you, you get this set of information with the little pictures. Facebook thinks you're interested in football, Facebook knows you're a parent, et cetera. But behind the scenes, the breadth and complexity and hugeness, on a big-data level, of what Facebook knows is a whole bunch of stuff that's not only invisible to us but, if we were to get access to it, would be uninterpretable by us.
It would be uninterpretable by any one entity, because the big-data machine, and that's the whole point of it, hoovers up such an unimaginable universe of information that the human mind is incapable of understanding all that data and all the correlations that are possible from it, including correlations that might make no sense.
You know, Facebook thinks I like football because I used to check whether there was a football game on before I dared go shopping in a particular location, because I didn't want to encounter the crowds. Right? So it's done this dumb thing where it's like, oh, she searches for this, so maybe she likes that. But actually there's a whole lot more going on behind the scenes. It says, okay, for some reason, because she owns this kind of car, she's more likely to buy that kind of thing. Oh look, all those people who own that kind of car are likely to buy that thing. And nobody could predict that. That's the point: it looks for the unpredictable.
Tama: Yeah. It's all about correlations, and many of those correlations will be wrong, or will be stupid correlations. And because a lot of the decision-making about those correlations doesn't involve people anymore, it's machine-learning algorithms, we can't as easily see the processes by which different things are joined.
And I think, this is a huge segue, but that's where some of the racial profiling stuff came from, because Facebook didn't even necessarily understand that when you do that sort of correlation work, crime and Black people do go together historically, because that's what's been talked about. That doesn't make one causative; it just means that there's been a correlation historically, for all sorts of other reasons that the black box doesn't understand. And I think that decision-making process being invisible, and designed to be invisible, is in and of itself a huge other problem that's possibly beyond our scope, but worth noting.
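(A toy illustration of the problem Tama describes: if you mine enough features for correlations, some will look strong by pure chance. The data below is random noise, not anything real.)

```python
# A toy illustration of naive correlation mining: with enough features,
# some pairs correlate "strongly" by pure chance. The numbers here are
# random noise, not real platform data.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_features = 500, 200
data = rng.normal(size=(n_users, n_features))   # pure noise

corr = np.corrcoef(data, rowvar=False)          # feature-by-feature correlations
np.fill_diagonal(corr, 0)                       # ignore self-correlation

i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"'Strongest' correlation: features {i} and {j}, r = {corr[i, j]:.2f}")
# A pipeline that keeps every such pair will happily report relationships
# that mean nothing at all, and no human reviews most of them.
```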
Elaine: Yeah, absolutely. I mean, if you talked to the average person on the street about the behavioural futures market, they would have no idea what a behavioural futures market is: the idea that what we're putting out there about younger people, starting their profiles early, is going to be hugely valuable for many companies, because these are the consumers of the future, and these consumers are going to be known already at the time they spend their first dollar, their first pound, their first euro, whatever. And they're gonna be known in a much more thorough way than our generation was known by the people who wanted to sell stuff to us by the time we hit 15 and got an after-school job or whatever it is. So companies have the potential to understand our kids and their vulnerabilities and their purchasing preferences and everything else in this much more complete way. And that's going to render them vulnerable, unless there are different regulations or legislation in place by then about what it's permissible to do.
Tama: But also how that information is kept in their systems. So, for example, the GDPR might say you are not allowed to store a specific profile of a named kid in this way; but what a platform might do is just go: child one through child four of this person have these characteristics. We're not allowed to name the child; we're not allowed to attach that to a specific person; but we can continue to extrapolate over time. And at a certain point it then becomes legal to start putting that information into or around the named profile. So there might be interpretations of the law where, even if it says don't collect data about kids, it's not saying don't collect data at all, as if no kids exist. What it's saying is: don't put those two pieces of information together at this time. It's not necessarily promising that you can't in the future. And maybe that's the intention of the law, that it's supposed to prevent that, but whether it does or not is a whole other question.
Elaine: I mean, I was aware there was a function that I never used on Facebook. Children under a certain age aren't supposed to have profiles on Facebook, although we know they all do, with parental facilitation. But there was this album function where, okay, your kid's not supposed to have a profile, but please gather all the images of your child into this bespoke album. Might we warmly invite you to do that?
Tama: And if you did that, the level at which the automatic tagging algorithm could learn the face of your child was terrifying for people. We've got all sorts of examples of these platforms doing facial recognition learning pretty quickly. And developmentally, one of the interesting things for facial recognition is that as a child grows, the face actually moves around a bit, and sometimes the system loses track of it; but if it knows the evolution of that face over time, then it actually has a much more robust fingerprint of you in terms of your facial biometrics. So, yeah: parents do the hard work of teaching Facebook exactly what it needs to know to profile their kids really well in the future.
Elaine: Yeah! The interesting thing about facial recognition technology: I was talking to one of the FRT experts in the UK, who was involved in the London Metropolitan Police FRT study, and he was saying that, on the one hand, facial recognition technology is really bad at identifying kids' faces. There's a reason, evolutionarily speaking: the younger kids are, the more they look roughly similar, because we're supposed to respond to them as children; they have certain kinds of facial markers that say help me, look after me, take care of me, all those things. And then as they age, they start to differentiate, and then FRT gets better at recognizing them, at least if they're white, because we all know FRT struggles with differentiating people of colour. But what you're saying is another point: that actually, if you keep on uploading images of the same kid over time, the learning that's possible there is potentially pretty monetizable.
Tama: There's an experiment that I have at home. So Google used to have, it's defunct now, a piece of software called Picasa, which was sort of a downloadable version of the software they use, sort of like the ancestor of Google Photos. What it had in it was a really basic version of the facial recognition technology that now operates at a much larger scale on Google proper. That's the software that I used to manage my photos on my hard drive, and all of my kids looked super similar when they were really young. And even this really scaled-down version, once it's seen the kids' faces at particular ages, becomes about 99% accurate at picking them even two or three years out from that set of photos.
So that's a cheap, nasty version of what Google does at a much more professional and profound level today. If that's what they were giving away for free, imagine how good the software is that they're actually using. And Google and Facebook, I would imagine, would be on par for that sort of technology.
But yeah, it's always really instructive for me to see, when I upload new photos, that it knows them all already. If this is the cheap version, I'd hate to think what the professional version is doing.
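(For readers curious about the mechanics: modern face recognition maps each face to an embedding vector and calls two photos a "match" when their vectors are close. Here's a minimal sketch of that matching step; the vectors and threshold are made up, since real systems compute embeddings with deep networks trained on millions of faces.)

```python
# A minimal sketch of embedding-based face matching. The embeddings below
# are made-up toy vectors; real systems produce them with a trained network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face embeddings: 1.0 means identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for the same child at ages 2 and 4, plus a stranger.
child_age2 = np.array([0.91, 0.10, 0.38])
child_age4 = np.array([0.88, 0.15, 0.42])
stranger   = np.array([0.12, 0.95, 0.27])

THRESHOLD = 0.9  # in a real system, tuned on labelled data
print(cosine_similarity(child_age2, child_age4) > THRESHOLD)  # True: a match
print(cosine_similarity(child_age2, stranger) > THRESHOLD)    # False

# Every new tagged photo extends the reference gallery, which is why
# regularly uploaded childhood photos make matching so robust over time.
```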
Elaine: Yeah, the cheap version from a few years ago. When I was speaking to Professor Pete Fussey, this UK FRT expert, he was talking about how common this discourse still is: that if you haven't done anything wrong, you shouldn't have anything to hide, and so things like FRT, or having information about you out there, shouldn't bother you if you haven't done anything wrong. And he's always having to correct this misapprehension, because harm to you, or disadvantage to you, can happen for other reasons. But I think this is something the average person might struggle with, because that discourse is very powerful. They're thinking: well, it's okay, this is for my safety or my wellbeing; this enables convenience for me in unlocking my phone, or being recognized at various retailers. People don't have an apprehension of the harm. Can you unpack some of the reasons why this should perhaps bug us a bit more, that our systems are able to identify our kids in this way?
Tama: Well, there are all sorts of levels to it, well beyond the scope of what we can easily talk through. But at the end of the day, the enshrined right to privacy is one of the most basic things here.
I mean, children do have an enshrined right to privacy. We know recently that's been explicitly extended into the digital sphere, and yet it seems to be something that is almost universally ignored in the design of software. If software does discriminate between children and non-children, it's usually not at 18 or 21; it's usually at the age of 13, because of a particular legal quirk. Thanks to the Children's Online Privacy Protection Act in the US, under 13 you're just not allowed to collect their data without parental consent; and parental consent is too hard to get, so the platforms go: oh, we just won't allow under-13s then.
But there are all of these layers to that question of why profiles are problematic, why the biometrics of profiles are problematic. I mean, yes, it's true, advertising agencies have been doing profiling work since they began, but the level of specificity and targetedness is another thing altogether.
To go to a big-scale example: one of the reactions people had to all of the stuff around the Trump campaign doing exceptional micro-targeting, you know, a different ad that was seen by 18 people, was: what does that achieve? And people realized that when you know so many things, the capacity to push that person's buttons in a slightly more nuanced way than the person sitting next to them is of real value to someone who wants to persuade you, whether that's to sell you something, or to change your vote, or to convince you not to vote, in the case of the US system. And the more data you hoover up, the more sophisticated the capacity of those systems to have influence.
Now, we do sometimes massively overclaim. I don't even think people are completely dupable; people do have agency, and this isn't magic. It doesn't always work. But it's about working better on the basis of information that probably should not need to be collected. And I think that's where the conversation around children is quite a challenge for a lot of people, because there is no good reason to hold onto children's data beyond the immediate functionality of something. And in almost any case you look at where children's data is being recorded, the longevity with which it is kept and the purposes for which it might be used are far beyond the scope of making the thing work properly.
Elaine: Yeah.
Tama: I've done a lot of work looking at infant wearables, which are truly fascinating. Some of the functions of those wearables are very important, but there is no good reason why the data collected by them should persist in the cloud. It might be that you need to store it locally for, say, a few months for profiling purposes, if you're looking for a pattern around health or something, but that data doesn't need to go somewhere else. It should be processed locally. It shouldn't be something that is shared, and it shouldn't be something that is aggregatable, unless a case is made to a parent for an explicit use. If you want that data to be collected with other kids' data to show something, there might be good reasons to do that, but that should be something parents are transparently aware of, not something that happens by default.
And if you wanted to go through the nine steps to opt out, you could, but we've buried that really well in our terms and conditions; if indeed the option to opt out exists at all.
Elaine: I was less aware of wearables. They were probably in their infancy at the time my daughter was born, and because she had no underlying health conditions that we knew of, we didn't think about those things.
I'm not even sure if we had a baby monitor; she probably yelled loud enough to hear her wherever we were. But I was really fascinated by your deconstruction of the advertising and marketing that went into one particular infant wearable. You talk in one of your presentations about the Owlet, which is an eye-wateringly expensive physiological monitoring device that's designed for your peace of mind; and on its website are all these testimonials from parents that really scare the life out of you about things you didn't know you should be scared about. My daughter's 11, and I was retrospectively alarmed by some of these testimonials on the website; I felt that little parental fear twitch, reading those stories. So talk to me a little bit about your foray into the world of infant wearables and their marketing.
Tama: Yeah. So infant wearables are one of those fascinating cases of inventing a product and then convincing the market that it's an absolute necessity, when really it's a novelty. The example I spent the most time thinking about was the Owlet. The Owlet does what's called pulse oximetry, which looks at the amount of oxygen in your blood, measured not by actually extracting blood but by using a sensor that can read that in real time. Now, this is something that hospitals use, especially after babies are first born, just to make sure their breathing works properly, that they are actually taking in enough oxygen. So there is a purpose for this technology; it's not a ridiculous thing to have. But usually, in 99.9% of cases, if a child is healthy, that technology doesn't need to leave the hospital. It doesn't come into the home. Now, it is true that sudden infant death syndrome exists, and it still isn't well explained. But what has happened with the Owlet is they have implied this connection: that if you know what their blood oxygen level is, it will prevent SIDS.
Now, they're very, very careful in the wording on their website: they don't actually say that. What they do instead is present all of this testimony from other parents who have inferred it for themselves, who have said: oh, there was an alarm, and it went off, and I took my child to hospital, and they said maybe they weren't getting enough oxygen, and I probably saved their life, and it's entirely because of this device. Now, there probably are one or two cases where that may have happened, where the device actually made a difference to health. But then there's the number of cases where parents have had the crap scared out of them because the alarm has gone off, and actually it's either been a false positive, or it's been 'I didn't quite wedge the sock on right and the sensor wasn't quite in the right place', but nevertheless we had a panic and took them to hospital. I think that's where you see that this technology may not be as safety-minded and preventative as it likes to think it is. But also, this idea that it should be something every parent is doing is, for me, the big challenge here. This idea of selling peace of mind is dangerous, partly because very few of these technologies are actually medical grade. Now, I think you found that there are one or two examples that have evolved recently.
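(An aside for the curious: pulse oximetry works by comparing how much red versus infrared light the pulsing blood absorbs. Here's a rough sketch of the textbook "ratio of ratios" calculation; the linear formula is a classroom approximation, and real devices use empirically calibrated curves and much more signal processing.)

```python
# A rough sketch of how pulse oximetry estimates blood-oxygen saturation:
# oxygenated and deoxygenated blood absorb red vs. infrared light differently.
# The linear calibration below is a textbook approximation, not any
# particular device's algorithm.
def estimate_spo2(red_ac: float, red_dc: float,
                  ir_ac: float, ir_dc: float) -> float:
    """Ratio-of-ratios SpO2 estimate from the pulsing (AC) and steady (DC)
    components of the red and infrared photodetector signals."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    return 110.0 - 25.0 * r  # common approximation, in percent saturation

# Illustrative readings, not from any real device:
print(f"{estimate_spo2(0.02, 1.0, 0.04, 1.0):.0f}% SpO2")  # r = 0.5 -> ~98%
```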
Elaine: Yeah, the Snuza Hero monitor here in the UK. The reason I became aware of it: I knew of a particular blogger here in the UK who had, very tragically, lost her daughter to SIDS. She had two children subsequently, and particularly with the first child, the 'rainbow baby' after the loss of her daughter, I remembered her having talked about monitors. She was somebody who really weighed things up and really thought about things, and she said at the time: I'm not sure what to do; I'm aware of false positives. But then she repeated this 'peace of mind' thing, and ultimately that first child wore the monitor all the time. It was never, ever taken off, just in case the child went to sleep and something happened. And I noticed she was advertising a giveaway of the Snuza Hero, and having read your work, I was a little uncertain when it talked about medical certification. But it turns out that yes, in fact, in the UK it at least had this medical certification or sign-off. And it was one of the least expensive ones on the market, as opposed to some of the other wearables that don't have that certification and that cost up to four times as much.
Tama: Yeah. I mean, the fact that the American website for the Owlet has a payment plan for service personnel that splits it over six months so you can afford it tells you something about how ridiculously expensive this device actually is. But the other thing about medical grade is that even if it is a medical-grade technology, i.e., it does what it claims to do to the accuracy it claims, it still relies on the parent in the house putting it on right. And that's one of the real difficulties here: how often, especially in those first few months of life, are parents basically non-functional zombies who are getting through the day on: okay, my child is not screaming right now, tick, I have therefore done everything right? That's often as much brain as you've got. The idea that you're gonna know exactly how to position a monitor on a baby every time they fall asleep, which can be, let's be honest, a lot of times in a given day, is a tricky ask. Now, I'm not saying it wouldn't be valuable for some people. But what I would say is that if a hospital believes you need that technology, you go home with it. They will set you up in the house if they believe there is risk.
And I think there is a balance to be struck around not making new parents believe there's more risk than there actually is. That's one of the difficulties here: the marketing message is, there's so much risk, but you can reduce risk this way. If that were the whole thing, if we were just discussing whether the device did or didn't reduce risk, it would probably be a very different sort of conversation. But the other thing the Owlet is doing is giving you an interface to understand your child. You are supposed to be out of the room, you are supposed to be out of earshot, and you are supposed to be holding up your phone and looking at three little notifications; and if they're all in the green, that tells you your child is healthy and well, and you've done your job. Now, the idea of having an interface to understand your child has some real challenges. It changes, not necessarily in a bad way, but it certainly changes the way you understand that connection and that relationship. And if you happen to have multiple interfaces that you are using, then it will change the way that relationship initially develops. As I say, I'm not saying this is a good or bad thing; I'm just saying it's different.
But the bit that I would say is most problematic is that all of the data that the Owlet, and many of these other devices, collect then goes to the parent company and is owned by the parent company. You don't own your child's data; the biometric history that the Owlet records of kids is the property of the Owlet company. Now, for a little while they were so arrogant as to offer to sell you back the aggregated version of that data. They did roll that back eventually: we'll give you free access to your child's own data over time. But even then, their sales pitch to the world is double-sided. On one hand, they're saying to consumers: here's this device, this will give you peace of mind, it'll warn you if something's wrong. What they're saying to investors is: we've got this massive data set that nobody's ever built before; this is gonna be so valuable for learning all sorts of stuff; invest in us, and we'll figure out how we use this data. They are trying to make a profit out of this on both fronts.
And I think that's the challenge when you look at these companies: they conceive of themselves as big-data companies that happen to have made a device, not the other way around. And there would be a way to do this in a less intrusive way that doesn't build a history of that child.
You could easily see most of the functions of most of these devices recording data locally, using it if you need to over a couple of days to show a pattern; but beyond that, the data could go away. There is almost no technology company that would say, oh, we're designing the data to go away after a period of time. And I think if that were the default, and you then had to explain why you weren't doing that, that would be a much better design than what we currently have, which is really led by tech startups trying to extract data.
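(Here's a minimal sketch of the design Tama argues should be the default: readings stored on-device only, purged automatically after a retention window. All names and the 90-day window are illustrative, not any vendor's actual implementation.)

```python
# A minimal sketch of local-only storage with automatic expiry: data
# "designed to go away". Table names and the 90-day window are illustrative.
import sqlite3
import time

RETENTION_SECONDS = 90 * 24 * 3600  # keep roughly three months, then delete

db = sqlite3.connect("readings.db")  # an on-device file, never synced to a cloud
db.execute("""CREATE TABLE IF NOT EXISTS readings (
                ts REAL, heart_rate INTEGER, spo2 REAL)""")

def record(heart_rate: int, spo2: float) -> None:
    """Store one sensor reading locally."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?)",
               (time.time(), heart_rate, spo2))
    db.commit()

def purge_expired() -> None:
    """Delete everything older than the retention window, e.g. on each boot."""
    db.execute("DELETE FROM readings WHERE ts < ?",
               (time.time() - RETENTION_SECONDS,))
    db.commit()
```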
Elaine: Yeah. Societal health warning: applications of data may be yet to mature. You know, with a lot of these startups and pitches, the main sell is: look at all this data that we have; who knows what we might eventually be able to use it for? There might not be immediately apparent uses, but once you, or once the big-data machine, works out what some of these correlations are, it's only then that you can figure out how that data might be used; and then you monetize it, and monetize it again and again. And of course the children who contributed to this database may have grown up long ago by that time, but still, their data in aggregate might be contributing to the fortunes of the company that sold their parents the physiological monitor back then.
Tama: I'm just saying, it feels like a science fiction scenario, but it might be that the correlations over time show that this particular weird number in a heart rhythm, at this particular time, actually correlates with a 99% chance of having a heart attack before 45. And if that's the case, what happens to that data if the Owlet company gets bought by, say, an insurance company? We sort of always use these what-if scenarios; well, Google bought Fitbit. What do you think happened with all of that data? The idea of the big fish swallowing the smaller fish, that is how most social media companies make their profit. They wait for Google or Facebook to go: you might be a competitor, we'll buy you out today. But you're not just buying the product, you're buying all of the data that company owns. And if it's data that can be combined with an existing database the company already has, then that's another layer of information they're bringing in. And it's that question, of aggregating different data sets to learn more and more about individuals, that I think we really don't have any stringent rules around at all. But we know it's happening.
And, you know, Fitbit data has already been used in court cases: oh, your heart rate wasn't doing this, you couldn't possibly have been experiencing a break-in; this must have been something that was set up, or whatever. Those sorts of things already happen in court. So 'what does the data tell you' is a question that will have many other answers over the potential lifetime of that child. If on average they're gonna live to, say, 86, then that's an awful lot of years for somebody playing with that data to try and learn something else from it.
Elaine: Absolutely. And as you said, health insurance companies, particularly in those contexts where health insurance is really powerful, in countries like the US where the same healthcare isn't guaranteed to every citizen: it seems like a really obvious market for these kinds of data, especially when you're building up a physiological profile of a kid over time. And, you know, they distribute ones that aren't wired up to the cloud, thankfully, but I remember they distributed cheap step counters at my daughter's school. And my daughter's always been desperate: can I have an Apple Watch as well, so I can track everything and see how high my heart rate goes during exercise, or whatever?
She's interested in those kinds of things. We do sort of fetishize our data, especially when it's data about something we haven't had data about before, some process in our bodies that's previously been obscured and invisible to us, but that's interesting to us in our sort of narcissistic way: ooh, look at what this is doing. And we might be sold the idea that if we're able to track this, we're able to catch a heart arrhythmia or something like that, and then we'll be safer, we'll be better off.
But there's also just the coolness factor of this new candy-coloured graph. Every app I see that tracks some kind of physiological information is hugely visually appealing. It presents the data in all sorts of different ways and compares things, and you start triangulating it with other kinds of information: oh, that's interesting, if I compare my alcohol intake with my sleep, look at the conclusions I can draw. And I'm thinking, well, these conclusions that I can draw are nothing compared to the conclusions that somebody else, somewhere else, is drawing.
And if a health insurance company has access to that information, it might think: according to the correlation information we have about people whose data looks like this, maybe 2032 is a good point to cancel her health insurance policy, or not renew it, because according to our calculation, she might be cruising for the kind of health thing we don't wanna cover. And that sounds really dastardly, but, you know, they ain't saints.
Tama: Yeah, well, that's the thing, isn't it? The real challenge sometimes in talking about this area is that it sounds science-fictiony. If people have watched Black Mirror and gone, this is fiction, then you turn around and say: actually, a good 60% of what's on Black Mirror is doable now. It's usually just the last step that the shows take: you can't embed a chip in your eyeball now that'll record your life history, but you've got an awful lot of your life history being recorded by the fact that you carry a phone with you 24/7, you know?
And I guess, from sort of the death-studies perspective: no, you can't reanimate someone, but you could build a pretty good slimline model that returns bits of database information that feel like you're talking to someone.
Elaine: Mm-hmm.
Tama: So the space between the fiction and the fact, or the fiction and what's realistically doable, isn't as wide as the most futuristic and dystopian shows suggest. But for that reason, it's really hard to have conversations where people say, aren't you just telling me about Black Mirror? And it's like, no, no, I'm telling you why Black Mirror could be three days from now.
Elaine: Yeah. I don't remember whose definition of science fiction it is, but it says that science fiction needs to be relatable: it needs to be a world you can readily recognise. And in episodes of Black Mirror, often you recognize the world fully, but then there are these differences about it. It needs to be plausible enough to relate to, but implausible enough, just beyond the horizon. And I think what people aren't realising, with the stuff we're talking about, is just how close it is, how shortly over the horizon things actually are. What you were saying about infant wearables, relying on the interface: when you're relying on that interface more, and it's giving you all that peace of mind and reassurance, there's the question of what you're relying on less. What are you not developing? What are you not trusting? And I suppose that word 'trust' is a really salient one here, because we're in this climate where surveillance is being normalized: surveillance of our own stuff, of our kids, of our kids' stuff. And it's not just being normalised; it's being lionised. It's increasingly equated with good and responsible parenting, while non-surveillance is equated with perhaps less-than-responsible or neglectful parenting.
And so I'm just aware of how these devices assuage parents' anxieties: the kind of parents in the testimonials on the Owlet website who said, I was finally able to get some sleep; I was finally able to alleviate my anxiety; I feel safe when I get these notifications. And I'm sort of curious: what have you found yourself, as a parent, getting drawn into as a means of alleviating your anxiety using technology, if anything?
Tama: Yeah, I mean, my oldest does have a mobile phone in his bag. I've gone for the super-cheap data plan, so we generally don't try and use the tracking stuff, more because it would use up the scant amount of the plan than for any other reason.
But at the same time, his bike was stolen, and he called us from the school saying: I'm gonna be here for an hour; we're looking at CCTV; I can't get home; my friend's dad's gonna drop us, 'cause his bike was stolen as well. And there was that moment of: how much harder would that have been had we not given him a phone that he could turn on in an emergency?
Now, to be fair, every other child around him also has a phone, the school has a phone, and he could have called from the school. But at the same time, there was a moment of: oh, I'm glad I made that decision; that was probably the right call after all. And yet you're right, kids have coped in those situations forever. He would've very sensibly gone and asked the school admin person to give us a call, and that would've achieved exactly the same end. And yet there was a sense of: oh, I'm glad I went to the trouble of digging out the old phone and putting a plan on it.
So it is a difficult one, because that extension of reducing anxiety and increasing peace of mind still happens, even if you know all the problems that come with the technology. As you say, you still had that relief of knowing your daughter was in the house, even if you didn't really want the technology that gave you that.
It's a bit like, you know, we kind of want the magic Weasley clock from Harry Potter. We want to know where they are. We don't really wanna invade their privacy, but we kind of want the magic to work. And unfortunately, for the magic to work, there's this whole other price, which is all the data going somewhere.
And I think that's the difficulty: that price isn't a price you necessarily would've had to pay. It's just that that's the dominant business model.
Elaine: Well, I'm really conscious of time, but I'm curious whether there's anything further you're burning to say before we wrap this podcast episode up.
Tama: I guess the stuff I've been doing recently, that I've been thinking more and more about, is that question of the interface: the mobile phone interface as the portal through which we, in inverted commas, 'do' parenting. And the question of gamification is something that starts with the sharing of images and the interface to wearables and things like that. But, and this has been much starker because of COVID, there's an interface for everything now. And if the interface were just information, that would be one thing, but almost all of them seem to gamify the experience of knowledge somehow.
So even something as simple as the traffic light system in an Owlet: red, bad; orange, you might wanna have a look; green, you're absolutely fine, look away. And the fact that it's got the base station that just glows green to reassure you: that is still a gamification of knowledge. It's giving you a reward mechanic for knowing something.
And I guess the danger, or maybe that's too big a word, the concern I have, is that that style of interface and information seems to be more and more pervasive across anything relating to the way we engage with other people, but especially the way that parents engage with kids. Whether that's through Seesaw as a primary school classroom interface, whether that's Class Dojo, whether that's any of the other platforms available for understanding kids' education, whether that's health apps, even...
I mean, as daft an example as it is, even the COVID-19 check-in apps give you that cheerful ping when you've logged into a location correctly. We are so accustomed to having the gamified experience of parenting that, for me, the map I haven't drawn, but I'm sure is there, is looking at these wearables as step one and asking: what is the gamified interface for each other step of childhood? What is the payload? What is the data being generated behind that, and how much accumulates over time? I'm really fascinated by this question of what we normalize at infancy, and the reassurance we get from that interface; we're gonna ask for that same reassurance as they get older.
And if the phone and the interface is the thing that gives us that reassurance, we're gonna keep looking for the apps and the platforms that can provide it. And the question of what's behind those, and where the data's going, I think remains secondary to the reassurance of seeing the glowy green light that says my child is essentially okay.
Elaine: Well, this is Elaine Kasket, and I have been speaking to the wonderful Professor Tama Leaver from Curtin University in Perth, Western Australia, about the datafication of childhood, the gamification of childhood, and the data trail that increasingly follows us all from digital gestation right through to digital afterlife and everywhere in between. And I am so excited that I got to have this really wide-ranging and fascinating conversation with him.
This episode has been a bit of a sneak preview of my upcoming book, hopefully out around August of 2023, provisionally entitled This is Your Life on Tech, which is a techno-psychosocial model of the human lifespan from digital gestation to digital afterlife and everywhere in between, with a look at how technology forms a third force in pretty much every relationship we have throughout our lives.
Hopefully there'll be a dedicated podcast series at that time. But for now, you've been listening to Still Spoken, a podcast about how the dead live on through us, through our stories, and through technology. If you like the podcast, please do give it lots of stars, a nice review, and a recommendation to somebody you think would like it too. Until the next podcast, thank you so much for having listened in.