The Shifting Privacy Left Podcast

S3E13: 'Building Safe AR / VR / MR / XR Technology' with Spatial Computing Pioneer Avi Bar-Zeev (XR Guild)

June 18, 2024 Debra J. Farber / Avi Bar-Zeev Season 3 Episode 13

In this episode, I had the pleasure of talking with Avi Bar-Zeev, a true tech pioneer and the Founder and President of The XR Guild. With over three decades of experience, Avi has an impressive resume, including launching Disney's Aladdin VR ride, developing Second Life's 3D worlds, co-founding Keyhole (which became Google Earth), co-inventing Microsoft's HoloLens, and contributing to the Amazon Echo Frames. The XR Guild is a nonprofit organization that promotes ethics in extended reality (XR) through mentorship, networking, and educational resources. 

Throughout our conversation, we dive into privacy concerns in augmented reality (AR), virtual reality (VR), and the metaverse, highlighting increased data misuse and manipulation risks as technology progresses. Avi shares his insights on how product and development teams can continue to be innovative while still upholding responsible, ethical standards with clear principles and guidelines to protect users' personal data. Plus, he explains the role of eye-tracking technology and why he advocates classifying its data as health data. We also discuss the challenges of anonymizing biometric data, informed consent, and the need for ethics training in all of the tech industry. 

Topics Covered

  • The top privacy and misinformation issues that Avi has noticed when it comes to AR, VR, and metaverse data
  • Why Avi advocates for classifying eye tracking data as health data 
  • The dangers of unchecked AI manipulation and why we need to be more aware and in control of our online presence 
  • The ethical considerations for experimentation in highly regulated industries
  • Whether it is possible to anonymize VR and AR data
  • Ways product and development teams can be innovative while maintaining ethics and avoiding harm 
  • AR risks vs VR risks
  • Advice and privacy principles to keep in mind for technologists who are building AR and VR systems 
  • Understanding The XR Guild 


Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

TRU Staffing Partners
Top privacy talent - when you need it, where you need it.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.


Avi Bar-Zeev:

Especially now, in the age of AI, the computer has so much of an advantage over us that we're going to have to prevent that one way or another. It's like walking into a room with the best salesman ever and they have access to our entire life history and can read our mind practically, and no normal human is going to be able to stand up to that kind of a treatment. We're all susceptible to that kind of manipulation, so let's be clear about that and avoid it. And I think the thing we should be trying to help each other with as much as possible is sharing the information, sharing the data. It's for everybody's benefit to figure out where we messed up in the past and be honest about it and open so that other people don't have to make those same mistakes. Let's try to foster that as teams and make it available.

Debra J Farber:

Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems. Welcome everyone to The Shifting Privacy Left Podcast. I'm your host and resident privacy guru, Debra J Farber.

Debra J Farber:

Today, I am delighted to welcome my next guest, Avi Bar-Zeev, a true tech pioneer who's been at the forefront of spatial computing for over 30 years. He first launched Disney's groundbreaking Aladdin VR ride in the early 90s. He crafted Second Life's 3D worlds and, around 2001, co-founded Keyhole, a Mirror Earth 3D browser that became Google Earth. He co-invented Microsoft's HoloLens in 2010, helped found Amazon's Echo Frames in 2015, and then contributed at Apple on some undisclosed projects. Most recently, he's the founder and president of the XR Guild, a new nonprofit membership organization that seeks to support and educate professionals in XR on ethics and the most positive outcomes for humanity. He's also a Member of Magic Leap's Board of Directors.

Debra J Farber:

Avi, welcome! It is a delight to have you here on the show.

Avi Bar-Zeev:

Thank you. Thank you. It's great to be here. I could add one thing. You said undisclosed projects. I think I wrote that bio when I still couldn't talk about the Apple Vision Pro, but at least one of the things I worked on at Apple was helping on the Apple Vision Pro, so I'm happy that I can finally talk about it.

Debra J Farber:

Well, that's exciting, that's really awesome. I mean, I think that there's a lot of people, a lot of touch points, those technophiles in the audience that have checked out the Apple Vision Pro. It's just kind of at the forefront of what's current right now in VR, so that's pretty freaking cool. You know, you have such a deep background in this space and I want to ask you what you've seen over the years, but maybe it's first easier to just talk about - when it comes to AR / VR / metaverse, what privacy issues are top of mind for you? And then, how did you get there over the course of 30 years? How did you come to realize that those are the top issues?

Avi Bar-Zeev:

Yeah, 30 years is a long time to collect a lot of mistakes and a lot of bad things that have happened over that time, and I'd say 30 years ago I would have sounded very much like any average metaverse enthusiast: this is the future, we're all going to be living in online worlds. I actually thought in 1992 that, forget 2D, we're not going to be doing these 2D web pages; everything's going to be 3D and immersive. That's got to be the way it goes, because all the literature pointed in that direction. And no, 2D was actually pretty good, and there were a lot of problems even with that that we haven't solved in 30 years.

Avi Bar-Zeev:

And so I've spent all that time looking and finding all these mistakes that we've collectively made and realizing, man, we should have done better, we can do better, and in the future we have to do better because we don't want to let down our customers.

Avi Bar-Zeev:

We really want to do a good job at this. So, anyway, it's just a collection of that and I've just learned over the time. I'm not a lawyer, I was never a privacy expert, but I've become fairly close to being both at this point after having experienced all the tragedies that we've had over the years in this area, and what I'm worried about with the future is these technologies are now with XR, spatial computing. They're so much more powerful than the things we've dealt with in the past that the benefits are magnified, but the harms are also magnified, and if I had to guess, I'd say at least 10x, but could be even a lot more, based on how impactful these technologies can be to our perception and our emotion. The chances for manipulation and exploitation are just through the roof, so we have to be even more careful.

Debra J Farber:

Thanks for that. Are there any specific privacy issues? Obviously, we want our products to be safe, and there are security issues, there are misinformation issues. But what about the fact that we're collecting all of this data? These experiences require collecting a lot of data about the person in order to make these 3D worlds work. So, what end up becoming the potential harms from collecting this data, and what's unique to the space?

Avi Bar-Zeev:

That's a great point. For many of these experiences to work, for them to have the benefits, we do need to collect data. I'm the kind of person who would love to work on UX and algorithms that are beneficial to people. But I know that if other companies do the same thing, but poorly, it's going to impact all of us, and those things might wind up going bad.

Avi Bar-Zeev:

This whole notion of privacy, I mean this may not be new to people who listen to your podcast at all, but when people talk to me and say privacy is dead, all the things that they like to cite, I go back and say: just look at the Third Amendment. Think about the Constitution of the United States and go back to this period in time when people who lived in the colonies were being forced to have soldiers quartered in their houses. Right, the whole amendment sounds like it's completely outdated; no one's asking us to put soldiers in our houses. But the reason that's there as an amendment is because if the soldiers were living in your house, then the soldiers would learn, if the person was a rebel, what their politics were; they'd learn who they associated with, they'd learn how they think, they'd learn their relationships. They'd learn all your weaknesses and your pain points so that you could be manipulated. And so they rightly said no, we can't have that. I wish that amendment had been updated to include companies, corporations and the internet and things that they couldn't have imagined yet, but that's why it was there, and we're still living with this.

Avi Bar-Zeev:

There are people that want to use our data to exploit us and take advantage of us, and I think we have to figure out how to enable the good uses and stop the bad ones. And that is not just about privacy. Like you said, security comes into that as well, but it requires all of us to be so careful when we do these things and to think so hard about what can go wrong, just like the colonists didn't think about what could go wrong after soldiers were no longer living in your house. There's still plenty of other harms.

Avi Bar-Zeev:

That's what I've spent a lot of time thinking about, and in the case of XR specifically, there are technologies here that are equivalent to mind reading. So in some sense, it's like having the government or a company living in your mind, learning things about you, and those things can be used for your benefit and they could be used against you. The thing I perpetually bang my head against the wall about is why we have so much trouble telling the difference between experiences that are working for you and there to benefit you, and those that are there for someone else's profit, so they're profiting off of your experience. We should be able to tell the difference between those things, but for some reason we have trouble defining laws, regulations and policies that differentiate those, and that's one of the things I've spent a lot of time trying to focus on.

Debra J Farber:

That's fascinating, and I have to say, in my 19 years of working in privacy directly, I have never thought about the Third Amendment as a privacy risk, and you're absolutely right. You just kind of have me noodling on that today, so thanks for expanding my mind there. I'd love to hear a little more about why that is, and maybe this is a good segue into the next question, because I have watched some of your other podcasts and talks where you talk about classifying eye tracking data as health data. Can you explain not only why, but also what it is about eye tracking data that is so different from just collecting data that's known about a person? Specifically, I'm talking about how you've described the way eye tracking enables you to change things in a 3D experience without the person in the experience knowing, or detecting, that a change has happened, because the technique makes it seem seamless by tricking the person based on their eye tracking.

Avi Bar-Zeev:

Yeah. So, to get real concrete, here's an example of a technology that would probably be beneficial, but it also highlights the danger. There's a thing in VR that was invented called redirected walking. Just imagine you're in a room; let's say the room is 10 feet by 10 feet. You can't really walk straight. Even if you went diagonally, you could only go about 13 feet, right? You really can't walk in a room like that without hitting furniture and walls. But with redirected walking, in a slightly larger room, you could theoretically be walking in circles, but it looks like you're walking in a straight line. For you, in the headset, you see yourself as walking down a road, straight, for miles and miles and miles, when what you're actually doing is walking in circles in the room. You're reusing the same space, and the technology can trick you into thinking that you're walking straight by progressively rotating the world. As you walk, it just rotates the world a couple of degrees, you start turning to compensate, and you think you're going straight. But how could you rotate the world without anyone noticing? It seems like everybody would notice and they'd get dizzy. But it turns out, whenever we blink or whenever we look away, and our eyes are constantly darting around as we try to make sense of the world around us, constantly looking everywhere and trying to fill in information, those are opportunities to change the world, because we are actually blind during those brief periods. There are a couple of videos online that you can find that show that when our eyes are in what they call saccadic movement, when they dart around, or when we blink, our brain is telling us that we see the world, but it's not actually seeing the world. You can change things, and we just don't pay attention. There are great experiments where an interviewer is talking to somebody on the street, and then hired hands come along with a billboard and get in the way of the two people talking. Then you swap out the interviewer with a completely different person, different race, different gender; it didn't matter. The person being interviewed, as soon as they can see them again, just continues talking as if nothing has happened. They don't have the continuity of mind to know that the interviewer got swapped out when they couldn't see them, because they didn't pay attention. Those are details that we often just don't pay attention to.
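
For readers who want to picture the mechanism, here is a minimal, illustrative Python sketch (not from the episode, and not any shipping headset's API) of how a redirected-walking loop might apply a tiny rotation gain only while the wearer is blinking or mid-saccade. The thresholds, field names, and functions are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class EyeSample:
    is_blinking: bool
    angular_velocity_deg_s: float  # eye rotation speed, degrees per second

# Hypothetical thresholds; real systems tune these per user and per study.
SACCADE_VELOCITY_DEG_S = 180.0   # above this, vision is effectively suppressed
MAX_GAIN_DEG_PER_UPDATE = 0.5    # world rotation small enough to go unnoticed

def perceptually_blind(eye: EyeSample) -> bool:
    """True while the user is blinking or mid-saccade (the change-blindness window)."""
    return eye.is_blinking or eye.angular_velocity_deg_s > SACCADE_VELOCITY_DEG_S

def redirect_step(world_yaw_deg: float, eye: EyeSample,
                  user_heading_deg: float, desired_heading_deg: float) -> float:
    """Nudge the virtual world's yaw toward the desired heading, but only
    during moments when the wearer cannot perceive the change."""
    if not perceptually_blind(eye):
        return world_yaw_deg  # no visible rotation while the user can see
    # Shortest signed angular error, wrapped to [-180, 180).
    error = (desired_heading_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    correction = max(-MAX_GAIN_DEG_PER_UPDATE, min(MAX_GAIN_DEG_PER_UPDATE, error))
    return (world_yaw_deg + correction) % 360.0
```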

Avi Bar-Zeev:

It happens all the time, and we're all vulnerable to this. This is the reason that illusions and magic are possible. So, think about the danger. We might go to a magic show to be entertained, and the magician might steal our watch. But now we're dealing with people who are figuratively stealing our watches and not giving them back at the end of the show. It's not for entertainment purposes; it's for them to make money. They're using the same tricks that allow us to be surprised by sleight of hand, by just being distracted and looking somewhere else. Those same tricks can be used against us to actually make us more susceptible to advertising, for example. It's not hard to imagine how this will work, because it's already working today in a crude form. It's already there; it just doesn't work well yet. But when it does, look out, because we already know very well how to manipulate people emotionally. You'll see commercials that pull on the heartstrings. What you really have to do, if you really want to sell things to people, is get them worked up emotionally. There was a case in Africa where an entire country's young people were convinced not to vote, and a certain regime was elected as a result of it. So we're all subject to these manipulations, but the more data they have about us, and the more experiments they can run, live experiments, the more susceptible we are to it. And now I think it's time to talk about it, because we have to put a stop to it before these things become significantly entrenched in the money-making machine, in the same way that tech and social media have become entrenched already.

Debra J Farber:

Yeah, that's so much information that you just shared, and it raises so many questions about this potentially ill future. So the first is: I know that you've advocated for eye tracking data being classified as health data. In the US, that might be covered under HIPAA; in the EU, the GDPR. Are you saying it's a biometric, or that it's so related to how our bodies work? Are we able to pick Debra Farber out and basically say, oh, this behavior is Debra Farber's behavior, so it's therefore a biometric? Or is it not a biometric on its own, but it becomes one when you combine lots of those signals?

Avi Bar-Zeev:

All of the above. I'll try to list out some of those things. So, first of all, in just a very literal sense, it's health data because it can be used to diagnose certain conditions. We've already shown that you can diagnose autism, Parkinson's, concussions, potentially ADHD. There is a whole variety of conditions; I don't want to say diseases, because not all of them are, and some people may argue they're just ways of being, but in general, things that somebody might care about. Who cares what your diagnosis is? Well, insurance companies care. The insurance companies might increase your rates if they know that you are predisposed to a certain condition that might cost more money. Your employers might care. They may discriminate based on some of these things, and in some cases it is legal to discriminate, in some cases it isn't; not everything's covered under, for example, the Americans with Disabilities Act. And the government might want to know as well. These things, this piece of information, may be very private. So that's, just right off the top, the reason why the raw eye tracking data, and the derived data, if anybody's run the algorithms to try to see what our diagnosis is, should be covered as health data. The raw data, because it has the potential to be used for diagnosis, should be covered too. But now, even more than that, I don't know if HIPAA is always the right answer, because that would cause a whole bunch of other bureaucracy to be invoked, but the core notion is that we individually should be in control of our own data.

Avi Bar-Zeev:

However the regime is implemented, whether it's HIPAA or something like it, we first have to have informed consent, truly informed consent, when it comes to uses of the data. And let's be really clear that EULAs and terms of service, what lawyers call contracts of adhesion, right, are things that you didn't really agree to. All these EULA-like things are neither informed nor consent, because the lawyers know that nobody reads them. So they're not informed, and they're not consent, because you already bought the product and you're already in the experience before you click yes, so it's just pro forma. People are just clicking yes to get into the product. Nobody's really agreeing to anything. Nobody really understands the risks; even the professionals don't fully understand the risks. So let's just give up on the idea that we somehow have true informed consent here. But we have to have it. If your information is going to be used, you've got to know where it's going. You have to be able to revoke it if you don't like the way it's being used, and then there has to be accountability across that whole spectrum. So that's very HIPAA-like, but it doesn't have to be the exact HIPAA law that covers it, although, like I said, some of this is real health data, so it would overlap.

Avi Bar-Zeev:

But what else can be done with eye tracking data? We can get at your state of mind. People like to look at brain-computer interfaces, and one of the books I read recently talks a lot about brain-computer interfaces. I think we're still a little bit of a ways away from that; it's not immediate. Brain-computer interfaces will be there first for people who have physical disabilities, useful for people who might be paralyzed, but they're not there in a general way yet. We're still a little ways out. But eye tracking is already here. Eye tracking is here today, and it can work like mind reading.

Avi Bar-Zeev:

Think about it this way: the only part of your central nervous system that is exposed to light is your eyes. Your eyes are actually part of your brain. We're not seeing your synapses, but we're seeing, effectively, a part of your brain, your central nervous system, when we look at your eyes, and as a result of the way the eyes move, the way you blink, things like that, we can essentially infer things about your mental state and what you're paying attention to. For example, if you wanted to tell who somebody is attracted to, you could look at their eyes and tell by the kind of glances they might steal and where on the body they may look at another person. Do they make eye contact? Do they look at other body parts? Those are things that will make it clear whether or not this is a person that you're attracted to. And even the fact that you might look away shyly is an indicator that you might be attracted to them. So it's not simply when you stare, but how you look. Also, things that you're interested in.

Avi Bar-Zeev:

Pupils will dilate. They dilate to regulate the amount of light, right? They're like camera irises, in that they control the amount of photons that get into our eyes, to our retinas. But when you control for that, they also dilate when you get excited about things. So if you were to tell me about something coming up that was interesting, my pupils would dilate naturally because I'd be excited. So it's a good signal to tell if people are interested or disinterested. Do you think the advertisers want that? Of course they do. They want to know, when they show you that car or that can of soda, whether you're interested in it or not. So they're going to want to see that data, to know in advance.

Avi Bar-Zeev:

And now, if you put people in a tight loop, like what we call a Skinner box, right, where you do stimulus and response and you keep changing the experiment as you go, then in a matter of minutes you could imagine the system showing me a hundred different cars to figure out which one I looked at in a way that indicates I like that car. And I won't even notice that they're changing the car, because of the thing we said before: when we blink and when we look away, we don't notice subtle changes in the world.

Avi Bar-Zeev:

And so therefore, they could have cycled through a hundred different cars in one spot in my world and figured out which one I responded to, and then they're like: oh, that's the one he likes. And they could do the same thing with people, figure out which ones I'm attracted to and which ones I'm not, and at the end of the day, you can imagine some subversive advertiser making an advertisement that takes the car that I like and the people that I like and puts them together, especially if they're people I know, like my best friend and somebody I might have a crush on, or whatever. Put them in the car, build a commercial around that, and I'm driving off into the sunset. You know that's going to push my buttons. It's going to push everybody's buttons when they see the things that they care about.
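
To make the stimulus-and-response loop Avi describes concrete, here is a hypothetical Python sketch (not from the episode or any real product) that swaps candidate items into the scene during blinks and scores interest from dwell time and light-corrected pupil dilation. Every function name, callback, and threshold here is an assumption for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class GazeWindow:
    fixation_seconds: float       # how long the user dwelt on the swapped-in item
    pupil_diameter_mm: float      # measured pupil diameter during the window
    luminance_baseline_mm: float  # diameter expected from scene brightness alone

def interest_score(g: GazeWindow) -> float:
    """Crude interest estimate: dwell time plus dilation beyond what light explains."""
    arousal = max(0.0, g.pupil_diameter_mm - g.luminance_baseline_mm)
    return g.fixation_seconds + 2.0 * arousal

def rank_by_gaze(candidates: List[str],
                 swap_in_during_blink: Callable[[str], None],
                 observe_gaze: Callable[[], GazeWindow]) -> List[str]:
    """Swap each candidate into the scene while the user is 'blind' (blink or
    saccade), record the gaze response, and rank candidates by inferred interest."""
    scores: Dict[str, float] = {}
    for item in candidates:
        swap_in_during_blink(item)          # hypothetical scene-edit hook
        scores[item] = interest_score(observe_gaze())
    return sorted(scores, key=scores.get, reverse=True)
```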

Avi Bar-Zeev:

And so we're very close to a world in which that is real. The simplest form of this is just going to be, well, we've all seen generative AI at this point, and diffusion models: how hard is it going to be to replace that can of Coke in your hand with a can of Pepsi, right? So your social media feed, in the near future, could very easily, unless we say no, become something in which we're the advertisement to our friends and family. We're the product placement, and we have no control over how that's used. So the technology is already there. It's just a matter of people being clever enough to exploit it.

Debra J Farber:

Yeah, definitely. Amazing thought experiments there, and you can connect the dots to see how that could be used. It's interesting: the last episode that I published last week was around embedding privacy into A/B testing. Let's talk about experimentation for a second. We didn't unpack what I'm about to talk about last week, but I think you're the perfect person to talk about it with. When we design products and services for safety in what we call highly regulated industries, whether it's, let's say, transportation or even healthcare, real mission-critical or sensitive data, we make sure that we're not experimenting with, like, "do we think this will fit on the engine?" We test everything. We're very, very careful.

Debra J Farber:

We have protocols, and then if we're going to experiment with something, especially on people, like in healthcare (and not to just keep bringing up HIPAA; it's only coming up because of healthcare), there's a concept for research and experimentation where an institutional review board will take a look at the potential harms, maybe do a little threat modeling, and then you can be a little looser with the data there, too.

Debra J Farber:

You have maybe broader data sets you can work with, as long as you agree not to try to re-identify people, to allow for some innovation there; but there's usually a group of people who are stewards on behalf of human beings, almost an extension of the doctor-patient confidentiality social contract. Then we see stuff like LLMs, and how OpenAI just rolled out experiments on people, and how we've been doing this with A/B testing and advertising, and the technology is getting more and more towards analyzing sensitive data and manipulating people. Is that an answer? Do we need institutional review boards? Is that something we can't do in a decentralized way? Do we need laws? How can we best address these experiments, at least that piece of the manipulation and privacy problem?

Avi Bar-Zeev:

No, that's a great series of questions. Just to go back to basics for a second, because you mentioned this happens in medicine, right? If you got educated as a doctor, you had courses on ethics. There's a Hippocratic Oath, and you learned the Hippocratic Oath, and essentially it boils down to "first, do no harm," but there's a lot of detail in there.

Avi Bar-Zeev:

If you go read it, there are things that doctors learn that you should or shouldn't do, and experimenting on humans is one of those things. Experimenting on children is even more restricted, because children can't give you their informed consent. So science progresses fairly slowly when it comes to diseases around children, because you want to give them the treatment; you don't want to give a child the placebo when you start doing these medical experiments, right? So you have to think about the ethics of these situations. So doctors get ethical training. Who else gets ethical training? Journalists get ethical training. Lawyers get ethical training, even though their ethics may be a little different in terms of defending somebody you know is guilty; you wouldn't want a doctor taking that same approach. But lawyers have a set of ethics. Civil engineers have it around professional engineering: bridges shouldn't fall down. Computer science? It's completely lacking. I had no ethics training in school whatsoever. The closest thing I had was philosophy, and that was not about ethics; that was about life and the universe and all that stuff.

Avi Bar-Zeev:

What we need to do is start training our students in computer science, especially in AI, but also in XR and spatial computing, because it's probably a close second in terms of the risks and the harms that can happen. Like in civil engineering, if your software fails, it's like the bridge falling down; people get hurt. There's liability that comes with that. And because there's liability, there needs to be training. There need to be things that we do so that we can at least say we followed the rules that we knew were the right rules, even though the outcome failed; we have less liability because the process was done correctly and it was something we couldn't control that failed. But if we didn't have a process in the first place, of course we should be liable, if we're going to just throw things out there and not care about the harms that can happen, which has been the motto of certain companies, right? It has literally been: do it and see what happens. That's what we need to address really, really carefully here. So I think that's the argument: we should first understand ethics, what is ethical and what isn't, and then, you're right, the IRB flows from that.

Avi Bar-Zeev:

Once you have ethics, one of the realizations that you make when you study ethics is: I can't be my own judge, jury, and executioner, because I'm always going to give myself a pass. I can't be the one that evaluates my software. I've always hired UX researchers to come in and tell me where my stuff sucks, because I'm not the best person to judge it. They watch people use it, learn, and then tell me where my stuff is broken. I should be good at that, but nobody's good at that. Everybody needs a fresh set of eyes to look at it.

Avi Bar-Zeev:

So we need these IRBs made up of people whose salary doesn't depend on the answer aligning with what the company wants to do. That's the key to an IRB: they have to be free to say, no, that's a bad idea. And the problem is that, as an employee of a big company... I forget who said the quote.

Avi Bar-Zeev:

No person likes to criticize something that would undermine their salary, right? I'm kind of paraphrasing, but it's true for all of us. I don't criticize companies that I own a lot of stock in; just naturally, I want them to do well. I'm not going to go out and criticize them every day, so that's just normal. And IRBs are a great way to approach it, and I think ultimately we'll probably see something in computer science like what doctors have, which is maybe licensing, but at the very least boards where you can take issues, where you can say these people really didn't think this through and they did a lot of harm. And it isn't just that you have to appeal to the government or file a civil suit; there might be groups within the industry that try to keep it within the industry but also sanction people who are doing harm. Doctors have that, and journalists have that, and lawyers have that. You know, you could be disbarred, right? That's not a government function; it's a function of the lawyers themselves.

Debra J Farber:

Indeed. Yeah, it'll be interesting to see how that shapes up, but we do know that that's going to take time, because we're not even close to that right now, so a lot of damage can be done. Let's turn to maybe some protections, right? So what about anonymization? Is making data anonymous enough to protect people and their personal data? And is it even possible to anonymize VR and AR data to the point where it's still usable but considered anonymous?

Avi Bar-Zeev:

Yeah, I think anonymity is a lie; I'll just be real blunt about it. I think it made some sense in the past that you could hide certain details from, let's say, a patient's file. You could take the name off and put a code in instead of the name, and that data was not going to be identifiable because you didn't know where they lived and maybe you only knew their age. But with the size of the datasets that we have now, nobody can really be anonymous. We first saw that years ago, when AOL put out their search logs and people were able to reverse engineer who was who from the queries they had made to AOL. AOL thought they had anonymized it; they pulled out the identifiers, but people were able to reverse engineer it by process of elimination and figure out who was who. And when it comes to VR, researchers at Berkeley have already shown that, given just a few minutes of recording of your body motion, it's unique enough that you could be re-identified in seconds. So if a company has been recording you for the last year or two as you use their VR headsets, and if they kept that data and they don't delete it (some friends who are employees there say they don't keep it, but they don't promise not to keep it, so who knows), the fact is they could re-identify you even if they said you're anonymous. So I don't trust anything that says we're going to anonymize you if it includes any stream of biometric data.
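
To illustrate why "anonymized" motion streams can still identify people, here is a toy re-identification sketch in Python. It is not the Berkeley researchers' method, which uses far richer features and models; it is just a nearest-neighbor comparison over simple summary statistics of a motion recording, with all names and shapes assumed for illustration.

```python
import numpy as np

def motion_features(recording: np.ndarray) -> np.ndarray:
    """Collapse a (T, channels) stream of head/hand positions into a small
    per-user signature: per-channel mean, spread, and average frame-to-frame speed."""
    speed = np.abs(np.diff(recording, axis=0)).mean(axis=0)
    return np.concatenate([recording.mean(axis=0), recording.std(axis=0), speed])

def reidentify(anonymous_recording: np.ndarray,
               enrolled: dict) -> str:
    """Match an 'anonymous' recording to the closest previously captured user,
    where `enrolled` maps user IDs to earlier (T, channels) recordings."""
    query = motion_features(anonymous_recording)
    best_user, best_dist = None, float("inf")
    for user_id, prior_recording in enrolled.items():
        dist = float(np.linalg.norm(query - motion_features(prior_recording)))
        if dist < best_dist:
            best_user, best_dist = user_id, dist
    return best_user
```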

Avi Bar-Zeev:

Your eyes are very individual in the way they look. So many parts of the eyes are unique that they can be used for better-than-fingerprint-level identification. Iris ID on the Apple Vision Pro is pretty rock solid. I'm not going to say exactly how it works, but you can imagine the things that are different about everyone's eyes: the irises, the retinas, the sclera, the blood vessels on the eyes are different in every person, and it's way easier to tell the differences between people than it is with fingerprints.

Avi Bar-Zeev:

Fingerprints actually have a lot more in common, and you never get a full fingerprint; it's always partial. In any event, this stuff is very revealing. It's crazy; even just our walk, our walk cycle, the way we move our bodies, is revealing. So anybody who thinks they're going to use anonymity as a tool to say, hey, this is totally private because it's anonymous: no, don't trust it. And some of these devices, I won't even use them, because until they promise me that they're going to delete that data, I don't want to give them that head start on being able to collect something about me that they could use to re-identify me later. I'm just like, no, I'll use something else. There are plenty of other options.

Debra J Farber:

Right, we think about these things, right? I worry about the public, who don't even know what questions to ask, to be worried, or to make the decision to give...

Avi Bar-Zeev:

They're putting their kids in these things. Those kids could have a lifetime of no anonymity; things from when they were 10 to 13 years old are going to be able to be used later to figure out who they were. And that stuff could be used in a good way, right? The company could say: hey, we realized that the person who's using the headset now is only 70% of the height of the person who bought it, their arms are shorter, everything is shorter, and they move like a child according to our stats, so maybe we should put the parental controls on. Even if they're pretending to be the parent, even if somehow they got the parent's password: no, sorry, you're not identified as the parent. So the positive of Iris ID or other biometric markers is that the device can be locked to us in a way that's safe, so that only we get to see our own data, and kids can be protected from things that they're too young to see.
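
As a rough sketch of the kind of heuristic Avi describes, here is hypothetical Python code that flags a session for parental controls when the current wearer's body proportions fall well below the enrolled adult's. The fields and threshold are invented for illustration; a real system would combine many signals, including a proper enrolled-user check such as an iris match.

```python
from dataclasses import dataclass

@dataclass
class BodyEstimate:
    standing_height_m: float
    arm_span_m: float

def likely_child_wearer(current: BodyEstimate, enrolled_adult: BodyEstimate,
                        ratio_threshold: float = 0.8) -> bool:
    """Flag the session for parental controls when the current wearer's proportions
    are well below the enrolled adult's, regardless of which account is logged in."""
    height_ratio = current.standing_height_m / enrolled_adult.standing_height_m
    span_ratio = current.arm_span_m / enrolled_adult.arm_span_m
    return min(height_ratio, span_ratio) < ratio_threshold

# Example: a wearer at roughly 70% of the owner's height triggers the heuristic.
owner = BodyEstimate(standing_height_m=1.75, arm_span_m=1.78)
wearer = BodyEstimate(standing_height_m=1.22, arm_span_m=1.20)
assert likely_child_wearer(wearer, owner)
```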

Debra J Farber:

For some reason, the same companies who are not protecting our data are also not identifying the children using them. That then brings us to how can product and development teams foster innovation in the space but at the same time minimize harm right? How do you strike that balance between the good use cases, the ethical use cases, and the ones that are like really scary, that can cause harm to individuals?

Avi Bar-Zeev:

The most important thing is to take the time in the development process to think about the answers. I think we're all capable of it. I mean, I'm glad there are people who are privacy professionals, who study this and learn and can teach other people. But ultimately all the engineers, all the designers, everybody working on the product needs to become as well-versed in these issues so that they know how to discuss them. They know how to ask the question: if we choose A or we choose B, what are the harms and benefits of either choice? And then make the right choice, even in the face of the boss saying "get it done." The pointy-haired boss may be sitting there going, I don't care, A or B, I just want it to ship tomorrow, and you're left with the decision of what's the right thing to do. We should have enough collected data, that ammunition, about things that have been done in the past, the mistakes that we've made in the past. If those were available to us, we'd go back to the boss saying, hey, this other team on this other project chose A in the past and it really blew up on them; they didn't do it right, so we should choose B. We know B is a better option; even though it seems like it might be more expensive, it's actually cheaper in the long run. Where is that data? Where do we go for that information, the case studies and the postmortems and the experience that people developed over 30 or 40 years of what works and what doesn't work? It can't just be tribal knowledge, right, and it can't just be a few people who have that information, and nobody's ever going to be able to publish it in the form of a book of "you must do exactly this," because every product is different, every situation is different, and so much of it is cutting edge.

Avi Bar-Zeev:

What we shouldn't be doing is making the same mistakes. If we have to make mistakes, and we will make mistakes, they should always be new mistakes. I like to say, you know, don't make the same mistake twice; always bigger and better, let's fail up, but let's not keep repeating the bad things that people have done for years. And the best example of that is with the metaverse. Holy cow, you have people who built new metaverses in the last five years that made the same mistakes that massively multiplayer games have made for 30 years. Did they not play these games? Did they not talk to the designers and engineers on those games to figure out what goes wrong in terms of harassment and griefing and safety?

Avi Bar-Zeev:

We should not make that mistake ever again. Part of the problem is people don't listen; they think that this time is different, and it's an ego thing in some cases. The realization has to be: look, we're the ones building these things, we're all the makers of these things. We may be the decision makers, we may be designers, we may be engineers, but all of our names are going on that thing and we want to be proud of what we build. We want it to not hurt people. We care. I don't know anybody in the field who doesn't care about the result of their work. Right, they all care. They just don't all have the information. So let's spread the information, let's make sure it's available, and let everybody do their best job at making the best possible products.

Debra J Farber:

I think that makes a lot of sense. So far, I'm hearing we need a lot more education for developers and product folks, and conversations with people like yourself, or listening to videos and talks and such, where we can get that knowledge of what has gone wrong in the past so we don't keep repeating it in these tech cycles, where companies get funded and keep making the same mistakes, but at larger scale. Now, before we go on, how do we think differently about AR and its set of risks versus VR and its set of risks?

Avi Bar-Zeev:

That's a good question. I tend to argue there's not a lot of difference between AR and VR technologically, and the Apple Vision Pro is a great example of that. Apple wants to call it spatial computing, but if you peel that back, you have a dial. You can go to fully real world with very little virtual stuff; even though you're seeing things through a display and cameras, so it's all virtualized to some degree, you're seeing the real world one-to-one, and there's a lot of effort that went into making that really good, making it one-to-one. But you turn the dial all the other direction and now you're in Joshua Tree National Park, completely immersed, all around you. So AR and VR are in the same exact device; you can be at either end of the spectrum. But there's a functional difference, not as much of a technological difference.

Avi Bar-Zeev:

And the functional difference is: AR is best used for things that are related to you and your life, things that are about the here and now, is the way I like to say it. So anything that is about improving the quality of your work, your social interactions, your cooking, looking at the fridge and seeing what you could make for dinner, those are all AR-type experiences. They're all based in your reality and your daily life. And think about how much time any of us spends immersed in our daily lives versus not. I'd say it's probably 90% to 10%. We spend 90% of our daily lives doing things that are just pragmatic, like talking to people, socializing, working, whatever, and 10% on entertainment and escape: watching a movie, playing a game. Some people do more than 10%, but I think on average it's about a 90/10 split, and so VR, at most, is probably ever going to be, on average, 10% of our time, and we use it for escapism as well. It's really useful for doing things that we can't do in real life, going places that don't really exist or places that we can't easily get to. And think about the kind of AR experience we would have if we were both wearing third-generation Apple Vision Pros.

Avi Bar-Zeev:

We would be doing this, and you would see me in your office and I would see you in my office or living room or wherever you happen to be, and we would feel like we're in the same space together. The goal is that we feel the connection of being able to make eye contact, and all the social cues work normally, and we're in the same space together, but we're each in our own space. We don't have to travel anywhere; we've just literally invited you over for a talk. That's the goal, and the benefit is that it's going to reduce travel a lot. You won't have to commute, you won't have to fly as much, right? That's going to reduce pollution quite a lot. Even though these devices also use energy, and that's not free, it's a lot less than an airplane; it's a lot less than a car. That's the future. But AR is much more of that socializing and talking to people, and VR is much more of the escapism.

Avi Bar-Zeev:

Now, when I'm 99% immersed in my own house, there's not a lot you can change about the world, but you can add subtle things to it. A good example is: let's say we were in a meeting room and you didn't want to have a real clock, but you wanted to let people know when the meeting was ending. Well, we could just start to fade the color to sunset, right? The walls start becoming more reddish as the meeting gets toward its end. So there's a real subtle cue added to reality. You don't have to change a lot; you can do very minimal things to give us the important information that we need.

Avi Bar-Zeev:

In VR, it's going to tend to be more overpowering. It's going to tend to be you against zombies, you going to the moon or the top of the Eiffel Tower, things that are much more dramatic. But there's a lot more opportunity for really changing the world around us. And that thing I talked about, cycling through a hundred different cars, would work much better in VR, because in AR there's an actual car on the actual street, and it's going to be a little while before you could seamlessly replace that car with another car and no one would notice, right? You'd have to erase part of reality if you were going to put a smaller car in the place of a bigger car. So that stuff is not going to work as well in AR. But in VR, you know, all bets are off.

Avi Bar-Zeev:

Like with the redirected walking I talked about earlier, you could really change somebody's world dramatically, and it brings up all these issues of identity and harassment and griefing. None of those things really come into play, they're not hard issues, when you're doing AR interactions like you coming into my house: you're being yourself, you're visiting me. There aren't a lot of safety concerns, because we know each other well enough to have invited the other person over. But when you start talking about the metaverse, whatever that is, now you're talking about the mall or the world, which contains all the good and bad, good behavior and bad behavior. There are no police in the metaverse yet, there are no rules in the metaverse yet; it's the Wild West, and so it's inherently unsafe.

Avi Bar-Zeev:

I wouldn't send children into that kind of situation without parental supervision anytime soon, because you just don't know what is going to be going on there. It's like I wouldn't send my 14-year-old to the mall at this point by themselves. Even a mall with police in it, I probably wouldn't, because who knows what's going to happen? At 16 or 17, sure, why not? But at 13 or 14, I'd still have some qualms about it. At 10, certainly not. So this is what we're dealing with, and yet it's happening all the time. Parents are strapping HMDs to their kids' heads at age 10, and we don't even know where the kids are going and what they're doing, and I think we have to pay a little more attention to that.

Debra J Farber:

Yeah, that is a really great point, and thanks for distinguishing virtual reality from augmented reality and how you think about those. Given that, do you have any advice for privacy technologists and just technologists generally who are building AR and VR systems, especially when it comes to privacy? Maybe some just practical guidance principles to keep in mind?

Avi Bar-Zeev:

I think just the fact of having principles is important. I think every project should probably start out with them. You know, at Disney, whenever they started a new movie, they had a book they called the Bible, which is not the religious text, but it was: when we say purple, this is the purple, the exact color purple we need, so the character is always going to be the same color purple and everybody gets the same guidance. So the Bible for any project should include: well, here's what we believe, this is what we say is right and wrong, and this is what we're going to stick to.

Avi Bar-Zeev:

Ideally, that flows through the entire product lifecycle, and the customers are informed of what that means too. So the customers wind up understanding the principles: these are the things we promise and these are the things we're going to stick to, these are the things we're going to fix if we mess them up, and these are other things that we didn't necessarily promise. So if a company is going to advertise to us and use ad tech, let's be clear: that's how they're making their money. Tell us upfront. Be clear: you're not our customer. You are essentially the chips in the casino. The real customers are the advertisers, and they're betting on you; they either make money when you buy their product or lose money when you don't. And the house, the companies that are ad tech companies, needs to be really clear that they are the casino that's going to make money no matter what people bet, and we're the chips. The humans are the chips in this case; we're the ones being bet on, we're the cards, or whatever you want to say. Let's just be honest about that, so people can make their choice as to whether they really want to be a part of it or not. And if you're doing other kinds of products that don't involve ad tech, which I tend to be negative about, let's tell people about the benefits. Let's espouse that, you know?

Avi Bar-Zeev:

Look at a product like Second Life. I worked on it. I didn't really understand the economics of it very well; Philip Rosedale understood it way better than I did back then. But I've listened to him a lot more since, and it turns out Second Life made more money per user than Facebook does, without any advertising. They made more money by just building a world and letting people build stuff and trade stuff, and they didn't have any ads, and it was much more lucrative. So, I don't know, there's got to be a way. There have got to be other business models out there that people could find that really support the work, and I would encourage people to get creative and figure those out, and not just be lazy and pick the ones that make a little bit of money but aren't the best.

Avi Bar-Zeev:

I think this won't be the choice of the ad tech companies, but one of the things we're going to find is that it'll be necessary, at the end of the day, to figure out a way to firewall the advertising from the personal data. I think that's probably the key to making it survivable. There was this thing called the Glass-Steagall Act that separated the banks from Wall Street. The banks were conservative; they weren't allowed to bet with your money, they had to use it in conservative ways. And then there was Wall Street, where you could take crazy bets on things and make billions of dollars, and they were kept separate because we knew one was safe and one wasn't. Then we got rid of that, and then we had 2008. We saw the consequence of getting rid of that separation, and we haven't learned that lesson yet. But I think we're going to need that kind of separation between the personal data and the advertising, which is technically a First Amendment thing, right? I mean, advertising is protected by our Constitution.

Avi Bar-Zeev:

No one's saying ban advertising, right? It's a company's right to put out there why they think we should buy their product. That's all fine. But should they be able to use our personal, private data to manipulate us? No, I think that's going too far. That's an asymmetric power, where the computer, especially now in the age of AI, has so much of an advantage over us that we're going to have to prevent that one way or another. It's like walking into a room with the best salesman ever, and they have access to our entire life history and can read our mind practically, and no normal human is going to be able to stand up to that kind of treatment.

Avi Bar-Zeev:

We're all susceptible to that kind of manipulation, so let's be clear about that and avoid it. And I think the thing we should be trying to help each other with as much as possible is sharing the information, sharing the data. It's for everybody's benefit to figure out where we messed up in the past and be honest and open about it, so that other people don't have to make those same mistakes, and let's try to foster that as teams and make it available. Game developers are really good about this. We can learn a lot from them. They always have postmortems when the game is done. They teach other game developers what went wrong. They share best practices and best ideas. It's not that games are perfect; there are a lot of practices in games that people could criticize, especially the misogyny, the Gamergate kind of stuff that goes on. But there's a lot of positive in the community and the culture and the way it works, and we should find those things and try to reinforce them.

Debra J Farber:

Yeah, and my understanding of games, at least the successful ones, is that they take a lot of effort and money, versus the startup mentality of running with scissors: just create your product, get traction, and then think about putting safety around it later on. I wonder if those companies could even stop long enough to actually have a postmortem. I'm not asking you to solve this, but the cycles seem different. How do we put that ethical perspective, learning from one another, and postmortems into this process?

Avi Bar-Zeev:

I think that will come with liability, when companies actually realize that they can lose more. A game developer will know this instinctively: if their game is not fun to play, if they put out a crappy game, they don't make any money. They lost all the money they put into development and they're not going to recoup any of it. With the big companies, somehow they have enough crossovers and connections that they can put out crap and survive it. But is that going to be true when there are lawsuits and more government sanctions, and when people have the choice between crappy product A and good product B? When they have that choice, they're going to vote with their feet, and these companies are going to realize that they can't win by doing the cheap and easy thing.

Avi Bar-Zeev:

That's not the way to get there. It's a great way to prototype, and again, the game companies will do quick and dirty prototyping, but it's in-house. The only people who suffer from that are their game testers. They're not going to ship that, hopefully, because they know they'll lose. And so the big companies should do tons of prototyping, in which "fail fast" and all the slogans you can imagine that mean the same thing are all fine within your prototyping group. Try it, see how it feels, go quickly, but don't ship it. Don't ship it until you get it right. The people, the customers, are not the guinea pigs. They're not the beta testers unless they sign up for it. Stop making them cannon fodder, essentially, for bad decisions that we make inside the companies.

Debra J Farber:

I think that's great advice.

Debra J Farber:

I think it's now also a great time to ask you, Avi, to tell us a little bit about the XR Guild.

Avi Bar-Zeev:

I said, let me see what I could do here, and it seemed like the biggest gap was in the ethics. Not that people are unethical people; all the people I'm friends with and know are highly ethical people. But it wasn't organized, and so we said, look, let's make a group. We'll call it a guild, not necessarily in line with the Hollywood guilds; it's not a union per se, there's no collective bargaining.

Avi Bar-Zeev:

I feel a bit like a gamer - like guilds in a game - but really more like guilds in the old ages: the masons, right, or the carpenters. We're a trade of skilled people, whether we're programmers or designers; it's all just as important. What we can do is teach the new people how things have gone, we can apprentice, we can do all the things that guilds have done, and we can also create a library. We set out to do all these things. We're in the process of building a library of all the information people should know about when they work on projects. We need volunteers, but there's already some library work going on at library.xrguild.org. Our plan with the library is, when the AI stops being hallucinatory, to put an AI on top of it, so you have a librarian you can go and ask questions - how do I solve this particular dilemma? - and it will give you the actual resources rather than making up some BS. It will actually give you the postmortems and the papers and the things that have been written about the right answers to those questions. But we also know that people need support; they don't just need information.

Avi Bar-Zeev:

If you work at a company and the boss is putting pressure on you to do something in a bad way, or your team is going in the wrong direction, we want you to have the support of other people who you know feel the same way you do, so that you can have the conversations about how to solve it. That should happen within the NDA boundaries for people who are working on secret projects they can't talk about outside. You should be able to find other people in your company who you can talk to safely, without worrying about being fired or ostracized for taking a stand on something. But you should also have support outside the company if you ever need it, so that, in the worst case, you can find a more ethical job. If you really can't fix something, you can tap your network and ask: where can I do the same work, but for a company that actually cares about the ethics, versus where I'm currently working? We don't want anybody to have to just up and quit - one of the worst things is having to take a stand without having the next job lined up. So we want everybody to feel secure in their ability to work on things that are ethical and in line with helping their customers.

Avi Bar-Zeev:

So we want to do all that. We're creating a mentoring system, so you can partner up with somebody more experienced in your field and have that network. We're doing the library, we do meetups and talks and various things. We try to do all of that, and the challenge is that it's very hard to do as a nonprofit. We're trying, but nobody's making any money at this.

Avi Bar-Zeev:

The companies that are actually doing this well are for-profit companies. They hire people to build curricula around the right way to do it, and as long as companies are willing to pay for that, great - more power to them. But what if you have to tell the company something it doesn't want to hear? They're not going to pay for that. So somebody still needs to fill in the gaps for the ethical dilemmas that aren't always the common wisdom and need some additional thinking and tire-kicking outside the company. We'll try to help whoever we can who's working on those things, and also plow ahead. We have partnerships with other groups, and right now we're in a mode of trying to add as many members as we can, grow the organization, and build a bit of a financial base so we can create the kinds of materials that will help people. That's been my mission.

Debra J Farber:

If people want to find out more or become a volunteer themselves, what's the URL for that?

Avi Bar-Zeev:

It's pretty easy to remember: it's just xrguild.org. Orgs are good for nonprofits, so just remember .org, and "XR Guild" is all one word - it's super easy to find. We have some materials right there on the website, so people can start learning about us and what we do.

Debra J Farber:

Excellent and before we close, are there any books or papers or other resources you would recommend to the audience to learn about the history of the field and the risks to safety, privacy, freedom, security, all of the good stuff, anything topical or recent, that you'd like to call out?

Avi Bar-Zeev:

There are two books that are pretty recent that I would plug - some of our Guild members are actually writing these. The one I finished most recently is called Our Next Reality, by Alvin Graylin and Louis Rosenberg, and I work with them on a lot of these issues. They wrote a book that does a really good job of explaining the issues and walking people through them, and it's written so that they take opposite sides of the issues and argue them out, so it's really interesting to see the back and forth.

Avi Bar-Zeev:

And then the other one is The Battle for Your Brain, by Nita Farahany, who is a law professor - at Duke, I think, if I remember right. She's very much into brain-computer interfaces, but I think she's learning more and more about the spatial computing and XR side of it as well, and I've exchanged comments with her about eye tracking. I think she's aware that it isn't only probes in the brain that we have to worry about for the future of our mental autonomy. The most important thing people should take away from that book is that if we don't solve this, what we are risking is effectively our mental autonomy. We can be manipulated in such a way that, if we don't stand up now, we may not be in a position to stand up in the future, because we'll be convinced that everything is fine - just as in The Matrix, with the world pulled over your eyes.

Debra J Farber:

We don't want to get to that state where we've lost our ability to say no to the things that we're opposed to. No, we do not want to get to that state. Awesome, are there any last words of wisdom with which you'd like to leave the audience today?

Avi Bar-Zeev:

I'm sure you have a mix of people who are both interested in this and people who consult for a living on these issues. It's all important, but I think the thing that I would say my opinion anyway is that it's everybody's responsibility, that nobody should be hoarding this knowledge. Our goal should be educating everybody and, ideally, if we do our jobs right, we make our current jobs obsolete and we move on to the next, harder challenge. So we should hopefully not be doing the same thing over and over again of telling people the same three points of things they need to fix. Everybody should get good at those things and then we should become experts in the next set of things to raise the bar even higher and do even better.

Debra J Farber:

Oh, I love that. I think that's great. Avi, thank you so much for joining us today on the Shifting Privacy Left podcast.

Avi Bar-Zeev:

Thanks for inviting me. It's been a pleasure.

Debra J Farber:

Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend. And if you're an engineer who cares passionately about privacy, check out Privado, the developer-friendly privacy platform and sponsor of this show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.
