Living With AI Podcast: Challenges of Living with Artificial Intelligence
Would you Trust a Driverless Car?
Driverless cars are never far from the news. In our second foray into the subject we talk to Professor Subramanian Ramamoorthy, Professor and Personal Chair of Robot Learning and Autonomy in the School of Informatics, University of Edinburgh. Joining the panel this week are Richard Hyde, Paurav Shukla and Elvira Perez. We kick off with a lengthy discussion on NFTs (Non-Fungible Tokens).
Driverless Car interview starts at 20 minutes 8 seconds
Elvira Perez
Paurav Shukla
Richard Hyde
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for the users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 1, Episode: 15
Episode Transcript:
Sean: Thanks for choosing Living With AI, the podcast all about artificial intelligence and how it affects us all. Our feature interview today is about driverless cars: I’ll be talking to Professor Subramanian Ramamoorthy, Professor and Personal Chair of Robot Learning and Autonomy in the School of Informatics at the University of Edinburgh. But before that I’ll unveil this week's panel. Living With AI and putting up with my silly questions this week, it's Richard Hyde, Paurav Shukla and Elvira Perez. Richard is hoping to join us shortly, he is just dealing with a working from home crisis at the moment. Elvira is Associate Professor of Digital Technology and Mental Health at the University of Nottingham. She's also the TAS Hub’s responsible research and innovation director. Welcome back Elvira.
Elvira: Thank you.
Sean: Paurav is a management thinker and entrepreneur as well as Professor of Marketing at the University of Southampton’s Business School. Welcome back Paurav.
Paurav: Hey Sean.
Sean: My name is Sean Riley, I’ll be your virtual ringmaster today in fact everything's virtual at the moment, but I'm not sure virtual lion taming would quite have the same adrenaline rush waving a chair at a Teams screen. I don't know what do you think Paurav?
Paurav: That would be something wouldn't it? The lion would certainly like that because it would be free to roam then after.
Sean: Absolutely, I don't know if it’d stay in the webcam though, I wonder if it’d have a filter on. Anyway, we're recording this on the 18th of March 2021 and as always we start with what's going on at the moment in the news, and one thing I noticed is AI meets art: Sophia the robot is apparently going to be creating some crypto art. This is some crazy thing to do with NFTs, non-fungible tokens, which are really strange, really strange things. Have you heard of these Elvira?
Elvira: Not really, no I'm- to be honest I'm not very familiar with the concept.
Sean: Yeah, well I don't know Paurav have you got a good concept of this?
Paurav: It's a, it's an interesting concept per se because what it is, is a type of token, a digital token that lives on the blockchain, uses the blockchain, and it allows you part ownership or complete ownership of a digital asset. This digital asset could be an artwork, it could be a piece of software possibly, or something that exists virtually; in many cases possibly it may not even exist virtually, it may exist physically, but now you own a part of that. So in some sense if you look into the physical side you have the physical ownership by multiple people of a piece of art and then they would share it between them, so it's kind of a syndicate buying.
This is more of a digital syndicate buying, but here the funny part is that, for example, imagine I have two bank notes and if I give somebody a bank note and I say, “This is a £10 bank note, do you have another £10 bank note and we can exchange like that with each other?” With a non-fungible token, an NFT, that is not possible, because while we both have one piece of that particular note, the problem is your note and my note look different. So we don't know whose is going to hold the value, and that's where the problem lies. So it is a very tricky thing; in a way it's a virtual thing which holds virtual value.
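Paurav's bank-note analogy can be sketched as a toy model. The class names and token IDs below are invented purely for illustration and don't correspond to any real NFT standard or library:

```python
# Toy sketch of fungible vs non-fungible tokens, following the bank-note analogy.
from dataclasses import dataclass

@dataclass(frozen=True)
class FungibleToken:
    denomination: int  # two £10 notes are interchangeable

    def equivalent_to(self, other: "FungibleToken") -> bool:
        # Fungible: equal denomination means a fair one-for-one swap.
        return self.denomination == other.denomination

@dataclass(frozen=True)
class NonFungibleToken:
    token_id: str  # every NFT is unique

    def equivalent_to(self, other: "NonFungibleToken") -> bool:
        # Non-fungible: only the identical token is "the same", and two
        # parties can never each hold the identical token.
        return self.token_id == other.token_id

note_a, note_b = FungibleToken(10), FungibleToken(10)
nft_a = NonFungibleToken("artwork-001")   # hypothetical IDs
nft_b = NonFungibleToken("artwork-002")

print(note_a.equivalent_to(note_b))  # True: a fair swap
print(nft_a.equivalent_to(nft_b))    # False: not interchangeable
```

The point of the sketch is only that equivalence collapses for NFTs: two distinct tokens are never a fair exchange, which is exactly the "your note and my note look different" problem.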
Sean: And I know this isn't strictly AI but it is definitely trust because I’ve looked a little bit into the technical side of this and depending on who provides your NFT it can have hard coded into it the URL or the web address where the digital asset is on the web. And that means that if that website goes down your NFT becomes useless instantly, I mean that seems to be a bit of a problem?
Paurav: It could be but at the same time because the NFT lives in the blockchain there would be enough places where the ledger would be located, so you wouldn't go bust anyway, your ownership wouldn't go bust.
Sean: No, no, no it's the fact that the NFT records the asset location in the blockchain as being the location where your asset lives and therefore if that asset is not at that web address your NFT becomes pretty much worthless.
Paurav: It, it does it does yes. So obviously that asset has to exist for that paint- If the painting was burnt then you couldn't do anything about it, it's gone.
Sean: Yeah, but this is this, I suppose this is more like if, if the gallery closes down the, the painting's still in there right? Whereas if say I don't know, Plusnet goes bust and the- you still might be able to have your digital picture but if you upload it somewhere else it's not yours anymore.
Paurav: That is also- so the digital rights of these virtual goods are always going to be contentious in many ways, and you're absolutely right also in terms of what we saw recently, the artist Beeple you know, suddenly that massive $69 million worth of art being sold. And what we have, if you look at it, is a JPEG image, or multiple JPEG images put together. Now somebody owns one NFT of that, which is very exciting in some ways, but in reality it's just a virtual asset. And when we did some research a few years ago around virtual assets we were driven by this phenomenon because of that game called Second Life, wherein in 2006 the first virtual goods millionaire came about.
A Chinese person, a Chinese woman who happened to be selling land in Second Life became a millionaire, and it's a phenomenal thing when you think about it, that somebody would buy land in a virtual game which has no extrinsic value, and why would people do that? But then you start realising- and so we started researching into it, and one of the things that came out was that the community continuously influences that, and that part of the normative influences drove the value mechanisms, that drew the trust towards it.
Once that value somehow eroded- and that is what we are seeing in many of these blockchain related cryptocurrencies and crypto technologies, is that the value is so fungible. One, one famous entrepreneur like Elon Musk suddenly invests into the currency and it just goes up like nobody's business and then after a week the excitement dies down and the currency value is significantly lower.
So we've seen those types of things and we've seen many other types of intrinsic motivations that would drive this NFT also, so I think the fundamental reason for this driving of the NFT, or the boom as we have seen in the marketplaces, is this community driven phenomenon that we are right now witnessing.
Sean: Well it probably brings us around to the fact that at the start of this we were talking about this idea of the robot creating art. I mean it's supposed to be Sophia the robot and artist Andrea Bonaceto- I don't know if I'm pronouncing that correctly, Spanish artist I think. And then the robot’s blend of AI and neural network smarts studies the work of Bonaceto and then produces her own, “her” own compositions. There are issues with this, aren't there Elvira? Is this really new art or is this just a rehash of the old thing?
Elvira: That's a very interesting question. The value that this type of art brings is, to me, quite questionable. I think if it's useful to inspire artists, to inspire humans to produce more art, I welcome it, because for me the value of art is the therapeutic process, and creation as we know is fantastic for our wellbeing. So if we take that away now it’s- I found that yeah, it doesn't make much sense to me. However if we're using machines to inspire us and actually produce more work, and think, actually a machine is able to produce these random- again, to me that's what I feel is fantastic.
Also the value that we project or transfer to art is extremely random, and I worry, as Paurav just mentioned, that if communities start valuing art that has been generated by a machine more than by a human, there is obviously going to be a high demand. And maybe there is going to be a massive disruption in what art means. We have an interesting case in Nottingham, I'm pretty sure we all know Baskin?
Sean: Banksy?
Elvira: Yeah, he made a fantastic mural near, very near Ilkeston Road. Well, it's been sold. I found that amazing, but it's opening a really interesting debate about what art is, and the value of art, and who owns art. And I think when the machine is producing art that's going to again bring really fantastic research questions, because who owns that art, and the impact and the effects of that specific piece of art? Imagine it has an influence in politics or economics or in new concepts.
So who is accountable for all that? And we know that art changes culture, so are we going to have now a new generation of machine generated art that is going to change the way we live? Who's controlling all that? So I think it’s very fascinating but at the same time there are so many unknowns.
[00:10:25]
Sean: I think though that it's interesting that actually it's similar in a way to music being created with samplers and being remixed by producers; perhaps it's whoever presses the button or asks the robot to do it, that's the person who's actually creating this art. I’d like to welcome Richard along, thanks for joining us Richard, I hope you're doing all right?
Richard: I, I, I am, I am all right now Sean thank you for having me.
Sean: Good, good, thanks for joining us, and what's your take on this? We discussed the idea of NFTs, these non-fungible tokens, and then Sophia the robot who's creating art apparently.
Richard: Yes, it’s difficult both on a- so I'm a lawyer, so from the legal side, but also the non-fungible tokens are very, very bizarre. Particularly the kind of collectible sports highlights non-fungible tokens I find to be strange, because you know, I am a person of my era and I collected you know, stickers to stick in sticker albums. And I suppose that the non-fungible tokens are fulfilling a similar sort of thing to what I did, but it's the kind of creation of permanence out of something that's ephemeral that I think is really interesting about them.
And it's kind of- so with the sports highlights NFTs, if you were there you have an experience, but can you come back and re-examine them, experience it again, put a valuation on that kind of experiential thing and have that experiential moment become kind of embedded in a blockchain over years? And I think that kind of links to the arts side of things, that part of the art for me, as not a particularly you know, highfaluting art person, is the emotionality and the kind of experience of things there. And to pick up on what Elvira said about the Banksy, part of the thing about the Banksy was its place, where it is and how it related to its environment.
Sean: The context right?
Richard: Yeah, absolutely, and if you take the window, the bricks, out of the wall, is it still the same piece of art? And you know Antony Gormley is complaining this week about you know, somebody moving his artworks and placing them on a beach on their side rather than standing up as they were meant to be. And I think the whole thing about context and emotion and place in art is really interesting, and also really difficult to kind of recreate when you're having you know, non-fungible tokens and valuing it through that, or having it being produced by a robot right? And that's what I wonder about, whether you really can capture some of these things in those sorts of situations.
Sean: To play devil's advocate though on the sporting highlights for instance, there are generations of people who would buy the VHS tape of you know, Manchester United's greatest goals, or delete-as-applicable club's greatest goals. Is the difference here that you're actually buying a certificate to say that you like that thing? Is that what an NFT is? I mean you technically are owning it, or a part of it, but in what way do you actually own it?
Richard: I mean, I wish my videos of sporting highlights were as valuable as some of these NFTs, because I'd be sitting on an absolute gold mine in my garage of ageing VHSs of sporting highlights. But I think that's the weird thing right? You can go on YouTube and you can watch these highlights for free if you want to, and what kind of interests me is how the certification of your ownership of them can give value to that. Whereas to me the value is more about my relationship to the- the reason I’ve got- I'm a Notts County fan, I will hold my hands up to that, the reason I’ve got you know videos is so-
Sean: Our thoughts are with you by the way, our thoughts are with you.
Richard: Yes, yeah, I’ve got videos of Notts goals from 1991, 92, not because I think that you know, they're inherently valuable, but because they're valuable to me right? I was there, I can sort of remember what happened etc. It's not the valuation of the tape, of the recording, it's my relationship to the recording that's kind of the valuable thing I think. But I may be straying a little far from my level of expertise.
Sean: Perhaps these are just autographs for the new age I'm not sure.
Richard: And I’ve got books of them as well.
Paurav: One of the things I would like to reflect on- in a way Richard, exactly that is what we study in my lab particularly: why do we value what we value? And the aspect around this is we realise that there are three different kinds of value mechanisms that really operate, and we will need to think about where these NFTs sit.
So there is one side of value which we call social value, something that is connected to the society and we value that. Then the other is the personal value, that is something which is personal, inherently intrinsic to me. Like what you said you know, you were there at that point in time, you saw something happen and that is etched into your memory, you want to relive that memory and these are the ways you will relive those memories. And then there is the functional value, so there is the functionality of the product or a service or anything, whatever you tend to engage with.
Now when you think about NFTs right now, I do not see particular functional value very clearly with that, as you rightly pointed out. There is also the personal value, which I see predominantly, as someone was mentioning: the personal value is that I want to be part of that history. To be saying to my kids or grandkids, “You know what, when this came about I was among the first people to buy this.” Right? So there is this personal value, a little story that we can relate to, and the large driving force behind this whole value mechanism is nothing but the social value. Because people are valuing it right now, because it is the in thing, that's why people are buying it; once it goes a little on the periphery, suddenly things may just turn very quickly pear shaped for many, many people. Which is what we saw in that recent case of, you know, the Reddit group trying to push the prices of certain stock.
Sean: GameStop?
Paurav: Yeah, and then you know, most of the losers are going to be those individual investors and not the hedge funds in particular. So you know, we, we see this societal value operating in different, different ways.
Elvira: I just wanted to add that I think, in relation to art and value, when an artist produces something there is part of this person represented in a painting or a choreography or sculpture. So there's a narrative and there is authenticity; how do you reach that when it's a machine that has no experiences? So what I want to stress is that the authenticity of machine generated art is very questionable. And what does it tell us, to us as humans, as communities? That's why I'm always a bit hesitant, it’s like, what's the point?
In music somehow I hesitate, because I can see that you can have a piece of software that may generate something that sounds like Bach or Mozart or Debussy, and that can inspire an artist to compose something new, getting further from the style or closer; you can actually assess how close or how far away you are from a specific style. But if we have a machine generating music, yeah, we may like it because there is a pattern that the machine knows, that that's what we seem to- it’s easy and it's pleasant.
Or maybe it's- but there's a rule behind it, that's what I find really uneasy; the spontaneity and the creativity and the authenticity are gone. Or maybe not. What happens when we cannot distinguish between human and machine generated art, what does it tell- so yes, it's just full of really interesting research questions for us I guess.
Sean: Yeah, and then it'll probably come down to no matter whether it's art or not do you like it.
[00:20:06]
Today's feature interview is about that ever present discussion topic, the driverless car. I’d like to welcome Professor Subramanian Ramamoorthy, he is Professor and Personal Chair of Robot Learning and Autonomy in the School of Informatics at the University of Edinburgh. Welcome to Living With AI it's nice to have you.
Subramanian: It's nice to be here thanks for having me.
Sean: We've more than briefly discussed autonomous vehicles, or AVs, in the past, but they come up time and time again; even when we're discussing other topics, autonomous vehicles come up, so it'd be great to go into this in a bit more detail. And I believe you've been part of a start up that has pretty much done the whole thing from start to end, can you tell us about that?
Subramanian: Yeah, so let me first say a little bit about my own academic background. So I work in the area of robotics; I’ve spent the past 20 years working in various aspects of planning, prediction, motion control, these kinds of issues in robotics. For the past four years I’ve been heavily involved in a company called Five AI, which is a UK-based startup, and we say UK based because we have offices in multiple cities across you know, different areas. And I’ve helped set up much of the engineering organisation of the company, so between 2017 and 2020 I was the Vice President and I was heavily involved in setting up the motion planning and prediction areas.
So what we did in the company probably one of the more exciting things for me was in the first two or three years we built the full technology stack for a self-driving car. So everything from perception to planning to actually the safety case and the operational aspects of putting a car on the road for a multi month trial. And it was a fairly sophisticated, one of the more sophisticated trials in Europe which was very exciting.
Since then the, the focus of the company has been somewhat more on the, the, the platform and development tools that go into ensuring that such a deployment is safe, what you would do, not just to develop the core platform but everything around it. So, so this has been an extremely educational experience for me so, so not just building the technology but seeing what it takes to build the team around it as well.
Sean: Did you have a, a base car to work with or, or was it everything, was it-
Subramanian: No, so normally in this space very few people, with maybe one exception, are building the car itself, because it’s not strictly necessary to do so. The car is an entity that has been looked into for 100 years; I think we have a fairly good understanding of how that's put together. Most of the action, if you like, these days is on the computational and sensing side of it. So we're trying to ask the question you know, what I would normally do is look around, then press the gas pedal, can we just do that bit? And actually just that bit is non-trivial.
Sean: It’s non-trivial, but also I think the most complicated thing of any of these is things like you know, the software, how do you make decisions? Because even the sensing you say is non-trivial; I suppose if you had endless money you’d put 1,000 cameras on the vehicle and multiple GPUs to process the information, then you know, those sorts of things. So there comes a balance doesn't there, I guess, with how big is your budget right?
Subramanian: And I would say, I mean there’s certainly that. So some of the most interesting discussions we've had in the process of running road trials are simple things, such as if you run a trial for a whole day and then you just collect all of this information. Even after having invested the money, to offload the information off the back of the car onto your computer can actually take you many hours with most current technologies that we are aware of. So even simple things like how do you just physically get at the data is an interesting technical challenge, and of course, I mean people enjoy doing that sort of thing as well.
But, but what I wanted to say is it's not just a matter of the hardware and having a lot of resource because there are some questions that are just intrinsically difficult conceptually. So let me give you a simple example, you're driving around in a, in a busy street and you've, you've kind of come close to a school zone. You don't see any kids but they could be there, how do you make sense of the fact that you have to be cautious with respect to an unseen kid? This is not just a matter of what I can see with my cameras, I can install a lot of cameras but somehow I have to put in the intelligence into this machine to think like this and that's I would say one of the big challenges.
Sean: But playing Devil's advocate there, we see signs right? I mean literal road signs, we see schools, and I suppose that's just an element that you also need to pick up on the sensors. I noticed in your kind of bio that “teaching robots to behave” features, and that's something that really intrigues me because, how do we define- and I'm guessing it's the same with autonomous cars, how do we define the behaviour of the vehicle? Does that link into this?
Subramanian: I think it very much does. So again I’ll use examples throughout, just because this is an area in which people have a very good sense for what I'm talking about if I tell you, “What would you do?” So one good example from the UK Highway Code is: don't pull out in such a way that you cause another driver to slow down. You’re taught this when you learn to drive, but this is incredibly hard to tell a car, because in order for me to describe this to a car I have to say, “You have to make this prediction about the other person, you have to understand their intent. You know, did I cause them to slow down, not just did they happen to slow down, and then what does it mean for me to drive in such a way that I don't cause this?”
So this is the kind of thing we're trying to model; in computational terms this is quite tricky because we want to build machine learning models that capture causality, that capture prediction, and then capture notions of optimal behaviour. So that's what we mean by behaviour: at one level it's a simple problem you know, get some input from your camera and put some output through your accelerator, but actually there's a lot more to it about predicting the outside world and understanding what's going on.
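The Highway Code rule above can be illustrated as a counterfactual comparison: compare a predicted world where we pull out with one where we don't. This is only a toy sketch; the predictor, gap threshold and braking numbers below are invented, not anything from Five AI's actual models:

```python
# Toy counterfactual check for "don't pull out in such a way that you
# cause another driver to slow down". All numbers are illustrative.

def predicted_speed(other_speed: float, gap_m: float, we_pull_out: bool) -> float:
    """Stand-in predictor: the other driver brakes if we pull into a small gap."""
    if we_pull_out and gap_m < 30.0:
        # Tighter gap forces harder braking (clamped at a standstill).
        return max(other_speed - (30.0 - gap_m) * 0.5, 0.0)
    return other_speed

def causes_slowdown(other_speed: float, gap_m: float) -> bool:
    # "Did I cause them to slow down?" = their predicted speed with our
    # action is lower than without it, not merely "did they slow down".
    with_us = predicted_speed(other_speed, gap_m, we_pull_out=True)
    without_us = predicted_speed(other_speed, gap_m, we_pull_out=False)
    return with_us < without_us

print(causes_slowdown(other_speed=13.0, gap_m=12.0))  # True: small gap, we'd force braking
print(causes_slowdown(other_speed=13.0, gap_m=60.0))  # False: big gap, no effect
```

The design point is that the rule is about causality: the check compares two predicted futures rather than simply observing whether the other car slowed.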
Sean: Yeah, there's inferring and all sorts right? But you mentioned there modelling so I guess there's been a lot of modelling and simulation that goes hand in hand with this? You know any simulation or model is only as good as the data that you put in right? So how different is it between you know, simulation and modelling and the real world?
Subramanian: This is actually the, literally the billion dollar question. So simulators are pretty good these days so if you've played video games or watched movies you know that we've come really far in terms of the quality of being able to get it, not quite exactly like the real world but not bad. But on the other hand what agents do inside these simulators is still a very tricky business, so one could get this information in a number of different ways.
One could go out and collect data which is of course expensive because you have to you know, run cars and you know, come up with the procedures for doing it. And people are finding out that you could do it and in fact like the really deep pocketed companies have done this for many millions of miles almost, but at the same time that's not quite enough because what we call corner cases or edge cases you know, the things that happen very, very rarely.
They become increasingly hard to find, and there are two reasons why this happens. So one of the phenomena is that once you get increasingly better at driving you see these bizarre cases less, because you've become competent. You know how to avoid things and so you never really find yourself in situations where you're in trouble; that's one aspect of it. And the other aspect is that just the logistics of running very long campaigns means that many people have been going round and round the same place.
So I mean sometimes you'll hear companies like Waymo talk about having done a very large number of trials, but a huge fraction of that might actually be in a few neighbourhoods repeated many times right? Which is different from what a typical human driver has picked up, not just from driving but just from being a person, and the common sense knowledge that you've picked up as a person. So putting that into a simulation is quite hard. So we do a number of things: we kind of come up with procedural ways in which you can think up new scenarios. You could do this with machine learning, you could do this with programming and so on; you ask people for high level descriptions of what might be happening and you try to script that in. So you have to have a judicious mix of all of these things.
Sean: It's interesting you mentioned the, the Waymo example of perhaps are going around the same neighbourhood it's kind of a parallel to when a human’s learning to drive certainly in the UK. The instructor will take them around the same few streets over and over again to learn their, their craft and then one day they'll pass their test and you know that's when the real learning starts you might, you might argue.
There's a simple but very complicated question here, how do we avoid crashes and disasters? And what I mean by that is it would be easy to say, “Well the easiest way to not have a crash is to drive everywhere at two miles an hour.” But there's a balance to be struck here between having something that is effective or efficient in terms of doing what you're asking it to do and keeping it safe, how do we strike that balance?
Subramanian: Yeah, now this is a very good question. So the nice thing is that maybe we don't need to drive at two miles per hour, maybe we could do much higher speeds if we understand what we call, in technical speak, the operational design domain. So you know, what are the parameters that define this particular behaviour? Let's say lane keeping, right: I can drive at 60 miles per hour in the lane no problem if I can see the lane, I can see the lane boundary, I know where other people are. So it's all of that that defines the complexity of it. So in low complexity situations we are able to do it, and in fact if you see commercial vehicles that are claiming some of this capability, like a Tesla car, they're able to do quite a bit and that's why they have so much following.
[00:30:42]
But the difficulties, as you rightly point out, are you know, when you have more people, more events, more interactions, then things get difficult. So the simplest thing is if you ask people you know, “How do you cut into traffic?” for instance. Now all of a sudden it's not even the speed; even at five miles per hour it might be hard to cut into traffic if the other person doesn't want to let you in. And then as humans we've kind of come to understand this dance and how exactly to act it out, but for cars mistakes can very easily be made in these situations.
So I think the current best answer to your question is that firstly we have to carve out these design domains in ways that are manageable. So it's a bit like what happens with bus lanes: you put boundaries that are kind of acceptable, and then having put down those boundaries we try to understand all of the edge cases and then we kind of optimise for that.
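The idea of carving out an operational design domain can be sketched as a simple gate on when lane keeping is allowed. The conditions and thresholds below are invented for illustration, not any real ODD specification:

```python
# Toy operational design domain (ODD) gate for the lane-keeping example:
# autonomy is only engaged inside the carved-out domain.

def within_lane_keeping_odd(lane_visible: bool,
                            boundary_confidence: float,
                            tracking_all_agents: bool,
                            speed_mph: float) -> bool:
    """Return True only when every ODD condition holds (thresholds invented)."""
    return (lane_visible
            and boundary_confidence >= 0.9   # can we see the lane boundary?
            and tracking_all_agents          # do we know where other people are?
            and speed_mph <= 60.0)           # the 60 mph example from the interview

print(within_lane_keeping_odd(True, 0.95, True, 60.0))  # True: inside the domain
print(within_lane_keeping_odd(True, 0.95, True, 70.0))  # False: outside, hand back control
```

The design choice mirrors the bus-lane analogy: rather than solving driving in general, you define explicit boundaries and only claim competence inside them.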
Sean: There's a situation I can think of on the road where we come up to a roundabout. Now, in the UK we have quite a lot of roundabouts; around the world, not so many in certain countries. I remember trying to explain to somebody in America how a roundabout works. They have roundabouts there, or I think they call them rotaries in some places, but they're not common, certainly not as common as in the UK, and it was difficult for me to explain even though it works multiple times on every route here. Every time I get in the car I might have to go through a roundabout and it just works, but try explaining who yields to whom, and what happens if everybody arrives at the same time.

Well, the answer would probably be something like, it doesn't happen that often. But when you start thinking about it in those terms it makes you wonder how a robot system would deal with a roundabout if everybody arrives at the same time. Do you have to add some random input or something? How does it work?
Subramanian: Yeah, you could do that. I mean, the nice thing about robots is that you can tell them rules and they'll follow them. So in this case you could say what the rules are according to the code: you tell them, “If everybody arrives at exactly the same time then one of you chooses, but if not, these are the rules.” But in practice what we might do is augment those rules with observations. The nice thing nowadays is that there's quite a bit of data we can get from intersections; there are cameras everywhere. Interestingly enough, the fact that some British cities have the most surveillance in terms of CCTV can also be an advantage when it comes to getting data.
So you could get all of this data and then ask, “How do people seem to do it? What are the parameters for when they start to yield, or when they do these things?” And then you would find a merger of the two: you would programme in the rules and then tune the parameters of the rules to match the data. And then you would always have to have some basis for saying, “When things go wrong, what do you do?”, and this is the most interesting part. It's a bit like your two miles per hour question: if in a roundabout things start to go wrong, then as a novice driver I would just stop and let somebody else clear out so that I'm in a different position, and maybe that's what cars have to do.
Sean: Yeah, stop, wait, reassess, absolutely. And of course I'm guessing they'll be communicating with each other on some level, perhaps some kind of Wi-Fi or Bluetooth or whatever? And of course humans communicate probably visually most of all, through the windows, from smiles to hand gestures; there are different ways of having that communication. I suppose the most complicated thing will be when autonomous vehicles are sharing the road with human drivers, right? Is that something that's being researched or thought about?
Subramanian: That is indeed the case. If every vehicle was autonomous then we'd be in a different situation; we'd almost be in a physical Internet, and then you could just say these are the protocols and this is what happens. The real problem comes in when you have human behaviour in the mix and the unpredictability of human behaviour. At the moment there are many schools of thought, but I don't think I'm stretching too far when I say that for most people the thinking is that you have to accept that the car is not as intelligent as the person. So it has to be allowed to make mistakes, to slow down and stop, and you have to accept that.
Sean: That sounds fair to me. How far off are we from having, say in the UK, legislation that allows some of these autonomous vehicles?
Subramanian: It's a good question, and I think the UK government has taken the view that this is maybe even an opportunity to be proactive and to set up the legislation in such a way that this country is welcoming for road trials and so on. So I would say that the current setup here is quite progressive and forward-looking in terms of enabling road trials. That said, certain questions, like “What is an acceptable level of safety?”, are open questions for the whole community in any country, and they're actively being debated.
So one example: if I asked you to guarantee for me that there are no collisions in many, many situations, that's just practically impossible. For instance, you can't prevent somebody from hitting you from behind when you're parked in traffic, right? But if that happened there's a very clear notion in the law of who's responsible: the notions of responsibility, blame, and what in the law of tort would be duty of care. Those kinds of things are well understood for human driving, and possibly we can transfer them to automated driving as well.
Sean: It's interesting mentioning legislation, because one question we've had a few times, and I'd like to hear your take on it, concerns liability under the current ownership model, where most people either own or take responsibility for their own vehicle, so there's a sense of ownership. If they have an accident (they don't call it an accident any more, they call it a collision), if they have a collision, then liability can be apportioned between the different drivers and, if you like, the stakeholders in that incident, for want of a better word. How do we look at that if the person in the vehicle is not in control of the vehicle and it's being driven autonomously?
Subramanian: I think you can still apply the same logic in terms of who's responsible and what caused it. So unpacking the causal chain of events would look the same; it's just that who owns the vehicle is a somewhat different matter from who did what. Now, after that comes the interesting question of what financial and insurance models people want to apply, which is outside the realm of this technical question, right?
So one could envision a model in which we have convinced ourselves that these collisions are sufficiently rare that, financially, it makes sense to just insure it. A big company could say, “Yes, I can take that hit the few times it happens, and it's good for business for me to just insure the whole thing myself,” and that keeps it simple. Or one could have a more distributed model in which we say, “No, I can't take that, and I want you to take some of this liability,” and then it becomes a financial question.
Sean: Yeah, it gets quite complicated and confusing. One thing that is perhaps a bit easier to answer from a technical point of view: would you envisage there being a kind of yearly MOT for these vehicles, and how would that take into account things like software versions? Because at the minute, certainly in the UK, once a vehicle is three years old it has to go for an annual test to check that it's roadworthy, and mostly that focuses on the mechanical and physical parts, right? Brakes, emissions, this sort of thing. How does it work when you need to check that the software version is up to date?
Subramanian: Yeah, I suppose these are still early days, so it hasn't gone that far, but this is one area where maybe the MOT could happen all the time. If it is software, and if it's a connected piece of software, then in principle you can have checks much more frequently. There's already quite a bit of discussion of the idea of a black box, in ways that are similar to aviation, where you're recording information all the time and backing it up, or coming up with robust mechanisms.
And that would be a good thing, I think, because it then allows these kinds of questions to be resolved in a much more straightforward way. So I expect that as the industry matures we will come up with models, some of which will mirror what has happened in other industries like aviation, and some of which might be completely new, simply because it will be a reinvention of principles.
Sean: It's interesting you mention aviation, because of course in the aviation world there is automation going on all the time, isn't there? From landing aircraft to flying aircraft across the Atlantic or whatever. I suppose there's more space there, and perhaps fewer vehicles at the same time.
Subramanian: That is true, but the analogy I have here is quite an interesting one. If you think about aircraft engines, a number of carriers don't own the engine even if they own the plane. In some sense they lease it from the manufacturer of the engine, and part of the arrangement is that there's a digital twin of that engine sitting somewhere else, in a different country altogether, and they're communicating all the time. So you might be flying over, let's say, Southeast Asia, and there's an office in southern England where somebody is monitoring this engine all the time and making decisions about how it has to be serviced.
[00:40:50]
That's a little bit like what could easily be done with autonomous vehicles, and then your question about the MOT and so on becomes a continual maintenance question. So there is precedent; it's just that with engines, I don't know what the cost is, but presumably each of these things is multibillion, and the idea of having one person dedicated to monitoring doesn't sound so bad. If you have hundreds of thousands of cars, which don't cost so much, you have to come up with different mechanisms. But I think technology has ways of dealing with that.
Sean: Absolutely, and of course they get to choose which manufacturer's engine they would like. Would you want a Rolls Royce engine, or a GE one? I wonder if there'll be a similar thing in cars: what driver would you like for your Audi? I don't know what the brands would be. Would you like the McDonald's driver or the Burger King driver today?
Subramanian: Yeah, I mean if you think about it, this is literally a question many of us already face. We started out by saying, “I have an iPhone and you have an Android,” and the operating systems do largely the same things, with slight differences. You will have that kind of consumer choice in such a sector as well. One of the contexts for this interview is that I'm heading up the TAS node on governance and regulation, and I came to this from thinking about the certification challenges we faced when we were doing road trials, and about the gap between what it takes to successfully complete road trials over some period, let's say four months, a few thousand miles, that kind of thing, and getting to hundreds of thousands of cars with very low accident rates.
And that gap is what everybody in this industry is struggling with. Part of the question is technical, but the other part is trying to understand how we want to attribute blame, how we want to understand the causal chains of how things go wrong. And crucially for AI technologies it's not just “can I debug an AI system”, in the sense of, say, the vision system, but also the larger discussion that people have socially, such as, “If I make a choice about the risk level I'm willing to take, who else got to influence that decision? Did the person walking on the street influence it? Did the person buying such a car influence it?”
So that larger social conversation requires a framework, and we've been thinking in terms of how you come up with tools that help the regulator and the developer community think this through, but also bring in input from everybody else: what would a policy framework look like, and how does the technology mesh with the policy? That's the kind of thing we are taking on in this node, and autonomous vehicles, I guess, the discussion we've been having, make a very good case study for how this could be done.
Sean: Professor it's been an absolute joy talking to you today and thank you very much for all your insight and experience.
Subramanian: Thank you very much.
Sean: It's great to hear the professor mention how the tech meshes with the policy and how the ethics are implemented, as surely these are some of the most pressing issues here? Richard, we've discussed policy relating to food manufacturing and AI before; are autonomous vehicles much different?
Richard: Well, I think one of the ways it differs from the kind of food manufacturing we talked about last time is that you've got a lot of interaction with lots of different people. When we're talking about AI in food factories, you've got the workers in the factories and you've got the eventual consumers. But you're putting autonomous vehicles on a road, which is a really complex system where you have pedestrians, other drivers, lots of different signs, different social cues. And I think what came out really nicely in the interview was the real complexity of driving. We kind of forget about it, and it's difficult to think back to when I started driving.
But what that interview brought back to me was how hard it is to make some of those judgements Ram was talking about: the pulling out, the judging whether you make something slow down. I'd kind of forgotten that I do that when I pull out. But you do, and it's really difficult. I will hold up my hands now and say I failed my driving test the first time I took it, and it was because I tailgated a tractor. I was going to overtake it and then thought, no, I'm on a driving test, I don't want to overtake it, and followed it too closely. Listening to how you teach the AIs, the autonomous vehicles, how to behave took me back and gave me great sympathy for my driving instructor.
Sean: Fair enough. I must confess right now that I failed twice and passed on the third time, and I do believe, if I remember rightly, it was due to being over-eager in certain circumstances. But anyway, let's not worry about that now. There are so many things: there's the mechanical side of it, how do you make the car turn, slow down, speed up, how do you change gear? Then there's the etiquette side, then there are the rules of the road on the legal side. There is so much going on here, isn't there?
Elvira: Yeah, and another aspect is that if we want a vehicle to actually work properly, it's not just the number of rules and decisions you have to handle, it's also the cost. Ram mentioned something related: if there were no limit to the budget, you could put in so many cameras. But the reality is that there are going to be so many compromises to make sure that cars are sellable. What are we going to be missing in order for that car to be successful in the market? That really worries me; where are the cuts?
Sean: Yeah, there was a discussion on a previous podcast where we talked about the idea that you might go and test drive two or three of these vehicles and one feels really comfortable and smooth. But how do you know that it's comfortable and smooth because it's not the safest one? Paurav, what is the value here?
Paurav: The funny part, there are two points I would like to mention, but the first is that somehow we have this expectation that autonomous vehicles, AVs, have to be perfect from day one. When did we as human beings ever devise a technology that was perfect from day one? And somehow we are trying to put so many of our own limitations onto this technology and asking it to be perfect.
And that makes me very worried, because we are actually setting it up to fail in some ways by worrying about so many things. As Richard very nicely said, something has become ingrained in us: as we try to pull out we first of all look and check and all those kinds of things. But do we all do it in the same fashion?
And sometimes the rule book will have to be adjusted according to the environment, so how many rules, how many of those combinations of rules, can we teach the AI? And then which company will teach its AI which rules? There are so many combinations that would emerge, and so if we are expecting AVs to be perfect from day one, we are going to be really disappointed.
Sean: I think you're absolutely right, and we've mentioned this before, not to repeat it too many times, but the one key thing here is that this isn't a system that's only automated. It has to mesh with an existing network of people and vehicles and road systems, some of which have been there for thousands of years, so it's that combination of two or three or multiple types of road user that's the problem. If you were setting up an automated system from scratch, you'd presumably feed it some rules and give it some worst case scenarios and some disaster scenarios where it hopefully just stops if everything is undeterminable, if that's a word. But it's the meshing, isn't it? Richard, how do you deal with that?
Richard: Okay, so this is a really complicated question from a liability perspective [unclear 00:49:53]: how do you deal with new things, and regulate new things, to make sure that they don't harm the people who are already in the system? Think about it: we all expect other people on the road to drive in a particular way, we anticipate that people are going to behave in a particular way, and we moderate our behaviour accordingly.
Let me tell you about a case I teach my first year students in about lecture three, a case called Nettleship v Weston. In Nettleship v Weston, Lavina Weston goes on a driving lesson with her husband's friend, Mr Nettleship. She's driving; it's not a dual control car, but he's telling her what to do, and she crashes and injures Mr Nettleship, and he sues her. Some might think rather unfairly, but he's injured and really he's looking to get at the insurers.
[00:50:53]
So he sues her and the case goes to the Court of Appeal, and in the Court of Appeal the question is, what should we expect from learner drivers? Should we expect learner drivers to be the same as or better than the average driver on the road, or should we give them some mechanism by which their inexperience is taken into account? And Lord Denning, the Master of the Rolls, giving the judgement of the Court of Appeal, says, “We expect everybody, no matter how inexperienced, no matter that they cannot possibly reach the standard we expect of them, to reach the standard of a reasonably careful, skilled driver.” Because you can't choose not to go on a road with learner drivers; there aren't two roads, you can't say, “I'll go on the learner driver road” or “I'll go on the non-learner driver road.” You have to put up with these inexperienced people, and if they injure you they should compensate you.
And so I wonder whether that links to what Paurav said: what we need to make sure of is perhaps not that these things are absolutely perfect and won't make mistakes before we let them out in the wild, but that if they do make mistakes, we have some way to make sure that people are compensated and can be set right for the problems that happen. And I think that is the issue we have here: how perfect do we make them, and what happens when they fall below their standard of perfection?
Sean: One thing I'd say there is at least we do label them, don't we? We have these big signs saying, “There's a learner ahead,” and in the same way agricultural machinery might have orange flashing lights; we do at least try to distinguish between these different kinds of road users. But, and I'm sure your students always ask this, was Nettleship himself not in some way liable, because he was telling her what to do?
Richard: Yes, he was found to be partially contributorily negligent, and his damages were reduced accordingly. So well done Sean, you've now passed tutorial one of the law of tort.
Paurav: I wanted to ask the same question, but beyond that I wanted to bring in the role of the organisation that is actually promoting this. For example, just within the last 10 days we had an announcement from Tesla, or particularly from Elon Musk rather than Tesla, saying that their car now has full self-driving. So when you think about it, the company, or the CEO of that company, now styled the “Technoking” of the company, is telling me that the car is fully self-driving. But if you look into it a little more deeply, it comes out that according to the technical standard it is actually a Level 2 car on a scale of 1 to 5.
That is how autonomous vehicles are currently classified: at level 1 the driver is in control, and at level 5 the car is in complete control; there may not even be anyone there to actually drive. There is nothing, you're just sitting there, you're a passenger. So when the Technoking of the company is telling us that this is a full self-driving car, and legally it is only a level 2 car, you have massive problems.
Like what Richard was talking about, if the company starts using marketing language in this way. Companies have come up with very funny wordings. For example, Honda, whom we have talked about twice on this podcast before, this month launched in Japan a level 3 car, but that level 3 capability is actually only going to work in one particular condition, that is, a traffic jam. So is it really a full level 3 car?
And so should we not be thinking of autonomous vehicles not from this technicality of levels 1, 2, 3, 4 and 5, but rather from what a consumer understands an autonomous car to be? That is where the real marketplace understanding would come in, because the problem is this: for me, is cruise control enough automation, while for someone else a traffic jam pilot is enough automation? Is there a difference in my mind between cruise control and a traffic jam pilot? To the technology it is the difference between level 1 and level 3, but in my mind it is no different; I am still the one doing all of the work.
So we have to really think of it not just from a technology perspective but from a consumer perspective, and this is something that's missing. However, within the TASHub there are two projects going on, and I'm at least partially involved in both. One, which I'm leading, looks at the consumer level of trust in these systems, and the second is about how you do the handover; that one is led by Professor Gary Burnett at Nottingham. So I think some interesting things are emerging from TASHub around this too.
Sean: Just coming back to your first point, Paurav, I think we need to rebrand Elon Musk one more time to super salesman, or king of the salesmen, because that's what this is, isn't it? It's marketing and salesmanship, yeah? Elvira?
Elvira: Well, something that worries me is the expectations. Why are we confusing the consumer by selling what seems to be a level 5 but is a 2, or maybe, if we're lucky, a 3 in some cases? And I worry that we are doing that systematically, and also with the concept of artificial intelligence: where is it? The population is getting confused about what an autonomous car actually is. And Paurav, I'm hoping that your research brings that into the conversation, that we are managing people's expectations, because I have a feeling that there's going to be so much disappointment.
Sean: Branding, marketing and salesmanship aside, I think you're absolutely right, these levels are still fairly difficult to fathom. I love the idea of having this sliding scale and knowing that at one end is, perhaps as you say, a clever cruise control, and at the other end is the dreamed-of vehicle that will take me to the pub and back without me having to be the designated driver for the evening. At which point I'm hoping the TASHub has done its job and I trust it completely.
Richard: I think one of the real challenges for policymakers in this area, and Ram talked about it a little in the interview, is to balance making policy for what is technically feasible in the short term, things like automated lane keeping, ensuring that is appropriately regulated and usable, against looking at the further horizon, at Sean's trip-to-the-pub quasi-taxi, and thinking about where the balance of your policymaking should fall. Because it's difficult to peer behind this curtain of technological development and ask, what does the regulatory structure look like in 10 years' time? We don't necessarily know what that complex system that is the road will look like in 10 years' time.
What will be the balance between people driving and people sitting in autonomous vehicles, and so on? We've all seen it: every time you go to an autonomous vehicle presentation, someone starts off with, “This is New York in 1900, look at all these horses. This is New York in 1920, look at all these cars,” and this is what will happen. Or things like, “Do you have small children? They'll never need to learn how to drive.” But I don't think we know, right? And so I think it's difficult to build a regulatory system that is going to deal with all the problems that come up.
[01:00:12]
And so, as Paurav said, it's difficult to have an autonomous vehicle that's perfect out of the box; I think it's also difficult to have a regulatory system that's perfect out of the box, and we have to accept that there's going to be an iteration of sorting these things out. So I don't think this is the last conversation we're going to have on this, and when the government comes out with its regulatory systems and its final recommendations, I don't think that's going to be the final word either. I think we're all going to be kept in a job by autonomous vehicles for quite some time yet.
Sean: Yeah, the professor mentioned that with “Global Britain”, I'm using air quotes here, and Brexit and all these different things, the government might be making it easy for tech firms to test autonomous vehicles in the UK. So things may move quickly, but as you say, futurology, or futurism, is very, very difficult. People in the 1950s and 1960s assumed things would happen a lot quicker than they did, because they saw the space race happening and thought that by 1980 we'd have hover cars and so on. And all sorts of things come in and contrive to change events, right?
We know that over the last year fewer people have been driving to business meetings; we are all sitting here looking at people on screens, and actually, why not? It's not perfect, but it works really, really well. Who knows whether there'll even be that much of a drive, pardon the pun, for autonomous vehicles going forward.
Elvira: From a responsible innovation point of view, it worries me that we have to accept imperfection, imperfection from the machines and from the regulation, and I'm wondering, with all this drive towards the digital economy, how much do we have to accept, and is there a choice? I love driving; personally I'm very interested in it. But there's this push for the digital, the push for autonomous systems, and where is human agency? Is there room for that?

It seems that we simply have to accept it and just manage it as best we can. And I feel there are so many risks that we seem not to be reflecting on, and choice is so important. I feel that there will be a point where there's not going to be a choice, because the infrastructure is going to support only specific types of vehicles.
Sean: I think the changeover to there being no choice will be incredibly slow (this is my guess at it), because it would take so long for all of the vehicles already on the roads to flush through the system, unless there's some kind of retrofitting option. But we've discussed before on the podcast, certainly in the arts arena, how the ability to make mistakes is really important, and yet we don't really want that many mistakes to be made by autonomous vehicles, unless it's in a modelling situation or at extremely low speeds, perhaps?
Paurav: Yeah, in a way improvisation is very critical to any art-related aspect, any creative aspect. But Elvira is right in terms of choice: will we have enough choice? At the same time, Elvira, in some ways I see why this drive towards autonomous vehicles is being fuelled further and further: it's because of the productivity puzzle we are facing. We want to free up that human productivity as much as possible.
For example, if I'm sitting in London right now and I have to go to Southampton to my university, I have to drive for one and a half hours. If that one and a half hours is freed up while the car is driving itself and I'm just sitting in it, it could increase my productivity severalfold. So in some ways you're absolutely right that there should be an option for people who want to drive, and I'm pretty sure there would remain one, but it is still a far-flung future where we are going to see a fully autonomous car.
Sean: There's a listener out there screaming, “Just take the train,” I'm sure of it, right at this moment. But aside from that lone person.
Paurav: Which is what I do, which is what I do. But the point I wanted to make was that there are people who do not have those choices. And the train takes me three hours, so what do I do then?
Sean: True.
Elvira: But Paurav, why do you want to increase your productivity even more? Don't you think you produce more than enough? Don't you feel like your workload is already so- and I feel that the lockdown has increased the workload so much for so many academics and researchers. To me it's scary to be in a system where I actually have even more time to keep producing; I don't think that's healthy.
Paurav: That's very true.
Sean: Or maybe you just want to read a good book right?
Elvira: Yeah, yeah, I like that.
Sean: And for choice, if we refer back to Arnold Schwarzenegger's finest work, in Johnny Cab there will be a choice to drive: you just rip the robot out of the socket. For those who've not seen it, I think it's Total Recall, isn't it? You should check out Johnny Cab on YouTube right now and you'll understand; autonomous vehicles as seen in Hollywood a few years ago are quite amusing. I'd like to say thanks to all of you for taking part today. Thank you to Richard.
Richard: Thank you very much Sean.
Sean: Thank you to Paurav.
Paurav: Thank you again Sean.
Sean: And thank you Elvira.
Elvira: Yeah, thank you for having me, I really enjoyed this chat. Thanks.
Sean: If you want to get in touch with us here at the Living With AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Ltd. and it was presented by me, Sean Riley. Subscribe to us wherever you get your podcasts and we hope to see you again soon.
[01:06:54]