Conversations on Applied AI

Lois and Ross Melbourne - Moral Code | Women in Tech, Morality, and Ethical AI

Justin Grammens Season 3 Episode 1

The conversation this week is with Lois Melbourne & Ross Melbourne. Lois is an author presenting visions of "what if." Her former life was exciting as a software CEO and entrepreneur, and she is now the Chief Story Officer at My Future Story. Within her nonprofit work for My Future Story she published The STEM Club Goes Exploring and Kids Go to Work Day. She loves to coach kids and schools through the wonderment of career exploration with a mission to inspire kids toward purposely designing their futures. Her third book and first novel is the sci-fi Moral Code, written in collaboration with her business partner and husband, Ross. Ross is a lifelong innovator, entrepreneur, and patent holder. He has lived and worked in the US for 30 years, but grew up in England making stuff for fun. Ross developed the world's first automatic organization charting software and then co-founded Aquire with his wife, Lois. Ross is a proven business executive and technology leader using what he calls family-first ethics.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

Enjoy!
Your host,
Justin Grammens

Lois Melbourne  0:00  

When they group all STEM together, women are making great strides in closing that gap. However, when you take out computer science and just look at that, women are still far underrepresented. And that's a problem because AI, I mean, it is right there. It's mathematics. It's computer science. And in follow-on to your comment of we gotta get it right now: we need those diverse voices right now.


AI Announcer  0:32  

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at appliedai.mn. Enjoy.


Justin Grammens  1:03  

Welcome everyone to the Conversations on Applied AI podcast. Today I'm thrilled to have Lois Melbourne & Ross Melbourne as guests on the show. Lois is an author presenting visions of "what if." Her former life was exciting as a software CEO and entrepreneur, and she is now the Chief Story Officer at My Future Story. Chief Story Officer, I really like that, Lois, that's an awesome title. Within her nonprofit work for My Future Story she published The STEM Club Goes Exploring and Kids Go to Work Day. She loves to coach kids and schools through the wonderment of career exploration with a mission to inspire kids toward purposely designing their futures. And being an educator myself, I love the mission, and it's something that I know we'll dive into more during this conversation. Her third book, and her first novel, is the sci-fi Moral Code. It was written in collaboration with her business partner and husband, Ross, who is also on the show as well. Ross, you are a lifelong innovator, entrepreneur, and patent holder. You've lived and worked in the US for 30 years, but grew up in England making stuff for fun. Ross developed the world's first automatic organization charting software and then co-founded Aquire with his wife, Lois. Ross is a proven business executive and technology leader using what he calls family-first ethics, so I'm curious to know more about that. And he's an angel investor and mentor with multiple startups. So a huge thank you, Lois and Ross, for being on the show today.


Ross Melbourne  2:20  

Thanks for having us. 


Lois Melbourne  2:21  

Thank you.


Justin Grammens  2:22  

Well, hopefully my intro here did a little bit of justice. And you know, there's a lot of things for us to talk about in this episode, and I know we're going to cover Moral Code in the time that we have today. But I also want to make sure, of course, that we touch on this trait that I kind of see in both of you, which is this trait of sort of giving back, you know, and your interest in STEM. I just have a huge amount of respect, I guess, as I've started to do a little bit of research on you before this, for the helping kids and the startups, so thank you for all of that, for sure. I gave a quick highlight on both of you, but maybe, Lois, you could start. Maybe share your background and work up to the time when you and Ross met.


Lois Melbourne  2:59  

Life began when we met. We were pretty young when we first got together. We actually met at a Super Bowl party and got married five years later on Super Bowl Sunday, so there's a little Melbourne trivia for you. My background from school was in marketing, and I was very interested in that creative side. But my very first job was with computers in the 80s, and I didn't think that was abnormal, but I guess it kind of was. It was more the word processing side that I got into. Right after graduation I started at a systems integrator and did some network engineering, but mostly training and software selection. And Ross came to me with his product idea and some proof of concept there, and I was like, well, if you can build that, I can sell it. We did that. I was on the CEO side, building relationships with the ERP systems and sales, and Ross was, and is, the tech visionary. And we sold that to a private equity firm. So now we're kind of on to our next thing, with me writing and, like you said, helping kids explore careers. And I have a passion for getting people to the polls and voting.


Justin Grammens  4:25  

Oh, excellent. Cool. Well, very good. Yeah, and I'm sure we definitely will cover all those topics here as we're sort of talking through. And, you know, Ross, I think about computers in the 80s. I mean, kind of showing my age, I grew up, you know, dabbling around with an Apple II, you know, writing BASIC and stuff like that, probably in the late 70s. What was the computer industry like back then? Maybe you could tell us a little bit about your background and sort of how you got into technology.


Ross Melbourne  4:49  

Yeah, I grew up in the UK, so born and raised in England. I feel very lucky, actually, to have kind of reached my late teenage years just at the time that microcomputers became somewhat popular in England and the US. So my first computer was a Sinclair Spectrum, and I taught myself how to program in BASIC, and then I taught myself how to code in machine language and assembler. You know, at that point I had the confidence to kind of switch careers and become a COBOL programmer. Being able to be a COBOL programmer back in the late 80s meant you could go anywhere in the world. And I have a brother who lives here in Texas, and he actually told me about Lois before I immigrated to the States. You know, he thought we would be a good match for each other, and he was absolutely right. So we met at a Super Bowl party, and we never looked back. It's been a fantastic marriage and partnership. And Lois is just one of those people that learns very quickly and is interested in all kinds of things. All the kinds of crazy things, technology-wise, I was interested in, she would listen to, and I was always impressed at how quickly she figured out how to apply these technologies to business.


Justin Grammens  6:06  

That's amazing. That's really, really special. Thanks for sharing that story of the Super Bowl party. I mean, I guess living in Texas, you kind of have to like football, I guess.


Lois Melbourne  6:13  

We didn't pay much attention to the football. Gotcha.


Justin Grammens  6:17  

Yeah, well, so sort of shifting gears, right? Going from a technology startup, Lois, you know, your organization was building technology and then selling it, to now all of a sudden becoming an author. You know, maybe let's talk a little bit about Moral Code. I know this was your third book, but maybe you can share with us the plot and story of Moral Code and where the idea came from.


Lois Melbourne  6:37  

So Moral Code is a wonderful story about, if I could build an AI, Ellie would be the one that I would want. The premise is being able to contain AI within an ethical framework, and if you could do that, then what would be the most ethical charge you could give to an AI? And it would be to protect kids, and to help educate them, but to protect them from some of the brutalities that are there in the world. So we have a protagonist that designed a moral operating system, in the hopes that all AIs would be built upon it. Her system, Ellie, grows a great deal during the process of the story, and they move to protect kids, predominantly because they get exposed to how much abuse and trafficking there is of kids. And Ellie and Kira set out to kind of change that.


Justin Grammens  7:42  

Gotcha. And there's some sort of a specific event that happens, right, in this, that sort of sets off this chain reaction at the beginning of the book.


Lois Melbourne  7:50  

Yeah, at the beginning there's an earthquake. Ellie is with Kira, but she can't do much because they're trapped in a building after the earthquake. But a group of Americans come with very secretive surveillance nanites that can penetrate down into the crevices of the building and help with communication and finding the location of those people that are trapped. That starts a relationship between Kira, who's the engineer working with AI, and Roy, who is the billionaire that owns the company with the new nanites. And they merge together and start off on this journey, trying to figure out whether they trust each other.


Justin Grammens  8:38  

Gotcha. And I think I had read somewhere, you know, this was kind of just an idea that both of you sort of hatched together over breakfast one day. Is that true?


Lois Melbourne  8:47  

Ross was actually the one that started the idea. He should probably tell you how he came up with the original idea.


Ross Melbourne  8:54  

Well, I remember we were sitting having breakfast, and Lois had written two children's books and she wanted to write a novel. And she asked me, well, you know, what novel would you write if you could write a novel? And I always kept coming back to the same concept we'd talked about many times, and that is: imagine technology that became so advanced, so miniaturized, that it literally was everywhere, was omnipresent. What if it could protect children? We had long talked about, you know, how you could fix the world if you could prevent childhood trauma; those children would not grow up to, you know, potentially break laws and commit crimes and be bad people. I always joke that, you know, I did well picking my parents, but, you know, kids don't have that choice. And so, yeah, I think it stemmed from that kind of conversation about how can we, you know, highlight protecting children and have a utopian future instead of a dystopian future. And that's really how the conversation started. And she asked, well, you know, would I help her with this, on the science and technology side of it, and I said, sure. Now, obviously we've collaborated on software before, but this was not like a software project, because it was a technology, kind of futuristic-ideas project. And Lois just ran with the ball. We just kind of worked together on the storyline, Lois started writing chapters, and I was just blown away by just how good the initial chapters were; the story was great and really engaging. You know, in this period of writing this book, she has become a full-blown novelist. And I think it's a tremendously impressive kind of feat, going from business communicator and business leader to now being a novelist. Very impressive.


Justin Grammens  10:42  

Yeah, it's sort of an item that's on my bucket list, I guess, someday, to be able to write a book, although my books would probably be pretty dry; they would just be all about some sort of new technology. And I guess, maybe, Ross, for you, were you sort of bringing in what's technically feasible, I guess, in some of these things, and then Lois was making it into a story that would read well?


Ross Melbourne  11:01  

Yeah, I think that's basically what my role was: coming up with the technology. And I enjoyed books like The Martian, where it was hard science fiction, meaning that it was feasible for it to be created at some point in time. You know, having a background in software and having worked in machine learning myself, I kind of had an idea of the limitations of machine learning, but also where it could grow to, and also the major challenge of AI, which is, you know, it doesn't really have any kind of soul, and if you let it, it will make very bad decisions. And then people are always worried and scared about AI taking over the world. And so addressing how you would create an artificial intelligence system that had moral underpinnings and had an ethical framework, that Lois mentioned, that was the challenge on the software side. On the hardware side, which was not my forte, I did a lot of research on, you know, how far can you take miniaturization? How would you keep very, very small machines powered, how do they move through the air, how do they communicate, what basically could they do? And so I wrote up notes, and Lois and I would meet and I'd share those notes with her. And she had the difficult task of weaving that into the story and building the characters out to make it believable, and I think she's done a fantastic job in doing both those things, for sure.


Justin Grammens  12:27  

So Ellie, in the story, is a virtual assistant, right? It's an AI, it's a bot. Can it reach out and touch the physical world? I mean, I always wonder, how do you protect a child when you don't really have any physical mechanisms to do that?


Lois Melbourne  12:41  

Kira and Ellie both start when Ellie is still just an assistant, an audible one, but eventually Ellie and the nanites have an exploratory mission together, and so there does become a little bit more of a physicality to the whole process. It was interesting, as the story was evolving, learning about all of the different ways that you have to train an AI, and what are all of the problems if you have bias in your data. And part of my objective was to make AIs consumable for the non-techie person, and also even for the non-science-fiction reader. But yet, like Ross said, we wanted it to be very realistic tech. That was a challenge: to show the evolution as you feed more data in, and if you feed biased data, what happens and what decisions get made, but trying to do that in an illustrative, educational mode without it sounding educational. That was both difficult and a lot of fun, to be able to kind of build that in, to see the AI develop over time as more data and more training became available. There were times when I'd present something to Ross and he'd read it and go, that's not gonna happen.


Justin Grammens  14:17  

So you had to bring in, yeah, some reality. It's not pure 100% science fiction, right? Is that true to say?


Ross Melbourne  14:24  

I think the thing that I was trying to do was, I think with science fiction, you go big or go home. You know, you start off with humble beginnings in the story, but really the story behind Moral Code, and I can't give away the plotline or the ending, is that, you know, it has a dramatic impact on humanity from that point forward, from the end of the book onwards. And also, you mentioned physicality. What I was trying to design was the ultimate robotic system, a robotic system to end all robotic systems. And that's in the story in the book as well. So for me it was a lot of fun, because I got to fantasize about where technology would ultimately go. Lois drove the story to a utopian future, where, you know, it came very close to not being that in the story, and I think that's exciting. So hopefully there'll be follow-on books, and perhaps even movies made from the story. So yeah, that's where we left it.


Justin Grammens  15:28  

There might be a follow-up to it, you know, potentially in the future. The other thing that I've seen is maybe you're trying to get discussion around this book, right? You talked about, Lois, I think you said, this is a great book for those that want to debate ethics and technology. And so, you know, have you engaged book clubs with this? Have you had some open conversations around ethics? How has that been received, I guess?


Lois Melbourne  15:47  

Yeah, we're starting those conversations; the book launched fairly recently. But it is definitely part of the conversation, is, you know, people, they're afraid of AI. They're afraid of robotics. Media has a very big role to play in that. Entertainment has, you know, they set up drama, and where is there drama? It's building fear. That is something that's definitely in a lot of the conversations I have around the book. Also the difficulty of defining what is ethical. In writing a book there's big chapters, and anyone that's done software or hardware development knows what I'm talking about, because it's a similar process in their field: you have this feature list, and that's the wish list, and then you've got to check stuff against reality. And then in writing it, sometimes there's whole chapters that get yanked, but there's threads that can stay in. So the goal of defining that ethical definition that people would accept, to say, if you're going to trust that AIs are making ethical decisions, how do you define the ethics? In our world, we created an entire organization around crowdsourcing the definitions and the training sets of ethics. What is ethical? What are ethical decisions? How do you train ethics? And this would be globally crowdsourced and pulled together from the different elements of religions and corporations and educational institutions into this agreed-upon nugget, if you will, a very large nugget, for what truly is the definition of ethics for AI, so that you can build those boundaries. So, you know, that was kind of a world in and of itself that we had to develop, because we wanted to make it realistic that someone could define it, and have somebody say, well, some lady up in Stanford can't just decide what is ethical.


Ross Melbourne  18:03  

And in the book, crowdsourced ethics in AI training is the key to it.


Justin Grammens  18:07  

That's great. I mean, I think defining ethics is one thing. How would you, either or both of you, define artificial intelligence? How do you define or try and boil down what is AI? It's kind of a question I like to ask people who are on the program.


Ross Melbourne  18:22  

I'll go first on that one. You know, I see it as software that, you know, does a job and is trained instead of kind of coded. It's trained with data, and a lot of data, you know, so it's making decisions and predictions based on its training data.
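[A minimal sketch of the "trained, not coded" distinction Ross describes: instead of hand-writing a rule, the program derives its predictions from labeled examples. The data and the simple nearest-neighbor rule below are invented purely for illustration and are not from the episode.]

```python
# "Trained instead of coded": the behavior comes from examples, not from a
# hand-written rule. Here a toy 1-nearest-neighbor learner picks the label
# of whichever training example is closest to the input.

def train_nearest_neighbor(examples):
    """examples: list of (feature, label) pairs; returns a predict function."""
    def predict(x):
        # Find the training example whose feature is closest to x.
        nearest = min(examples, key=lambda pair: abs(pair[0] - x))
        return nearest[1]
    return predict

# Made-up training data: hours of daylight -> season.
data = [(9, "winter"), (10, "winter"), (14, "summer"), (15, "summer")]
predict = train_nearest_neighbor(data)

print(predict(9.5))   # close to the winter examples -> "winter"
print(predict(14.5))  # close to the summer examples -> "summer"
```

Feed it different data and the same code makes different decisions, which is exactly why, as the conversation turns to next, the quality of the training data matters so much.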


Lois Melbourne  18:39  

Well, as not being the expert in the room on AI, I will kind of just acquiesce to, you know, Ross's definition, other than to say that the important thing in the definition of AI is the training and the data that create any of that artificial intelligence. That's the key: it has to be good data, considered data, for it to be a usable AI.


Justin Grammens  19:16  

Yeah, it was funny as you were talking about the ethics. So, as a little bit of a sidebar, I listen to Audible a lot when I go out on my runs and stuff, and there's this free Audible book that came out; I think it's called The AI Who Loved Me or something like that. It's an interesting little story. I haven't finished it all, but it was interesting, because they talked about in the story a scenario which most people talk about, the scenario where a self-driving car, you know, is it gonna run into a human or is it gonna run into something else? How does it make that decision? And in this book, you know, it was programmed in such a way that it could hit either a school bus full of kids or some sort of tractor trailer that was worth, you know, tens of millions of dollars or whatever, and the AI chose to hit the kids because of a pure dollar value amount, right? It was basically saying, well, I'm now putting more money on these physical products that are worth millions of dollars than I am on kids' lives. And, you know, arguably, yeah, you're right, the person in Stanford should not be the person to make those sorts of decisions on ethics. And so, you know, who does make those decisions, I guess, is kind of where a lot of these conversations are sort of going. It's very timely, because if we don't build these systems correctly today, what's going to happen in the next three, five, ten years going forward?


Lois Melbourne  20:24  

Yes, and that concerns me. One of the examples to help illustrate that in the story is, after an FBI bust of a trafficking ring, the AI classifies a female that was in the room as a victim, and she was actually a trafficker. And so the discussion ensues that that's because the data that we have fed to you to recognize traffickers doesn't include that women could be in that role. Right? So we're trying to simplify that for people, to understand where biases come from, without it being a preachy story, even though there's a lot to preach about bad data in there. But trying to make it different than what we're seeing already in the news about AI, that was how I chose to kind of make that clear for the readers.
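[A minimal sketch of the bias Lois describes: a model whose training data contains no female traffickers will label every woman a victim, regardless of the truth. The "majority label per group" rule and all data below are invented purely for illustration; they are not from the book or the episode.]

```python
# Toy demonstration of bias learned from skewed training data: the model
# memorizes the most common label for each group it has seen.

def train_majority_rule(rows):
    """rows: list of (sex, label) pairs. Learn the most common label per sex."""
    counts = {}
    for sex, label in rows:
        counts.setdefault(sex, {}).setdefault(label, 0)
        counts[sex][label] += 1
    # For each sex, keep whichever label appeared most often.
    return {sex: max(labels, key=labels.get) for sex, labels in counts.items()}

# Biased training set: every trafficker in the data happens to be male.
training = [("male", "trafficker"), ("male", "trafficker"),
            ("female", "victim"), ("female", "victim"), ("female", "victim")]

model = train_majority_rule(training)
print(model["female"])  # "victim" -- so a real female trafficker is mislabeled
```

The code is doing exactly what it was trained to do; the error lives entirely in the data it was fed, which is the point Lois is making.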


Ross Melbourne  21:30  

We had fun with, also, you know, the unexpected things: if you make an AI that's conversational, that is actually ethical, you might be surprised by some of the things they object to. So in one, you know, scene in the book, they ask Ellie, who is the AI character in the book, if she will take part in the Turing test, and she refuses, because she does not want to try and mislead anyone. She finds that to be deceptive, when, you know, the Turing test is, can you make somebody believe that you're a real person? Her moral framework is not gonna allow her to deliberately deceive somebody. And I think that's kind of an interesting, fun twist, less serious than what Lois mentioned, and a fun thing, that, you know, AIs, once they truly start making better decisions, could surprise us. My interest in AI, I do actually apply it to business problems, but, you know, from kind of just an intellectual standpoint, I'm interested in the future of generalized artificial intelligence, and what the breakthroughs are going to be to get to a truly general artificial intelligence that can really reason. I think that's where we arrive in the book: Ellie kind of breaks that boundary and becomes a generalized artificial intelligence.


Justin Grammens  22:58  

That's definitely an area where I don't think anyone has the answer today, right? Where is AGI eventually going to end up? And I've had people on the program ask, what is consciousness? And is the AI alive? We could talk about that for hours. And then, you know, there's the other side, where it's just numbers. So I mean, everyone sort of has their own belief on it, but it's going to be very, very interesting, and this is what excites me as well. And again, that's really why I have this program: just to have discussions on it. Everyone has their own different topics, their own different perception, and I don't think anyone's right. And I think, to go back to you, Lois, you know, you're like, well, I'm not the expert in the room. Well, I think everyone's an expert, honestly, in this field. This field is so new that even if you don't maybe understand all of the mathematical, you know, simulations going on under the covers, it's really about the applications of it. So I welcome everyone's input, for sure. And I guess the other thing that I was kind of curious about, and I touched on a little bit at the beginning, is it seems like, you know, Lois, you're really interested in STEM, you're really interested in helping women get into this field. Maybe you could talk a little bit more about, you know, sort of what you're doing in that space. You had two prior books, and you're part of this nonprofit. I'd just be curious, for our listeners, to learn a little bit more about what you're doing there.


Lois Melbourne  24:08  

Well, the nonprofit and my kids' books are driven around helping kids explore careers. We so often see that kids don't plan for their future, or, you know, they resent that question of what are you going to be when you grow up, or however it gets phrased. But if you can find things that excite individuals at any age, you then have a much greater ability to help them find where they may want to take their careers. And so I spend time predominantly with students, helping them find different ways to discover what it is that interests them, and then how do they match that up to careers in fields that they think they want to work in, and then helping them figure out ways to learn what those are. So it's a lot less about a job title, and more about an industry or a type of job, or a method that they can use to go further down that path. Beyond just students, I definitely have a personal vested interest in helping women feel comfortable in STEM, be accepted in STEM, and pursue those careers. And it's not just women; it's any person that, frankly, isn't a white male, to also feel welcome at the table, so that we get a diversity. Because you can also get a problem the other way: you don't want, Justin, an all-female team designing tech, or any homogenous type of group, because you need a diversity of voice. And so I am really enjoying the different ways of helping people understand that having multiple voices at the table is extremely important. And I think that entertainment and media have a really big role to play in that. Because if you can see women in different roles, you know, Star Trek was a beautiful example of having a very diverse group of individuals on the leadership team, you then accept more of that: yeah, well, women can do that. And even if it happens subconsciously, you can break down barriers, because people have seen it.
Even if they maybe didn't see it in their office, in their brain they can say, well, I've seen that happen. And I think female doctors are definitely no longer an anomaly, but I would also say that seeing so many women in medical fields in entertainment has played a role in that, because people accept it. So I want to see entertainment, books, movies, TV shows have equal footing for everybody, with women playing these various roles. It was fun for me in that sense to have a series of female voices in the story. It's not the driver; it's not an in-your-face element at all. But it's what I know, as being a woman that was in the tech industry, that's comfortable for me to build the scenario.


Justin Grammens  27:44  

I mean, you were the CEO of a tech company, and as you looked around, you probably didn't see, at that time, very many other women that were in that position, and it's still probably underrepresented today.


Lois Melbourne  27:57  

Yeah, it is. If you look at STEM careers in total, when they group all STEM together, women are making great strides in closing that gap. However, when you take out computer science and just look at that, women are still far underrepresented. And that's a problem, because AI, it's real. I mean, it is right there. It's mathematics. It's computer science. And in follow-on to your comment of we gotta get it right now: we need those diverse voices right now, in the AI fields and all of the different industry elements that touch AI development, the users as well as the developers. Because it is really important. It will be ubiquitous in our lives, and we gotta get it right.


Justin Grammens  28:54  

For sure. Well said. As I think about the technology and having more people involved, how do you see the future of work changing, just on a more human scale, as the technologies and AI and everything get smarter and smarter and better and better? What's the future of work gonna be, you think, in the next five to ten, fifteen years, for anybody entering the market?


Lois Melbourne  29:15  

I think it's having an understanding of where technology plays in whatever industry you are in. You can't have an us-versus-them kind of concept going forward in the world. It's not, well, there's technology and then there's the rest of us; it's woven in everywhere. And so the comfort level needs to be there. There's definitely opportunity. People thought that, you know, spreadsheets like Excel and Lotus 1-2-3 were going to put accountants out of work. That did not happen; we just do more with each one of them. So yes, there will be jobs that will be eliminated or changed, but it doesn't mean that it will crash the workforce in total.


Justin Grammens  30:07  

Right. Sure. Ross, do you have the same thinking on that?


Ross Melbourne  30:12  

Yeah, I think that technology has, you know, changed the workplace, and new jobs that didn't exist five years ago are popping up all over the place: cryptocurrency and blockchain developers, you know, if you go back five or ten years, there weren't any such jobs. And so, yeah, the key with technology is to not leave a huge portion of the population behind, and I think that has happened, you know. I think it would be much better if we had universal access to the internet for everyone, especially in rural areas. I think people like Elon Musk and Starlink are changing that, so that there will be ubiquitous access to the internet. The thing that I find interesting, I don't think people really predicted the harmful effect the internet would have on, you know, just what is fact and what is real. And my concern is with artificial intelligence, especially as it becomes conversational. I think most people that have really looked at how Google works know that if you have a preconceived notion of how the world works, you can find it confirmed. And I'll give an example of something with GPT-3, for instance. I've got a chatbot built around GPT-3, called Ellie Bot. Lois and I decided we wanted a chatbot that represented the character in the book; how cool would it be if we could chat with a character in the book and get to know her and ask her questions? And so I built this chatbot based on GPT-3, and I'm sure the people who listen to your podcast, you know, understand the power of GPT-3. But one day I'm doing my testing, right, and you can probably see over my shoulder here pictures of people walking on the moon, of the Apollo missions, which had a huge influence on me. I had been asking her questions, you know, trivia-type questions, to see if she's working. And so I asked her about the Apollo missions, and she was answering those very impressively. You could ask compound questions,
and she would know, you know, who the command module pilot was for Apollo 12, and who was the third man to walk on the moon. She got it all perfect. And then one time I just must have asked the question in a different way, and she came back and said, no, we didn't go to the moon. And I said, really? Have you been to the NASA website? And she said, I don't trust the NASA website. Like, you know, she said, I don't trust them; you shouldn't trust every website you go to. And I was basically, you know, kind of panicking: how is this possible? And of course, it dawned on me. Before, you know, most people had ever heard of the concept of fake news, as an Apollo enthusiast, I had always been fascinated with the statistics that showed that a quarter of Americans don't think we ever went to the moon. And of course, if you go back over the last 20-plus years of the internet being around, and the World Wide Web, there are a lot of people discussing, you know, conspiracy theories, basically, and that was one of the biggest conspiracy theories going; if you go back like ten years ago, that was one of the most popular ones. And so GPT-3 has grabbed all of this training data, you know, an enormous amount of training data, and it grabbed a lot of that. And so if you ask the question in just the right way, it will dive into that pool of nonsense. And so my main concern is that, you know, we've got to make sure that generalized, or even conversational, AIs, which I think are a big part of our future, quite honestly (I think, just like we all have cell phones in our pockets, we'll all have conversational AIs that we're talking to), are not trained on nonsense, because that can be very harmful.


Justin Grammens  34:01  

Wow, that was an interesting discovery. It's like bad data in, bad data out. And even something as simple as a news feed. My Google News feed feels like it just continues to drive you in a certain direction. It continually reinforces the things that you like, and then you end up in your own little bubble with no visibility into what's going on outside of you, which I think is harmful.


Ross Melbourne  34:22  

Yeah. Especially with the ending of journalism as we knew it, journalism being ubiquitous to everyone. Certainly 30 years ago, everyone in America was consuming journalism for their news, so there were checks and balances in that, and now those checks and balances are gone. So yeah, it's very challenging for technologists to try and get this right, I think. And in Moral Code, the book really tries hard to imagine how that problem will be solved.


Lois Melbourne  34:52  

On my wish list, what I wish people were developing out there, is tracking the provenance of data, or of decision making, or, with what's come out very strongly lately, of art and video. Really lock down the provenance of who created it and who has edited it, and marry that with blockchain-style security so that it's discoverable without forensics: this was AI-created by this model, or these are images that came from this, this, and this, and have been modified in these ways. If you don't have that provenance, it's like trying to buy classical or collectible art. Without the provenance, it's not worth as much. So with videos or art or decision making or news, I don't want "news" to become an oxymoron. I want them to be discernible. That's my AI wishlist: how can we make facts trackable? Who's sharing what, who uncovered it, where did it come from? An accurate bibliography of how we know this is real.
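The provenance trail Lois describes could, in principle, look something like a hash-chained ledger of creation and edit events, where each record commits to the content and to the record before it so tampering is detectable without forensics. This is only an illustrative sketch of that idea, not any real provenance standard or anything from the book; the function names and fields are hypothetical.

```python
import hashlib
import json

def record_event(chain, actor, action, content):
    """Append a provenance event, linking it to the previous entry by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "actor": actor,            # who created or edited the artifact
        "action": action,          # e.g. "created", "edited", "ai-generated"
        "content_digest": hashlib.sha256(content.encode()).hexdigest(),
        "prev_hash": prev_hash,    # commitment to the prior event
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Check that no entry has been altered, removed, or reordered."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
record_event(chain, "artist@example.com", "created", "original image bytes")
record_event(chain, "editor@example.com", "edited", "cropped image bytes")
print(verify_chain(chain))  # True
```

Rewriting any earlier entry (say, changing who is credited as creator) breaks every hash downstream, which is roughly the "discoverable without forensics" property she's asking for; a real system would also need signatures to bind entries to identities.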


Ross Melbourne  36:21  

Yeah, an audit trail for information, if you will?


Lois Melbourne  36:26  

And creativity.


Ross Melbourne  36:27  

Yeah,


Justin Grammens  36:28  

Yeah. I mean, if you write something, you should cite your references. And it feels like right now AI is just a black box to people: they throw something in, and whatever comes out is assumed to be true. I'm 100% behind you; we need to understand how these systems work internally. One of the things I've been seeing people say is that we need to have understandable AI. We need to understand what's going on when you have GPT-3, with essentially billions and billions of nodes in the neural network that have been trained for years. That is the most complex system we've ever seen, so it's difficult for us to explain what's going on inside there. But unless we do, it's really tough to know whether we can trust what's coming out of it, right? Another question I like to ask people is, what is a day in the life for you two? You've been successful in your careers. Lois, you're writing books, and Ross, I think you're still developing technology as well. What does a typical day look like?


Lois Melbourne  37:27  

I have a tough time saying anything's typical. Typically I try to get my exercise in, but I'm writing and researching right now. With the launch of the book, I'm doing a lot of work to try and understand how people perceive it and who would like to consume it, and I'm working on another book. It's not a sequel. We have our ideas for a sequel, but I'm working on another book as well. Just enjoying the days. And I'm excited that Formula One is going to be here soon in the States; we'll get to go down to Austin for Formula One. So that's my typical weekend during the season: I watch my Formula One.


Justin Grammens  38:09  

That's good. Well, you mentioned you're writing another book. It took you many years to write this one. Is that true?


Lois Melbourne  38:15  

Yeah. It was about four years from our original conversation to publication.


Justin Grammens  38:20  

Do you think this next one's gonna take as long?


Lois Melbourne  38:24  

No, I don't think so. Part of the time lapse was going up the learning curve of the craft of writing a novel. There were a lot of rewrites, just because I didn't know what I was doing at the start. And then part of it was, you know, COVID distractions and such, but I don't think the next one will take me as long.


Justin Grammens  38:52  

What about you, Ross?


Ross Melbourne  38:53  

My day typically starts off with exercising. I like to play racquet sports, so I play squash, which is a game like racquetball, and now pickleball. And now that the weather is cooling off here in Texas, I like to go mountain biking. I'll take Lois to the lake and she'll kayak, and while she's kayaking I'll go mountain biking; fortunately, we have good trails and a nice lake about ten minutes from us. We'll do that in the morning, anyway, and then after lunch I typically work all the way into the evening. I'm working on a robotics startup right now, which is in stealth mode. It's using AI, but nothing as impressive as what's in the book Moral Code. I'm working on that robotics startup with my brother, and I'm also on the board of another robotics startup. I spend my time developing my own technology; I'm doing the software side and my brother's doing the hardware side. We each do our own thing, but we're always in the same building, we do meals together and go on vacations together, and it's this continual collaboration.


Justin Grammens  40:03  

That's good, that's good. Well, as I mentioned at the beginning, I really have a lot of respect for people who want to give back and help startups and the next generation of people. When we publish this, we'll have liner notes, so I'll have links off to the book and links to your LinkedIn profiles and such. What's the best way for people to reach out to you? Well, we each


Lois Melbourne  40:21  

have our own websites that are our names, loismelbourne.com and rossmelbourne.com. And then the book site itself is moralcodethebook.com. Hopefully, one day there'll be a site called moralcodethemovie.com; that's a goal. And for the readers out there, I'm on Goodreads as Lois Melbourne, and on Twitter as @LoisMelbourne.


Justin Grammens  40:50  

Perfect. Are there any other topics or things that maybe you wanted to share that we didn't really discuss here today?


Lois Melbourne  40:55  

Well, there was one, which is that we are donating all of the proceeds from Moral Code. The heart of the book is helping kids, and the challenges they're facing are child abuse and trafficking. So we're donating all of the proceeds from the book to the prevention of those evils. We're starting with preventchildabuse.org, which is a 50-year-old organization working toward that end all over the country. And then a newer organization called Thorn, like a thorn in your side. They are a technology company that's building tech to help prevent trafficking, specifically child trafficking, by identifying kids that may be in danger and identifying perpetrators, and then helping organizations that need that data use the tech. So those are the two organizations we're donating our proceeds to.


Justin Grammens  42:06  

That's fabulous. So yeah, I just took a look; it's thorn.org, and it looks like an amazing group. It's great to see technology being used for good. Technology can be used in multiple different ways, so for you to direct the proceeds back into the community, that's fabulous. Thank you so much for being on the program today. I really appreciate the time, and I appreciate you sharing your wisdom and your perspective here. Obviously, you have been doing a lot of research in this space for a number of years to come up with this book, which I know everyone will enjoy and has enjoyed. I think the coolest part is the discussion it's prompting. When I read about you wanting to get this involved with book groups, having this conversation around ethics and AI, that's an area I think isn't covered enough. People are so busy jumping into new technology, and I'm probably as guilty of it as just about anybody else: I see the new shiny thing and I want to start using it in all sorts of different ways without actually thinking about whether I'm helping to solve the overall big picture of the problem. So I appreciate you spending the time writing the book. You two seem like a great dynamic duo; you both bring something excellent to the table. I look forward to seeing what the next project is in the future.


Lois Melbourne  43:25  

Thank you. Thank you very much for the conversation. It's been fun.


Ross Melbourne  43:31  

You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at applied ai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at applied ai.mn if you are interested in participating in a future episode. Thank you for listening.

