Tech Travels

EP11: Engineering the future: Ethical AI and Inclusive Innovation with CTO Swathi Young

April 19, 2024 Steve Woodard Season 1 Episode 11

When curiosity meets the cutting edge of technology, transformational ideas emerge. Swathi Young, CTO of Alwyn Corporation and esteemed advisor at SustainChain, embodies this fusion as she recounts her evolution from a childhood steeped in wonder in India to becoming a vanguard in the realm of AI and machine learning. Her narrative is a testament to the potent mix of inquisitiveness and diversity in shaping technology that respects and enriches the human experience. As we traverse Swathi's story, the conversation lights up with her insights on the necessity of diversity in algorithm creation and her pivotal work on ethical AI frameworks for the US government, emphasizing the tech's role in societal betterment.

Imagine a world where every AI decision is as transparent as a nutritional label. This episode takes you through the ethical minefield of artificial intelligence, spotlighting the often-underestimated issue of bias and its ramifications across critical sectors like criminal justice and employment. Swathi shares the collaborative zeal among federal leaders, industry experts, and academics to unravel these biases, underscoring the essential balance of transparency and human oversight. The discourse extends to governance, analyzing President Biden's executive order on AI transparency and the implications for sensitive domains. A clarion call for inclusivity and diversity in AI resonates throughout, highlighting the imperative for equitable tech strategies.

Peering into the crystal ball of healthcare AI, we probe the significance of broad-based data collection and the need for diversity within teams to reflect all voices in the data—key to the efficacy of healthcare outcomes. Swathi then paints a picture of AI's burgeoning role in enhancing patient care, from hospital management to personalized medicine. The finale of our discussion propels us into the future of AI technology, with Swathi Young contemplating the arrival of artificial general intelligence and the urgency for robust ethical frameworks. Her insights not only sketch the rapid progression of AI but also invite listeners into a dialogue shaping the movement toward a technologically responsible society.

About Swathi Young

Connect with Swathi on LinkedIn
https://www.linkedin.com/in/swathiyoung/

Subscribe to Swathi’s YouTube channel
https://www.youtube.com/channel/UCK8s69t8fdrR6a-h78yZKqw

Follow Swathi on TikTok
https://www.tiktok.com/@pinkintech

Support the Show.



Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/


Transcript

Speaker 1:

Great question, because we often assume, when it comes to ethical AI or responsible AI, that it's about technology tools, but a lot of it is outside the technology tools. It's about who is writing the algorithms. I wrote a Forbes article with a lot of statistics around this, but in a nutshell: the more diverse the team that writes the algorithms, the better. Think of it as diversity by design.

Speaker 2:

Welcome to Tech Travels, hosted by seasoned tech enthusiast and industry expert Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your guide through the ever-evolving world of innovation. Join us as we embark on an insightful journey, exploring the past, present and future of tech under Steve's expert guidance.

Speaker 3:

Welcome back, fellow travelers. Today we have the honor of hosting a true industry pioneer: Swathi Young, CTO of Alwyn Corporation and CTO Advisor at SustainChain. Swathi excels at implementing data architectures and artificial intelligence to improve efficiency across the transportation, healthcare and federal sectors, and she is passionate about using AI to drive sustainable solutions and positive social impact. Swathi, it's great to have you here to share your insights on artificial intelligence, a potential game changer. Can you tell us a little bit about yourself, this amazing journey, and how you got into the topic of AI?

Speaker 1:

Awesome, thank you so much for having me, Steve. I think an overview of my experience would be a good starting point, so I'll take you way back. A couple of decades ago I was a little girl in India, playing carefree among mango trees, who frankly didn't have a vision of her future. I was into both STEM and the arts: I was passionate about science, but I was also passionate about Indian classical dance and drama. So as I was getting into high school and college, I really wondered where there was a place for me that could balance my passions. I was thinking about journalism; I even explored archaeology. But this was way back, the early 90s in India, which is different from the India of today, and we didn't have those options readily available in most cities. Archaeology is a rare field even today, but journalism could have been a possibility if I were in another country. My mom, I want to say, was very prescient. She said, just do your engineering, and in the future you might have a chance to do what you want. And who knew, the way she thought might have been a vision, because in today's day and age you don't need to be a journalist to start a blog, a podcast, TikTok videos or a YouTube channel, right? So I did my engineering, listened to my mother, and got into the field of technology straight out of engineering school, in a job heads-down writing software code. This was with Oracle, which had an India Development Center where we wrote software for Oracle applications.

Speaker 1:

But my curiosity was not fulfilled. I was very curious about where the software I was writing was being used, and I did not know. People talked in big terms: it's used for procurement, it's used in supply chain. As a 21-year-old I was so confused. I had never seen what a procurement office looks like; I didn't know what a purchase order was; I didn't know supply chain. So my curiosity led me into consulting. I said, I want to see where the software is being used, and I pivoted from developing software to implementing software. That meant I had to travel a lot, from India to the US, and from India to Belgium, where I did a very interesting consulting project for GE.

Speaker 1:

Every time I look back on my career, it's my curiosity that has driven me to take different steps and led to various outcomes. The same thing holds true for AI. It was eight years back when I first started hearing about machine learning and artificial intelligence. With a technology background in software development, I deal a lot with data; I did data warehouse projects way back in the day. So when I learned about machine learning eight years back, I thought, I know what data is and I know the potential of data, so let me dig deep into the potential of machine learning. That's how I got into AI.

Speaker 1:

Again, my curiosity drove me to learn a lot. I'm an autodidact; I did a lot of Udemy and Udacity courses on my own. But the main thing is to see where the rubber meets the road, so I was grateful to Alwyn Corporation for giving me multiple opportunities to explore machine learning for research purposes, initially in lung cancer research, where we did a lot of machine learning projects. Then I got very deeply involved with the ethical AI framework for the US government, a volunteer project where I worked with multiple people on co-authoring the framework. That's how it led me to where I am today, and for the last 10 years I've held leadership positions as a CTO, leading large and small implementation teams for startups, midsize companies, and even large organizations like Amtrak.

Speaker 3:

Wow. So, Swathi, it definitely seems like you have been there in the trenches, from the very early beginnings of machine learning, as you mentioned, about eight years ago. Getting into the space with that background in understanding data is incredible, because with artificial intelligence and machine learning you've got to be really proficient at understanding the data: how it's wrangled together, how you work with data classifications and data models. It definitely verges on the work of a data scientist.

Speaker 3:

So you definitely wear multiple hats. You also mentioned the ethical framework you worked on with the US government. I'm really interested to explore more about that and what you're seeing across the industry. So let's explore that a little bit. When we talk about an ethical framework, can you illuminate for us a little what that really means?

Speaker 1:

Yeah, that's a very interesting topic and very pertinent to the dialogue that's happening, especially being in Washington DC, with the Hill, the government and big tech in conversation. We started this initiative a year before the pandemic, with multiple leaders working in the federal IT space, CIOs and CTOs in federal agencies, and we all came together. It was a collaboration between industry, which I represented, academia, through some university membership, and these federal leaders. We said, hey, we are all seeing emerging technologies like artificial intelligence coming up, but those of us who are technology-oriented are also aware that with large data comes the challenge of data biases. We are human and we have biases, and those will perpetuate into the data and actually exacerbate the problem. Machine learning, just to reiterate, is based on large quantities of data. Even for those of you enjoying ChatGPT, they built a large language model using existing data off of the internet. Without data there is no artificial intelligence, whether it is synthetically created or drawn from the existing corpus.

Speaker 1:

So we all knew, as people working in this space, that something had to be done, and we started this dialogue about various aspects of ethical considerations: fairness, how to deal with bias, transparency, responsible use of AI. We divvied up, formed subcommittees and created working groups where we came together, debated and discussed: first, what are the problems and how do you identify them? Secondly, since it's not enough to identify them, how do you mitigate them? And thirdly, how do you educate and advocate about these issues, which could lead to really bad outcomes, especially for folks who already face them? We know there are certain sectors of society that already have biases perpetuated against them, and that would be exacerbated, because we are building machine learning on top of data. I'll give you a quick example. In criminal justice, a machine learning algorithm would look into historical data, because machine learning algorithms are based on historical data, and it could be biased against African-Americans, because it might see patterns in the data of a lot of misdemeanors, which is not being very fair to the person who is present in a court.

Speaker 1:

And I know ProPublica has published articles about this: a teenage girl who had stolen a bike went in front of a judge on the question of whether it was a bailable offense or not, and there were algorithmic recommendations on that question even though it was a misdemeanor and she didn't have any history, and they compared her case to that of a similar-looking man and how different those outcomes were.

Speaker 1:

So it can have far-reaching consequences in society, whether known or unknown, especially if the people or organizations using these algorithms don't even declare that they do.

Speaker 1:

If an organization recruiting you used an algorithm to decide whether you should be interviewed or not, some bias entered into that decision-making process. So it's very important for those of us in the technology field to start educating and advocating about the risks and the bias that could be perpetuated, and this is why we always say there should be a human in the loop for decision-making. But at the same time, that human in the loop needs transparency into the attributes and factors that went into consideration before the algorithm decided, hey, interview this person, do not interview this person. So there is an onus on technologists, but I would not leave out the non-technologists. We have to have an open dialogue with the legal team, with HR, with everyone in the room, about the risks and the inherent possibilities of bias.
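To make that kind of transparency concrete, here is a minimal sketch in Python of what a human reviewer might be shown for a simple screening model. All feature names and data are invented for illustration; this is not any real screening system.

```python
# A minimal, hypothetical sketch of human-in-the-loop transparency: for a
# simple screening model, show which input attributes pushed the decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["years_experience", "num_certifications", "referral", "employment_gap"]
X = rng.normal(size=(500, 4))
# Synthetic "historical" labels -- in real data these may encode past bias.
y = (X[:, 0] + 0.5 * X[:, 2] - 0.3 * X[:, 3] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

candidate = X[0]
decision = model.predict([candidate])[0]
# Per-attribute contribution to the decision score (coefficient * value),
# sorted by magnitude so a reviewer sees the biggest factors first.
contributions = model.coef_[0] * candidate
print(f"Recommend interview: {bool(decision)}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:20s} {c:+.2f}")
```

Real systems are rarely this simple, but the principle is the same: the reviewer sees the factors behind the recommendation, not just the yes/no.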

Speaker 3:

It's definitely a concern. I think there is a lot of bias built into artificial intelligence, and it's having a social impact across different sectors and especially across different communities. I would love to explore the idea of reaching a consensus between big tech and government entities around helping with this. Is there ever a way we see AI built without a bias? Or, let me flip the coin: is there a good type of bias built into artificial intelligence?

Speaker 1:

I recently read a use case of good bias, where a system recommended somebody who had never had an opportunity for a job before. But before we go down that path, one thing I would say is that we can learn a lot from the healthcare and biomedical industry. Today, if you ask any biomedical or healthcare researcher, there is a possibility of gene editing; there is a possibility of brain transplants. A lot of things are, technically and biologically speaking, possible, but they are not allowed because of bioethics principles, laws and regulations. I think we can learn from that industry and say, yes, there are possibilities of using AI for a lot of different use cases, but some of them might not be allowed. Compared to how far the biomedical sciences have come, we are in the nascent stages of AI moderation and establishing guardrails.

Speaker 1:

I know Biden has issued the executive order; I've read that long paper multiple times. Where we are right now is not even at the level of warnings; it's more like, these are the considerations.

Speaker 1:

The next step should be to start giving warnings and labels. I think just this past week broadband providers were required to issue labels about their speeds and so on. So we should start issuing labels for AI, think of them like the nutrition labels on your food: how transparent is the algorithm, what are the sources of data, things like that. Since it's a complex topic, both technologically and research-wise, we are not there yet, but I see a future where we take a leaf from the biomedical sciences and say, well, we shouldn't be using it for criminal justice. Can we use it to expedite all our law cases? Yes, there is possibility and potential, and it could totally have the capability. But should we use it? That is the question, and maybe not, just as we don't use gene editing right now although the science exists. So I think that's where we would go.
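This "nutrition label" idea is close to what the machine learning community calls model cards. As a rough, hypothetical illustration, with every field value invented, such a label might capture something like this:

```python
# A rough illustration of an AI "nutrition label", loosely modeled on the
# model-card idea from the ML literature. All field values are invented.
from dataclasses import dataclass

@dataclass
class AILabel:
    model_name: str
    intended_use: str
    data_sources: list
    known_limitations: list
    human_oversight: str

label = AILabel(
    model_name="resume-screener-v2",  # hypothetical system
    intended_use="Rank applications for human review only",
    data_sources=["Internal applications 2015-2023", "Public job-board data"],
    known_limitations=["Under-represents career-gap candidates",
                       "Not validated outside the US"],
    human_oversight="A recruiter reviews every rejection recommendation",
)
print(label)
```

The point of such a label, like the food version, is that a non-specialist can read it without understanding the math inside.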

Speaker 3:

What are some of the challenges that pop up when you navigate conversations between people in the tech sphere and people in the government sphere? You've got people with a very deep level of technical proficiency in artificial intelligence on one side, and people who work on the government side on the other. Is it more of a learning curve, where they maybe don't fully understand the tech? Or is it that they understand, but they're trying to wrangle how to implement policies around the tech? What are you seeing?

Speaker 1:

A bit of both. Yeah, it's a good point. We are actually having some conversations with people on the Hill for another project I'm working on with Georgetown, where I'm pursuing my executive MBA and doing a capstone project, so we've had a lot of conversations in the AI governance space.

Speaker 1:

I think it's a bit of both, because the complexity of AI is that it's a general utility, analogous to electricity, so the use cases are a multitude; you can't even begin documenting them, just like with electricity. And to the extent I can push the analogy, the complexity of producing it is also comprehensive. So there is a bit of that, and a lot of people on the Hill are coming up to speed and getting educated. This is where I come in with my videos, trying to do education and advocacy and demystify some of the AI, because folks who are not in technology don't need to know the science or the math behind the algorithms to the extent they think they do. We don't go and look at how electric current is produced or how the electrons move. I was reading a book about electricity last night to my eight-year-old, and I thought, oh my God, what a complex process it is for current to flow through a wire. Similarly, we might not need to know the math behind it.

Speaker 1:

What is more important is to know what inputs go into the decision-making. For example, if it's used for recruitment, the questions are: Are you looking at all the historical data of the candidates who applied and the candidates who were rejected? Are you looking at the historical data of your organization and the promotions that have happened there? Are you looking at gender data? Are you obfuscating the gender data? Are you anonymizing the data? These are the logical questions anyone in that particular field should be asking, and that is where I try to have a lot of conversations with people.

Speaker 1:

You are a subject matter and domain expert in your area, and the types of questions you should ask about AI are these: What are the inputs? How does this black box called AI, with its algorithms, process them? As in, what is the weightage of the attributes?

Speaker 1:

Taking the recruitment case again: if you're feeding a thousand of your employees as input to your recruitment algorithm, are you anonymizing gender, and what are the outcomes? Even if you anonymize gender, maybe women have been promoted less; what are you doing about that? Because your outcome recommendation might be whether this person gets interviewed or not. So if you're a subject matter expert, learn to ask the right questions about the inputs, the processing or business logic that happens, and the outputs. Why is the output this way? Why is it a false positive or a false negative, and what is the rationale behind it? If you can get to that level, I think you're in a good place.
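As a concrete illustration of interrogating the outputs, here is a minimal, hypothetical audit sketch in Python, with invented data, that compares selection rates and false-negative rates across groups, the kind of check the questions above are driving at:

```python
# A minimal, hypothetical outcome audit: compare selection and error rates
# across groups for a screening model's recommendations. Data is invented.
import pandas as pd

df = pd.DataFrame({
    "gender":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "recommended": [0,    1,   0,   0,   1,   1,   0,   1],  # model output
    "qualified":   [1,    1,   0,   1,   1,   0,   0,   1],  # ground truth
})

for gender, g in df.groupby("gender"):
    selection_rate = g["recommended"].mean()
    # False negatives: qualified candidates the model screened out.
    qualified = g["qualified"] == 1
    fn_rate = (qualified & (g["recommended"] == 0)).sum() / max(qualified.sum(), 1)
    print(f"{gender}: selection rate {selection_rate:.2f}, "
          f"false-negative rate {fn_rate:.2f}")
```

Even on this toy data the gap shows up: one group is selected far less often and has more qualified candidates screened out, exactly the pattern a subject matter expert should be asking the vendor to explain.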

Speaker 3:

That's incredible, and I think that's the hard part, right? Even from the technologist's perspective, even for some of us who are in IT, there's still almost an ocean of questions to ask, and you can get stuck in paralysis by analysis, not knowing the right questions. It's interesting, because you've laid out a series of questions you should ask and the kinds of outputs you should look to derive from that type of data. I want to touch on the inclusion and diversity part of an AI strategy. Can you paint a picture of what this looks like in practice?

Speaker 1:

Yeah, I think it's a great question, because when it comes to ethical AI or responsible AI we often overlook who is writing the AI algorithms and who is doing AI research, and there are a lot of statistics around this, which I covered in my Forbes article. But in a nutshell, a couple of things. The more diverse the team that writes these algorithms, the more inclusive your outcomes will be. Think of it as diversity by design: you have diverse perspectives coming into play. For example, with large language models, if somebody is from Russia, then when the model has to be used with the Russian language, they are more conversant in the nuances of that language. Similarly, diverse machine learning engineering teams bring that nuanced approach to design and ask the right questions about being inclusive. So that's being inclusive in your machine learning team.

The second aspect is being inclusive in your data sets and data sources. A very small example: think of doing analysis on healthcare data, where a government agency gets healthcare data from all the hospitals in the US, but there are some regions and pockets of the US not sending data, and the agency could hypothetically move on without it. The question is, why are certain regions not giving data? The reason could be a lack of access to healthcare in those pockets. So by asking a technology question about your data sources, you hit on a societal point and challenge. This is why you have to have an inclusive set of people in the conversation when you design an algorithm: subject matter experts and healthcare officials who can say, hey, you're about to omit that data because none exists, but it doesn't exist because certain communities lack access to healthcare, and maybe there is another way to get the data. Otherwise, by omission, you're excluding that population from your outcomes.
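A minimal sketch of that coverage check, with invented regions and counts, might look like this: flag the regions that are missing or thin before training on the data, then ask why the gap exists.

```python
# A minimal sketch of the coverage check described above: flag regions that
# are missing or thin in a dataset before modeling on it. Data is invented.
import pandas as pd

expected_regions = {"Northeast", "Midwest", "South", "West"}

records = pd.DataFrame({
    "region": ["Northeast"] * 120 + ["Midwest"] * 95 + ["South"] * 4,
})

counts = records["region"].value_counts()
missing = expected_regions - set(counts.index)
thin = counts[counts < 30]  # arbitrary threshold for "too few to trust"

print("Missing regions:", missing or "none")   # e.g. {'West'}
print("Under-represented regions:\n", thin)    # e.g. South: 4
# Before modeling, ask *why* these gaps exist, e.g. lack of access to
# care, rather than silently dropping those populations.
```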

Speaker 1:

Those kinds of questions are very useful for including everybody: a diverse team plus your subject matter experts. And the last point I want to make is that it's not enough to have a diverse, inclusive team; you should have an inclusive culture in your organization. As a woman leader and a woman of color, I can speak personally: I will not thrive if the culture is not inclusive. You have to be very inclusive and give opportunities to the people who are part of your organization.

Speaker 3:

Yeah, 100% agree. You've mentioned healthcare a couple of times, and I wanted to explore some of those things. You mentioned data, and there's more and more emergence of artificial intelligence being used by practitioners. For example, doctors working in surgical centers and emergency care units can triage patients who come in with certain symptoms, look across a person's medical history to see what medications they're taking, and then take a predictive and prescriptive approach to care. And as you mentioned, a human still has to be in the room: the doctor is still looking, interpreting, and making the decision based on the recommendations. What are you seeing in the healthcare space around the adoption of AI in settings like emergency rooms or urgent care centers?

Speaker 1:

I can't speak to urgent care or emergency care, because I have no personal experience there and haven't read any in-depth use cases, but there's a lot of activity happening in areas such as hospital management, research to improve patient outcomes, and pharma.

Speaker 1:

The big pharma companies are already leveraging it to reduce the time for clinical trials; we know that is the long pole. There's also a lot of activity and research in hyper-personalized medication, because we know the medicine you take and the medicine I take will react differently because of your genes and my genes, since we come from different lineages. With respect to diagnosis, there are still a lot of handwritten notes and things of that nature, and natural language processing is being used on them to improve diagnostic accuracy. And the biggest area of adoption is radiology. There is one interesting project I did in lung cancer research where we took CT scan images of lung cancer patients and automated the reading of the scans using machine learning and vision processing. So there's a lot happening in that area as well.
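For readers curious what such a pipeline involves, here is a generic sketch, not the actual project described above, of adapting a standard vision backbone to classify CT slices. PyTorch and torchvision are assumed, and a random tensor stands in for real scan data:

```python
# A generic sketch (not the project's actual pipeline) of adapting a
# standard vision model to classify CT images, e.g. nodule vs. clear.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                 # randomly initialized backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: nodule / clear

# CT slices are grayscale; replicate to 3 channels to fit the backbone.
batch = torch.randn(4, 1, 224, 224).repeat(1, 3, 1, 1)

logits = model(batch)            # shape: (4, 2)
probs = torch.softmax(logits, dim=1)
print(probs.shape)               # torch.Size([4, 2])
# In practice such a model's output feeds a radiologist's review,
# keeping a human in the loop, rather than replacing it.
```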

Speaker 3:

That's incredible to see. What are your predictions for the next five years? Where do you see AI trending, and where do you see government meeting the technologists, almost at a tipping point where there's a joint partnership between government and the tech industry, where they can come together, collaborate, and actually put some real regulations around AI? Where do you see that happening in the next couple of years?

Speaker 1:

Yeah, that's a very interesting question. I recently wrote a very detailed blog after the European Union came up with very stringent measures around AI use cases and implementations. Maybe the US will follow suit, though obviously we err on the side of innovation rather than establishing more regulation. The thing I see is the rate at which AI technology is accelerating, especially since ChatGPT, both because of the hardware, think of NVIDIA and Intel providing the GPUs, and because of the availability and corpus of these language models, many of them open source, some of them closed. It's accelerating at an unprecedented pace. A year back, I think I read somewhere, ChatGPT's token limit could fit a blog post; today it can read something like Leo Tolstoy's 500-page War and Peace. And from what we've heard in Sam Altman's interviews, artificial general intelligence is on the horizon. It's interesting.

Speaker 1:

Eight years back, when I got into the world of machine learning, I was so curious about artificial general intelligence. I went and read so many papers and discourses from the world's leading authorities on AI research, and their prediction for AGI at that point was 50 years out. Looking at the pace at which OpenAI is accelerating, I would say AGI could be a possibility in the next two years. But as a society, and as humanity as a whole, people have a lot of friction around adopting things quickly. I deal with a lot of federal and other agencies, and you wouldn't believe that some organizations are still doing paper-based processing; they haven't even gotten to digital transformation. So while there are incredible possibilities, I think the adoption will slow down and the hype cycle will cool.

Speaker 1:

But I think it will be technically feasible to have an artificial general intelligence, and the government will come around to establishing stringent guardrails because, like I said, we've learned a lot from the biosciences. We will definitely have guardrails around specific uses of AI, whether for defense purposes or for criminal justice; those specific use cases will see more and more stringent guardrails. And a couple of years down the line, there will definitely be a lot more, what shall I say, holding feet to the fire: responsibility for the people who create the algorithms. Right now there are no rules; there is so much ambiguity. If you are the creator of an algorithm, you can license it and so on, so who is responsible? There are no legalities around it. I think there's a whole new, exploratory effort needed to establish legal policy and regulation around that as well.

Speaker 3:

That's incredible. So, for many of our listeners out there: how do we get involved in the movement toward an ethical AI framework that's inclusive and has diversity strategies built in? What are your final thoughts on that? How do we, as technologists, get involved and stay involved?

Speaker 1:

That's a great question, because whether you're a technologist or not, everybody has to know the risks of AI. They have to know the boundaries of ethical AI use, the use cases of responsible AI, and the use cases that can have very detrimental effects on society. The best way is to follow some amazing people on LinkedIn. There's Elizabeth Adams; she's very prolific.

Speaker 1:

Dr. Joy Buolamwini, who is the well-known face and name in the ethical AI space, just came out with a book; please follow her. And there are many others. I can also post a link to the ethical AI framework I co-authored for the US government; it's publicly available, so that's one resource I can offer. There are a lot of conversations happening on the Hill as well; Sam Altman has been called to the Hill multiple times in the last year. So keep yourself abreast of the happenings on this topic and follow some of the well-known names.

Speaker 3:

Amazing. And Swathi, where can we follow you so we can continue to keep up with you? I know things are moving so fast. Where's the best place to follow you?

Speaker 1:

All the socials. I am @pinkintech on TikTok; I am very prolific on LinkedIn, where I post a lot of videos; and I'm also on YouTube. Actually, two years back I did a series of interviews with people working in the ethical AI space, so those of you who are interested in that area can check out my YouTube channel, also under Swathi Young.

Speaker 3:

Wonderful. Swathi, thank you so very much for joining us on the Tech Travels podcast. Your insights into this topic are extremely fascinating, and we hope to have you back on again. Thank you for sharing your vision with us; it's so amazing to see you leading this charge, and we look forward to the work to come.

Speaker 1:

Thank you so much, Steve, for having me. I look forward to hearing from you and your audience about any questions you have on AI or ethical AI.

Speaker 3:

Wonderful. Thanks, everyone.

Speaker 1:

Thank you.

Chapter Markers

Exploring Ethical AI With Industry Pioneer
Ethical Considerations in Artificial Intelligence
Challenges and Considerations in AI Governance
The Future of AI in Healthcare
Accelerating AI Technology and Ethical Framework