Full Circle with Shawn

Episode 22: Navigating the Maze of Tech Ethics: AI, Privacy, and the Quest for Moral Ground

June 04, 2024 Shawn Taylor Season 1 Episode 22

Is our technological prowess outstripping our ethical guidelines? Join me, Shawn, as I steer you through the murky waters of tech ethics on Full Circle – from the moral quandaries of AI in warfare to the thorny issues of data privacy in our pockets. Together, we'll scrutinize the moral repercussions of AI's decision-making authority and confront the biases that skew its judgment, debating the ever-urgent question: can we keep up with the machines we've created? We also dissect the delicate interplay between personal freedoms and the security measures that occasionally encroach upon them, offering a critical perspective on legislation like the Patriot Act that has reshaped our understanding of privacy.

As we cast our gaze over the horizon of technology's future, we'll dissect the policies molding its ethical backbone, from the GDPR to China's AI strategy. No stone is left unturned as we unravel IBM's AI ethics guidelines and the broader implications for stakeholders and society at large. Pioneering technologies such as quantum computing and neurotechnology are also on the table, as we demand accountability and transparency in an age of unparalleled innovation. The dialogue extends to the promise of a unified ethical AI framework and the tightening grip of regulations that may define the next era of technological advance. Tune in for this crucial conversation where technology meets conscience, and the future is anything but certain.


Speaker 1:

Hello and welcome back to Full Circle with Shawn. I am your host, Shawn, and today we're talking about technological ethics, which is basically the study of moral issues surrounding the development and the use of technology. And this is a really important subject, because it ensures that technological progress benefits society without causing harm or inequity, and we've all heard the news stories about inequity in, say, AI, and we'll get into that. So in early history, there were a lot of debates around the moral implications of nuclear energy. People discussed the ethical ramifications of using nuclear weapons (and I know I just said nuclear energy, but nuclear energy research led into developing nuclear weapons), and this was highlighted by the bombings of Hiroshima and Nagasaki. These debates also considered deterrence theory versus, you know, the ethics of mutually assured destruction, which we've all seen in the movies. And then it moved into nuclear power plants. So concerns were raised about safety measures and the risk of meltdowns, and we actually saw meltdowns, right: we saw Chernobyl in 1986 and Fukushima in 2011. And then it went even further, into the long-term environmental consequences of managing nuclear waste.

Speaker 1:

Now, if we look at more modern concerns, we're talking about artificial intelligence, so AI. First, we're looking at autonomy. There are a lot of ethical questions about the decision-making capabilities of AI systems, and this is especially critical with, say, military drones or autonomous vehicles. And we all know from a lot of news stories, you hear about the bias, right? So issues such as racial or gender bias in facial recognition. Then there's job displacement, in manufacturing and services, where AI and robots might replace human roles. And then there's always exceeding human control, right? That's what a lot of movies are made of: the existential risk that AI might one day surpass human intelligence, and this could lead to scenarios where humans lose control over these systems, and you get superintelligence. (Superintelligence is actually a really good movie, by the way. I really enjoyed it.)

Speaker 1:

And then you have concerns around data privacy. So surveillance: the ethical implications of government and corporate surveillance programs, and this was really exemplified by the NSA's global surveillance disclosures by Edward Snowden in 2013. And continuing on, there are a lot of different areas of technological ethics. With the amount of data that we provide now, you can have data breaches, right? So what are the ethical responsibilities of companies to secure user data? And you've got the Equifax data breach in 2017, and I mean, that compromised what, 147 million people? And since we're looking at data, let's look at consent, right? So we need clear and informed consent from users before collecting and using their personal information. And there are regulations now, like the GDPR, which is the General Data Protection Regulation in the EU.
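
To make that consent point concrete, here's a minimal sketch of what consent-gated data collection could look like in code. It's an illustration only: the record fields, function names and in-memory store are my own assumptions, not taken from the GDPR's text or from any real framework.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str         # e.g. "analytics" or "marketing"
    granted: bool
    timestamp: datetime  # when consent was given or withdrawn, for auditability

# In-memory store keyed by (user_id, purpose); a real system would persist this.
_consents: dict[tuple[str, str], ConsentRecord] = {}

def record_consent(user_id: str, purpose: str, granted: bool) -> None:
    """Store the user's explicit opt-in or opt-out for one specific purpose."""
    _consents[(user_id, purpose)] = ConsentRecord(
        user_id, purpose, granted, datetime.now(timezone.utc))

def collect_data(user_id: str, purpose: str, payload: dict) -> bool:
    """Collect data only if the user has affirmatively opted in for this purpose."""
    consent = _consents.get((user_id, purpose))
    if consent is None or not consent.granted:
        return False  # no consent on file, or consent withdrawn: do not collect
    print(f"storing {payload} for {user_id} ({purpose})")  # stand-in for real storage
    return True

record_consent("user42", "analytics", granted=True)
print(collect_data("user42", "analytics", {"page": "home"}))  # True: consented
print(collect_data("user42", "marketing", {"email": "opt"}))  # False: never asked
```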

Speaker 1:

And then we look at the balance between, say, security and privacy rights, right? There's always a debate going on about privacy versus security: which is more important, and where's the happy medium? Because it's a debate about how much privacy individuals should sacrifice for national security, and we can see this in the Patriot Act. So I guess we should talk about what the Patriot Act is. The Patriot Act stands for Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism, and it was signed into law in October of 2001. It was a response to the September 11 attacks, right, and it aimed to enhance national security by expanding surveillance powers. Okay, so it increased the ability of law enforcement and intelligence agencies to conduct surveillance, and this includes tapping phone calls and monitoring emails without a standard court order. It also had financial regulations: it imposed stricter regulations on financial activities to prevent money laundering and to cut off funding to terrorist organizations. It enhanced border security, so the act expanded the criteria for detaining and deporting immigrants. And information sharing, and I think that was the key point that most of the news covered: it encouraged greater sharing of information between various government agencies. And that's the Patriot Act, right, and it's very controversial. Supporters argue that it's essential for preventing terrorism, while critics contend that it infringes on civil liberties and privacy rights, and particularly they focus on the provisions that allow surveillance without oversight. And over the years, some parts of the act have been modified or even allowed to expire, while many still remain in force.

Speaker 1:

So if we look around the world, not just at the United States: the United Kingdom has a terrorism act as well, the Terrorism Act 2000. Canada has the Anti-terrorism Act of 2001. Australia has the Anti-Terrorism Act of 2005. India had the Prevention of Terrorism Act in 2002, but it was repealed in 2004. France has various laws passed after the 2015 attacks. And all of these laws often share common features: expanded surveillance powers, reduced checks on law enforcement, and provisions that can curtail civil liberties.

Speaker 1:

So, again, we're looking at technological ethics: we're using technology, but how should we be using it in a way that's still ethical? And let's go back to some more key events that shaped ethical considerations in technology, right? We have the Manhattan Project, and that was, what, 1942 to 1946. That included the development and deployment of the atomic bomb, and it raised a lot of ethical questions about the use of scientific knowledge for destruction and the moral implications of mass civilian casualties. From 1957 to 1961, we had the thalidomide disaster, right: the widespread use of that drug caused thousands of birth defects, and it highlighted the need for more rigorous drug testing and regulation, and it really sparked a lot of reforms in pharmaceutical testing and ethics standards in clinical trials. From 1932 to 1972, there was the Tuskegee syphilis study, which is considered deeply unethical in hindsight, right, because treatment was withheld from the participants without their informed consent, and this led to major changes in US law and policy on the protection of human subjects in medical research.

Speaker 1:

In 1986, as we already said, we had the Chernobyl disaster, and this nuclear accident in the Soviet Union emphasized the risks associated with nuclear power, and it led to increased scrutiny, improved safety standards and stronger regulatory frameworks worldwide. In 2001, we had the Enron scandal, and this was massive corporate fraud, right? It highlighted the need for better oversight, corporate governance and financial accountability, and it led to a lot of legislation to enhance transparency and protect shareholders. And then, as I already said as well, in 2013 we had the Snowden revelations, the global surveillance disclosures that Edward Snowden made, and they sparked a worldwide debate on privacy, surveillance and even the balance between national security and individual rights. And then in 2018, the Cambridge Analytica scandal: that was the misuse of personal data from millions of Facebook users for political campaigning, and it prompted calls for stricter data protection laws, like the GDPR we talked about, to be enforced more rigorously and more globally. All of this has led to significant legislation and regulatory change in various areas of technology, but it should also increase our awareness of ethical responsibilities in technology, so that we can continue to develop further standards.

Speaker 1:

Now, some of the major ethical principles in technology. First, benefit: technology should benefit people and contribute positively to society, right, and technological innovations should not harm individuals or society. Then, when you look at autonomy, it's respecting the capacity of individuals to make voluntary and informed decisions about technology use. And justice, right: ensuring equitable access to technology and its benefits, and preventing discrimination. And, as we've talked about, some of the ethical issues in technology are obviously data privacy and security, so the ethical handling of user data, the implications of breaches, and the balance between security and privacy. And then, if we look at AI, so AI ethics, right: autonomous decisions made using AI can be biased, there's bias in AI algorithms, and there are potential consequences for employment and society.

Speaker 1:

And then, if we look at social media ethics, there's the issue of misinformation, right, plus digital well-being and the responsibilities of platforms to manage content, and at what point they should have to manage content. And that's actually a big global issue right now; you can see it in the news all the time: take this down, put this up. You know, at what point should they be responsible for managing the content? So let's look at some case studies and some approaches to these ethical dilemmas.

Speaker 1:

To start with, Google's AI principles. Google has established a set of AI principles meant to guide the development and deployment of artificial intelligence technologies, and their principles focus on ensuring that AI applications are socially beneficial, avoid creating or reinforcing unfair bias, are built and tested for safety, are accountable to people, and are privacy focused. Now, just because you have principles in place doesn't always mean that they work or are followed, because at the end of the day, you have the human side of it as well, right? And Google's approach to AI ethics has been under scrutiny, especially after they dismissed a number of AI ethics researchers, and it really raised questions about the enforcement of their own principles.

Speaker 1:

Now, if we look at, say, an analysis of a significant data breach and its ethical implications, we can go back to the Equifax data breach of 2017. This breach exposed the personal information of about 147 million people, as we said before, and the incident highlighted ethical lapses in corporate responsibility to protect consumer data. It also led to discussions on the need for stronger regulatory oversight and cybersecurity measures, and on transparent communication with affected individuals about the risks and protections against identity theft. And then we look at, as we said before, social media, so say Facebook and hate speech. Facebook has faced significant ethical challenges in moderating content, specifically regarding hate speech and misinformation. The platform uses a combination of AI tools and human reviewers to enforce its community standards (there's a rough sketch of how that kind of triage can work just below). It's not straightforward, and it's a massive platform, as you're well aware, to really monitor, and it's led to ethical considerations, including balancing freedom of expression with the need to prevent harm caused by hate speech and false information. And the debate is really intense on all sides, right, especially after platforms' roles in political outcomes and social movements have led to political scrutiny and public demands for more efficient and transparent moderation processes.
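
As a rough illustration of that AI-plus-human-reviewer setup, here's a minimal sketch of a triage pipeline: a model score routes each post to automatic removal, human review, or publication. The classifier, thresholds and labels here are invented for illustration; this is not Facebook's actual system.

```python
def classify(post_text: str) -> float:
    """Placeholder for an ML model returning the probability a post violates policy."""
    banned_terms = {"spam", "scam"}  # toy heuristic standing in for a real classifier
    words = post_text.lower().split()
    return min(1.0, sum(w in banned_terms for w in words) / max(len(words), 1) * 5)

def moderate(post_text: str, auto_remove_at: float = 0.95, review_at: float = 0.6) -> str:
    """Route a post by model confidence; ambiguous cases go to a human reviewer."""
    score = classify(post_text)
    if score >= auto_remove_at:
        return "removed"       # high-confidence violation: act automatically
    if score >= review_at:
        return "human_review"  # uncertain: queue for a human decision
    return "published"

print(moderate("totally normal update about my day"))  # published
print(moderate("i think this offer might be a scam"))  # human_review
print(moderate("spam scam spam scam spam"))            # removed
```

The design point the thresholds capture: automation handles the clear-cut ends of the distribution at scale, while the genuinely hard, context-dependent calls are exactly the ones that still need humans.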

Speaker 1:

Now, if we look at global and national policies aimed at guiding ethical technology development, again we have GDPR, right? In 2018, GDPR came into force, and it set very stringent guidelines on data protection and privacy for all people within the EU. You have, you know, China's AI development plan, which came out in 2017, and their policy is to become the world leader in AI by 2030, but it does include ethical guidelines for AI development, focusing on enhancing welfare and adhering to global ethical norms. You have the FTC, the Federal Trade Commission, which plays a role in enforcing privacy standards and ethical practices within the tech industry, particularly in terms of consumer protection and preventing deceptive practices (and we've talked quite a bit now about misinformation and deceptive practices), and it has taken action against companies like Facebook and even YouTube for privacy violations. You have the European Data Protection Board, the EDPB (everybody's an acronym, right?), and that's an EU body that ensures consistent application of the GDPR across member states, and it even provides guidance and clarifications on the regulation's provisions.

Speaker 1:

Now, there are a lot of challenges in crafting regulations that keep pace with the advancements, because technology is moving very, very fast. New technologies like blockchain, cryptocurrency and even advanced biotechnology pose significant regulatory challenges because, like I said, their development is very rapid and they're cross-border in nature, so they often outpace existing legal frameworks. So what are some best practices, let's say, for incorporating ethics into your tech development, or tech development as a whole? If you look at IBM's AI ethics guidelines, IBM has implemented guidelines that emphasize, say, trust, transparency and fairness in AI systems. These guidelines serve as a foundation for all AI development within the company, and they could be used worldwide, right? This is the idea: we're trying to create a foundation that we should all be working from.

Speaker 1:

You need to incorporate diverse stakeholders, okay? So companies like Google and Microsoft engage with external experts, including people that focus specifically on ethics and society and people that represent potentially impacted communities, to gain diverse perspectives on the ethical implications of a given technology. You need training: leading tech companies and even academic institutions should require ethics training for software developers and engineers. If you look at, say, Stanford University, they offer courses in computer science ethics covering topics like privacy, bias and social impact, and it really prepares their students to consider these aspects in their future careers. You need to implement ethical audits and reviews. Companies like Airbnb and Facebook employ third-party firms to conduct independent ethics audits of their AI algorithms, and these audits assess the fairness and impact of the algorithms, which really helps to identify biases and ensure compliance with ethical standards (there's a small sketch of one such fairness check after this paragraph). You need iterative testing and feedback loops, right? By adopting user-centric design approaches, companies can continuously gather feedback from users and really improve the ethical alignment of their technologies, and this is especially important in fields like healthcare and education, where the impact on end users is very significant. And then, finally, you need transparency and accountability, right? Salesforce and some other tech giants promote transparency by publicly sharing their ethical AI development processes and the outcomes of internal audits, and this openness holds the company accountable to both users and regulators, and it really increases trust and credibility in what they're building, or what they have at the time.
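
To give a feel for what one of those fairness audits might actually compute, here's a minimal sketch of a demographic parity check: compare a model's favorable-outcome rate across groups and flag large gaps. The data, the metric choice and the 0.2 tolerance are all illustrative assumptions, not an industry or legal standard, and real audits look at many more metrics than this.

```python
from collections import defaultdict

def positive_rates(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """decisions is a list of (group, outcome) pairs, outcome 1 = favorable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions: list[tuple[str, int]]) -> float:
    """Largest difference in favorable-outcome rate between any two groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy audit sample of loan decisions: (applicant group, approved?)
audit_sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = parity_gap(audit_sample)
print(f"demographic parity gap: {gap:.2f}")  # 0.33 on this sample
if gap > 0.2:  # illustrative tolerance, not a legal or industry standard
    print("flag for review: outcome rates differ substantially across groups")
```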

Speaker 1:

Now let's look quickly at some emerging technologies with significant ethical implications. Let's start with quantum computing, so cryptography and security. Quantum computing poses a very significant ethical concern, okay, because it has the potential to break current cryptographic methods, and so it could compromise data security globally, and this raises questions about privacy and the protection of sensitive information (there's a sketch of why after this section). If we look at neurotechnology, say your BCIs, your brain-computer interfaces: these are technologies that allow direct communication between the brain and an external device, and they can enhance human capabilities, sure, but they also raise ethical concerns related to consent, mental privacy, identity and agency. For example, companies like Neuralink are working on an implantable brain-machine interface, and this is really prompting a lot of discussions on the ethical treatment of users and the potential for manipulation. And then, if we look at, say, your enhanced reality technologies, your VR, virtual reality, or your AR, augmented reality: these technologies can create really immersive experiences, and they can alter your perception, and they have been used in everything from gaming to psychological therapy.

Speaker 1:

But the ethical implications can be around desensitizing people, and privacy, because a lot of these technologies collect detailed user information (I think some of them even scan your retina), and they can really blur the line between reality and simulation, right? Especially when you get into mixed reality: mixed reality is a combination of VR and AR that integrates digital content with the real world in real time. And this raises a lot of ethical questions, similar to VR and AR, regarding user safety, data privacy and psychological impacts, but also: what are the impacts of prolonged exposure to altered realities? Right? And we don't know these things yet.
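
Coming back to the quantum computing point: widely used public-key schemes like RSA rest on the assumption that factoring a huge number is infeasible, and Shor's algorithm on a large enough quantum computer would remove that assumption. Here's a toy sketch of the idea using the classic tiny textbook modulus; the brute-force search below is hopeless against real key sizes, which is exactly the point.

```python
def trial_factor(n: int) -> tuple[int, int]:
    """Brute-force factoring: fine for a toy n, hopeless for a 2048-bit RSA modulus."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p  # recovering the two primes breaks the RSA key
        p += 1
    raise ValueError("n is prime")

# 3233 = 61 * 53, the standard textbook RSA example. Real moduli use primes
# hundreds of digits long, so this classical search would take astronomically
# long; Shor's algorithm would collapse that cost, which is why post-quantum
# cryptography is being developed and standardized.
print(trial_factor(3233))  # (53, 61)
```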

Speaker 1:

So sure, fast forward 5, 10, 20 years and we'll have hindsight, and we can go, oh well, we should have done this, or we did this and it was good. But right now it's about debating these things and doing the best we can with what we know, right? So let's look at some predictions on how ethical debates in technology will evolve over the next decade or so. What we should expect to see is a greater emphasis on AI ethics, right? Some kind of standardized ethical AI framework, maybe. As AI technologies become more pervasive, there will likely be a push toward establishing global standards and frameworks, and this could involve international agreements, similar to, say, the Paris Agreement for climate change, really focused on AI governance. Then increased regulation and legislation, as we said, but more specific laws for emerging technologies: governments may enact laws targeting things like deepfakes, autonomous vehicles and personalized biotechnologies. And you should really think about that for a second, because that's where we're going, right? We are starting to break the bounds of what we have today, and all this really cool technology is going to start flooding the market within most of our lifetimes, and it needs some kind of standard, some kind of regulation, to make sure it's as safe as we can have it.

Speaker 1:

Okay, so you're also going to see a lot of debates on data privacy and sovereignty. It's not just about privacy, it's about sovereignty: people's individual rights to control where their information is stored. And this is already starting to show up in a lot of countries, in a lot of laws, over the misuse of personal data by both corporations and governments. You're going to see a lot of debate on the ethical use of neurotechnology, so consent, cognitive liberty, right? With the advances in neural technologies, ethical debates are expected to focus on cognitive liberty, the right to mental privacy, consent for cognitive enhancement and even the ethical implications of memory manipulation, because we're going to get there. And then there's going to be a lot of focus on equality and access, and that's going to be a massive thing, right: trying to bridge the digital divide that's there. Discussions will likely address inequalities in access to technologies, and there'll be lots of initiatives aimed at ensuring that all populations have access to the benefits of emerging technologies, obviously geared toward preventing a split into technology-rich and technology-poor communities. Now, these predictions suggest a dynamic and evolving landscape for ethical debates in technology, reflecting both the rapid pace of technological change and the increasing awareness of its impacts on society. Because while we all talk about it now, we all watch the science fiction shows and everything like that, as these things become more real, more tangible and here today, everybody's going to start becoming aware, and they're going to start looking at the ups and downs, and hopefully the governments and the corporations will have already put in some kind of framework to get us started, right?

Speaker 1:

Now, what I didn't go through much is what is right and what is wrong, because I don't know. And if you go back to my ethics podcast that I did a while back, you'll see one part of it is ethical dilemmas, right? So I mean, for example, if you talk to somebody about surveillance, their view might be: no surveillance, I don't agree with it, I don't think the government should watch me, I agree with very little surveillance. And that's fine. And then you might get somebody that's been a victim of something, something abusive in society, and they'll want more surveillance, and they'll see the benefits of surveillance and be willing to give up more of their personal freedoms for more benefit. So what is the happy medium, and how do you deal with that? And look, I don't know.

Speaker 1:

I go back and forth myself half the time. I mean, no, I don't want somebody watching my absolute every move. And you can say, well, if you're not doing anything wrong, then what do you care if somebody's watching you? But it's not about that. It's about the trends you have, the manipulation that can happen because somebody knows your every move, and not just in, say, robbing your house. It could be that, all of a sudden, that data goes to commercial companies, and they know your habits and your trends, and when you get paid and when you come home, and they can target that. And sure, yeah, I might sign up for, you know, a super saver program where I go and buy groceries and scan my card, and obviously they know what I normally buy, but I also get some money off, so I might be more willing to do that. But that's my choice. When it's not my choice, then me as a person, I don't like it as much, and you shouldn't either. But that doesn't mean that it shouldn't happen.

Speaker 1:

And then, to what extent do we allow it? And then if you go to, okay, well, surveillance is fine, it's mined by AI, so a lot fewer problems with it. But is that really true? Because we know that AI is built by people, and people have biases, and we've found a lot of biases in AI, and just because there might be audits doesn't mean that there won't still be biases, or things that won't be found until later.

Speaker 1:

So we have to be cautious. We need to be cautious, we need to make sure there's human interaction, and these things need to be debated. But they need to be debated by people on both sides of the fence, so people that are very passionate at zero and people that are very passionate at a hundred, and everybody in between. And through that kind of debate, and through those kinds of conversations, we can start to pick out what will be acceptable and what we can try, because this is a new age for the world, right? This is an unmapped landscape that we need to find our way in, and we will. We've done it with other things in the past, and we will find our way. Hopefully there's less pain than there would be if we didn't debate it, if we didn't start putting in some rules. But this conversation can obviously go on and on and on, and I have passed my time today, so let's stop it there. Now, something really exciting: on the next chat that we have, we'll be talking about startups.

Speaker 1:

So I do two podcasts a week, and every once in a while I'll release a special Saturday episode. For the two a week, one will be on startups, and it'll also be relevant to small businesses, and that'll be one of our chats every week. The other chat will stay focused on foundational building and life's lessons. So the next chat we have will be the first chat on startups, and we'll start at the beginning, with getting started. So thanks again for joining me on Full Circle with Shawn, and I look forward to chatting with you next time.

Chapter Markers
Technological Ethics and Moral Implications
Ethics in Technology Development
