Employment Matters

593: How the AI Evolution is Reshaping the Future of Legal Systems
June 20, 2024 | Episode 593
Cynthia Chung & Albert Yen

In this episode, we discuss technological advancements in AI and the impact of these advancements on our current legal systems and how they might need to be reshaped to cope with the changes. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.

Host: Cynthia Chung (email) (Deacons / Hong Kong)

Guest Speaker: Albert Yen (email) (Lee, Tsai & Partners / Taiwan)

Register on the ELA website here to receive email invitations to future programs.


Hello everyone, and welcome to the Employment Matters podcast, brought to you by the Employment Law Alliance – the largest network of labour and employment lawyers from the best law firms around the world.

I’m your host – Cynthia Chung – Partner at Deacons in Hong Kong.  

On the program, we span the globe, and receive updates on critical issues from ELA members in each region.   

On today’s episode, we’ll be discussing How the AI Evolution is Reshaping the Future of Legal Systems.

Joining us on the program is Albert Yen, Associate at Lee, Tsai & Partners in Taiwan.

Albert, welcome! We are delighted to have you on our program. Thank you for joining us.

Albert Yen: Thank you, Cynthia! Hello everyone on the ELA podcast, this is Albert Yen from Lee, Tsai & Partners. I'm glad to have this opportunity to share some of my observations on the AI industry and its legal aspects with you.


Cynthia Chung: Today's topic is "How the AI Evolution is Reshaping the Future of Legal Systems." Can you start by telling us what this topic means?

Albert Yen: Sure! As you might have noticed, there's been a huge breakthrough in AI technology recently. Big tech companies are now increasing their investment in AI development, especially in AI infrastructure.

The capital markets are also crazy for AI, and it feels like the new AI era is closer than ever. 

Beyond the industry, AI is also sparking intense discussion on many fronts: social, economic, and legal.

Regarding our topic, "How the AI Evolution is Reshaping the Future of Legal Systems," I would like to talk about two parts. 

The first part will focus on the AI evolution itself. I would like to talk about the essence of these technological advancements, how they differ from traditional AI, and why they capture so much imagination.

The second part will cover the impact of these advancements on our current legal systems and how they might need to be reshaped to cope with the changes.

Cynthia Chung: Sounds interesting. Shall we start with the first part?

Albert Yen: Sure! You might have heard this analogy before: artificial intelligence models work, in principle, a bit like the human brain. Although not all experts agree with this analogy, I think it helps ordinary people understand AI better.

Specifically, our brains have a vast number of basic units called neurons. Neurons are connected by axons and dendrites, forming a complex neural network. 

Similarly, AI models, especially the ones developed recently using deep learning, are built mathematically from numerous basic units called perceptrons.

Each perceptron takes multiple inputs, applies different parameters to them, and produces a single output, and that output becomes an input for the next layer of perceptrons.

You can see the similarity from a mathematical perspective, and because of this similarity, these models are called "artificial neural networks."
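
To make that structure concrete, here is a minimal sketch in plain Python with NumPy. The weights, layer sizes, and activation function are made-up illustrations of the "weighted inputs, bias, single output feeding the next layer" idea described above, not any particular production model.

```python
import numpy as np

def perceptron_layer(inputs: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One layer of perceptron-like units: weight the inputs, add a bias,
    and apply a simple non-linearity. Each row of `weights` is one unit."""
    return np.maximum(weights @ inputs + bias, 0.0)  # ReLU activation

rng = np.random.default_rng(42)
x = rng.normal(size=4)                     # 4 input signals

# Two stacked layers: the outputs of layer 1 become the inputs of layer 2,
# mirroring how one perceptron's output feeds the next layer of perceptrons.
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)

hidden = perceptron_layer(x, W1, b1)       # layer 1: 4 inputs -> 5 outputs
output = perceptron_layer(hidden, W2, b2)  # layer 2: 5 inputs -> 1 output
print(output)
```

The large models discussed in the episode stack many such layers, with the parameters learned from data rather than drawn at random.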

Just like the human brain, with its vast and intricate network of neurons performing incredible functions, AI models have shown a similar phenomenon. When AI models reach a certain size, meaning when they have a huge number of parameters, scientists have noticed a sudden leap in performance. This leap is not limited to a single task but spans multiple tasks, a bit like a person experiencing a sudden epiphany. Academically, this phenomenon is referred to as "emergent abilities."

Cynthia Chung: Just like the abilities ChatGPT has shown, right?

Albert Yen: Exactly. ChatGPT is really what brought large models into the spotlight. We know ChatGPT is based on the GPT model, which is a type of large language model. When ChatGPT was first released at the end of 2022, it was running on GPT-3.5, which had roughly 180 billion parameters. More recently, with GPT-4, the parameter count has reportedly jumped to around 1.8 trillion. Compared to the smaller traditional AI models people were familiar with before ChatGPT, these numbers are just huge!

Cynthia Chung: I’m just curious, why was this emergent ability only discovered recently?

Albert Yen: Great question!  It's because the technical conditions were just not advanced enough in the past. 

We know that the larger the number of parameters in a model, the more floating-point operations (FLOPs) are needed to train it. That requires incredibly powerful processors and very efficient parallel processing; without them, we simply cannot train such large AI models. So advances in semiconductor technology have been crucial for the AI evolution.

Besides, large AI models need a lot of training data, and improvements in mobile communications have made data collection much more efficient, which is also essential.

And finally, there has been significant progress in the AI algorithms themselves. One of the key breakthroughs was the introduction of the well-known Transformer architecture in 2017.
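
For a rough sense of the scale involved, here is a minimal back-of-the-envelope sketch in Python. It relies on the commonly cited approximation that training a dense model takes roughly 6 × N × D floating-point operations (N parameters, D training tokens); the model size, token count, and cluster throughput below are illustrative assumptions, not figures from the episode.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total floating-point operations to train a dense model,
    using the common heuristic: total FLOPs ~= 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

def training_days(total_flops: float, cluster_flops_per_sec: float,
                  utilization: float = 0.4) -> float:
    """Wall-clock days on a cluster with the given peak throughput and utilization."""
    seconds = total_flops / (cluster_flops_per_sec * utilization)
    return seconds / 86_400

if __name__ == "__main__":
    n_params = 175e9   # assumed parameter count (GPT-3-scale)
    n_tokens = 300e9   # assumed number of training tokens
    flops = training_flops(n_params, n_tokens)

    # Assume a cluster with 1e18 peak FLOP/s at 40% utilization.
    print(f"Total training compute: {flops:.2e} FLOPs")
    print(f"Approximate training time: {training_days(flops, 1e18):.1f} days")
```

Even under these generous assumptions, the answer lands on the order of days on an exascale cluster, which is why advances in processors and parallel processing matter so much here.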

All these technological advances together created a powerful synergy that has propelled this AI wave.

Cynthia Chung: So, are current AIs close to human intelligence?

Albert Yen: Obviously, not yet. But people are definitely starting to imagine that. We've mentioned that GPT-4 has about 1.8 trillion parameters, but do you know how many connections there are between the neurons in a human brain? It's about 100 trillion!

We can't yet build an AI model that large in a cost-effective way, but that doesn't stop us from imagining: if AI models reach that scale, will there be another sudden leap in their capabilities, like the emergent abilities we just talked about, but this time potentially matching or even surpassing human intelligence? That question ties into the next part of our discussion.

Cynthia Chung: Let's get into that! How is the evolution of AI impacting legal systems?

Albert Yen: Sure. The AI evolution affects the legal system in many ways. Let's start with intellectual property. Recently, we've seen many copyright infringement disputes between AI developers and copyright holders. 

This is because, unlike traditional, smaller AI models, which are typically used for tasks like classification, clustering, or regression (essentially answering yes or no, or giving us a number), large AI models are mostly used for "generation," meaning they create structured content like text, images, or audio. That is why they are also known as "generative AI."

During the training phase, generative AIs often use other people's works. For example, some well-known text-to-image models just use web scraping techniques to extract data from online image databases for training.  

The model developers might argue this is "fair use," and there are some earlier court decisions on similar issues involving search engine technology that they can point to. How the courts will handle these new disputes remains to be seen.

Another aspect of AI's impact on intellectual property is whether AI can be considered an inventor or a creator. Currently, most jurisdictions consider that inventors or creators must be natural persons. 

Accordingly, if AI assists in an invention or creation and a human contributes to it, the human should be considered the sole inventor or creator. However, if the AI independently invents or creates something, that work cannot be granted a patent or copyright under current legal systems.

This viewpoint makes sense for now, because today's AIs, despite their impressive capabilities, are still essentially tools that assist humans.

However, let's think a bit more futuristically.

You might remember we just talked about the emergent abilities that appear when model parameters reach a certain threshold. This phenomenon drives developers to keep increasing model size, a trend often described in terms of "scaling laws."

Another important trend is "multimodality." The term means that an AI model's inputs and outputs are not limited to a single type such as text, images, or audio; the model can take in and produce text, images, and audio simultaneously.

For example, OpenAI's recently released GPT-4o (the "o" stands for "omni," meaning everywhere or everything) is a multimodal model. Unlike large language models like ChatGPT that only handle text (if you want an LLM to handle images or audio, you may need to connect it to other types of models), multimodal models significantly enhance interaction with humans. It is as if the AI has developed eyes and ears and gained the ability to speak.

With the advancements in scaling and multimodality we just mentioned, the line between AI-assisted creation and AI-independent creation will only get blurrier.

Then we may wonder: if AI reaches a point where it can truly create something independently, should our intellectual property laws still refuse to grant rights to AI? Will we need a new legal framework to protect these kinds of creations? These are questions that will need further exploration.

Cynthia Chung: Interesting! So, what about the impact of AI on other aspects of the legal system?

Albert Yen: OK, let's move on to civil liability. 

Nowadays, AI is already being used commercially in many applications. However, when AI-related products cause accidents, determining the liability of the users or manufacturers can be very challenging.

This is because AI models, particularly the artificial neural network-based models we just mentioned, are somewhat like "black boxes" to us. After training AI models with deep learning algorithms, we know the inputs and can roughly predict the outputs, but we can't really understand the complex relationships between the parameters inside the model.

This difficulty in explaining the input-output relationship complicates issues such as how to establish causality between the actions of users or manufacturers and the resulting accident, or how to determine whether there was a breach of the duty of care.

To address this, we may first need to think about how to allocate the burden of proof more appropriately in such cases, which may require more research in the field of economic analysis of law. Taking a more fundamental approach, we may need to ensure that AI models are explainable to a certain extent, depending on their application context. Technically, however, this can be a trade-off, as making models more explainable could affect their performance in other respects.
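
As a very rough illustration of what "explainable to a certain extent" can look like in practice, here is a minimal sketch of permutation-style feature importance on a toy black-box model. The network, the feature names, and all numbers are hypothetical; the point is only to show the general idea of probing which inputs drive an opaque model's output.

```python
import numpy as np

# Toy "black box": a small two-layer network whose weights we treat as opaque.
# The weights here are random, purely for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

def black_box(X: np.ndarray) -> np.ndarray:
    """Forward pass we can query, without inspecting the parameters."""
    return np.maximum(X @ W1 + b1, 0.0) @ W2 + b2

# Hypothetical inputs: three features per case.
X = rng.normal(size=(500, 3))
baseline = black_box(X)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's output changes. A larger change means the model leans on it more.
for j, name in enumerate(["feature_A", "feature_B", "feature_C"]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    change = np.mean(np.abs(black_box(X_perm) - baseline))
    print(f"{name}: mean output change = {change:.3f}")
```

Probes like this explain behaviour from the outside rather than opening the box, which is part of the trade-off between explainability and performance just mentioned.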

Cynthia Chung: This involves how public authorities should regulate AI, right?

Albert Yen: That’s right.  Let me move on to the government regulation of AI. 

This can be divided into two phases: the development phase and the application phase. Ensuring that AI models are explainable falls under the development phase. Another frequently mentioned regulatory goal in this phase is transparency.

We know that big tech companies are heavily investing in the development of large AI models (also known as "foundation models"), and it's foreseeable that the future landscape of the AI industry will revolve around these models, much like the app ecosystems that grew around the iOS and Android platforms.

As we've already realized, AI's impact on society will be enormous, and these large models play a central role in it. However, the resources needed to develop them, including the algorithms, the computing power, and the training data, are primarily controlled by a few big tech companies, mainly because they are the only ones who can afford the costs.

Also, the mainstream approach to developing large AI models currently remains closed-source.

Therefore, even though big tech companies have made many public commitments regarding AI ethics, that doesn't entirely alleviate public concerns. This might push public authorities to place more emphasis on transparency in the future, possibly requiring developers to disclose relevant information about algorithms and training data during the development phase. This remains to be seen.

As for the application phase of AI models, it may be necessary to design regulatory policies tailored to different legal fields. For example, in the field of employment law, should public authorities adopt regulatory measures to ensure that employers using AI for recruitment do not discriminate against or treat candidates unfairly? How should the line be drawn when employers use AI to monitor employees' work?

Similarly, in the field of competition law, should competing businesses be allowed to use pricing algorithms to achieve tacit collusion, and how should the line be drawn?

These are some examples of the issues that may need further discussion as the legal system evolves with AI.

Cynthia Chung: Our podcast is coming to an end today. Albert, do you have any final topic you'd like to share with us?

Albert Yen: Sure, I would like to move on to the last point.  

As we've mentioned several times, big tech companies are engaged in an arms race in AI infrastructure to train large AI models. For example, according to media reports, Microsoft and OpenAI are planning to invest over $100 billion to build a super data center called "Stargate."

We should note that the ongoing expansion of data center construction implies the need for a much larger and more stable energy supply. However, existing energy technologies may not be enough to take us very far unless we achieve a significant breakthrough in something like nuclear fusion.

Let's not forget that these big tech companies made commitments to reduce carbon emissions before this AI wave, and those commitments were already challenging given the industry's circumstances at the time, let alone now. Besides electricity, the rapid increase in water consumption is also worth noting.

Therefore, how to strike a good balance between developing AI and addressing climate change will be a very important issue for us in the future.


Cynthia Chung: Well, that is all the time we have today. Albert, this has been a very interesting discussion. Thank you for your time. It was a pleasure speaking with you!

Albert Yen: Thank you, Cynthia, and thank you to everyone listening to the ELA podcast. See you next time!

Cynthia Chung: And thank you, listeners, for tuning in. If you would like to connect with Albert, please click on his bio in the description of this podcast.

We also encourage you to reach out to any of our lawyers around the world, by selecting “Find a Lawyer” on the ELA website at ela.law.  

In addition, search the ELA website where you can sign up to receive invitations to our upcoming webinars, download white papers and on-demand content from our online library, or access the ELA’s exclusive Global Employer Handbook. 

Lastly, please download the brand-new Employment Law Alliance mobile app, and have the power of the ELA in the palm of your hand. You’ll find it in the Apple App Store or Google Play store.  

You’ve been listening to Employment Matters; a podcast brought to you by the Employment Law Alliance - the world’s largest network of labour and employment lawyers from the best law firms around the globe.  

I’m Cynthia Chung.  Thanks for listening.