Image Rights by N Mashinini
My name is Nomalanga Mashinini. I advocate for an economically and socially advanced approach to the protection of images. It is important that people be prepared to protect their image rights on social media against misuse of artificial intelligence and commercial exploitation. Since you have a personality, you have image rights. Find out how to #controlyourimagerights by listening to this podcast.
Deepfakes in Cybercrimes
In this episode I discuss the regulation of deepfakes under the Cybercrimes Bill B6D-2017 in South Africa and the impact of deepfakes on Image Rights. The Bill was signed into law two days after this episode aired; the relevant law is now the Cybercrimes Act 19 of 2020.
Updates on Image Rights are published on social media @irbynmashinini
Facebook https://web.facebook.com/IRbyNMashinini
Twitter https://twitter.com/irbynmashinini
Instagram https://www.instagram.com/irbynmashinini/
LinkedIn https://linkedin.com/company/irbynmashinini
Email: questions@irbymashinini.com
Host: Nomalanga Mashinini
Sources:
- Constitution of the Republic of South Africa (1996).
- Cybercrimes Bill B6D-2017.
- Citron, D.K. and Chesney, R. (2019) “Deep fakes: A looming challenge for privacy, democracy and national security”, California Law Review, Vol 107, pp 1753-1820.
- Citron, D.K. and Chesney, R. (2019) “21st century-style truth decay: Deepfakes and the challenge for privacy, free expression, and national security”, Maryland Law Review, Vol 78, pp 882-891.
- Farish, K. (2020) “Do deepfakes pose a golden opportunity? Considering whether English law should adopt California’s publicity right in the age of the deepfake”, Journal of Intellectual Property Law & Practice, Vol 15, No. 1, pp 40-48.
- Ferraro, M.F. (2019) “Deepfake legislation: A nationwide survey – state and federal lawmakers consider legislation to regulate manipulated media”, [online], WilmerHale https://www.jdsupra.com/legalnews/deepfake-legislation-a-nationwide-86809/.
- Kietzmann, J. et al. (2020) “Deepfakes: Trick or treat?”, Business Horizons, Vol 63, No. 2, pp 135-136.
We are a growing community seeking to ensure the protection of Image Rights for all people. Thank you for contributing to the growth of this podcast.
Deepfakes in Cybercrimes
0:00
Welcome back to Image Rights by N Mashinini. I'm your host, Nomalanga Mashinini. This is a podcast that speaks about image rights, the right to identity, the right to publicity. In the last episode, we spoke about the deepfakes phenomenon and how it impacts on image rights. That perspective was more on a one-on-one basis, a civil claims kind of basis. This time around, I want to talk about how the state can prosecute people who create and disseminate deepfakes. Deepfakes have been getting really popular, especially in my circles, I should say, since last year, 2020. I think with the whole quarantine, people started paying attention to what the deepfakes phenomenon presents to us and the dangers of it. And by now, I think a lot of people have probably seen the Jordan Peele skit on deepfakes, where he uses the image of former president of the United States Barack Obama to say some really questionable things, but from a perspective of trying to educate people about the dangers of deepfakes, especially for elections and how politics is represented in the media, and especially in the context of preserving democratic states.
2:19
Deepfakes are really threatening what we know as truth, as we see it on social media, on television, and so forth. Last episode, I explained that deepfakes are basically videos, audio and photographic material that are fake. But the element that makes them such a concern is the artificial intelligence deep learning mechanism that is used to create this fake audiovisual media. And deepfakes have grown really popular in political spaces, as well as pornography. In fact, the first viral deepfake was posted by someone on Reddit known as 'deepfakes', hence the name, and it was a celebrity's face superimposed on a pornographic video.
3:23
So the key elements of a deepfake are that it looks and sounds real. It's intended to deceive whoever is looking at it. And the quality of this artificial intelligence audiovisual media is such that it has the ability to make something look so real, which raises questions like, "Can we really trust electronic or digital evidence these days? Can we really trust what we see on social media in particular?" Some of you will remember that not so long ago, Congresswoman Stefanik was a victim of deepfakes when a picture of her showing the middle finger in Congress went viral. It was only hours later that people found out it was actually a deepfake, an altered image, but it looked so real, because of how deepfake software is freely available out there for everyone to download. I mean, I myself was fiddling with deepfake software just the other day to see what can really come out of this. The problem with deepfakes is that what we see never really happened. It's one thing to have a viral post that shows something that happened; it's another thing to have a viral post that shows something that never really happened, because it means that with deepfakes we're living in a world where anyone can place you in a context that is totally made up. And the issue is that it is our identity attributes that run the risk of being used and misused in deepfakes. The main gap with deepfakes is found in the protection of identity, and that is image rights, which is why it's so relevant for me to discuss this.
5:23
Image rights are part of one's human dignity. It's about how people see you in the public space, and deepfakes have the potential of degrading the way in which people see you and misrepresenting who you are. In fact, creating a deepfake of someone involves taking their images and running them through deepfake software, so there's an element of misappropriation of your identity there as well. And the main issue in South Africa is that when you're looking at issues of human dignity and your right to identity, you're dealing with constitutional rights and values that are being infringed. No doubt, there's a question of freedom of expression for those who create and share deepfakes, that is, deepfakes that do not degrade human beings, but it becomes a problem particularly in the deepfakes space because a lot of deepfakes involve creating, sharing or depicting pornographic and degrading material.
6:38
The question about pornographic material is one key thing that raises questions about gender-based violence, a new form, or maybe not necessarily new, but a different kind of angle on gender-based violence. Obviously, the majority of the victims of deepfake pornographic material are women and children. So, in this world of identity misappropriation and exploitation, you need to understand that deepfakes have the ability to really undermine our legal system, and also our criminal law system.
7:22
Now the Cybercrimes Bill has been on the table for years in South Africa. And I'm sure those of you who know about it are already rolling your eyes, like, "Why are we still waiting?" The issue with the Cybercrimes Bill is that, at this point in time, I know that it is still waiting for the President's signature before it can come into force. So while we're still waiting, I could say that at least we are almost there compared to where we were a couple of months back. And I do believe that the Cybercrimes Bill could try and solve some of the issues that we see with deepfakes, because it gives grounds for prosecuting those who create and those who distribute deepfakes, since it regulates malicious communications in section 16 of the Bill. According to section 16, fake or real videos that intentionally reveal a person's private parts in a manner that offends their dignity, which obviously includes their image rights, will be punishable if the victim, the person depicted in the deepfake, can show a reasonable expectation of privacy.
8:48
What is interesting about the Cybercrimes Bill is that nowhere does it actually use the term 'deepfake', but the broad language that speaks about malicious communications can make room for deepfakes, and so can the broad language that speaks about data messages that are used to infringe on people's identity and dignity. All those kinds of terms give us room to interpret deepfakes as included in what is regulated under the Cybercrimes Bill. What I find troubling about section 16 of the Cybercrimes Bill, though, is that it only deals with fake or real videos that reveal people's private parts. And a deepfake will not always be about pornographic material. So I wonder, in cases dealing with elections, politicians and so forth, especially with South Africa having an election coming up soon, when a deepfake is used in that context, would something like that be captured under the Cybercrimes Bill, especially not knowing when the President is going to sign this Bill?
10:14
I think there are actually already issues that need to be addressed in this particular Bill. Although it doesn't prevent anyone, even a politician who suffers the brunt of a deepfake during an election, from pursuing a fraud charge. Because, if you think about the old charge of fraud, or the old crime of fraud under the common law system, it does cover misrepresentation that has a detrimental effect on its victims, whether or not money is involved. So, I think the common law crime of fraud is capable of covering deepfakes as well.
11:06
Another issue that I've found so interesting under section 16 of the Cybercrimes Bill is that a malicious communication, or a deepfake in this case, that is revealing of someone's private parts would only be punishable if the victim has a reasonable expectation of privacy over that material. And I wonder, why is it that the legislature chose to limit section 16 to those private facts that one seeks to keep private? Because if you think about the way in which a deepfake creator would normally operate, whether your image is available in the public eye or in your private space, the deepfake creator is using someone's image without permission, and as we know it in the context of image rights, that in itself is not allowed. That's illegal, that's unlawful. And so I wonder why the legislature would limit section 16 to the confines of what we know as privacy. Does it mean that if I've posted my own personal videos, even intimate videos, and someone else steals them to create a deepfake for their own purposes, that it's okay? Those are the questions that kept coming up when I was looking at section 16 of the Cybercrimes Bill, because it overlooks the possibility that someone else may want to commercially exploit videos that may already be publicly available.
13:00
Another key takeaway from the Cybercrimes Bill is that normal common law crimes like fraud, forgery, and uttering have been imported into this legislation in order to keep up with developments in cyberspace. And you do see that, even in the context of the Cybercrimes Bill, one can actually pursue a charge on the basis of the Bill for fraudulent acts through deepfakes, because some deepfakes are also aimed at market manipulation. I found it quite interesting when I was reading up on how deepfakes could potentially be used for market manipulation. Authors like Chesney and Citron give an example of how a deepfake video can misrepresent a company's CEO to the detriment of its share price on trading platforms, or the one by Kietzmann and others, an example of how a deepfake can show a statement of false earnings estimates, which may hurt a company's stock price.
14:12
So deepfakes have quite a wide range of purposes that can affect different branches of our world. I mean, if you think also about national security breaches, defamation even, it could even call into question core democratic values such as freedom of expression, because obviously, if you want to prosecute someone for a deepfake, or even if you want to sue someone for a deepfake, a question of freedom of expression would come up, particularly in the civil context, especially if someone wants to raise the defense that a deepfake is being published in the public interest, or it's newsworthy, or it's even just a parody or satirical kind of deepfake. For instance, I don't see how Barack Obama could sue Jordan Peele for that deepfake, because there isn't any intention to deceive or defraud or degrade anyone with that kind of deepfake. It sort of gives this impression that everyone's face is 'fair game' because of the idea of freedom of expression and its core place in a democratic state. It gives rise to this notion that it is okay to appropriate and misuse people's images, as long as you can get away with proving that you've done it within the boundaries of the right to freedom of expression. And this is a challenge that I also have with memes.
16:11
I think about the dangers that we're facing now with deepfakes, in that it's highly possible that you could tag someone in, or share, or like a deepfake not knowing that it's a deepfake, because the AI that is used to create deepfakes presents us with deceptive media. So, in posting it, tagging someone in it, or allowing yourself to be tagged in that kind of post, you wouldn't really know just how wrong or how infringing that activity is of someone else's rights. And obviously, you would allow that tag, or that like, or that share to remain, and you'd probably share it too, especially if you like to share memes and so forth. You'd find yourself in a very sticky situation, because our courts have decided that a person who is tagged in an infringing post on social media may be held liable along with whoever posted the initial post. And it's quite challenging in this context of deepfakes, because I am of the view that we can't hold everyone who is tagged in a post liable along with the initial poster. In the deepfakes context, if I'm not aware that a deepfake is violating someone's rights, or even that it's a deepfake, that it's fake material, I will not naturally be prompted to untag myself or remove myself or disassociate myself from that post. And so it would be unreasonable to hold me liable for being associated with a deepfake. Social media, in conjunction with deepfakes, has me thinking a lot about this meme culture as well.
18:12
I spoke about memes in episode three of this podcast; you can go back and listen to that episode. And remember, this podcast is also available on all major podcast platforms: Apple Podcasts, Spotify, Google Podcasts, Deezer... the works.
18:30
Follow @irbynmashinini on Twitter, Facebook, Instagram, and even LinkedIn. Yes, I'm on LinkedIn. And remember, this podcast series is based on research supported by the National Research Foundation of South Africa, grant number 121887, but the views and opinions expressed in the series do not reflect the views and opinions of the National Research Foundation, its management or its governance structures.
19:00
Until next time, control your image rights!