The Shifting Privacy Left Podcast

S3E15: 'New Certification: Enabling Privacy Engineering in AI Systems' with Amalia Barthel & Eric Lybeck

July 23, 2024 Debra J. Farber / Amalia Barthel & Eric Lybeck Season 3 Episode 15

In this episode, I'm joined by Amalia Barthel, founder of Designing Privacy, a consultancy that helps businesses integrate privacy into business operations; and Eric Lybeck, a seasoned independent privacy engineering consultant with over two decades of experience in cybersecurity and privacy. Eric recently served as Director of Privacy Engineering at Privacy Code. Today, we discuss: the importance of more training for privacy engineers on AI system enablement; why it's not enough for privacy professionals to focus solely on AI governance; and how their new hands-on course, the Privacy Engineering in AI Systems Certificate program, can fill this need.

Throughout our conversation, we explore the differences between AI system enablement and AI governance, and why Amalia and Eric were inspired to develop this certification program. They share examples of what is covered in the course and outline the key takeaways and practical toolkits that enrollees will get, including case studies, frameworks, and weekly live sessions throughout the program.

Topics Covered

  • How AI system enablement differs from AI governance and why we should focus on AI as part of privacy engineering 
  • Why Eric and Amalia designed an AI systems certificate course that bridges the gaps between privacy engineers and privacy attorneys
  • The unique ideas and practices presented in this course and what attendees will take away 
  • Frameworks, cases, and mental models that Eric and Amalia will cover in their course
  • How Eric & Amalia structured the Privacy Engineering in AI Systems Certificate program's coursework 
  • The importance of upskilling for privacy engineers and attorneys



TRU Staffing Partners
Top privacy talent - when you need it, where you need it.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.

Transcript

Amalia Barthel:

We know intuitively that actually, privacy engineering really is about being in the know about how much good or harm you could be doing with the data and the processing of the data. And I think when organizations embark on these new projects, like AI, they actually have no idea whether their outcome is going to be good or bad for society, for other people, and not just for their company. I think this is going to be a huge eye-opener for them. They're going to go in with their eyes wide open, not shut.

Debra J Farber:

Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems.

Debra J Farber:

Welcome everyone to The Shifting Privacy Left Podcast. I'm your host and resident privacy guru, Debra J Farber. Today, I'm delighted to welcome my next two guests: Amalia Barthel from Designing Privacy and Eric Lybeck, independent consultant and privacy engineer. With over 15 years of experience building privacy management and compliance programs with Chief Privacy Officers, Compliance Officers, General Counsel, and CISOs across many industry verticals, Amalia founded Designing Privacy, a privacy consultancy that helps build privacy into clients' business operations through privacy engineering, risk management, and easy-to-implement tools. She's also a lecturer and academic programs advisor for the University of Toronto SCS, the School of Continuing Studies.

Debra J Farber:

Eric has two decades of combined cybersecurity and privacy experience, developing solutions that help organizations implement responsible AI, protect their data, and comply with regulatory requirements. Eric was most recently the Director of Privacy Engineering at Privacy Code. He's now an independent privacy engineering consultant, currently on assignment with a major automobile manufacturer, and is also working with Michelle Dennedy to update and co-author her seminal book, The Privacy Engineer's Manifesto.

Debra J Farber:

Today, we're going to be discussing the need for more training on AI system enablement and why it's not enough for privacy professionals to just focus on AI governance. We'll learn more about Amalia and Eric's new hands-on course and certificate, which they call the Privacy Engineering in AI Systems Certificate (PEAS). Welcome, Amalia and Eric!

Amalia Barthel:

Thank you. That was such a great intro.

Debra J Farber:

Yeah, well, I'm just reading your bios. You have such great backgrounds and are doing some really exciting stuff, so thank you for being here today.

Eric Lybeck:

Yeah, you're most welcome. We're delighted to have the opportunity to talk about what we're doing.

Debra J Farber:

Awesome. Well, Eric, why don't we start with you? You've worked as a privacy engineer for the last 10 years, so why privacy engineering and AI? Why is that a topic we should focus on? I feel like this is an obvious question, but how is it different?

Eric Lybeck:

Right. Well, I mean, I think most of us have already been doing this. We've already been working with systems; we've already been doing privacy engineering in these systems, and now we're just talking about AI because we have so many more capabilities now. The automated decision-making that we didn't really touch that much 10 years ago - now we have these great new AI technologies that allow us to do so much more of it. So, this is really about AI system enablement: making sure that we're engineering the right features and the right systems, that we're considering the different privacy threats when we're working on those systems, and that we're all upskilling so we understand how AI may be impacting these systems differently.

Debra J Farber:

How is AI system enablement different from AI governance?

Eric Lybeck:

I think we know how to do AI governance. It's similar to how we've done privacy governance: draft policy, stand up an organization to do governance, handle the structure, handle the people. But, instead, we need to be talking about what to do with AI in specific systems. So, if we're using AI, we need to understand what some of the threats are to AI-enabled systems. We need to understand how to use personal information correctly, and what the risks are of using personal information that might be processed through machine learning or through a large language model. Whereas AI governance is more of that high-level strategy, what we're doing in this course is getting into practical examples of how we can work on and engineer better AI-enabled systems.

Debra J Farber:

That's awesome; thank you for that. Amalia, what prompted you both to design this new course, this Privacy Engineering in AI Systems course?

Amalia Barthel:

So, that's a really great question and, just for our listeners, I want to tell the story about how Eric and I decided to go in together on this. We've known each other for a long time. We actually met in our previous lives - as we all have previous lives - at PwC, where we worked in the privacy practices, me in Canada and Eric in the U.S. We started talking about privacy and how we could collaborate. So, that's when I first got to know Eric, and we touched base throughout, like, over a decade, and then I found out that he is one of the right hands of Michelle (Dennedy). Michelle has many right hands at Privacy Code, and I was just fascinated.

Amalia Barthel:

I was trying to implement a privacy engineering discipline for a couple of my private clients, and when I saw the Privacy Code software, I thought, "Oh my God, this is exactly what I need." So I got talking to Eric, and it just was a meeting of the minds. But, as you caught in my intro, in my bio - I've actually designed, and am delivering, a certificate program for privacy at the University of Toronto School of Continuing Studies. That was done in 2016, and it was done on the same premise: I found a gap in the market. At the time, the gap was that there were a lot of privacy professionals who understood privacy at a theoretical level. But a lot of them came to me and would say, "Can you mentor me? I know about privacy, but I don't know how to do privacy." So I found that there was a gap then, in 2016, in the market around operationalizing privacy and, more so, bridging that gap between the business people, the legal people, and the IT people, because you have to tell them how to create those features and functionality in a way that is privacy-protective and respectful. I saw the exact same problem now, and I talked to Eric. And Eric actually put out a post, kind of testing and engaging the market.

Amalia Barthel:

How would people receive it if we were going to do something, of course, about AI that goes deeper - deeper than governing risk and the governance strategy of AI in general? How do you do AI? How do you implement it in operations? We've had some great feedback from our network, and we thank them. And somebody said, "You know, I think you should orient this course towards lawyers and engineers." So that was the first thing - our aha moment was "we need these people in the same room." Now, Michelle has been saying that in her book for 25 years, Eric, or so?

Eric Lybeck:

It's the 10-year anniversary this year, actually.

Amalia Barthel:

Oh, okay, all right, so sorry.

Debra J Farber:

She probably has been saying that for 25 years.

Eric Lybeck:

Exactly, exactly. I'm sure she has.

Amalia Barthel:

So, that is why "Privacy Engineering in AI Systems": we just felt that we have this gap, and we need to bring together the two worlds - the people involved with either privacy, or privacy and technology - so that they can talk to and understand each other.

Debra J Farber:

I was going to ask who, ideally, should take this course, but I think you already kind of answered: it's meant for both legal professionals and technical professionals. I do know that those looking for technical coursework often want hands-on labs, while lawyers or consultants may be looking for tools, like frameworks they can use, and the unpacking of those. How do you bring together the concepts at the right level for both technical folks and legal folks? I don't want to just say "technical" - I feel like I'm a technologist, but I'm not necessarily going to configure servers or write code. So, when I say tech, I mean applied technologists versus maybe someone who's interested in technology and can talk about it, but isn't necessarily going to go into a lab and start coding something.

Eric Lybeck:

We're working with, and planning for, any sort of skill level. Certainly, if you're a legal professional, you have some experience understanding case studies and use cases from your business, and that's how we're going to be teaching the course. We'll have case studies with specific use cases of some AI-enabled system, and there may be some technical aspects to them. So if we have a very technical component diagram or something like that, we'll thoroughly explain those diagrams. We'll do this in a way that any professional will understand what we're teaching. We've done benchmarking. We've seen courses out there that are maybe more specific on AI risk management, or very technical about artificial intelligence technology itself - those courses would require college calculus or linear algebra. That's not us. We're going to be focused on real, practical case studies and real, practical examples, as well as working with students to develop a capstone project that is truly real-world for them, so they can apply what they learned through the course to their jobs.

Debra J Farber:

I think that's pretty exciting, because one of the things I guess I didn't draw out of you earlier is that this course isn't pre-recorded, where you just pay a price and then go at your own pace. This is actually a weekly course. We'll go into the different modules, what you'll be covering, and the approach a little later on, but this way people will be able to bring their own experiences, talk amongst themselves, share what they've seen, and ask you questions. So, it's a live course.

Eric Lybeck:

Absolutely. We'll have live sessions because we know we're going to learn from our students as much maybe as our students learn from us, and so those conversations and those classroom discussions will just be very essential for the learning in this course, because we'll all understand these case studies, these use cases, much better through those classroom conversations.

Amalia Barthel:

Yeah, and one of the unique ideas, maybe, that we're bringing into this course is that, even though we have more technologists on one side and legal people on the other, we're actually going to put them together in a virtual room and ask them to explain things to each other. I think that is going to benefit both of them tremendously, because what we're finding is that we're reading from the same page, but we understand completely different things.

Debra J Farber:

Absolutely. In fact, to go back to that little parable about Michelle Dennedy - she's probably been talking about this for 25 years - I just want to draw out the actual challenge she's always talking about, especially in that book: lawyers like to architect their language a little more generally, to capture as many risks as possible, right? So in a privacy policy, you might see something like "we use reasonable security mechanisms or approaches," but engineers need something tangible that they can code to and use to determine whether or not it's built correctly. And you can't code to reasonableness, right? So I think having these discussions, as you're describing, will really get folks to flex their muscles and exercise how they discuss these topics, so that the other side - not "other side," but the other specialty - can understand. And then they may realize, "Oh, I need to be more specific," or "Maybe I need to be more high-level and systemic about how I'm framing something," right?
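To make that lawyer-engineer gap concrete, here is a minimal Python sketch contrasting the two registers. It is purely illustrative: the config fields, thresholds, and function names are hypothetical examples, not anything specified in the episode or the course.

```python
# Illustrative only: "reasonable security" rendered as testable checks.
# Every field and threshold here is a hypothetical example.

from dataclasses import dataclass


@dataclass
class SystemConfig:
    tls_version: str            # e.g., "1.3"
    encrypted_at_rest: bool     # storage-level encryption enabled?
    retention_days: int         # how long personal data is kept


def meets_security_requirements(cfg: SystemConfig) -> bool:
    """Checks an engineer can actually code and test against --
    one possible concrete rendering of 'reasonable security'."""
    return (
        cfg.tls_version == "1.3"      # transport encryption, pinned version
        and cfg.encrypted_at_rest     # encryption at rest
        and cfg.retention_days <= 90  # bounded retention
    )


# The policy-register equivalent -- "we use reasonable security
# mechanisms" -- offers nothing comparable to verify in a build.
print(meets_security_requirements(
    SystemConfig(tls_version="1.3", encrypted_at_rest=True,
                 retention_days=30)))  # True
```

The point of the contrast: the policy sentence can only be argued about, while the function above can fail a build.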

Debra J Farber:

So I think this is really exciting. Amalia, what do you hope attendees take away from this course? What will it enable them to do as they each go back to their respective organizations?

Amalia Barthel:

I know I'm jumping the gun a little bit by saying this - we're going to talk about the frameworks later - but in our prep work for each class, we are feeding in the fundamentals of trustworthy AI and privacy engineering one step at a time. So it's like a ladder: we build knowledge with every single module, and when students actually start working on the use cases, they get to dip into various frameworks. I'm not going to name them, because we talk about them a bit later, but what they're going to bring with them is an entire toolkit. They're going to know about these many resources that they can always mix and match. It will be a systematized approach.

Amalia Barthel:

As to: how do I approach a request that comes from group X? The business has this need; they want to use artificial intelligence, and they will not take no for an answer. A: how am I going to be an enabler? And B: how am I going to protect the organization from itself? That's what they're going to take away - a toolkit that enables them to do that.

Debra J Farber:

Thank you so much. That's awesome, Eric. What about you? What do you think that attendees might walk away with and bring back to their organizations?

Eric Lybeck:

We did a series of webinars and we talked about one of the frameworks as being a good tool to be used for developing policy.

Eric Lybeck:

So we'll be talking about developing policy, and we'll be talking about developing the program that goes in place around governing AI systems and performing these tasks. We'll bring in other examples of lighter-touch ways of doing assessments, and we'll bring in concepts of privacy by design - but as AI systems by design - working through a system's lifecycle. That's some of what we'll be doing through these case studies. As we've prepared, we've worked on a number of different case studies; in one example, the product enables police departments to save time by automating report writing - it can actually connect to the video camera footage, go through all of it, and automate the report writing. So we can bring in a case study like that and talk about it.

Eric Lybeck:

What are some of the privacy risks? What are some of the ethical considerations? What are the potential risks or threats of using AI in this particular use case? We'll talk through that, and I think by talking through these different types of use cases - which are not just about one specific industry; they can be public safety, automotive, or other industries - it'll really provide a nice, rich foundation for you to work in this area and get better results when you're sitting at that table as the privacy professional. You'll be able to contribute so much more to the projects you get involved with.

Debra J Farber:

That would be really helpful. And I also want to point out: it sounds like this goes beyond the CIPP/US, the CIPM, the CIPT.

Eric Lybeck:

You know, those provide a certain amount of core background knowledge. It's the theory of privacy, but you still need the real-world examples, the real-world experience, to really be an effective privacy professional. And it's the same thing with AI. So that's what we're working on in our course: helping students develop that real-world skill by going through real-world examples and real-world case studies, and helping students work through their own project during the course, so they apply what they learn to, perhaps, a real concern they have in their organization. It really helps to apply the information in a much more specific way than just having the knowledge of what machine learning is or what privacy is. It's really going into more depth with it.

Debra J Farber:

That makes a lot of sense. In some ways, to me it feels more like a bootcamp: it's getting you ready to actually, practically work on AI projects within your organization. So that's pretty cool.

Eric Lybeck:

I like the word "bootcamp," but I don't think Amalia and I are very much drill instructors. We're much more about having those conversations and bringing these two different groups together - the legal professionals and the technical professionals - and I think the groups will have a lot of fun interacting with each other and coming to the class with their different perspectives.

Debra J Farber:

So, we spoke about it a little earlier when we talked about AI risk frameworks, but I'm curious: what approaches and mental models - basically, what risk frameworks - are out there that you end up covering? And not only educating on, but then taking those frameworks and walking through use cases of how you would evaluate and map to them. What are some of them?

Amalia Barthel:

In our free webinars, which we offered to anyone who was interested, we talked about three of them. Of course, the darling of AI governance: NIST AI. But NIST AI is a risk management framework, so a lot of people are maybe a little bit confused - it's not just about AI governance, because AI is a technology. You have to remember the days of bring-your-own-device: we had to govern how we introduced that technology. The cloud: we had to have a position on how our organization was going to work with that particular technology offering. But NIST also goes into risk governance and risk management. So, let me go through the frameworks that we talked about.

Amalia Barthel:

We talked about the NIST AI framework; the U.S. Government Accountability Office (GAO) AI framework; and the Generative AI Risk Assessment created by Vischer, a fantastic lawyer from Switzerland, who has built an Excel spreadsheet tool that is fantastic at the use-case level. It talks about applying generative AI, but it could be used as a privacy impact assessment for AI, and it's very broadly augmented to add other considerations such as intellectual property, copyright law, fraud, and other laws that may intersect with an AI-type use. Those were the three frameworks we discussed in the webinar, as we didn't have a lot of time, but additional ones are coming - almost being issued every day. One we noted was Germany's joint guidance on AI and data protection; that was a very, very good framework. Of course, there's also the Colorado AI Act, which is incredibly informative because it's very risk-based and very interestingly formulated.

Amalia Barthel:

There is the UK regulator's ICO AI Risk Toolkit, which I have personally used and think is a fantastic tool. There's the CNIL guidance - the French regulator's guidance for AI. The World Economic Forum's "Adopting AI Responsibly." The Future of Privacy Forum has issued an AI policy checklist, which we also found very, very informative. So there are a number. And recently the EU issued a framework called the Human Rights, Democracy and the Rule of Law Assurance Framework for AI Systems. It's 335 pages, and the acronym is impossible to remember, but it's the Human Rights, Democracy and the Rule of Law Assurance Framework, and we're going to talk about that too. What we are asking our students to do is learn how to navigate these frameworks. Nobody's going to remember all of them, but they're going to find areas where one fits better into their use case or their organization, and they'll have the ability to reach into all of these different resources and use them to their advantage.

Debra J Farber:

That is pretty great. There are so many fire hoses of information out there, and there's just so much, you know? It's kind of like wading through a haystack trying to find needles that make sense, so it's great to have you lead people through what is relevant right now, what's coming down the pike, what's good for certain use cases, and what might be better for others. I think it's great that you'll be able to walk people through that. Eric, what about you? Are there any others?

Eric Lybeck:

Yeah, you know, one of the things I worked on when I was at Privacy Code was a privacy engineering process, and it's also being included in the revision of The Privacy Engineer's Manifesto. We took a look at different sources - like the Privacy by Design ISO standard, ISO 31700. For example, there were aspects of that standard that were different from some other privacy-by-design work we'd looked at. We looked at what the Institute of Operational Privacy by Design had done, and we combined these together into this privacy engineering process, so we'll be using something similar to that as well. There are all these different AI frameworks, right, and you can't - well, you could apply all of them, but you'd never get anything done. You have to come up with some sort of process that allows you to triage, that allows you to understand where you need to spend your time. Maybe your organization does use the NIST AI Risk Management Framework - the entirety of the framework - on some system or some major business transformation, but it's very comprehensive. Just that one framework is very comprehensive, and you couldn't apply it to every single AI-enabled system.
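For illustration, here is a minimal sketch of what such a triage step might look like in Python. The screening questions, scoring, and tier names are invented placeholders, not the actual process from the course or the Manifesto.

```python
# Illustrative only: a first-pass triage that decides how much
# assessment rigor an AI-enabled system needs. The questions,
# scoring, and tiers below are hypothetical placeholders.

def triage_assessment_depth(uses_personal_data: bool,
                            makes_automated_decisions: bool,
                            affects_vulnerable_groups: bool) -> str:
    """Map a few screening answers to an assessment tier."""
    score = sum([uses_personal_data,
                 makes_automated_decisions,
                 affects_vulnerable_groups])
    if score == 0:
        return "lightweight checklist"
    if score == 1:
        return "targeted assessment (lighter-touch framework subset)"
    return "comprehensive review (e.g., full NIST AI RMF treatment)"


# Example: the police report-writing product discussed earlier touches
# all three factors, so it lands in the deepest tier.
print(triage_assessment_depth(True, True, True))
```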

Eric Lybeck:

So what we'll be working on is how you come up with a toolkit that you can apply at the beginning, in design, working with the product manager to understand the potential privacy risks and other risks from an AI-enabled system - and how you scale that. How do you bring more attention to some of the threats and some of the risks? How do you scale that up, and what are some of the things that drive it? We'll also be looking at whether there are systems or ways of automating this. Can you take these frameworks and put them into a large language model - into the prompt - to help you come up with some of those implementation requirements? We'll be exploring some of that during the course as well.
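As a sketch of that last idea - putting a framework excerpt into an LLM prompt to draft implementation requirements - something like the following could work. The `call_llm` helper, the prompt wording, and the framework excerpt are all assumptions for illustration; they are not course material or any particular vendor's API.

```python
# Illustrative only: using an LLM to draft privacy engineering
# requirements for an AI use case, guided by a framework excerpt.
# `call_llm` is a hypothetical stand-in for any LLM client that
# takes a prompt string and returns the model's text response.

FRAMEWORK_EXCERPT = """\
(Paraphrased, for illustration) Map the system's context and purpose;
catalog personal data collected or inferred; identify potential
negative impacts to individuals and groups."""

USE_CASE = ("A product that drafts police reports automatically "
            "from body-camera video footage.")

PROMPT = f"""You are assisting a privacy engineer.
Using only the framework excerpt below, list concrete, testable
implementation requirements for the described AI use case,
one per line, each traceable to a framework item.

Framework excerpt:
{FRAMEWORK_EXCERPT}

Use case:
{USE_CASE}"""


def draft_requirements(call_llm) -> str:
    """Send the assembled prompt to whatever model you use."""
    return call_llm(PROMPT)
```

The output would still need a human review pass; the sketch only shows the prompt-assembly step Eric describes, not a full pipeline.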

Debra J Farber:

Oh, that's pretty exciting, because then you're actually using the technology to make it easier to use the technology.

Eric Lybeck:

Exactly, exactly.

Debra J Farber:

Why don't you tell us a little bit about how you structured the course? How long is it? What's the format? What topics do you cover?

Amalia Barthel:

Yeah. We thought about this from the start, based on my experience with the University of Toronto: there are certain requirements for a certification - a certain number of hours of study, assignment work, projects, and, of course, an exam that proves the student has ingested and understands the knowledge at a significant level. So we have envisioned this course as 12 weeks, which includes assignments, class discussions every week, and the capstone project at the end; then we give students maybe a couple of weeks to recap and take the certification exam. In terms of classroom hours, what we've envisioned is that students have some material we provide up front to study before they come into class. Then, with the knowledge they've obtained by reviewing the suggested materials, we go into a case study. The discussion is once a week: we bring the class in for an hour and a half or so, and we try to get to the bottom of that use case and progress it as we accumulate more knowledge.

Amalia Barthel:

The topics are, of course, trustworthy AI, but we're also blending in the fundamentals of AI: bias, fairness, and ethics. We bring in the concept of harms. We bring in the newest white papers written by the phenomenal Daniel Solove, Danielle Citron, and Margot Kaminski, which raise these questions: How are privacy laws equipped to support the introduction of AI? Where are the gaps? And we bring to the students' attention that that's where they're always going to be - in the in-between, trying to solve something the business asks them to do, but within the confines of the legal guidance available - and help them become instrumental in solving those problems. So that is the format of the classroom. We had an FAQ section during our webinar, and that's also on the website, where potential students or anyone interested can find more information.

Debra J Farber:

Excellent. And are there set dates for it? Like, when does the next course start?

Amalia Barthel:

We have put a start date of September 9th, because everyone goes back to school then. We are hoping people will enroll through the summer, and we counted out the 12 weeks. We want to make sure we work around Canadian Thanksgiving and U.S. Thanksgiving - we're going to try to fit everything in but be ready for Christmas. That was the point.

Eric Lybeck:

I think we'll have some flexibility in the sessions. We have a 12-week plan, but some of the sessions may include a midterm exam, and if somebody is not available one week, we may have the ability to hold a catch-up week. We have some extra time built into the schedule, and we don't want it to be so onerous that people are afraid: "Oh, there's 12 weeks of classes."

Debra J Farber:

Or, "Oh, there's exams, there's exams." Yes, exactly.

Eric Lybeck:

We're really talking about a very conversational course, where we're going to be looking for students to provide their perspectives on these case studies and really apply them to their real-world organizations and the challenges they have there. So I think it'll be very valuable - for an hour and a half or two hours, depending on the night or the day when it's offered, I think it'll be very useful, and everybody is going to get a lot out of it. At the end, they're going to have a number of different tools that they've worked with through the course and applied to different case studies, so those tools will be very useful as well - they'll have experience working with them. We mentioned the GAIRA, the Generative AI Risk Assessment; there's a light version of that as well as a more comprehensive one. They'll also understand how to use the NIST AI Risk Management Framework in a lighter-touch, more pragmatic way. So I think that'll be very, very good for our students.

Debra J Farber:

Excellent. So, I'm curious why you decided to create a certification element at the end of your course.

Amalia Barthel:

I think that the amount of work the students are going to put in, as well as the breadth and depth of the knowledge we bring in the materials, really lend themselves to more of a certification rather than just a course that people take.

Eric Lybeck:

Exactly, and we don't intend this to be a one-time course. We want to continue to build it over time, so we'll certainly be asking our students for feedback. If we get feedback and revise the course, we'll provide the updated materials to students going forward. But we'll also bring in new emerging topics in AI. We've actually reserved one of these 12 weeks - basically the 11th week - to talk about emerging topics in AI. So if something comes up during the course that we haven't prepared for - maybe in the third or fourth session we get a question we didn't address in our plan - we'll cover that topic in the 11th session. So we'll be very adaptive to the questions that students bring to us during the course as well.

Debra J Farber:

That sounds great. I think it makes sense - as this is an evolving field, there are always going to have to be updates. But I'm excited; I want to take your course. I think it's going to pull from all of the good approaches that are out there and bring them together into one place, with a methodology for how you work through the material, plus hands-on use cases. It really seems a great way to upskill. I could see people putting this on their Q4 - whatever the last quarter of the year is for them - learning plan, where they can say: "Hey look, I know AI is a big part of what we're working on, even though I'm a privacy professional, so it'd be great if I had this training, because it'll supercharge my ability to hit the ground running with actual implementation and assessment - not just risk assessment, but future engineering and basically enabling AI systems."

Amalia Barthel:

And the thing is, Debra: we know that people need to continue to upskill. They need to show that they're keeping up with the times to be marketable, and we feel that's what's happening with AI. I think this is the perfect pairing, because we still need to engineer privacy, safeguards, and guardrails into technology from now until the end of time.

Debra J Farber:

Yeah - so why not do AI-focused privacy at the same time? There's such hype around it right now, and all companies are asking, "What is my AI strategy?" It seems like an opportunity to also push the shift-privacy-left mantra - let's address it earlier - and thus enable the company to innovate while also protecting personal data. Why not add them together and use this AI hype cycle? It's not going away; it's only going to get more and more embedded into our organizations. Seems like a good time to combine the effort.

Amalia Barthel:

We think so, because you and I and Eric, we know intuitively that privacy engineering really is about being in the know about how much good or harm you could be doing with the data and the processing of the data. And I think when organizations embark on these new projects, like AI, they actually have no idea whether their outcome is going to be good or bad for society, for other people, and not just for their company. I think this is going to be a huge eye-opener for them. They're going to go in with their eyes wide open, not shut.

Debra J Farber:

Definitely, that makes sense. So, how much does the course cost? And, by any chance, do you happen to have a discount code for listeners of The Shifting Privacy Left Podcast?

Amalia Barthel:

Well, I'm glad you asked. I think we have priced it incredibly reasonably, and I'm not going to say the cost on the podcast, but if anyone listening is interested and has listened to the end, there is a bonus. If you go to the website - www.designingprivacy.ca - and you click on the course tab, you'll see that right now we're in promotion mode. For the listeners here, we have a $300 USD discount; you can submit an inquiry there and just say, "I'm interested; I'd like to take advantage of the code," and we will definitely make sure that you get that discount.

Debra J Farber:

Is there a specific code that you wanted to share? "Podcast 300." There it is! I will also put it in the show notes, so it'll be easy to refer back to. But thank you so much for giving that coupon to our listeners. I really look forward to getting feedback from them on the course, and I hope we get a lot of signups.

Eric Lybeck:

We knew that your podcast definitely reaches a lot of the technical professionals who work in privacy engineering, and that's one of the reasons we reached out to you and wanted to talk about what we're doing. To me, it's just essential that, as a privacy engineer, I continue to upskill. That's one of the reasons, when Amalia and I started talking, we decided to do this course together - because it's part of my upskilling, right? By helping teach the class, I am certainly learning in much more depth about different AI systems and different use cases that will be explored through these case studies. So that's what I encourage everybody who's a privacy engineer to do: really look at opportunities for continuous upskilling.

Eric Lybeck:

AI is just moving so, so fast. It probably hasn't even reached the top of its hype cycle. Every organization I've talked to, certainly through my work at Privacy Code, was doing work with AI. We were doing work with AI at Privacy Code ourselves, using machine learning to read privacy policies and identify privacy engineering requirements. So I really encourage your listeners to take a look at what we're offering and let us know if you're interested. If they have any questions, just email us - we'll definitely be more than willing to have conversations about what your listeners are looking for from an upskilling perspective.

Debra J Farber:

I was actually going to ask if you had any other words of wisdom to leave our listeners with today, but I think that upskilling makes a lot of sense as a last point. What about you, Amalia? Any last words of wisdom?

Amalia Barthel:

I do have a couple of points, because I really want everyone listening to realize that we are so different from a professional association or an academic program. We are practitioners; we're working in the trenches, so we are like you, and we'll be learning together at this level - with, of course, the added bonus that we've been in the instructor role for a long time, so we know how to teach, which is quite a different skill. But we feel your pain. We're going to be there with you to figure out how to make sense of things - not like an academic, not like a professional association, but practically. We're going to give you that practical, in-the-trenches knowledge. So that's one point I wanted to make: we are like you.

Amalia Barthel:

The second point I wanted to make is that the European Union has a program - they have an academy that is free, and we're happy to provide the link through Debra. They've created at least one course that talks about how they're actually going to rewrite, or evolve the writing of, policy in general laws so that they can be enacted into machine code. So this is the future. I have sent that to Eric, and I've sent it to a couple of my friends. I'm like: the European Union is doing this; they're leading the way. They've realized that the biggest gap in adoption of their laws is that people don't understand how to make them real - into technology, into code.

Debra J Farber:

Wow, yeah, I would love to read up on that. So please do share the link and I will add it to the show notes.

Amalia Barthel:

Yeah, it's at academy.europa.eu, and we will send you the link, Debra, so everyone can see what we're talking about.

Debra J Farber:

Excellent. Well, Amalia and Eric, thank you so much for joining us today on The Shifting Privacy Left Podcast. Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend. And if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to Privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.

Chapter Markers

Privacy Engineering in AI Systems
Bridging Legal and Technical Realms
Navigating AI Risk Frameworks and Policy
Advanced AI and Privacy Course Discussion
