Speaking of ... College of Charleston

ChatGPT Explained: A Conversation with Computer Science Professors About Conversational A.I.

Navid Hashemi and Sarah Schoemann Season 2 Episode 5

In this episode of Speaking Of..., Tom Cunneff, College of Charleston Magazine editor, talks to Navid Hashemi and Sarah Schoemann, computer science professors, about artificial intelligence (AI) and ChatGPT, and what this means for higher education.

"It's revolutionary," says Hashemi, who compares the development to the Industrial Revolution. "In the Industrial Revolution, we tried to replace our muscles with robots or tools. But here, the systems are trying to help our brain to make better decisions and somehow make our life easier. If the paradigm shift is exponential in the next few years, I believe that we are going to see a lot of new advancements in many different fields: in drug discovery, in music, in art, in robots, social living, driving and self-driving."

One of the biggest questions and hottest debates is how these large language models will change higher education and whether educators should embrace or ban AI from classrooms. "I think it's a technology that you really need a nuanced approach to," says Schoemann. "Ultimately, the idea of banning it outright will fail because students are savvy, and trying to ban any technology from students is never really the right move. But I'm not sure that it needs to become the center of the classroom."

Featured on this Episode: 

Sarah Schoemann is an assistant professor in the Department of Computer Science; she received her PhD from Georgia Tech in 2021. At the College of Charleston she is the director of the Critical Art and Technology Lab (CATLab). She primarily teaches courses in the CS department's Computing in the Arts ("CITA") program, which combines the study of the arts with computation. Trained as both a fine artist and a researcher in the fields of human-computer interaction and game studies, she focuses on the design and evaluation of new technologies such as games and interactive experiences, with an emphasis on how creativity and playfulness can have real-world impacts. She is particularly interested in the implications of technologies for critically engaging with broader social questions regarding justice, equity and inclusion.

Navid Hashemi is the director of the graduate program in Data Science and Analytics at the College of Charleston. He joined the Computer Science department in 2020 as an assistant professor and founded the Data Mining and Connectivity (DMC) research lab. He is an active researcher in spatiotemporal data mining, machine learning, Internet of Things (IoT) analytics, and crowd-sensing. Hashemi holds a doctorate in computer science from the University of Georgia, and prior to joining the college, he held a visiting faculty position at Emory University.

Resources from this Episode:

AI expert Timnit Gebru talks to 60 Minutes about bias in large language models like ChatGPT

Article about Getty Images lawsuit against Stable Diffusion for copyright infringement

Refik Anadol, an artist who uses AI to create wall-sized generative art, trained only on "ethically sourced data."

Official ChatGPT/GPT-4 webpages:

https://openai.com/blog/chatgpt

https://openai.com/research/gpt-4




[00:00:00] Hello, and welcome to Speaking Of... College of Charleston. I'm Tom Cunneff from the Office of Marketing and Communications, [00:00:20] and on today's episode I'm speaking with Sarah Schoemann and Navid Hashemi, both of whom are assistant professors of computer science here at CofC. They're here to tell us about a new AI-powered technology known as ChatGPT that really has people talking.

[00:00:34] But before we get to the questions, please give us a little background on yourselves. Sarah, you have a PhD in [00:00:40] digital media as well as a BFA in studio art, and you teach a course in human-computer interaction, among others. Is that right? Yes. I teach HCI in one of our graduate programs, as well as classes mostly in our Computing in the Arts program within CS.

[00:00:55] And I run a research lab called the Critical Art and Technology Lab in the [00:01:00] department. Very cool. How long have you been at the college? This is my second year. Nice to have you. And Navid, tell us a little bit about yourself. I know you have a PhD in computer science and your research interests include smart cities and the Internet of Things.

[00:01:14] Is that correct? Yes, I have a PhD in computer science, and I started at the [00:01:20] college three years ago; it's almost my third year. I teach computer science, computer programming, machine learning, and mainly data science courses. I also run a research lab called Data Mining and Connectivity (DMC), where we work on different projects that involve data [00:01:40] science and how to incorporate that data for different purposes.

[00:01:44] So ChatGPT is right up your alley. Yeah, exactly. And now we are trying to figure out how to incorporate ChatGPT into our research and how to embrace the new technology rather than somehow avoiding it. Explain to our listeners [00:02:00] what a chatbot is, which is what ChatGPT is, right? A chatbot basically is a system that tries to simulate conversation.

[00:02:10] And ChatGPT, to be more specific, is a newer technology that understands the context and has been trained on a lot of data. [00:02:20] That's why we end up having very good results when we try to talk with ChatGPT.

[00:02:24] It's the kind of conversation that you can have with this tool, and it provides very interesting results for different domains. There is no limit to the answers, and you can find many interesting answers [00:02:40] in each and every domain these days. I think people are a little bit familiar with these kinds of AI-powered assistants, like Siri and Alexa.

[00:02:48] How does ChatGPT differ from them? For Alexa, we try to have a system that translates our voice to text and performs some specific commands, [00:03:00] most of the time simple commands: How is the weather today? Or make my iPhone silent. But ChatGPT is a more open-ended model that is designed to generate human-like responses for a wide range of tasks.

[00:03:16] For example, you can ask it to do specific things: to add a [00:03:20] reminder, or to come up with a schedule for you. The architecture is also different. Rather than basic tools or machine learning, ChatGPT uses more advanced components such as transformers and deep learning models. In terms of interaction, that is very [00:03:40] important and interesting for ChatGPT.

[00:03:42] It understands the context, so you can keep asking questions and tweaking the questions. But with Siri, if you ask a question the next time, it doesn't understand what you asked before and you have to say everything all over again. And finally, integration: Siri [00:04:00] or Alexa is integrated with specific hardware, for example the iPhone or another specific device.

[00:04:06] But ChatGPT you can integrate to make different tools, even for programming, for design, for art, for music generation. For all of them, you can somehow [00:04:20] integrate ChatGPT and use it. Many industry leaders believe developments in AI represent a fundamental technological shift as important as the creation of web browsers in the early 1990s.
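[Editor's note: the context-keeping behavior Hashemi describes, where a follow-up question can build on earlier turns, can be sketched in a few lines of Python. This is a minimal illustration, not OpenAI's actual client code; the `respond` function here is a hypothetical stand-in for a real model call.]

```python
# Chat-style tools keep conversational context by resending the whole
# message history with every request, so a follow-up like "simplify that"
# can be resolved. respond() is a placeholder for a hosted-model call.

def respond(messages):
    """Stand-in for a real LLM call; a real client would send the full
    `messages` list to a model endpoint and return its reply text."""
    last_user_turn = messages[-1]["content"]
    return f"(model reply to: {last_user_turn})"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question):
    history.append({"role": "user", "content": question})
    answer = respond(history)  # the full history travels with each request
    history.append({"role": "assistant", "content": answer})
    return answer

ask("What is a transformer?")
ask("Can you simplify that?")  # "that" only resolves because of history
```

This is the key contrast with Siri or Alexa as described above: without the accumulating `history` list, each question would arrive with no memory of the previous one.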

[00:04:32] How excited are you guys by this development of ChatGPT? Is it that revolutionary? Yes, it is. I really believe [00:04:40] that it's revolutionary. And I would like to compare it with the Industrial Revolution, because in the Industrial Revolution we tried to somehow replace our muscles with some robots or tools, but here the systems are trying to replace our brain to some extent.

[00:04:58] Or the [00:05:00] better wording would be to help our brain to make better decisions and somehow make our life easier. So I do believe that it's really revolutionary, and the paradigm shift is somehow exponential. In the next few years, or even two, three years, I believe that we are going to see [00:05:20] a lot of new advancements in many different fields: in drug discovery, in music, in art, in robots, social living, self-driving, everything. Sarah, should educators embrace or ban ChatGPT? I [00:05:40] think that's a really interesting question. I've heard a lot of arguments in both directions, and I think it's a technology that you really need a nuanced approach to.

[00:05:49] Ultimately, the idea of banning it outright will fail, right? Because students are savvy, and trying to ban any technology from students is never really the right move. But I'm not sure that it necessarily needs to become the center [00:06:00] of the classroom.

[00:06:00] I think there are definitely use cases where it could be a positive element, as we've discussed a little bit with the writing process: maybe as a way to kickstart ideas, or to get students familiar with basic information and incorporate various sorts of general knowledge.

[00:06:15] There are considerations in terms of the efficacy of the tool, in terms [00:06:20] of the information that it returns to users; it's been known to represent falsehoods as authoritative knowledge. And so that's dangerous when you're working with students, because they're not necessarily in a position to identify what information is reliable and what isn't. I think one of the safeguards is that it's not really reliable at reproducing citations, people have [00:06:40] found. If you're asking students to do research-based work, you'll probably be able to tell pretty quickly if they're using ChatGPT.

[00:06:47] Interesting. Full disclosure: I used ChatGPT to come up with these questions. I haven't actually asked any of the ones it suggested, but it did prompt my brain to come up with other ones. And [00:07:00] so let's go through some of the rest of them. You said you've used it to help you with your teaching.

[00:07:04] Is that right? For my research, mainly. Actually, as soon as we heard about this technology, we started a project to figure out if some texts are AI-generated or human-generated. But after a few weeks we changed our minds, because instead of trying to figure out the [00:07:20] problems, we tried to shift the research into how we can embrace it.

[00:07:25] And for the last few weeks I've been working with one of my bachelor's students on his final bachelor's essay, to figure out how we can come up with good prompts, because prompt engineering is a new domain [00:07:40] generated and created by this new technology. The technology is there, the tool is there; how can we utilize it?

[00:07:47] We are trying to come up with a customized ChatGPT version based on the prompt-engineering tools that we are generating, to be able to customize the sort of prompts being sent and [00:08:00] get better answers. For this research we are working on basic computer programming, and we try to add some specific keywords to each and every question and do some sort of prompt engineering to get the best answers, answers that are easily read and understood by students who do not have any background in computer programming.

[00:08:19] So you're pretty excited about the possibilities, aren't you? Yeah, exactly. The possibilities are numerous. I was also watching a video by the co-founder of OpenAI, who created ChatGPT; it was two, three days [00:08:40] ago.

[00:08:40] And even they are very excited about the technology and the possibilities, because they created a tool but have no idea how that tool can be used in different formats. Based on the feedback, they're pretty excited to see the possibilities and how different people with different backgrounds [00:09:00] will be able to somehow utilize and use the system.

[00:09:04] And actually, I think you brought up prompts and your research around prompts, which I think is a really interesting aspect of this. One of the questions that people have about this technology is the transparency around using it: if we want to be honest about our use of it, how can we present that as part [00:09:20] of legitimate research or legitimate writing?

[00:09:22] And one of the suggestions that I've heard is making prompts part of the citation process, making prompts part of what we present or cite. I'd love if you could say a little bit more about how prompts work, because I'm interested to know. For this research, we figured out that even if you ask the exact same question multiple [00:09:40] times, the answer that you end up getting is not the same.

[00:09:44] And that is exactly what happens for us as human beings; they are trying to somehow mimic our brain. If you asked me the same question, I wouldn't give you the exact same answer, and that is the case for ChatGPT and these tools. [00:10:00] Still, there are some specific engineering techniques for prompting that help you figure out how to get better answers.

[00:10:10] For example, if you add "I have no idea about this topic" at the end of your prompt, you will get a more [00:10:20] detailed answer. Or you can say, "I have intermediate knowledge about this domain; I know X, Y, Z, but I don't know about the third one or the fourth one."

[00:10:29] All of these are things that we can add to our ChatGPT prompts. But to be honest, we still have no idea how this system is being trained and [00:10:40] how it's working. We know the overall technology, but it's not open source, and we have no idea exactly how it answers each and every question. But I know that they are spending a lot of time making it human-like, and they are trying to improve the prompts [00:11:00] and to be as responsive as possible.
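[Editor's note: the prompt-engineering idea Hashemi describes, appending a statement of the asker's background so the model pitches its answer at the right level, can be sketched as a small template. The hint wording follows his examples; the function and dictionary here are hypothetical illustrations, not part of any API.]

```python
# Build a prompt that steers the level of detail by stating the asker's
# background, as described above. Purely illustrative prompt construction.

LEVEL_HINTS = {
    "beginner": "I have no idea about this topic.",
    "intermediate": "I have intermediate knowledge about this domain.",
}

def build_prompt(question, level="beginner", known=()):
    parts = [question, LEVEL_HINTS[level]]
    if known:  # topics the asker already understands
        parts.append("I already know " + ", ".join(known) + ".")
    return " ".join(parts)

prompt = build_prompt(
    "Explain how a for-loop works.",
    level="intermediate",
    known=("variables", "if-statements"),
)
print(prompt)
```

Because the model's output varies run to run, as noted in the conversation, hints like these shape the answer's level and framing rather than guaranteeing any exact response.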

[00:11:04] Even GPT-4, which was announced two, three days ago, was in beta testing for about six months. They try to improve the technology and come up with very good answers as much as possible; they keep improving the system. There is some lag, there are some [00:11:20] problems with the system, but it's not only OpenAI and ChatGPT.

[00:11:25] All the different high-tech companies are now looking at these kinds of systems, and I have no doubt that within one year we will have multiple versions of systems like ChatGPT from all of these big companies. We are [00:11:40] going to have open-source versions, and very soon each of us will have an AI agent to help us with our daily routines and tasks.

[00:11:51] Yeah. I see it as something that could help in my work as a writer, maybe just to create a first draft. I wouldn't pass it [00:12:00] off as my own work, but sometimes the first draft can be the hardest. If you had that already, you could go back and tweak it and make it your own; that would be a big help to me.

[00:12:09] And what's wrong with something that helps you do your job more efficiently and better? Yeah, I think it's really effective at a lot of everyday tasks. There are a lot of writing tasks that are [00:12:20] repetitive and not super novel, and there's no reason that a technology like this can't be extremely useful for that.

[00:12:26] I think some of the questions that are starting to emerge around the problems with this technology, or not even problems, but philosophical questions about what the nature of this technology should be, really come down to the more human elements of it.

[00:12:39] [00:12:40] Which is the role that moderation and curation have in the datasets that are used to train the model, right? So there have been very legitimate findings of bias in the data: this chatbot, reproducing from the huge corpus of knowledge it's being trained on, reproduces biases that are very familiar to us, that are pervasive in society, [00:13:00] around race or gender.

[00:13:01] At the same time, we've had these conversations where, you know, some critics accuse the bot of having a left bias or being, quote unquote, woke. But that's been the accusation because there are certain biases that the people creating the tool are trying to weed out and consciously work against. This is a huge job, right?

[00:13:19] This is a [00:13:20] human job, to actually intervene in the data and have it produce results that represent human values. But that's an enormous task, and not something that can just be handed off to machines, right? It's something we have to actively do. Especially because human values are all over the place. Yeah. But it's called a language model; is that what ChatGPT [00:13:40] is? Somebody's feeding all the different sentences we use as human beings into the computer code? Is that how they do it? Yeah, exactly. This technology is called an LLM, a large language model.

[00:13:53] And basically, let's assume that we have an architecture for a brain, and we feed the brain; we train [00:14:00] this brain based on all the data that we have out there. But one of the questions, one of the concerns, is that the data we have available on the web, whatever we have collected so far throughout history, is somehow biased based on what writers had in their mind at the time.

[00:14:15] All of these biases are [00:14:20] somehow integrated into the models, because we are teaching these brains using that data. If we have the architecture for these brains, we can be more specific for our own domain and use only specific data to train them: create a specific brain, a ChatGPT for biologists, or a specific one for those doing research in a particular field of chemistry.

[00:14:37] But one of the things [00:14:40] that is not super clear is the data that is being used to train these models. Another issue is copyright, because again, there might be something publicly available on the [00:15:00] web that does not exactly follow the copyright rules.

[00:15:05] But still, these systems know that content, because they were already trained on the data available on the web. And these questions are actually emerging within the legal system now. I feel like this technology has clearly [00:15:20] outpaced the mechanisms that we have in place to deal with its repercussions and its impact.

[00:15:24] So a couple of examples. The corollary in image making would be Stable Diffusion, a technology for generating images based on text prompts. Recently, Getty Images sued the company behind Stable Diffusion because they were able to prove [00:15:40] that a large number of their images had been incorporated, without consent or permission or payment, into the dataset that was feeding Stable Diffusion's generative model. And similarly, I believe the Wall Street Journal issued a legal statement saying that they don't consent to having their content incorporated into any large language models. How that will hold [00:16:00] up in court, or how they would even prove, if there's no transparency around the data, whether their content has been used or not, is sort of an open question.

[00:16:05] But we're at just the beginnings of these becoming legal discussions. It's a whole new world, isn't it? Yeah. Let me get to some of ChatGPT's questions. It gave me 10 good questions, but I'll ask the [00:16:20] last one, number 10.

[00:16:20] What exciting advancements can we expect to see in the field of conversational AI in the coming years? Something that we saw in the latest version, GPT-4, is that it's multimodal. So instead of only text, instead of writing something down, you can copy and paste an image, and [00:16:40] it'll understand the image and try to figure out what you want.

[00:16:43] For example, if you come up with a sketch for your web browser or for a specific website, it creates the HTML, CSS, and JavaScript code and builds the website for [00:17:00] you. If you are trying to do game programming, you can just provide a sketch and some information, and it creates it for you. And I have no doubt that in a few months we can even upload movies.

[00:17:05] Or animations, and change the characters, come up with our own scripts, and ask it to create them for us. That is one of the most interesting things about the future of this technology. The other thing is [00:17:20] that we are going to have more natural, human-like conversations. Still, there are some ways to fool the system and figure out whether you're talking to an AI-based tool or not.

[00:17:30] But in the near future, I think it will be very hard for us to figure out if that's the case or not. And another thing is [00:17:40] personalization. Right now these tools are created based on a lot of data from everywhere. But imagine if we had a personalized one, based on very personal things.

[00:17:55] I like this kind of ice cream. I like that specific kind of [00:18:00] food. This is my blood pressure. I have this kind of information. If we were able to train specific agents that are personalized, then we would have these agents with us all the time, and in each and every situation we could use them.

[00:18:18] Say I want to go [00:18:20] to the airport; I can ask my agent the best way to get there. Or, how can I use my free time? I'm waiting for half an hour: what is the best use of my time? What is the best podcast released in the last few days that I can listen to, based on my interests? And many more.
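[Editor's note: the multimodal sketch-to-website example Hashemi gives boils down to sending text and an image together in one request. The sketch below is illustrative only; the field names are hypothetical and do not reproduce any provider's actual request schema.]

```python
# A rough sketch of a multimodal chat request: an instruction and an image
# reference bundled into a single user message, as in the GPT-4
# sketch-to-website example above. Field names are illustrative only.

def make_multimodal_message(instruction, image_path):
    """Bundle a text instruction and an image reference into one chat turn."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": instruction},
            {"type": "image", "path": image_path},
        ],
    }

msg = make_multimodal_message(
    "Turn this sketch into an HTML/CSS/JavaScript page.",
    "sketches/homepage.png",
)
```

The design point is that the message `content` becomes a list of typed parts rather than a single string, which is what lets one request mix text with images or, as speculated above, eventually other media.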

[00:18:39] Wow. [00:18:40] The media is loving this subject; you can't open a paper and not read something about it. I saw in the New York Times the other day that a writer gave ChatGPT just an idea for a joke website and said, [00:19:00] just make it, make it using JavaScript and HTML.

[00:19:01] And it came up with a website. It actually produced a website. Exactly. Yeah. It's just incredible what it can do. Instead of having to do it yourself, who wants to spend the time doing that if you can get AI to do it? I was excited to go to a [00:19:20] demo of a VR health technology that was shown at the Citadel a couple weeks ago.

[00:19:25] I was talking to the creators. They're a Brooklyn-based company called Mouth in Flame that created this training, which was actually originally for the military. They were talking to me about their use of AI in their production pipeline. Basically, they said that they had hired all these game [00:19:40] developers to work on creating the simulations that they use for enterprise software and different kinds of training situations.

[00:19:47] And they had completely stopped using programmers and 3D modelers, because they could actually generate all of the 3D content through AI at this point in time. So, yeah. How else will it disrupt the workplace, in your opinion? A lot of jobs that [00:20:00] can be done by AI will go away.

[00:20:01] But there are always new jobs that will come up and be needed, right? Yeah. I kind of wonder if the kind of specialized skill that will be needed is people who know how to write really effective prompts, who know how to work with this as a tool. They'll be the people who are sought after.

[00:20:16] Exactly. Learning these technologies, embracing [00:20:20] them, and figuring out how to use them will be part of the job for all of us, everyone who wants to remain in the job market. For example, we are in tax season. I haven't filed my tax return yet, but you can easily upload all the information you have, your W-2, whatever, to the system, and it can [00:20:40] be your CPA agent and do everything for you; just copy and paste that stuff there.

[00:20:41] Or there is a recent website based on GPT-4 or GPT-3.5 that is called the first AI lawyer. For whatever case or problem you have, say you have a hundred pages of documents, you [00:21:00] just upload them and ask it to resolve the problem or figure out what limitations you might have, instead of paying a lot for a lawyer.

[00:21:08] You can minimize the cost because you already know something; you just have some final questions. Instead of three hours, you can finish your job with the lawyer in 10 minutes, because you [00:21:20] know all the things. You just need some final confirmation to go forward. I believe that some jobs will be lost.

[00:21:29] But at the same time, I'm somehow optimistic that jobs will be displaced rather than fully removed. The ones who used to be content [00:21:40] generators will use these tools and add some level of judgment, or some level of analysis, to their articles, rather than just copying and pasting. Something that these tools still cannot do is common sense, or judgment; [00:22:00] they do not have emotional intelligence yet, something that is very specific to us as human beings. All of these will be components of the new job market: even if we integrate all of these tools, we still need human beings to use them, [00:22:20] to tweak them, and to finalize them for their purpose.

[00:22:24] And to vet the results. Exactly, yes. Can ChatGPT be used for evil? Yes, sure, and not just ChatGPT but all of these new technologies. Fifty years ago we didn't have cell phones, and we didn't have all of our data in one small system. Now [00:22:40] everyone who can intrude into the system would have access to a lot of our personal data.

[00:22:44] The same thing goes for ChatGPT or for any new technology: they all come with some risks and problems, and one of the main efforts in AI is how to avoid them. For example, fake news. If a [00:23:00] lot of articles that are somehow fake news are being used by these systems, then these systems start to produce fake news.

[00:23:08] There are researchers creating AI tools to figure out whether these are malicious activities, whether these are fake news. And if you integrate those AI systems into these [00:23:20] bigger AI systems, they will help us filter the results and come up with better ones.

[00:23:26] Something that we saw in GPT-4 is that they tried to reduce the percentage of fake news, or reduce the chance of malicious activities. But again, all the [00:23:40] time we have this risk, and all the time we have intruders who come up with a new system, a new technology, new prompts to fool the system. It's an ongoing battle.

[00:23:52] And I would say this is very much paralleled in image-generating systems with AI, in the sense that these systems [00:24:00] will only be as ethical as the guardrails that are put in place on them. There have been many complaints about image-generating technologies not having protections in place to prevent users from creating, for example, child pornography or other kinds of images that would be illegal: violent imagery, deepfakes of existing people in situations that could be incriminating, for example.

[00:24:20] Unless we create the tools in such a way that those images can't be produced, it's the same issue with ChatGPT. Can you get instructions for making a bomb from ChatGPT? You can, unless the software is designed in such a way that you can't, right? So we have to anticipate all of these potential risks and then design to prohibit them. And as Navid is saying, there are always [00:24:40] going to be risks of people getting around the guardrails, and we'll have to progressively work to keep these systems safe.

[00:24:44] Exactly. When we talk about job replacement as opposed to job displacement: for example, the content generators, instead of creating the content from scratch, can be fact-checkers. They can verify whether articles are [00:25:00] valid or not, and then come up with some analysis of whether the AI-generated stuff is valid or not.

[00:25:08] Then they are still in the job market, but they are somehow shifting the domain of their work. And something that I really like about ChatGPT and all of these [00:25:20] tools, again, is the boilerplate for email. Say I want to send you an email that I will be 10 minutes late.

[00:25:27] I have to say, "Hello so-and-so, I'm so sorry to tell you about this," and so on. The actual email will be only one or two sentences, but I have to write ten [00:25:40] sentences to have the full structure. All of these tools can help me: I just send one or two sentences, and they create the whole structure for me.

[00:25:52] Yeah, very cool. I think the one thing that I remain skeptical about, or I guess a little worried about, with these technologies is [00:26:00] basically the question of making sure that data is ethically sourced. There's a really famous contemporary artist, Refik Anadol, who is getting a lot of attention right now because he makes these huge, beautiful AI-generated moving images that are like paintings that are constantly shifting.

[00:26:15] He uses thousands, potentially millions, of pictures of [00:26:20] coral, for example, and he'll train a machine on basically producing coral, and then it'll start to produce its own coral and you'll see these beautiful blooming shapes. He does the same thing with flowers. But he's very specific to say that his datasets are all created by him.

[00:26:33] He actually gets the pictures taken or acquires them, so it's all ethically sourced data. When we look at [00:26:40] these large language models, there's a real question, and there isn't transparency, about whether the data has been collected ethically, right? If you write an academic paper, for example, and you publish it with a journal, has it been spidered up by a web bot that collected it and incorporated it into somebody's language model?

[00:26:56] If that's your intellectual property, that's your [00:27:00] labor that went into doing that research. Do you want it then represented, potentially alongside false information, by a generator? So that's the question that comes to mind for me.

[00:27:09] Yeah, garbage in, garbage out, kind of thing. And I was reading an article about patents, and it seems these tools have also learned from the database of [00:27:20] patents; they can generate and produce results for new questions. But no one is really taking care of the copyright for the patent database that's somehow being used to train these systems to generate something new.

[00:27:30] For example, for drug discovery, I read an article saying the system was able to [00:27:40] come up with a specific new drug for a new kind of disease. But again, when they draw on something that is somehow patented or copyrighted, there is no specific paradigm yet, no very clear procedure, to make sure that everything is ethical and they're not breaking [00:28:00] any rules in terms of copyright.

[00:28:03] Yeah. So hopefully our legal systems, and our society in general, will catch up to these conversations, we can only hope, so that we don't just let the genie out of the bottle, but figure out how to proceed with these systems in a way that respects all of the human capabilities that go into these tools.

[00:28:19] Thank [00:28:20] you, Navid and Sarah, for your time and your insights. It was a really interesting conversation. Thank you so much. Thank you for having us.

[00:28:31] Thank you for listening to this episode of Speaking Of... College of Charleston, with today's guests, Sarah Schoemann and Navid [00:28:40] Hashemi. For more episodes, and to read stories about our guests, visit College of Charleston's official news site, The College Today, at today.cofc.edu. You can also find episodes on all the major podcast platforms, including Apple Podcasts, Spotify, and Stitcher.

[00:28:58] This [00:29:00] episode was produced by Amy Stockwell from the Office of Marketing and Communications, with recording and sound engineering by Jesse Cuns from the Division of Information Technology. Thanks again, and we'll see you next [00:29:20] time.
