Things Have Changed

Unveiling AI's Role in Content Creation with Originality AI's John Gillham

Jed Philippe Tabernero Season 22 Episode 6


Have you ever stumbled upon an article or a piece of content online and wondered, "Did someone actually write this, or is it the work of ChatGPT?" In today's world, where content is produced at an incredible pace, it's becoming increasingly difficult to tell the difference, and that's a problem in the age of misinformation.

Think about it: people are getting their news on social media, whether that's X, YouTube, or Facebook! With the advancement of AI, it's hard to tell whether something online is truly authentic. With recent studies showing more than 12% of Google's search results are AI-generated, it's critical to ensure the integrity of the digital content we consume and create.

That's where Originality AI comes in! We're thrilled to host John Gillham, founder and CEO, on Things Have Changed, as he shares how his team is tackling these issues head-on by developing cutting-edge tech to detect AI-generated content. In a short span of time, Originality AI has achieved remarkable results and is the most accurate AI detector on the market for ChatGPT, GPT-4o, Gemini Pro, Claude 3, Llama 3, and more.

So today on Things Have Changed, we'll dive deep into how Originality AI works, its impact on various industries, and why ensuring content authenticity is more important than ever.



Today, The Washington Post and Axios reported that a group of leading tech companies, including Meta, Google, and TikTok, committed to limiting misleading AI content on their platforms.

Jed Tabernero:

Ever scroll through your newsfeed and pause to question whether what you're reading was created by a person or a machine? This question is becoming increasingly relevant as AI-generated content floods our digital spaces, making it harder to distinguish authentic human stories from those fabricated by algorithms.

Is there a concern that misinformation could rear its ugly head, and that it would be difficult for users to discern what's real and what's not?

Jed Tabernero:

Today, we're super excited to have someone who's been thinking about this problem for a while.

John Gillham:

What we were building was a tool to help people that were hiring writers know whether or not they were getting human-written content or AI-generated content. And then, yeah, we ended up launching on Friday, and then basically the next Monday ChatGPT launched.

John Gillham, the CEO of Originality AI, joins us to discuss the technological arms race between content creation and detection. Join us on the Things Have Changed podcast to uncover how Originality AI is shaping the future of digital authenticity.

Shikher Bhandary:

Surfing the web these days, I'm not sure if an article or content that I come across was actually written by a human or by ChatGPT or Claude or any of the many AI chatbots that are available these days. So it's hard to tell the difference, and what makes it a more pressing problem is that we live in this age where things can go viral in a minute, and it could just straight up be wrong, right? We live in the age of misinformation. Content is being produced at an incredible pace and we can't verify its authenticity; that's just a recipe for disaster. People get their information from Twitter, now X, or Facebook. And with that, it's really important for us to find out whether certain pieces of content are truly authentic or not, right? And that's where our incredible guest today comes in. We are super thrilled to have John Gillham, the founder and CEO of Originality AI. John and his team are tackling these issues by developing the technology to actually detect AI-generated content.

John Gillham:

Yeah, thanks. Thanks for having me. And yeah, we didn't intend to build a societally important product, but that's certainly what it's becoming. So yeah, it's good to be here.

Shikher Bhandary:

Yeah, even before jumping into the actual platform, John: you have great experience in this space of content, be it creation, be it marketing. How did it come about that you ended up in this space of actually detecting the originality of the content itself?

John Gillham:

The journey was building. I worked as an engineer and was building side businesses, for 15 years now. A lot of those businesses generated traffic from Google by publishing content on the web. I had built up that business and a portfolio of sites, and then had a content marketing business where we were doing that same service for others, where people would hire us to get optimized content created that would rank well on Google. And we were struggling to be able to communicate with certainty that the writer had created the content versus AI having created the content. Now, ChatGPT changed the world in terms of the number of people that were aware of generative AI, but there were very big, very popular tools that predated ChatGPT that were built off of OpenAI's GPT-3. And so what we were building was a tool to help people that were hiring writers know whether or not they were getting human-written content or AI-generated content. And then, yeah, we ended up launching on the Friday, and then basically the next Monday ChatGPT launched. We obviously had no idea that it was launching, and yeah, it kind of changed things.

Shikher Bhandary:

Talk about product-market-time fit.

John Gillham:

Yeah, when it first came out, we had just spent months training our AI to be able to detect GPT-3 content, and we were like, oh no, is this a new model? But what we've all learned since is that it was just a wrapper, with some protection, around an existing OpenAI LLM. So it didn't actually drop our performance, but yeah, it was frustrating when it first happened. And then we realized that, I don't know, this is a good thing.

Jed Tabernero:

John, could you kind of walk us through an example or a scenario where it's important to understand what is AI-generated versus what's human-generated?

John Gillham:

Yeah, I think it's a complex question that society's going to have to wrestle with over the coming decades: where is it okay, and where is it not okay? A student submitting a piece of their paper: we don't love our detection tool being used within academia, but I think that's a pretty clear one that, no, that's not your work unless you wrote it. A lot of our users are happy to pay a writer $100 or $1,000 an article, whatever the rate might be, but not super happy to find out it was copied and pasted out of ChatGPT. There's just that sort of fairness component. And then most publishers that are publishing content on the web would have a similar view to mine, that publishing AI-generated content introduces a new risk to your website in the eyes of Google. So a lot of publishers want to be the ones that accept that risk, to know where AI has been used, and to have adequate mitigation in place to ensure the other downsides of AI content don't exist there. So those are some of our users. For us as people using the internet, one of the main ones is when we're reading reviews. I think it's pretty clear society is going to have this debate about where it's okay for AI and where it's not okay, but reading a review of baby formula and finding out that it was AI-generated? No one's okay with that. That's one of those examples where it's pretty clear society is never going to be okay with AI-generated product reviews.

Jed Tabernero:

Sorry, I just wanted to clarify: did I hear that correctly, that AI-generated content is just ranked lower? Do you know that from SEOs, or is that something that you verified?

John Gillham:

We have some data that supports that, I think, but it's sort of questionable. I think Google has made it very clear that they are against spam content. And what's clear right now is that not all AI content is spam, but all spam content is AI right now. The result is that Google has taken action against sites: there was the March 5th manual action, where they de-indexed thousands of sites. When we looked at the sites that had been de-indexed, the majority of their content was AI-generated. So it isn't necessarily one-to-one, where AI content equals bad, but it is a risk factor for publishers if they have AI content being mass-produced on their site, especially if they're paying writers. We had some websites come to us that said, our writers don't use AI, but we got a manual action on March 5th. And we were able to look and say: you didn't know your writers were using AI, but they were, and your website has now been de-indexed and your business is destroyed. Unmitigated AI content is a risk for publishers.

Shikher Bhandary:

And this explosion is not going to stop anytime soon, John, right? Because we were reading these articles, and you provided a lot of context, where there are estimates that over 90 percent of new content will be AI-generated over the next two, three years. It's fast, and it's exponential growth from literally two years ago.

John Gillham:

Yeah, it's crazy. I'd say a couple of interesting data points: 97.5 percent of the internet gets zero clicks from Google. Basically 60 percent of the world's traffic flows through the pipes of Google currently (we'll see if that changes with generative AI; it might), and 97.5 percent of the internet doesn't get a single click from Google. So that's an interesting lens: the filter that is Google will filter out a lot of likely AI-generated spam that doesn't deserve to be viewed and doesn't add a lot of value to the world. And in our own studies, and in others, we're tracking the amount of AI content that's showing up in Google, and it is continuing to increase every month. The last update was that 13 percent of the top 20 search results, across a sample of 200,000 results, showed up as suspected AI-generated content, and every month it's increasing.

Shikher Bhandary:

Wow. Okay. So in the same study that I was looking at, it was 2 percent a year ago.

John Gillham:

13%. Okay.

Shikher Bhandary:

We are looking at something that no one really knows how to deal with, because it's so nascent and it's so fast.

John Gillham:

Yeah, so it raises the question, from a Google standpoint: if those search results become overrun with AI, then why go to Google, filter through links, click on an ad, and have to read a bunch of content, versus just going to the AI that probably, at that point, knows something about you, might be native on your device, and would provide a potentially more beneficial answer? So I think it's an interesting challenge for Google, to be this tech-forward, AI-forward company while still trying to deliver search results that are human. It's their golden goose. If it's gone, then what's left?

Jed Tabernero:

I think it's interesting, because the last couple of years we've seen the blow-up of generative AI, right? People have used it indiscriminately on everything. I used it for a wedding invite that I just had to do quite recently, right? And it's interesting because, bringing this specific topic up, when we started doing our research, we started thinking to ourselves: maybe it's not so great to even use a small portion of this AI. We started thinking about why it wouldn't be so great to use AI for this, because initially we were like, oh, this will take care of 90 percent of our content creation problems. But now we're seeing that there is a double-edged sword to that, and that's where this piece comes in. That's why I asked the question initially: is it ranked lower than other things, or does it hit your ranking if you were to publish AI-generated content? I think that's just such an interesting thing, and people are going to start realizing that there are some issues with posting completely AI-generated content.

John Gillham:

100 percent. A lot are, and it still shocks me that some are just learning it now: big publishers that have no AI policy in place, let alone AI controls in place. To your point, this has moved so fast that not all organizations were capable of moving at the pace that it has moved at.

Shikher Bhandary:

Yeah. John, talking about an organization that has moved so quickly: you launched two days before ChatGPT released. Having used your product, it's amazing. It gives me a great understanding of the originality of the content I provided, right? So I'd love to understand the platform that you have built, and maybe peel back the layers a bit as to how it effectively works. You did mention actually training the AI on a previous GPT from OpenAI.

John Gillham:

Yeah, so the first step was, I was fortunate to hire an AI research lead who has now built up a team; he's been phenomenal. The way it works can be kind of unsettling, like a lot of things when it comes to AI: how does it work, why did this document get identified as AI-generated? And the answer is, we don't know, because the model picked up some patterns that are likely too complex for us to think through, certainly at that speed. The way it works, in its simplest form, is it's a classifier that gets trained, at this point, on millions of records of known human content and millions of records of known AI content across all of the top large language models. So whether it be Claude 3 or Gemini Pro, Llama 3, or of course ChatGPT and GPT-4, it gets trained across all those and then starts to learn to tell the difference between what is human and what is AI. We train up the model and then run it against a bunch of our own benchmark tests, and at this point there are a bunch of publicly available benchmark tests that we can run any new model against to see what our accuracy rate is and what our false positive rate is. Those are really the two numbers that our customers care about the most: correctly identifying AI as AI, and correctly identifying human as human.
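
To make that classifier idea concrete, here is a minimal sketch in Python. This is not Originality AI's model; a production detector would be a much larger learned model trained on millions of records, and the placeholder texts, TF-IDF features, and logistic regression here are our own toy assumptions. It only illustrates the workflow John describes: train on labeled human and AI text, then benchmark the two numbers customers care about.

```python
# A toy illustration, NOT Originality AI's actual model. TF-IDF plus
# logistic regression stands in for a production-scale learned detector;
# the goal is just to show the train-then-benchmark workflow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder corpus (made up): label 0 = human-written, 1 = AI-generated.
human_texts = [
    "honestly the game last night was a mess, we left early",
    "my grandmother's soup recipe never measured anything exactly",
    "traffic was brutal so I biked in, soaked but on time",
    "the meeting ran long and nobody read the doc, classic",
]
ai_texts = [
    "In today's fast-paced world, it is essential to consider key factors.",
    "This comprehensive guide will delve into the various aspects involved.",
    "Overall, there are numerous benefits that one should carefully weigh.",
    "In conclusion, leveraging these strategies can unlock significant value.",
]
texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0
)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
preds = clf.predict(X_test)

# Benchmark the two customer-facing metrics: detection rate on AI text
# and false positive rate on human text.
ai_idx = [i for i, y in enumerate(y_test) if y == 1]
human_idx = [i for i, y in enumerate(y_test) if y == 0]
detection_rate = sum(preds[i] == 1 for i in ai_idx) / len(ai_idx)
false_positive_rate = sum(preds[i] == 1 for i in human_idx) / len(human_idx)
print(f"detection rate on AI text:         {detection_rate:.2f}")
print(f"false positive rate on human text: {false_positive_rate:.2f}")
```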

Jed Tabernero:

Yeah, that's actually an insane task if you think about it. I asked AI last night, when I was doing the research: how do you think something like this is built? How do you actually determine the difference between human and AI-generated content? If you think about it, we kind of notice, right? I think me and you at least, Shikher, when we go on LinkedIn, there are some obvious posts that are like, okay, this guy...

Shikher Bhandary:

"I'm thrilled to announce..."

Jed Tabernero:

one's human. That one's human.

Shikher Bhandary:

That was, that is human. Yeah.

Jed Tabernero:

But it was interesting to learn about this concept of stylometry, like a linguistic kind of style. Basically, all the content that was generated before a certain date, before generative AI was popular, was human-generated content, right? For me, I can still understand completely and very clearly what AI writes for me, but I know that there's something off about it, and I can't explain it. And the way I read that research on determining accuracy is that the unknown sense I have that something is AI-generated is basically what was put into your product: inherently, this is what it's looking for. I think that's beautiful.
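
For listeners curious what stylometry looks like in practice, here is a tiny illustrative sketch of classic hand-crafted style features: sentence length, vocabulary diversity, and function-word and punctuation rates. The feature set is our own toy choice, not anything Originality AI has described; modern detectors learn far subtler patterns than these on their own.

```python
# Classic hand-crafted stylometric features; purely illustrative.
import re
from collections import Counter

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "it", "is", "was"}

def stylometric_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(words)
    n_words = max(len(words), 1)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(counts) / n_words,  # vocabulary diversity
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / n_words,
        "comma_rate": text.count(",") / n_words,
    }

print(stylometric_features("I went out. It rained, so I came home and read."))
```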

John Gillham:

Yeah. So you talk about the ability to identify AI content; we think we can identify it, and there have been some interesting studies done on this, and it's pretty shocking. I think as humans we have two cognitive biases that can greatly impact this, because I agree with you, I think I can tell, even though I've seen the studies. A couple of cognitive biases that I think we are all susceptible to: if you were to ask a room who's an above-average driver, 70 to 80 percent of the room puts up their hand that they're an above-average driver. It should be 50 percent, but we have this overconfidence bias. And then there's a pattern recognition bias, where humans are always trying to make sense of the world; wherever there's chaos, we try to recognize a pattern in that chaos. Ask anyone in a casino: "no, I've got a system for this random game of chance." And in tests that looked at the straight ability to tell whether a piece of content was human or AI, with no additional controls (not "here are ten of the student's previous pieces of work and here's their new piece that was AI-generated," where there's a high probability of being able to tell, but just "this might be human, this might be AI"), humans' ability to tell the difference between the two was barely better than the flip of a coin as soon as any adversarial technique was put in place. If it's just straight GPT content with no adversarial technique, just "hey, write X," then it writes in that GPT-4, ChatGPT kind of style...

Shikher Bhandary:

Structured, where no one speaks like that. Yeah.

John Gillham:

Yeah, but if you ask it to write like so-and-so, humans' ability to tell the difference between AI and human is totally out the window. Flip of a coin.

Shikher Bhandary:

Got it. And John, this is just out of my curiosity. It feels like with 4o, and maybe the latest update from Claude as well (I think more on the Claude than on the ChatGPT side), it feels a bit more human. Earlier iterations, like when GPT-3 and 3.5 came out, it was easier to tell, even for someone who's seeing this daily; you could pick up certain cues in the text. What happens in the case where these models get to the point that they jumble up aspects of human cues and AI structure to give you this mess that is so hard to figure out? That's probably one of the biggest challenges that your team faces, right?

John Gillham:

For sure, it's one of our biggest concerns. The data says otherwise right now, which is interesting. I think you're exactly right that when ChatGPT first came out, it was hard to make it write differently than its sort of forced style. But then when GPT-4 came out, you could say, write a Nobel Peace Prize acceptance speech in the style of Dave Chappelle, and no one would have thought that written piece of content had been AI-generated. So I think humans' ability to tell went out the window, because the diversity of what it could create, and the directions that you could provide it, became so significant. What we have seen from our own model, whatever our own AI is picking up on: a new model would come out, our accuracy rate would drop from, let's say, 98 percent down to 85 percent, a big, unfortunate drop; we'd have to train up on that new model, and then we would close that gap. What we have seen with the latest models is basically no drop-off on 4o; Claude 2 to Claude 3 was a minimal drop-off; Gemini Pro, no drop-off. A lot of these models are trained on the same data, the Common Crawl of the web, the same hardware, the same foundational technology around transformers, and we're seeing a diminishing gap get opened up with every new model over our capability of detection. That's what we're seeing right now. It's pretty hard to bet against the exponential growth, the exponential amounts of money being poured into the space, and when GPT-10 comes out, are we going to be able to accurately detect it? I don't know. But I'd say our gap to new models has been closing with our current detection capability, and I think that's leading us to be on the side of: okay, we're seeing a little bit of a plateauing right now around model capabilities and the raw intelligence of these models. The jump from GPT-2 to GPT-3 felt bigger than the jump from GPT-3 to GPT-4. Will we see the same sort of perceived diminished jump on the next models? I don't know.

Jed Tabernero:

Very interesting. So, using AI to detect AI. We started looking into the other platforms in this space, just to understand, hey, what's the competition looking like? There are other companies probably doing a little bit of the same thing. What would you say sets yours apart from these other companies? There are a lot of cool features I want to go over later, but I'd love to hear, from your perspective, what sets you apart from the other models.

John Gillham:

Yeah, I think it's two things, and they're related. We made the decision pretty quickly that the world we understood was digital marketing content being published on the web. That was the world we understood, where we felt we had an unfair advantage, so that's who we started building for. We build for, basically, the copy editors that exist in any organization, who are getting a piece of content and then need to publish that piece of content. A copy editor's toolkit, an AI-enabled toolkit: that's what we're building, and the core product is AI detection. Because of who we're building for, it's a far more B2B than B2C play. Our free tier is far more limited, and our pricing doesn't need to be low, because we're not competing for students. Because of that, we can give more horsepower to our detection team and say, run harder on the compute. We can have a false-positive-to-detection trade-off that is tuned to our users' use case, without needing to be a super-general detector, and our data set gets tuned to our users as opposed to being a general-purpose detector. So I think that decision of who we're building for has led to a bunch of other tweaks along the way with the model, which has led to it repeatedly showing up as the most accurate AI detector. Our AI research team is incredibly smart, but we're going up against teams that are also incredibly smart. I think it's the combo of that AI research team being incredibly good, plus a bunch of decisions to align them with building the best detector for our users and give them the tools to do it. So the most accurate detector, consistently proven by studies, is one, and the second is being very clear on who we're building for.

Shikher Bhandary:

Big kudos to the fact that y'all are the most accurate AI detection tool on the market right now; that is in itself incredible. So, the stakeholders right now, the customers that y'all are actively working with hand in hand: not just copy editors, like you mentioned, but also marketing companies, maybe educational institutions. How are you thinking through the actual customer cohorts to target and build those relationships with?

John Gillham:

Yeah, so I think we're the only tool that actively lists that we're not for academia.

Shikher Bhandary:

Okay.

John Gillham:

We do have a lot of academia that uses us, because we are the most accurate, but the amount of tooling that you need to build around academia, to be confident it's being used in the right way, is more than we want to take on. It's a big problem; it's just not the problem we're focused on. So we know academia is using us, and we don't love it being used for academia. False positives do happen, and the amount of tooling you build to deal with that within an academic setting is different from what we build for writers, where we can have a free Chrome extension and some other tooling that helps deal with false positives in the writer-editor relationship. So that's the one unique condition for us. The rest of our users mostly fall into the digital marketing world, the web publishing world, whatever we want to call it: getting content from a writer, reviewing the content, publishing it on the web. Any company where somebody functions as a copy editor, our tool can be useful, and that can be incredibly small companies, one-person operations with one writer for their one website, and it can be incredibly large organizations that have thousands of writers and hundreds of editors. We're really focused on that role, or people functioning in that role.

Shikher Bhandary:

Yeah. And it probably fits better with what your incredible expertise is anyway, right? Because you've built, what is it, three companies within the content space. So you probably know exactly the workflows, and exactly where in that workflow those individuals are making these decisions.

John Gillham:

For sure. I would have agreed with you wholeheartedly a year and a half ago. I think after enough conversations, I've been pretty humbled: oh, maybe I didn't understand all parts of this space as well as I thought I had. The use case for somebody in a large organization will be different than for a marketing agency, which will be different than for a single website. So in general, yes, but there's still lots to learn on exactly how everyone's workflow works, even though it's a similar function. It is definitely the space that we understand the best.

Jed Tabernero:

Is there a specific feature that you're most excited about, John, of what you're providing today to the B2C or B2B customers that you have?

John Gillham:

This will be a weird answer: it's the one I'm most disappointed in, but also the one I'm most excited for. We built a fact checker, a heavily RAG-enabled (retrieval-augmented generation) fact checker, super intensive in terms of going out and finding the information for every statement of fact, doing retrieval, laying it in, and overlaying it with an LLM to then provide an answer on: is this factual or not? It's not very good yet. But we're really excited for it; it's still in beta. It provides an aid, in terms of research, to help people in the process of fact-checking when they get a piece of content. I think hallucinations, and the factual accuracy of what LLMs output, is a pretty massive problem, and one that is hard to solve by just enriching and increasing the data, because the creative nature of these models makes it really, really hard to keep them within parameters and factual accuracy, even with a ton of constraints put upon them. So that's the one I'm excited for, because I think it's solvable eventually. We're keeping a really close eye on when this added effort that we can inject, in terms of RAG and understanding the web and trusted sources, can be matched up with an LLM that will provide a level of fact-checking that we can look at and be 99-percent-plus confident is accurate. We're not there yet.
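
For readers unfamiliar with the pattern, here is a rough Python sketch of the shape of such a RAG-based fact-checking loop: extract claims, retrieve evidence, and overlay an LLM judgment. Everything here is a hypothetical stand-in (the `search_web` and `call_llm` stubs are ours, not Originality AI's pipeline or any real API), with canned behavior just so the sketch runs end to end.

```python
# The shape of a RAG-style fact-checking loop; all stubs are hypothetical.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    supported: bool
    sources: list

def extract_claims(text: str) -> list:
    # Naive stand-in: treat each sentence as a checkable statement of fact.
    return [s.strip() for s in text.split(".") if s.strip()]

def search_web(query: str) -> list:
    # Hypothetical retrieval step; a real system would query trusted sources.
    return ["The Eiffel Tower is located in Paris, France (completed 1889)."]

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call, stubbed to always answer YES so the sketch runs;
    # a real implementation would send the prompt to a model API and parse it.
    return "YES"

def fact_check(text: str) -> list:
    verdicts = []
    for claim in extract_claims(text):
        passages = search_web(claim)  # gather evidence for this claim
        prompt = ("Given these passages:\n" + "\n".join(passages) +
                  f"\n\nIs this claim supported? Answer YES or NO: {claim}")
        answer = call_llm(prompt)     # the LLM renders the verdict
        verdicts.append(Verdict(claim, answer.strip().upper() == "YES", passages))
    return verdicts

for v in fact_check("The Eiffel Tower is in Paris. It was completed in 1889."):
    print(v)
```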

Jed Tabernero:

I'll just share what my favorite feature right now is: it's the readability piece, how cleanly it comes out when you put something in. If you guys have used other tools like Grammarly, it's in the same light. But right underneath the output, first of all, you color all of the unreadable sentences, the long sentences. It just basically told me how shit of a writer I am. But it's really...

Shikher Bhandary:

I've been telling you that for like two years, dude.

Jed Tabernero:

It'll tell you something like, you've got over 20 syllables in this sentence, right? That's difficult to read, and that's probably difficult to capture, and you don't realize these things when you're writing stuff. So I can see how it was built for those content marketers as well, because, of course, me and Shikher have this as a passion project, and this is one of the things that we care about the most. So, very interesting. You also have a feature on paraphrasing, which I think was difficult for me to understand, but that's trained on something, right? Trained on a tool that does the paraphrasing for you?

John Gillham:

Yeah. One of the most fun aspects of this role has been the cat-and-mouse game, the constant battles since launching the detector. I was too dumb to listen to the feedback that we got, but the feedback we got from day one was: is there a button that I can just press that will make it pass the detection? And I'm like, no, we can't build that; our tool would be useless with that. Which is true. But I got that request so many times that I should have seen coming what came next, which is that now there are all of these tools that attempt to bypass us. One of the most common ways, especially early on, and it still works for a lot of other detectors, was to use a paraphrasing tool, which produces a new pattern. You can create content using an LLM and then remove, I won't say watermarks, that's not the right term because it has a whole other meaning, but remove the recognizable pattern by paraphrasing. That was one of the earliest methods of trying to bypass detection, and there have been a bunch more since. Anyway, it's fun. We have a red team and a blue team. The red team is always trying to beat our detector, finding out what tools are available to beat our detector, and then we build a data set to try to learn against it.
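
Here is a simplified Python sketch of what that red-team/blue-team loop could look like: paraphrase known AI text to try to slip past the current detector, then fold the successful evasions back into training data. The `ToyDetector` class and `paraphrase` function are hypothetical stand-ins we made up for illustration, not Originality AI's internals.

```python
# A simplified red-team/blue-team loop; everything here is a made-up stub.
from dataclasses import dataclass, field

@dataclass
class ToyDetector:
    training_data: list = field(default_factory=list)  # (text, label) pairs

    def predict_is_ai(self, text: str) -> bool:
        # Stand-in heuristic so the sketch runs; a real detector is a
        # trained classifier, not a keyword check.
        return "delve" in text.lower()

    def retrain(self) -> None:
        pass  # a real system would refit the model on training_data here

def paraphrase(text: str) -> str:
    # Stand-in for a paraphrasing tool that rewrites text to dodge detection.
    return text.replace("delve", "dig")

detector = ToyDetector()
ai_samples = ["Let us delve into the topic.", "We delve deeper here."]

# Red team: find paraphrased AI samples that evade the current detector.
evasions = []
for text in ai_samples:
    attacked = paraphrase(text)
    if not detector.predict_is_ai(attacked):
        evasions.append(attacked)  # slipped past: valuable training data

# Blue team: fold evasions back in as labeled AI examples, then retrain.
detector.training_data += [(t, 1) for t in evasions]  # label 1 = AI
detector.retrain()
print(f"collected {len(evasions)} evading samples for retraining")
```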

Jed Tabernero:

I love that that's part of the culture: there's a red team and a blue team.

John Gillham:

Yeah, it's unique and fun, for sure.

Jed Tabernero:

No, it makes it a lot more fun. When I saw these tools, thinking about somebody like myself, who's in this space where we're writing a lot of stuff, talking about a lot of things and then transcribing, then writing a lot more, I thought to myself: this would be really dope to integrate with some of the native tools that we already have, right? Some of the workflows that we might have in our space, to just say, when we write something, make sure it goes through this test to ensure that it's readable, so that my shit writing doesn't get published. Things like that. A lot of people are now publishing online content, as we said, and using AI to do so, so it's just really useful. Just a question for you: any integrations in the future, anybody courting you to become part of their tool, et cetera? We'd love to hear about any future plans like that.

John Gillham:

Yeah. So we have a Chrome extension that is tightly integrated into the Google Docs workflow, though not so much from a creation standpoint. To some extent, what Grammarly is to writers, we are aiming to be to editors. So if there are integrations that make sense under that lens, then yes. I don't think we will aim to be a writing aid; the level of integrations would increase if we were focused on being a writing aid. With our current focus, we're not opposed to integrations, but they've got to make sense for that use case of a copy editor. And in those cases, like I said, we have a Chrome extension that is very helpful for people to understand the originality and authenticity of a Google document.

Shikher Bhandary:

That's great. The way the media, or even the folks in the industry, compare models is through a bunch of technical jargon, right? Tokens, parameters: oh, this is this size, this is that size. So, specifically for your product, does it matter how advanced a model is, how many tokens or parameters it has, when detecting something that was generated by a model with those extraordinary numbers of tokens or parameters? Is that something you're thinking through?

John Gillham:

It's similar to that earlier conversation: we're seeing this sort of diminishing

Shikher Bhandary:

Okay.

John Gillham:

gap showing up with every new model that comes out. You know, there's exponential growth of parameters, and the output is better in some ways, but our capability of detecting it doesn't drop off. I'm hesitant to project out what's coming in this world, but I can say what's happened, history being the last two years. What we've seen is that this exponential growth of parameters, model sizes, training costs, and data consumption has not led to these LLMs being able to create content that escapes detection. Whatever these detectors are capable of picking up (ours is the most accurate, but there are other detectors that are decently accurate) is staying in place right now. Where it's been interesting, and the biggest gap that has been exposed: there are all these different criteria for these models, and some of the open-source models, even though they might have a smaller parameter count and a smaller training size, have produced some interesting variability in accuracy. Mistral in particular did. Again, it was easy enough to close, but that was one of ours, even though most open-source models up to that point had not been a challenge; they'd been quite easy to detect.

Shikher Bhandary:

It threw a wrench into things. Yeah.

John Gillham:

Yeah, yeah. So that was interesting.

Shikher Bhandary:

Got it. Yeah, I was just thinking, because the talk of the town is LLMs, SLMs, action models, and things like that, and I was just wondering where in that whole category AI detection for those LLMs actually sits. But it's great that, now that you're at 4o and whatever comes from here, it's still accurate on a high-percentage basis.

John Gillham:

Yes, we're the most accurate we have ever been on the latest model; our gap has been closing. This could age extremely poorly, right? GPT-5 could come out and make detection totally impossible. Yeah, we'll see.

Shikher Bhandary:

You're doing your best. You're doing your best.

Jed Tabernero:

Yeah. Can you imagine if GPT-5 came out and its entire goal was to consume AI detection software? Oh my god.

John Gillham:

You know, I just listened to Mira Murati, the CTO at OpenAI, talking about detection, and I think I understand why their detector failed. They had launched their own detector, but it was given certain constraints: it had to be free, and it had to have a super low false positive rate. The result was pretty much useless at detection. So, viewed with that lens, I understood why: given the criteria that would have been provided to their team to build a detector, they would have had to fail. And I think they would want their content to continue to be capable of being detected, because it helps to decrease the societal harm that their product is capable of producing; at least, those are the words that they're saying. They would love to have watermarking. I think watermarking will never be a solution within text. But yeah, it's interesting to try to read the tea leaves of what OpenAI truly cares about moving forward.

Jed Tabernero:

Interesting point on the societal risk. We touched on it in the beginning, right? To say, hey, what's the scenario where this actually might be hurtful? And we looked into some examples, one of which was a book on Amazon that had very questionable information about foraging mushrooms. That was your bullet, right, Shikher?

Shikher Bhandary:

Yeah, and it was actually a link that John shared. The baby formula stuff was also there, where now people are questioning it.

Jed Tabernero:

Because you don't think about these as a risk, right? And I'll tell you why this specific example was so funny to me: when ChatGPT first came out, I messaged Shikher and said, look, I found another way to make money. I'm going to make a book, and I'm going to have ChatGPT write me every chapter. And it was just interesting to me that the example that was provided was a book by...

Shikher Bhandary:

Literally someone who thought about that.

Jed Tabernero:

AI, yeah. Literally someone who thought about that, and then put it on there and sold it, probably made a little bit of money out of it. People don't think about that societal cost when it comes to these super useful things for humans. So the way that I think about Originality AI in general is as guardrails to these really crazy innovations. That's why I think it's so interesting that we covered, just before this, a company highlighting the power of what AI can do and AI agents. We talked a lot about AI agents, and we've been talking about a lot of the positives of artificial intelligence. And I appreciate that for this episode we're actually able to step back and say, look, this is what we're doing with AI, actually using it to reduce societal harm, which is pretty awesome. Have people who are talking to you about Originality AI over-indexed on the piece that you are reducing societal harm?

John Gillham:

I think there is a societal importance to the company that I didn't initially set out to build, and it is being used in ways that I hope are reducing harm. Again, a lot of our focus is on the specific vertical we're in, and in that vertical, I think there's a significant misunderstanding. Right now we have writers who need to use our platform to make sure their content passes as human, a problem that they never experienced before: they aren't using AI, and we have false positives. There's a harm that comes from that, because there are writers that aren't getting paid because of a false positive. What I think is being missed is that, short of having some AI detection in that workflow, the pool of competing writers goes up to basically the whole world population that's connected to the internet, because for a good chunk of writers, ChatGPT writes better than they do. It writes better than I do. Sounds like better than you, better than yourself. But...

Jed Tabernero:

Just me specifically.

John Gillham:

Yeah. So if I made my livelihood as a writer, AI detection, although the occasional false positive can be harmful, is at least defending that industry from being totally wiped out by ChatGPT and other LLMs.

Shikher Bhandary:

There's a lens, or a view, of: hey, if this is good for the consumer, why should they care? And this is why they should actually care. Because people are just going to say, hey, content is going to be ingested regardless of whether it's from an actual human or from AI; if it's good content, what's the harm, right? And there's actually a harm here, because it just leads to other things.

John Gillham:

Yeah, exactly.

Shikher Bhandary:

Just wrapping up, John, this was fantastic. When we have such guests on, founders, academics, we give them the stage to give a shout-out. Maybe it's for a team, maybe it's for a new product release, maybe it's fundraising too, because we have a lot of founders that have actually used our platform to connect with the right VCs and such. We'd love to give you the stage.

John Gillham:

Yeah, sounds good. I'd say, if anyone is working in an organization that has people functioning as a copy editor and is trying to wrestle with these questions that we wrestled with today, on what is allowable and not allowable, and whether or not the risks of using AI are adequately managed in your organization, specifically for us within the world of content marketing: we're happy to chat, and happy to help people think through the appropriate uses of it, and the appropriate use of Originality for mitigating those risks.

Jed Tabernero:

Awesome. And do people still get free tokens if they get the Chrome extension?

John Gillham:

Yeah. So if you sign up, you get 50 free credits. We have the free tool that you can use, which is super limited, and then you can get 50 free credits for the premium tool when you confirm your sign-up with the Chrome extension.

Jed Tabernero:

Sweet. Well, this was super awesome, John. We learned a lot even just doing the research, honestly, but talking to you was a lot better. Really appreciate your time, and thanks for coming on Things Have Changed, man.

John Gillham:

Yeah, thanks for having me. Fun conversation.

Thanks for tuning into today's episode of the Things Have Changed podcast. We hope you found our discussion with John Gillham enlightening, and that it sparked new thoughts about the digital content we interact with daily. Remember: in a world where technology continually evolves, staying informed is key to navigating the complexities of digital authenticity, and that's what Originality AI can help you with. Cheers, and as always, stay curious. The views and opinions expressed in this podcast are those of the guests and do not necessarily reflect the official policy or position of the Things Have Changed podcast or its affiliates. The content provided is for informational purposes only and should not be taken as professional advice. Listener discretion is advised.
