Creative AI

AI art

So you might have seen AI in a bunch of forms all over social media lately, like I have, whether it's making AI art, or doing AI portraits, or AI chatbots having conversations, writing poems, and all kinds of crazy stuff like that.

AI algorithms and machine learning models are capable of some truly impressive feats. But when it comes to creating content online, there are some fundamental reasons why AI can't replace human creators. First, let's define what we mean by an online creator.

At its core, being an online creator is a creative process, right? So it involves coming up with ideas, developing content, publishing it online, engaging with an audience. And this process requires imagination, and creativity, and a human perspective. On the other hand, AI is a machine learning technology. It's not capable of imagination or creativity.

It doesn't have a human perspective. Instead, it's a tool designed to process data and perform specific tasks. In conclusion, AI might be able to perform some impressive tasks, but it can't replace online creators because it's not capable of imagination, creativity, or a human perspective. It's a tool, not a creator. You wouldn't wanna read through an entire article created by an AI, would you? Except you just did, because every word up to this point came directly from asking OpenAI's AI chatbot, called ChatGPT, to write a script for an article on why AI can't replace online creators. And I simply recited it.

It's fascinating though, isn't it? I did an article earlier this year on DALL-E, another project by OpenAI, where you input a text prompt and it spits out a realistic, high-resolution, unique new art piece in whatever style you want. And the art pieces are surprisingly detailed and realistic, and accurate to the words you gave it.

It's such a powerful tool. And now this other tool by OpenAI is also going viral, and it's more along the lines of a robot you can talk to, a chatbot. It's called ChatGPT, and it's capable of holding a conversation back and forth about almost anything. So you can kinda have a normal conversation with it right now, but the types of things people are asking of it are getting more and more complex. You can ask it for some facts, or you can ask it for the summary of a book, or you can have it write a poem for you, or ask it to find an error in some code, or, clearly, just ask it to write a whole script for an article. It is incredibly impressive what it's been able to do, just kinda drawing from the database of all of human knowledge and then having intricate, detailed, nuanced conversations with people on a variety of topics. So here's my actual, real, human take on the emergence of these new AI tools.

Two things. One is that it really is kind of amazing that we're living through this time right now, where we're able to see these tools evolve in front of our very eyes and get better and better. But two is that that's all it is: a tool. In 2022, that's how I see it. It's just a really impressive tool. So I think the ideal use of this stuff, especially as a creator like me, is not to take my job, but to use it as a creative tool to brainstorm earlier in the process, and then let me put my human touch on top of it later. That's literally how I plan to use it. So you can ask ChatGPT to help brainstorm article ideas, or even titles for these articles, and that's what it'll do.

But then at the end of the day it'll be my human judgment that decides what actually gets published. So kinda like how you might already use the AI subject selection tool in Photoshop, but then refine the edges and the selection yourself. Or you might use the AI sharpening tool or the AI enhance tool in Pixelmator, but then go in and do the rest of the edits to really match your style.
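(If you're curious what that brainstorm-then-curate loop looks like outside the chat window, here's a rough sketch using OpenAI's Python library. It's only an illustration: the model name, prompt, and API key are placeholder assumptions rather than anything from my actual workflow, and ChatGPT itself didn't have a public API at the time, so this goes through the plain text completion endpoint instead.)

```python
# A sketch of the brainstorm-then-curate loop, using the pre-1.0 "openai" package.
# Model name, prompt, and key are placeholders, not a record of a real session.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model; swap in whatever is current
    prompt="Brainstorm ten title ideas for an article about why AI can't replace online creators.",
    max_tokens=256,
    temperature=0.8,  # a bit of randomness helps brainstorming
)

# The machine's half: a raw list of ideas.
ideas = [line.strip() for line in response.choices[0].text.splitlines() if line.strip()]

# The human's half: read them, keep what fits your voice, toss the rest.
for idea in ideas:
    print(idea)
```

The whole point is that last step, a person reading the list and deciding what's actually worth making.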

So, it's the beginning of the process.

This new stuff we're seeing is just the next level of that. The only difference here is this stuff is a much more general AI, and it is what we call a generative AI, meaning it creates things seemingly from scratch. So I think there will be people who ask ChatGPT for the summary of a book. And it can spit out an answer super fast, and you can use that as inspiration for your own writeup.

I think there will be college students that use it to brainstorm an essay. It won't be able to actually spit out a finished essay for you at this point, but it is a pretty damn good start. Clearly this is an amazing, never-before-seen tool, and it's the start of something huge. I'll even continue to ask it for article script ideas, why not? But just keep in mind, when I did, it did say at the bottom that this should be a starting point for an article. It didn't do any alliterations at the end like I usually do, and it did get some facts wrong in the longer full version, as you can see. And that actually brings me to the dangers of somethin' like this. Of course, with any massively impactful new technology you have to at least consider this type of stuff. And really, two things have come to my mind at the very top, which are, one, the error rate, and two, the complexity of credit.

So I asked ChatGPT to write a script for a review of the iPhone 14 Pro, and it made one. And it's actually pretty positive. It goes over the specs, it calls the display bright, and vibrant, and sharp, "perfect for watching movies and playing games, or browsing the web." And it calls it "incredibly fast and responsive." I think that's true. And it says, "The standout new feature of this year is the camera system." I would agree. But then it says it has a 12 megapixel ultra-wide, a 12 megapixel wide, and a 12 megapixel telephoto.

But as we know here on the channel, the iPhone 14 Pro was the first one to actually have that 48 megapixel main camera. ChatGPT was just wrong here. And it also shouted out the four colors: silver, gold, graphite, and Pacific blue. Except those were the iPhone 12 Pro's colors; this year there's a space black and a new purple. So ChatGPT has a habit of getting at least one thing wrong every time you ask it something, or at least every time I asked it something. Now, on the whole six-paragraph article script, it's actually pretty impressive that it got 90% of it right. But I also asked the chatbot something much simpler: what's the fastest sea mammal on Earth? Just curious.

And ChatGPT said, "The fastest sea mammal on Earth is the peregrine falcon. Although it is not a sea mammal, it is capable of reaching speeds of over 240 miles an hour while diving, making it the fastest animal in the world. Other fast sea mammals include the sailfish, which can swim at speeds of up to 68 miles per hour, and the orca, which can reach speeds of up to 34 miles per hour." Now, it's been a while since I've been in school, but I'm pretty sure only one of those was a sea mammal. And to be fair, it was the correct answer.

It did include the fact that an orca goes 34 miles an hour, which is the right thing to say, but clearly you can't rely on this thing to be 100% factually accurate.

It's kinda the same way with DALL-E, there's also roughly a 90% accuracy rate, but it feels more and more impressive the more complex your prompts are.

Like if you ask DALL-E for a picture of a cat, okay, that's pretty easy, so it is kinda jarring when it messes up some parts of something that seems so easy and obvious. But when you ask for a cat wearing a rocket booster, jumping over a man watering his garden in space, it's like, okay, it's kind of amazing what it's able to generate from scratch to match the description. And then it's also not shocking when maybe it gets one or two of those things wrong, kinda just like that longer essay with a few incorrect facts. I expect these error rates to go down over time, that's kinda the whole point of these AI models advancing, but that's somethin' I'll keep an eye on.

The other thing though is credit. And this is something that you may have seen pop up a little bit on social media lately, which is that AI steals art without consent. Here's what they mean by that.

So the number one app in the entire App Store right now is something called Lensa AI by Prisma Labs. You might have seen some posts on your timeline, it's kinda blown up. And the basic premise is you pay a few bucks and you upload a bunch of real photos of your face, give it a few minutes, and the black box of AI inside will spit out a bunch of cool avatar characters that look like you in a bunch of different situations and as a bunch of different characters. Some of them are much better than others. I feel like it's kinda caught fire lately, 'cause most people don't typically have a bunch of cool art made of them, so it's kinda neat to get to see that type of thing. But there's some other companies jumpin' on this too. Avatar AI is another one. So you are, technically, consenting to uploading your own face to be used to train the models that put you in these images.

But do you know who's not consenting to have their art used for this type of stuff? A lot of the artists who made the art that's being fed in to inspire these AI images, the backgrounds, the materials, the line work, the styling, the framing, et cetera. Here's somethin' to keep an eye on. You know how a lot of artists will sign the bottom right-hand corner of their drawing or their painting when they're done? Well, one of the telltale signs of potentially copyrighted art being used by these AI models without permission is that a lot of people are getting images back from this app with mangled recreations of a bunch of different signatures, because clearly many of the source images that went into it had signatures at the bottom.

That is wild. So the unanswered question right now is, how do you give credit to the artist whose work is being fed into the machine that is creating AI art? If I were to just ask DALL-E for a picture of a cat, it could easily spit out a brand new generic image of a photorealistic cat, because it's learned through its models how to reproduce, through diffusion, what an image of a cat might look like, inspired by, theoretically, any image of a cat on the internet that OpenAI has pulled in.

Actually, technically it's learned from the entire dataset, not just the images of cats, but basically I don't think any artist would get too mad at that. But you can also ask DALL-E for a picture of a cat in the style of Claude Monet, and then it becomes much more clear what source material is making it through to the final output. And if I were Monet and I was still alive, I probably wouldn't be too happy with this.
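(To make that difference concrete, here's roughly what those two prompts look like through OpenAI's image endpoint in the Python library. Again, just a sketch: the API key, size, and exact parameters are placeholder assumptions, not a record of what I actually ran.)

```python
# A sketch comparing a generic prompt with a style prompt, using the pre-1.0 "openai" package.
# Key, size, and parameters are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Generic prompt: the model can lean on basically every cat image it has ever seen.
generic = openai.Image.create(prompt="a photo of a cat", n=1, size="1024x1024")

# Style prompt: now one specific artist's work is clearly shaping the output.
styled = openai.Image.create(
    prompt="a painting of a cat in the style of Claude Monet",
    n=1,
    size="1024x1024",
)

print(generic["data"][0]["url"])
print(styled["data"][0]["url"])
```

Same tool, same call, but the second prompt is the one where the credit question gets uncomfortable.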

Now, I'm not a copyright lawyer, so I'm not even gonna try to get into what counts as transformative work and what counts as copyright infringement. But the bottom line is, we don't really know the exact totality of where these AI models are scraping from. There is some general description sometimes, if you dig into it, about publicly available images and licensed content, but there are also some huge databases. There's something called Common Crawl that scrapes huge amounts of the internet into a publicly available free archive that anyone can use, and that's a big part of what the LAION-5B image dataset is built from. And again, not a copyright lawyer, but to me this kinda feels like a bit of a loophole, doesn't it? Where technically Common Crawl isn't profiting from anything, they're doing the scraping of billions and billions of things and putting it all in one place, it's available for free, and then others can decide what to do about the legal stuff.

So OpenAI, they were using this data set, and they were initially doing all this stuff for free, but I think now it's 15 bucks for a set of 115 images, somethin' like that. But especially the ones you fed your face into, the Lensa one, the Avatar AI one, they're just straight up charging people.

They are making money from data sets that were crawled for free.

Think about it this way, here's a simple analogy: if you take somebody's copyrighted song and use it in your own work without permission, UMG is gonna be comin' after you in two seconds flat. But in this world of AI art, which is so new, we kinda don't have an answer to that yet. There is no precedent set, legally or culturally. So at the beginning it felt like the biggest question was, how do we define art? That's a crazy question, but now it feels like the more interesting question actually is, what is inspiration exactly? How do we define inspiration? When a human draws something new, of course it's a unique expression entirely their own, but of course they were also inspired by previous drawings that they might have seen in their life.

Matter of fact, they are technically inspired by every moment of their life leading up to the point where the pen touches the paper. And so now AI art is basically just speedrunning inspiration. It's just dumping all of recorded human history into a black box and then making something from it. Or maybe just everything that's in data set LAION-5B, which includes a ton of my own work and thumbnails and images, by the way.

But at the end of the day, if I'm being an optimist, which I try to be, I hope this makes us appreciate human created art more for sure. But we gotta keep an eye on all these unanswered questions, 'cause there's a lot of 'em. And until then, let the robots rehearse the revolution. You know what's not an unanswered question though?