They’re trained on technical material too.
Art isn’t work, it’s speech. It’s part of the human condition. Art is useless, said Wilde. Art is for art’s sake—that is, for beauty’s sake.
I do not make art, I just post it here on lemmy. I’d be OK with that. People freely create, copy, and iterate on memes, and they are the greatest cultural touchstones we have. First and foremost, people create because they have something to say.
People already make memes and mods for free. Humans are a social species and will continue to create and share things until the end of time. Making money off of creation is a privilege for only a tiny few.
You keep moving the goal posts and putting words in my mouth. I never said you can do new things out of nothing. Nothing I mentioned is approaching, equaling, or exceeding the effort of training a model.
You haven’t answered a single one of my questions, and you are not arguing in good faith. We’re done here. I can’t say it’s been a pleasure.
Do you have any examples of how they fail? There are plenty of ways to explain new concepts to models.
https://arxiv.org/abs/2404.19427 https://arxiv.org/abs/2406.11643 https://arxiv.org/abs/2403.12962 https://arxiv.org/abs/2404.06425 https://arxiv.org/abs/2403.18922 https://arxiv.org/abs/2406.01300
What kind of creativity are you talking about then? I’ve also never heard of a bloated model. Which models are bloated?
But at what point does that guidance just become the dataset you removed from the training data?
The whole point is that it didn’t know the concepts beforehand, and no, it doesn’t become the dataset. Patterns observed in the training data are encoded into the model’s weights during training; after that, the weights are locked in and the dataset is never consulted again.
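As a toy illustration of that point (plain Python, made-up numbers, nothing like the scale or math of an actual diffusion model): the data shapes the weights during training, then the data can be thrown away entirely and inference uses only the weights.

```python
def train(data, epochs=2000, lr=0.01):
    """Fit y = w*x + b by gradient descent; returns only the weights."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b  # the "locked-in" weights

data = [(1, 3), (2, 5), (3, 7)]   # underlying rule: y = 2x + 1
w, b = train(data)
del data                           # the dataset is gone for good

predict = lambda x: w * x + b      # inference touches only w and b
print(round(predict(10)))          # prints 21
```

The model never stores the three training pairs themselves, only the two numbers distilled from them, which is the sense in which the weights aren’t the dataset.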
To get it to run Doom, they used Doom.
To realize a new genre, you’ll “just” have to make that game the old-fashioned way, first.
Or you could train a more general model. These things happen in steps, research is a process.
There are more forms of guidance than just raw words. Just off the top of my head, there’s inpainting, outpainting, controlnets, prompt editing, and embeddings. The researchers who pulled this off definitely didn’t do it with text prompts.
I mean, you’ve never seen a purple elephant with a tennis racket. None of that exists in the data set, since elephants are neither purple nor tennis players. Exposure to all the individual elements allows for the generation of concepts outside the existing data, even though they don’t exist in reality or in the data set.
You should read these two articles from Cory Doctorow. I think they’ll help clear up some things for you.
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
I think it’s really disingenuous to mention the DeviantArt/Midjourney/Runway AI/Stability AI lawsuit without talking about how most of the infringement claims were dismissed by the judge.
Damn, this article is so biased.
It was a different word when this show aired. https://youtu.be/rMoDslz0EtI
Check out Civitai for Stable Diffusion models. I’m not quite sure which model they are using, but you may be able to find something on there.
Also, there’s a Stable Diffusion community at !stable_diffusion@lemmy.dbzer0.com. If you make a thread there we can help you find what model they use or something similar.
They’re not good, but there are some Hajime no Ippo games for the GBA that play like Punch-Out.
Have you heard of Fightcade?
Drake is not a rapper.
As long as your AI doesn’t somehow infringe on your training data, you’re allowed to use whatever you want, just like reviewers, analysts, and indexers do.