Time to find out how shelf-stable they really are.

Complaining about how cold it got so fast, only to remember that I can finally fire up my 3D printer without creating a heat problem in my small apartment.

Printing the BYU One-Piece Compliant Blaster.

compliantmechanisms.byu.edu/ma

This screenshot will temporarily teleport your brain back to 2004. Use with caution.

Small correction: Stable Diffusion was trained on 2.3B 256x256 images and 170M 512x512 images.

But still, that comes out to less than one byte of model data per image in the training set. There's literally no way to argue that a 256x256 image got compressed down to less than a byte.
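
For context, a rough back-of-the-envelope check of that ratio in Python (just a sketch, using the image counts above and the roughly 2GB model size mentioned below; exact checkpoint sizes vary):

# Bytes of model weights available per training image.
images_256 = 2_300_000_000   # 2.3B 256x256 images
images_512 = 170_000_000     # 170M 512x512 images
model_bytes = 2 * 1024**3    # assume a ~2GB checkpoint

bytes_per_image = model_bytes / (images_256 + images_512)
print(f"{bytes_per_image:.2f} bytes of model per training image")  # ~0.87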

It is worth noting, for those opposed to AI, that the "models trained on my work contain my work" argument isn't technically or legally sound. Better arguments need to be made. If we could cram 1B+ 512x512 images into a single 2GB model, it would represent the single greatest breakthrough in data compression technology in human history. It turns out this isn't a compression breakthrough; it's just proof that the original data isn't in the model.

I agree that some degree of mimicry isn't infringement - fair use allows for some derivation - but I really don't think that training and using a model to deliberately imitate another artist's work represents fair use, especially if it can be found to harm the artist.

The line shouldn't be drawn at "is it an exact copy?", it should be drawn at "would it confuse a reasonable person into thinking this was original art by another artist?".

Yesterday's lawsuits against the Generative AI companies behind Stable Diffusion and Midjourney were mostly dismissed, and I generally agree with that. There is one case still ongoing between Sarah Andersen and Stability AI that I expect Sarah to lose on the grounds that her accusation isn't sound (her artwork isn't actually stored in the model).

But I am disappointed that they ruled that copying styles isn't infringement unless it is an exact copy of a work. I feel like this isn't a nuanced ruling...

Sad to remove Cohost from my bio, but I do not ever use it anymore. I don't really have anything bad to say about it, it just wasn't my style.

Still love eggbug though.

But, again, I don't fall for the "All AI art is art theft" argument, because it just doesn't make sense.

If I type "a moonlit beach, painting by [living artist]", it could be argued as infringing on that artist, but if I just type "a moonlit beach, oil painting", who does it infringe on? Every artist the model was trained on?

If a work infringes on every artist at once, it's safe to say that it doesn't infringe on any artist at all.

I also agree that copyright law, as currently interpreted, does not support the copyrighting of AI art. I enjoy playing around with Stable Diffusion, and I would say that I have gotten pretty good at it, but I don't take ownership of the images that I generate, and I don't believe that I should be allowed to legally claim ownership over something that wasn't actually created by me.

But I still find the technology impressive and useful.

Not to say that I don't understand people's arguments. I think that being able to type a living artist's name into a prompt and emulate their style could represent an infringement, but there needs to be a sense of nuance. I don't think that training models on copyrighted materials is infringing at all. The data a model was trained on doesn't exist inside the model, so it doesn't really count as a copy.

And prompts that don't evoke a specific artist don't produce works that infringe on anyone.

I worry that this is leading otherwise progressive people to argue against their own principles. Many of the same people I previously knew to argue against strict Intellectual Property laws are suddenly demanding the expansion of IP protections because of AI. People I thought would embrace broadly permissive fair use are now arguing against it under the pretense of fighting "art theft".

I won't fall for it. I hope Generative AI helps to invalidate IP laws and expand fair use.

One of my favorite aspects of Mastodon is that hardly anybody wants to fight. On other social networks, being disagreed with often comes with being harassed, but I hardly ever see that here. People are way more open to discussion when there's a disagreement, and are far more level-headed when somebody does something that the community dislikes. Even when I've seen "dogpiles", people remained civil, despite being blunt.

I've pretty much sworn off all social media except for here now.

Really wild watching conspiracy theories start taking root on the left. Not a great sign that this is happening...

At least it's a little entertaining. Currently amused by the theory that electronic music is a state-sponsored psyop to eliminate lyrics from music so that we don't think for ourselves.

Curious to learn what these people think of Classical Music.

Random observation since I started this thread:

The people here on Mastodon are a mix of the early adopters starting in 2016 and every Twitter exodus since. Mostly people who got tired of the drama a long time ago.

The people that exist on Bluesky are mostly recent Twitter expats who haven't fully left Twitter yet. Mostly people who either tolerate or thrive in an environment with drama.

So my money is still on Mastodon being a more pleasant place than most other alternatives.

One thing I will say is that a lot of the same issues Mastodon had during its first major growth spurts remind me of what's happening on Bluesky right now. I have also seen Mastodon and the larger fediverse mellow out over the past year and become a really pleasant place to exist.

I hope, for the sake of the Internet keeping a multitude of pleasant non-corporate social spaces, that Bluesky sees the same mellowing out effect over the next year or so.
