We need to talk about Sam Altman's sanpaku eyes.

This absolutely was NOT an accident and I have nothing but respect for whoever was in charge of sending it to print.

cc @WearsHats

My wife and I are celebrating our 10 year anniversary today. It's been a wild decade, but we're making it.

Did something stupid as root and completely bricked my laptop. Really seeing the engineering from System76 pay off as I recovered my OS in less than a minute using the recovery wizard.

I've never recovered from this big of a fuck up so fast before on Linux. I'm having to reinstall a few things, but I lost absolutely nothing.

Disappointed in tech media for not using the term "dogfooding" to describe Apple's Scary Fast keynote event produced entirely with iPhones and Macs.

Time to find out how shelf stable they really are.

Complaining about how cold it got so fast, only to remember that I can finally fire up my 3D printer without creating a heat problem in my small apartment.

Printing the BYU One-Piece Compliant Blaster.

compliantmechanisms.byu.edu/ma

This screenshot will temporarily teleport your brain back to 2004. Use with caution.

Small correction: Stable Diffusion was trained on 2.3B 256x256 images and 170M 512x512 images.

But still, that comes out to less than one byte of model data per image in the training set. There's literally no way to argue that a 256x256 image got compressed down to less than a byte.

It is worth noting to those who are in opposition to AI that the "models trained on my work contain my work" argument isn't technically or legally sound. Better arguments need to be made. If we could cram 1B+ 512x512 images into a single 2GB model, it would represent the single greatest breakthrough in data compression technology in human history. It turns out, this isn't a compression breakthrough, it's just proof that the original data isn't in the model.
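The arithmetic here is easy to sanity-check. A minimal sketch, assuming a ~2 GB model checkpoint and the training-set counts from the earlier post:

```python
# Back-of-envelope check: how many bytes of model data per training image?
# Counts are from the posts above; the ~2 GB checkpoint size is an assumption.
images_256 = 2_300_000_000   # 2.3B 256x256 images
images_512 = 170_000_000     # 170M 512x512 images
model_bytes = 2 * 1024**3    # assumed ~2 GB model file

total_images = images_256 + images_512
bytes_per_image = model_bytes / total_images
print(round(bytes_per_image, 2))  # → 0.87
```

Under a byte per image, versus the tens of kilobytes a 256x256 JPEG takes even heavily compressed, so the images can't literally be stored in the weights.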

I agree that some degree of mimicry isn't infringement - fair use includes some degree of derivation - but I really don't think that training and using a model to deliberately imitate another artist's work represents fair use, especially if it can be found to harm the artist.

The line shouldn't be drawn at "is it an exact copy?", it should be drawn at "would it confuse a reasonable person into thinking this was original art by another artist?".

Yesterday's lawsuits against generative AI companies like Stability AI (makers of Stable Diffusion) and Midjourney were mostly dismissed, and I generally agree with that. There is one case still ongoing between Sarah Andersen and Stability AI that I expect Sarah to lose on the grounds that her accusation isn't sound (her artwork isn't actually stored in the model).

But I am disappointed that they ruled that copying styles isn't infringement unless it is an exact copy of a work. I feel like this isn't a nuanced ruling...

Sad to remove Cohost from my bio, but I do not ever use it anymore. I don't really have anything bad to say about it, it just wasn't my style.

Still love eggbug though.

But, again, I don't fall for the "All AI art is art theft" argument, because it just doesn't make sense.

If I type "a moonlit beach, painting by [living artist]", it could be argued as infringing on that artist, but if I just type "a moonlit beach, oil painting", who does it infringe on? Every artist the model was trained on?

If a work infringes on every artist at once, it's safe to say that it doesn't infringe on any artist at all.

I also agree that copyright law, as currently interpreted, does not support the copyrighting of AI art. I enjoy playing around with Stable Diffusion, and I would say that I have gotten pretty good at it, but I don't take ownership of the images that I generate, and I don't believe that I should be allowed to legally claim ownership over something that wasn't actually created by me.

But I still find the technology impressive and useful.

Mastodon (Vran.as)

This is the Vranas instance.