This absolutely was NOT an accident and I have nothing but respect for whoever was in charge of sending it to print.
cc @WearsHats
Did something stupid as root and completely bricked my laptop. Really seeing the engineering from System76 pay off as I recovered my OS in less than a minute using the recovery wizard.
I've never recovered from this big of a fuck up so fast before on Linux. I'm having to reinstall a few things, but I lost absolutely nothing.
Important supplemental material: https://web.archive.org/web/20000303145808/http://twinkies.com/splash.html
I don't actually have these, but you could, for $995.
Complaining about how cold it got so fast, only to remember that I can finally fire up my 3D printer without creating a heat problem in my small apartment.
Printing the BYU One-Piece Compliant Blaster.
They're selling your data for $2,000/month.
Small correction: Stable Diffusion was trained on 2.3B 256x256 images and 170M 512x512 images.
But still, that comes out to less than one byte of model data per image in the training set. There's literally no way to argue that a 256x256 image got compressed down to less than a byte.
It is worth noting to those who are in opposition to AI that the "models trained on my work contain my work" argument isn't technically or legally sound. Better arguments need to be made. If we could cram 1B+ 512x512 images into a single 2GB model, it would represent the single greatest breakthrough in data compression technology in human history. It turns out, this isn't a compression breakthrough, it's just proof that the original data isn't in the model.
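The arithmetic behind that claim is easy to check. A quick sketch, using the figures from these posts (a roughly 2GB model checkpoint and ~2.47B total training images; both are the post's numbers, not measured values):

```python
# Back-of-envelope check: could a ~2GB model "contain" its training images?
model_bytes = 2 * 10**9          # ~2 GB checkpoint size (figure from the post)
train_images = 2.3e9 + 170e6     # 2.3B 256x256 images + 170M 512x512 images
bytes_per_image = model_bytes / train_images

print(f"{bytes_per_image:.2f} bytes of model weights per training image")

# Compare against what a single uncompressed 256x256 RGB image actually takes:
raw_bytes_256 = 256 * 256 * 3
print(f"{raw_bytes_256} bytes for one raw 256x256 RGB image")
```

That works out to under a single byte of weights per training image, versus nearly 200KB for even one small uncompressed image, which is the point: the weights can't plausibly be storing copies of the training set.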
I agree that some degree of mimicry isn't infringement - fair use includes some degree of derivation - but I really don't think that training and using a model to deliberately imitate another artist's work represents fair use, especially if it can be found to harm the artist.
The line shouldn't be drawn at "is it an exact copy?", it should be drawn at "would it confuse a reasonable person into thinking this was original art by another artist?".
Yesterday's lawsuits against generative AI companies like StabilityAI and Midjourney were mostly dismissed, and I generally agree with that. There is one case still ongoing between Sarah Andersen and StabilityAI that I expect Sarah to lose on the grounds that her accusation isn't sound (her artwork isn't actually stored in the model).
But I am disappointed that they ruled that copying styles isn't infringement unless it is an exact copy of a work. I feel like this isn't a nuanced ruling...
And done
Get your free 'NFTs' or whatever now and do whatever you like with 'em
Live: http://stux.stuxnet.ai/f-nft
Code: https://gitcat.org/stux/f-nft
But, again, I don't fall for the "All AI art is art theft" argument, because it just doesn't make sense.
If I type "a moonlit beach, painting by [living artist]", it could be argued as infringing on that artist, but if I just type "a moonlit beach, oil painting", who does it infringe on? Every artist the model was trained on?
If a work infringes on every artist at once, it's safe to say that it doesn't infringe on any artist at all.
I also agree that copyright law, as currently interpreted, does not support the copyrighting of AI art. I enjoy playing around with Stable Diffusion, and I would say that I have gotten pretty good at it, but I don't take ownership of the images that I generate, and I don't believe that I should be allowed to legally claim ownership over something that wasn't actually created by me.
But I still find the technology impressive and useful.
#Netsec Professional. Whitehat #Hacker. #Demoscene spectator. Nerd.
I'm a fan of #Linux, #FOSS, #Decentralization (not Crypto), Crypto (as in #Cryptography), and #Socialism. Always #Antifascist & #Antiwar.
Seattle, WA