Turns out it's easier to just update to #dotnet 7 than to figure out how to have multiple runtimes on a Mac and get all of them recognized in all your tools...
It's probably going to be time to load-balance the main web server across more than one server soon too; we're often getting rate-limited by Twitter on that one. That's why there are often errors when loading tweets from the web UI.
It’s Not About Status, Elon. Only Now It Is.
It's been about 48 hours since Elon killed legacy checkmarks in an attempt to convince people, and hopefully celebrities in particular, to make the switch. How's it going?
https://twitter.com/oneunderscore__/status/1649563297690140673?s=20
Oh.
Well then! Let's talk about why this is such an awful and, in retrospect, obviously avoidable catastrophe.
https://www.zenofdesign.com/its-not-about-status-elon-only-now-it-is/
#Uncategorized
Please don't clog up the Transgender Concerns Form with so many spurious reports that it becomes unusable, and the Missouri government gets Big Mad. Because that would be wrong. Here's a link to the form so you'll know exactly where not to do that: https://ago.mo.gov/file-a-complaint/transgender-center-concerns
Curious to know what people's intuitions are about this, without getting into a computer science argument.
It would be nice if Mastodon admins could make an allow-list of accounts for other servers. This would let them allow only accounts of public interest to their community from bird.makeup and filter out the noise from the rest.
That feature might be too niche though. #MastoDev
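A minimal sketch of what that filtering could look like. This is not Mastodon's actual code or API; the data structure and function names are made up for illustration:

```python
# Hypothetical per-server account allow-lists (illustration only, not Mastodon's real API).

# Admin-configured map: remote server -> set of allowed account names.
# Servers not listed here are unaffected: everything federates through as usual.
ALLOW_LISTS: dict[str, set[str]] = {
    "bird.makeup": {"NASA", "EFF", "openstreetmap"},
}

def is_account_allowed(server: str, username: str) -> bool:
    """Return True if an account from `server` should federate in."""
    allowed = ALLOW_LISTS.get(server)
    if allowed is None:
        return True  # no allow-list configured for this server
    return username in allowed

# Only the listed bird.makeup mirrors get through; other servers are untouched.
assert is_account_allowed("bird.makeup", "NASA")
assert not is_account_allowed("bird.makeup", "randomaccount")
assert is_account_allowed("mastodon.social", "anyone")
```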
@video_manager @dlakelan @marklemley
Stable Diffusion (as used by StabilityAI) has been trained on billions of images, each of which typically requires over a million bits (3 channels × 8 bits × 512×512 pixels, so 3×8×512×512 > 10⁶) to "store" in a way where you can recreate the image.
So if Stable Diffusion were copying the images, it would need at least a quadrillion bits for reasonable quality. (10⁹×10⁶=10¹⁵)
Instead, Stable Diffusion is conditioning less than one billion parameters, so even at a generous 64 bits per parameter the whole model needs less than 64 billion bits. (10⁹×64 = 6.4×10¹⁰)
That's a ratio of less than 0.000064 model bits stored per input bit. Is it likely that it is storing 0.0064% of every image? No.

What the training is actually doing is conditioning the model to distinguish between "art" and "noise", and to make input noise look more like the concepts of art it has been trained on. One can imagine all possible 512×512 images lying in a 786,432-dimensional space (3×512×512), with the "art" images clustered together in a hard-to-imagine shape. The 890 million parameters of the model describe this shape and how parts of it are associated with certain keywords, not how to recreate any particular image.
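If you want to check the arithmetic yourself, here it is as a few lines of Python, using the same round figures as above (including the deliberately generous 64 bits per parameter):

```python
# Back-of-envelope check of the numbers in the post, using its round figures.

bits_per_image = 3 * 8 * 512 * 512      # 3 channels x 8 bits x 512x512 pixels
assert bits_per_image > 10**6           # ~6.3 million bits, so "over a million" holds

n_images = 10**9                        # "billions of images", taken as 10^9
copy_bits = n_images * 10**6            # lower bound to store them all: 10^15 bits

n_params = 10**9                        # "less than one billion parameters"
model_bits = n_params * 64              # 64 bits per parameter: 6.4e10 bits

ratio = model_bits / copy_bits
print(ratio)                            # 6.4e-05, i.e. 0.0064% of each input bit
```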
I'd really love to find a way to grow https://lemmy.ml/ using bird.makeup. I wonder if there are some types of data I could forward that would help.
Open source developer. Wikidata, IPFS, Linux, Ethereum. /r/fuckcars enthusiast. I tend to boost funny stuff.