Any experienced programmer worth their salt will tell you that *producing* code — learning syntax, finding examples, combining them, adding behaviors, adding complexity — is the *easy* part of programming.

The hard part: “How can it break? How will it surprise us? How will it change? Does it *really* accomplish our goal? What *is* our goal? Are we all even imagining the same goal? Do we understand each other? Will the next person to work on this understand it? Should we even build this?”

I think I figured it out, currently sending backlogged tweets!

bird.makeup is having some trouble with the current load. Looking into it!

Lifecycle CO2 g/km, Nissan Leaf: 104
Lifecycle CO2 g/km, ebike: 22

Electric cars sold across all of Europe, 2022: 1.5-1.6 million
Ebikes sold just in Germany, 2022: 2.2 million

Panasonic 2170 battery cells, Tesla 3 (short range): 2,976
Panasonic 2170 battery cells, Malibu GT ebike: 65
(= 45 ebikes : 1 Tesla 3—or 140 ebikes : 1 Tesla X)

Cars parked per parking space: 1
Ebikes parked per parking space: ~10

someone who is good at math please help me budget this. my city is dying
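
For what it's worth, the headline cell-count ratio above checks out arithmetically. A quick sketch in Python, using only the figures quoted in the post (not independently verified):

# Back-of-the-envelope check of the battery-cell comparison above.
# Cell counts are the figures from the post itself, not independently verified.
tesla_3_cells = 2976  # Panasonic 2170 cells, short-range Tesla 3 (per the post)
ebike_cells = 65      # Panasonic 2170 cells, Malibu GT ebike (per the post)
print(f"{tesla_3_cells / ebike_cells:.1f} ebikes' worth of cells per Tesla 3")  # ~45.8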

With all the GPT4-will-change-everything hype and fanfare happening now, it is worth mentioning that I just had to deal with some rural contractors that DIDN’T HAVE EMAIL ADDRESSES. The system of the world has more inertia than it sometimes seems.

People have a hard time defining woke without accidentally saying they want to be able to hate minorities and not face any consequences.

I wonder if someone will eventually harvest the (high quality) image descriptions of the fediverse for AI training? 🤔

"Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar." #gpt4

Join us at 1 pm PT today for a developer demo livestream showing GPT-4 and its capabilities/limitations: youtube.com/live/outcGtbnMuQ?f

(comments in Discord: discord.gg/openai)

I don't think people realize what a big deal it is that Stanford retrained a LLaMA model, into an instruction-following form, by **cheaply** fine-tuning it on inputs and outputs **from text-davinci-003**.

It means: If you allow any sufficiently wide-ranging access to your AI model, even by paid API, you're giving away your business crown jewels to competitors that can then nearly-clone your model without all the hard work you did to build up your own fine-tuning dataset. If you successfully enforce a restriction against commercializing an imitation trained on your I/O - a legal prospect that's never been tested, at this point - that means the competing checkpoints go up on bittorrent.

I'm not sure I can convey how much this is a brand new idiom of AI as a technology. Let's put it this way:

If you put a lot of work into tweaking the mask of the shoggoth, but then expose your masked shoggoth's API - or possibly just let anyone build up a big-enough database of Qs and As from your shoggoth - then anybody who's brute-forced a *core* *unmasked* shoggoth can gesture to *your* shoggoth and say to *their* shoggoth "look like that one", and poof you no longer have a competitive moat.

It's like the thing where if you let an unscrupulous potential competitor get a glimpse of your factory floor, they'll suddenly start producing a similar good - except that they just need a glimpse of the *inputs and outputs* of your factory. Because the kind of good you're producing is a kind of pseudointelligent gloop that gets sculpted; and it costs money and a simple process to produce the gloop, and separately more money and a complicated process to sculpt the gloop; but the raw gloop has enough pseudointelligence that it can stare at other gloop and imitate it.

In other words: The AI companies that make profits will be ones that either have a competitive moat not based on the capabilities of their model, OR those which don't expose the underlying inputs and outputs of their model to customers, OR can successfully sue any competitor that engages in shoggoth mask cloning.
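
To make the idiom concrete, here is a minimal sketch of imitation fine-tuning along the lines the thread describes, using Hugging Face transformers and datasets. The file name teacher_pairs.jsonl, the gpt2 stand-in model, the prompt template, and the hyperparameters are all illustrative assumptions, not the actual Stanford Alpaca recipe:

# Hedged sketch of "imitation" fine-tuning: collect a teacher model's
# (instruction, output) pairs via its API, then fine-tune a smaller open
# model on them so it mimics the teacher's behavior.
import json
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Step 1 (assumed to already exist): a JSONL file of pairs harvested from the
# teacher API, one {"instruction": "...", "output": "..."} object per line.
pairs = [json.loads(line) for line in open("teacher_pairs.jsonl")]

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for a LLaMA-class model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def to_text(example):
    # Concatenate instruction and teacher output into one training string.
    return {"text": f"Instruction: {example['instruction']}\nResponse: {example['output']}"}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

ds = (Dataset.from_list(pairs)
      .map(to_text)
      .map(tokenize, batched=True, remove_columns=["instruction", "output", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="imitation-model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the "student" now imitates the teacher's input/output behavior

The point of the sketch is how little is needed: no access to the teacher's weights or training data, just a sufficiently large sample of its inputs and outputs.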

"We offer no explanation as to why these architectures seem to work; we attribute their success, as all else, to divine benevolence" - Noam Shazeer (second author of the transformer paper, now CEO of Character AI)

from the SwiGLU paper: arxiv.org/abs/2002.05202v1
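
For context, a minimal PyTorch sketch of the SwiGLU feed-forward block that quote comes from. The dimensions are illustrative; the bias-free projections and the 2/3 hidden-size reduction follow the paper's convention:

# FFN_SwiGLU(x) = (SiLU(x W) * x V) W2, per arxiv.org/abs/2002.05202
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w = nn.Linear(d_model, d_ff, bias=False)   # gate projection
        self.v = nn.Linear(d_model, d_ff, bias=False)   # value projection
        self.w2 = nn.Linear(d_ff, d_model, bias=False)  # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Swish/SiLU-gated linear unit, then project back to d_model.
        return self.w2(F.silu(self.w(x)) * self.v(x))

ffn = SwiGLUFFN(d_model=512, d_ff=1365)  # d_ff ≈ (2/3)*4*d_model keeps parameters comparable
print(ffn(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])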

Thanks to @foreverandaday for pointing out the existence of bird.makeup, a fork of BirdsiteLIVE that uses the same back-end as Nitter (github.com/zedeus/nitter), which relies on the hidden Twitter API. We do not wish to bypass Twitter's allowed usage guidelines, and there is no guarantee Twitter won't axe hidden-API access and its tooling in the future, but if you would like to, you can migrate to bird.makeup, which may last past the Feb 9 cutoff.

You know, bird.makeup is INCREDIBLE. Not only does it get past the Birdsite's API limits, it correctly mirrors threads, replies, retweets and quote tweets. I understand some large instances block it, and I feel doing so is as stupid as Birdsite's API blocks are. It's an amazing tool for bridging the gap for some of the people who haven't found their way here yet, but should. #birdmakeup

New patreon post about all the improvements that I made to bird.makeup in the last few weeks:

patreon.com/posts/79915224

"Psychologists use "IQ" to politely put people & entire races in boxes with the label "idiots", and stick them there for a lifetime. They ruin people's careers and potentials. Politely. I call psychologists (& IQ mountebanks) idiots to their face. Not politely." - @nntaleb
