"Everyone is doing it. Projects older than me, and developers who could be seen as role models and important figures in the space are adopting LLMs into their development workflow more readily than anything I'd ever seen. Vim, VLC, gstreamer, Kitty, the Linux Kernel, these are all already actively integrating the lying machine into their workflows.

Am I just being a vegan about it?"

racc.at/blog/?id=35

I would argue that those of us who refuse to use LLMs are "AI" antifascists more than "AI" vegans. Only a third of voters voted for the Nazis. A certain percentage opposed them. But many of those who didn't actively support them turned a blind eye, because they didn't want to make waves, the system benefited them, or at least they had comfortable enough lives under it. It is the same with technofascism*. Until it stops being comfortable/cheap. And then, everyone will always have been against it.

*The percentage might be closer to that of vegans, though.

(HT @pelle )

#LLM #LLMs #noAI #AI #vibeCoding

New Book (following on from the podcast and the TV series) - Stuff the British Stole by Marc Fennell amazon.com.au/dp/1761354671/
'In the days of the British Empire, things were taken that probably shouldn't have been. So how come they're still in museums, galleries and some much stranger places?'
#culture #heritage #IndigenousIP

#OpenAI Backs Bill That Would Limit Liability for #AI-Enabled Mass Deaths or Financial Disasters - wired.com/story/openai-backs-b they wouldn't do this if they weren't worried it would happen...

@hailey

It would be interesting to see if Coverity found it (and even more interesting to see if Coverity reports were part of the training set).

FreeBSD was given a free Coverity subscription, but it generated enormous numbers of reports. I went through the ones for bits of code I’d touched and they were almost all issues caused by the tool not understanding code across complex control flow (particularly things invoked via function pointer). I think one was a real bug, out of dozens I looked at.

Paying someone $20k to go through and triage as many Coverity reports as they could in however much of a competent person’s time that buys would almost certainly have found and fixed more bugs.

Got my performance review today.
Positive feedback: literally every member of my team says I'm the best manager they have ever had. I solved multiple long-standing problems the team has been dealing with for years. Team members feel safe to share their struggles, everyone feels empowered, everyone receives valuable feedback.
Negative feedback: I am not enthusiastic enough about AI.
Overall ranking: 3/5.

Anyone hiring for a fully remote team lead?

#GenAI #LLM #GetFediHired #FediHire

The people who don’t view housing as a human right are those who believe they will always have housing.

They can’t fathom becoming disabled, losing their savings or their support network.

They can’t comprehend having everything ripped away from them.

They think they’re the exception.

They’re not.

People don’t plan for homelessness or disability.

It just takes one accident, illness or stroke of bad luck.

We all need and deserve a safety net.

#disability #ableism #poverty #eugenics #chronicillness

The use of “hallucinate” is a stroke of true evil genius in the AI world.

In ANY other context we’d just call them errors & the fail rate would be crystal clear.

Instead, “hallucinate” implies genuine sentience & the *absence* of real error.

Aw, this software isn’t shit! Boo’s just dreaming!

“why won’t the AI haters admit claude mythos is good?” when it turns out the exploits it found are utterly overblown (ie the firefox exploit that only works on a custom build with the sandbox disabled amongst many other non-exploitable bugs), were found at extreme expense, and required a ton of human staff to verify (just like with existing non-LLM techniques), why won’t you admit this is a grift? why won’t you admit you’ve been falling for the same grift since 2019?

After almost twenty years on the platform, EFF is logging off of X.

This isn’t a decision we made lightly, but it might be overdue. 🧵 (1/5)
eff.org/deeplinks/2026/04/eff-

Unpacking Trump’s Use of Emergency Powers to Prop Up Coal - insideclimatenews.org/news/090 "A World War II-era policy is stopping old coal plants from closing, despite high costs and the wishes of their owners." has there ever been a more perversely wrong policy?

In the past, many FOSS proponents would mistakenly apply the "many eyes make bugs shallow" quote to all classes of bugs, in particular security ones. That historically hasn't been true, because you need security expertise to find security bugs; that expertise isn't democratized in the same way as the ability to spot general classes of bugs.

LLMs have now changed that. This blog post by Thomas Ptacek does a good job of explaining what is going on:

sockpuppet.org/blog/2026/03/30

#security #AI #LLM

The company that - last week - accidentally published the source code of their flagship product, which was in turn discovered to be a contraption riddled with security holes and included instructions to deliberately mislead people, is telling us this week that they've got a new product that is _too good_ at finding security issues so they need to make a special secret cabal and only share it with them because it would be too dangerous to show anyone else.

This is definitely all very believable.

@cr00ky I know. It just doesn't pass the Open Definition. (Nor is it listed as an approved open source license by the OSI.) So, a "hardware printer you can actually understand, repair, and upgrade", yes, but it is not open in the sense "open" is normally used when talking about open hardware or open source. I think the name implies either that they don't really understand the context they are in (which is bad) or that it is a case of open washing (which is worse).

RE: infosec.exchange/@mttaggart/11

I think the way I would put it is:

1) The point of the AI project is ideological; the goal is to reshape industries such that we are dependent on AI companies' products, and to destroy free and open knowledge such that we are dependent on these products for thought and reasoning. We see an injection of AI into cybersecurity, while simultaneously drawing money and resources away from (boring) efforts that would actually broadly improve cybersecurity.¹ We see an injection of AI into knowledge acquisition, while simultaneously polluting the landscape of the internet as a useful source of knowledge. Both are in service of the same ideological project, and working towards the same goal.

2) The touted usefulness of AI for programming and cybersecurity is directly funding the project to expand it everywhere else, where it is causing massive harms to civil society, individuals' mental health, and the information landscape. You or your company paying for these products is keeping investment money flowing and extending the runway, for AI companies to reach that point of "indispensability". There is no divorcing your cool shiny toy from the creation of AI deepfakes that destroy democracy, or the AI psychosis that destroys lives. This is because the AI companies are pursuing an ideological project that ultimately has nothing to do with improving people's work or their lives; the leaders of these companies have loudly and publicly said that very clearly. You are laundering the reputation of these companies and keeping them alive, when the only moral option is to destroy them.

I've said this elsewhere, but: Maybe you, who are reading this, are offended by this framing, because you use and enjoy the AI tools. But it's also likely that you, and many other technologists, take moral abdication almost as a point of pride, where the only thing that matters is "capability". In that case, I don't understand the defensive response. Why are you uncomfortable being described as the thing you're bragging to be?

#fuckAI

¹ The stark contrast: The breathless and brainless promotional posts about Glasswing came into my feed at the same time as the posts about the final gutting of CISA. securityweek.com/white-house-s

So, since the "the fediverse needs to be open to new ideas" canard is going around again, let's just be clear. AI is not a new idea, it's the oldest idea in the tech industry.

It's the idea that capital can embrace, extend, and extinguish computing, the idea that industry is more important than labor, that the climate crisis is an externality not worth worrying about. AI is the idea that stocks matter more than people.

No big trucks for little roads: American OEMs say #EU is blocking imports - arstechnica.com/cars/2026/04/n "the big truck is evidently now emblematic of America and must be accepted by our trading partners, regardless of whether there’s customer demand." #nope
