Things I agree with:
- there's an AI bubble
- many uses of "AI" are shit/vile/immoral
- LLMs are bad at many things (counting etc.)
- LLM output /must/ be human-reviewed
- the copyright/labor question remains open
And:
- current-gen LLMs are a tremendous tool for programming
LLMs bridge the gap whenever repetition is needed, whenever you need to "draw the rest of the fucking owl".
If you can read/review/validate faster than you can write all these details (maybe you have RSI?), then LLMs are valuable.
I do believe there are domains where LLMs are /completely useless/: domains where you have to think hard about every little bit of every little line and every little detail matters.
And I'm kinda jealous of people working in those domains. Those are the exception.
Most folks work in boilerplate-heavy environments.
You can write macros, you can do codegen, you can use types (or dynamic typing) to reduce repetition; even so, there's still a lot that "a whole bunch of weights" can figure out for you, filling in the outline you created.
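As a hypothetical sketch (the names and fields are invented for illustration), this is the kind of mechanical "rest of the owl" code I mean: given the outline (the dataclass), the conversion function below is tedious to type but takes seconds to review:

```python
# Hypothetical example of per-field conversion boilerplate.
# Each line is mechanical; reviewing it is faster than typing it.

from dataclasses import dataclass


@dataclass
class Config:
    host: str
    port: int
    timeout_s: float
    retries: int


def config_from_env(env: dict) -> Config:
    # The "rest of the owl": one cast per field, each line
    # predictable from the dataclass definition above, and easy
    # to verify at a glance.
    return Config(
        host=str(env["HOST"]),
        port=int(env["PORT"]),
        timeout_s=float(env["TIMEOUT_S"]),
        retries=int(env["RETRIES"]),
    )
```

The human decides the shape (which fields exist, how strict the casts are); the repetitive body is exactly the part that's cheap to validate.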
All the architectural decisions should be made by humans. Humans are in the driver's seat, always.
I don't give a shit about the "my 8yo built a React app" demos (maybe I will when I have an 8yo?). I care about senior developers being able to think at a higher level.
I'm fairly sure I alienated part of my audience by "not shitting unconditionally on LLMs" and, well, too bad?
What's happening to artists is vile and I will commission art as much as I can. What's happening to developers is exciting: you can build so much on your own now!
@fasterthanlime my experience with LLMs as a developer is that other people deploy them without thought and I have to fix or review their shit
how is this exciting exactly?
@fasterthanlime (I have not once found that an LLM makes my own work faster or more efficient, and not for lack of trying--including picking tasks that seemingly fit their limitations well)
@whitequark @fasterthanlime there is definitely a cost/benefit thing going on when looking at the industry at large. They are good enough to let people create much more code, much faster than before, code those people are unfit to review, which creates tech debt and a burden for others. /1
@whitequark @fasterthanlime I do find the autocomplete that Copilot does very useful, because it strictly does the basic "boring repetitive structures" type of stuff that I hate to write, which I can review in seconds and which is correct 99% of the time.
It's rare that I engage with the "please write full functionality for me" chat type of thing. I find it useful as a better documentation lookup, filling, in a sense, the role StackOverflow previously played.
But I'm not sure if that's worth the harm it can do. /2
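A hypothetical illustration (class and methods invented here, not taken from anyone's actual code) of the "boring repetitive structures" that autocomplete tends to get right: once the first method is written, the remaining ones follow the same obvious pattern.

```python
# Hypothetical example: once __add__ exists, __sub__ is the kind
# of line-by-line mirror an autocompleter predicts reliably, and
# a human can verify in seconds.


class Point3:
    def __init__(self, x: float, y: float, z: float):
        self.x = x
        self.y = y
        self.z = z

    def __add__(self, other: "Point3") -> "Point3":
        return Point3(self.x + other.x, self.y + other.y, self.z + other.z)

    def __sub__(self, other: "Point3") -> "Point3":
        # Same shape as __add__ with one operator swapped: cheap to
        # review, annoying to type.
        return Point3(self.x - other.x, self.y - other.y, self.z - other.z)
```

The risk profile matters: a wrong line here is caught at a glance, unlike a hallucinated API call buried in a generated function.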
@timonsku @whitequark @fasterthanlime did you run into issues where the LLM invents API calls that don't exist? I was using the full-line completion in PyCharm Pro, and the moment it touches nonpublic code (so anything that isn't matplotlib / pandas) it is utterly useless, because it keeps making up plausible-looking but nonexistent methods and parameters
@uint8_t @timonsku @whitequark @fasterthanlime i recently tried gemini again for some gtk4 code (in c), and it made up a lot of fantasy stuff; even when called out, it apologized and then made up different fantasy stuff
@timonsku @mntmn @uint8_t @whitequark @fasterthanlime ChatGPT helped me once. I asked it about doing something in the PDF syntax. I could then completely ignore the nonsense it spewed, but use the terminology it used to find the relevant part of spec instead of Adobe Acrobat tutorials that search engines were very stubbornly providing me with 😜
@fasterthanlime @dos @timonsku @mntmn @uint8_t I've tried doing this a lot because it seems like they should be obviously good at it and I think I had one success total?
@Kahanis @uint8_t @mntmn @timonsku @whitequark @dos yeah. (lots of people in this thread, hi) after a while you get a sense of what you can get out of each model / what's better-suited for a "traditional" search engine (if there is even such a thing anymore — even DDG serves me AI answers I didn't ask for)
@fasterthanlime @Kahanis @uint8_t @mntmn @timonsku @dos I really dislike that when I ask a question I can't easily answer--which is a *really big share* of the questions I ask a search engine--an LLM straight up lies to me, in a way I can only recognize *because I'm already a domain expert* who has been in the field since high school
what if I wasn't? what sort of bullshit would I "learn" then?
@whitequark @fasterthanlime @Kahanis @uint8_t @mntmn @dos Yea, you can never treat it as an answer. It can be useful to uncover the specific terms that you need to find the right results in a domain you know nothing about but that makes it dangerous for people who do treat it like a search engine that gives you the final answer.
Those AI snippets in search engines need to go ASAP.
@dos @timonsku @mntmn @uint8_t @whitequark yeah — LLMs are very good at "tip of my tongue" queries — it's a really weird, really fuzzy query interface, and obviously depends a lot on what's in the original dataset.