The take that #gpt3 "has mastered analogical reasoning" is dangerous #bullshit. It's a predictive system; it understands nothing. What's going on is that analogies happen to be highly predictable. I can tell you from personal experience that it's dead easy to generate analogies without really understanding them. That's how I got a 98th+ percentile score on the SATs.
popsci.com/technology/gpt-3-la
[h/t @librarianshipwreck | mastodon.social/@librarianship]


@FeralRobots @librarianshipwreck Your SAT analogy struck a chord. I was always good at standardized tests, even in subjects I wasn't good at. I got 790/800 on the math SAT, and I could barely make it through calculus. That always left me with the impression that standardized tests don't really measure capability or knowledge, but more the window dressing around capability and knowledge.

@eighthave
Very similar for me! The teachers coaching us would all say 'nobody finishes the math section', and I got through it with time to spare. It just seemed like actually solving the problem was hardly ever necessary to get the answer, or that's how I recall it. There was usually a meta-approach: you could eliminate two of the answers and just pick the closest fit from the remaining two. You'll be wrong some percentage of the time, but it'll usually still give you a good score (rough expected-value math sketched below).
@librarianshipwreck
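
A minimal sketch of why that eliminate-and-guess strategy pays off, assuming the classic SAT scoring model (+1 per correct answer, -1/4 per wrong answer on five-choice questions); the specific parameters are illustrative assumptions, not figures from the thread:

# Rough sketch of the "eliminate and guess" payoff.
# Assumption: classic SAT scoring of +1 per correct answer and -1/4 per
# wrong answer on five-choice questions; parameters are illustrative,
# not taken from the thread.

def expected_points(choices_left: int, reward: float = 1.0, penalty: float = 0.25) -> float:
    """Expected points per question when guessing uniformly among the remaining choices."""
    p_correct = 1.0 / choices_left
    return p_correct * reward - (1.0 - p_correct) * penalty

if __name__ == "__main__":
    # Blind guessing among all 5 choices is break-even; narrowing to 2
    # choices is worth roughly +0.375 points per question on average.
    for k in (5, 4, 3, 2):
        print(f"guess among {k} remaining choices: {expected_points(k):+.3f} points/question")

Under those assumptions, blind guessing is break-even, while narrowing each question to two plausible choices is worth roughly +0.375 points per question, which adds up to a noticeably better raw score across a full section.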
