Logos Stick said:
KingofHazor said:
Quote:
Your dismissal of AI's ability to excel in a subject matter simply because the AI has been trained in that subject is so off the mark it's almost not worth addressing. If I teach a human to do algebra, then throw calculus problems in front of him, no one would criticize him for not being able to do calculus. And no one would dismiss his ability to then do calculus because he was subsequently trained in calculus. Yet that is what you are doing.
Terrible analogy. The better analogy would be giving someone with a PhD in math a calculus problem, which they solve easily, but then they fail to solve a basic high school algebra problem.
He stated it can't do "algebra" using an example from three years ago. I pointed out (implicitly) that not only can it do "algebra" now, it can also do "calculus". Did you really think I was saying I agree with his three-year-old example that it still can't do algebra, but that it can do calculus now? That's illogical. He then criticizes the way it learns and does calculus. My analogy is fine.
I don't care how AI "thinks", learns or processes its inputs. I care about its capabilities that I can benefit from, which I believe ultimately will cause much hardship going forward.
You didn't even understand that original criticism. It's not about how "difficult" the math is in an academic sense. That's a human way of thinking. It's about the number of digits involved in a basic arithmetic problem. You look at the number 123,456 and see a 6-digit figure. You can punch it into a calculator pretty easily and do whatever kind of operation you want.
The AI doesn't process language and numbers like that. It converts everything to tokens, then to vectors for matrix multiplication. Here's the problem: even if the AI has been trained on plenty of text containing the number 123,456, there's no guarantee the same is true for 123,457. That number can break into a completely different token sequence, and the model very well may have no idea what to do with it. So it'll hallucinate and spit out a number that it knows is junk.
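The tokenization point can be illustrated with a toy greedy longest-match tokenizer. The vocabulary below is entirely made up for illustration (real models use learned BPE vocabularies with tens of thousands of entries), but the effect is the same: two numbers that differ by one can split into different token sequences.

```python
# Toy greedy longest-match tokenizer with a made-up vocabulary,
# illustrating how nearby numbers can split into different tokens.
VOCAB = {"123", "456", "45", ",", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9"}

def tokenize(text: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return tokens

print(tokenize("123,456"))  # ['123', ',', '456']
print(tokenize("123,457"))  # ['123', ',', '45', '7']
```

So "123,456" comes out as three tokens while "123,457" comes out as four, with a different split. A model that has only seen arithmetic over one pattern has no guarantee of handling the other.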
Using my own anecdote on this: I wanted to do some space-efficient gardening in my backyard and figure out how many pounds of tomatoes I could produce using vertical aeroponic towers. I decided to ask both ChatGPT and Claude to do some analysis on this less than a year ago. On the plus side: they found research papers from Oregon State University on aeroponic gardening productivity that they quoted from to give me an indicator of how effective these towers were. On the negative side: they screwed up every basic arithmetic calculation larger than 3 digits. These LLMs were telling me I'd generate anywhere from $75,000 worth of tomatoes per year per acre all the way up to $1.5 million. When I asked them to show their work, they generated completely different numbers and said "Oh wait, we recalculated because that last answer was *insert flimsy reason here.*"
It's not reliable. If it can't figure out how many tomatoes can grow in a given space, how is it supposed to do something like accurately model the amount of natural gas I'll recover from a refrigeration plant?