Why Computers Won't Make Themselves Smarter

ramblin_ag02
https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter

Ran across this article by Ted Chiang, one of my favorite contemporary science fiction authors. Thought some here would appreciate it. He discusses the idea that as soon as we build a computer smarter than a human, that computer will build computers smarter than itself, and humanity will quickly become obsolete. He makes a lot of good points, but I think the key point is that there is no evidence people can make something smarter than themselves. For all the computer dominance of games like chess, these programs aren't as smart as people. They just process more possibilities, faster and with fewer errors.

That sort of "intelligence" doesn't translate to programming AI. The limiting factor in programming AI is not errors or a lack of man-hours. The problem is that new concepts, new programming languages, and new innovations in general are needed. We've yet to show in any way whatsoever that we can program an AI that is more innovative than a human, much less one able to program an AI more innovative than itself. His comparison to programming and compilers drives the point home.
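
To make that concrete, here's a toy sketch in Python (my illustration, not Chiang's argument and not any real engine's code) of the brute-force minimax search at the heart of classic game programs. All the "smarts" live in exhaustive enumeration plus a fixed, human-written evaluation function:

def minimax(state, depth, maximizing, moves, evaluate):
    """Best achievable score looking `depth` plies ahead, assuming
    both sides play perfectly against the fixed evaluation."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)  # fixed, human-written heuristic
    scores = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options]
    return max(scores) if maximizing else min(scores)

# Toy "game": states are integers, each move adds or subtracts 1,
# and the evaluation is just the state's value.
best = minimax(0, depth=4, maximizing=True,
               moves=lambda s: [s + 1, s - 1],
               evaluate=lambda s: s)
print(best)  # 0 -- the minimizing opponent undoes every gain, a fact
             # "discovered" purely by enumerating all 2**4 futures

Deeper search and faster hardware make this stronger at chess, but nothing in the loop ever invents a new concept.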

So maybe we're not doomed to be pod batteries for our machine overlords
one MEEN Ag
"favorite contemporary science fiction authors."

This isn't the group I take input about future tech from.
Quad Dog
Quote:

but I think the key point is that there is no evidence people can make something smarter than themselves.
Seems like we say this kind of stuff all the time, and then sooner or later prove ourselves wrong. Making a computer that can beat a human at Go comes to mind as an example of something people said couldn't be done. The computer that did it largely taught itself to play.
ramblin_ag02
one MEEN Ag said:

"favorite contemporary science fiction authors."

This isn't the group I take input about future tech from.


Jules Verne invented the concept of the modern submarine. Arthur C. Clarke basically invented the concepts of the communications satellite and the space elevator. Isaac Asimov was a polymath who was an expert in all kinds of fields. William Gibson wrote about the internet, hacking, and cyberculture in the early 1980s. In the article I posted, Chiang attributes the idea of runaway superintelligent computers to Vernor Vinge. Cell phone makers almost universally credit Star Trek for the inspiration. Real-world cloning was pursued explicitly to copy scifi.

If you want to know the possibilities and pitfalls of current and future tech, there is no better place to look than scifi authors. And Chiang is a freaking genius
one MEEN Ag
Those authors are people who dreamed up the concepts behind the technology, not people who were part of the detailed engineering that brought it to life. Their science fiction inspired people to create what they had described, but that's not the same as knowing its real-world technical limits.

That is like asking Jules Verne, the inventor of the concept of the modern submarine, how deep a nuclear submarine can dive and where the Virginia-class subs should be stationed to best counter the Russians. He might have been the first to think about putting a bottom on a diving bell, but he would be useless at accurately gauging the limits of the current state of the technology.
Quad Dog
I too like Ted Chiang a lot. His short story collections are great.
ramblin_ag02
Still not sure what that has to do with the price of beer at Kyle Field. Who better to speculate on future technology than a very successful writer of speculative fiction? The fact that Jules Verne didn't know the depth tolerances of a vehicle built 70 years after his vision is meaningless.

I think the article makes great points, and I dispute the idea that we've yet invented anything smarter than ourselves. Computers that play Go can't speak a human language, paint a painting, raise a child, or build a computer. Go input leads to Go output; other input leads to an error. IBM's Watson is trained on about 4 or 5 tasks. The average elementary school student can do thousands, and can learn more with little help.

The example of the compiler is apt. A compiler is the only truly "generalized" program, but it's not recursively self-improving and really can't be. All other programs are specialized to a great extent and have to be carefully fed perfectly formatted data, no matter how "smart" they are.
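
Here's a toy model of that compiler point in Python (my sketch, not Chiang's actual argument in code): treat compilation as a deterministic function from source text to a "binary." Rerunning it on its own source, generation after generation, produces the same output every time; a better compiler only arrives when a human edits the source:

import hashlib

def compile_(source: str) -> str:
    """Stand-in for a compiler: a deterministic source -> binary mapping."""
    return hashlib.sha256(source.encode()).hexdigest()

compiler_source = "...imagine the compiler's own source code here..."
binary = compile_(compiler_source)
for generation in range(5):
    # "Self-compilation": every generation yields the identical binary.
    assert compile_(compiler_source) == binary
print("five generations, zero improvement")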
ramblin_ag02
Besides all the above, the whole idea of recursively improving superintelligent AI is a science fiction scenario. Why are you giving me grief about posting the opinion of a brilliant science fiction author with a background in computer science?
Win At Life
Technology and computer programming have enhanced many human functions, such as our ability to see with microscopes and telescopes. Given that, it's very reasonable to expect that, in time, computers can be programmed to improve on our ability to "think."
Aggrad08
I don't think we are anywhere close but I don't see there being a structural limitation.

Computers are good at tasks. Can you make their task list broad enough to allow them to program? No idea, and I don't think this author does either.
ramblin_ag02
Aggrad08 said:

I don't think we are anywhere close but I don't see there being a structural limitation.

Computers are good at tasks. Can you make their task list broad enough to allow them to program? No idea, and I don't think this author does either.
To contradict myself a bit, I think everyone in the past who said technology couldn't do something ended up being wrong. Never underestimate the future. But I think he makes a good point here. Humans are self-sustaining organisms, and we tend to assume the same when thinking of AI. However, an AI still needs hardware, electricity, some way of obtaining information, and some way of affecting the outside world. The explosive growth in computing owes a lot to exploration, mining, materials science, circuitry advances, compiler advances, improvements in programming languages, and specific programming techniques for AI, not to mention steady electricity and climate control. Even if we developed an AI smarter than a human, it would take an entire ecosystem of such systems for it to build another copy of itself from a known schematic. That doesn't even factor in improving any specific part of that chain, much less all of it.

I am also amused by the thought of a 6-year-old beating a "super-intelligent" chess engine at a game of Pong, because the engine would error out on the unfamiliar input.
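
Something like this hypothetical front end (my toy Python, not any real engine's interface) is all a "chess-only" program amounts to outside its domain:

import re

def chess_engine(command: str) -> str:
    """Accept only coordinate moves like 'e2e4'; choke on everything else."""
    if not re.fullmatch(r"[a-h][1-8][a-h][1-8]", command):
        raise ValueError(f"not a chess move: {command!r}")
    return f"pondering a reply to {command}..."

print(chess_engine("e2e4"))         # fine: this is the world it knows
try:
    chess_engine("move paddle up")  # Pong input
except ValueError as err:
    print("engine errors out:", err)  # and the 6-year-old wins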
bmks270
The limit to AI is that an AI cannot choose its own goal. AI lacks context and moral intuition. AIs are just goal-seeking mechanisms without the ability to choose their goal; they are specialized to the one objective the programmer assigned them.
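
As a toy sketch of what I mean (hypothetical Python, not any real system): an optimizer will chase whatever objective it's handed, but nothing in the loop lets it question or replace that objective:

import random

def hill_climb(objective, x=0.0, steps=1000, step_size=0.1):
    """Maximize `objective` by random local search. The goal itself is
    fixed: the loop can only ever evaluate the function it was given."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
    return x

goal = lambda x: -(x - 3.0) ** 2   # the programmer picks the goal: peak at x = 3
print(hill_climb(goal))            # ~3.0, and it never asks why 3 matters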

Will an AI ever exist that can think for itself and reject the task assigned to it by the programmer? If we consider that an AI with its own will would not be useful to a programmer, then I think it's unlikely such an AI will ever be developed. It's not even evident that such an AI can be developed with the current state of computing technology. And would an AI that chooses its own goal be considered conscious?

Maybe in the future, if we engineer bio-computers and are growing brains in jars and can program them in some way, we'll understand enough about the brain to leverage its power and combine it with the rapid task-checking we have achieved with current AI. Then we could construct proper super-brains, merging human reasoning and logic with machine learning advances. Would those brains be conscious, or could we maintain unconscious computing brains in jars?

Ulrich
IMO the problem with this class of question is that it inherently deals in discontinuities.

It's possible to imagine a program like Stockfish gradually being granted more power, access to more data formats, and progressive alterations to its programming, until eventually it teaches itself to teach itself new things in any domain. Solving a metaproblem, so to speak.

But more likely if computers do reach a singularity, it will be due to a discontinuity that radically changes what a computer can do in a very short period of time. Those are nearly impossible to forecast. It's even harder to understand the ramifications; anyone who could describe the innovation in a manner sufficient to predict the results already has the innovation.