https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter
Ran across this article by Ted Chiang, one of my favorite contemporary science fiction authors. Thought some here would appreciate it. He discusses the idea that as soon as we build a computer smarter than a human, that computer will build computers smarter than itself, and humanity will quickly become obsolete. He makes a lot of good points, but I think the key one is that there is no evidence people can make something smarter than themselves. For all the computer dominance of games like chess, these programs aren't as smart as people. They just process more possibilities faster and with fewer errors.
That sort of "intelligence" doesn't translate to programming AI. The limiting factor for programming AI is not errors or a lack of man-hours. The problem is that new concepts, new programming languages, and new innovations in general are needed. We've yet to show in any way whatsoever that we can program an AI that is more innovative than a human, much less one able to program an AI more innovative than itself. His comparison between programming and compilers drives the point home.
So maybe we're not doomed to be pod batteries for our machine overlords.