dds08 said:
Google's willow chip presentation makes it seem like Quantum is neural networks on the hardware level, and that each qubiit is it's own neural network on an atomic level!
It'll blow AI away because AI is software level, compared to atomic level.
That's... not exactly a correct interpretation, but also kind of.
In very simple terms, qubits hold information probabilistically, but they're not like a neural network that is trained probabilistically.
Neural networks are made of layers of interconnected neurons, and each neuron is essentially a function that takes the results of each neuron in the input layer or previous layer as variables and applies its own weights to them. At the end, you have a single output layer that produces a desired result.

Let's say you have a stock price prediction network with two input variables, yesterday's closing price x and the day before's closing price y, and two layers of two neurons. Each neuron in the first layer takes the input variables and applies its trained weights, so you'd have something like N1 = 1.2x + 2.1y and N2 = 1.3x + 1.8y, giving you N1 and N2. The next layer takes N1 and N2 and does the exact same thing, so you'd have something like N3 = 1.8(N1) + 0.7(N2) and N4 = 1.2(N1) - 1.5(N2). Then the output layer gives something like OUTPUT = N3 - N4. You just run x and y through and get your prediction.

But where do those weights come from, you ask? They're trained using a lot of math and a lot of data. ML and AI algorithms run data through with a random set of weights and adjust them based on the deltas between predictions and known results (real networks also use much more complex, nonlinear functions). Eventually the weights arrive at a combination where further changes don't produce better or much better results. This is a probabilistic solution, though, because that combination isn't guaranteed to be the best combination possible. It's just a local minimum that's good enough. If you start with a different set of random weights, you may end up with different trained weights that produce similar predictions.
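To make that concrete, here's a quick sketch of the toy network above as code. The weights (1.2, 2.1, etc.) are just the made-up numbers from the example, standing in for values that training would normally produce, and the little `train` function below it is a deliberately oversimplified, hypothetical one-weight version of the "adjust weights based on the deltas" idea:

```python
# Forward pass for the toy stock-price network described above.
def predict(x, y):
    # First layer: each neuron is a weighted sum of the inputs.
    n1 = 1.2 * x + 2.1 * y
    n2 = 1.3 * x + 1.8 * y
    # Second layer takes the first layer's outputs as its inputs.
    n3 = 1.8 * n1 + 0.7 * n2
    n4 = 1.2 * n1 - 1.5 * n2
    # Output layer combines the second layer into one prediction.
    return n3 - n4

# Oversimplified training loop: one linear "neuron" w*x, nudged toward
# whatever weight reduces the squared error on some (x, target) pairs.
def train(data, lr=0.05, steps=500):
    w = 0.5  # arbitrary starting weight (real training starts random)
    for _ in range(steps):
        for x, target in data:
            error = w * x - target
            w -= lr * error * x  # gradient of squared error w.r.t. w
    return w

print(predict(100.0, 102.0))
print(train([(1.0, 2.0), (2.0, 4.0)]))  # settles near w = 2.0
```

Real networks do the same two things, just with nonlinear functions, millions of weights, and a smarter update rule (backpropagation), which is also why they can get stuck in a "good enough" local minimum instead of the best possible one.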
Qubits can't necessarily hold all of those weights and the connections between them. You can't really train them like that, AFAIK (maybe you can view them like that in an algorithm?). They CAN act like the neurons in the network, though, because a qubit is effectively a nonlinear probability function, and when you think about the network as a whole, it kind of is just one big ass function.
As to blowing away AI, yes and no. Quantum parallelization may be able to process more data faster and arrive at more optimal weightings. AI will probably still be the same AI, though. I think the big takeaway is that it has the potential to make AI better, but not necessarily replace it.
Willow's big breakthrough is also more about reducing errors in qubits. You need stable qubits to do any kind of quantum computing, but because of their nature they're highly susceptible to being influenced by the rest of the universe. Think of a hard drive that stores information as bits by polarizing tiny little pieces of metal into 1s and 0s. Now, what happens when you run a magnet across it? It's all ****ed up and unusable. Same thing for qubits: you're trying to use them in an algorithm, but some random particle or background radiation causes one to change, and all of a sudden you have an error. The more qubits you have, the more errors you're likely to have, because the complexity of the system has increased and there are more chances for errors. Willow is the first demonstration of correcting errors in qubits faster than they appear or propagate, and apparently the more qubits you add, the better it performs.
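The basic error-correction idea can be shown with a classical analogy (real quantum error correction is much harder, since you can't copy a qubit's state, and Willow uses a surface code, not this): store each bit redundantly, and let a majority vote outvote a single corrupted copy.

```python
# Classical 3-bit repetition code: a toy analogy for error correction.
def encode(bit):
    # Store three redundant copies of one logical bit.
    return [bit, bit, bit]

def correct(copies):
    # Majority vote recovers the logical bit as long as
    # at most one of the three copies has flipped.
    return 1 if sum(copies) >= 2 else 0

stored = encode(1)
stored[0] ^= 1          # stray radiation flips one copy
print(correct(stored))  # the other two copies outvote it
```

Adding more copies makes the code tolerate more flips, which is loosely the flavor of Willow's result: adding more physical qubits made the encoded information more reliable, not less.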
At least that's all my understanding.