Using Grover's Algorithm

Learn about Grover's algorithm for search problems.

Using the qubit phase

Machine learning can be thought of as a search problem. Given only observations of inputs and related outputs, we can characterize the learning problem as searching for a mapping of inputs to the correct outputs. A good mapping, the solution to the search problem, allows predicting the output for any given input.

Logically, the mapping is a function, such as y = f(x), with x as the input and y as the output. The problem is that we don’t know anything about the function f. If we did, we wouldn’t need to learn it from the data. We would specify it directly.
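
The following minimal sketch illustrates this situation. The function unknown_f is purely hypothetical and stands in for whatever hidden process produced the data; in a real learning problem, all we ever get to see are the observed (x, y) pairs.

```python
# A hypothetical hidden mapping; in practice we never see this definition.
def unknown_f(x):
    return 3 * x + 1

# All the learner actually observes: pairs of inputs and outputs.
observations = [(x, unknown_f(x)) for x in range(5)]
print(observations)  # [(0, 1), (1, 4), (2, 7), (3, 10), (4, 13)]
```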

If we can’t specify the function, the next best thing we can do is find an approximation of the function f. Since we don’t know anything about the mapping function f, we can’t tell what a good approximation would look like. It could be a mathematical function. It could be a Bayesian system. It could even be an artificial neural network. There are infinitely many possible approximations, and we don’t know which kind of approximation would work best.

We can think of all possible approximations of the mapping function as a huge search space. The learning task is to search this space for a good enough approximation. In general, searches involve trial and error. We can try a possible solution, see how it performs, and adjust the solution accordingly.
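The sketch below makes this trial-and-error idea concrete under a simplifying assumption: the search space is restricted to linear candidates of the form y = a*x + b, and each candidate is scored by its squared error on the observations from the earlier example. The names and the random-search strategy are illustrative choices, not a prescribed method.

```python
import random

def score(candidate, observations):
    """Squared error of a candidate (a, b) against the observed pairs."""
    a, b = candidate
    return sum((a * x + b - y) ** 2 for x, y in observations)

# Observations generated by the hidden mapping from the earlier sketch.
observations = [(0, 1), (1, 4), (2, 7), (3, 10), (4, 13)]

best, best_score = None, float("inf")
for _ in range(1000):
    # Try a possible solution drawn at random from the search space.
    candidate = (random.uniform(-10, 10), random.uniform(-10, 10))
    s = score(candidate, observations)
    # See how it performs and keep the best approximation found so far.
    if s < best_score:
        best, best_score = candidate, s

print(best, best_score)  # best should land near (3, 1) with a small error
```

Even this toy search shows why efficiency matters: the more blindly we try candidates, the more evaluations we need, which is exactly the kind of unstructured search problem Grover's algorithm is designed to speed up.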
