Multiple Variables Using SciPy

Learn to apply gradient descent and Newton's method to problems with several variables.

Until now, we’ve built a broad theoretical background in optimization. We’ve studied several algorithms and the importance of derivatives, gradients, and Hessians. To consolidate all this knowledge, we’ve implemented every algorithm we’ve seen so far. Implementing our own versions of the algorithms is a great exercise that develops a deeper understanding of the different elements involved in solving a problem.

As we saw when studying gradient descent and Newton’s method, there are advanced versions of these algorithms that handle saddle points and convergence more robustly. Explaining the details of those advanced algorithms is beyond the scope of this course, but that doesn’t mean we can’t use them. Once again, a Python library will allow us to do so: SciPy.

We implemented our versions for problems with just one variable. But real-life problems, like training machine learning models, can involve hundreds or thousands of variables. The versions provided by SciPy handle multiple variables for us. Nevertheless, to use SciPy effectively, we need to know the best ways to operate with that many variables. So let’s dive into SciPy and its advanced algorithms.
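To get a first feel for what SciPy offers, here is a minimal sketch using `scipy.optimize.minimize` on a two-variable function. The quadratic function and its starting point are illustrative choices, not from the lesson; BFGS is one of the quasi-Newton methods SciPy provides.

```python
import numpy as np
from scipy.optimize import minimize

# A simple two-variable quadratic bowl; its unique minimum is at (1, -2).
def f(x):
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2

x0 = np.array([0.0, 0.0])                # starting point (arbitrary)
result = minimize(f, x0, method="BFGS")  # BFGS: a quasi-Newton method

print(result.x)  # approximately [ 1., -2.]
```

Note that the function takes a single array argument holding all the variables; this is the convention SciPy expects, and it is what makes the same code scale from two variables to thousands.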


Multidimensional world

When we talk about problems with hundreds of variables, it’s hard to even write them in mathematical form. How can we write a formula with hundreds of variables? In math, there’s a way to do it compactly, using vectors and matrices. When we write a variable in lowercase and boldface, we’re referring to vectors:

\mathbf{x}, \mathbf{y}, \mathbf{z}
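In code, these boldface vectors map naturally to NumPy arrays. A minimal sketch (the specific values are illustrative):

```python
import numpy as np

# A point in 3-dimensional space, written as a single vector
x = np.array([1.0, -2.0, 0.5])

# The same notation scales to hundreds of variables with no extra effort
y = np.zeros(500)          # a 500-dimensional vector

print(x.shape, y.shape)    # (3,) (500,)
```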

A vector represents a point in the multidimensional world. The same way we said (a, b, c), now we can say \mathbf{x} = ( ...