What is the hill-climbing algorithm?

The hill-climbing algorithm is a local search algorithm used in mathematical optimization. An important property of local search algorithms is that the path to the goal does not matter; only the goal itself does. Because of this, we do not need to keep track of which path we took to reach a certain goal state; all that matters is that we reached it.

The basic principle behind the algorithm is to repeatedly move from the current state to a neighboring state of higher value, that is, to always move "uphill". This greedy working principle also makes the algorithm susceptible to getting stuck at local maxima.

We will now look at the pseudocode for this algorithm and some visual examples in order to gain clarity on its workings.

HillClimbing(problem){
   currentState = problem.startState
   goal = false
   while(!goal){
      neighbour = highest valued successor of currentState
      if neighbour.value <= currentState.value
         goal = true
      else
         currentState = neighbour
   }
 }

Explanation

We begin with a starting state, which we assign to the currentState variable. We then loop until we reach our goal state.

The current objective of our algorithm is to find the maximum-valued state or, in simpler terms, a ‘peak’.

To proceed, we first find the immediate neighbors of our current state. How exactly this is done is left up to the reader, since it depends on how the problem's data is organized; a small sketch for one possible organization is shown just below.
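
As a minimal illustration, suppose the states are simply positions in a one-dimensional list of values (an organization chosen here purely for demonstration; find_neighbours_1d is an illustrative name, not part of the implementation later in this lesson). A neighbor function might then look like this:

def find_neighbours_1d(index, values):
    # The neighbours of a position in a 1-D list are the positions
    # immediately to its left and right, where they exist.
    neighbours = []
    if index > 0:
        neighbours.append(index - 1)
    if index < len(values) - 1:
        neighbours.append(index + 1)
    return neighbours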

After we have found the neighbors, we take the highest-valued neighbor and compare it with currentState.

If the neighbor’s value is higher than that of our current state, we move to the neighboring state; otherwise, we end the loop (since, according to the algorithm, we have found our peak).


Python implementation

import numpy as np


def find_neighbours(state, landscape):
    # Collect the indices of every cell adjacent to `state` (including diagonals),
    # taking care not to step outside the bounds of the landscape grid.
    neighbours = []
    dim = landscape.shape
    # left neighbour
    if state[0] != 0:
        neighbours.append((state[0] - 1, state[1]))
    # right neighbour
    if state[0] != dim[0] - 1:
        neighbours.append((state[0] + 1, state[1]))
    # top neighbour
    if state[1] != 0:
        neighbours.append((state[0], state[1] - 1))
    # bottom neighbour
    if state[1] != dim[1] - 1:
        neighbours.append((state[0], state[1] + 1))
    # top left
    if state[0] != 0 and state[1] != 0:
        neighbours.append((state[0] - 1, state[1] - 1))
    # bottom left
    if state[0] != 0 and state[1] != dim[1] - 1:
        neighbours.append((state[0] - 1, state[1] + 1))
    # top right
    if state[0] != dim[0] - 1 and state[1] != 0:
        neighbours.append((state[0] + 1, state[1] - 1))
    # bottom right
    if state[0] != dim[0] - 1 and state[1] != dim[1] - 1:
        neighbours.append((state[0] + 1, state[1] + 1))
    return neighbours


# Current optimization objective: local/global maximum
def hill_climb(curr_state, landscape):
    neighbours = find_neighbours(curr_state, landscape)
    ascended = False
    next_state = curr_state
    # Find the neighbour with the greatest value
    for neighbour in neighbours:
        if landscape[neighbour[0]][neighbour[1]] > landscape[next_state[0]][next_state[1]]:
            next_state = neighbour
            ascended = True
    return ascended, next_state


def main():
    landscape = np.random.randint(1, high=50, size=(10, 10))
    print(landscape)
    start_state = (3, 6)  # matrix index coordinates
    current_state = start_state
    count = 1
    ascending = True
    while ascending:
        print("\nStep #", count)
        print("Current state coordinates: ", current_state)
        print("Current state value: ", landscape[current_state[0]][current_state[1]])
        count += 1
        ascending, current_state = hill_climb(current_state, landscape)
    print("\nStep #", count)
    print("Optimization objective reached.")
    print("Final state coordinates: ", current_state)
    print("Final state value: ", landscape[current_state[0]][current_state[1]])


if __name__ == "__main__":
    main()

Please try and fiddle with the given code! A few things to try out are:

  1. Change the coordinates of the start state.
  2. Modify the optimization objective of the algorithm (as a start, try to find a local or global minimum instead of a maximum).

Variants

The hill-climbing algorithm shown above is just one variant, called steepest-ascent hill climbing.

We may also change the objective function to find a local minimum, making the algorithm descend instead of climb; a sketch of this is shown below.
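
For instance, a descending counterpart of the hill_climb function from the implementation above might simply flip the comparison. The sketch below assumes find_neighbours is available as defined earlier; hill_descend is an illustrative name rather than part of the original code.

# Optimization objective: local/global minimum
def hill_descend(curr_state, landscape):
    neighbours = find_neighbours(curr_state, landscape)
    descended = False
    next_state = curr_state
    # Move towards the neighbour with the smallest value instead of the largest
    for neighbour in neighbours:
        if landscape[neighbour[0]][neighbour[1]] < landscape[next_state[0]][next_state[1]]:
            next_state = neighbour
            descended = True
    return descended, next_state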

Other variants that prioritize exploring the state space also exist. Random restart hill climbing runs the hill-climbing procedure again and again from random initial states and keeps the result that shows the largest improvement, while stochastic hill climbing picks a neighbor at random and decides whether or not to move to it based on the amount of improvement it shows. Sketches of both are shown below.
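
The sketches below show one possible way to express these two variants on top of the hill_climb and find_neighbours functions defined earlier. The helper names (run_hill_climb, random_restart_hill_climb, stochastic_hill_climb) and the acceptance rule used for the stochastic version (a uniform choice among improving neighbors, which simplifies the improvement-weighted rule described above) are illustrative assumptions, not part of the original implementation.

import random

def run_hill_climb(start_state, landscape):
    # Repeatedly apply hill_climb until no neighbour improves on the current state.
    current_state = start_state
    ascending = True
    while ascending:
        ascending, current_state = hill_climb(current_state, landscape)
    return current_state

def random_restart_hill_climb(landscape, restarts=10):
    # Run hill climbing from several random start states and keep the best peak found.
    dim = landscape.shape
    best_state = None
    for _ in range(restarts):
        start = (random.randrange(dim[0]), random.randrange(dim[1]))
        peak = run_hill_climb(start, landscape)
        if best_state is None or landscape[peak[0]][peak[1]] > landscape[best_state[0]][best_state[1]]:
            best_state = peak
    return best_state

def stochastic_hill_climb(curr_state, landscape):
    # Pick one uphill neighbour at random instead of always taking the steepest one.
    uphill = [n for n in find_neighbours(curr_state, landscape)
              if landscape[n[0]][n[1]] > landscape[curr_state[0]][curr_state[1]]]
    if not uphill:
        return False, curr_state  # no improving neighbour: we are at a peak
    return True, random.choice(uphill)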