Calculating Stationary Distribution in Python

Nina Vergara
3 min read · Aug 30, 2021


Calculating the stationary distribution of a system gives us an idea of the system's behavior as a whole, irrespective of time or initial starting conditions. It can be thought of as loosely analogous to a PCA analysis in that it lets us pick out the most important states, which helps us decrease the complexity of our system for subsequent analyses.

Each probability in a stationary distribution indicates how much the corresponding state contributes to the system: roughly, the fraction of time the system spends in that state over the long run. A large stationary probability indicates that the system regularly transitions to and from that state.

Let's say we have a system that contains three states: A, B, and C. The probability of transitioning from one state to another is estimated from the transitions actually observed during a given time period. So, within a time span t:t+n, the probability of transitioning from state1 to state2 is

# of transitions from state1 to state2 / # of transitions from state1

For example, from t=0 to t=15, if 10 transitions occurred from A and in 5 of those cases the system transitioned to B, then the transition probability from A to B is 5/10, or 0.5.
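As a rough sketch of how these counts become probabilities, here is one way to estimate a transition matrix from an observed sequence of states (the sequence and variable names below are invented for illustration):

import numpy as np

# Hypothetical sequence of observed states over one time span
observations = ['A', 'C', 'A', 'B', 'C', 'A', 'B', 'B', 'C', 'A', 'C']
states = ['A', 'B', 'C']
state_idx = {s: i for i, s in enumerate(states)}

# counts[i, j] = number of observed transitions from states[i] to states[j]
counts = np.zeros((len(states), len(states)))
for current, nxt in zip(observations[:-1], observations[1:]):
    counts[state_idx[current], state_idx[nxt]] += 1

# Divide each row by the total number of transitions out of that state
estimated_matrix = counts / counts.sum(axis=1, keepdims=True)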

Suppose our example system behaves as follows: if the system is in state A, then at the next time period there's a 0.5 probability of the system transitioning to state C and a 0.5 probability of it transitioning to state B. If the system is in state C, it will always transition to state A in the next time period. For state B, there's a 0.3 probability that the system will stay in state B for the next cycle and a 0.7 probability that it will transition to state C.

This system can be represented by a first-order Markov transition matrix. Each row and each column is assigned a state, and each cell holds the probability of transitioning from the row's state to the column's state. This is considered a row-stochastic matrix because each row sums to one.
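Concretely, with the rows and columns ordered A, B, C, the probabilities described above fill in the matrix like this:

        to A   to B   to C
A    [  0.0    0.5    0.5  ]
B    [  0.0    0.3    0.7  ]
C    [  1.0    0.0    0.0  ]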

There are a few ways to calculate a stationary distribution, but a common method is to find the eigenvector whose eigenvalue is close to one. To produce a probability vector, every element of the selected eigenvector is divided by the sum of the eigenvector. I'll be using Logan Shelley's answer to this Stack Overflow post to implement these steps in Python.

import numpy as np

transition_matrix = np.array([[0, 0.5, 0.5],
                              [0, 0.3, 0.7],
                              [1, 0, 0]])
'''
Since the sum of each row is 1, our matrix is row stochastic.
We'll transpose the matrix so that np.linalg.eig gives us the
left eigenvectors of the original row-stochastic matrix.
'''
transition_matrix_transp = transition_matrix.T
eigenvals, eigenvects = np.linalg.eig(transition_matrix_transp)
'''
Find the indexes of the eigenvalues that are close to one.
Use them to select the corresponding eigenvectors, then keep the first one.
'''
close_to_1_idx = np.isclose(eigenvals, 1)
target_eigenvect = eigenvects[:, close_to_1_idx]
target_eigenvect = target_eigenvect[:, 0]
# Turn the eigenvector elements into probabilities.
# .real drops the (numerically zero) imaginary parts returned by np.linalg.eig.
stationary_distrib = (target_eigenvect / sum(target_eigenvect)).real
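As a quick sanity check (assuming the code above has run), a stationary distribution should sum to one and should be left unchanged when multiplied by the transition matrix:

print(stationary_distrib.sum())  # should be (numerically) 1.0

# By definition, pi @ P equals pi for a stationary distribution pi
print(np.allclose(stationary_distrib @ transition_matrix, stationary_distrib))  # True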
