Markov process

Markov processes are an important class of stochastic processes. The Markov property means that the future evolution of a Markov process depends only on its present state and not on its past history: given the present state, the process does not remember the past. Hence, a Markov process is said to have the memoryless property[1].
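
Formally, for a discrete-time process with states X_0, X_1, X_2, …, the memoryless property can be stated as the standard conditional-probability identity

P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i).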

Basic types of Markov processes

Markov processes are classified according to the nature of the time parameter and the nature of the state space. With respect to state space, a Markov process can be either a discrete-state or a continuous-state Markov process; a discrete-state Markov process is called a Markov chain. Similarly, with respect to time, a Markov process can be either a discrete-time or a continuous-time Markov process. Thus, there are four basic types of Markov processes[2][3], the first of which is illustrated in the sketch after the list:

  1. Discrete-time Markov chain
  2. Continuous-time Markov chain
  3. Discrete-time Markov process
  4. Continuous-time Markov process
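
As an illustration of the first type, the following minimal Python sketch simulates a discrete-time Markov chain. The two states and the transition probabilities are illustrative assumptions, not values from the cited sources; the point is that the next state is drawn using only the current state, which is exactly the memoryless property described above.

```python
import random

# Hypothetical two-state transition matrix (illustrative probabilities,
# not taken from the cited sources).
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Draw the next state using only the current state (memoryless)."""
    states, probs = zip(*transitions[state].items())
    return random.choices(states, weights=probs)[0]

state = "sunny"
path = [state]
for _ in range(10):
    state = step(state)
    path.append(state)
print(" -> ".join(path))
```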

Structure of Markov processes

A jump process is a stochastic process that makes transitions between discrete states at times that can be fixed or random. In such a process, the system enters a state, spends an amount of time called the holding time (or sojourn time), and then jumps to another state, where it spends another holding time, and so on. If the jump times are t₀ = 0 < t₁ < t₂ < …, then the sample path of the process is constant between tₖ and tₖ₊₁. If the jump times are discrete, the jump process is called a jump chain.

There are two types of jump processes:

  1. Pure (or nonexplosive)
  2. Explosive

If the holding times of a continuous-time jump process are exponentially distributed, the process is called a Markov jump process. A Markov jump process is a continuous-time Markov chain if the holding time depends only on the current state. If the holding times of a discrete-time jump process are geometrically distributed, the process is called a Markov jump chain. However, not all discrete-time Markov chains are Markov jump chains: for many discrete-time Markov chains, transitions occur at equally spaced intervals, such as every day, every week, or every year[4].
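
As a sketch of how such a process evolves, the following Python fragment simulates a Markov jump process: the holding time in each state is exponentially distributed with a rate that depends only on the current state, and an embedded jump chain selects the next state. The state names, rates, and jump probabilities are illustrative assumptions, not values from the cited sources.

```python
import random

# Illustrative two-state continuous-time Markov chain (assumed values).
rate = {"up": 1.0, "down": 0.5}          # exponential holding-time rates
jump_to = {"up": {"down": 1.0}, "down": {"up": 1.0}}  # next-state probabilities

def simulate(state, horizon):
    """Simulate the sample path up to time `horizon`; return (time, state) pairs.

    The sample path is constant between jumps: the process sits in `state`
    for an exponentially distributed holding time, then jumps.
    """
    t, path = 0.0, [(0.0, state)]
    while True:
        t += random.expovariate(rate[state])  # holding time in current state
        if t >= horizon:
            break
        next_states, weights = zip(*jump_to[state].items())
        state = random.choices(next_states, weights=weights)[0]
        path.append((t, state))
    return path

for time, s in simulate("up", horizon=5.0):
    print(f"t = {time:.2f}: enter state {s!r}")
```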

Footnotes

  1. F. Grabski 2014, p.2
  2. O.C. Ibe 2013, p.50
  3. M. Kijima 2012, p.13
  4. K. Taira 2010, p.98

Author: Natalia Węgrzyn