Markov process
Markov processes are an important class of stochastic processes. The Markov property means that the future evolution of the process depends only on its present state and not on its past history: given the present state, the process does not remember the past. For this reason a Markov process is called a memoryless process[1].
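As a rough illustration of this memoryless property, the following minimal Python sketch simulates a hypothetical two-state weather chain (the states, transition probabilities, and numbers are illustrative assumptions, not taken from the references) and checks empirically that the distribution of the next state does not depend on the previous one:

```python
import random

# Hypothetical two-state weather chain: 0 = sunny, 1 = rainy.
P = {0: [0.9, 0.1],   # from sunny: 90% stay sunny, 10% turn rainy
     1: [0.5, 0.5]}   # from rainy: 50% turn sunny, 50% stay rainy

def step(state):
    # The next state is drawn from P[state]: history is never consulted.
    return random.choices([0, 1], weights=P[state])[0]

random.seed(1)
path = [0]
for _ in range(100_000):
    path.append(step(path[-1]))

# If the chain is memoryless, P(next = rainy | current = sunny)
# is ~0.1 no matter what the previous state was.
for prev in (0, 1):
    nxt = [path[i + 1] for i in range(1, len(path) - 1)
           if path[i] == 0 and path[i - 1] == prev]
    print(prev, round(sum(nxt) / len(nxt), 3))  # both ~0.1
```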
Basic types of Markov processes
Markov processes are classified according to the nature of the time parameter and the nature of the state space. With respect to the state space, a Markov process can be either a discrete-state or a continuous-state Markov process; a discrete-state Markov process is called a Markov chain. Similarly, with respect to time, a Markov process can be either a discrete-time or a continuous-time Markov process. Thus, there are four basic types of Markov processes (a sketch of the third type appears after this list)[2][3]:
- Discrete-time Markov chain
- Continuous-time Markov chain
- Discrete-time Markov process
- Continuous-time Markov process
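In contrast to the discrete-state chain sketched earlier, the following hypothetical sketch illustrates the third type in the list: a discrete-time Markov process whose state space is continuous. An AR(1)-style recursion is used here purely as an illustrative assumption (the coefficient 0.8 and the Gaussian noise are made up):

```python
import random

# Discrete-time, continuous-state Markov process:
# X_{n+1} = 0.8 * X_n + Gaussian noise.
# The state space is the real line, but time advances in unit steps.
random.seed(2)

def next_state(x):
    # Again only the current value x matters, not the whole trajectory.
    return 0.8 * x + random.gauss(0.0, 1.0)

x = 0.0
for n in range(5):
    x = next_state(x)
    print(f"X_{n + 1} = {x:+.3f}")
```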
Structure of Markov processes
A jump process is a stochastic process that makes transitions between discrete states at times that may be fixed or random. In such a process, the system enters a state, spends an amount of time called the holding time (or sojourn time), and then jumps to another state, where it spends another holding time, and so on. If the jump times are 0 = t_0 < t_1 < t_2 < ..., then the sample path of the process is constant between t_k and t_{k+1} (see the sketch below). If the jump times are discrete, the jump process is called a jump chain.
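The piecewise-constant shape of such a sample path can be made concrete with a short sketch. The jump times and states below are made-up example data; X(t) simply returns the state entered at the most recent jump time:

```python
import bisect

# Assumed example data: the process enters state s_k at jump time t_k
# and holds it until t_{k+1}.
jump_times = [0.0, 1.3, 2.1, 4.7]    # t_0 < t_1 < t_2 < t_3
states     = ["a", "b", "a", "c"]    # state entered at each jump time

def X(t):
    # The sample path is constant between consecutive jump times:
    # X(t) = s_k for t_k <= t < t_{k+1}.
    k = bisect.bisect_right(jump_times, t) - 1
    return states[k]

print(X(0.5), X(1.3), X(3.0))  # a b a
```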
There are two types of jump processes:
- Pure (or nonexplosive)
- Explosive
If the holding times of a continuous-time jump process are exponentially distributed, the process is called a Markov jump process. A Markov jump process is a continuous-time Markov chain if the holding time depends only on the current state. If the holding times of a discrete-time jump process are geometrically distributed, the process is called a Markov jump chain. However, not all discrete-time Markov chains are Markov jump chains: in many discrete-time Markov chains, transitions occur at equally spaced intervals, such as every day, every week, or every year[4].
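A minimal sketch of a Markov jump process follows, assuming a made-up two-state system in which the holding time in each state is exponentially distributed with a state-dependent rate (the state names and rates are illustrative assumptions, not from the references):

```python
import random

RATE = {"up": 1.0, "down": 0.5}      # exponential holding-time rates
NEXT = {"up": "down", "down": "up"}  # two states that simply alternate

random.seed(3)
t, state = 0.0, "up"
for _ in range(5):
    hold = random.expovariate(RATE[state])  # exponential sojourn time
    print(f"in {state!r} from t={t:.2f} for {hold:.2f}")
    t += hold
    state = NEXT[state]
```

Because both the holding-time distribution and the next state depend only on the current state, this sketch is also a continuous-time Markov chain; replacing the exponential draw with a geometric one would give the discrete-time analogue, a Markov jump chain.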
References
- Freidlin M.I., (2012), Markov Processes and Differential Equations, Birkhäuser, Basel.
- Grabski F., (2014), Semi-Markov Processes: Applications in System Reliability and Maintenance, Elsevier, Amsterdam.
- Ibe O.C., (2013), Markov Processes for Stochastic Modeling, Elsevier, Boston.
- Kijima M., (2012), Markov Processes for Stochastic Modeling, Springer, London.
- Taira K., (2010), Boundary Value Problems and Markov Processes, Springer Science & Business Media, New York.
Author: Natalia Węgrzyn