
Option Pricing with Stochastic Models


Theoretical models that heavily rely on price data and follow a stochastic process are popular among quantitative hedge funds.

One example of such a model is one relating the price of a CDS (Credit Default Swap) to the price of a put on a stock. The best-known example, however, is the pricing of a European call option, solved by Fischer Black and Myron Scholes, giving birth to the famous Black-Scholes equation [1]. Following the original paper, many other models have been suggested that use an underlying distribution similar to the one implied by the Black-Scholes paper.

We will explore the pricing abilities of such a model as well as the difficulty of fitting it. We first introduce the financial terminology, the relevance of this problem, and the results from Black and Scholes' original paper, and then expand on these by presenting the Heston model [2]. We then showcase two approaches that can be used to estimate the parameters of this process.

Financial Information

A European call option with strike (or exercise) price K and exercise date (or maturity) T on an asset is the right, but not the obligation, to buy the asset for K at time T. The execution of this right is termed the exercise of the option [3]. Since we would only opt to pay K for an asset worth S_T if S_T ≥ K, the payout g(S_T) of a call option at time T is:

$$ g(S_T) = \begin{cases} S_T - K & \text{if } S_T \ge K \\ 0 & \text{if } S_T < K \end{cases} \;=\; \max\{S_T - K, 0\} \;=\; (S_T - K)^+ $$

A European put option is the right to sell the asset for K at time T. The payout of a put option at time T is:

$$ \begin{cases} K - S_T & \text{if } S_T \le K \\ 0 & \text{if } S_T > K \end{cases} \;=\; \max\{K - S_T, 0\} \;=\; (K - S_T)^+ $$
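These two payoff formulas translate directly into code; a minimal sketch (the function names are our own):

```python
def call_payoff(s_t: float, k: float) -> float:
    """Payoff of a European call at maturity: (S_T - K)^+."""
    return max(s_t - k, 0.0)

def put_payoff(s_t: float, k: float) -> float:
    """Payoff of a European put at maturity: (K - S_T)^+."""
    return max(k - s_t, 0.0)

# A call with strike 100 pays off only when the asset finishes above 100.
print(call_payoff(110, 100))  # 10.0
print(put_payoff(110, 100))   # 0.0
```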

A European option allows exercise only at T, while an American option allows exercise at any time t ≤ T. At time t, a call option with strike K at maturity T is said to be:

1. at-the-money (ATM) if St = K;

2. in-the-money (ITM) if St > K, that is, if the call option would have positive value if the stock price remains unchanged until maturity;

3. out-of-the-money (OTM) if S_t < K, that is, if the option would be worthless if the stock price remains unchanged until maturity [3].

The utility of options comes from the fact that they can provide investors with risk-reduction strategies. At the same time, they are used by speculators as a lower-cost alternative to betting on the appreciation or depreciation of some underlying asset, while simultaneously carrying limited downside risk [4].

As a simplistic example of options providing utility, consider a company producing product A whose price depends on the price of product B. If the company wants to limit the risk of the price of product B rising too high, it can buy calls on product B and hedge that risk. In a similar fashion, synthetic futures on the price of chickens allowed McDonald's to launch the Chicken McNugget [5].

In order to relate the future payout from the option to its present value we use the fundamental theorem of asset pricing. It states: "There are no arbitrage portfolios ⟺ there exists a risk-neutral probability distribution P∗ such that the ratios of asset prices to the money market account are martingales under P∗" [3].

In this definition, an arbitrage portfolio is a selection of investments that gives a positive return with probability greater than zero and never loses money. In this sense, the fundamental tenet of quantitative finance is pricing instruments such that no portfolio yields positive value with no assumed risk. In other words, the highest risk-free value we can obtain in the markets is by investing in the money market account.

The money market account is an investment in a risk-free financial instrument, typically taken to be government bonds. The risk-neutral probability distribution P∗ is such that the expected value of our portfolio under P∗ is equivalent to having invested in government bonds.

If we let D(t, T) be the price of a financial derivative at time t with expiration at time T, then by the Fundamental theorem its value at time t is:

D(t, T) = Z(t, T)E∗(D(T, T) | St),

where E∗ is the risk-neutral expectation with respect to the money market account and Z(t, T) is a discounting term based on current interest rates (r), i.e. typically

$$ Z(t, T) = \frac{1}{(1+r)^{T-t}} \qquad \text{or} \qquad Z(t, T) = e^{-r(T-t)} $$

if we are assuming continuous discounting.

Using this formula, pricing a derivative becomes conceptually easy, as for a European Call with strike price K we have:

$$ C_K(t, T) = Z(t, T)\, E^*\!\left((S_T - K)^+ \mid S_t\right) $$

That is, all we need is to evaluate the conditional expectation under the risk-neutral probability measure. The problem is now transferred to finding the distribution of ST under the risk-neutral probability measure.


The distribution often used to model stock prices under the risk-neutral probability measure is the geometric Brownian motion. We first define Brownian Motion. A continuous-time stochastic process (Bt)t≥0 is a standard Brownian motion if it satisfies the following properties:

1. B0 = 0.

2. For t > 0, B_t has a normal distribution with mean 0 and variance t.

3. For s, t > 0, B_{t+s} − B_s has the same distribution as B_t.

4. If 0 ≤ q < r ≤ s < t, then Bt − Bs and Br − Bq are independent random variables.

5. The function t ↦ B_t is continuous, with probability 1 [6].

Even though Brownian motion as a mathematical object is impossible to realize in practice, the theoretical implications we obtain from its use are valuable nonetheless. In addition, it is fairly easy to simulate a Brownian path (up to some accuracy).
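As a quick illustration of how easy such a simulation is, a sketch that builds Brownian paths from independent normal increments and empirically checks property 2 (grid sizes and sample counts are our choice):

```python
import numpy as np

def brownian_paths(n_paths: int, n_steps: int, t_final: float, seed: int = 0) -> np.ndarray:
    """Simulate standard Brownian motion on a grid by summing independent
    N(0, dt) increments (property 4: independent increments)."""
    rng = np.random.default_rng(seed)
    dt = t_final / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)
    return np.concatenate([np.zeros((n_paths, 1)), paths], axis=1)  # B_0 = 0

# Property 2: B_t ~ N(0, t), so the sample variance at t = 1 should be close to 1.
b = brownian_paths(100_000, 100, 1.0)
print(b[:, -1].mean(), b[:, -1].var())
```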

For real µ and σ > 0, the process defined by X_t = µt + σB_t, for t ≥ 0, is called Brownian motion with drift parameter µ and variance parameter σ². With this we define G_t = G_0 e^{X_t}, where G_0 > 0, to be a geometric Brownian motion [6].

Geometric Brownian motion is commonly used to model stock prices. It has the advantage of growing exponentially in expectation, as is the case for stock prices. It is well defined in the sense that, just like a stock price, it cannot take negative values. Lastly, ratios of a geometric Brownian motion over disjoint time intervals are independent of each other. This is desirable since, in an efficient market, percentage changes in price from day to day should be independent of each other.

With the stock price distributed in such a way, we have for some parameters a and b: S_t = S_0 e^{a t + b B_t}. In addition, applying the fundamental theorem with the derivative being the stock itself gives:

$$ S_t = Z(t, T)\, E^*(S_T \mid S_t) = e^{-r(T-t)} E^*(S_T \mid S_t) $$

$$ \implies E^*(S_T \mid S_0) = S_0 e^{rT} $$

This leads to the stock price being modeled as $S_t = S_0 e^{(r - \frac{1}{2}\sigma^2)t + \sigma B_t}$. Here r is the same risk-free continuously compounding interest rate as before, while σ denotes the volatility of the process.

With this we obtain that $S_T \mid S_t = S_t e^{(r - \frac{1}{2}\sigma^2)(T-t) + \sigma B_{T-t}}$, a log-normal distribution:

$$ \log S_T \mid S_t \sim N\!\left(\log S_t + \left(r - \tfrac{1}{2}\sigma^2\right)(T - t),\; \sigma^2 (T - t)\right) $$

Substituting $Y_T = \log S_T$ and $\nu = \log S_t + (r - \tfrac{1}{2}\sigma^2)(T - t)$, we obtain the call price by evaluating the corresponding expectation:

$$ C_K(t, T) = Z(t, T)\, E^*\!\left((e^{Y_T} - K)^+ \mid S_t\right) = Z(t, T) \int_{\log K}^{\infty} (e^y - K)\, \frac{\exp\!\left(-\frac{(y - \nu)^2}{2\sigma^2 (T - t)}\right)}{\sqrt{2\pi}\,\sigma\sqrt{T - t}}\, dy $$

$$ = S_t \Phi(d_1) - K Z(t, T) \Phi(d_2) $$

Here we have:

$$ d_1 = \frac{\log \frac{S_t}{K} + \left(r + \tfrac{1}{2}\sigma^2\right)(T - t)}{\sigma\sqrt{T - t}}, \qquad d_2 = d_1 - \sigma\sqrt{T - t} \quad [3] $$
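The closed-form price can be implemented in a few lines; a sketch using SciPy's normal CDF (the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def bs_call_price(s: float, k: float, r: float, sigma: float, tau: float) -> float:
    """Black-Scholes price of a European call; tau = T - t is time to maturity."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    # C = S * Phi(d1) - K * e^{-r tau} * Phi(d2)
    return s * norm.cdf(d1) - k * np.exp(-r * tau) * norm.cdf(d2)

# An at-the-money one-year call with r = 5% and sigma = 20%:
print(round(bs_call_price(100, 100, 0.05, 0.2, 1.0), 4))  # 10.4506
```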

A. Properties of Black-Scholes formula:

The Black-Scholes formula has many convenient properties. Firstly, being a closed-form solution, it is easy to compute once the required parameters are known. Secondly, it is consistent in the limiting cases:

1. As St → ∞, Φ(d1) → 1 and Φ(d2) → 1, such that:

$$ C_K(t, T) \to S_t - K e^{-r(T-t)} $$

This is the value of a forward contract on the stock. This result means that as the stock price increases, it becomes increasingly likely we will exercise the call and respectively obtain the same value as if we purchased a forward contract.

2. As σ → 0,

$$ C_K(t, T) \to \begin{cases} S_t - K e^{-r(T-t)} & \text{if } S_t > K e^{-r(T-t)} \\ 0 & \text{if } S_t \le K e^{-r(T-t)} \end{cases} $$

This is consistent since, as the volatility approaches zero, the movement of the price becomes deterministic exponential growth. Thus if the price will grow enough that we exercise the option at T, the value of the option is the same as that of a forward contract. Otherwise we will not exercise the option, so its value is 0.

3. As σ → ∞, Φ(d1) → 1 and Φ(d2) → 0, such that CK(t, T) → St. This means that when the movement of the price is very volatile, the value of the option becomes equal to the value of actually owning the stock.

B. Simulating the geometric Brownian Motion:

Another advantage of the fact that Black-Scholes implicitly assumes the stock price follows a geometric Brownian motion is that we can easily simulate it. The stock price is:

$$ S_t = S_0 e^{(r - \frac{1}{2}\sigma^2)t + \sigma B_t} $$

We can approach the problem of simulating it in two different ways: either simulate B_t and then exponentiate and scale it, or use the stochastic differential equation (SDE) for the geometric Brownian motion to compute it:

$$ dS_t = r S_t\, dt + \sigma S_t\, dB_t $$

We choose the SDE route, as it does not require exponentiation and is therefore faster.
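As a sketch of the two routes (parameter values are ours), exact exponentiation and an Euler-Maruyama discretization of the SDE should produce terminal prices with the same distribution:

```python
import numpy as np

def gbm_exact(s0, r, sigma, t, n_paths, rng):
    """Sample S_t exactly: S_t = S_0 exp((r - sigma^2/2) t + sigma B_t)."""
    b_t = rng.normal(0.0, np.sqrt(t), n_paths)
    return s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * b_t)

def gbm_euler(s0, r, sigma, t, n_steps, n_paths, rng):
    """Integrate dS = r S dt + sigma S dB with the Euler-Maruyama scheme."""
    dt = t / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        s += r * s * dt + sigma * s * dB
    return s

rng = np.random.default_rng(1)
exact = gbm_exact(100, 0.05, 0.2, 1.0, 100_000, rng)
euler = gbm_euler(100, 0.05, 0.2, 1.0, 100, 100_000, rng)
# Both sample means should be near the risk-neutral value S_0 e^{rT} ≈ 105.13.
print(exact.mean(), euler.mean())
```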

The ease of simulating the price like this suggests that another way of obtaining the price of the Call is just simulating multiple price processes, applying the call payoff function and using the average of all our values as an estimate for the expectation.

Of course, since we already have the Black-Scholes formula for the price, even simulating very simplistic distributions is more computationally intensive than just applying the formula. Nonetheless, for more advanced risk-neutral distributions of the stock price, when a closed-form solution cannot be found, such simulations are essential to obtaining a price.
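A minimal Monte Carlo pricer along these lines (here sampling the terminal price exactly; parameter values and path counts are ours):

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, tau, n_paths=200_000, seed=42):
    """Monte Carlo estimate of a European call price: simulate terminal prices
    under the risk-neutral measure, apply the payoff, discount the mean."""
    rng = np.random.default_rng(seed)
    b_tau = rng.normal(0.0, np.sqrt(tau), n_paths)
    s_tau = s0 * np.exp((r - 0.5 * sigma**2) * tau + sigma * b_tau)
    payoffs = np.maximum(s_tau - k, 0.0)
    return np.exp(-r * tau) * payoffs.mean()

# Should land near the Black-Scholes value of about 10.45 for these inputs.
p = mc_call_price(100, 100, 0.05, 0.2, 1.0)
print(p)
```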

C. Implied volatility and the volatility smile

The Black-Scholes formula is a one-to-one function between volatility and call price. In particular, given the observed call option price C_K(t, T), one can determine the volatility σ_K that would give this price when entered into the Black-Scholes formula. This is known as the implied volatility [3].
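A sketch of computing implied volatility by inverting the formula with a root finder (the search bracket [1e-6, 5] is our assumption):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(s, k, r, sigma, tau):
    """Black-Scholes call price."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return s * norm.cdf(d1) - k * np.exp(-r * tau) * norm.cdf(d2)

def implied_vol(price, s, k, r, tau):
    """Invert Black-Scholes in sigma with a root finder; the call price is
    strictly increasing in sigma, so the root is unique."""
    return brentq(lambda sig: bs_call(s, k, r, sig, tau) - price, 1e-6, 5.0)

# Feeding in a price produced with sigma = 0.2 recovers that sigma.
price = bs_call(100, 100, 0.05, 0.2, 1.0)
iv = implied_vol(price, 100, 100, 0.05, 1.0)
print(round(iv, 6))  # 0.2
```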

From observing real option prices on the same stock with the same maturity but different strike prices, it was discovered that the implied volatility differs across strike prices. Of course, the Black-Scholes formula implies that regardless of the strike price or maturity, the volatility of the stock should be the same. This means the implicit assumption that the risk-neutral distribution of the price is a geometric Brownian motion is incorrect. The tendency of the graph of implied volatility as a function of striking price for otherwise identical options to depart from a horizontal line has become popularly known among market professionals as the "smile" [7].

An analysis of S&P 500 index option prices has shown that before 1987 the implied volatilities were almost the same. The perception of the market changed, however, after the crash of October 1987. The price of OTM puts and ITM calls has increased since that crash relative to ATM options. This phenomenon of options indicating such risk aversion is aptly described as "crash-o-phobia" by Rubinstein [7].

D. Limitations of the Black-Scholes formula

The fact that real implied volatilities are not constant is just one of several ways in which the assumptions of the Black-Scholes formula are shown to be incorrect. Additional limitations are the following:

1. The formula assumes the risk-free rate of return is constant over the lifetime of the option, even though the rate of return of any asset clearly changes over time.

2. Real trading incurs a liquidity risk and brokerage charges that are not accounted for in the model.

3. The lognormal assumption for the price is not consistent with large tail risks experienced in the real world.

4. The original formula does not take into account dividend payments that change the value of the underlying over time [9].

There are several models that strive to correct some of these inconsistencies. Among these we have models assuming a stochastic volatility process (Heston [2], Gatheral [10]), jump-diffusion processes (Bates [11]), stochastic rates (Hull and White [12]) and many more.

We will turn our attention to the Heston model, which extends the original Black-Scholes model by replacing the constant volatility σ with a stochastic, mean-reverting volatility. The reason we choose to explore this model is that it has a closed-form solution (albeit one still requiring numerical methods to evaluate an integral). In addition, it is relatively easy to simulate the price process under this model's assumptions.


Recall that the Black-Scholes model implicitly uses the assumption that prices follow a geometric Brownian motion with the SDE:

$$ dS_t = r S_t\, dt + \sigma S_t\, dB_t^S $$

Here σ is assumed to be constant for the entire duration that we observe the price. Of course, in the real world both the rate r and the volatility σ of the price process are variable. With this in mind Heston suggested using a stochastic process to model the volatility:

$$ d\nu_t = \kappa(\theta - \nu_t)\, dt + \xi \sqrt{\nu_t}\, dB_t^\nu $$

This way, the price process is:

$$ dS_t = r S_t\, dt + \sqrt{\nu_t}\, S_t\, dB_t^S, \qquad \text{where } \operatorname{cor}(dB_t^\nu, dB_t^S) = \rho $$

The instantaneous variance ν_t follows a Cox-Ingersoll-Ross (CIR) process, which was first used by its authors to model interest rates. This process has the appealing feature of mean reversion: even though the instantaneous variance may at times be high or low, it always tends back toward an equilibrium value.

Conveniently, the parameters of the process have intuitive meanings: θ is the long-term variance; κ is the rate at which ν_t reverts to θ; ξ is the volatility of volatility and determines the variance of ν_t; and ρ is the correlation between the price Brownian motion and the volatility Brownian motion. An important detail of our volatility process is that if the parameters obey 2κθ > ξ², then the ν_t process is strictly positive. This is known as the Feller condition [13] and is especially important when simulating such processes or fitting such models.
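A small sketch of simulating the variance process with a full-truncation Euler scheme and checking the Feller condition (the scheme choice and step sizes are ours; the parameter values match those used later in the text):

```python
import numpy as np

def simulate_cir(v0, kappa, theta, xi, dt, n_steps, seed=0):
    """Full-truncation Euler scheme for dv = kappa(theta - v)dt + xi sqrt(v) dB;
    negative excursions are floored at zero inside the square root."""
    rng = np.random.default_rng(seed)
    v = np.empty(n_steps + 1)
    v[0] = v0
    for i in range(n_steps):
        v_pos = max(v[i], 0.0)
        v[i + 1] = v[i] + kappa * (theta - v_pos) * dt \
                   + xi * np.sqrt(v_pos * dt) * rng.normal()
    return v

kappa, theta, xi = 3.0, 0.25, 0.075
print("Feller condition holds:", 2 * kappa * theta > xi**2)  # 1.5 > 0.005625
v = simulate_cir(0.25, kappa, theta, xi, dt=1/252, n_steps=2520)
print(v.min(), v.mean())  # stays positive and hovers around theta
```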

A. Effects of the Stochastic Volatility

Since the only difference between the Heston model and the Black-Scholes model is the use of a volatility process, it is interesting to examine the consequences of this addition. Some effects are the same, like the fact that higher volatility raises the price of all options [2], while the extra parameters of the Heston model allow us to better price options under certain circumstances.

When comparing these two models we need to be careful to distinguish between the "true" volatility process described before and the risk-neutralized volatility process. The distinction arises because, when pricing options, one evaluates the expectation of the price under a theoretical risk-neutral probability measure (under which the discounted price of every instrument is a martingale), which may not coincide with the actual probability measure of stock movements [14]. In this sense our risk-neutral volatility process is:

$$ d\nu_t = \kappa^*(\theta^* - \nu_t)\, dt + \xi \sqrt{\nu_t}\, dB_t^\nu $$

where

$$ \kappa^* = \kappa + \lambda \qquad \text{and} \qquad \theta^* = \kappa\theta/(\kappa + \lambda) $$

The model is then analyzed under the risk-neutralized volatility process instead of the "true" process, because this process exclusively determines the prices [2]. For the purpose of option pricing in real conditions, λ is the risk-premium parameter and can be estimated using average returns on option positions that are hedged against the risk of changes in the spot price [2].

The correlation parameter ρ positively affects the skewness of spot returns. Intuitively, a positive correlation results in high variance when the spot asset rises, and this "spreads" the right tail of the probability density of spot returns. Conversely, the left tail is associated with low variance and is not spread out.

This increases the prices of out-of-the-money options and decreases the prices of in-the-money options relative to the Black-Scholes model with comparable volatility [2]. The parameter ξ, the volatility of volatility, affects the kurtosis of the distribution: as it increases, so does the kurtosis of spot returns. When ξ = 0 the volatility is deterministic and we are back in the Black-Scholes regime [2].
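The effect of ρ on skewness can be checked by simulation; a sketch comparing the skewness of one-year log returns under ρ = −0.8 versus ρ = +0.8 (we deliberately use a larger ξ than elsewhere in the text so the effect is clearly visible):

```python
import numpy as np
from scipy.stats import skew

def heston_log_returns(rho, n_paths=50_000, seed=0):
    """Euler simulation of one-year Heston log returns with correlated
    Brownian increments (parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    r, kappa, theta, xi = 0.05, 3.0, 0.25, 0.5
    dt, n_steps = 1 / 252, 252
    x = np.zeros(n_paths)           # log(S_t / S_0)
    v = np.full(n_paths, theta)     # start the variance at its long-term mean
    for _ in range(n_steps):
        z1 = rng.normal(size=n_paths)                      # drives the variance
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n_paths)  # drives the price
        v_pos = np.maximum(v, 0.0)
        x += (r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z2
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z1
    return x

# Negative correlation spreads the left tail, producing negative skew.
s_neg = skew(heston_log_returns(-0.8))
s_pos = skew(heston_log_returns(0.8))
print(s_neg, s_pos)
```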


A. Kalman filters approach

Now that we know how to model the price, we need a way to obtain the parameters of our model using only price data. One approach uses Kalman filters. Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables by estimating a joint probability distribution over the variables for each timeframe [15].

The algorithm is based on linear systems discretized in time. The underlying model for the Kalman filter is a Markov chain on a continuous state space in which subsequent states are linear transformations of previous visible and hidden states. In our case the visible state is the price of an asset, while the hidden state is the volatility process [15]. The model is specified by the following quantities:

• Fk, the state-transition model

• Hk, the observation model

• Qk, the covariance of the process noise

• Rk, the covariance of the observation noise

• Bk, the control-input model

The model assumes that the true state transitions as follows:

$$ x_k = F_k x_{k-1} + B_k u_k + w_k $$

Here w_k is assumed to be drawn from a multivariate normal distribution with mean 0 and covariance matrix Q_k. In addition, an observation of the true state x_k is made according to:

$$ z_k = H_k x_k + v_k \quad [15] $$
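A generic predict/update cycle of the linear Kalman filter with these quantities can be sketched as follows (the toy dimensions and noise levels are ours):

```python
import numpy as np

def kalman_step(x_est, P, z, F, H, Q, R, B=None, u=None):
    """One predict/update cycle of the linear Kalman filter
    (notation matches the quantities listed above)."""
    # Predict: propagate the state and its error covariance.
    x_pred = F @ x_est + (B @ u if B is not None else 0.0)
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the noisy observation z.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: estimate a constant scalar state from noisy measurements.
rng = np.random.default_rng(0)
F = np.eye(1); H = np.eye(1)
Q = np.array([[1e-6]]); R = np.array([[0.5]])
x, P = np.zeros(1), np.eye(1)
for _ in range(200):
    z = np.array([3.0]) + rng.normal(0, np.sqrt(0.5), 1)
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x)  # close to the true value 3.0
```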

To obtain the parameters of the Heston model we implement the method of Wang, He, Zhao, and Zuo [16]. The authors estimate the state ν(t) using the CEKF (Consistent Extended Kalman Filter) algorithm, and then use the PMLE method to estimate the parameters of the volatility process [16].

Using Itô's formula, the price equation is transformed into an equation for the logarithm of the price:

$$ d\ln(S_t) = \frac{1}{S_t}\, dS_t - \frac{1}{2}\frac{1}{S_t^2}\, dS_t\, dS_t = \left(r - \tfrac{1}{2}\nu_t\right) dt + \sqrt{\nu_t}\, dW_t^{(1)} $$

$$ = \left(r - \tfrac{1}{2}\nu_t\right) dt + \sqrt{(1 - \rho^2)\nu_t}\, dB_t^S + \rho\sqrt{\nu_t}\, dB_t^\nu $$

$$ d\nu_t = \kappa(\theta - \nu_t)\, dt + \xi\sqrt{\nu_t}\, dB_t^\nu $$

Here we have the following factorization:

$$ W_t^{(1)} = \sqrt{1 - \rho^2}\, B_t^S + \rho B_t^\nu, \qquad W_t^{(2)} = B_t^\nu $$

In the pseudo-maximum likelihood estimation (PMLE) method the volatility process equation is approximated as follows:

$$ d\nu_t = \kappa(\theta - \nu_t)\, dt + \xi\sqrt{\nu_{t-\Delta t}}\, dW_t^{(2)} $$

Then the approximate solution is obtained [16]:

$$ \nu_t = e^{-\kappa\Delta t}\nu_{t-\Delta t} + \theta\left(1 - e^{-\kappa\Delta t}\right) + \epsilon_t $$

with $E(\epsilon_t) = 0$, $E(\epsilon_t \epsilon_s) = 0$ for $t \ne s$, and $E(\epsilon_t^2) = \tfrac{1}{2}\xi^2\kappa^{-1}\left(1 - e^{-2\kappa\Delta t}\right)\nu_{t-\Delta t}$. Using the assumption that $\epsilon_t$ is normally distributed given $\nu_{t-\Delta t}$, maximizing the pseudo log-likelihood yields the estimators:

$$ \hat{\kappa} = -\frac{1}{\Delta t}\log\hat{\beta}_1, \qquad \hat{\theta} = \hat{\beta}_2, \qquad \hat{\sigma}^2 = \frac{2\hat{\kappa}\hat{\beta}_3}{1 - \hat{\beta}_1^2} $$

where

$$ \hat{\beta}_1 = \frac{n^{-2}\sum_{k=1}^n \nu_k \sum_{k=1}^n \nu_{k-1}^{-1} - n^{-1}\sum_{k=1}^n \nu_k \nu_{k-1}^{-1}}{n^{-2}\sum_{k=1}^n \nu_{k-1} \sum_{k=1}^n \nu_{k-1}^{-1} - 1} $$

$$ \hat{\beta}_2 = \frac{n^{-1}\sum_{k=1}^n \nu_k \nu_{k-1}^{-1} - \hat{\beta}_1}{(1 - \hat{\beta}_1)\, n^{-1}\sum_{k=1}^n \nu_{k-1}^{-1}} $$

$$ \hat{\beta}_3 = n^{-1}\sum_{k=1}^n \left(\nu_k - \nu_{k-1}\hat{\beta}_1 - \hat{\beta}_2(1 - \hat{\beta}_1)\right)^2 \nu_{k-1}^{-1} $$


The PMLE method finishes by obtaining ρ through the minimization of the pseudo log-likelihood as a function of ρ, ℓ(ρ) [16].
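The β̂ estimators can be sketched directly from the formulas above; for a sanity check we simulate a variance series with the same discretization (parameter values match those used later in the text):

```python
import numpy as np

def pmle_cir(nu, dt):
    """Pseudo-maximum-likelihood estimates (kappa, theta, xi) for the
    discretized variance process nu_k = b1*nu_{k-1} + b2*(1 - b1) + eps_k,
    following the estimators quoted above from Wang et al."""
    nu_k, nu_km1 = nu[1:], nu[:-1]
    n = len(nu_k)
    inv = 1.0 / nu_km1
    b1 = (nu_k.sum() * inv.sum() / n**2 - (nu_k * inv).sum() / n) \
         / (nu_km1.sum() * inv.sum() / n**2 - 1.0)
    b2 = ((nu_k * inv).sum() / n - b1) / ((1.0 - b1) * inv.sum() / n)
    b3 = ((nu_k - b1 * nu_km1 - b2 * (1.0 - b1))**2 * inv).sum() / n
    kappa = -np.log(b1) / dt
    theta = b2
    xi = np.sqrt(2.0 * kappa * b3 / (1.0 - b1**2))
    return kappa, theta, xi

# Simulate a series with the same AR(1)-style discretization, then recover
# the parameters; estimates should come out roughly (3, 0.25, 0.075).
rng = np.random.default_rng(3)
kappa, theta, xi, dt = 3.0, 0.25, 0.075, 1 / 252
phi = np.exp(-kappa * dt)
nu = np.empty(20_000); nu[0] = theta
for k in range(1, len(nu)):
    sd = np.sqrt(xi**2 / (2 * kappa) * (1 - phi**2) * nu[k - 1])
    nu[k] = phi * nu[k - 1] + theta * (1 - phi) + sd * rng.normal()
print(pmle_cir(nu, dt))
```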

To find the volatility we use the extended Kalman filter [18]. Since the Heston model is not linear, the exact estimation-error covariance matrix P_k of the standard Kalman filter is not available. Wang et al. therefore use the CEKF (consistent EKF) [19]. It proceeds as follows:

1. Initializing $\hat{\nu}_0$, $P_0 = E\left[(\nu_0 - \hat{\nu}_0)(\nu_0 - \hat{\nu}_0)^T\right]$, $\Theta_0 = (\hat{\kappa}_0, \hat{\theta}_0, \hat{\sigma}_0, \hat{\rho}_0)^T$

2. Computing the linearization matrices of the state equation, $F_k$, $L_k$

3. Performing the update of the state prediction and estimation-error covariance, $\bar{\nu}_{k+1}$, $\bar{P}_{k+1}$

4. Computing the linearization matrices of the measurement function, $H_{k+1}$, $M_{k+1}$

5. Performing the update of the state estimate and estimation covariance, $K_{k+1}$, $\hat{\nu}_{k+1}$, $P_{k+1}$ [16]

Lastly, once the ν̂ values are obtained, Wang et al. use the PMLE to obtain updated Θ parameters, and repeat the process until the Heston model parameters converge [16]. We find that the ability of this method to recover the right parameters depends heavily on the value of ρ in the underlying process.

In addition, even when using the correct parameters for the CEKF step, our volatility estimates are still far from the correct ones.

In order to illustrate this we simulate a volatility and a price process using the transitions of the Heston model, with parameter values κ = 3.0, θ = 0.25, ξ = 0.075.

The first plot uses a value of ρ = −0.8. The first subplot is the original process; the second (Estimated Volatility I) is the volatility estimated by the CEKF when given the correct parameters; and the third (Estimated Volatility II) is the volatility estimated by the CEKF when the parameters are themselves estimated with the PMLE method.

In the following plot we use a value of ρ = −0.2; the subplots have the same meaning as before. From experimentation, as well as from the formulas, we see that the sign of ρ matters less for our volatility estimates than its magnitude. When the magnitude of ρ is larger, the volatility estimate is better, because the price movements are more correlated with changes in volatility, which is the source of our estimates. We can see that for ρ = −0.2 the volatility estimates deviate very little from the long-term mean volatility θ. Since in practice ρ typically does not have a large magnitude, this approach will not yield accurate measurements of the underlying volatility process, which is necessary for pricing options.

B. MCMC approach

Another approach to obtaining the parameters of the processes uses Markov chain Monte Carlo (MCMC) methods. More specifically, using the language Stan [20] and the corresponding Python library pystan, we can create a sampler that, given the price data and prior distributions on our parameters, obtains the posterior distribution of the parameters by running Markov chains with the corresponding transition steps.

The way we define our transformation has a drastic effect on the performance of the sampler. Since vectorized operations in Stan are more efficient than iterative ones, we rely on vectors as much as possible. Typically one would write Stan code, run it with the Stan program, and then pass the output files back to Python; the pystan package instead allows us to write and run Stan code from within Python.

In order to encode the model more efficiently we make use of a variable η defined as:

$$ \eta_t = \frac{S_{t+1}}{S_t} - 1 $$

In this way the main quantity we sample is not S_t but η_t, and we sample it as:

$$ \eta[1:n] \sim N\!\left(r\,dt + \rho\sqrt{\nu[1:n]\cdot dt}\cdot dW2[1:n],\;\; \nu[1:n]\,dt\,(1 - \rho^2)\right) $$

Here dW2 is a vector with independent N(0, 1) entries. In addition, to make the computation of the gradients more efficient, we define ν in the transformed parameters section. For all parameters except r (denoted µ in the code) we use Cauchy priors because of their heavier tails; for r itself we use a normal prior.

The Stan code encoding this information is the following:

```stan
data {
  real dt;
  int<lower=1> N;
  vector[N] St;
}
transformed data {
  int<lower=1> n = N - 1;
  vector[N-1] eta = St[2:N] ./ St[1:(N-1)] - 1; // the ratio - 1
}
parameters {
  real<lower=-0.1, upper=0.08> mu;
  real<lower=-0.99, upper=0.99> rho; // the correlation
  real<lower=0, upper=0.6> theta;
  real<lower=0, upper=12> kappa;
  real<lower=0, upper=0.5> xi;
  real<lower=0, upper=1> niu0;
  vector[n] dW2;
}
transformed parameters {
  vector<lower=0, upper=1>[N] niu;
  niu[1] = niu0;
  for (i in 1:n) {
    niu[i+1] = niu[i] + kappa*(theta - niu[i])*dt + xi*sqrt(niu[i]*dt)*dW2[i];
  }
}
model {
  rho ~ cauchy(0, 0.1) T[-0.99, 0.99];
  kappa ~ cauchy(6, 2) T[0, ];
  xi ~ cauchy(0.1, 0.1) T[0, ];
  theta ~ cauchy(0.4, 0.1) T[0, ];
  mu ~ normal(0.07, 0.03) T[-0.01, ];
  dW2 ~ normal(0, 1);
  niu0 ~ normal(theta, xi) T[0, 1];
  eta[1:n] ~ normal(mu*dt + rho*sqrt(niu[1:n]*dt) .* dW2[1:n],
                    sqrt(niu[1:n]*dt*(1 - rho*rho)));
}
```

Listing 1: Stan code to perform stochastic sampling

The results of the sampling are posterior distributions for our parameters. Since each ν_t is treated as a separate parameter, we obtain a set of samples for each of them instead of a single value. We must therefore decide which value to use as an estimate of the underlying ν process. Typically, when estimating a single value, the expectation is used; in our case we try both the expectation and the mode for each ν_t, since the mode is less affected by outliers.

Unfortunately, because we simply take the mode of the volatility, without taking into account the previous or later values of ν (or their higher-order derivatives), the estimated volatility jumps from one time step to another. This could be remedied by a smoothing approach, such as a third-order smoothing spline applied to the data. Another smoothing that could help with the continuity of our results is a Gaussian kernel filter.
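A minimal sketch of the Gaussian kernel smoothing mentioned (the bandwidth and noise levels are ours):

```python
import numpy as np

def gaussian_smooth(values, bandwidth):
    """Gaussian kernel smoother: each point is replaced by a weighted average
    of its neighbours, with weights decaying as exp(-d^2 / (2 h^2))."""
    idx = np.arange(len(values))
    weights = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / bandwidth)**2)
    weights /= weights.sum(axis=1, keepdims=True)  # normalize each row
    return weights @ values

# A jumpy per-step volatility estimate becomes a smoother curve.
rng = np.random.default_rng(0)
rough = 0.25 + 0.05 * rng.normal(size=300)   # stand-in for noisy mode estimates
smooth = gaussian_smooth(rough, bandwidth=5.0)
print(rough.std(), smooth.std())  # smoothing shrinks step-to-step variation
```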

Here we compare the results of the two approaches, where the Kalman filter approach is given the correct model parameters while the MCMC approach is given moderately informative priors. For the MCMC we try both the mean and the mode as estimates of the volatility. The simulations use the same parameters as before, with ρ = −0.8. We obtain the following plot:

As expected, because the magnitude of the correlation is large, the Kalman approach with correct parameters performs better. We also notice that using the mean for the volatilities gives a more regularized volatility estimate than using the mode.

In the case of ρ = −0.2 the Kalman filter does not perform as well, even when given the correct parameters. The performance of the MCMC, on the contrary, is not significantly affected. One measure that can be used to compare the two methods is the residual sum of squares, according to which the MCMC approach is more accurate.


In this paper we have given an introduction to the Black-Scholes model and seen its advantages as well as its disadvantages for option pricing. Following that, we introduced the Heston model, presenting it as an extension of the Black-Scholes model. Furthermore, we performed a comparison between a Kalman filter approach [21][16] for obtaining the underlying volatilities and an MCMC sampling approach using the Stan language [20].

Based on our results we conclude that the Kalman filter approach is most effective when the magnitude of ρ is close to 1. Since for most practical purposes this is not the case, and the magnitude of ρ is instead close to 0, the MCMC approach works better.

The downside of the MCMC approach is its computational requirements, with longer series requiring hours to converge even when running 12 simultaneous chains. The fitted parameters can in turn be used to obtain estimates for the corresponding option price.

Written by Laurentiu Calancea & Edited by Alexander Fleiss


[1] F. Black and M. Scholes, Journal of Political Economy 81, 637 (1973).
[2] S. L. Heston, The Review of Financial Studies 6, 327 (1993).
[3] S. Blyth, An Introduction to Quantitative Finance (Oxford University Press, 2013).
[4] Investopedia, Four advantages of options (2020), [Online; accessed 26-September-2020].
[5] CNBC, How billionaire Ray Dalio helped launch McDonald's Chicken McNugget (2020), [Online; accessed 26-September-2020].
[6] R. P. Dobrow, Introduction to Stochastic Processes with R (John Wiley & Sons, 2016).
[7] M. Rubinstein, The Journal of Finance 49, 771 (1994).
[8] Investopedia, Volatility smile definition and uses (2019), [Online; accessed 3-October-2020].
[9] Investopedia, Circumventing the limitations of Black-Scholes (2020), [Online; accessed 3-October-2020].
[10] J. Gatheral, in Bachelier Congress, Vol. 37 (2008) pp. 39–51.
[11] D. S. Bates, The Review of Financial Studies 9, 69 (1996).
[12] J. Hull, Journal of Derivatives 3, 26 (1996).
[13] W. Feller, Annals of Mathematics, 173 (1951).
[14] Investopedia, Risk-neutral probabilities (2019), [Online; accessed 7-October-2020].
[15] Wikipedia, Kalman filter (2020), [Online; accessed 14-December-2020].
[16] X. Wang, X. He, Y. Zhao, and Z. Zuo, IFAC-PapersOnLine 50, 14100 (2017).
[17] C. Y. Tang and S. X. Chen, Journal of Econometrics 149, 65 (2009).
[18] D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches (John Wiley & Sons, 2006).
[19] Y. Jiang, W. Xue, Y. Huang, and H. Fang, in Proceedings of the 33rd Chinese Control Conference (2014) pp. 6838–6843.
[20] B. Carpenter, A. Gelman, M. D. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell, Journal of Statistical Software 76 (2017).
[21] Y. Jiang, Y. Huang, W. Xue, and H. Fang, Journal of Systems Science and Complexity 30, 751 (2017).
