Automatic Parameter Estimation (APE)

Automatic parameter estimation (APE) is a mathematical process, known as multi-variable optimization, that automatically adjusts a specified set of model parameters to minimize the error between the model and measured data. The function being minimized is called the objective function. In pressure transient modeling, the model calculates a synthetic pressure, and the objective function measures the error between this synthetic pressure and the measured data. Over an iterative process, the model parameters are adjusted to minimize this error, with the optimization method controlling how the parameters are adjusted at each iteration. Many optimization methods exist, each with certain advantages and disadvantages. These methods are available in the software (a minimal sketch of an objective function follows the list):

  • Mead (Simplex)
  • Marquardt-Levenberg (QR Factorization)
  • Marquardt-Levenberg (Gauss-Jordan)
  • Marquardt-Levenberg (Smooth Damping)
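
For illustration, the following is a minimal Python sketch of an objective function for pressure transient matching. The synthetic_pressure model, its parameters, and the data names are hypothetical stand-ins chosen for this sketch; the software's actual model is not shown here.

```python
import numpy as np

def synthetic_pressure(params, t):
    """Hypothetical stand-in for the model's synthetic pressure:
    p(t) = p_i - m * log10(t). The real model and parameters differ."""
    p_i, m = params
    return p_i - m * np.log10(t)

def objective(params, t, p_measured):
    """Objective function: sum of squared residuals between the model's
    synthetic pressure and the measured pressure data."""
    residuals = synthetic_pressure(params, t) - p_measured
    return float(np.sum(residuals ** 2))
```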

APE should not be used exclusively, since it is possible to find several sets of parameters that yield an acceptable model match; in other words, the solution can be non-unique. It is always recommended to start with parameters obtained from diagnostic analyses, fine-tune them manually to obtain a close match, and then apply APE to parameters that are unknown or not known with confidence. The more parameters selected for APE (fitting), the longer the fitting process takes, and the greater the chance of finding a non-unique solution. Including more data points in the fit also slows it down.

Marquardt-Levenberg

The Marquardt-Levenberg method is a non-linear regression algorithm used for APE of reservoir and well parameters when modeling well test data. It is a modified version of LMDIF, a public-domain non-linear regression routine from Argonne National Laboratory. The algorithm requires the partial derivatives of the objective function with respect to each of the parameters (the Jacobian). These derivatives are calculated numerically, using a forward difference approximation. The objective function is the sum of squares of the differences (residuals) between the pressure derivative data and the corresponding calculated model data.
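
The following Python sketch illustrates the forward difference approximation described above: each column of the Jacobian costs one extra residual evaluation. The residual function, step-size scaling, and toy data are assumptions made for illustration, not the routine's published internals.

```python
import numpy as np

def forward_difference_jacobian(residual_fn, params, h=1e-6):
    """Forward-difference approximation of the Jacobian,
    J[i, j] = d r_i / d p_j, built one column at a time."""
    r0 = residual_fn(params)
    J = np.empty((r0.size, params.size))
    for j in range(params.size):
        step = h * max(abs(params[j]), 1.0)   # scale step to parameter size
        p = params.copy()
        p[j] += step
        J[:, j] = (residual_fn(p) - r0) / step
    return J

# Toy example: residuals of a 2-parameter straight-line model vs. data.
t = np.linspace(1.0, 10.0, 20)
data = 3.0 + 0.5 * t
residual_fn = lambda p: (p[0] + p[1] * t) - data
print(forward_difference_jacobian(residual_fn, np.array([1.0, 1.0])))
```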

Modifications and additions to improve LMDIF include a restart scheme that jump-starts the routine whenever it is found to be slowing down without reaching convergence. A constraint mechanism is also included to keep estimates of the parameters within the physical realm of the problem.
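
A minimal sketch of both ideas follows, assuming a hypothetical fit_once placeholder for a single Marquardt-Levenberg pass; the stall test and bound clamping below are simplified stand-ins for the software's internal logic.

```python
import numpy as np

def clamp(p, lower, upper):
    """Constraint mechanism: keep every estimate inside its physical bounds."""
    return np.minimum(np.maximum(p, lower), upper)

def fit_with_restarts(fit_once, p0, lower, upper, max_restarts=3, rtol=1e-6):
    """Restart scheme: re-launch the regression whenever a pass stalls
    (negligible improvement) before convergence is reached.
    fit_once(p) -> (p_new, sse) stands in for one Marquardt-Levenberg pass."""
    p = clamp(np.asarray(p0, dtype=float), lower, upper)
    p, best_sse = fit_once(p)
    p = clamp(p, lower, upper)                # enforce physical bounds
    for _ in range(max_restarts):
        p_new, sse = fit_once(p)              # restart from current estimate
        p = clamp(p_new, lower, upper)
        improved = best_sse - sse > rtol * max(abs(best_sse), 1.0)
        best_sse = min(best_sse, sse)
        if not improved:
            break                             # stalled: stop restarting
    return p, best_sse
```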

The Marquardt-Levenberg routine is generally faster than the Mead (Simplex) method. However, its reliance on derivative calculations tends to make it less robust, and more computationally intensive, when the current estimate is far from the solution.

Mead (Simplex)

This is a variation of the downhill Simplex method of Nelder and Mead. The Simplex routine is a non-linear regression algorithm used for APE of reservoir and well parameters when modeling well test data. It requires only evaluations of the objective function, not its derivatives. The objective function is the sum of squares of the differences (residuals) between the observed pressure, or pressure derivative, data and the corresponding calculated model data.
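
As a rough stand-in for the software's routine, SciPy's Nelder-Mead implementation of the downhill Simplex method can demonstrate a derivative-free fit; the model, data, and starting values below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_pressure(params, t):
    """Hypothetical stand-in model: p(t) = p_i - m * log10(t)."""
    p_i, m = params
    return p_i - m * np.log10(t)

def objective(params, t, p_meas):
    """Sum of squared residuals; the only quantity Simplex needs."""
    return float(np.sum((synthetic_pressure(params, t) - p_meas) ** 2))

t = np.logspace(-2, 1, 50)                       # time grid, hours
p_meas = synthetic_pressure([3000.0, 150.0], t)  # noise-free demo data

# No derivatives are supplied or needed: the method only calls objective().
result = minimize(objective, x0=[2800.0, 100.0], args=(t, p_meas),
                  method='Nelder-Mead')
print(result.x)  # recovers approximately [3000, 150]
```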

The downhill Simplex method is modified to improve convergence by imposing constraints on the parameters during the search. Estimates of the parameters are always checked against preset maximum and minimum values for each parameter. After the routine has converged on some parameters, it is restarted with a slight perturbation away from the final values and allowed to converge again. This ensures that the parameter estimates found are not the result of some local minimum in the residual, but rather a more global minimum.
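
A minimal sketch of this restart-with-perturbation idea, again assuming SciPy's Nelder-Mead as the underlying Simplex routine; the perturbation size, restart count, and bounds are illustrative choices, not the software's settings.

```python
import numpy as np
from scipy.optimize import minimize

def simplex_with_restart(objective, x0, args, lower, upper,
                         n_restarts=2, perturb=0.05, seed=0):
    """Converge, then restart from a slightly perturbed estimate (clamped to
    the preset bounds) and converge again; keep whichever result is better.
    Guards against accepting a purely local minimum of the residual."""
    rng = np.random.default_rng(seed)
    best = minimize(objective, x0, args=args, method='Nelder-Mead')
    for _ in range(n_restarts):
        x_start = best.x * (1.0 + perturb * rng.standard_normal(best.x.size))
        x_start = np.clip(x_start, lower, upper)   # min/max constraint check
        trial = minimize(objective, x_start, args=args, method='Nelder-Mead')
        if trial.fun < best.fun:                   # keep the better minimum
            best = trial
    return best
```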

Compared to other non-linear regression methods, this method is not always efficient because it can require a large number of function evaluations, which makes it extremely slow in some cases. However, it is straightforward and not encumbered by derivative calculations, and hence tends to be more robust over a wide range of conditions.