ADAPTIVE GUARANTEED ESTIMATION OF A CONSTANT SIGNAL UNDER UNCERTAINTY OF MEASUREMENT ERRORS



Introduction
The estimation problem of a constant signal $x$ from noisy measurements is considered [1]:
$$y_k = x + v_k, \quad k = 1, 2, \ldots, N, \qquad (1)$$
where $x \in R^1$ is a constant value (useful signal) and $v_k \in R^1$ are the measurement errors. Under natural conditions, the values of the measurement errors $v_k$, $k = \overline{1, N}$, are unknown (uncontrolled). A priori information about the measurement errors is formalized by choosing a hypothesis about the properties of the errors $v_k$. The following hypotheses are traditional.
1. The measurement errors $v_k$ are random and given by a probability density function with known parameters.
2. The measurement errors $v_k$ are uncertain quantities: $v_k \in V$, where $V$ is a given convex set of their possible values.
Acceptance of the hypothesis about the probabilistic nature of the measurement errors makes it possible to formulate the problem within the stochastic approach as the problem of finding the estimate optimal in the mean-square sense and to use statistical methods [2]. The most common choice is the least-squares (LS) method [1, 2], i.e. minimization of the function
$$x^* = \arg\min_{x} \sum_{k=1}^{N} (y_k - x)^2.$$
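As a minimal sketch (in Python, with purely illustrative numbers), the LS estimate of a constant signal is simply the sample mean of the measurements:

```python
import numpy as np

def ls_estimate(y):
    """Least-squares estimate of a constant signal x from y_k = x + v_k:
    the minimizer of sum_k (y_k - x)^2 is the sample mean of the y_k."""
    return float(np.mean(y))

# Hypothetical data: x = 2.0 observed with small additive errors
y = [2.1, 1.9, 2.05, 1.95]
x_star = ls_estimate(y)   # approximately 2.0
```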
In guaranteed estimation problems under uncertainty with respect to disturbances and measurement errors, the admissible sets of their possible values are specified, and the solution is chosen from the condition of optimizing guaranteed bounded estimates corresponding to the worst-case realization of the disturbances and measurement errors. The result of guaranteed estimation is an unimprovable bounded estimate (information set), which turns out to be overly pessimistic (over-conservative) if the a priori admissible set of measurement errors is too large compared to their realized values. On a short observation interval, the admissible sets of disturbances and measurement errors may be only rough upper estimates. The goal of the research is to enhance the accuracy of guaranteed estimation when the measurement errors are not realized in the worst way, i.e. when the environment in which the object operates does not behave as aggressively as assumed by the a priori data on the admissible set of error values.
Research design. The problem of adaptive guaranteed estimation of a constant signal from noisy measurements is considered. The adaptive filtering problem is, from the results of measurement processing, to choose from the whole set of possible error realizations the one that could have generated the measurement sequence.
Results. An adaptive guaranteed estimation algorithm is presented. The construction of the adaptive algorithm relies on a multi-alternative method based on a Kalman filter bank. The method uses a set of filters, each tuned to a specific hypothesis about the measurement error model. Filter residuals are used to compute estimates of the realized measurement errors. The realization of possible errors is selected using a function that has the meaning of the residual variance over a short time interval.
Conclusion. The computational scheme of the adaptive algorithm, a numerical example, and a comparative analysis of the obtained estimates are presented.
Recurrent algorithms are most widely used in processing noisy measurements: an estimate of the unknown quantity is formed by sequentially processing each newly available measurement together with the results obtained at the previous processing step. For the considered problem (1), the recurrent LS method coincides with the relations of the Kalman filter (KF) [3, 4]. However, any inaccuracy in the knowledge of the probabilistic characteristics of the errors $v_k$ can cause divergence of the filtering process [5-8].
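The recurrent LS relations mentioned above can be sketched as a scalar Kalman filter; the prior and noise variances below are illustrative assumptions, not values from the article:

```python
def recursive_ls(y, x0=0.0, p0=1e6, r=1.0):
    """Recurrent LS / scalar Kalman filter for the model y_k = x + v_k.
    x0, p0: a priori estimate and its variance; r: measurement-noise variance."""
    x, p = float(x0), float(p0)
    for yk in y:
        k = p / (p + r)        # filter gain
        x += k * (yk - x)      # correct the estimate by the residual
        p *= (1.0 - k)         # variance of the updated estimate
    return x, p

x_hat, p_hat = recursive_ls([2.1, 1.9, 2.05, 1.95], r=0.01)
# with a diffuse prior (large p0), x_hat is close to the sample mean
```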
In many situations, however, the application of stochastic estimation methods can be difficult: because of the small number of available measurements from which the best estimate is sought, or because of the absence of probabilistic characteristics of the measurement errors. Besides, the assumption of the random nature of measurement errors is not always justified [5, 8]: often it is only known that the measurement errors $v_k$ are bounded. Given a set of possible values of the measurement errors, the problem can be formulated within the guaranteed (set-membership) approach as that of finding the bounded set of possible values of the unknown quantity [9-26]. In this case, the solution is selected from the condition of optimizing guaranteed bounded estimates corresponding to the worst-case realization of the measurement errors [8, 12, 18, 24]. The advantage of guaranteed estimation methods is the absence of random filtering errors [10-15, 21, 23, 27]. However, the resulting bounded estimate (information set) may turn out to be overly pessimistic (over-conservative) if the set of possible values of the measurement errors is too wide [8, 17, 18]. This makes the development of adaptive algorithms for guaranteed estimation relevant [28]. The adaptive guaranteed estimation problem is, from the results of measurement processing, to choose from the whole set of possible error realizations the one that could have generated the measurement sequence [8].
One of the central issues of modern estimation theory [29-32] is the synthesis of adaptive filters capable of providing a sufficiently accurate estimate of the state vector in the absence of accurate a priori information about disturbances and measurement errors. In [6, 7, 29, 32], various algorithms for adaptive filtering of stochastic systems with unknown values of the noise covariance matrices are discussed.
This article is focused on the problem of adaptive guaranteed estimation of a constant signal from noisy measurements. The development of the adaptive estimation algorithm relies on a multi-alternative method based on a Kalman filter bank, first proposed in [33] for estimating random processes with unknown constant parameters [34, 35]. This method has found wide application in problems with a multi-alternative description of a system state or process [36-38]. The work continues the research of [39, 40].

Statement of the problem
Consider the solution of the problem of estimating an unknown constant signal from a single realization of the measurements (1) within the guaranteed (set-membership) approach. A priori information about the initial value $x_0$ of the variable and the errors $v_k$ is represented in the form of admissible sets of the corresponding quantities [9-12, 16-20, 24, 26]:
$$x_0 \in X_0 = [\underline{x}_0, \overline{x}_0], \quad v_k \in V = [\underline{v}, \overline{v}], \qquad (2)$$
where $\underline{x}_0$, $\overline{x}_0$ are respectively the left and right bounds of the set $X_0$, and $\underline{v}$, $\overline{v}$ are respectively the left and right bounds of the set $V$. The result of guaranteed estimation is the construction of the information set $X_k$ that is guaranteed to contain the unknown signal $x$ [10-24].
The information set is defined as follows [18, 23]:
$$X_k = X_{k-1} \cap X_k(y), \quad k = 1, 2, \ldots, N, \qquad (3)$$
where $X_k(y) = \{x : y_k - \overline{v} \le x \le y_k - \underline{v}\}$ is the measurement-consistent set. For the interval sets (2), the recursion (3) reduces to the filter equations for the bounds
$$\underline{x}_k = \max(\underline{x}_{k-1},\; y_k - \overline{v}), \qquad (4)$$
$$\overline{x}_k = \min(\overline{x}_{k-1},\; y_k - \underline{v}). \qquad (5)$$
The non-emptiness of the estimate $X_k$ (4), (5) is fundamentally important for determining the consistency of the a priori information (2) [23]. The efficiency of the algorithm mainly depends on how adequate the a priori estimate $V$ is to the realized errors $v_k$:
1. Errors in the definition of the set $V$, i.e. a failure of the assumptions (2) when $v_k \notin V$, can lead to the information set $X_k$ becoming empty at some time step $k$: $X_k = \varnothing$. Errors in the definition of the set $X_0$ can also lead to such a situation.
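A minimal sketch of the information-set recursion for the interval case (the bounds are illustrative, not the article's data):

```python
def information_set(y, x_bounds, v_bounds):
    """Guaranteed (set-membership) estimation of a constant x from y_k = x + v_k.
    Intersects the prior interval X_0 with every measurement-consistent
    interval [y_k - v_hi, y_k - v_lo]; returns None if the set becomes empty."""
    lo, hi = x_bounds
    v_lo, v_hi = v_bounds
    for yk in y:
        lo = max(lo, yk - v_hi)
        hi = min(hi, yk - v_lo)
        if lo > hi:            # empty information set: a priori data inconsistent
            return None
    return lo, hi

# x is guaranteed to lie in the returned interval
bounds = information_set([2.1, 1.9], x_bounds=(-10, 10), v_bounds=(-0.2, 0.2))
```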
2. If the set $V$ is too wide, then the information set $X_k$ will regularly lie strictly inside the measurement-consistent set $X_k(y)$. In this case, measurement processing is useless, i.e. it does not lead to an increase in estimation accuracy (a decrease in estimation errors).
Consider an algorithm for solving the guaranteed estimation problem in the case when the a priori admissible set $V$ is too wide relative to the realized measurement errors.

Adaptive algorithm of guaranteed estimation

Following the LS method and the KF, consider the measurement residual formed as the difference between the measured value and the estimate obtained at the previous step [4, 8, 9]:
$$\Delta_k = y_k - x^*_{k-1}. \qquad (6)$$
Substituting the measurement equation (1) into this expression, we find that
$$\Delta_k = v_k + e_{k-1}, \quad k = 1, 2, \ldots, N, \qquad (7)$$
where $e_{k-1} = x - x^*_{k-1}$ is the estimation error of the unknown signal $x$. Thus, the residual $\Delta_k$ (7) corresponding to the current time step $k$ is an estimate of the realized measurement error $v_k$, and the error of this estimate of the measurement error equals the estimation error of the signal $x$.
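The identity $\Delta_k = v_k + e$ behind the residual (7) can be checked numerically; all the numbers below are purely illustrative:

```python
# Residual w.r.t. a fixed a priori estimate: Delta_k = y_k - x0 = v_k + e,
# where e = x - x0 is the (constant) initialization error.
x_true, x0 = 2.0, 1.8                 # assumed true signal and a priori estimate
v = [0.1, -0.1, 0.05, -0.05]          # assumed realized measurement errors
y = [x_true + vk for vk in v]         # simulated measurements

deltas = [yk - x0 for yk in y]
e = x_true - x0
assert all(abs(d - (vk + e)) < 1e-9 for d, vk in zip(deltas, v))
```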
As for the estimation error $e_k$, it is known that
$$e_k \in X^0_{k-1}, \qquad (8)$$
where $X^0_{k-1}$ is the centered set symmetric about zero obtained from $X_{k-1}$. The inclusion (8) is guaranteed and means that the actual estimation error $e_k$ can take any value from the set $X^0_{k-1}$. Taking into account the constraint (8) on the error $e_k$, the admissible set of the measurement errors $v_k$ can be represented as
$$V_k = \{\Delta_k - e_{k-1} : e_{k-1} \in X^0_{k-1}\}. \qquad (9)$$
In the equation for the measurement residual (6), substitute the a priori estimate $x^*_0$ for the estimate obtained at the previous time step:
$$\Delta_k = y_k - x^*_0, \qquad (10)$$
where the a priori estimate is taken as the middle point of the set $X_0$:
$$x^*_0 = (\underline{x}_0 + \overline{x}_0)/2. \qquad (11)$$
The value
$$e = x - x^*_0 \qquad (12)$$
is the initialization error of $x$.
The centered set
$$E = [\underline{e}, \overline{e}], \quad \underline{e} = -\overline{e}, \qquad (13)$$
symmetric about zero, is the set of possible values of the error $e$ (12). Taking into account equation (10) and the conditions (11), (13), the admissible set of the measurement errors can be represented in the form
$$V_k = \{\Delta_k - e : e \in E\}. \qquad (14)$$
Thus, the width of the admissible set $V_k$ (14) of the measurement errors $v_k$ is determined by the width of the admissible set (13) of the error $e$ in specifying the information about the actual value $x$.
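For interval sets, widening the admissible error set by the set of possible initialization errors amounts to an interval (Minkowski) sum; a tiny sketch with assumed bounds:

```python
def interval_sum(a, b):
    """Minkowski sum of two intervals: [a0, a1] + [b0, b1] = [a0 + b0, a1 + b1]."""
    return (a[0] + b[0], a[1] + b[1])

V = (-0.5, 0.5)   # assumed a priori measurement-error set
E = (-0.2, 0.2)   # assumed symmetric set of initialization errors e
V_widened = interval_sum(V, E)   # the residual-based admissible set is wider
```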
Let us explain this choice. As shown above, if the admissible set $V$ is given too wide, so that $X_0 \subset V$, then the information set $X_k$ lies strictly inside the measurement-consistent set: $X_k \subset X_k(y)$. According to the minimax principle, the estimation error is then determined by the initialization error $e$ and is constant over the entire considered time interval. The admissible set (13) of the estimation error $e$ can be represented as the sum of two subsets:
$$E = E_1 \cup E_2, \quad E_1 = [\underline{e}, 0], \quad E_2 = [0, \overline{e}]. \qquad (15)$$
The value and sign of the actual estimation error $e$ are unknown. Therefore, we can speak of accepting one of two hypotheses: hypothesis $H_1$: $e \in E_1$, or hypothesis $H_2$: $e \in E_2$. The acceptance of the hypothesis $H_1$ corresponds to the admissible set of measurement errors
$$V_k = \{\Delta_k - e : e \in E_1\}, \qquad (16)$$
and the acceptance of the hypothesis $H_2$ to the set
$$V_k = \{\Delta_k - e : e \in E_2\}. \qquad (17)$$
An error in setting the set $V_k$ (16) or (17) can lead to the information set $X_k$ becoming empty at some time step $k$: $X_k = \varnothing$. In this case, further construction of the sets by the filter equations (4), (5) becomes impossible, although the true value $x$ may in fact still be consistent with the measurements. To characterize the actual quality of estimation, one can use the sequence of a posteriori measurement residuals [8, 9, 39, 40]
$$\Delta^*_k = y_k - x^*_k, \quad k = 1, 2, \ldots, N, \qquad (18)$$
where $x^*_k$ is the estimate of the unknown signal $x$ obtained by the time step $k$.
Of particular interest is obtaining the best (optimal) estimates $x^*_k$. These can be obtained by minimizing the function
$$J = \sum_{k=1}^{N} (\Delta^*_k)^2 \to \min. \qquad (19)$$
The function (19), the sum of the squares of the a posteriori residuals, carries information about the actual estimation error [8, 9]. Therefore, the criterion for choosing the admissible set of realized measurement errors is the accuracy of the obtained point estimates $x^*_k$ of the signal $x$ for the different sets $V_k$ (16), (17). The accuracy of the algorithm for a selected set $V_k$ is assessed by averaging over the considered measurement interval.
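The selection criterion (19) can be sketched as follows: compute the sum of squared a posteriori residuals for each candidate estimate and keep the smallest. This is illustrative code, not the article's exact scheme:

```python
def residual_score(y, x_est):
    """Sum of squared a posteriori residuals for a candidate point estimate."""
    return sum((yk - x_est) ** 2 for yk in y)

def choose_estimate(y, candidates):
    """Among candidate estimates (one per hypothesis), pick the one whose
    a posteriori residuals have the smallest sum of squares."""
    return min(candidates, key=lambda x_est: residual_score(y, x_est))

best = choose_estimate([2.1, 1.9, 2.05, 1.95], candidates=[1.5, 2.0, 2.5])
```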
Thus, it is possible to specify the following guaranteed estimation algorithm, which is adaptive to the realized measurement errors.
It is assumed that one of the accepted hypotheses, $H_1$ or $H_2$, rather accurately describes the behavior of the actual estimation errors on the measurement interval $y_1, y_2, \ldots, y_l$.
3. Following the accepted hypotheses, the admissible sets of measurement errors are calculated by (16) and (17), respectively. We will compare the results of the estimation algorithm for the different admissible sets of measurement errors.
4. The estimate of the signal $x$ obtained on this measurement interval is denoted $x^*_l$ and is found by the criterion of the minimum sum of squared residuals (19), comparing the results of the algorithm for the different admissible sets of measurement errors.
5. For the next measurement interval $y_{l+1}, y_{l+2}, \ldots, y_{2l}$, the estimate obtained from the measurements of the last $l$ time steps is taken as the a priori estimate of the signal $x$. The measurement processing on the interval $k = \overline{l+1, 2l}$ is carried out in the same way as on the interval $k = \overline{1, l}$. The application of the algorithm does not require storing $l$ measurements, only calculating and storing estimates, with the width of the measurement interval equal to $l$.
Let us represent the multi-alternative model of the algorithm in the following form.
Algorithm 1. Step 2. Calculate $x^*_0$ following (11). Accept the hypothesis $H_1$: $e \in E_1$. Step 3. Calculate $\Delta_k$ following (10) and the admissible set of the measurement errors $v_k$ following (16).
Algorithm 2. Step 2. Calculate $x^*_0$ following (11). Accept the hypothesis $H_2$: $e \in E_2$. Step 3. Calculate $\Delta_k$ following (10) and the admissible set of the measurement errors $v_k$ following (17).
Step 5. If $X_k = \varnothing$, go to Step 2 of Algorithm 1. Otherwise, go to Step 6. Step 6. Calculate $\Delta^*_k$ following (18).
Step 7. If $k < l$, set $k = k + 1$ and go to Step 3. If $k = l$, go to Step 8.
Step 8. Calculate the value of the criterion (19) on the processed interval for each of the algorithms. Step 9. If the value of (19) for Algorithm 1 is the smaller, accept its estimate as $x^*_l$; otherwise accept the estimate of Algorithm 2. Pass to the next measurement interval.
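The steps above can be collected into an illustrative sketch of the whole multi-alternative scheme. The hypothesis sets, interval length, and numbers below are assumptions made for the example; the sketch follows the spirit of (16), (17) and (19) rather than the article's exact equations:

```python
def adaptive_guaranteed(y, x_bounds, hypotheses, l):
    """Sketch of the adaptive scheme: on each sub-interval of length l, run
    one guaranteed filter per hypothesis (a candidate interval of
    measurement-error values), reject hypotheses whose information set
    becomes empty, and keep the interval whose midpoint yields the smallest
    sum of squared a posteriori residuals."""
    lo, hi = x_bounds
    for start in range(0, len(y), l):
        block = y[start:start + l]
        best = None
        for v_lo, v_hi in hypotheses:
            blo, bhi = lo, hi
            for yk in block:
                blo = max(blo, yk - v_hi)
                bhi = min(bhi, yk - v_lo)
            if blo > bhi:                  # empty set: reject this hypothesis
                continue
            x_mid = 0.5 * (blo + bhi)      # point estimate for scoring
            score = sum((yk - x_mid) ** 2 for yk in block)
            if best is None or score < best[0]:
                best = (score, blo, bhi)
        if best is not None:               # carry the winning set forward
            _, lo, hi = best
    return lo, hi

# Two shifted hypotheses about the error set, mimicking the roles of (16), (17)
lo, hi = adaptive_guaranteed([2.3, 1.9, 1.9, 1.9], (-10.0, 10.0),
                             hypotheses=[(-0.6, 0.1), (-0.1, 0.6)], l=4)
```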

Numerical simulations
The problem of constant signal estimation from noisy measurements is considered:
$$y_k = x + v_k, \quad k = 1, \ldots, N. \qquad (20)$$
The number of measurements is $N = 100$. The noisy measurements $y_k$ (20) and the measurement errors $v_k$ are shown in Fig. 1. The measurement errors are assumed to be a zero-mean Gaussian white-noise sequence with standard deviation $\sigma_v = 0.17$. The prior admissible sets $X_0$ and $V$ are taken deliberately wide. As Fig. 1b shows, the realization of the measurement errors is such that $|v_k| < 3\sigma_v$.

Fig. 1. The processes considered in the example: a - noisy measurements $y_k$; b - measurement errors $v_k$
The measurement interval was divided into 5 equal sub-intervals. According to the results of measurement processing, the information set of possible values of the signal $x$ is obtained (Fig. 2), together with the information set computed by the "non-adaptive" filter. The quantity $\mu$ shows what part of the prior uncertainty the information set constitutes [41]. The information set computed by the adaptive guaranteed algorithm does not exceed 2% ($\mu = 1.56$) of the prior uncertainty value, while the information set computed by the "non-adaptive" guaranteed algorithm exceeds 11% ($\mu = 11.74$) of the prior uncertainty value.

Application of the Kalman Filter
The recurrence equations of the LS estimate [3, 8],
$$x^*_k = x^*_{k-1} + K_k (y_k - x^*_{k-1}), \quad K_k = P_{k-1}/(P_{k-1} + r), \qquad (21)$$
$$P_k = (1 - K_k) P_{k-1}, \qquad (22)$$
are, as mentioned above, the KF equations for the considered problem (20). The variance of the measurement errors is known: $r = \sigma_v^2$. The initial conditions $x^*_0$, $P_0$ for the KF are set according to the prior set $X_0$. From a comparison of the results of the adaptive guaranteed estimation and the KF (Fig. 3, Table 1), it follows that the adaptive guaranteed estimation algorithm reduced the initial uncertainty in the knowledge of the signal $x$ by a factor of 64, and the Kalman filter by a factor of 20. For the relative errors of the KF estimate and of the estimate $x^*$ of the adaptive guaranteed algorithm, respectively, we have about 17.74% and 50% (Fig. 4, Table 2). The Kalman estimate turns out to be more accurate, since the actual probability distribution of the measurement errors $v_k$ is Gaussian, whereas the estimate of the guaranteed algorithm is selected based on the worst-case realization of the measurement errors. In the case of a single realization of the measurements $\{y_k\}_{k=1}^{N}$, choosing as the solution of the guaranteed estimation problem the point equidistant from the bounds of the information set (the middle of the interval) is not rational [41]. In the considered example, the true value of the signal $x$ lies on the border of the information set; in practice, however, such a situation cannot be recognized.
Now consider the measurement errors $v_k$ in the form of white noise uniformly distributed in the interval $[\underline{v}, \overline{v}]$ at a level of about $|v_k| \le 0.5$ (Fig. 5). The prior admissible sets and the initial conditions for the KF are set accordingly. As Fig. 5 shows, the realization of the measurement errors is such that at some time steps $v_k \approx 0.5$. A comparison of the results of the "non-adaptive" guaranteed estimation and the KF is shown in Fig. 6 and Table 3.
Thus, when the admissible set of the measurement errors $V = [\underline{v}, \overline{v}]$ is adequate to the realized measurement errors, so that the measurement errors can take values on or close to the bound of the set, the guaranteed estimation errors are minimal. For the considered realization of the measurement errors (Fig. 5), at the time steps $k = 13, 34, 84$ the values of the measurement errors are closest to the boundary values, and at these time steps the guaranteed algorithm provides the most accurate estimates. In this case, the application of adaptive methods is not required.
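The uniform-noise case can be reproduced in miniature. The seed, true signal value, and bounds below are assumptions for illustration, not the article's data:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true, v_bar, N = 1.0, 0.5, 100            # assumed signal and error level
y = x_true + rng.uniform(-v_bar, v_bar, N)  # uniform bounded measurement errors

# Scalar Kalman filter; r is the variance of U(-v_bar, v_bar)
x_kf, p, r = 0.0, 10.0, (2 * v_bar) ** 2 / 12.0
for yk in y:
    k = p / (p + r)
    x_kf += k * (yk - x_kf)
    p *= (1.0 - k)

# Guaranteed filter with the adequate set V = [-v_bar, v_bar]
lo, hi = -10.0, 10.0
for yk in y:
    lo = max(lo, yk - v_bar)
    hi = min(hi, yk + v_bar)

# The information set [lo, hi] is guaranteed to contain x_true
```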

Conclusion
The article has proposed a solution to the problem of adaptive guaranteed estimation of a constant signal from noisy measurements. It is based on a multi-alternative method in which a set (bank) of filters is used, each tuned to a specific hypothesis about the possible realizations of the measurement errors. Filter residuals are used to compute estimates of the realized measurement errors. The choice of the possible realization of the errors is made using a function that has the meaning of the residual variance over a short time interval.