
Gaussian membership functions are most adequate in representing uncertainty in measurements


V. Kreinovich¹, C. Quintana¹, L. Reznik²

¹ Computer Science Department, University of Texas at El Paso, El Paso, TX 79968, USA
² Sikeirosa 15-1-46, St. Petersburg 194354, Russia

Abstract. In rare situations like fundamental physics we perform experiments without knowing what their results will be. In the majority of real-life measurement situations, we more or less know beforehand what kind of results we will get. Of course, this is not precise knowledge of the type "the result will be between a − σ and a + σ", because in this case we would not need any measurements at all. This is usually knowledge that is best represented in uncertain terms, like "perhaps (or "most likely", etc.) the measured value x is between a − σ and a + σ". Traditional statistical methods neglect this additional knowledge and process only the measurement results. So it is desirable to be able to process this uncertain knowledge as well. A natural way to process it is by using fuzzy logic. But there is a problem: we can use different membership functions to represent the same uncertain statements, and different functions lead to different results. What membership function should we choose? In the present paper, we show that under some reasonable assumptions, Gaussian functions μ(x) = exp(−βx²) are the most adequate choice of membership functions for representing uncertainty in measurements. This representation was efficiently used in testing jet engines for airplanes and spaceships.


1. INTRODUCTION

Usually in measurement situations there is some prior knowledge. In rare situations like fundamental physics we perform experiments without knowing what their results will be. In the majority of real-life measurement situations, we more or less know beforehand what kind of results we will get. Of course, this is not precise knowledge of the type "the result will be between a − σ and a + σ", because in this case we would not need any measurements at all. This is usually knowledge that is best represented in uncertain terms, like "perhaps (or "most likely", etc.) the measured value x is between a − σ and a + σ".

Traditionally the uncertain prior knowledge is not used in measurement processing. Traditional statistical methods neglect this additional knowledge and process only the measurement results. So it is desirable to be able to process this uncertain knowledge as well.

The usage of fuzzy logic and related problems. A natural way to process uncertainty is by using fuzzy logic [Z65]. In this way we represent every statement of the type "most likely, |x − a| ≤ σ" by a membership function μ(x) that, for each x, gives us the degree to which we are certain that this particular x satisfies the given condition. But there is a problem: we can use different membership functions to represent the same uncertain statements. What membership function should we choose?

What we are planning to do. In the present paper, we show that under some reasonable assumptions, Gaussian functions μ(x) = exp(−βx²) are the most adequate choice of membership functions for representing uncertainty in measurements. This representation was efficiently used in testing jet engines for airplanes and spaceships.

2. MOTIVATION OF THE FOLLOWING DEFINITIONS

We must have in mind that different experts can have different opinions. Therefore the final resulting knowledge about the value of a physical quantity does not consist of a single statement, but can be formed by adding several statements of several experts, e.g., "most likely, |x − a₁| ≤ σ₁", "most likely, |x − a₂| ≤ σ₂", ... The resulting statement is "most likely, |x − a₁| ≤ σ₁, and most likely, |x − a₂| ≤ σ₂, ...". In order to represent this resulting knowledge, we must choose some operation to represent "and" (an &-operation). Then the resulting membership function will be equal to μ(x) = μ₁(x) & μ₂(x) & ..., where μᵢ(x) corresponds to the opinion of the i-th expert. Which &-operation should we choose?

Experimental results, given in [HC76], [O77], and [Z78], show that among all possible operations, a, b → min(a, b) and a, b → a·b are the best fit for human reasoning. The min operation does not seem to be adequate for our purposes: if we use min, then, e.g., the degree to which a function x(t) satisfies the condition "for all t, most likely |x(t)| ≤ M" is equal to the minimum of the degrees of the statements "most likely, |x(t)| ≤ M" over all t. This minimum is attained where |x(t)| is the largest. Therefore, the function x₁(t) that is everywhere equal to 2M gets the same degree of consistency with the above rule as a function x₂(t) that is almost everywhere equal to 0 and attains the value 2M only on a small interval. Intuitively, however, for the first function, for which the inequality holds at no point at all, our degree of belief that x₁(t) satisfies the condition is practically 0, while for the second function, for which the inequality holds almost everywhere, our degree of belief must be close to 1. So, using min in our problem is inconsistent with our intuition, and therefore we must use the product for &.

Comment. Other arguments for choosing different &-operations are given in our previous publications [KR86] and [KQLFLKBR92].
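To make the contrast between min and product concrete, here is a small numerical sketch (not from the original paper: the bound M, the time grid, and the Gaussian-shaped degree function for a single value are all illustrative assumptions). It compares the two aggregation operations on the two functions discussed above, x₁(t) ≡ 2M and x₂(t), which equals 2M only on a small interval and 0 elsewhere.

```python
import numpy as np

# Illustrative assumptions: the degree to which a single value x satisfies
# "most likely |x| <= M" is modeled by a Gaussian-shaped membership function.
M = 1.0
def mu(x):
    return np.exp(-(x / M) ** 2)

t = np.linspace(0.0, 1.0, 101)                        # time grid
x1 = np.full_like(t, 2 * M)                           # violates the bound everywhere
x2 = np.where((t > 0.48) & (t < 0.52), 2 * M, 0.0)    # violates it only on a small interval

for name, x in [("x1 (always 2M)", x1), ("x2 (2M on a small interval)", x2)]:
    deg_min = np.min(mu(x))     # min-based degree of "for all t, most likely |x(t)| <= M"
    deg_prod = np.prod(mu(x))   # product-based degree of the same statement
    print(f"{name}: min = {deg_min:.3g}, product = {deg_prod:.3g}")

# min assigns the same degree to both functions, while the product assigns a much
# smaller degree to x1 than to x2, in agreement with the intuition described above.
```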

We want to describe membership functions for the following statements. We are interested in describing statements of the type "most likely, |x − a| ≤ σ", where x is unknown, and a, σ are known values. So we must describe to what extent any given value x satisfies this condition.

All these membership functions can be obtained from one of them. Evidently, x satisfies the inequality |x − a| ≤ σ if and only if the value y = (x − a)/σ satisfies the inequality |y| ≤ 1. Therefore, it is natural to assume that the statement "most likely, |x − a| ≤ σ" has the same degree of belief as the statement "most likely, |y| ≤ 1", where y = (x − a)/σ. So, if we are able to describe a membership function μ(y) that corresponds to the statement "most likely, |y| ≤ 1", then we will be able to describe our degree of belief μ₁(x) that x satisfies the condition "most likely, |x − a| ≤ σ" as μ((x − a)/σ). So the main problem is to find an appropriate function μ(x).

What if we ask several experts? A statement "most likely, |x − a| ≤ σ" means that an expert estimates x as a, and his own estimate of his precision is σ. Since such estimates are often very crude, it is reasonable to ask the opinion of several experts. After we have asked k experts, we get k statements of the same type: "most likely, |x − aᵢ| ≤ σᵢ", where i = 1, 2, ..., k, and aᵢ and σᵢ are the estimates of the i-th expert. The corresponding membership functions are μ((x − aᵢ)/σᵢ). Since all of them are experts, we believe what all of them say, and therefore our resulting knowledge is: "most likely, |x − a₁| ≤ σ₁, and most likely, |x − a₂| ≤ σ₂, and ...". Since we agreed to represent "and" as a product, the resulting membership function is equal to μ(x) = μ((x − a₁)/σ₁)·μ((x − a₂)/σ₂)·...·μ((x − a_k)/σ_k).
In case we have precise knowledge, and each of the experts describes an interval in which the unknown value x must lie, the resulting knowledge is that x belongs to the intersection of all these intervals. This intersection is itself an interval, and therefore the only effect of asking several experts is that we decrease the uncertainty. We do not change the form of the knowledge: it is still an interval, and in principle one smart expert could have named it from the very beginning. In a similar way, it seems reasonable to assume that in the general fuzzy case, by combining the opinions of several experts, we do not seriously add any new form of knowledge; we may slightly diminish the uncertainty domain for the unknown x, but that's all.

How to describe this argument mathematically: we must apply normalization. In mathematical terms, we would like to postulate that the resulting membership function μ(x) coincides with one of the functions μ((x − a)/σ), so that in principle it could represent the opinion of just one smart expert. We cannot, however, postulate precisely that. The reason is as follows. The bigger |y|, the smaller is our belief that "most likely, |y| ≤ 1". So, the function μ(y) must be monotonically decreasing for y > 0. Its maximum m is attained at y = 0. So, when we combine the two statements "most likely, |x| ≤ 1" and "most likely, |x − 0.3| ≤ 1", the resulting membership function μ̃(x) = μ(x)·μ(x − 0.3) is always smaller than m², because both factors are ≤ m, and for x ≠ 0 the first factor is < m, while for x = 0 the second one is. So even if m = 1, the function μ̃(x) never attains m, and thus it cannot be equal to μ((x − a)/σ). The solution to this problem is well known in fuzzy logic: we can normalize μ̃(x), i.e., pass from μ̃(x) to μ̃′(x) = N·μ̃(x), where the normalization constant N is equal to N = 1/(max_y μ̃(y)).

Comment. A motivation for using precisely this type of normalization is given in [KQLFLKBR92].

Now we are ready to formulate our demand.
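As a small illustration of this normalization step (a sketch, not part of the original paper; the Cauchy-shaped base membership function and the grid are assumptions chosen only for the example), the following code combines the two statements from the paragraph above and rescales the product so that its maximum is 1 again.

```python
import numpy as np

# An assumed symmetric, decreasing membership function with mu(0) = 1
# (Cauchy-shaped, chosen only for illustration).
def mu(y):
    return 1.0 / (1.0 + y ** 2)

x = np.linspace(-5.0, 5.0, 2001)
combined = mu(x) * mu(x - 0.3)   # "most likely |x| <= 1" and "most likely |x - 0.3| <= 1"

print("max of the raw product:", combined.max())      # strictly below m = 1
N = 1.0 / combined.max()                              # normalization constant N = 1/max_y
normalized = N * combined
print("max after normalization:", normalized.max())   # equal to 1 again
```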

3. MATHEMATICAL FORMULATION OF THE PROBLEM AND THE MAIN RESULT

Definition 1. By a membership function we will understand a continuous function μ(x) from the set R of all real numbers into the interval [0, 1].

Definition 2. We say that two membership functions μ(x) and ν(x) are equivalent if μ(x) = C·ν(x) for some constant C > 0.

Definition 3. We say that a membership function μ(x) is adequate for describing uncertainty of measurements if it satisfies the following conditions:
- it is symmetric (μ(−x) = μ(x));
- μ(x) is strictly decreasing on (0, ∞) and tends to 0 as x → ∞;
- for every finite sequence of pairs (a₁, σ₁), (a₂, σ₂), ..., (a_k, σ_k) there exist a and σ such that the product μ((x − a₁)/σ₁)·μ((x − a₂)/σ₂)·...·μ((x − a_k)/σ_k) is equivalent to μ((x − a)/σ).

THEOREM. Any membership function that is adequate for describing uncertainty of measurements is equivalent to exp(−βx²) for some β > 0.

(The proof is given in Section 5.)

Comment. So we conclude that Gaussian functions are the only adequate membership functions. These functions are indeed widely used: [K75], [BCDMMM85], [YIS85], [KM87, Ch. 5], etc. An alternative explanation of why Gaussian functions are used is given in [KR86] and in Section 8 of [KQLFLKBR92].
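The following numerical check is a sketch of the third condition of Definition 3 for the Gaussian family (β, the shifts aᵢ, the scales σᵢ, and the grid are arbitrary illustrative choices): the logarithm of the product μ((x − a₁)/σ₁)·μ((x − a₂)/σ₂) is an exact quadratic in x, so the product equals a constant C times μ((x − a)/σ) for suitable a and σ, i.e., it is equivalent to a single member of the same family.

```python
import numpy as np

# Sketch: for mu(y) = exp(-beta * y**2), the product of two shifted/scaled copies
# is again a shifted/scaled copy up to a constant factor.  All numbers are assumptions.
beta = 1.0
def mu(y):
    return np.exp(-beta * y ** 2)

a1, s1 = 0.4, 1.5
a2, s2 = -0.2, 0.7

x = np.linspace(-3.0, 3.0, 601)
product = mu((x - a1) / s1) * mu((x - a2) / s2)

# log(product) should be exactly quadratic in x: log C - beta*(x - a)**2 / sigma**2.
c2, c1, c0 = np.polyfit(x, np.log(product), deg=2)
residual = np.max(np.abs(np.log(product) - (c2 * x ** 2 + c1 * x + c0)))
print("max deviation from a quadratic:", residual)    # round-off level, i.e. numerically zero

# Recover a, sigma and the constant C from the quadratic coefficients.
a = -c1 / (2 * c2)
sigma = np.sqrt(-beta / c2)
C = np.exp(c0 - c2 * a ** 2)
print("a =", a, "sigma =", sigma, "C =", C)
```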

4. HOW THIS RESULT CAN BE USED AND HOW IT WAS USED

How it can be used. If for some physical quantity x several experts give their estimates a₁, a₂, ..., a_k, and they estimate the precision of their estimates as, correspondingly, σ₁, σ₂, ..., σ_k, then the resulting membership function is equal to μ(x) = exp(−(x − a)²/σ²), where

σ = (σ₁⁻² + σ₂⁻² + ... + σ_k⁻²)^(−1/2)  and  a = (a₁σ₁⁻² + ... + a_kσ_k⁻²)/(σ₁⁻² + ... + σ_k⁻²).

Comments. 1. These formulas can be easily obtained by explicitly computing μ(x) as the result of normalization of the product μ₁(x)·μ₂(x)·...·μ_k(x), where μᵢ(x) = exp(−(x − aᵢ)²/σᵢ²).

2. These formulas are, surprisingly, identical to the statistical formulas that correspond to the case when we have k statistical estimates aᵢ with precisions σᵢ and apply the least squares method Σᵢ (a − aᵢ)²/σᵢ² → min_a to get the resulting estimate for a. This is not such a big surprise, because the least squares method is based on the assumption of a Gaussian distribution. The positive side is that not only are the resulting formulas extremely simple to implement, but maybe there is no need to implement them at all, because we can reuse existing statistical software.
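Here is a minimal sketch of how these formulas might be applied in practice (the expert estimates aᵢ and precisions σᵢ below are invented for illustration). It computes the combined a and σ and cross-checks them against a brute-force product of the individual Gaussian membership functions followed by normalization.

```python
import numpy as np

# Invented expert estimates a_i and precisions sigma_i (illustration only).
a_i = np.array([10.2, 9.8, 10.5])
s_i = np.array([0.5, 0.3, 0.8])

# Combination formulas from the text:
#   sigma = (sigma_1^-2 + ... + sigma_k^-2)^(-1/2)
#   a     = (a_1*sigma_1^-2 + ... + a_k*sigma_k^-2) / (sigma_1^-2 + ... + sigma_k^-2)
w = s_i ** -2
sigma = np.sum(w) ** -0.5
a = np.sum(a_i * w) / np.sum(w)
print("combined estimate a =", a, ", combined precision sigma =", sigma)

# Cross-check: the normalized product of the individual Gaussian membership functions
# should coincide with exp(-(x - a)**2 / sigma**2) on a grid.
x = np.linspace(8.0, 12.0, 801)
product = np.prod(np.exp(-((x[:, None] - a_i) / s_i) ** 2), axis=1)
normalized = product / product.max()
direct = np.exp(-((x - a) / sigma) ** 2)
print("max discrepancy:", np.max(np.abs(normalized - direct)))  # small; vanishes as the grid is refined
```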

How this result was actually used. Expert estimates are extremely important in testing jet engines. The reason is that an important part of this testing is trying to figure out what is going on in the high-temperature regions, and the temperatures there are so high that we cannot place any sensors. So the only available information about these regions is the experts' estimates.

One of the authors (L.R.) used this fuzzy representation of uncertainty in designing software for the automated jet engine testing system IVK-12 [KR86]. This system was actually used to test jet engines for aircraft and spaceships.

Possible other applications. One area where we believe this approach can be useful is when we determine the position of a Space Shuttle. The existing systems use several different types of sensors, with different precisions, and often with only expert estimates of that precision. In order to make appropriate control decisions, we must combine these estimates into a single value. The fuzzy approach allows us to do that.

5. PROOF OF THE THEOREM

Comment. This proof contains some mathematical ideas from our previous publications [KR86] and [KQLFLKBR92].

1. Assume that μ(x) is an adequate function in the sense of the above definition. It is easy to check that if μ(x) is an adequate choice, then the result μ(x)/(max_y μ(y)) of its normalization is also an adequate choice. Since μ(x) is monotone, this maximum is attained at x = 0, and therefore the result of this normalization satisfies the condition μ(0) = 1. So, without losing generality, we will further assume that μ(0) = 1.

2. From the definition of an adequate function it follows, in particular, that μ(x)·μ(x) = C·μ((x − a)/σ) for some a, C, and σ. The left-hand side attains its maximum (= 1) at x = 0; the right-hand side attains its maximum (equal to C) at x = a. Since these two sides are one and the same function, we conclude that a = 0 and C = 1, i.e., that μ²(x) = μ(k₂x) for some constant k₂ (= 1/σ). For l(x) = log μ(x) we conclude that 2l(x) = l(k₂x). Likewise, if we consider 3, 4, etc. terms, we conclude that 3l(x) = l(k₃x), 4l(x) = l(k₄x), etc.
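As a quick sanity check of these functional equations (a sketch that anticipates the answer exp(−βx²) which the proof eventually produces), one can verify symbolically that for the Gaussian candidate the relation n·l(x) = l(k_n·x) indeed holds with k_n = √n.

```python
import sympy as sp

x, beta, n = sp.symbols('x beta n', positive=True)

# For mu(x) = exp(-beta*x**2) we have l(x) = log mu(x) = -beta*x**2,
# and n*l(x) = l(sqrt(n)*x), i.e. k_n = sqrt(n).
def l(t):
    return -beta * t ** 2

print(sp.simplify(n * l(x) - l(sp.sqrt(n) * x)))   # prints 0: the relation holds identically
```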

3. The function μ(x) for x > 0 is monotonically decreasing from 1 to 0. Therefore, l(x) is monotonically decreasing from 0 to −∞. Since μ is continuous, the function l(x) is also continuous, and, therefore, there exists an inverse function i(x) = l⁻¹(x), i.e., a function such that i(l(x)) = x for every x. For this inverse function, the equality nl(x) = l(k_n x) turns into i(nl(x)) = i(l(k_n x)) = k_n x = k_n i(l(x)). So, if we denote l(x) by X, we conclude that for every n there exists a k_n such that i(nX) = k_n i(X). If we substitute Y = nX, we conclude that i(Y) = k_n i(Y/n), and therefore i(Y/n) = (1/k_n)·i(Y). From these two equalities, we conclude that i((m/n)X) = (1/k_n)·i(mX) = (k_m/k_n)·i(X). So, for every rational number r, there exists a real number k(r) such that i(rX) = k(r)·i(X); in other words, the ratio i(rX)/i(X) does not depend on X.

4. Since i(X) is a continuous function, and any real number can be represented as a limit of a sequence of rational numbers, we conclude that this is true for real values of r as well: for every real number r there exists a k(r) such that i(rX) = k(r)·i(X). All monotone solutions of this functional equation are known: they are i(X) = A·X^p for some A and p [A66]. Therefore, the inverse function l(x) (for x > 0) takes a similar form, l(x) = B·x^m for some B and m. Taking into consideration that μ(x), and hence l(x), is an even function, we conclude that l(x) = B·|x|^m for all x.

5. Now, from the demand that the function μ(x) is adequate, we conclude that for every a > 0 we have μ(x − a)·μ(x + a) = C·μ((x − a₁)/σ) for some a₁, C, and σ. The left-hand side of this equation is an even function, so the right-hand side must also be even, and therefore a₁ = 0. So, μ(x − a)·μ(x + a) = C·μ(x/σ). For x = 0 we get μ(−a)·μ(a) = C, i.e., by symmetry, C = μ(a)². Passing to logarithms, we conclude that for every a there exists a k(a) (= 1/σ) such that l(x − a) + l(x + a) = l(k(a)·x) + 2l(a). If we substitute l(x) = B·|x|^m here and divide both sides by B, we conclude that |x − a|^m + |x + a|^m = k(a)^m·|x|^m + 2a^m.

6. When x > 0 and a is sufficiently small, the values x + a, x, and x − a are all positive, and, therefore, (x − a)^m + (x + a)^m = k(a)^m·x^m + 2a^m. If we move 2a^m to the left-hand side and divide both sides by x^m, we conclude that (1 − a/x)^m + (1 + a/x)^m − 2(a/x)^m = k(a)^m. The left-hand side of the resulting equality depends only on z = a/x, the right-hand side only on a. Therefore, if we choose any positive real number λ and take a′ = λa and x′ = λx instead of a and x, the left-hand side stays the same, and therefore the right-hand side must be the same, i.e., k(a)^m = k(λa)^m. Since λ was an arbitrary positive number, we conclude that k(a) does not depend on a at all, i.e., k(a)^m is a constant. Let us denote this constant by k. So the equation takes the form (1 − z)^m + (1 + z)^m = k + 2z^m. When z → 0, the left-hand side tends to 2 and the right-hand side to k, so from their equality we conclude that k = 2. The left-hand side is an analytical function of z for z close to 0. Therefore the right-hand side must also be a regular analytical function in a neighborhood of 0 (i.e., it must have a Taylor expansion at z = 0). Hence, m must be an integer. The values m < 2 are impossible, because for m = 0 our equality turns into the false equality 2 = 4, and for m = 1 it turns into the equality 1 − z + 1 + z = 2 + 2z, which is true only for z = 0. So m ≥ 2.

Since both sides are analytical in z, the second derivatives of both sides at z = 0 must be equal to each other. The second derivative of the left-hand side at z = 0 is equal to 2m(m − 1). The second derivative of the right-hand side is equal to 2m(m − 1)z^(m−2). If m > 2, then this derivative equals 0 at z = 0 and therefore cannot be equal to 2m(m − 1). So m ≤ 2, and since we have already shown that m ≥ 2, we conclude that m = 2. So, l(x) = Bx², and hence μ(x) = exp(−βx²) for some β > 0. Q.E.D.
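The key step above, that the functional equation (1 − z)^m + (1 + z)^m = 2 + 2z^m can hold identically in z only for m = 2, can also be checked symbolically; the sketch below (not part of the original proof) simply compares the two sides for a few integer exponents.

```python
import sympy as sp

z = sp.symbols('z')

# Functional equation from step 6 of the proof: (1 - z)**m + (1 + z)**m = 2 + 2*z**m.
# Only m = 2 turns it into an identity in z.
for m in range(0, 6):
    diff = sp.expand((1 - z) ** m + (1 + z) ** m - (2 + 2 * z ** m))
    print(f"m = {m}: left - right = {diff}")   # identically 0 only for m = 2
```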

6. CONCLUSIONS

How can we represent, in mathematical terms, uncertain numeric statements about the value x of a physical quantity, e.g., statements of the type "most likely, x is between a − σ and a + σ"? Reasonable arguments lead us to the conclusion that the most adequate membership functions for such statements are Gaussian functions μ(x) = exp(−(x − a)²/σ²).

If we use these membership functions, then we can apply simple algorithms to combine the opinions of several experts. Namely, if k experts give estimates a₁, ..., a_k, and they estimate the precision of their estimates as, correspondingly, σ₁, σ₂, ..., σ_k, then the resulting membership function is equal to μ(x) = exp(−(x − a)²/σ²), where σ = (σ₁⁻² + σ₂⁻² + ... + σ_k⁻²)^(−1/2) and a = (a₁σ₁⁻² + ... + a_kσ_k⁻²)/(σ₁⁻² + ... + σ_k⁻²). These formulas coincide with the ones that result from applying the statistical least squares method, so we do not even have to write new software.

This approach was applied to testing jet engines for aircraft and spaceships, and it may be useful in many other applications, e.g., in combining the results of several coordinate and distance sensors in spaceship navigation.

ACKNOWLEDGEMENTS

This work was started with the support of the Soviet Space Shuttle Program and was continued under NSF Grant No. CDA-9015006, NASA Research Grant No. 9-482, and a grant from the Institute for Manufacturing and Materials Management. The authors are greatly thankful to Harold Brown (GE Aircraft Engines) and Bob Lea (NASA Johnson Space Center) for inspiring discussions.

REFERENCES

[A66] J. Aczel. Lectures on functional equations and their applications. Academic Press, N.Y. and London, 1966.

[BCDMMM85] G. Bartolini, G. Casalino, F. Davoli, M. Mastretta, R. Minciardi, and E. Morten. Development of performance adaptive fuzzy controllers with applications to continuous casting plants. In: M. Sugeno (editor), Industrial applications of fuzzy control, North Holland, Amsterdam, 1985, pp. 73-86.

[HC76] H. M. Hersch and A. Caramazza. A fuzzy-set approach to modifiers and vagueness in natural languages. Journal of Experimental Psychology: General, 1976, Vol. 105, pp. 254-276.

[K75] A. Kaufmann. Introduction to the theory of fuzzy subsets. Vol. 1. Fundamental theoretical elements. Academic Press, N.Y., 1975.

[KM87] R. Kruse and K. D. Meyer. Statistics with vague data. D. Reidel, Dordrecht, 1987.

[KQLFLKBR92] V. Kreinovich, C. Quintana, R. Lea, O. Fuentes, A. Lokshin, S. Kumar, I. Boricheva, and L. Reznik. What non-linearity to choose? Mathematical foundations of fuzzy control. Proceedings of the 1992 International Fuzzy Systems and Intelligent Control Conference, Louisville, KY, 1992, pp. 349-412.

[KR86] V. Kreinovich and L. K. Reznik. Methods and models of formalizing prior information (on the example of processing measurements results). In: Analysis and formalization of computer experiments, Proceedings of Mendeleev Metrology Institute, Leningrad, 1986, pp. 37-41 (in Russian).

[O77] G. C. Oden. Integration of fuzzy logical information. Journal of Experimental Psychology: Human Perception and Performance, 1977, Vol. 3, No. 4, pp. 565-575.

[YIS85] O. Yagishita, O. Itoh, and M. Sugeno. Application of fuzzy reasoning to the water purification process. In: M. Sugeno (editor), Industrial applications of fuzzy control, North Holland, Amsterdam, 1985, pp. 19-40.

[Z65] L. Zadeh. Fuzzy sets. Information and Control, 1965, Vol. 8, pp. 338-353.

[Z78] H. J. Zimmermann. Results of empirical studies in fuzzy set theory. In: Applied General Systems Research (G. J. Klir, ed.), Plenum, New York, 1978, pp. 303-312.
