In probability theory, the expected value of a random variable is, intuitively, the long-run average value of repetitions of the experiment it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of all the numbers that come up approaches 3.5 as the number of rolls approaches infinity. In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.
More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure. [1] [2]
The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution. [3] For random variables such as these, the long tails of the distribution prevent the sum or integral from converging.
The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of the dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value.
The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator; that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.
In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information. For risk-neutral agents, the choice involves using the expected values of uncertain quantities, while for risk-averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function. One example of using expected values in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber or information security breach). [4]
Definition

Finite case

Let $X$ be a random variable with a finite number of finite outcomes $x_1, x_2, \ldots, x_k$ occurring with probabilities $p_1, p_2, \ldots, p_k$, respectively. The expectation of $X$ is defined as

$$\operatorname{E}[X] = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k.$$

Since all probabilities $p_i$ add up to 1 ($p_1 + p_2 + \cdots + p_k = 1$), the expected value is the $p_i$-weighted average of the values $x_i$, with the $p_i$ being the weights.

If all outcomes $x_i$ are equiprobable (that is, $p_1 = p_2 = \cdots = p_k$), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes $x_i$ are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than others. The intuition, however, remains the same: the expected value of $X$ is what one expects to happen on average.
Examples

- Let $X$ represent the outcome of a roll of a fair six-sided die. More specifically, $X$ will be the number of pips showing on the top face of the die after the toss. The possible values for $X$ are 1, 2, 3, 4, 5 and 6, all equally likely (each having the probability of 1/6). The expectation of $X$ is
$$\operatorname{E}[X] = 1\cdot\tfrac{1}{6} + 2\cdot\tfrac{1}{6} + 3\cdot\tfrac{1}{6} + 4\cdot\tfrac{1}{6} + 5\cdot\tfrac{1}{6} + 6\cdot\tfrac{1}{6} = 3.5.$$
- If one rolls the die $n$ times and computes the average (arithmetic mean) of the results, then as $n$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers (a short simulation sketch follows this list). One example sequence of ten rolls of the die is 2, 3, 1, 2, 5, 6, 2, 2, 2, 6, which has the average of 3.1, at a distance of 0.4 from the expected value of 3.5. The convergence is relatively slow: the probability that the average falls within the range 3.5 ± 0.1 is 21.6% for ten rolls, 46.1% for a hundred rolls, and 93.7% for a thousand rolls. See the figure for an illustration of the averages of longer sequences of rolls of the die and how they converge to the expected value of 3.5. More generally, the rate of convergence can be roughly quantified by, e.g., Chebyshev's inequality and the Berry–Esseen theorem.
- The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose the random variable $X$ represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
$$\operatorname{E}[\text{gain from a \$1 bet}] = -\$1\cdot\tfrac{37}{38} + \$35\cdot\tfrac{1}{38} = -\$\tfrac{1}{19}.$$
- That is, a $1 bet stands to lose about $0.0526 on average; in other words, its expected value is −$0.0526.
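Both computations above are easy to reproduce numerically. The following is a minimal sketch (assuming NumPy is available; the seed, sample sizes, and formatting are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(seed=0)            # fixed seed, for reproducibility only

# Law of large numbers: the sample mean of die rolls approaches E[X] = 3.5.
for n in (10, 100, 1_000, 100_000):
    rolls = rng.integers(1, 7, size=n)         # fair six-sided die, values 1..6
    print(n, rolls.mean())

# Probability-weighted average for the straight-up roulette bet.
values = np.array([-1.0, 35.0])                # lose the $1 stake, or win $35
probs = np.array([37 / 38, 1 / 38])            # American roulette: 38 pockets
print(values @ probs)                          # ≈ -0.0526
```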
Countably infinite case

Let $X$ be a random variable with a countable set of finite outcomes $x_1, x_2, \ldots$, occurring with probabilities $p_1, p_2, \ldots$, respectively, such that the infinite sum $\sum_{i=1}^{\infty} |x_i|\,p_i$ converges. The expected value of $X$ is defined as the series

$$\operatorname{E}[X] = \sum_{i=1}^{\infty} x_i\,p_i.$$

Remark 1. Observe that $\bigl|\operatorname{E}[X]\bigr| \leq \sum_{i=1}^{\infty} |x_i|\,p_i < \infty$, so the series defining $\operatorname{E}[X]$ converges absolutely.

Remark 2. Due to absolute convergence, the expected value does not depend on the order in which the outcomes are presented. By contrast, a conditionally convergent series can be made to converge or diverge arbitrarily, via the Riemann rearrangement theorem.
Examples

- Suppose $X$ takes the values $1, 2, 3, \ldots$ with respective probabilities $\tfrac{k}{2}, \tfrac{k}{8}, \tfrac{k}{24}, \ldots, \tfrac{k}{i\,2^{i}}, \ldots$, where $k = \tfrac{1}{\ln 2}$ is a normalizing constant that ensures the probabilities sum to one. Then
$$\operatorname{E}[X] = 1\left(\frac{k}{2}\right) + 2\left(\frac{k}{8}\right) + 3\left(\frac{k}{24}\right) + \dots = \frac{k}{2} + \frac{k}{4} + \frac{k}{8} + \dots = k.$$
Since this series converges absolutely, the expected value of $X$ is $k = \tfrac{1}{\ln 2}$.
- For an example that is not absolutely convergent, suppose the random variable $X$ takes the values $1, -2, 3, -4, \ldots$, with respective probabilities $\tfrac{c}{1^{2}}, \tfrac{c}{2^{2}}, \tfrac{c}{3^{2}}, \tfrac{c}{4^{2}}, \ldots$, where $c = \tfrac{6}{\pi^{2}}$ is a normalizing constant that ensures the probabilities sum to one. Then the infinite sum
$$\sum_{i=1}^{\infty} x_i\,p_i = c\left(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \dots\right)$$
converges and its sum is equal to $c\,\ln 2$. However, it would be incorrect to claim that the expected value of $X$ is equal to this number; in fact $\operatorname{E}[X]$ does not exist, because this series does not converge absolutely (compare the alternating harmonic series).
- An example that diverges arises in the context of the St. Petersburg paradox: let $x_i = 2^{i}$ and $p_i = \tfrac{1}{2^{i}}$ for $i = 1, 2, 3, \ldots$. The expected value calculation gives
$$\sum_{i=1}^{\infty} x_i\,p_i = 2\cdot\frac{1}{2} + 4\cdot\frac{1}{4} + 8\cdot\frac{1}{8} + \dots = 1 + 1 + 1 + \dots\,.$$
Since this does not converge but instead keeps growing, the expected value is infinite (a short numerical sketch follows).
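A quick numerical sketch of this divergence: each term of the series contributes exactly 1, so the partial sums grow without bound (the cutoff of 20 terms is arbitrary).

```python
# Partial sums of the St. Petersburg expectation, sum over i of x_i * p_i.
partial = 0.0
for i in range(1, 21):
    partial += 2**i * 2.0**(-i)   # x_i = 2^i, p_i = 2^(-i), so each term equals 1
    print(i, partial)             # the partial sum is simply i: no finite limit
```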
Absolutely continuous case

If $X$ is a random variable whose cumulative distribution function admits a density $f(x)$, then the expected value is defined as the following Lebesgue integral:

$$\operatorname{E}[X] = \int_{\mathbb{R}} x\,f(x)\,dx.$$

Remark. From a computational perspective, the integral in the definition of $\operatorname{E}[X]$ may often be treated as an improper Riemann integral. Specifically, if the function $x\,f(x)$ is Riemann-integrable on every finite interval and
$$\min\left((-1)\cdot{\hbox{(R)}}\int_{-\infty}^{0} x\,f(x)\,dx,\ {\hbox{(R)}}\int_{0}^{+\infty} x\,f(x)\,dx\right) < \infty,$$
then the values (whether finite or infinite) of both integrals agree.
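As a concrete illustration of the absolutely continuous case, the expected value of an exponential distribution can be obtained by integrating $x\,f(x)$ numerically. A sketch assuming SciPy is available; the rate parameter is an arbitrary choice:

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0                                        # arbitrary rate of an exponential law
f = lambda x: lam * np.exp(-lam * x)             # density on [0, +inf)

ev, _ = quad(lambda x: x * f(x), 0, np.inf)      # E[X] = integral of x f(x) dx
print(ev)                                        # ≈ 0.5 == 1 / lam
```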
General case

In general, if $X$ is a random variable defined on a probability space $(\Omega, \Sigma, \operatorname{P})$, then the expected value of $X$, denoted by $\operatorname{E}[X]$, is defined as the Lebesgue integral

$$\operatorname{E}[X] = \int_{\Omega} X(\omega)\,d\operatorname{P}(\omega).$$

Remark 1. Writing $X_{+}(\omega) = \max(X(\omega), 0)$ and $X_{-}(\omega) = -\min(X(\omega), 0)$, so that $X = X_{+} - X_{-}$ with both parts non-negative, the expected value is defined whenever $\min(\operatorname{E}[X_{+}], \operatorname{E}[X_{-}]) < \infty$, and in that case
$$\operatorname{E}[X] = \int_{\Omega} X(\omega)\,d\operatorname{P}(\omega) = \int_{\Omega} X_{+}(\omega)\,d\operatorname{P}(\omega) - \int_{\Omega} X_{-}(\omega)\,d\operatorname{P}(\omega) = \operatorname{E}[X_{+}] - \operatorname{E}[X_{-}],$$
where each of $\operatorname{E}[X_{+}]$ and $\operatorname{E}[X_{-}]$ is non-negative and possibly infinite. The following scenarios are possible: $\operatorname{E}[X]$ is finite when both parts have finite expectation, equals $+\infty$ or $-\infty$ when exactly one of them is infinite, and is undefined when both are infinite.
Remark 2. If $F(x)$ is the cumulative distribution function of $X$, then
$$\operatorname{E}[X] = \int_{-\infty}^{+\infty} x\,dF(x),$$
where the integral is interpreted in the sense of Lebesgue–Stieltjes.
Remark 3. An example of a distribution for which there is no expected value is the Cauchy distribution.
Remark 4. For multidimensional random variables, the expected value is defined per component, i.e.
$$\operatorname{E}[(X_1, \ldots, X_n)] = (\operatorname{E}[X_1], \ldots, \operatorname{E}[X_n]),$$
and, for a random matrix $X$ with elements $X_{ij}$, $(\operatorname{E}[X])_{ij} = \operatorname{E}[X_{ij}]$.
Basic properties
The properties below replicate or follow immediately from those of Lebesgue integral.
E[1_A] = P(A)

If $A$ is an event, then $\operatorname{E}[{\mathbf 1}_{A}] = \operatorname{P}(A)$, where ${\mathbf 1}_{A}$ is the indicator function of the set $A$.

Proof. By definition of the Lebesgue integral of the simple function ${\mathbf 1}_{A}$,
$$\operatorname{E}[{\mathbf 1}_{A}] = 1\cdot\operatorname{P}(A) + 0\cdot\operatorname{P}(\Omega\setminus A) = \operatorname{P}(A).$$
If X = Y (a.s.) then E[X] = E[Y]
The statement follows from the definition of the Lebesgue integral if we notice that $X_{+} = Y_{+}$ (a.s.), $X_{-} = Y_{-}$ (a.s.), and that changing a simple random variable on a set of probability zero does not alter the expected value.
Expected value of a constant

If $X$ is a random variable and $X = c$ (a.s.), where $c \in \mathbb{R}$, then $\operatorname{E}[X] = c$. In particular, for an arbitrary random variable $X$, $\operatorname{E}\bigl[\operatorname{E}[X]\bigr] = \operatorname{E}[X]$.
Linearity

The expected value operator (or expectation operator) $\operatorname{E}[\cdot]$ is linear in the sense that
$$\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y], \qquad \operatorname{E}[aX] = a\,\operatorname{E}[X],$$
where $X$ and $Y$ are (arbitrary) random variables, and $a$ is a scalar.

More rigorously, let $X$ and $Y$ be random variables whose expected values are defined (i.e. different from $\infty - \infty$).

- If $\operatorname{E}[X] + \operatorname{E}[Y]$ is also defined (i.e. differs from $\infty - \infty$), then $\operatorname{E}[X + Y] = \operatorname{E}[X] + \operatorname{E}[Y]$.
- Let $\operatorname{E}[X]$ be finite, and $a$ be a finite scalar. Then $\operatorname{E}[aX] = a\,\operatorname{E}[X]$.
Proof. 1. We prove additivity in several steps.
1a. If $X$ and $Y$ are simple and non-negative, then, taking intersections where necessary, one can re-write $X$ and $Y$ in the form
$$X = \sum_{i=1}^{n} x_i\,{\mathbf 1}_{A_i}$$
and
$$Y = \sum_{i=1}^{n} y_i\,{\mathbf 1}_{A_i}$$
for some measurable pairwise-disjoint sets $A_1, \ldots, A_n$ partitioning $\Omega$, with ${\mathbf 1}_{A_i}$ being the indicator function of the set $A_i$. By a straightforward check, the additivity follows.
1b. Assuming that $X$ and $Y$ are arbitrary and non-negative, recall that every non-negative measurable function is a pointwise limit of a pointwise non-decreasing sequence of simple non-negative ones. Let $\{X_n\}$ and $\{Y_n\}$ be such sequences converging to $X$ and $Y$, respectively. We see that $\{X_n + Y_n\}$ pointwise non-decreases, and $X_n + Y_n \to X + Y$ pointwise. By the monotone convergence theorem and case 1a,
$$\operatorname{E}[X + Y] = \lim_n \operatorname{E}[X_n + Y_n] = \lim_n \bigl(\operatorname{E}[X_n] + \operatorname{E}[Y_n]\bigr) = \operatorname{E}[X] + \operatorname{E}[Y].$$
(The reader can verify that using the monotone convergence theorem this way does not lead to circular logic).
1c. In the general case, if $Z = X + Y$, then $Z_{+} - Z_{-} = X_{+} - X_{-} + Y_{+} - Y_{-}$, and
$$Z_{+} + X_{-} + Y_{-} = Z_{-} + X_{+} + Y_{+}.$$
Splitting up via case 1b,
$$\operatorname{E}[Z_{+}] + \operatorname{E}[X_{-}] + \operatorname{E}[Y_{-}] = \operatorname{E}[Z_{-}] + \operatorname{E}[X_{+}] + \operatorname{E}[Y_{+}],$$
which is equivalent to
$$\operatorname{E}[Z_{+}] - \operatorname{E}[Z_{-}] = \bigl(\operatorname{E}[X_{+}] - \operatorname{E}[X_{-}]\bigr) + \bigl(\operatorname{E}[Y_{+}] - \operatorname{E}[Y_{-}]\bigr),$$
and finally
$$\operatorname{E}[X + Y] = \operatorname{E}[Z] = \operatorname{E}[X] + \operatorname{E}[Y].$$
2. To prove homogeneity, we first assume that the scalar $a$ above is non-negative. The finiteness of $\operatorname{E}[X]$ implies that $X$ is finite (a.s.). Therefore, $aX$ is also finite (a.s.), which guarantees that $\operatorname{E}[aX]$ is defined. The equality $\operatorname{E}[aX] = a\,\operatorname{E}[X]$ is then a straightforward check based on the definition of the Lebesgue integral.
If $a < 0$, the claim follows by first proving that $\operatorname{E}[-X] = -\operatorname{E}[X]$, which holds by observing that $(-X)_{+} = X_{-}$ and vice versa, and then applying the non-negative case to $|a|$.
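As a quick numerical check of linearity, here is a Monte Carlo sketch; the distributions, seed, and scalar are arbitrary illustrative choices (and independence of the two samples is not required for linearity to hold):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000_000)       # E[X] = 2
y = rng.normal(loc=3.0, scale=1.0, size=1_000_000)   # E[Y] = 3
a = 5.0

print(np.mean(a * x + y))            # ≈ a*E[X] + E[Y] = 13
print(a * np.mean(x) + np.mean(y))   # the same value, by linearity
```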
E[X] exists and is finite if and only if E[|X|] is finite

The following statements regarding a random variable $X$ are equivalent:

- $\operatorname{E}[X]$ exists and is finite.
- Both $\operatorname{E}[X_{+}]$ and $\operatorname{E}[X_{-}]$ are finite.
- $\operatorname{E}|X|$ is finite.

Sketch of proof. Indeed, $|X| = X_{+} + X_{-}$. By linearity, $\operatorname{E}|X| = \operatorname{E}[X_{+}] + \operatorname{E}[X_{-}]$. The above equivalence relies on the definition of the Lebesgue integral and the measurability of $X$.

Remark. For the reasons above, the expressions "$X$ is integrable" and "the expected value of $X$ is finite" are used interchangeably when speaking of a random variable throughout this article.
If X ≥ 0 (a.s.) then E[X] ≥ 0

Proof. Denote by $X_{+}$ and $X_{-}$ the positive and negative parts of $X$. If $X \geq 0$ (a.s.), then $X_{-} = 0$ (a.s.), and hence, by definition of the Lebesgue integral, $\operatorname{E}[X_{-}] = 0$. On the other hand, $X_{+} = X \geq 0$ (a.s.), so, through a similar argument, $\operatorname{E}[X_{+}] \geq 0$, and therefore
$$\operatorname{E}[X] = \operatorname{E}[X_{+}] - \operatorname{E}[X_{-}] = \operatorname{E}[X_{+}] \geq 0.$$

Monotonicity
If $X \leq Y$ (a.s.), and both $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ exist, then $\operatorname{E}[X] \leq \operatorname{E}[Y]$.

Remark. $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ exist in the sense that $\min(\operatorname{E}[X_{+}], \operatorname{E}[X_{-}]) < \infty$ and $\min(\operatorname{E}[Y_{+}], \operatorname{E}[Y_{-}]) < \infty$.

The proof follows from the linearity and the previous property if we set $Z = Y - X$ and notice that $Z \geq 0$ (a.s.).
If |X| ≤ Y (a.s.) and E[Y] is finite then so is E[X]

Let $X$ and $Y$ be random variables such that $|X| \leq Y$ (a.s.) and $\operatorname{E}[Y] < \infty$. Then $\operatorname{E}[X]$ exists and is finite.

Proof. Due to the non-negativity of $|X|$, $\operatorname{E}|X|$ exists, finite or infinite. By monotonicity, $\operatorname{E}|X| \leq \operatorname{E}[Y] < \infty$, so $\operatorname{E}|X|$ is finite which, as we saw earlier, is equivalent to $\operatorname{E}[X]$ existing and being finite.
If E|X^β| < ∞ and 0 < α < β then E|X^α| < ∞

The proposition below will be used to prove the extremal property of $\operatorname{E}[X]$ later on.

Proposition. If $X$ is a random variable, then so is $|X^{\alpha}|$ for every $\alpha > 0$. If, in addition, $\operatorname{E}|X^{\beta}| < \infty$ and $0 < \alpha < \beta$, then $\operatorname{E}|X^{\alpha}| < \infty$.

Proof. To see why the first statement holds, observe that $|X^{\alpha}|$ is a composition of $X$ with the function $x \mapsto |x^{\alpha}|$. As a composition of two measurable functions, $|X^{\alpha}|$ is measurable. To prove the second statement, define
$$Y(\omega) = \begin{cases} 1 & \text{if } |X(\omega)| \leq 1,\\ |X^{\beta}(\omega)| & \text{if } |X(\omega)| > 1. \end{cases}$$
The reader can verify that $Y$ is a random variable and $|X^{\alpha}| \leq Y$. By non-negativity, $\operatorname{E}|X^{\alpha}|$ exists. By monotonicity,
$$\operatorname{E}|X^{\alpha}| \leq \operatorname{E}[Y] \leq 1 + \operatorname{E}|X^{\beta}| < \infty.$$
Counterexample for infinite measure

The requirement that $\operatorname{P}(\Omega) < \infty$ is essential. By way of counterexample, consider the measurable space $\bigl([1, +\infty), \mathcal{B}_{[1,+\infty)}, \lambda\bigr)$, where $\mathcal{B}_{[1,+\infty)}$ is the Borel $\sigma$-algebra on the interval $[1, +\infty)$ and $\lambda$ is the linear Lebesgue measure. For $X(x) = \tfrac{1}{x}$, the reader can prove that $\int_{[1,+\infty)} X\,d\lambda = +\infty$ even though $\int_{[1,+\infty)} X^{2}\,d\lambda = 1 < \infty$, so the conclusion of the proposition fails for $\alpha = 1$, $\beta = 2$. (Sketch of proof: use "continuity from below" with respect to the sets $[1, n]$ and reduce to a Riemann integral on each finite subinterval $[1, n]$.)
Extremal property

Recall, as we proved early on, that if $X$ is a random variable, then so is $X^{2}$.

Proposition (extremal property of $\operatorname{E}[X]$). Let $X$ be a random variable with $\operatorname{E}[X^{2}] < \infty$. Then $\operatorname{E}[X]$ and $\operatorname{Var}[X]$ are finite, and $\operatorname{E}[X]$ is the best least squares approximation for $X$ among constants. Specifically,

- for every constant $c$, $\operatorname{E}[(X - c)^{2}] \geq \operatorname{Var}[X]$;
- equality holds if and only if $c = \operatorname{E}[X]$

(here $\operatorname{Var}[X]$ denotes the variance of $X$).
Remark (intuitive interpretation of extremal property). In intuitive terms, the extremal property says that if one is asked to predict the outcome of a trial of a random variable $X$, then $\operatorname{E}[X]$ is, in some practically useful sense, one's best bet if no advance information about the outcome is available. If, on the other hand, one does have some advance knowledge regarding the outcome, then (again, in some practically useful sense) one's bet may be improved upon by using conditional expectations (of which $\operatorname{E}[X]$ is a special case) rather than $\operatorname{E}[X]$.
Proof of proposition. By the above properties, both $\operatorname{E}[X]$ and $\operatorname{E}[X^{2}]$ are finite, and
$$\operatorname{E}[(X - c)^{2}] = \operatorname{E}[X^{2}] - 2c\,\operatorname{E}[X] + c^{2} = \operatorname{Var}[X] + \bigl(c - \operatorname{E}[X]\bigr)^{2},$$
whence the extremal property follows.
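The extremal property is easy to see numerically by scanning constants $c$ and comparing the resulting mean squared error with the sample mean; a minimal sketch, where the distribution, grid, and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=4.0, scale=1.5, size=100_000)

cs = np.linspace(2.0, 6.0, 401)                  # candidate constants c
mse = [np.mean((x - c) ** 2) for c in cs]        # E[(X - c)^2], estimated for each c
print(cs[int(np.argmin(mse))], x.mean())         # the minimizer is (close to) the mean
```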
Non-degeneracy

If $\operatorname{E}|X| = 0$, then $X = 0$ (a.s.).

Proof. For every positive constant $c$, $c\cdot\operatorname{P}(|X| \geq c) \leq \operatorname{E}|X|$. Indeed,
$$c\,{\mathbf 1}_{\{|X| \geq c\}} \leq |X|,$$
where ${\mathbf 1}_{\{|X| \geq c\}}$ is the indicator function of the set $\{|X| \geq c\}$. By a property above, the finiteness of $\operatorname{E}|X|$ guarantees that the expected value $\operatorname{E}\bigl[c\,{\mathbf 1}_{\{|X| \geq c\}}\bigr]$ is also finite. By monotonicity,
$$c\cdot\operatorname{P}(|X| \geq c) = \operatorname{E}\bigl[c\,{\mathbf 1}_{\{|X| \geq c\}}\bigr] \leq \operatorname{E}|X| = 0.$$
For a positive integer $n$, set $c = \tfrac{1}{n}$. Define
$$A_n = \Bigl\{\,|X| \geq \tfrac{1}{n}\,\Bigr\}$$
and
$$A = \{\,|X| > 0\,\} = \{X \neq 0\}.$$
The chain of sets $A_1 \subseteq A_2 \subseteq \cdots$ monotonically non-decreases, and $\bigcup_n A_n = A$. By "continuity from below",
$$\operatorname{P}(A) = \lim_n \operatorname{P}(A_n).$$
Applying this formula, we obtain $\operatorname{P}(X \neq 0) = \lim_n \operatorname{P}(A_n) = 0$, as required.
If E[X] < +∞ then X < +∞ (a.s.)

Proof. Since $\operatorname{E}[X]$ is defined (i.e. $\min(\operatorname{E}[X_{+}], \operatorname{E}[X_{-}]) < \infty$) and $\operatorname{E}[X] < +\infty$, the value $\operatorname{E}[X_{+}]$ is finite, and we want to show that $X_{+} < +\infty$ (a.s.). We will show that $\operatorname{P}(B) = 0$, where $B = \{\omega \mid X_{+}(\omega) = +\infty\}$. If $\operatorname{P}(B) = 0$, the proof is complete. Assuming that $\operatorname{P}(B) > 0$, define, for every positive integer $n$, the simple random variable
$$Y_n = n\,{\mathbf 1}_{B}.$$
Clearly, $Y_n \leq X_{+}$ and
$$\operatorname{E}[Y_n] = n\,\operatorname{P}(B),$$
where the factor $\operatorname{P}(B) > 0$ is a constant independent of $n$. The sequence $\{\operatorname{E}[Y_n]\}$ strictly increases without bound, so, by the definition of the Lebesgue integral and monotonicity,
$$\operatorname{E}[X_{+}] \geq \sup_n \operatorname{E}[Y_n] = +\infty,$$
in contradiction with the earlier conclusion that $\operatorname{E}[X_{+}]$ is finite. Hence $\operatorname{P}(B) = 0$, i.e. $X < +\infty$ (a.s.).
Corollary: if E[X] > -∞ then X > -∞ (a.s.)

Corollary: if E|X| < ∞ then X is finite (a.s.)

|E[X]| ≤ E|X|

For an arbitrary random variable $X$, $\bigl|\operatorname{E}[X]\bigr| \leq \operatorname{E}|X|$.

Proof. By definition of the Lebesgue integral,
$$\bigl|\operatorname{E}[X]\bigr| = \bigl|\operatorname{E}[X_{+}] - \operatorname{E}[X_{-}]\bigr| \leq \operatorname{E}[X_{+}] + \operatorname{E}[X_{-}] = \operatorname{E}|X|.$$
Note that this result can also be proved based on Jensen's inequality.
Non-multiplicativity

In general, the expected value operator is not multiplicative, i.e. $\operatorname{E}[XY]$ is not necessarily equal to $\operatorname{E}[X]\,\operatorname{E}[Y]$. Indeed, let $X$ assume the values 1 and −1 with probability 0.5 each, and set $Y = X$. Then
$$\operatorname{E}[XY] = \operatorname{E}[X^{2}] = 1,$$
and
$$\operatorname{E}[X]\,\operatorname{E}[Y] = 0\cdot 0 = 0.$$
The amount by which the multiplicativity fails is called the covariance:
$$\operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\,\operatorname{E}[Y].$$
If, however, the random variables $X$ and $Y$ are independent, then $\operatorname{E}[XY] = \operatorname{E}[X]\,\operatorname{E}[Y]$ and $\operatorname{Cov}(X, Y) = 0$.
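The ±1 counterexample above, and the definition of covariance, can be checked with a short simulation; a sketch where the sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.choice([-1.0, 1.0], size=1_000_000)   # X = ±1 with probability 0.5 each
y = x                                         # fully dependent choice: Y = X

print(np.mean(x * y), np.mean(x) * np.mean(y))    # 1.0 versus ≈ 0.0
print(np.mean(x * y) - np.mean(x) * np.mean(y))   # sample covariance ≈ 1
```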
Counterexample: E[X_n] does not converge to E[X] despite pointwise convergence

Let $\bigl([0,1], \mathcal{B}_{[0,1]}, \operatorname{P}\bigr)$ be the probability space, where $\mathcal{B}_{[0,1]}$ is the Borel $\sigma$-algebra on $[0,1]$ and $\operatorname{P}$ the linear Lebesgue measure. For $n \geq 1$, define a sequence of random variables
$$X_n = n\cdot{\mathbf 1}_{\left[0,\tfrac{1}{n}\right]}$$
and a random variable
$$X = 0$$
on $[0,1]$, with ${\mathbf 1}_{\left[0,\tfrac{1}{n}\right]}$ being the indicator function of the set $\left[0,\tfrac{1}{n}\right]$.

For every $\omega \in (0,1]$, $X_n(\omega) = 0$ as soon as $n > \tfrac{1}{\omega}$, and $X(\omega) = 0$, so $X_n \to X$ pointwise (a.s.). On the other hand, $\operatorname{E}[X_n] = n\cdot\operatorname{P}\bigl(\bigl[0,\tfrac{1}{n}\bigr]\bigr) = 1$ for every $n$, and hence $\operatorname{E}[X_n]$ does not converge to $\operatorname{E}[X] = 0$.
Countable non-additivity

In general, the expected value operator is not $\sigma$-additive, i.e.
$$\operatorname{E}\left[\sum_{i=1}^{\infty} X_i\right] \neq \sum_{i=1}^{\infty}\operatorname{E}[X_i]$$
in general.

By way of counterexample, let $\bigl([0,1], \mathcal{B}_{[0,1]}, \operatorname{P}\bigr)$ be the probability space, where $\mathcal{B}_{[0,1]}$ is the Borel $\sigma$-algebra on $[0,1]$ and $\operatorname{P}$ the linear Lebesgue measure. Define a sequence of random variables
$$X_n = n\cdot{\mathbf 1}_{\left[0,\tfrac{1}{n}\right]} - (n+1)\cdot{\mathbf 1}_{\left[0,\tfrac{1}{n+1}\right]}$$
on $[0,1]$, with ${\mathbf 1}_{A}$ being the indicator function of the set $A$. For the pointwise sums, the series telescopes, and we have
$$\sum_{i=1}^{n} X_i = {\mathbf 1}_{[0,1]} - (n+1)\cdot{\mathbf 1}_{\left[0,\tfrac{1}{n+1}\right]}, \qquad \sum_{i=1}^{\infty} X_i = 1 \quad\text{(a.s.)}.$$

By finite additivity,
$$\operatorname{E}[X_n] = n\cdot\operatorname{P}\bigl(\bigl[0,\tfrac{1}{n}\bigr]\bigr) - (n+1)\cdot\operatorname{P}\bigl(\bigl[0,\tfrac{1}{n+1}\bigr]\bigr) = 0, \qquad\text{so}\quad \sum_{i=1}^{\infty}\operatorname{E}[X_i] = 0.$$

On the other hand, $\operatorname{E}\left[\sum_{i=1}^{\infty} X_i\right] = 1$, and hence the two sides disagree.
Countable additivity for non-negative random variables

Let $\{X_i\}_{i=0}^{\infty}$ be non-negative random variables. It follows from the monotone convergence theorem that
$$\operatorname{E}\left[\sum_{i=0}^{\infty} X_i\right] = \sum_{i=0}^{\infty}\operatorname{E}[X_i].$$
E[XY] = E[X]E[Y] for independent X and Y

Let $X$ and $Y$ be independent random variables with finite expectations $\operatorname{E}[X]$ and $\operatorname{E}[Y]$. Then $\operatorname{E}[XY] = \operatorname{E}[X]\,\operatorname{E}[Y]$.
Proof. 1. The case of non-negative random variables taking values in $\{0, \tfrac{1}{n}, \tfrac{2}{n}, \ldots\}$.

Given a positive integer $n$, let the random variables $X$ and $Y$ assume their values in the set $\bigl\{0, \tfrac{1}{n}, \tfrac{2}{n}, \tfrac{3}{n}, \ldots\bigr\}$. Then
$$X = \sum_{i\geq 0} \frac{i}{n}\,{\mathbf 1}_{\{X = \frac{i}{n}\}} \quad\text{and}\quad Y = \sum_{j\geq 0} \frac{j}{n}\,{\mathbf 1}_{\{Y = \frac{j}{n}\}},$$
or equivalently,
$$XY = \sum_{i,j\geq 0} \frac{ij}{n^{2}}\,{\mathbf 1}_{\{X = \frac{i}{n}\}\cap\{Y = \frac{j}{n}\}},$$
where ${\mathbf 1}_{A}$ is the indicator function of the set $A$, and the events $\{X = \tfrac{i}{n}\}\cap\{Y = \tfrac{j}{n}\}$ form a disjoint union of $\Omega$. By definition of expected value,
$$\operatorname{E}[XY] = \sum_{i,j\geq 0} \frac{ij}{n^{2}}\,\operatorname{P}\Bigl(X = \tfrac{i}{n},\ Y = \tfrac{j}{n}\Bigr).$$
Due to independence,
$$\operatorname{P}\Bigl(X = \tfrac{i}{n},\ Y = \tfrac{j}{n}\Bigr) = \operatorname{P}\Bigl(X = \tfrac{i}{n}\Bigr)\,\operatorname{P}\Bigl(Y = \tfrac{j}{n}\Bigr),$$
whence
$$\operatorname{E}[XY] = \left(\sum_{i\geq 0}\frac{i}{n}\,\operatorname{P}\Bigl(X = \tfrac{i}{n}\Bigr)\right)\left(\sum_{j\geq 0}\frac{j}{n}\,\operatorname{P}\Bigl(Y = \tfrac{j}{n}\Bigr)\right) = \operatorname{E}[X]\,\operatorname{E}[Y].$$
2. The case of non-negative random variables.

Let $X$ and $Y$ be (arbitrary) non-negative random variables. For a positive integer $n$, define
$$X_{n}(\omega) = \begin{cases}\dfrac{m}{n} & \text{if } \dfrac{m}{n} \leq X(\omega) < \dfrac{m+1}{n},\\[6pt] 0 & \text{if } X(\omega) = +\infty,\end{cases}$$
for an arbitrary integer $m \geq 0$. Note that $X_n$ is a random variable of the type considered in case 1 and $X_n \leq X$.

As we saw previously, the finiteness of $\operatorname{E}[X]$ implies that $X$ is finite almost surely, and consequently $X_n \leq X \leq X_n + \tfrac{1}{n}$ (a.s.). This, in turn, implies that $\operatorname{E}[X_n] \leq \operatorname{E}[X] \leq \operatorname{E}[X_n] + \tfrac{1}{n}$.

Let the random variable $Y_n$ be defined the same way but with respect to $Y$. We have $Y_n \leq Y \leq Y_n + \tfrac{1}{n}$ (a.s.), and $X_n$ and $Y_n$ were shown to satisfy $\operatorname{E}[X_n Y_n] = \operatorname{E}[X_n]\,\operatorname{E}[Y_n]$. Therefore,
$$\bigl|\operatorname{E}[XY] - \operatorname{E}[X]\,\operatorname{E}[Y]\bigr| \leq \frac{C}{n}$$
for some constant $C$ independent of $n$. It follows that, being independent of $n$, the constant value $\bigl|\operatorname{E}[XY] - \operatorname{E}[X]\,\operatorname{E}[Y]\bigr|$ can only be equal to 0.
3. The general case.

Let $X$ and $Y$ be arbitrary independent random variables with finite expectations. Writing $X = X_{+} - X_{-}$ and $Y = Y_{+} - Y_{-}$, we have
$$\operatorname{E}[XY] = \operatorname{E}[X_{+}Y_{+}] - \operatorname{E}[X_{+}Y_{-}] - \operatorname{E}[X_{-}Y_{+}] + \operatorname{E}[X_{-}Y_{-}] = \bigl(\operatorname{E}[X_{+}] - \operatorname{E}[X_{-}]\bigr)\bigl(\operatorname{E}[Y_{+}] - \operatorname{E}[Y_{-}]\bigr) = \operatorname{E}[X]\,\operatorname{E}[Y],$$
where each product term is handled by case 2 together with the independence of the corresponding pairs of positive and negative parts.
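A Monte Carlo check of multiplicativity under independence, as a sketch with arbitrarily chosen distributions and seed:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 2.0, size=1_000_000)   # E[X] = 1
y = rng.exponential(3.0, size=1_000_000)    # E[Y] = 3, drawn independently of X

print(np.mean(x * y))             # ≈ 3
print(np.mean(x) * np.mean(y))    # ≈ 3 as well, since X and Y are independent
```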
Inequalities

Cauchy–Bunyakovsky–Schwarz inequality

The Cauchy–Bunyakovsky–Schwarz inequality states that
$$\bigl(\operatorname{E}[XY]\bigr)^{2} \leq \operatorname{E}[X^{2}]\cdot\operatorname{E}[Y^{2}].$$
Markov's inequality

For a nonnegative random variable $X$ and any $a > 0$, Markov's inequality states that
$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}[X]}{a}.$$
Bienaymé–Chebyshev inequality

Let $X$ be an arbitrary random variable with finite expected value $\mu$ and finite variance $\sigma^{2} \neq 0$. The Bienaymé–Chebyshev inequality states that, for any real number $k > 0$,
$$\operatorname{P}\bigl(|X - \mu| \geq k\sigma\bigr) \leq \frac{1}{k^{2}}.$$
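A small numerical check of this bound for a standard normal variable; the distribution, thresholds, and seed are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(1_000_000)            # mu = 0, sigma = 1

for k in (1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(x) >= k)       # estimate of P(|X - mu| >= k*sigma)
    print(k, empirical, 1 / k**2)             # the empirical value stays below 1/k^2
```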
Jensen's inequality

Let $f$ be a Borel convex function and $X$ a random variable such that $\operatorname{E}|X| < \infty$. Jensen's inequality states that
$$f\bigl(\operatorname{E}[X]\bigr) \leq \operatorname{E}\bigl[f(X)\bigr].$$

Remark 1. The expected value $\operatorname{E}[f(X)]$ is well-defined even if $f(X)$ is allowed to assume infinite values. Indeed, $\operatorname{E}|X| < \infty$ implies that $X$ is finite (a.s.), so the random variable $f(X)$ is defined almost surely, and therefore there is enough information to compute $\operatorname{E}[f(X)]$.

Remark 2. Jensen's inequality implies that $\bigl|\operatorname{E}[X]\bigr| \leq \operatorname{E}|X|$, since the absolute value function is convex.
Lyapunov's inequality

Let $0 < s < t$. Lyapunov's inequality states that
$$\bigl(\operatorname{E}|X|^{s}\bigr)^{1/s} \leq \bigl(\operatorname{E}|X|^{t}\bigr)^{1/t}.$$

Proof. Applying Jensen's inequality to $|X|^{s}$ and the convex function $g(x) = |x|^{t/s}$, we obtain
$$\bigl(\operatorname{E}|X|^{s}\bigr)^{t/s} \leq \operatorname{E}|X|^{t}.$$
Taking the $t$-th root of each side completes the proof.

Corollary.
$$\operatorname{E}|X| \leq \bigl(\operatorname{E}|X|^{2}\bigr)^{1/2} \leq \bigl(\operatorname{E}|X|^{3}\bigr)^{1/3} \leq \dots$$
Hölder's inequality

Let $p$ and $q$ satisfy $1 \leq p$, $1 \leq q$ and $\tfrac{1}{p} + \tfrac{1}{q} = 1$. Hölder's inequality states that
$$\operatorname{E}|XY| \leq \bigl(\operatorname{E}|X|^{p}\bigr)^{1/p}\,\bigl(\operatorname{E}|Y|^{q}\bigr)^{1/q}.$$
Minkowski inequality

Let $p$ be an integer satisfying $1 \leq p < \infty$. Let, in addition, $\operatorname{E}|X|^{p} < \infty$ and $\operatorname{E}|Y|^{p} < \infty$. Then, according to the Minkowski inequality, $\operatorname{E}|X + Y|^{p} < \infty$ and
$$\bigl(\operatorname{E}|X + Y|^{p}\bigr)^{1/p} \leq \bigl(\operatorname{E}|X|^{p}\bigr)^{1/p} + \bigl(\operatorname{E}|Y|^{p}\bigr)^{1/p}.$$

Taking limits under the E sign
Monotone convergence theorem

Let the sequence of random variables $\{X_n\}$ and the random variables $X$ and $Y$ be defined on the same probability space $(\Omega, \Sigma, \operatorname{P})$. Suppose that

- all the expected values $\operatorname{E}[X_n]$, $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ are defined and $\operatorname{E}[Y] > -\infty$;
- $Y \leq X_n$ (a.s.) for every $n$;
- $\{X_n\}$ pointwise non-decreases (a.s.), i.e. $X_n \leq X_{n+1}$ (a.s.) for every $n$;
- $X$ is the pointwise limit of $\{X_n\}$ (a.s.), i.e. $X_n(\omega) \to X(\omega)$ (a.s.).

The monotone convergence theorem states that
$$\lim_{n}\operatorname{E}[X_n] = \operatorname{E}[X].$$
Proof. Observe that, by monotonicity, the sequence $\{\operatorname{E}[X_n]\}$ monotonically non-decreases, and $\operatorname{E}[Y] \leq \operatorname{E}[X_n] \leq \operatorname{E}[X]$ for every $n$.

If $\operatorname{E}[Y] = +\infty$, then $\operatorname{E}[X_n] = \operatorname{E}[X] = +\infty$ for every $n$, and we are done.

If $\operatorname{E}[Y] < +\infty$, then, following the assumption that $\operatorname{E}[Y] > -\infty$, we conclude that $\operatorname{E}[Y]$ is finite which, in turn, implies, as we saw previously, that $Y$ is finite (a.s.).

Denote $Z_n = X_n - Y$ and $Z = X - Y$. The finiteness of $Y$ (a.s.) implies that the differences $X_n - Y$ and $X - Y$ are defined (do not have the form $\infty - \infty$) everywhere outside of a null set. On that null set, $Z_n$ and $Z$ may be defined arbitrarily (e.g. as zero or in any other way, as long as measurability is preserved) without affecting this proof. As differences of two random variables, $Z_n$ and $Z$ are also random variables.

It follows from the definition that $Z_n \geq 0$ (a.s.), $Z \geq 0$ (a.s.), the sequence $\{Z_n\}$ pointwise non-decreases (a.s.), and $Z_n \to Z$ pointwise (a.s.).

By (the general version of) the monotone convergence theorem,
$$\lim_{n}\operatorname{E}[Z_n] = \operatorname{E}[Z], \quad\text{i.e.}\quad \lim_{n}\operatorname{E}[X_n] - \operatorname{E}[Y] = \operatorname{E}[X] - \operatorname{E}[Y],$$
whence the assertion follows.
Fatou's lemma

Let the sequence of random variables $\{X_n\}$ and the random variable $Y$ be defined on the same probability space $(\Omega, \Sigma, \operatorname{P})$. Suppose that

- all the expected values $\operatorname{E}[X_n]$ and $\operatorname{E}[Y]$ are defined and $\operatorname{E}[Y] > -\infty$;
- $Y \leq X_n$ (a.s.) for every $n$.

Fatou's lemma states that
$$\operatorname{E}\Bigl[\liminf_{n} X_n\Bigr] \leq \liminf_{n}\operatorname{E}[X_n].$$

(Note that $\inf_{k \geq n} X_k$ is a random variable, for every $n$, by the properties of the limit inferior.)
Proof. If $\operatorname{E}[Y] = +\infty$, then, by monotonicity, $\operatorname{E}[X_n] = +\infty$ for every $n$, so $\liminf_{n}\operatorname{E}[X_n] = +\infty$ and the assertion follows.

If $\operatorname{E}[Y] < +\infty$, then, following the assumption that $\operatorname{E}[Y] > -\infty$, we conclude that $\operatorname{E}[Y]$ is finite which, in turn, implies, as we saw previously, that $Y$ is finite (a.s.).

Denote $Z_n = X_n - Y$. Then $Z_n \geq 0$ (a.s.). The finiteness of $Y$ (a.s.) implies that $Z_n$ is defined (does not have the form $\infty - \infty$) everywhere outside of a null set. On that null set, $Z_n$ may be defined arbitrarily (e.g. as zero or in any other way, as long as measurability is preserved) without affecting this proof. As a difference of two random variables, $Z_n$ is a random variable.

By (the general version of) Fatou's lemma,
$$\operatorname{E}\Bigl[\liminf_{n} Z_n\Bigr] \leq \liminf_{n}\operatorname{E}[Z_n],$$
whence the assertion follows.
Corollary. Let

- $X_n \geq 0$ (a.s.) for every $n$;
- $X_n \to X$ pointwise (a.s.).

Then $\operatorname{E}[X] \leq \liminf_{n}\operatorname{E}[X_n]$.

Proof is by observing that $X = \liminf_{n} X_n$ (a.s.) and applying Fatou's lemma.
Dominated convergence theorem

Let $\{X_n\}$ be a sequence of random variables. If $X_n \to X$ pointwise (a.s.), $|X_n| \leq Y$ (a.s.) for every $n$, and $\operatorname{E}[Y] < \infty$, then, according to the dominated convergence theorem, $X$ is integrable and
$$\lim_{n}\operatorname{E}[X_n] = \operatorname{E}[X].$$

Relationship with characteristic function
The probability density function $f_X$ of a scalar random variable $X$ is related to its characteristic function $\varphi_X$ by the inversion formula:
$$f_X(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt.$$

For the expected value of $g(X)$ (where $g \colon \mathbb{R} \to \mathbb{R}$ is a Borel function), we can use this inversion formula to obtain
$$\operatorname{E}[g(X)] = \frac{1}{2\pi}\int_{\mathbb{R}} g(x)\left[\int_{\mathbb{R}} e^{-itx}\,\varphi_X(t)\,dt\right]dx.$$

If $\operatorname{E}[g(X)]$ is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem,
$$\operatorname{E}[g(X)] = \frac{1}{2\pi}\int_{\mathbb{R}} G(t)\,\varphi_X(t)\,dt,$$
where
$$G(t) = \int_{\mathbb{R}} g(x)\,e^{-itx}\,dx$$
is the Fourier transform of $g(x)$. The expression for $\operatorname{E}[g(X)]$ also follows directly from the Plancherel theorem.
Uses and applications
It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.
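A minimal sketch of this idea, estimating a probability as the sample mean of an indicator; the event, sample size, and seed are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(1_000_000)

indicator = (x > 1.0).astype(float)   # 1 if the event {X > 1} occurred, else 0
print(indicator.mean())               # E[indicator] estimates P(X > 1) ≈ 0.1587
```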
The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. $\operatorname{P}(X \in \mathcal{A}) = \operatorname{E}[{\mathbf 1}_{\mathcal{A}}(X)]$, where ${\mathbf 1}_{\mathcal{A}}(X)$ is the indicator function of the set $\mathcal{A}$.
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].
Expected values can also be used to compute the variance, by means of the computational formula for the variance
$$\operatorname{Var}(X) = \operatorname{E}[X^{2}] - \bigl(\operatorname{E}[X]\bigr)^{2}.$$
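A quick check of the computational formula against a direct variance estimate; a sketch where the distribution (with known variance 4) and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=2.0, size=1_000_000)   # Var(X) = 4 for a mean-2 exponential

print(np.mean(x**2) - np.mean(x)**2)             # E[X^2] - (E[X])^2
print(np.var(x))                                 # the same quantity, computed directly
```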
A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator $\hat{A}$ operating on a quantum state vector $|\psi\rangle$ is written as $\langle\hat{A}\rangle = \langle\psi|\hat{A}|\psi\rangle$. The uncertainty in $\hat{A}$ can be calculated using the formula $(\Delta A)^{2} = \langle\hat{A}^{2}\rangle - \langle\hat{A}\rangle^{2}$.
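As a small illustration of this notation, the expectation value and uncertainty of an observable on a qubit state take only a few lines of linear algebra. The observable (Pauli-Z) and the state below are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[1, 0], [0, -1]], dtype=complex)               # Pauli-Z observable
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)  # a normalized state

expval = np.real(psi.conj() @ A @ psi)                       # <psi|A|psi>
var = np.real(psi.conj() @ (A @ A) @ psi) - expval**2        # <A^2> - <A>^2
print(expval, var)                                           # 0.6 and 0.64
```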
The law of the unconscious statistician

The expected value of a measurable function $g$ of $X$, given that $X$ has a probability density function $f(x)$, is given by the inner product of $f$ and $g$:
$$\operatorname{E}[g(X)] = \int_{\mathbb{R}} g(x)\,f(x)\,dx.$$

This formula also holds in the multidimensional case, when $g$ is a function of several random variables, and $f$ is their joint density.[5][6]
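A numerical sketch of this formula, assuming SciPy is available, for the arbitrarily chosen function $g(x) = \cos x$ and a standard normal density; the exact answer in this case is $e^{-1/2}$:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
g = np.cos                                             # the function whose mean we want

ev, _ = quad(lambda x: g(x) * f(x), -np.inf, np.inf)   # E[g(X)] = integral of g*f
print(ev, np.exp(-0.5))                                # both ≈ 0.6065
```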
Alternative formula for expected value
Formula for non-negative random variables

Finite and countably infinite case

For a non-negative integer-valued random variable $X \colon \Omega \to \{0, 1, 2, 3, \ldots\}$,
$$\operatorname{E}[X] = \sum_{i=1}^{\infty}\operatorname{P}(X \geq i).$$
Proof. If $\operatorname{P}(X = +\infty) > 0$, then $\operatorname{E}[X] = +\infty$. On the other hand, $\operatorname{P}(X \geq i) \geq \operatorname{P}(X = +\infty) > 0$ for every $i$, so the series on the right diverges to $+\infty$ as well, and the equality holds.

If $\operatorname{P}(X = +\infty) = 0$, then $\operatorname{E}[X] = \sum_{j=1}^{\infty} j\,\operatorname{P}(X = j)$. Let
$$T = \begin{pmatrix} \operatorname{P}(X = 1) & \operatorname{P}(X = 2) & \operatorname{P}(X = 3) & \cdots\\ 0 & \operatorname{P}(X = 2) & \operatorname{P}(X = 3) & \cdots\\ 0 & 0 & \operatorname{P}(X = 3) & \cdots\\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
be an infinite upper triangular matrix, whose $i$-th row contains $\operatorname{P}(X = j)$ for every $j \geq i$. The double series $\sum_{i=1}^{\infty}\operatorname{P}(X \geq i)$ is the sum of $T$'s elements if summation is done row by row. Since every summand is non-negative, the series either converges absolutely or diverges to $+\infty$. In both cases, changing the summation order does not affect the sum. Changing the summation order, from row-by-row to column-by-column, gives us
$$\sum_{i=1}^{\infty}\operatorname{P}(X \geq i) = \sum_{j=1}^{\infty} j\,\operatorname{P}(X = j) = \operatorname{E}[X].$$

Example
In a coin tossing experiment, let the probability of heads be $p$. Including the final attempt, how many tosses can we expect until the first head?

Solution. If $X$ is the random variable counting the number of coin tosses before and including the first head, then, for $i \geq 1$,
$$\operatorname{P}(X \geq i) = \sum_{j=i}^{\infty}(1 - p)^{j-1}p = (1 - p)^{i-1},$$
where we took into account the geometric series summation formula. We now compute
$$\operatorname{E}[X] = \sum_{i=1}^{\infty}\operatorname{P}(X \geq i) = \sum_{i=1}^{\infty}(1 - p)^{i-1} = \frac{1}{p}.$$
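The answer $1/p$ is easy to confirm by simulation; a sketch with an arbitrary value of $p$ (NumPy's geometric sampler already counts the final, successful toss):

```python
import numpy as np

rng = np.random.default_rng(8)
p = 0.3

tosses = rng.geometric(p, size=1_000_000)   # tosses up to and including the first head
print(tosses.mean(), 1 / p)                 # both ≈ 3.33
```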
General case

If $X$ is a non-negative random variable, then
$$\operatorname{E}[X] = \int_{[0,+\infty)}\operatorname{P}(X \geq x)\,dx = \int_{[0,+\infty)}\operatorname{P}(X > x)\,dx,$$
and
$$\operatorname{E}[X] = {\hbox{(R)}}\int_{0}^{+\infty}\operatorname{P}(X \geq x)\,dx = {\hbox{(R)}}\int_{0}^{+\infty}\operatorname{P}(X > x)\,dx,$$
where ${\hbox{(R)}}\int$ denotes an improper Riemann integral.
Proof. 1. For every $\omega \in \Omega$,
$$X(\omega) = \int_{[0,+\infty)} {\mathbf 1}_{\{X \geq x\}}(\omega)\,dx = \int_{[0,+\infty)} {\mathbf 1}_{\{X > x\}}(\omega)\,dx,$$
where ${\mathbf 1}_{\{X \geq x\}}$ and ${\mathbf 1}_{\{X > x\}}$ are the indicator functions of $\{X \geq x\}$ and $\{X > x\}$, respectively. Substituting this into the definition of $\operatorname{E}[X]$, we obtain
$$\operatorname{E}[X] = \int_{\Omega}\left(\int_{[0,+\infty)} {\mathbf 1}_{\{X \geq x\}}(\omega)\,dx\right) d\operatorname{P}(\omega).$$
Since the integrand is non-negative and jointly measurable, this (finite or infinite) iterated integral meets the requirements of Tonelli's theorem. Changing the order of integration gives us
$$\operatorname{E}[X] = \int_{[0,+\infty)}\left(\int_{\Omega} {\mathbf 1}_{\{X \geq x\}}(\omega)\,d\operatorname{P}(\omega)\right) dx = \int_{[0,+\infty)}\operatorname{P}(X \geq x)\,dx.$$

2a. The function $x \mapsto \operatorname{P}(X \geq x)$ is Riemann-integrable on each finite interval $[0, b]$. Indeed, since it is non-increasing, the set of its discontinuities is countable. Due to countable additivity, this set is a null set with respect to the linear Lebesgue measure. Furthermore, $0 \leq \operatorname{P}(X \geq x) \leq 1$ for all $x$. Using the Lebesgue criterion, Riemann integrability of $\operatorname{P}(X \geq x)$ on $[0, b]$ follows. We also conclude that the Riemann and Lebesgue integrals over $[0, b]$ coincide.

2b. By "continuity from below",
$$\int_{[0,+\infty)}\operatorname{P}(X \geq x)\,dx = \lim_{b\to+\infty}\int_{[0,b]}\operatorname{P}(X \geq x)\,dx = {\hbox{(R)}}\int_{0}^{+\infty}\operatorname{P}(X \geq x)\,dx.$$

The case of $\operatorname{P}(X > x)$ is similar.
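As a numerical check of the formula above, for an exponential random variable the integral of the survival function $\operatorname{P}(X > x)$ recovers the mean; a sketch assuming SciPy is available, with an arbitrary rate:

```python
import numpy as np
from scipy.integrate import quad

lam = 0.5                                   # exponential with mean 1/lam = 2
survival = lambda x: np.exp(-lam * x)       # P(X > x) for this distribution

ev, _ = quad(survival, 0, np.inf)           # integral of the survival function
print(ev)                                   # ≈ 2.0 == E[X]
```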
Formula for non-positive random variables

If $X$ is a non-positive random variable, then
$$\operatorname{E}[X] = -\int_{(-\infty,0]}\operatorname{P}(X \leq x)\,dx = -\int_{(-\infty,0]}\operatorname{P}(X < x)\,dx,$$
and
$$\operatorname{E}[X] = -{\hbox{(R)}}\int_{-\infty}^{0}\operatorname{P}(X \leq x)\,dx = -{\hbox{(R)}}\int_{-\infty}^{0}\operatorname{P}(X < x)\,dx,$$
where ${\hbox{(R)}}\int$ denotes an improper Riemann integral.

This formula follows from that for the non-negative case applied to $-X$.

If, in addition, $X$ is integer-valued, i.e. $X \colon \Omega \to \{0, -1, -2, -3, \ldots\}$, then
$$\operatorname{E}[X] = -\sum_{i=1}^{\infty}\operatorname{P}(X \leq -i).$$
General case

If $X$ can be both positive and negative, then $\operatorname{E}[X] = \operatorname{E}[X_{+}] - \operatorname{E}[X_{-}]$, and the above results may be applied to $X_{+}$ and $X_{-}$ separately.

History
The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by the French writer and amateur mathematician Chevalier de Méré. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced they had solved the problem conclusively. However, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[7]
Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory. In this book he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.
In the foreword to his book, Huygens wrote: "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs." (cited by Edwards (2002)). Thus, Huygens learned about de Méré's Problem in 1655 during his visit to France; later on in 1656 from his correspondence with Carcavi he learned that his method was essentially the same as Pascal's; so that before his book went to press in 1657 he knew about Pascal's priority in this subject.
Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: "That my Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure me in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal Chance of gaining them, my Expectation is worth (a+b)/2." More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:
… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.
The use of the letter E to denote expected value goes back to W.A. Whitworth in 1901,[8] who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática" and for French "Espérance mathématique".[9]
References
- ^ Sheldon M Ross (2007). "§2.4 Expectation of a random variable". Introduction to probability models (9th ed.). Academic Press. p. 38 ff. ISBN 0-12-598062-0.
- ^ Richard W Hamming (1991). "§2.5 Random variables, mean and the expected value". The art of probability for scientists and engineers. Addison-Wesley. p. 64 ff. ISBN 0-201-40686-1.
- ^ Richard W Hamming (1991). "Example 8.7–1 The Cauchy distribution". The art of probability for scientists and engineers. Addison-Wesley. p. 290 ff. ISBN 0-201-40686-1.
Sampling from the Cauchy distribution and averaging gets you nowhere — one sample has the same distribution as the average of 1000 samples!
- ^ Gordon, Lawrence; Loeb, Martin (November 2002). "The Economics of Information Security Investment". ACM Transactions on Information and System Security. 5 (4): 438–457. doi:10.1145/581271.581274.
- ^ Expectation Value. Retrieved August 8, 2017.
- ^ Papoulis, A. (1984). Probability, Random Variables, and Stochastic Processes. New York: McGraw–Hill. pp. 139–152.
- ^ Ore, Øystein (1960). "Pascal and the Invention of Probability Theory". The American Mathematical Monthly. 67 (5): 409–419. doi:10.2307/2309286.
- ^ Whitworth, W.A. (1901) Choice and Chance with One Thousand Exercises. Fifth edition. Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]
- ^ "Earliest uses of symbols in probability and statistics".
Literature
- Edwards, A.W.F (2002). Pascal's arithmetical triangle: the story of a mathematical idea (2nd ed.). JHU Press. ISBN 0-8018-6946-3.
- Huygens, Christiaan (1657). De ratiociniis in ludo aleæ (English translation, published in 1714).