This post is a fairly basic review of common probability notions. Things will get more interesting in future posts. Somebody who wants to do TCS but has not studied probability yet can read this post carefully and reach about the same level of formal training that I've had :)

Say we distribute *n* items into *b* bins randomly. Fixing our attention on one bin (e.g. the first), how can we bound the number of items landing there?

The expected number of items is n/b. Thus, by the Markov bound, we get:

Pr[bin contains ≥ 2n/b items] ≤ 1/2
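As a quick sanity check, here is a small Python simulation (the choices n = 1000, b = 100, and the trial count are arbitrary):

```python
import random

def first_bin_count(n, b):
    """Throw n items into b bins uniformly at random; return the load of bin 0."""
    return sum(1 for _ in range(n) if random.randrange(b) == 0)

def tail_estimate(n, b, trials=2000):
    """Empirical Pr[bin 0 receives >= 2n/b items]."""
    threshold = 2 * n / b
    hits = sum(1 for _ in range(trials) if first_bin_count(n, b) >= threshold)
    return hits / trials

random.seed(0)
p = tail_estimate(1000, 100)  # expected load mu = 10
print(p)
assert p <= 0.5  # consistent with the Markov bound (in fact far smaller)
```

The empirical probability is far below 1/2, which is exactly why we look for sharper tools below.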

**Variance.** To strengthen the bound, we may look at the variance and apply the Chebyshev bound. Let X be the number of elements in the first bin, and let μ = E[X] = n/b. The variance of X is defined as E[(X-μ)^2], and this is exactly what we need to compute for Chebyshev.

How can we compute the variance? We let X_i be the indicator for whether the i-th item falls in our bin (indicator means the variable is 1 if the event happens, and 0 otherwise). Then, X = X_1 + ... + X_n.

Since we are interested in X-μ, not X, it is more convenient to define Y_i = X_i - (1/b). With this definition, X-μ = Y_1 + ... + Y_n. Observe that E[Y_i] = 0.

We can now break up the variance:

Var[X] = E[(X-μ)^2] = E[(Σ Y_i)^2] = Σ_{i,j} E[Y_i Y_j]

We are down to analyzing E[Y_i Y_j], which is simple. If i ≠ j, Y_i and Y_j are independent random variables. Thus, the expectation commutes with the product:

E[Y_i Y_j] = E[Y_i] E[Y_j] = 0

In the case i = j, we use a brute-force calculation: Y_i = -1/b with probability 1-(1/b), and Y_i = 1-(1/b) with probability 1/b. Thus, E[(Y_i)^2] = O(1/b).

We have found the variance to be E[(X-μ)^2] = O(n/b) = O(μ). Then:

Pr[X ≥ 2n/b] = Pr[X-μ ≥ μ] ≤ Pr[(X-μ)^2 ≥ μ^2] ≤ O(μ) / μ^2 = O(1/μ)

Observe that the Chebyshev bound is nothing more than Markov applied to the variable (X-μ)^2.
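To double-check the O(n/b) claim numerically, here is a rough Python sketch; the exact variance of the Binomial(n, 1/b) load is n·(1/b)·(1-1/b), and the parameters are chosen arbitrarily:

```python
import random

def bin_load(n, b):
    """Load of one fixed bin after throwing n items into b bins."""
    return sum(1 for _ in range(n) if random.randrange(b) == 0)

def empirical_variance(n, b, trials=3000):
    samples = [bin_load(n, b) for _ in range(trials)]
    mean = sum(samples) / trials
    return sum((x - mean) ** 2 for x in samples) / trials

random.seed(1)
n, b = 1000, 100
exact = n * (1 / b) * (1 - 1 / b)  # sum of the n indicator variances: 9.9
print(empirical_variance(n, b), exact)  # both around 9.9, i.e. O(n/b) = O(mu)
```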

**Third moment.** If stronger bounds are required, we can try to look at higher moments, E[(X-μ)^k]. Unfortunately, moving to the 3rd moment (or any other odd moment) does not really help: the variable (X-μ)^3 is no longer positive, so Markov cannot apply.

One way to fix this is to look at the *absolute* third moment: E[|X-μ|^3]. It is no longer easy to compute this moment, since we cannot break up |Σ Y_i|^3 into components, due to the absolute value. Thus, we do not commonly use absolute moments.

However, I have come across absolute moments once, in the following interesting application. The central limit theorem states that the average of N i.i.d. variables tends to the normal distribution as N→∞. The Berry-Esseen theorem quantifies this phenomenon: it gives a fairly strong bound on the speed of the convergence, assuming the third absolute moment of the summands is bounded.

**Fourth moment.** To strengthen the variance bound, one most commonly looks at the 4th moment. To get a feeling for it, let's see how we can bound the 4th moment in the balls-and-bins case:

E[(X-μ)^4] = E[(Σ Y_i)^4] = Σ_{i,j,k,l} E[Y_i Y_j Y_k Y_l]

We can understand the terms generated by an (i,j,k,l) tuple by case analysis:

- If one of the elements appears exactly once, the term is zero. For instance, say i ∉ {j,k,l}. Then Y_i is independent of the rest, so the expectation commutes with the product: E[Y_i Y_j Y_k Y_l] = E[Y_i] E[Y_j Y_k Y_l] = 0.
- If all elements are equal (i=j=k=l), E[(Y_i)^4] = O(1/b).
- If the tuple consists of two equal pairs, for instance (i,i,j,j), we have E[(Y_i)^2 (Y_j)^2] = E[(Y_i)^2] E[(Y_j)^2] = O(1/b^2).

There are *n* terms of type 2, and O(n^2) terms of type 3. Thus, the fourth moment is O(n^2 / b^2) = O(μ^2).

To bound the bin size, we can now apply Markov on the fourth moment:

Pr[X ≥ 2n/b] = Pr[X-μ ≥ μ] ≤ Pr[(X-μ)^4 ≥ μ^4] ≤ O(μ^2) / μ^4 = O(1/μ^2)

Thus, our bounds have improved from 1/2 for Markov, to O(1/μ) for Chebyshev, and to O(1/μ^2) for the 4th moment. Going to the 6th, 8th, etc. yields the predictable improvement.
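Empirically, the fourth central moment indeed sits within a small constant factor of μ^2 (a Python sketch with arbitrary parameters, reusing the same balls-and-bins simulation):

```python
import random

def bin_load(n, b):
    """Load of one fixed bin after throwing n items into b bins."""
    return sum(1 for _ in range(n) if random.randrange(b) == 0)

random.seed(2)
n, b, trials = 1000, 100, 3000
mu = n / b
m4 = sum((bin_load(n, b) - mu) ** 4 for _ in range(trials)) / trials
print(m4, mu ** 2)  # m4 = O(mu^2): roughly 3 * mu^2 here
```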

**Chernoff.** The next step in improving our bounds is the Chernoff bound. This bound has many forms, in particular two rather different instantiations for additive and relative error.

Let me quote an uncommon, but nifty version of the theorem:

Let X_1, ..., X_n be independent random variables bounded in [0,1]. Let X = Σ X_i and μ = E[X].
If Z ≥ μ, then Pr[X ≥ Z] ≤ e^{Z-μ} (μ/Z)^Z.
If Z ≤ μ, then Pr[X ≤ Z] ≤ e^{Z-μ} (μ/Z)^Z.

In our case, we are interested in Z = 2μ. Thus, the upper bound on the probability is e^μ / 2^{2μ} ≈ 1 / 1.47^μ. We have obtained an *exponential* bound, instead of the polynomial bound possible by constant moments.

If instead we were interested in showing that the bin gets at least Z = μ/2 elements, the second part of the theorem gives a probability bound of e^{-μ/2} 2^{μ/2} ≈ 1 / 1.17^μ. Note how the two terms of the bound trade places: e^{Z-μ} is pushing the probability down in the second case, while (μ/Z)^Z is making sure it is small in the first case.
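Plugging numbers into the quoted theorem (a small Python helper; μ = 20 is an arbitrary choice):

```python
import math

def chernoff_tail(mu, Z):
    """The bound e^(Z-mu) * (mu/Z)^Z from the theorem.

    Upper-bounds Pr[X >= Z] when Z >= mu, and Pr[X <= Z] when Z <= mu.
    """
    return math.exp(Z - mu) * (mu / Z) ** Z

mu = 20.0
print(chernoff_tail(mu, 2 * mu))   # e^mu / 2^(2mu) ~ 1/1.47^mu ~ 4.4e-4
print(chernoff_tail(mu, mu / 2))   # e^(-mu/2) * 2^(mu/2) ~ 1/1.17^mu ~ 0.046
```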

The proof of Chernoff is a bit technical, but conceptually easy. As before, we define Y_i = X_i - E[X_i], so that X-μ = Σ Y_i. Instead of looking at E[(X-μ)^k], we now look at E[α^{X-μ}] (where α > 1 is a parameter that can be optimized at the end). This quantity is easy to compute, since E[α^{Σ Y_i}] = E[Π α^{Y_i}] = Π E[α^{Y_i}]. At the end, we just apply Markov on the positive variable α^{X-μ}.
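Here is a sketch of that argument specialized to our balls-and-bins indicators; the crude grid search over α is a stand-in for the analytic optimization, and the parameters are again arbitrary:

```python
# Y_i = X_i - 1/b takes value 1 - 1/b with probability 1/b,
# and -1/b with probability 1 - 1/b.
def mgf_term(alpha, b):
    """E[alpha^(Y_i)] for a single indicator."""
    p = 1 / b
    return p * alpha ** (1 - p) + (1 - p) * alpha ** (-p)

def tail_bound(n, b, Z, alpha):
    """Markov on alpha^(X-mu): Pr[X >= Z] <= E[alpha^(X-mu)] / alpha^(Z-mu)."""
    mu = n / b
    return mgf_term(alpha, b) ** n / alpha ** (Z - mu)

n, b = 1000, 100                 # mu = 10
Z = 2 * n / b                    # ask about a doubly-loaded bin
best = min(tail_bound(n, b, Z, 1 + k / 100) for k in range(1, 300))
print(best)                      # ~0.02, exponentially small in mu
```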

**High probability.** In TCS, we are passionate about bounds that hold "with high probability" (w.h.p.), which means probability 1 - 1/n^c, for any constant *c*. For instance,

Algorithm A runs in O(n) time w.h.p.

formally means the following:

There exists a function f(·) such that, if you choose any constant c, I can prove that algorithm A runs in time f(c)·n with probability 1 - 1/n^c.

While such bounds may seem weird at first, they do make a lot of sense: think of applying some randomized procedure a polynomial number of times. Also, these bounds make a lot more sense when dealing with many experiments over huge data sets (the essence of Computer Science) than adopting the convention from statistics, which asks for bounds that hold with 95% probability.
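The "polynomial number of times" point is just a union bound; a toy Python calculation with arbitrary numbers:

```python
# If each of n^a invocations of a randomized procedure fails with
# probability n^(-c), a union bound says the probability that *any*
# invocation fails is at most n^a * n^(-c) = n^(a-c):
# still inverse-polynomial whenever c > a.
n, a, c = 10**6, 2, 5
total_failure = n ** a * n ** -c
print(total_failure)  # n^(a-c) = n^(-3), about 1e-18
```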

Since we are usually happy with w.h.p. bounds, one often hears that Chernoff is morally equivalent to O(lg *n*)-moments. Indeed, such a moment gives us a bound of the form 1/μ^{O(lg n)}, which is high probability even in the hardest case, when μ is a constant.

The paper of [Schmidt, Siegel, Srinivasan SODA'93] is often cited for this. Their Theorem 5 shows that you can get the same bounds as Chernoff (up to constants) if you look at a high enough moment.

## 8 comments:

excellent!

I don't get it. This is all in Mitzenmacher's book and all much better written. Besides regurgitating Mitzenmacher's textbook, what is really the point of this post?

I'm trying to get Mitz's book out of print and bankrupt him.

I like the reply to the second Anonymous!

nice post; more concise than in mitzenmacher's book; just on another note, i read in your earlier posts, you applied to schools in US for Professorship, did you apply to Cornell? Just curious since i am a grad student there! would be nice to see you here

I did apply to Cornell, but didn't interview there. Now that I live in NYC, it should be easy for me to come up and give a talk at Cornell (one day...)

Very complete excellent post.

Thank you so much.

Great and Thanks, I am waiting for the following post!
