tag:blogger.com,1999:blog-7863332855681061732024-01-26T03:26:55.919-05:00WebDiarios de MotocicletaInformatics Weekly, written by Mihai Pătraşcu.Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.comBlogger176125tag:blogger.com,1999:blog-786333285568106173.post-69622622055610925632012-05-06T20:43:00.000-04:002012-05-06T20:46:51.037-04:00Presburger AwardHurrah!! I've got some excellent news:
I was co-awarded the
2012 Presburger Award, to be shared with <a href="http://www.cs.cmu.edu/%7Evenkatg/">Venkat Guruswami</a>, the young superstar of error-correcting codes.<br />
In short, the Presburger Award is the European "young-scientist award" in Theory. <br />
It is given annually by the European Association for Theoretical Computer Science (<a href="http://eatcs.org/">EATCS</a>)<br />
to a scientist under 35 "for outstanding contributions in<br />
Theoretical Computer Science as documented by a series of published papers."<br />
<br />
Not surprisingly, I got the award for my work on data-structure lower bounds. The <a href="http://eatcs.org/index.php/component/content/article/1-news/1243-presburger-award-2012">citation</a> has a lot of nice things to say. Go read it :)Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com21tag:blogger.com,1999:blog-786333285568106173.post-22646157365883228362012-04-15T10:11:00.000-04:002012-04-15T10:11:13.215-04:00Happy EasterHappy Easter! <br />
<br />
...to all my readers celebrating today in the Eastern Orthodox tradition.<br />
According to persistent rumours:<br />
Χριστός ἀνέστη<br />
Christos a înviat!<br />Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com0tag:blogger.com,1999:blog-786333285568106173.post-43912468229988204992012-04-05T13:01:00.000-04:002012-04-05T13:04:24.390-04:00Happy recovery from the FOCS deadline: with a funny picture :)Happy recovery from the FOCS deadline everyone :)<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpp4ZWiZv3lBnKajxY_7ULyn-QOJvWz57XHCleUyaI6xh5vcStb0LeggOapOeLEZdC3urE6J3qHyTzUTMofat5lbhiAQWDxwq9WiZgvrEe89ndpDGJ4eZGCKJwkaVXDyZ7IAjUC-O2xqjg/s1600/toc.jpg.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpp4ZWiZv3lBnKajxY_7ULyn-QOJvWz57XHCleUyaI6xh5vcStb0LeggOapOeLEZdC3urE6J3qHyTzUTMofat5lbhiAQWDxwq9WiZgvrEe89ndpDGJ4eZGCKJwkaVXDyZ7IAjUC-O2xqjg/s1600/toc.jpg.jpg" /></a></div>
With this kind of support from NY fashion stores, it's odd that we have recruiting problems in our field :)<br />
<br />Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com8tag:blogger.com,1999:blog-786333285568106173.post-15614023526869187892011-11-06T23:05:00.000-05:002011-11-06T23:05:58.827-05:00New York Theory DayThe <a href="http://www.cs.columbia.edu/theory/f11.html">Fall 2011 New York Theory Day</a> is happening this Friday, November 11th, at NYU (Courant), and features yours truly as a speaker.<br />
<br />
I hope to see many of you there!Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com1tag:blogger.com,1999:blog-786333285568106173.post-77751567356216607942011-09-19T16:03:00.000-04:002011-09-19T16:03:26.334-04:00Follow-up: Sampling a discrete distribution<span class="Apple-style-span" style="font-family: Georgia, serif;">This is a follow-up to my last </span><a href="http://infoweekly.blogspot.com/2011/09/sampling-discrete-distribution.html" style="font-family: Georgia, serif; font-size: 16px;">post</a><span class="Apple-style-span" style="font-family: Georgia, serif;">, on a puzzle about "Sampling a discrete distribution" (see my comment there for the solution I originally thought of).</span><br />
<div style="font-family: Georgia, serif; font-size: 16px;">
<br /></div>
<div style="font-family: Georgia, serif; font-size: 16px;">
As an anonymous commenter (Rex?) points out, a nice elementary solution has been known for a long time. <span class="Apple-style-span" style="font-size: small;">I admit to not knowing this solution ahead of time, and use the fact that it's not really my field as an excuse. Here I want to summarize this solution for educational purposes.</span></div>
<div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">Imagine each outcome of the distribution as a vase containing liquid proportional to its probability.</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">We classify these vases into 3 types:</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><i><u>LO</u> — </i>probability less than 1/n</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><i><u>GOOD</u></i> – probability precisely 1/n</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><i><u>HI</u></i> – probability greater than 1/n</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">In the ideal case when every vase is GOOD, we are done (uniform sampling). If not, we move towards this ideal case by inductively applying the following subroutine:</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">** Pick a LO vase (say, with x amount of liquid) and a HI vase (with y amount of liquid).</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">Grab a new vase and fill it up to the 1/n mark: pour all the liquid from the LO vase, and (1/n)-x liquid from the HI vase (remember that a HI vase has liquid y>1/n, so this is certainly possible). Note that the HI vase may now become LO, GOOD, or stay HI.</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">/end of **</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">Observe that the process terminates in at most n steps, since at each step we increment the number of GOOD vases. If we maintain 3 linked lists with samples of each type, each step can be done in O(1) time, so O(n) time total.</span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">At the end all vases are GOOD, so we use uniform sampling. If we get a random x in [0,1], we use ⌊x·n⌋ to pick a vase and </span>x·n mod 1 to pick a molecule of water inside that vase. If we trace back the original vase from where this molecule came, we have sampled according to the initial distribution.</div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;"><br /></span></div>
<div>
<span class="Apple-style-span" style="font-family: Georgia, serif;">Now we describe how to "undo" the water pouring at query time, i.e. tracing the origin of each water molecule. Observe that operation (**) is never applied to a GOOD vase. So when a vase becomes GOOD, it has reached final form. But the water in a GOOD vase can only come from two distinct vases. So we can essentially draw a water-mark on the vase, to separate the LO water from the HI water, and store pointers to the original vases whose water got poured into the GOOD vase. Then the sampling algorithm needs only one comparison, i.e. it compares </span>x·n mod 1 to the mark on the vase <span class="Apple-style-span" style="font-family: Georgia, serif;">⌊x·n⌋, and follows a pointer from this vase to the input vases from which water was poured. Note that only one pointer is followed, since a GOOD vase is never touched again by operation (**).</span> Constant time!</div>
</div>
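To make this concrete, here is a short Python sketch of the whole scheme. (The names <code>mark</code>, <code>primary</code>, <code>backup</code> are my own scaffolding; the lists play the role of the vases, water-marks, and pointers described above.)

```python
import random

def build_vases(p):
    """Preprocess a distribution p (probabilities summing to 1) in O(n) time.
    After pouring, vase i holds exactly 1/n of probability mass: an amount
    mark[i] that came from outcome primary[i] (the LO vase poured in), and
    the rest from outcome backup[i] (the HI vase it was topped up from)."""
    n = len(p)
    mark = [0.0] * n          # water-mark separating LO water from HI water
    primary = list(range(n))  # origin of the water below the mark
    backup = list(range(n))   # origin of the water above the mark
    liquid = [pi * n for pi in p]   # scaled so that GOOD == 1.0
    lo = [i for i in range(n) if liquid[i] < 1.0]
    hi = [i for i in range(n) if liquid[i] > 1.0]
    while lo and hi:
        i, j = lo.pop(), hi[-1]
        # pour all of vase i, plus (1 - liquid[i]) from vase j, into a GOOD vase
        mark[i], primary[i], backup[i] = liquid[i], i, j
        liquid[j] -= 1.0 - liquid[i]
        if liquid[j] < 1.0:       # the HI vase may become LO...
            hi.pop()
            lo.append(j)
        elif liquid[j] == 1.0:    # ...or GOOD (else it stays HI)
            hi.pop()
    for i in lo + hi:             # leftovers are GOOD up to rounding
        mark[i] = 1.0
    return mark, primary, backup

def sample(mark, primary, backup):
    """Draw one outcome in O(1): the integer part of x*n picks a vase,
    and the fractional part is compared against the vase's water-mark."""
    n = len(mark)
    x = random.random() * n
    i = int(x)
    return primary[i] if x - i < mark[i] else backup[i]
```

Preprocessing is O(n), and each query is one table lookup plus one comparison, as promised.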
Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com5tag:blogger.com,1999:blog-786333285568106173.post-3494602300040431902011-09-16T13:55:00.002-04:002011-09-16T14:25:32.879-04:00Sampling a discrete distributionThe following cute question came up at lunch today (all credit goes to <a href="http://www2.research.att.com/~aarcher/">Aaron Archer</a>). <div><br /></div><div>Informal challenge: You are given a discrete distribution with <i>n</i> possible outputs (and you know the probability p<sub>i</sub> of each output), and you want to sample from the distribution. You can use a primitive that returns a random real number in [0,1].</div><div><br /></div><div>This was the way the problem came to me, and the goal was to find a very fast practical solution. Think of n as around 10<sup>6 </sup>and use a regular machine as your machine model (real number means a "double" or something like that).</div><div><br /></div><div><br /></div><div>Formal challenge: To formalize this completely for a bounded-precision computer, we can say that we work on a <i>w</i>-bit machine and probabilities have <i>w</i>-bit precision. The distribution is given by <i>n</i> integers x<sub>i</sub>, and these values sum to exactly 2<sup>w</sup>. Output "i" should be produced with probability exactly x<sub>i</sub> / 2<sup>w</sup>. You are given a primitive that produces a uniformly distributed <i>w</i>-bit number.</div><div><br /></div><div>The challenge is to get O(1) worst-case time. (NB: some fancy data structures required.)</div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com11tag:blogger.com,1999:blog-786333285568106173.post-27922599726277074042011-07-28T10:42:00.002-04:002011-07-28T11:32:24.778-04:00International Olympiad, Day 2<div>I suspect I got a lot of bad karma this day (I'm the evil mind behind "Crocodile" and "Elephant".) Congratulations to the winners! 
See here for final <a href="http://sc1.ioi2011.net/contest/">results</a>.</div><div><br /></div><b>Problem Crocodile. </b>You have a weighted undirected graph with a start node and <i>k</i> designated target nodes. Two players play a full-information game, taking turns:<div><ol><li>Player 1 moves on the nodes of the graph, starting at the designated start node, and aiming to reach a target node. In each turn, she can traverse any edge out of her current node [except the blocked edge / see below], incurring a cost equal to the weight of the edge.<br /><br /></li><li>Player 2 can block one edge at every turn. When an edge is blocked, Player 1 cannot traverse it in the next turn. An edge can be blocked multiple times, and any past edge is "unblocked" when a new edge is blocked.</li></ol><div>Find the minimum budget <i>B</i> such that Player 1 can reach a target node with cost ≤ <i>B</i>, <i>regardless</i> of what Player 2 does. Running time: O~(<i>n</i>+<i>m</i>).</div></div><div><br /></div><div><b>Problem Elephants</b>. You have <i>n</i> elephants on the line at given coordinates. The elephants make <i>n</i> moves: in each unit of time, some elephant moves to another position. You have cameras that can photograph any interval of length L of the line. After each move, output the minimum number of cameras needed to photograph all elephants in the current configuration.</div><div><br /></div><div>Running time: subquadratic in <i>n</i>.</div><div><br /></div><div><b>Problem Parrots.</b> You are given a message of <i>M</i> bytes (0..255) to transmit over a weird channel. The channel can carry elements between 1 and <i>R</i>, but elements don't arrive in order (they are permuted adversarially). Send the message, minimizing the number L of elements sent across the channel.</div><div><br /></div><div>Running time: polynomial. 
(Yes, this is an easy problem for a trained theorist.)</div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com10tag:blogger.com,1999:blog-786333285568106173.post-74824098386406810142011-07-25T13:14:00.002-04:002011-07-25T14:07:41.379-04:00International Olympiad, Day 1<span class="Apple-style-span">The problems for Day 1 of the IOI have been </span><a href="http://www.ioi2011.or.th/hsc/tasks/day1/translations_raw.html">posted</a> (in many languages). Exciting! Here are the executive summaries. (Though I am on the International Scientific Committee (ISC), I do not know all the problems, so my apologies if I'm misrepresenting some of them. In particular, the running times may easily be wrong, since I'm trying to solve the problems as I write this post.)<div><br /></div><div>The usual rules apply: if you're the first to post a solution in the comments, you get eternal glory. (If you post anonymously, claim your glory some time during the next eternity.)<br /><div><br /><div><div><b>Problem Fountain</b>. Consider an undirected graph with costs (all costs are distinct) and the following traversal procedure. If we arrived at node <i>v</i> on edge<i> e</i>, continue on the cheapest edge different from <i>e</i>. If the node has degree 1, go back on <i>e</i>. A walk following these rules is called a "valid walk."</div></div><div><br /></div><div>You also have a prescribed target node, and an integer <i>k</i>. Count the number of valid walks of exactly <i>k</i> edges which end at the prescribed target node. Running time O~(n+m) ["O~" means "drop logs"].</div></div></div><div><br /></div><div><b>Problem Race. </b>You have a tree with integer weights on the edges. Find a path with as few edges as possible and total length exactly X. Running time O(<i>n</i>lg <i>n</i>) [log^2 may be easier.]</div><div><br /></div><div><b>Problem RiceHub. </b><i>N </i>rice fields are placed at integer coordinates on the line. 
The rice must be gathered at some hub, which must also be at an integer coordinate on the line (to be determined). Each field produces one truck-load of rice, and driving the truck a distance <i>d</i> costs exactly <i>d</i>. You have a hard budget constraint of <i>B</i>. Find the best location for the hub, maximizing the amount of rice that can be gathered within the budget constraint.</div><div><br /></div><div>Running time: O~(<i>N</i>).</div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com8tag:blogger.com,1999:blog-786333285568106173.post-63912714241444242982011-07-22T13:29:00.003-04:002011-07-22T14:21:25.377-04:00Balkan Olympiad (Day 2)Now it is time for the problems on day 2 (of 2). See here for <a href="http://infoweekly.blogspot.com/2011/07/balkan-olympiad.html">day 1</a>. Feel free to post solutions in the comments (you get eternal glory if you're the first to post one) and discuss anything related to the problems.<div><br /></div><div><b>Day 2–Problem TimeIsMoney.</b> You have a graph with two costs (A and B) on each edge. Find a spanning tree that minimizes (its total A-cost) * (its total B-cost) [that's "times" in the middle]. Desired running time: polynomial. Sharper bounds are possible (I think I get O(n<sup>2</sup>m log)) but this is hard enough. To be entirely fair, the contestants just need to find an algorithm, not to prove it runs in poly-time, which may be easier (but I'm writing for a theory audience so consider yourself challenged).</div><div><br /></div><div><b>Day 2–Problem Trapezoid.</b> Consider two horizontal lines, and <i>n</i> trapezoids stretching from one line to the other. 
The proverbial picture worth 1000 words:</div><div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1zk54A6xIhpvmOifIYdw2Ibpmijqt5vyCErTP4QCQrrBR3Pftd5oSesVTEh4Qy5-1AxdVcUx4tVw5xRrC7dnCMaLvVAYlMUbwxGJUnTG9oq2KcqKXwZ0hDpY-vetxuobfruCXqdDFw8qN/s1600/trapez.GIF" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1zk54A6xIhpvmOifIYdw2Ibpmijqt5vyCErTP4QCQrrBR3Pftd5oSesVTEh4Qy5-1AxdVcUx4tVw5xRrC7dnCMaLvVAYlMUbwxGJUnTG9oq2KcqKXwZ0hDpY-vetxuobfruCXqdDFw8qN/s400/trapez.GIF" border="0" alt="" id="BLOGGER_PHOTO_ID_5632236998499686914" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 400px; height: 86px; " /></a>Find the maximal set of nonintersecting trapezoids. Running time: O(<i>n</i>lg <i>n</i>).</div><div><br /></div><div><b>Day 2–Problem Compare. </b>Alice gets a number <i>a</i>, and Bob gets a number <i>b</i>. Both are integers in {0, ..., 4095}. Bob's goal is to compare <i>b</i> to <i>a</i> (and output "greater than", "less than" or "equal"). Charlie is helping them. Alice can send Charlie a set A ⊂ {1, ..., 10240} (intuitively, think of Alice marking some bits in an array of 10Kbits). Bob can ask Charlie whether some x is in A or not (think of Bob as querying some bits of the bit vector). The goal is to minimize (for worst-case <i>a</i> and <i>b</i>)</div><div><span class="Apple-tab-span" style="white-space:pre"> </span>T=|A| + the number of queries made by Bob</div><div><i>Desired solution:</i> We know how to get T=9. 
In the Olympiad, T=10 was enough for full score, but we left the problem open-ended, so T=9 would've earned you 109 points.</div><div><br /></div><div>Somewhat unusually for computer olympiads, in this problem we could test contestant solutions exhaustively: we ran their code for all 4096<sup>2</sup> choices for (<i>a</i>,<i>b</i>) and really computed the worst-case T.</div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com5tag:blogger.com,1999:blog-786333285568106173.post-12932502353485059532011-07-12T17:45:00.002-04:002011-07-12T18:33:37.644-04:00Balkan OlympiadThe <a href="http://www.boi2011.ro/boi2011/">Balkan Olympiad</a> is over, and I am slowly recovering from the experience of chairing the Scientific Commitee. For your enjoyment, I will post summaries of the competition problems. Perhaps this post should be titled "Are you smarter than a high school student?"<div><br /></div><div><b>Day 1 – Problem 2Circles. </b>You are given a convex polygon. Find the largest R such that two circles of radius R can be placed inside the polygon without overlapping. Target running time: O~(N). I think this problem illustrates the rather significant difference between coming up with an algorithm on paper and actually implementing it (only 4 contestants scored nonzero; the committee says ``Sorry!'').</div><div><br /></div><div><b>Day 1– Problem Decryption. </b>We define the following pseudorandom number generator:</div><div><ul><li>initialize R[0], R[1], R[2] with random values in {0,..., 255}</li><li>let R[n] = R[n-3] XOR R[n-2].</li></ul><div>We also define the following cypher:</div></div><div><ul><li>let π be a random permutation of {0,...,255}</li><li>to encrypt the i-th byte of the message, output π(Message[i] XOR R[i])</li></ul></div><div>You have to implement a chosen plaintext attack. You get a device implementing this cypher. 
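For concreteness, the generator and cipher of the Decryption problem can be sketched in a few lines of Python. (The function names and the seeded <code>random.Random</code> are my own scaffolding, not part of the task; only the recurrence and the permutation step come from the problem statement.)

```python
import random

def make_device(seed=None):
    """A sketch of the cypher: a keystream R with R[n] = R[n-3] XOR R[n-2],
    fed through a secret byte permutation pi."""
    rng = random.Random(seed)
    R = [rng.randrange(256) for _ in range(3)]  # secret initial state
    pi = list(range(256))
    rng.shuffle(pi)                             # secret permutation of {0,...,255}
    pos = 0
    def encrypt_byte(m):
        """Encrypt one message byte: output pi(Message[i] XOR R[i])."""
        nonlocal pos
        while len(R) <= pos:                    # extend R[n] = R[n-3] XOR R[n-2]
            R.append(R[-3] ^ R[-2])
        c = pi[m ^ R[pos]]
        pos += 1
        return c
    return encrypt_byte
```

The chosen-plaintext attack itself is, of course, left to the reader.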
You may feed it a message of at most 320 bytes (you give the device one byte at a time, and observe the encryption immediately). Your goal is to recover the secret keys (R[0..2], and π).</div><div><br /></div><div><b>Day 1–Problem Medians. </b>Given an array A[1..n] of integers, we define the prefix medians M[0..(n-1)/2] as M[k] = median(A[1..2k+1]).</div><div><br /></div><div>The problem is: given M, recover A. O~(n) running time.</div><div><br /></div><div><br /></div><div><b>PS: </b>This was by far the best prepared Committee that I've been on, and I think we conclusively proved that, no matter how much work you do before the Olympiad, there's still a lot left to do during the Olympiad itself. Many thanks to everybody who volunteered their time. I wish I could do more to express my gratitude, but for lack of other ideas, it is my pleasure to acknowledge the committee here:<div><div><div><ul><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Marius Andrei - Senior Software Engineer, Google Inc.</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Radu Ștefan - Researcher, Technische Universiteit Eindhoven</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Dana Lica - "I.L. 
Caragiale" High School, Ploiești</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Cosmin Negrușeri - Senior Software Engineer, Google Inc.</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Mugurel Ionuț Andreica - PhD, Assist., Polytechnic University, Bucharest</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Cosmin Tutunaru- Student, Babes-Bolyai University, Cluj-Napoca</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Mihai Damaschin - Student, Bucharest University</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Gheorghe Cosmin - Student, MIT</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Bogdan Tătăroiu - Student, Cambridge University, </span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Filip Buruiană - Student, Polytechnic University, Bucharest</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Robert Hasna - Student, Bucharest University</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, 
sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; ">Marius Dragus - Student, Colgate University NY</span></li><li><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; "><span class="Apple-style-span" style="border-collapse: collapse; font-family: arial, sans-serif; white-space: nowrap; ">Aleksandar and Andreja Ilic -- </span></span><span class="Apple-style-span" style="font-family: Tahoma, sans-serif; font-size: 13px; -webkit-border-horizontal-spacing: 2px; -webkit-border-vertical-spacing: 2px; "><span class="Apple-style-span" style="border-collapse: collapse; font-family: arial, sans-serif; white-space: nowrap; "><span class="Apple-style-span" style="white-space: normal; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; ">University of Nis, Serbia; external problem submitters, who were not intimidated by the fact that I only posted the problem call in Romanian :)</span><table cellpadding="0" class="cf NtHald" style="border-collapse: collapse; margin-top: 0px; width: 764px; "><tbody><tr class="UszGxc"></tr></tbody></table></span></span></li></ul><div><span class="Apple-style-span" ><span class="Apple-style-span" style="border-collapse: collapse;"><br /></span></span></div></div></div></div></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com3tag:blogger.com,1999:blog-786333285568106173.post-272143910266407252011-04-22T11:40:00.002-04:002011-04-22T12:13:43.456-04:00Please submit...<ol><li><a href="http://www.madalgo.au.dk/html_sider/2_5_Events/MASSIVE_2011/Call_For_Papers_2011.html">Papers to MASSIVE 2011</a>. This will be a SoCG satellite workshop (<i>think Paris in June</i>...). 
Publication here does not hinder publication anywhere else.<br /><br /></li><li><a href="http://people.csail.mit.edu/mip/boi11.html">Algorithmic problems for the Balkan Olympiad (BOI 2011)</a>. I am chairing the Scientific Committee, and we need your help in putting together an interesting set of problems. If your problem is selected for the competition, you will be invited to be a member of the Scientific Committee, and you get a free trip to the Olympiad if you accept (<i>think Bistrița in July</i>...). Anybody can submit problems, but I think the free trip only applies if you're starting somewhere in Romania and you are a Romanian/EU<sup>(?)</sup> citizen. Hence the call for problems is in Romanian; sorry.<br /><br /></li><li>Comments to other blogs. This blog is now on full moderation after I had to delete a batch of comments that did not score particularly high on the sanity scale. Don't worry, the main point of this is not censorship; nasty but coherent comments will continue to be accepted.</li></ol>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com2tag:blogger.com,1999:blog-786333285568106173.post-81489532695719049362010-11-19T10:00:00.002-05:002010-11-19T10:15:50.499-05:00Complexity TheoryIf you're a student thinking of going into complexity theory, take a good look around and ask yourself: "Do I really want to be in a field with this amount of groupthink?" 
[<a href="http://lucatrevisan.wordpress.com/2010/11/08/a-circuit-lower-bound-breakthrough/">1</a>,<a href="http://blog.computationalcomplexity.org/2010/11/breakthrough-circuit-lower-bound.html">2</a>,<a href="http://blog.computationalcomplexity.org/2010/11/is-ryans-theorem-only-interesting-to.html">3</a>,<a href="http://rjlipton.wordpress.com/2010/11/08/a-breakthrough-result/">4</a>,<a href="http://scottaaronson.com/blog/?p=472">5</a>, and last but certainly not least <a href="http://scottaaronson.com/blog/?p=474">6</a>]<div><br /></div><div>Back in the day, I was myself saved from a career in complexity by Omer Reingold's logspace connectivity. </div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com57tag:blogger.com,1999:blog-786333285568106173.post-2884177867771822142010-10-27T14:03:00.002-04:002010-10-27T23:35:18.163-04:00FOCS 2010<div style="text-align: left;">I am back from FOCS 2010, where I gave two talks:</div><div><ul><li>a tutorial on data structure lower bounds: <a href="http://people.csail.mit.edu/mip/talks/FOCS-tutorial.ppsx">PPSX</a>, <a href="http://people.csail.mit.edu/mip/talks/FOCS-tutorial.pdf">PDF</a></li><li>a regular conference talk on distance oracles: <a href="http://people.csail.mit.edu/mip/papers/balls/talk.ppsx">PPSX</a>, <a href="http://people.csail.mit.edu/mip/papers/balls/talk.pdf">PDF</a> (for this <a href="http://people.csail.mit.edu/mip/papers/balls/paper.pdf">paper</a> coauthored with Liam Roditty).</li></ul><div>This paper caused some amount of fun in certain "inner circles", which I think I should share with a wider audience. If you read the paper, the algorithms repeatedly grow balls (aka shortest path trees) around vertices of the graph. After obsessing about growing these balls for more than a year, I found it natural to name the paper "How to Grow Your Balls". 
At least it allowed me to begin various talks by telling the audience that, "This is a topic of great economic importance; I receive email about it almost every day."</div></div><div><br /></div><div>The committee took its role as Cerberus at the gates of Theory very seriously, and refused to accept the paper under the original title. In the process, many jokes were made, at least some of which transpired outside the PC room. I won't post them here, as the PC probably intends to keep some decorum. (But if you get a chance, ask your friend on the PC about that proposed proceedings cover.)</div><div><br /></div><div>So the title changed, but I stretched the joke a bit further by titling the tutorial "How to Grow Your Lower Bounds".</div><div><br /></div><div>This episode confirmed two things that I already knew. First, this community insists on taking itself way too seriously. For comparison, the title certainly triggered some PR alarms inside AT&T. But it made it through, perhaps because I told people that it's in the interest of AT&T Labs to prove publicly that it's a real research lab: it lets researchers be researchers, which includes being silly at times. Given the reputation of PR departments in American telecoms, it's a bit sad to find that the theory community has a higher bar on form.</div><div><br /></div><div>Secondly, I should turn it down a notch with this blog. Several people said they had expected me to withdraw the paper rather than change the title. I found it quite amazing that people had such an image of me. Perhaps it is time to emphasize that I'm doing my research because I think it's important and cool. 
The horsing around is, well, not quite the main message.</div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com10tag:blogger.com,1999:blog-786333285568106173.post-21818078603383020462010-09-29T16:47:00.002-04:002010-09-29T16:59:27.887-04:00Problem solving versus new techniques<b>This is a guest post by Mikkel Thorup:</b><br /><br />--------------------------<br /><br />I think there is nothing more inhibiting for problem solving than referees looking for new general techniques.<br /><br />When I go to STOC/FOCS, I hope to see some nice solutions to important problems and some new general techniques. I am not interested in semi-new techniques for semi-important problems. A paper winning both categories is a wonderful but rare event.<br /><br />Thus I propose a max-evaluation rather than a sum. If the strength of a paper is that it solves an important problem, then speculations on the generality of the approach are of secondary importance. Conversely, if the strength of the paper is some new general techniques, then I can forgive that it doesn't solve anything new and important.<br /><br />One of the nice things about TCS is that we have problems that are important, not just as internal technical challenges, but because of their relation to computing. At the end of the day, we hope that our techniques will end up solving important problems.<br /><br />Important problems should be solved whatever way comes natural. It may be deep problem specific understanding, and it may build on previous techniques. Why would we be disappointed if an old problem got solved by a surprising reuse of an old technique?<div><br /><div>Elegant reuse of old techniques is not a bad thing. After all, when we develop new techniques, we are hoping they will be reused, and from a teaching perspective, it is great to discover that the same general technique applies to more diverse problems. The last thing we want is to encourage authors to hide reuse. 
Of course, reuse should be highly non-obvious for a strong conference, but for established open problems, non-obviousness will normally be self-evident.<div><br /></div></div></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com11tag:blogger.com,1999:blog-786333285568106173.post-29475545340397745922010-09-27T14:17:00.004-04:002010-09-27T16:03:30.516-04:00Retrieval-Only DictionariesWe saw two <a href="http://infoweekly.blogspot.com/2010/09/static-1d-range-reporting.html">cool</a> <a href="http://infoweekly.blogspot.com/2010/09/veb-space-method-4.html">applications</a> of dictionaries without membership; now it's time to construct them. Remember that we are given a set <i>S</i>, where each element <i>x</i>∈<i>S</i> has some associated data[<i>x</i>], a <i>k</i>-bit value. We want a data structure of O(<i>nk</i>) bits which retrieves data[<i>x</i>] for any <i>x</i>∈<i>S</i> and may return garbage when queried for <i>x</i>∉<i>S</i>.<div><br /></div><div>A conceptually simple solution is the "Bloomier filter" of [Chazelle, Kilian, Rubinfeld, Tal SODA'04]. This is based on the power of two choices, so you should first go back and review my <a href="http://infoweekly.blogspot.com/2010/02/cuckoo-hashing.html">old post</a> giving an encoding analysis of cuckoo hashing.</div><div><br /></div><div>Standard cuckoo hashing has two arrays A[1..2<i>n</i>], B[1..2<i>n</i>] storing keys, and places a key either at A[<i>h</i>(<i>x</i>)] or B[<i>g</i>(<i>x</i>)]. Instead of this, our arrays <i>A</i> and <i>B</i> will store <i>k</i>-bit values (O(<i>nk</i>) bits in total), and the query retrieve-data(<i>x</i>) will return A[<i>h</i>(<i>x</i>)] <b>xor</b> B[<i>g</i>(<i>x</i>)].</div><div><br /></div><div>The question is whether we can set up the values in A and B such that any query<i> </i><i>x</i>∈<i>S</i> returns data[<i>x</i>] correctly. 
This is a question about the feasibility of a linear system with <i>n</i> equations (one per key) and 4<i>n</i> variables (one per array entry).</div><div><br /></div><div>Consider a connected component in the bipartite graph induced by cuckoo hashing. If this component is acyclic, we can fix A and B easily. Take an arbitrary node and make it "zero"; then explore the tree by DFS (or BFS). Each new node (an entry in A or B) has a forced value, since the edge advancing to it must return some data[<i>x</i>] and the parent node has been fixed already. As the component is acyclic, there is only one constraint on every new node, so there are no conflicts.</div><div><br /></div><div>On the other hand, if a component has a cycle, we are out of luck. Remark that if we <b>xor </b>all cycle nodes by some Δ, the answers are unchanged, since the Δ's cancel out on each edge. So a cycle of length <i>k</i> must output <i>k </i>independent data values, but has only <i>k</i>-1 degrees of freedom. </div><div><br /></div><div>Fortunately, one can prove the following about the cuckoo hashing graph:</div><div><ul><li>the graph is acyclic with some constant probability. Thus, the construction algorithm can rehash until it finds an acyclic graph, taking O(<i>n</i>) time in expectation.<br /><br /></li><li>the total length of all cycles is O(lg <i>n</i>) with high probability. Thus we can make the graph acyclic by storing O(lg <i>n</i>) special elements in a stash. This gives construction time O(<i>n</i>) w.h.p., but the query algorithm is slightly more complicated (for instance, it can handle the stash by a small hash table on the side).</li></ul></div><div>These statements fall out naturally from the encoding analysis of cuckoo hashing. 
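The construction described above can be sketched in a few lines of Python. This is an illustrative toy, not the real thing: Python's built-in hash with random seeds stands in for a proper hash family, and the names `build_retrieval` and `query` are mine:

```python
import random

def build_retrieval(data, max_tries=1000):
    """Retrieval-only dictionary: query(x) = A[h(x)] xor B[g(x)].

    `data` maps keys to small non-negative integers (the k-bit data).
    We rehash until the values can be assigned consistently; this
    succeeds whenever the cuckoo graph is acyclic, which happens
    with constant probability for arrays of size 2n.
    """
    n = len(data)
    m = 2 * n
    for _ in range(max_tries):
        s1, s2 = random.getrandbits(64), random.getrandbits(64)
        h = lambda x: hash((s1, x)) % m
        g = lambda x: hash((s2, x)) % m
        # Bipartite graph: nodes 0..m-1 are entries of A, m..2m-1 entries of B.
        adj = [[] for _ in range(2 * m)]
        for x, d in data.items():
            adj[h(x)].append((m + g(x), d))
            adj[m + g(x)].append((h(x), d))
        val = [None] * (2 * m)
        for root in range(2 * m):          # fix each component by DFS
            if val[root] is None:
                val[root] = 0              # arbitrary "zero" for the root
                stack = [root]
                while stack:
                    v = stack.pop()
                    for w, d in adj[v]:
                        if val[w] is None:
                            val[w] = val[v] ^ d   # forced by the edge
                            stack.append(w)
        # A cyclic component may have produced an inconsistent assignment.
        if all(val[h(x)] ^ val[m + g(x)] == d for x, d in data.items()):
            return val[:m], val[m:], h, g
    raise RuntimeError("too many rehashes")

def query(A, B, h, g, x):
    return A[h(x)] ^ B[g(x)]   # garbage if x is not in the set
```

Rehashing until the final check passes takes O(*n*) time in expectation, matching the first bullet above.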
A cycle of length <i>k </i>allows a saving of roughly <i>k</i> bits in the encoding: we can write the <i>k</i> keys on the cycle (<i>k</i>lg <i>n</i> bits) plus the <i>k</i> hash codes (<i>k</i>lg(2<i>n</i>) bits) instead of 2<i>k</i> hash codes (2<i>k</i>lg(2<i>n</i>) bits).</div><div><br /></div><div><b>Further remarks.</b> Above, I ignored the space to store the hash functions <i>h</i> and <i>g</i>. You have to believe me that there exist families of hash functions representable in O(<i>n</i><sup>ε</sup>) space, which can be evaluated in constant time, and make cuckoo hashing work.</div><div><br /></div><div>A very interesting goal is to obtain retrieval dictionaries with close to <i>kn</i> bits. As far as I know, the state of the art is given by <a href="http://www.itu.dk/people/pagh/papers/bloomier.pdf"><span class="Apple-style-span" style="font-size: small;">[Pagh-Dietzfelbinger ICALP'08]</span></a> and <a href="http://arxiv.org/abs/0804.1845"><span class="Apple-style-span" style="font-size: small;">[Porat]</span></a>.</div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com2tag:blogger.com,1999:blog-786333285568106173.post-29177232157244337982010-09-21T13:20:00.004-04:002010-09-23T10:30:54.608-04:00Static 1D Range ReportingMethod 4 for implementing van Emde Boas with linear space, described in my last post, is due to [Alstrup, Brodal, Rauhe: STOC'01]. They worked on static range reporting in 1 dimension: preprocess a set of integers <i>S</i>, and answer query(<i>a</i>,<i>b</i>) = report all points in <i>S</i> ∩ [<i>a</i>,<i>b</i>]. This is easier than predecessor search: you can first find the predecessor of <i>a</i> and then output points in order until you exceed <i>b</i>. 
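As an aside, the predecessor-plus-walk reduction just described can be sketched as follows. Here `bisect` plays the role of the predecessor structure, so this toy version pays O(lg *n*) for the search instead of O(lglg *u*); `report_range` is my name, not from the paper:

```python
from bisect import bisect_left

def report_range(sorted_pts, a, b):
    """Report S intersected with [a, b], given the points in sorted order.

    Locate the first point >= a, then walk right until b is exceeded;
    the walk costs O(1) per reported point.
    """
    out = []
    i = bisect_left(sorted_pts, a)   # stand-in for the predecessor query
    while i < len(sorted_pts) and sorted_pts[i] <= b:
        out.append(sorted_pts[i])
        i += 1
    return out
```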
Using van Emde Boas, we would achieve a linear-space data structure with query time O(lglg <i>u</i> + <i>k</i>), where <i>k</i> is the number of points to be reported.<div><br /></div><div>Alstrup, Brodal, and Rauhe showed the following surprising result:<br /><blockquote>Static 1D range reporting can be solved with O(<i>n</i>) space and O(1+<i>k</i>) query time.<br /></blockquote> </div><div>I like this theorem a lot, since it is so easy to describe to anybody with minimal background in Computer Science, yet the result is not obvious. I have used it many times to answer questions like, "Tell me a surprising recent result from data structures."</div><div><br /></div><div><b>The solution. </b>We need a way to find <i>some </i>(arbitrary) key from <i>S</i> ∩ [a,b] in constant time. Once we have that, we can walk left and right in an ordered list until we go outside the interval.</div><div><br /></div><div>Let's first see how to do this with O(<i>n</i> lg <i>u</i>) space; this was described by [Miltersen, Nisan, Safra, Wigderson: STOC'95]. Of course, we build the trie representing the set. Given the query [<i>a</i>,<i>b</i>] let us look at the lowest common ancestor (LCA) of <i>a</i> and <i>b</i>. Note that LCA(<i>a</i>,<i>b</i>) is a simple mathematical function of the integers <i>a</i> and <i>b</i>, and can be computed in constant time. (The height of the LCA is the most significant set bit in <i>a</i> xor <i>b</i>.)</div><div><ul><li>if LCA(<i>a</i>,<i>b</i>) is a branching node, look at the two descendant branching nodes. 
If the interval [<i>a</i>,<i>b</i>] is nonempty, it must contain either the max in the tree of the left child, or the min in the tree of the right child.<br /><br /></li><li>if LCA(<i>a</i>,<i>b</i>) is an active node, go to its lowest branching ancestor, and do something like the above.<br /><br /></li><li>if LCA(<i>a</i>,<i>b</i>) is not an active node, the interval [<i>a</i>,<i>b</i>] is certainly empty!</li></ul><div>Thus, it suffices to find the lowest branching ancestor of LCA(<i>a</i>,<i>b</i>) <u>assuming</u> that LCA(<i>a</i>,<i>b</i>) is active. This is significantly easier than predecessor search, which needs the lowest branching ancestor of an arbitrary node.</div><div><br /></div><div>The proposal of [Miltersen et al.] is to store all O(<i>n</i> lg <i>u</i>) active nodes in a hash table, with pointers to their lowest branching ancestors.</div><div><br /></div><div>As in my <a href="http://infoweekly.blogspot.com/2010/09/veb-space-method-4.html">last post</a>, the technique of [Alstrup et al.] to achieve O(<i>n</i>) space is: store only O(<i>n</i> √<span style="text-decoration: overline; ">lg <i>u</i></span>) active nodes, and store them in a retrieval-only dictionary with O(lglg <i>u</i>) bits per node. We store the following active nodes:</div><div><ol><li>active nodes at depth i·√<span style="text-decoration: overline">lg <i>u</i></span> ;</li><li>active nodes less than √<span style="text-decoration: overline; ">lg <i>u</i></span> levels below a branching node.</li></ol></div><div>We first look up LCA(<i>a</i>,<i>b</i>) in the dictionary. If the lowest branching ancestor is less than √<span style="text-decoration: overline; ">lg <i>u</i></span> levels above, LCA(<i>a</i>,<i>b</i>) is in the dictionary and we find the ancestor. If not, we truncate the depth of the LCA to a multiple of √<span style="text-decoration: overline; ">lg <i>u</i></span> , and look up the ancestor at that depth.
If [a,b] is nonempty, that ancestor must be an active node and it will point us to a branching ancestor.</div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com0tag:blogger.com,1999:blog-786333285568106173.post-81248896108685025502010-09-21T11:11:00.003-04:002010-09-21T13:09:38.640-04:00vEB Space: Method 4In the previous post I described 3 ways of making the "van Emde Boas data structure" take linear space. I use quotes since there is no unique vEB structure, but rather a family of data structures inspired by the FOCS'75 paper of van Emde Boas. By the way, if you're curious who van Emde Boas is, here is a <a href="http://staff.science.uva.nl/~peter/portretpeter.jpg">portrait</a> found on his webpage.<div><br /><div>In this post, I will describe a 4th method. You'd be excused for asking what the point is, so let me quickly mention that this technique has a great application (1D range reporting, which I will discuss in another post) and it introduces some nice tools you should know.</div></div><div><br /></div><div>Here is a particularly simple variant of vEB, introduced by Willard as the "y-fast tree". Remember from the last post that the trie representing the set has <i>n</i>-1 branching nodes connected by 2<i>n</i>-1 "active" paths; if we know the lowest branching ancestor of the query, we can find the predecessor in constant time. Willard's approach is to store a hash table with all O(<i>n</i> lg <i>u</i>) active nodes in the trie; for each node, we store a pointer to its lowest branching ancestor. Then, we can binary search for the height of the lowest active ancestor of the query, and follow a pointer to the lowest branching node above. As the trie height is O(lg <i>u</i>), this search takes O(lglg <i>u</i>) look-ups in the hash table.</div><div><br /></div><div>Of course, we can reduce the space from O(<i>n</i> lg <i>u</i>) to O(<i>n</i>) by bucketing. But let's try something else.
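For concreteness, the binary search in Willard's scheme can be sketched as below, with (depth, prefix) pairs standing in for trie nodes. This toy omits the pointers to lowest branching ancestors that the real structure stores; `build_active` and `lowest_active_ancestor` are hypothetical names:

```python
def build_active(keys, w):
    """All active nodes of the depth-w trie, as (depth, prefix) pairs.

    This is the O(n lg u)-space hash table of the y-fast tree.
    """
    active = set()
    for x in keys:
        for d in range(w + 1):
            active.add((d, x >> (w - d)))
    return active

def lowest_active_ancestor(active, q, w):
    """Binary search on depth: O(lglg u) hash-table look-ups.

    Works because every ancestor of an active node is active, so
    "the depth-d ancestor of q is active" is monotone in d.
    """
    lo, hi = 0, w          # the root (depth 0) is always active
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if (mid, q >> (w - mid)) in active:
            lo = mid
        else:
            hi = mid - 1
    return lo, q >> (w - lo)
```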
We could break the binary search into two phases:</div><div><ol><li>Find <i>v</i>, the lowest active ancestor of the query at some depth of the form i·√<span style="text-decoration: overline">lg <i>u</i></span> (binary search on <i>i</i>). Say <i>v </i>is on the path <i>u</i>→<i>w </i>(where <i>u</i>, <i>w</i> are branching nodes). If <i>w </i>is not an ancestor of the query, return <i>u.<br /><br /></i></li><li>Otherwise, the lowest branching ancestor of the query is found at some depth in [ i·√<span style="text-decoration: overline; ">lg <i>u</i></span> , (i+1)√<span style="text-decoration: overline; ">lg <i>u</i></span> ]. Binary search to find the lowest active ancestor in this range, and follow a pointer to its lowest branching ancestor.</li></ol><div>With this modification, we only need to store O(<i>n</i> √<span style="text-decoration: overline; ">lg <i>u</i></span> ) active nodes in the hash table! To support step 1., we need active nodes at depths i·√<span style="text-decoration: overline; ">lg <i>u</i></span>. To support step 2., we need active nodes whose lowest branching ancestor is only ≤ √<span style="text-decoration: overline; ">lg <i>u</i></span> levels above. All other active nodes can be ignored.</div></div><div><br /></div><div>You could bring the space down to O(<i>n </i>lg<sup>ε</sup><i>u</i>) by breaking the search into more segments. But to bring the space down to linear, we use heavier machinery:</div><div><br /></div><div><b>Retrieval-only dictionaries.</b> Say we want a dictionary ("hash table") that stores a set of <i>n</i> keys from the universe [<i>u</i>], where each key has <i>k </i>bits of associated data.
The dictionary supports two operations:</div><div><ul><li>membership: is <i>x </i>in the set?</li><li>retrieval: assuming <i>x </i>is in the set, return data[<i>x</i>].</li></ul><div>If we want to support both operations, the smallest space we can hope for is log(<i>u</i> choose <i>n</i>) + <i>nk </i>≈<i> n</i>(lg <i>u</i> +<i> k</i>)<i> </i>bits: the data structure needs to encode the set itself, and the data.</div></div><div><br /></div><div>Somewhat counterintuitively, dictionaries that only support retrieval (without membership queries) are in fact useful. (The definition of such a dictionary is that retrieve(<i>x</i>) may return garbage if <i>x</i> is not in the set.)</div><div><br /></div><div>Retrieval-only dictionaries can be implemented using only O(<i>nk</i>) bits. I will describe this in the next post, but I hope it is believable enough.</div><div><br /></div><div>When is a retrieval-only dictionary helpful? When we can verify answers in some other way. Remember the data structure with space O(<i>n</i> √<span style="text-decoration: overline; ">lg <i>u</i></span> ) from above. We will store branching nodes in a real hash table (there are only <i>n</i>-1 of them). But observe the following about the O(<i>n</i> √<span style="text-decoration: overline; ">lg <i>u</i></span> ) active nodes that we store:</div><div><ol><li>We only need <i>k</i>=O(lglg <i>u</i>) bits of associated data. Instead of storing a pointer to the lowest branching ancestor, we can just store the height difference (a number between 1 and lg <i>u</i>). This is effectively a pointer: we can compute the branching ancestor by zeroing out so many bits of the node.<br /><br /></li><li>We only need to store them in a retrieval-only dictionary. Say we query some node <i>v </i>and find a height difference δ to the lowest branching ancestor. 
We can verify whether <i>v </i>was real by looking up the δ-levels-up ancestor of <i>v </i>in the hash table of branching nodes, and checking that <i>v</i> lies on one of the two paths descending from this branching node.</li></ol><div>Therefore, the dictionary of active nodes only requires O(<i>n</i> √<span style="text-decoration: overline; ">lg <i>u</i></span> · lglg <i>u</i>) bits, which is o(<i>n</i>) words of space! This superlinear number of nodes takes negligible space compared to the branching nodes.</div></div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com1tag:blogger.com,1999:blog-786333285568106173.post-82641621427723937932010-09-19T14:34:00.002-04:002010-09-19T18:19:01.194-04:00Van Emde Boas and its space complexityIn this post, I want to describe 3 neat and very different ways of making the space of the van Emde Boas (vEB) data structure linear. While this is not hard, it is subtle enough to confuse even seasoned researchers at times. In particular, it is the first bug I ever encountered in a class: Erik Demaine was teaching Advanced Data Structures at MIT in spring of 2003 (the first grad course I ever took!), and his solution for getting linear space was flawed. <div><br /></div><div>Erik is the perfect example of how you can get astronomically high teaching grades while occasionally having bugs in your lectures. In fact, I sometimes suspected him of doing it on purpose: deliberately letting a bug slip by to make the course more interactive. Perhaps there is a lesson to be learned here.</div><div><br /><div style="text-align: center;">***</div><div><br /></div><div>Here is a quick review of vEB if you don't know it.
Experienced readers can skip ahead.</div><div><br /></div><div>The predecessor problem is to support a set <i>S</i> of |<i>S</i>|=<i>n</i> integers from the universe {1, ..., <i>u</i>} and answer:<br /><blockquote>predecessor(<i>q</i>) = max { <i>x</i> ∈ <i>S</i> | <i>x</i> ≤ <i>q</i> }</blockquote></div>The vEB data structure can answer queries in O(lglg <i>u</i>) time, which is significantly faster than binary search for moderate universes.</div><div><br /></div><div>The first idea is to divide the universe into √<i>u </i>segments of size √<i>u. </i>Let hi(<i>x</i>) = ⌊<i>x</i>/√<i>u</i>⌋ be the segment containing <i>x</i>, and lo(<i>x</i>) = <i>x</i> mod √<i>u </i>be the location of <i>x</i> within its segment. The data structure has the following components:</div><div><ul><li>a hash table <i>H</i> storing hi(<i>x</i>) for all <i>x</i> ∈ <i>S</i>.</li><li>a top structure solving predecessor search among { hi(<i>x</i>) | <i>x</i> ∈ <i>S</i> }. This is the same as the original data structure, i.e. use recursion.</li><li>for each element α∈<i>H</i>, a recursive bottom structure solving predecessor search inside the α segment, i.e. among the keys { lo(<i>x</i>) | <i>x</i> ∈ <i>S</i> and hi(<i>x</i>)=α }.</li></ul><div>The query algorithm first checks if hi(<i>q</i>) ∈ <i>H</i>. If so, all the action is in <i>q</i>'s segment, so you recurse in the appropriate bottom structure. (You either find its predecessor there, or in the special case when <i>q </i>is less than the minimum in that segment, find the successor and follow a pointer in a doubly linked list.)</div><div><br /></div><div>If <i>q</i>'s segment is empty, all the action is at the segment level, and <i>q</i>'s predecessor is the max in the preceding non-empty segment. So you recurse in the top structure.</div><div><br /></div><div>In one step, the universe shrinks from <i>u</i> to √<i>u</i>, i.e. 
lg <i>u </i>shrinks to ½ lg <i>u.</i> Thus, in O(lglg <i>u</i>) steps the problem is solved.</div><div><br /></div><div><div style="text-align: center; ">***</div><div><br /></div><div>So what is the space of this data structure? As described above, each key appears in the hash table, and in 2 recursive data structures. So the space per key obeys the recursion S(<i>u</i>) = 1 + 2 S(√<i>u</i>). Taking logs: S'(lg <i>u</i>) = 1 + 2 S'(½ lg <i>u</i>), so the space is O(lg <i>u</i>) per key.</div><div><br /></div><div>How can we reduce this to space O(<i>n</i>)? Here are 3 very different ways:</div></div></div><div><br /></div><div><b>Brutal bucketing.</b> Group elements into buckets of O(lg <i>u</i>) consecutive elements. From each bucket, we insert the min into a vEB data structure. Once we find a predecessor in the vEB structure, we know the bucket where we must search for the real predecessor. We can use binary search inside the bucket, taking time O(lglg <i>u</i>). The space is (<i>n</i>/lg <i>u</i>) ·lg <i>u</i> = O(<i>n</i>).</div><div><br /></div><div><br /></div><div><b>Better analysis. </b>In fact, the data structure from above does take O(<i>n</i>) space if you analyze it better! For each segment, we need to remember the max inside the segment in the hash table, since a query in the top structure must translate the segment number into the real predecessor. But then there's no point in putting the max in the bottom structure: once the query accesses the hash table, it can simply compare with the max in O(1) time. (If the query is higher than the max in its segment, the max is the predecessor.)</div><div><br /></div><div>In other words, every key is stored recursively in just one data structure: either the top structure (for each segment max) or the bottom structure (for all other keys). This means there are O(lglg <i>u</i>) copies of each element, so space O(<i>n</i> lglg <i>u</i>).</div><div><br /></div><div>But note that copies get geometrically cheaper! 
At the first level, keys are lg <i>u</i> bits. At the second level, they are only ½ lg <i>u </i>bits; etc. Thus, the cost per key, in bits, is a geometric series, which is bounded by O(lg <i>u</i>). In other words, the cost is only O(1) words per key. (You may ask: even if the cost of keys halves every time, what about the cost of pointers, counters, etc? The cost of a pointer is O(lg <i>n</i>) bits, and <i>n</i> ≤ <i>u</i> in any recursive data structure.)</div><div><br /></div><div><b><br /></b></div><div><b>Be slick. </b>Here's a trickier variation due to <a href="http://www.itu.dk/~pagh/">Rasmus Pagh</a>. Consider the trie representing the set of keys (a trie is a perfect binary tree of depth lg <i>u </i>in which each key is a root-to-leaf path). The subtree induced by the keys has <i>n</i>-1 branching nodes, connected by 2<i>n</i>-1 unbranching paths. It suffices to find the lowest branching node above the query. (If each branching node stores a pointer to its children, and the min and max values in its subtree, we can find the predecessor with constant work after knowing the lowest branching node.)</div><div><br /></div><div>We can afford space O(1) per path. The data structure stores:</div><div><ul><li>a top structure, with all paths that begin and end above height ½ lg <i>u.</i></li><li>a hash table <i>H </i>with the nodes at depth ½ lg <i>u </i>of every path <i>crossing </i>this depth.</li><li>for each α∈<i>H</i>, a bottom structure with all paths starting below depth ½ lg <i>u </i>which have α as prefix.</li></ul><div>Observe that each path is stored in exactly one place, so the space is linear. But why can we query for the lowest branching node above some key? As the query proceeds, we keep a pointer <i>p</i> to the lowest branching node found so far. Initially <i>p </i>is the root. Here is the query algorithm:</div></div><div><ul><li>if <i>p </i>is below depth ½ lg <i>u</i>, recurse in the appropriate bottom structure. 
(We have no work to do on this level.)</li><li>look in <i>H</i> for the node above the query at depth ½ lg <i>u. </i>If not found, recurse in the top structure. If found, let <i>p </i>be the bottom node of the path crossing depth ½ lg <i>u </i>which we just found in the hash table. Recurse to the appropriate bottom structure.</li></ul><div>The main point is that a path is only relevant once, at the highest level of the recursion where the path crosses the middle line. At lower levels the path cannot be queried, since if you're on the path you already have a pointer to the node at the bottom of the path!</div></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com3tag:blogger.com,1999:blog-786333285568106173.post-28903144254055240702010-09-07T14:23:00.002-04:002010-09-07T14:28:16.008-04:00IOI Wrap-upIn the past 2 years, I have been a member of the Host Scientific Committee (HSC) of the IOI. This is the body that comes up with problems and test data. While it consists primarily of people from the host country (Bulgaria in 2009, Canada in 2010), typically the host will have a call-for-problems and invite the authors of problems they intend to use.<br /><br />This year, I was elected a member of the International Scientific Committee (ISC). This committee works together with the HSC on the scientific aspects, the hope being that a perennial body will maintain similar standards of quality from one year to another. There are 3 elected members in the ISC, each serving 3-year terms (one position is open each year).<br /><br />I anticipate this will be a lot of fun, and you will probably hear more about the IOI during this time. When a call for problems comes up (will be advertised here), do consider submitting!<br /><br />I will end with an unusual problem from this IOI:<br /><blockquote>Consider the largest 50 languages on Wikipedia. We picked 200 random articles in each language, and extracted an excerpt of 100 consecutive characters from each. 
You will receive these 10000 texts one at a time in random order, and for each you have to guess its language. After each guess, your algorithm learns the correct answer. The score is the percentage of correct guesses.<br /><br />To discourage students from coding a lot of special rules, a random permutation is applied to the Unicode alphabet, and the language IDs are random values. So, essentially, you start with zero knowledge.<br /></blockquote>Considering the tiny amount of training data and the real-time nature of guessing, one might not expect very good solutions. However, it turns out that one can get around 90% accuracy with relatively simple ideas.<br /><br />My own approach was to define <i>Score</i>(<i>text</i>, <i>language</i>) as the minimal number of substrings seen previously in this language that compose the text. This can be computed efficiently by maintaining a suffix tree for each language, and using it to answer longest common prefix queries.<br /><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com10tag:blogger.com,1999:blog-786333285568106173.post-88248108846743931802010-08-29T12:20:00.004-04:002010-09-03T14:28:07.309-04:00Barriers 2Later today, I'll be giving a talk at the 2nd <a href="http://intractability.princeton.edu/files/2010/02/barriers2-sched.html">Barriers Workshop</a> in Princeton.<div><br /></div><div>Here's my attempt to survey data structure lower bounds in 70 minutes: <a href="http://people.csail.mit.edu/mip/talks/barriers2/talk.pdf">PDF</a>, <a href="http://people.csail.mit.edu/mip/talks/barriers2/talk.ppsx">PPSX</a>.</div><div><br /></div><div><b>Update: </b>Based on comments, I'm publishing the slides as PPSX (which can be viewed with a free viewer) and PDF. 
I will try to convert my other talks to these formats when I have time.</div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com6tag:blogger.com,1999:blog-786333285568106173.post-16402825371813631492010-08-26T23:26:00.002-04:002010-08-26T23:40:01.474-04:00IOI: A Medium ProblemHere is another, medium-level problem from the IOI. (Parental advisory: this is not quite as easy as it may sound!)<div><br /></div><div>I think of a number between 1 and N. You want to guess the secret number by asking repeatedly about values in 1 to N. After your second guess, I will always reply "hotter" or "colder", indicating whether your recent guess was closer or farther from the secret compared to the previous one.</div><div><br /></div><div>You have lg N + O(1) questions.</div><div><br /></div><div style="text-align: center;">***</div><div style="text-align: left;"><br /></div><div style="text-align: left;">The solution to the <a href="http://infoweekly.blogspot.com/2010/08/ioi-hard-problem.html">first problem</a> I mentioned can be found in the comments. Bill Hesse solved the open question that I had posed. He has a neat example showing that the space should be N<sup>1.5</sup>log<sub>2</sub>3 bits, up to lower order terms. It is very nice to know the answer.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">A very elegant solution to the <a href="http://infoweekly.blogspot.com/2010/08/ioi-another-hard-problem.html">second problem</a> was posted in the comments by a user named Radu (do you want to disclose your last name?). Indeed, this is simpler than the one I had. However, the one I had worked even when the numbers in the array were arbitrary (i.e. you could not afford to sort them in linear time). 
I plan to post it soon if commenters don't find it.</div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com22tag:blogger.com,1999:blog-786333285568106173.post-55212981463352136572010-08-22T19:54:00.002-04:002010-08-22T20:06:04.273-04:00IOI: Another Hard ProblemYou are given a matrix A[1..N][1..M] that contains a permutation of the numbers {1, ..., NM}. You are also given W≤N and H≤M. The goal is to find that rectangle A[i ... i+W][j ... j+H] which has the lowest possible <i>median</i>.<div><br /></div><div>Running time: O(NM).<br /><div><b><br /></b></div><div style="text-align: center;"><b>***</b></div></div><div style="text-align: center;"><b><br /></b></div><div style="text-align: left;">This could have been another hard problem at the IOI, but it was decided to give maximum score to solutions running in O(NM lg(NM)). This is considerably easier to obtain (but still interesting).</div><div style="text-align: left;"><br /></div><div style="text-align: left;">In the style of what Terry Tao tried for the IMO, I am opening this blog post as a discussion thread to try to solve the problem collectively ("poly-CS" ?). You are encouraged to post any suggestions, half-baked ideas, trivial observations, etc – with the goal to jointly reach a solution. If you think about the problem individually and find the solution, please don't post, as it will ruin the fun of the collective effort. Following this rule, I will not engage in the discussion myself.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">I did not encourage a discussion for the first problem, since it was the kind of problem that only required one critical observation to solve. 
This second problem requires several ideas, and in fact I can see two very different solutions.</div><div style="text-align: left;"><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com20tag:blogger.com,1999:blog-786333285568106173.post-4854020066728036152010-08-20T08:09:00.003-04:002010-08-20T08:22:11.017-04:00IOI: The Hard ProblemThe International Olympiad in Informatics (IOI 2010) is taking place this week at the University of Waterloo, Canada.<div><br /></div><div>The olympiad often features a hard problem, which is intended to be solved by a handful of contestants. This year, the problem was solved by about 6 people. Read the problem below and give it a shot! :)</div><div><br /></div><div>I will describe the problem in both TCS and IOI fashion.</div><div><br /></div><div><b>Asymptotic version. </b>You are given an unweighted, undirected graph on <i>N</i> vertices. Some sqrt(<i>N</i>) vertices are designated as "hubs". You have to encode the pairwise distances between all hubs and all vertices in O(<i>N</i><sup>1.5</sup>) bits of space.</div><div><br /></div><div>The encoder and decoder may run in polynomial time. Of course, the decoder does <i>not</i> see the original graph; it receives the output of the encoder and must output the explicit distances between any hub and any other vertex. (This list of explicit distances takes O(<i>N</i><sup>1.5</sup>lg <i>N</i>) bits.)</div><div><br /></div><div><b>Non-asymptotic version. </b>You are given a graph on 1000 nodes and 36 designated hubs. You have to encode the distances between all hubs and all vertices in 70,000 bits of space.</div><div><br /></div><div>The non-asymptotic version is a bit harder, since you have to pay more attention to some details.</div><div><br /></div><div><b>The research version.</b> Prove or disprove that the distances can be encoded using (1+o(1)) <i>N</i><sup>1.5 </sup>bits of space. 
I don't know the answer to this question (but I find the question intriguing.)</div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com9tag:blogger.com,1999:blog-786333285568106173.post-82545144182576920112010-08-04T13:12:00.004-04:002010-08-04T17:42:56.691-04:00A taxonomy of range query problems<div>In this post, I will try to enumerate the range query problems that I know about. Let me know if I'm missing anything.</div><div><br /></div><div><b>The query. </b>Say you have <i>n</i> points in the plane, and you query for the points in an axis-parallel rectangle. What could we mean by "query"?</div><div><ul><li><i>existential range queries</i>: Is there any point in the rectangle?</li><li><i>counting queries</i>: How many points are there in the rectangle?</li><li><i>reporting queries</i>: List the points in the rectangle. Unlike the previous cases, the query time is now broken into two components: it is usually given as f(n) + k*g(n), where k is the number of output points.</li></ul><div>Now let's assume the points have some number associated to them (a weight or a priority). Then one could ask the following natural queries:</div></div><div><ul><li><i>weighted counting</i>: What is the total weight of the points inside?</li><li><i>range min (max) query</i></li><li><i>range median query</i>. (Possible generalizations: selection or quantiles.)</li><li><i>top-k reporting: </i>Report just the top <i>k</i> guys, by priority (for <i>k</i> given). One may demand the output to be sorted. More stringently, one may ask the query algorithm to enumerate points sorted by priority, in time g(n) per point, until the user says "stop."</li></ul><div>The number associated to a point can also be a "color". For instance, points could represent Irish pubs / Belgian bars / etc, and we may only want one point of each type in our output. 
Then the queries become:</div></div><div><ul><li><i>colored counting: </i>How many distinct colors are in the rectangle?</li><li><i>colored reporting</i>: List the distinct colors in the rectangle (possibly with one example point from each color).</li><li><i>top-k colored reporting</i>: If the colors are sorted by priorities (e.g. I prefer points of color 1 over points of color 2), one can then ask for the top-k distinct colors inside the rectangle.</li></ul></div><div><br /></div><div><b>Dynamism.</b> The problem could be:</div><div><ul><li><i>static</i>: Preprocess the point set and then answer queries.</li><li><i>dynamic</i>: Insert and delete from the point set.</li><li><i>incremental</i> / <i>decremental</i>: We only insert or delete.</li><li><i>offline</i>: The sequence of operations is known in advance. This is enough for many applications to algorithms.</li><li><i>parametric</i> / <i>kinetic</i>. I confess ignorance with respect to these. </li></ul><div><b><br /></b></div><div><b>Orthogonal range queries. </b>The setting from above works in any number of dimensions <i>d</i>≥1: the data set consists of <i>n</i> points in <i>d</i>-dimensional space and the query is a box [a<sub>1</sub>, b<sub>1</sub>]×···×[a<sub><i>d</i></sub>, b<sub><i>d</i></sub>]. This setup is usually called "orthogonal range queries".</div></div><div><br /></div><div>We can consider the following important restrictions on the query:</div><div><ul><li><i>dominance queries</i>: the box is [0, b<sub>1</sub>]×···×[0, b<sub><i>d</i></sub>]. In other words, we are asking for the points dominated, coordinate-wise, by a point (b<sub>1</sub>, ..., b<sub><i>d</i></sub>).<br /><br /></li><li><i>k</i>-<i>sided queries</i>: exactly 2<i>d</i>-<i>k</i> values in (a<sub>1</sub>, a<sub>2</sub>, ..., a<sub><i>d</i></sub>) are zero. For instance, a 3-sided query in 2D is a rectangle with one side on the <i>x</i> axis. 
Dominance queries are the special case <i>k</i> = <i>d</i>.</li></ul></div><div><b><br /></b></div><div><b>The universe. </b>The coordinates of the points and queries can come from the following sets:</div><div><ul><li>general universe. In the standard RAM model, we assume that the coordinates are integers that fit in a machine word.<br /><br /></li><li><i>rank space</i>: the coordinates are from {1, 2, ..., <i>n</i>}. One can reduce any static problem to rank space by running 2<i>d</i> predecessor queries. Most problems can be shown to be at least as hard as predecessor search, so their complexity is precisely: "optimal cost of predecessor search" + "optimal cost for the problem in rank space". In other words, for most problems it is sufficient to solve them in rank space.<br /><br /></li><li>dense universe: the points are exactly the points of the grid [1, <i>n</i><sub>1</sub>]×···×[1, <i>n</i><sub><i>d</i></sub>] where <i>n</i><sub>1</sub><i>n</i><sub>2</sub>···<i>n</i><sub><i>d </i></sub>= <i>n</i>. In 1D this is the same as rank space, but in 2 or more dimensions the problems are very different. (To my knowledgeable readers: Is there a standard name for this case? For counting queries people call this "the partial sums problem", but how about e.g. min queries?)</li></ul><div>For dynamic problems, the "rank space" changes when a new coordinate value is inserted. Thus, a rank-space solution must support an "insert value" operation that increments all coordinate values after a given one, creating space for a newly inserted point. (I have heard the term "<i>list space</i>" for this. Should we just use "rank space"?)</div><div><br /></div></div><div><br /></div><div><b>Stabbing. </b>So far, our data set consisted of points and the queries asked for points in a given rectangle. Conversely, one can consider a data set of rectangles; the query is a point and asks about the rectangles containing that point ("stabbed" by it). 
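As a minimal brute-force sketch of stabbing reporting (conventions mine: a rectangle is a tuple (x1, x2, y1, y2), closed on all sides):

```python
def stabbing_report(rects, q):
    """Report the rectangles containing ("stabbed" by) the query point q.

    rects is a list of (x1, x2, y1, y2) tuples; q is a point (x, y).
    Brute force, O(n) per query -- a reference answer, not a data structure.
    """
    x, y = q
    return [r for r in rects
            if r[0] <= x <= r[1] and r[2] <= y <= r[3]]
```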
This problem is important, among others, in routing: we can have rules for packets coming from some range of IP addresses and going to some other range of IP addresses.</div><div><br /></div><div>The notion of rank space and all the query combinations still make sense. For instance, <i>interval max stabbing</i> is the following problem: given a set of intervals (in 1D) with priorities, return the interval of highest priority stabbed by a query point.</div><div><br /></div><div>Note that the rectangles in the input can intersect! If we ask that the rectangles be disjoint, or, more stringently, form a partition of the space, we obtain the <i>point location problem.</i></div><div><b><br /></b></div><div><b><br /></b></div><div><b>Rectangle-rectangle queries. </b>So far we looked at containment relations between rectangles and points. More generally, the data set can consist of rectangles, and the query can also be a rectangle. Then one can ask:</div><div><ul><li><i>intersection queries</i>: analyze the set of input rectangles that intersect the query rectangle.</li><li><i>containment queries</i>: analyze the set of rectangles that contain / are-contained-by the query.</li></ul></div><div>Two important special cases arise when the rectangles degenerate to line segments:</div><div><ul><li><i>orthogonal segment intersection</i>: Given a set of horizontal segments, find the ones that intersect a vertical query segment.<br /><br /></li><li><i>orthogonal ray shooting</i>: Given a set of horizontal segments, find the lowest one immediately above a query point. In other words, consider a vertical ray leaving from the point and find the first segment it intersects. (This is the min segment intersection query, where the priority of each horizontal segment is its <i>y</i> coordinate.)</li></ul></div><div><b><br /></b></div><div><b>More complex geometry. </b>Of course, our ranges need not be orthogonal. 
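Before leaving the orthogonal world, a brute-force sketch of orthogonal ray shooting (my convention: a horizontal segment is a tuple (x1, x2, y), and "above" means strictly above). The min over the y coordinates is exactly the "priority = y coordinate" view from the parenthetical remark:

```python
def ray_shoot(segments, q):
    """Shoot a vertical ray upward from q = (x, y) and return the first
    horizontal segment (x1, x2, y') it hits, i.e. the lowest segment
    strictly above q whose x-range covers q. Returns None if the ray
    escapes to infinity. Brute force, O(n) per query.
    """
    qx, qy = q
    hits = [s for s in segments if s[0] <= qx <= s[1] and s[2] > qy]
    return min(hits, key=lambda s: s[2]) if hits else None
```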
One can consider:</div><div><ul><li>balls</li><li>arbitrary lines</li><li>half spaces</li><li>simplices (e.g. 2D triangles).</li></ul></div><div>In non-orthogonal geometry, the concept of rank space disappears. However, most other combinations are possible. For instance, one could ask about the points in a query range; the ranges stabbed by a query point; the ranges intersecting a query range; the first range intersected by a ray; etc. We can ask existential, counting, or reporting questions, on ranges that can have weights or priorities.</div><div><br /></div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com9tag:blogger.com,1999:blog-786333285568106173.post-1493942322197721932010-07-22T09:13:00.003-04:002010-07-22T09:54:10.959-04:00SODABeing on the SODA PC is excruciating. Too many submissions are at the level of a hard exercise – things that David Karger would assign in <a href="http://courses.csail.mit.edu/6.854/current/">Advanced Algorithms</a> or <a href="http://courses.csail.mit.edu/6.856/">Randomized Algorithms</a> as homework. And since I'm a problem solver by nature, I cannot resist solving the problems before (in lieu of) reading the papers... <div><br /></div><div>I am left wondering what life on the <a href="http://conference.itcs.tsinghua.edu.cn/ICS2011/">ICS</a> PC is like. The fact that the PC members can solve the problem in half an hour is promised not to be a negative – so is there any point in engaging the technical side of your brain during the review process?</div><div><br /></div><div>Which brings me to graph algorithms, a field I have engaged in sporadically. This field strikes me as problem solving par excellence. You think about the problem, you pull a rabbit out of the hat, and run around naked screaming "<span class="Apple-style-span" style=" line-height: 19px; font-family:sans-serif;font-size:13px;"><i>Εὕρηκα</i></span>!" 
Yet most reviewers (who, I will assume, write graph-theory papers in the same way) cannot help commenting on the lack of new techniques in the solution. I interpret this as code for "We didn't care to think about the problem, we just read over your solution and remarked that it solved the problem by looking at some edges in some order, then at some trees and some vertices."</div><div><br /></div><div>(I should say I have no sour grapes about this... Graph algorithms is not my main field and all my papers on the topic are in STOC/FOCS. Yet I am amazed by how much the field sabotages itself with this "techniques thing.")</div><div><br /></div><div>By contrast, I once came across a reviewer who actually wore his problem-solver hat while reading my paper. The review read "I spent days to figure out the interplay and indispensability of the data structures and techniques used. But after fully understanding the entire picture and the interplay of all the data structures, I felt very satisfied. It is indeed a very beautiful piece of algorithmic art. I congratulate the authors for such a novel algorithm." This is possibly the best review I ever got – complemented, of course, by some "no new techniques but important result" reviews.</div><div><br /></div><div>Which brings me to the main trouble with TCS, in my opinion. If only we could stop inventing a ton of artificial problems every year, and we actually cared about solving some age-old clear-cut problems – then it would be no surprise to find reviewers who have actually thought carefully about your hard problem.</div>Mihaihttp://www.blogger.com/profile/11599372864611039927noreply@blogger.com25