How many tanks?

The German tank problem is a favourite of mine. The Wikipedia page on it is a little long-winded, but I think it can be understood a lot faster with a little numerical mucking about.

The problem is quite simple. The enemy are producing tanks, and each has a sequential serial number (for simplicity, let's assume that the numbers are reset every month). You encounter this scene on the battlefield;
and we see that this is tank number, say, 15 of a particular month's production. How many tanks were produced in that month? Can we even answer the question?

This is the problem that faced the Allies in WWII; you really wanted to know how many panzers were out there. Intelligence officers were reporting production of more than 1000 tanks per month, but based on statistics, the predicted number was significantly lower, in the hundreds. After the war, the numbers were checked against records and the statistical answer was amazingly accurate (read the Wikipedia page for more details).

But let's see how we can calculate this. We'll adopt the Bayesian approach (because that's the correct thing to do :). So, let's assume that the number of tanks actually made is some number N, and that the maximum number of tanks that could possibly be made is M (we'll insert some real numbers in here soon).

On the battlefield, we find a tank with serial number A. What is your estimate of the number of tanks made in that month (let's call this X)? We want to construct a probability distribution, and where this peaks is our best estimate for the number of tanks built.

Clearly, the minimum number of tanks is A (because you are holding a tank with that serial number). What about the rest of the probability distribution? If you think about it, if the total number of tanks is X, then the probability of randomly selecting tank A is simply 1/X. So the probability distribution looks like this:

This is the case where the actual number of tanks produced was 274, the maximum we think they could produce is 1000, and the serial number of the one tank found was 217. So the most likely number of tanks is 217, but there is still a fair amount of probability that there could be 900 or 1000.
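
If you want to play along at home, here's a minimal Python sketch of this single-tank distribution (I'm assuming a flat prior on X out to M, so the curve is just the truncated 1/X):

```python
import numpy as np
import matplotlib.pyplot as plt

M = 1000   # the maximum number of tanks we think could possibly be made
A = 217    # serial number of the tank we found

X = np.arange(1, M + 1)

# probability of randomly drawing serial A if X tanks exist: 1/X for X >= A, else 0
p = np.where(X >= A, 1.0 / X, 0.0)
p /= p.sum()                                # normalise to a probability distribution

print("Most probable X:", X[np.argmax(p)])  # peaks at 217, the serial we saw

plt.plot(X, p)
plt.xlabel("Total number of tanks, X")
plt.ylabel("Probability")
plt.show()
```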

Now for the cool part. You hear a report that another tank has been knocked out, this time serial number 91. You might think this tells you nothing new, as you already know the minimum number is 217, but tank 91 has a probability distribution of the same form as tank 217's, and to get the resultant distribution for the total number of tanks you multiply the two together.

I've glossed over some of the key Bayesian words and concepts here, but this is basically what it boils down to: we get more evidence and we update our beliefs. So, what's the result of now finding tank 91? The result is the red curve below.
Notice that the most likely number of tanks is still 217, but knowing about tank 91 as well has really started to suppress the probability up near 1000.
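
In code, this update is just an element-wise multiplication of the two 1/X curves (again a sketch, with the same flat-prior assumption as before):

```python
import numpy as np

M = 1000
X = np.arange(1, M + 1)

def tank_curve(A):
    """1/X probability curve, truncated to X >= A, for a tank with serial number A."""
    return np.where(X >= A, 1.0 / X, 0.0)

posterior = tank_curve(217) * tank_curve(91)    # multiply the individual distributions
posterior /= posterior.sum()

print("Peak:", X[np.argmax(posterior)])         # still 217
print("P(X > 900):", posterior[X > 900].sum())  # noticeably suppressed compared to one tank
```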

Reports come in that three more tanks have been knocked out, numbers 256, 248 and 61. What does the resultant distribution look like?
Again, each of the blue curves is the probability distribution for an individual tank, whereas the red is the total. Notice that the peak is now at 256, and the chance of more than 600 tanks being produced per month is pretty small, while 1000 is negligible.

Reports come in of five more tanks, numbers 250, 172, 189, 29 and 170. What's the distribution now?
For clarity, I've left out the blue curves, but you can see that with just 10 tanks, we know the number produced is at least 256, but quite probably less than 400.
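
Feeding all ten serial numbers through the same multiplication reproduces this behaviour; a quick sketch:

```python
import numpy as np

M = 1000
X = np.arange(1, M + 1)
serials = [217, 91, 256, 248, 61, 250, 172, 189, 29, 170]

posterior = np.ones_like(X, dtype=float)
for A in serials:
    posterior *= np.where(X >= A, 1.0 / X, 0.0)   # one 1/X factor per knocked-out tank
posterior /= posterior.sum()

print("Peak:", X[np.argmax(posterior)])            # 256, the largest serial seen
print("P(X < 400):", posterior[X < 400].sum())     # the bulk of the probability
```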

We can continue to play this game, and with 25 tanks knocked out, we get
Notice that I've changed the scale on the x-axis. We can be quite confident that fewer than 300 were made.
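
I haven't listed the 25 serial numbers, but as a sketch you can simulate a set from the true production run of 274 and push them through the same calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
N_true = 274                        # the actual production number in this example
M = 1000
X = np.arange(1, M + 1)

# draw 25 distinct serial numbers at random from the true production run
serials = rng.choice(np.arange(1, N_true + 1), size=25, replace=False)

posterior = np.ones_like(X, dtype=float)
for A in serials:
    posterior *= np.where(X >= A, 1.0 / X, 0.0)
posterior /= posterior.sum()

print("Largest serial seen:", serials.max())
print("P(X < 300):", posterior[X < 300].sum())   # typically comes out well above 0.9
```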

Now I think that is cool. And that's how information should be used.

Comments

  1. This is similar to Rich Gott's "95% confidence that I'm a typical observer" argument, from which he predicted the lengths of time plays would run on Broadway, etc. What's your take on that?

  2. Yes - should be 1000 - corrected.

    As for the Gott argument - well, I've been arguing about that for almost 20 years, and I'm still not 100% sure what my actual take *is*! It clearly works for the examples in which it works, but beyond that....

    Perhaps I should compose a post to lay my ideas out a little more clearly.

  3. This is an interesting problem indeed. I remember thinking about it a lot a few years back, and I discovered an interesting thing. If X is the total number of tanks, it is common to assign a "Jeffreys" 1/X ignorance prior, and then update using a 1/X likelihood for X >= A given the first tank number is A. But this is not quite right!

    The observed data is "I saw a tank and its number was A", and if you split this up, you get two propositions, "I saw a tank" and "its number was A". The likelihood for "I saw a tank" is proportional to X! Which exactly cancels the 1/X from the tank number.

    In other words, if you take into account that you might not have seen a tank at all, seeing one gives you evidence that the number of tanks is large, and reading the number gives you evidence that it's small. These cancel each other out exactly, so all you get for a result is that your posterior is whatever your prior was but truncated to X >= A.
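
    In symbols, writing p(X) for the prior, the cancellation being described is (roughly):

    $$
    P(\text{saw a tank} \mid X) \propto X, \qquad
    P(\text{its number was } A \mid \text{saw a tank}, X) = \frac{1}{X} \quad (X \ge A),
    $$
    $$
    \Rightarrow\; P(X \mid \text{data}) \;\propto\; p(X) \times X \times \frac{1}{X} \;=\; p(X) \quad \text{for } X \ge A.
    $$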

    I believe this is equivalent to Carlton Caves's refutation of Gott's argument.

  4. Hi Brendon - I've heard this one before, but don't know if I am convinced. Why? Well, if the prior is uniform, then we just end up with a uniform prior truncated to X > A and so we don't seem to learn a lot. However, using 1/X does wipe out the larger values and we get a peakier result near the actual answer.

    There are complications with "I saw (knocked out) a tank", in that tanks are unlikely to be randomly thrown out onto the battlefield, or to be randomly knocked out, and so the "I saw a tank" likelihood is not simply proportional to X - but I don't think this solves the problem.....

  5. Gott essentially claimed that a typical observer doesn't live during an atypical time, which in relation to an object would be near the beginning or near the end of its existence. In other words, with 95% confidence he should observe the object during the middle 95% of its existence. So, if one observes an object at a random time, then with 95% confidence it will last longer than 1/39 of its present age but be gone before 39 times its present age. He did some postdictions, such as his visit to the Berlin Wall in 1970 or whatever, then some predictions, such as the run-time of Broadway plays. Lo and behold, his theory is confirmed by observations!
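
    In symbols, if the current age t_now sits uniformly within the total lifetime T, taking the middle 95% gives

    $$
    0.025 \le \frac{t_{\text{now}}}{T} \le 0.975
    \;\Longrightarrow\;
    \frac{t_{\text{now}}}{39} \le T - t_{\text{now}} \le 39\, t_{\text{now}} .
    $$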

    Gott's Nature paper generated quite a bit of discussion; we are looking forward to Cusp's thoughts here soon. However, I think the basic version above is accepted by everyone. (The questions are how useful it is and, in particular cases, if the assumptions are valid.)

    One can turn this around and say that the typical properties of an object are observed by a typical observer. In the case of the Berlin wall or a Broadway play, this doesn't mean much, since these change but little over their lifetimes. But what about objects whose properties change with time? An example of such an object is the universe, in which (in general) the cosmological parameters change with time. One formulation of the flatness problem states that we should be surprised that Omega is still of the order 1 today, since if Omega is only slightly larger than 1 in the early universe, then it will become arbitrarily large (in a finite time, no less) (at least in some classes of cosmological models). In a universe which will collapse in the future after expanding to a maximum finite size (probably ruled out by observations, of course, but to a theoretician that makes the problem if anything only slightly less interesting), Omega indeed becomes infinite. However, large values of Omega occur only during the special time near maximum expansion. So, one can reverse Gott's argument to say that the typical value observed by a typical cosmologist would, in such a universe, not be a very large value of Omega. This is one solution to one formulation of the flatness problem.

    I have written up my thoughts on the flatness problem in a paper which has been accepted by Monthly Notices of the Royal Astronomical Society.

    Enjoy.

  6. Thanks for the paper Phil - I'm going to enjoy reading it and I'll see if I can get a Gott post together in the near future.

