Originally Posted by **21st Century Schizoid Man**

I would not characterise "not related to each other" as "random" - random variables can be dependent. But it now seems to be common usage in everyday talk - if something is "random", people think it must have a uniform distribution, with each outcome having equal probability. If some outcomes have more probability than others, then they say that this is not "random". But a more precise way to characterise "not related to each other" would be "independent".
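To illustrate the distinction, here is a minimal Python sketch (the 0.8 bias and the die are just made-up examples): a biased coin is random without being uniform, and a variable can be random while being completely determined by another random variable.

```python
import random

random.seed(0)

# Random but not uniform: a coin that comes up 1 with probability 0.8.
biased = [1 if random.random() < 0.8 else 0 for _ in range(10_000)]
prop_ones = sum(biased) / len(biased)  # close to 0.8, not 0.5

# Random but not independent: Y is a deterministic function of X,
# yet both X and Y are random variables.
xs = [random.randint(1, 6) for _ in range(10)]
ys = [x % 2 for x in xs]
```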

The word "expected" is missing before "number of events". For a Poisson distribution, the expected value is the same as the variance. But the "number of events" is a non-constant random variable, while the "variance" is a single number, so the two cannot be equal - at least not in every possible outcome.
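A quick simulation makes the mean-equals-variance property concrete (a sketch; the sampler is Knuth's classic method and lambda = 4 is an arbitrary choice):

```python
import math
import random

random.seed(1)
lam = 4.0

def poisson_sample(lam):
    # Knuth's method: multiply uniforms until the product drops below e^(-lam).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

draws = [poisson_sample(lam) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# Both mean and var come out close to lam = 4.0.
```

The sample mean and sample variance here are both estimates of lambda; the theoretical statement is that the *population* mean and variance are exactly equal.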

I am not sure what "number of trials" in the above is supposed to mean. Is it supposed to be the number of occurrences? The meaning of "expected probability integer number" is also not clear - is this the expected number of occurrences in a unit of time?

If by "the Euler number" you mean e (about 2.718), then I am not sure how that comes into it. There is not a single Poisson distribution, but an entire family of distributions, normally parameterised by "lambda", which turns out to be both the expected value and the variance. The probability mass function of a Poisson does have an "e" in it. Maybe there is some other way of parameterising the distribution such that an "e" shows up in the mean and variance. But even then, the mean would be some function involving "e" and an unknown parameter, and the problem of finding the population mean has simply been transformed into the problem of finding that unknown parameter.
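For reference, the probability mass function is P(X = k) = lambda^k · e^(-lambda) / k!, which is where the "e" appears. A short sketch (lambda = 2.5 is arbitrary) confirms that the probabilities sum to one and that the probability-weighted sum of outcomes recovers lambda:

```python
import math

lam = 2.5

def poisson_pmf(k, lam):
    # P(X = k) = lam**k * e**(-lam) / k!  -- this is where "e" shows up
    return lam**k * math.exp(-lam) / math.factorial(k)

# Truncate the infinite support at k = 100; the tail beyond is negligible.
total = sum(poisson_pmf(k, lam) for k in range(100))
weighted_mean = sum(k * poisson_pmf(k, lam) for k in range(100))
# total is ~1, weighted_mean is ~lam
```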

But if parameterised by lambda, then if you know the value of lambda, you know the population mean already (and the population variance, which is the same). If you don't know lambda, then it needs to be estimated in some way, which could be done using a sample average - the same way you normally find the sample average of any random variable. (The fact that it is a Poisson distribution would be completely irrelevant in the calculation of the sample average.)
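That irrelevance is easy to see in code - the estimator never looks at the distribution (a sketch; the counts are hypothetical made-up data):

```python
def sample_mean(data):
    # Works for data from any distribution; nothing Poisson-specific here.
    return sum(data) / len(data)

# Hypothetical observed failure counts per hour.
failures_per_hour = [3, 5, 4, 2, 6, 4, 3, 5]
lam_hat = sample_mean(failures_per_hour)  # 4.0 - our estimate of lambda
```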

However, if you want to find the population average, that brings us back to the point that I commented on - the way you do this is not to take some dataset, add up all the numbers, and divide by the number of observations. That is a sample average, and if you could choose your "sample" to be the entire population, then it would work. But here, the population has infinitely many possible outcomes (very high numbers of occurrences are unlikely, but possible), and the outcomes do not all have the same probability (with infinitely many outcomes, they can't all have the same probability, since the probabilities have to add up to one). This is a case where you clearly cannot find a population average the same way you find a sample average. That only works with a finite population, with each outcome having equal probability.

For what it's worth, if we have the usual assumptions that cause the number of events per time period to have a Poisson distribution, then the time between events has an exponential distribution. I posed the problem with the failures as an expected time until failure, with the underlying distribution being exponential. However, this is equivalent to asking the expected number of failures per unit of time. ("Equivalent" in the sense that if you know one, you can find the other - not that they are the same number.)
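A simulation of that equivalence (a sketch; lambda = 2 events per unit time is arbitrary): generate exponential inter-arrival gaps with rate lambda, then count arrivals per unit interval. The mean gap comes out near 1/lambda and the mean count per interval near lambda.

```python
import math
import random

random.seed(2)
lam = 2.0   # rate: expected events per unit time
T = 20_000  # simulation horizon in time units

def exp_gap():
    # Inverse-CDF sample from an exponential with rate lam.
    return -math.log(1.0 - random.random()) / lam

t, gaps, counts = 0.0, [], [0] * T
while True:
    g = exp_gap()
    if t + g >= T:
        break
    t += g
    gaps.append(g)
    counts[int(t)] += 1

mean_gap = sum(gaps) / len(gaps)  # ~ 1 / lam = 0.5
mean_count = sum(counts) / T      # ~ lam = 2.0
```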

But either distribution will serve the purpose here. Suppose we have either an exponential distribution (the way I posed the failure problem, the random variable being time until next failure) or a Poisson distribution (the random variable is the number of failures per unit of time). How to find the population mean? We cannot find it the same way we find the sample mean.
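One answer: use the distribution itself. The population mean is the probability-weighted sum (Poisson) or integral (exponential) over all possible outcomes. A crude numeric sketch for the exponential case (rate lambda = 2, so the answer should be 1/lambda = 0.5; the grid and cutoff are arbitrary choices):

```python
import math

lam = 2.0

def density(x):
    # Exponential density: lam * e**(-lam * x) for x >= 0.
    return lam * math.exp(-lam * x)

# Population mean = integral of x * density(x) over [0, infinity).
# Truncate at x = 30 (the tail there is ~e^-60) and use a Riemann sum.
dx = 1e-3
pop_mean = sum(x * density(x) * dx for x in (i * dx for i in range(int(30 / dx))))
# pop_mean is ~0.5, i.e. 1/lam - no sample is involved anywhere.
```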