## Sunday, September 30, 2007

### Naturalism

I've started reading John Lennox's just-published book, *God's Undertaker: Has Science Buried God?* I met Lennox at St. Ebbe's. It's good. I think I see the essence of the philosophical position of Naturalism now.


## Saturday, September 29, 2007

### Shoulder Muscles and Handwriting


## Friday, September 28, 2007

### Weighted Least Squares and Why More Data is Better

In doing statistics, when should we weight different observations differently?

Suppose I have 10 independent observations of $x$ and I want to estimate the population mean, $\mu$. Why should I use the unweighted sample mean rather than weighting the first observation by .91 and each of the rest by .01?

Either way, I get an unbiased estimate, but the unweighted mean gives me lower variance of the estimator. If I use just observation 1 (a weight of 100% on it) then my estimator has the variance of the disturbance. If I use two observations, then a big positive disturbance on observation 1 might be cancelled out by a big negative on observation 2. Indeed, the worst case is that observation 2 also has a big positive disturbance, in which case I am no worse off by having it. I do not want to overweight any one observation, because I want mistakes to cancel out as evenly as possible.
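As a sanity check, here is a small NumPy simulation (a sketch with made-up values $\mu = 5$ and $\sigma = 1$, using the .91/.01 weighting scheme from above). Both estimators come out unbiased, but the variance of the weighted mean is $\sum_i w_i^2 \sigma^2 \approx .829$, versus $\sigma^2/10 = .1$ for the unweighted mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_trials = 10, 100_000
# Draws with mu = 5, sigma = 1 (hypothetical values for illustration)
x = rng.normal(loc=5.0, scale=1.0, size=(n_trials, n_obs))

w = np.array([0.91] + [0.01] * 9)   # lopsided weights, still summing to 1
weighted = x @ w                     # weighted mean, trial by trial
unweighted = x.mean(axis=1)          # ordinary sample mean

# Both are unbiased: each average is close to 5
print(weighted.mean(), unweighted.mean())
# But the lopsided weights inflate the variance:
print(weighted.var())    # near 0.829 = sum of w_i^2 (times sigma^2 = 1)
print(unweighted.var())  # near 0.100 = 1/n
```

The variance of a weighted mean of independent draws is $\sigma^2 \sum_i w_i^2$, which, subject to the weights summing to one, is minimized by equal weights.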

All this is completely free of the distribution of the disturbance term. It doesn't rely on the Central Limit Theorem, which says that as $n$ increases the distribution of the estimator approaches the normal distribution (provided I don't use too much weighting, at least!).

If I knew that observation 1 had a smaller disturbance on average, then I *would* want to weight it more heavily. That's heteroskedasticity.


## Thursday, September 27, 2007

### Lizzie's Plus Table


### Who Most Wants To Be Elected Policeman?

Two weeks after Israel's alleged bombing raid in Syria, which some foreign reports said targeted North Korean nuclear material, the UN's nuclear watchdog elected Syria as deputy chairman of its General Conference on Monday.


## Tuesday, September 25, 2007

### Asymptotics

Page 96 of David Cox's 2006 *Principles of Statistical Inference* has a very nice one-sentence summary of asymptotic theory:

> [A]pproximations are derived on the basis that the amount of information is large, errors of estimation are small, nonlinear relations are locally linear and a central limit effect operates to induce approximate normality of log likelihood derivatives.


### Bayesian vs. Frequentist Statistical Theory

The Frequentist view of probability is that a coin with a 50% probability of heads will turn up heads 50% of the time.

The Bayesian view of probability is that a coin with a 50% probability of heads is one on which a knowledgeable risk-neutral observer would put a bet at even odds.

The Bayesian view is better.

When it comes to statistics, the essence of the Frequentist view is to ask whether the number of heads that shows up in one or more trials is probable given the null hypothesis that the true odds in any one toss are 50%.

When it comes to statistics, the essence of the Bayesian view is to estimate, given the number of heads that shows up in one or more trials and the observer's prior belief about the odds, the probability that the odds are 50% versus the odds being some alternative number.
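The contrast can be made concrete with hypothetical data (say, 8 heads in 10 tosses) and, for the Bayesian side, an assumed prior that splits belief 50/50 between odds of .5 and an alternative of .7:

```python
from math import comb

n, k = 10, 8   # hypothetical data: 8 heads in 10 tosses

def binom_pmf(j, n, p):
    """Probability of exactly j heads in n tosses when P(heads) = p."""
    return comb(n, j) * p**j * (1 - p)**(n - j)

# Frequentist: how probable is a result at least this extreme
# (two-sided) under the null hypothesis p = 0.5?
p_value = sum(binom_pmf(j, n, 0.5) for j in range(n + 1)
              if abs(j - n / 2) >= abs(k - n / 2))
print(round(p_value, 4))         # 0.1094

# Bayesian: prior splits 50/50 between p = 0.5 and p = 0.7;
# Bayes' rule gives the posterior probability that p = 0.5
prior = 0.5
like_half = binom_pmf(k, n, 0.5)
like_alt = binom_pmf(k, n, 0.7)
posterior_half = prior * like_half / (prior * like_half + (1 - prior) * like_alt)
print(round(posterior_half, 4))  # 0.1584
```

The frequentist number is a statement about the data given the null; the Bayesian number is a statement about the null given the data, and it moves with the choice of prior and of alternative.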

I like the Frequentist view better. It's neater not to have a prior involved.
