# What is the name of this statistical method?

Asked by LuckyGuy (33746) May 30th, 2010

If I take multiple readings and average them, I can get finer resolution with a certain confidence limit.
For example, if I want to know the diameter of a coin to 0.001 inches and only have a ruler labeled in inches, I can randomly drop the coin on the ruler 1000 times and count the number of times it lands on an inch mark. If the coin is exactly ½ inch I can expect to see it land on the line about 500 times, within a certain Gaussian distribution. If the coin landed on the line 739 times, for example, I can say that I am 95% confident that the coin is .739 inches in diameter.
What is this method called?
This is driving me crazy. Thanks!
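For anyone who wants to play with the setup, it is easy to simulate. This is just a hypothetical Python sketch (the function name and the unit-inch grid framing are mine): a coin of diameter d < 1 inch touches a grid line whenever its centre falls within d/2 of a line, so the fraction of hits estimates d directly.

```python
import random

def estimate_diameter(true_d, n_drops, seed=0):
    """Simulate dropping a coin of diameter true_d (< 1 inch) onto a
    1-inch grid. The coin touches a line whenever its centre lands
    within true_d/2 of a grid line, so P(touch) = true_d, and the
    observed hit fraction estimates the diameter."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_drops):
        x = rng.random()  # coin-centre position within one grid cell
        if x < true_d / 2 or x > 1 - true_d / 2:  # an edge crosses a line
            hits += 1
    return hits / n_drops

print(estimate_diameter(0.739, 100_000))  # close to 0.739
```

With 100,000 drops the estimate typically lands within about ±0.003 of the true diameter (two standard deviations of the hit fraction).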


Process mean?

dpworkin (27000)

You are way over my grasp with the ease you set up the problem and referred to the Gaussian distribution… But are you asking about probability, inferential statistics, or hypothesis testing?

If you have a measuring device graduated in inches and a coin of unknown diameter and think that you can measure the diameter to within 0.001” using this device, then that process is known as “guessing”. And if you do it according to a very rigid process then it is a SWAG (scientific wild-ass guess).

You can’t infer precision into a measurement if your measuring device doesn’t permit it.

CyanoticWasp (20043)

@CyanoticWasp Yes, you can infer precision if you add white noise to the system and take more samples than the resolution requires. For example, if I drop the coin 4000 times I will be 95% confident that the coin diameter is correct to within +/- 0.001. Something like that.
There were equations to work it out I just can’t find them.

LuckyGuy (33746)

What about the method of Least Squares? Would that apply? Or a better ruler?

http://en.wikipedia.org/wiki/Least_squares

gailcalled (54405)

What you are asking and what you are describing are two different things.

If you drop the coin on the ruler and see how closely to one of the marks it lands, you obtain absolutely no information about its diameter no matter how many times you do it because you are not in fact measuring the diameter.

However, if you repeatedly sample a population (even if that population is “coin tosses”), you do get a more accurate measurement. This is a property of the central limit theorem. Each time you take a sample it approximates the mean; the more samples you take, the better you approximate the mean.

nikipedia (27439)

stochastic process?

PandoraBoxx (17961)

@nikipedia You drop the coin and look how many times it crosses a line (I really should have called it a grid line). A coin that is .99 inches will most likely touch the line 990 times in 1000 drops. A coin that is .50 inches will most likely touch the line 500 times and a small coin .010” will touch it only about 10 times. If you want to get it to 0.001 resolution you have to drop it 4000 times to reach a certain confidence limit.
This is a real problem involving acoustic energy. I can take thousands of readings in a very short time and by summing and averaging I expect to filter out the random noise, leaving only the signal.
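The summing-and-averaging step can be sketched the same way (hypothetical Python; the constant signal and Gaussian noise model are my assumptions, standing in for the acoustic readings): averaging N noisy readings shrinks the random noise on the result by a factor of √N, leaving the signal.

```python
import random
import statistics

def averaged_reading(signal, noise_sd, n_readings, seed=2):
    """Average n_readings samples of a constant signal corrupted by
    zero-mean Gaussian noise. The noise on the average shrinks by a
    factor of sqrt(n_readings) relative to a single reading."""
    rng = random.Random(seed)
    return statistics.mean(signal + rng.gauss(0, noise_sd)
                           for _ in range(n_readings))

print(averaged_reading(1.0, 0.5, 10_000))  # close to 1.0
```

A single reading here has noise of 0.5; the average of 10,000 readings has noise of about 0.5/√10000 = 0.005.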

@PandoraBoxx Hmm, stochastic process. I will look it up.

@nikipedia It looks like the central limit theorem gets me close. I can assume a Gaussian result and use the standard deviations to determine the probability and confidence limits.

LuckyGuy (33746)

I’m going with @PandoraBoxx and stochastic process/stochastic analysis, sometimes also referred to as Monte Carlo methods. Here’s a link to an article on stochastic processes.

lillycoyote (24783)

I thought a stochastic process was the chance of a Prussian general getting killed by being kicked in the head by his horse while on dress parade. And I thought the process mean was for establishing the thickness of objects by repeated trials.

dpworkin (27000)

Monte Carlo simulation. I tried to find a nice website to link to, but I can’t remember it. This might be it: an estimation of pi by throwing darts at a circle.

For the question you are talking about, I’m reasonably certain that as long as you could get the coin to land on the ruler every time (lined paper might work better, if the lines were 1 inch apart), this would estimate the diameter of any coin <1 inch, mathematically speaking. Anything larger would hit a line every time.

You do want to think in terms of the Central Limit Theorem here. What you really have is a binomial distribution (that is a hit/miss system with multiple attempts), and as the number of trials increases, this distribution converges to a Normal distribution.
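That normal approximation can be sketched in a few lines of Python (a hypothetical sketch of the standard Wald interval for a binomial proportion; the function names are mine): on a 1-inch grid the hit probability equals the diameter, so a confidence interval on p is a confidence interval on the diameter, and you can also solve for how many drops a given half-width requires.

```python
import math

def binomial_ci(hits, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for the hit
    probability p = hits/n, which on a 1-inch grid equals the coin
    diameter in inches. z=1.96 gives a 95% interval."""
    p = hits / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def drops_needed(p, half_width, z=1.96):
    """Number of drops needed so the CI half-width shrinks to half_width."""
    return math.ceil(z**2 * p * (1 - p) / half_width**2)

lo, hi = binomial_ci(739, 1000)
print(f"diameter = 0.739 +/- {(hi - lo) / 2:.3f} inches")
print(drops_needed(0.739, 0.001))  # drops for a +/- 0.001 interval
```

One caveat the formula makes visible: since the half-width shrinks only as 1/√n, pushing the 95% interval down to ±0.001 takes on the order of hundreds of thousands of drops, not thousands.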

Somehow I can’t edit the last post. Anyway, the Central Limit Theorem lets you use all the information you already know about Gaussian (Normal) distributions.

About the stochastic process idea: yeah, a Monte Carlo simulation is a stochastic process, but that’s too broad a category, I think. The coin flipping is a simple discrete-time problem that could be put under the narrower category of time series instead of stochastic processes. For your actual problem (the acoustic energy one), a time series might be closer to what you are thinking of than the Monte Carlo simulations.

@worriedguy I don’t know if you have gotten the answer you wanted, needed or were looking for yet but if you could maybe clarify how or what determining the diameter of a coin has to do with solving a real problem involving acoustic energy we might be better able to help you.

lillycoyote (24783)

I’m all set. I will use the Monte Carlo method and probably take 10x the number of samples I need so I can extract 4x the resolution of one sample. For example, I will drop the coin on a 1-inch grid 20 times and then use that number to determine the size to the nearest ¼ inch. Figuratively, of course.

LuckyGuy (33746)

Don’t you need a way to measure the underlying probability distribution of how far along the ruler it falls?

roundsquare (5512)

@roundsquare Nope, because the trials of hitting/missing an inch mark are independent (one trial doesn’t depend on the outcome of other trials) and identically distributed (each trial has the same hit/miss chances). With many trials the distribution of the hit fraction will be Gaussian (Normal) regardless of what the distribution for one trial is.
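A quick numerical check of that claim (a hypothetical Python sketch; the parameters are arbitrary): the observed spread of the hit fraction across repeated runs matches the CLT prediction √(p(1−p)/n), even though each single trial is just a hit/miss coin drop.

```python
import random
import statistics

def sd_of_hit_fraction(p, n_trials, n_repeats, seed=1):
    """Empirical standard deviation of the hit fraction over n_trials
    independent hit/miss drops (each hitting with probability p),
    repeated n_repeats times."""
    rng = random.Random(seed)
    fractions = [sum(rng.random() < p for _ in range(n_trials)) / n_trials
                 for _ in range(n_repeats)]
    return statistics.stdev(fractions)

p, n = 0.5, 400
print(sd_of_hit_fraction(p, n, 2000))  # empirical spread
print((p * (1 - p) / n) ** 0.5)        # CLT prediction: 0.025
```

So no separate measurement of the underlying distribution is needed; p itself (estimated from the hit count) determines the standard deviation.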

But is the Monte Carlo method what you were trying to recall? Weren’t you looking for the name of a specific statistical approach?

But, at least you need a measure of the standard deviation? Or am I misunderstanding what it means to drop the coin?

roundsquare (5512)

or