General Question


What is the technical name that describes this research approach?

Asked by pallen123 (1519 points) December 30th, 2013

Is there a term that best describes a research scenario in which you have tremendous variability among independent variables and, instead of choosing very specific independent variables to test, you more or less test random combinations of variables until you discover a significant correlation? For example, in testing how the design of a web page affects whether or not people make purchases, there is a wide range of possible variations in color, text, layout, etc.

You could thoughtfully recombine thousands of these elements to see what works best, or you could have software automate the random presentation while tracking click-throughs, purchases, etc.

I’m familiar with the terms “trial and error” and “shotgun” but is there a statistical model/theory/paradigm/whatever that says something like…

Whenever there is X degree of variability or imprecision among independent variables or the interaction between independent and dependent variables, then a “trial and error/shotgun/fishing expedition” approach is advisable.

Am I making sense?


15 Answers

gasman's avatar

I’ve heard the shotgun approach to finding statistically significant correlations among multiple variables referred to as the “silver strainer” technique.

LostInParadise's avatar

You have to be careful when doing this. If you test enough different combinations of variables, you are likely by sheer chance to find a false correlation.
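
To make that concrete, here’s a minimal sketch (Python, with made-up numbers) of how testing many unrelated variables against the same outcome will turn up “significant” correlations by chance alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_obs = 200        # observations (e.g. site visitors)
n_vars = 100       # unrelated "design" variables to test

outcome = rng.normal(size=n_obs)            # outcome with no real signal in it
false_positives = 0
for _ in range(n_vars):
    predictor = rng.normal(size=n_obs)      # pure noise, unrelated to the outcome
    r, p = stats.pearsonr(predictor, outcome)
    if p < 0.05:
        false_positives += 1

# At alpha = 0.05 you expect roughly 5 spurious "significant" correlations per 100 tests.
print(f"{false_positives} of {n_vars} noise variables look 'significant' at p < 0.05")
```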

glacial's avatar

It’s called “a bad idea”.

CWOTUS's avatar

It’s not called research, for one thing. It’s called dumb luck when it works, or searching for a needle in a haystack when it isn’t working – which will be most of the time.

pallen123's avatar

Ha. Well… this is actually the basis for a lot of pharmaceutical research that results in breakthrough discoveries. And in the mundane world of web marketing it’s called split testing and it’s used by most companies because it works quite well.
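
For what it’s worth, the arithmetic behind a basic split test is easy to sketch. This is just a two-proportion z-test on invented visitor and purchase counts, not any particular company’s setup:

```python
from math import sqrt
from scipy.stats import norm

# Invented numbers: visitors and purchases for two page variants.
visitors_a, purchases_a = 5000, 250   # variant A: 5.0% conversion
visitors_b, purchases_b = 5000, 300   # variant B: 6.0% conversion

p_a = purchases_a / visitors_a
p_b = purchases_b / visitors_b
p_pool = (purchases_a + purchases_b) / (visitors_a + visitors_b)

# Standard error of the difference under the null hypothesis of no difference.
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))         # two-sided p-value

print(f"conversion A={p_a:.3f}, B={p_b:.3f}, z={z:.2f}, p={p_value:.4f}")
```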

glacial's avatar

Sure, it might work quite well. Or you might be getting a crapload of Type I errors. How could you ever know? This is just bad statistics.

elbanditoroso's avatar

“crapshoot”

Lightlyseared's avatar

It’s called sub-group analysis. You mention that this is the basis for a lot of pharmaceutical research and you’re right, but it’s still a bad idea and the source of a large number of misleading and often dangerous “facts”. You often only see it when the primary hypothesis turns out to be negative and the researchers are basically trying to find something to publish to justify all the time and money spent on the trial. Unfortunately, when follow-up trials are run with the sub-group finding as their primary hypothesis, they fail to replicate the result. Let me try to explain why.

Lee et al. published a classic paper looking at this (Circulation 61, no. 3, March 1980: 508–15).
They recruited over 1000 patients with coronary artery disease and randomly allocated them to treatment A or B. Both treatments were non-existent, because this was a simulation of a trial, but the researchers followed up the patients to see what they could find. Overall there was no difference between the two groups, just as you would expect, but in a sub-group of 397 patients (characterised by “three vessel disease” and “abnormal left ventricular contraction”) they found that the survival of patients in treatment group A was dramatically different from that of patients in treatment group B (remember that both treatments involved doing absolutely nothing).
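
A toy version of that simulation is easy to reproduce: randomise patients to two do-nothing “treatments”, then slice the data into enough arbitrary sub-groups and some slice will eventually cross p < 0.05 by chance. (This is a Python sketch with invented data, not the Lee et al. dataset.)

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 1000

# Two "treatments" that both do absolutely nothing, as in the Lee et al. simulation.
treatment = rng.integers(0, 2, size=n)
survived = rng.random(n) < 0.8                    # 80% survival regardless of arm
subgroups = rng.integers(0, 2, size=(n, 20))      # 20 arbitrary baseline characteristics

p_values = []
for k in range(subgroups.shape[1]):
    mask = subgroups[:, k] == 1                   # analyse only this sub-group
    table = np.array([
        [np.sum(survived[mask] & (treatment[mask] == 0)),
         np.sum(~survived[mask] & (treatment[mask] == 0))],
        [np.sum(survived[mask] & (treatment[mask] == 1)),
         np.sum(~survived[mask] & (treatment[mask] == 1))],
    ])
    _, p, _, _ = chi2_contingency(table)
    p_values.append(p)

hits = sum(p < 0.05 for p in p_values)
print(f"{hits} of {len(p_values)} arbitrary sub-groups show a 'significant' survival")
print(f"difference between two identical treatments (smallest p = {min(p_values):.3f})")
```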

If you have enough data there will be enough noise for you to find something statistically significant, but that doesn’t mean the effect is real. For example, researchers looking at the efficacy of endarterectomy surgery pushed sub-group analysis as far as they could as a joke and divided patients into every group you could imagine. They found that the apparent benefit of the surgery depended on, among other things, which month the patient was born in: patients born in May and June did much better than patients born in March (Lancet 1998; 351: 1379–87).

Another example is from a trial looking at the benefits of aspirin for patients who’ve just had a heart attack. Aspirin was found to improve outcomes; however, sub-group analysis of the data showed that it wasn’t effective for patients whose star signs were Gemini or Libra.

This sort of sub-group analysis can and does harm patients. In 1978, sub-group analysis of a trial looking at the benefit of aspirin in preventing strokes found it to be effective in men but not in women. For over a decade women were under-treated and at an increased risk of stroke, until the research was repeated and found that aspirin was actually effective at preventing stroke in women.

glacial's avatar

@Lightlyseared I agree with much of what you’ve written, but the term “sub-group analysis” is too broad for my liking. Any advanced ANOVA could rightly be called a sub-group analysis. If used wisely, it can yield meaningful results. But the kind of experimental design described by the OP is irresponsible, and probably pointless for the reasons you’ve given.

Lightlyseared's avatar

@glacial The real problem is looking for the sub-groups after you’ve collected the data, as the OP described. If your original test design states that you are going to look at certain sub-groups before you collect the data, then you are on much firmer statistical ground. I used the term “sub-group analysis” because that is what this type of thing is usually called in the medical literature (the area I’m familiar with).

glacial's avatar

@Lightlyseared “If your original test design states that you are going to look at certain sub-groups before you collect the data, then you are on much firmer statistical ground.”

Yup, agreed – until we start getting into Bayesian analysis.

This is certainly one type of sub-group analysis, but my point is that there are many other types that could be considered quite valid. And yes, it’s funny how different disciplines seem to have their own statistical language.

pallen123's avatar

Thanks. I think this is helpful…

Maybe I can pose this another way…

Suppose I don’t really know exactly what makes a helicopter pilot a more precise navigator under stressful conditions (wind, artillery fire, mountains, experience, fitness, etc.) but it’s very important to me that a maximum number of our pilots are able to perform a particular maneuver under certain difficult conditions. I’d like to thoughtfully develop some training protocols for these pilots with whatever research rigor I can apply.

I don’t have the luxury of time or budget to create hundreds of separate cognitive, sensorimotor, and performance tests to try to examine each of the minute, confounding aspects of human physiology that together constitute a high-performing helicopter pilot. I may have some “hunches” about what basic skills are required (spatial reasoning, balance, etc.) that I might use to hypothesize some training techniques.

I could then assign a group of pilots to trial these training techniques and see whether or not they result in any improvement in their flight performance, correct? I mean that’s essentially the basis of any process for acquiring new skills.

I guess my original question relates to the fact that these training techniques seem to be just a drop in the proverbial bucket compared to the giant forest of potential independent variables that influence a pilot’s performance under stressful conditions. My original question was not about throwing nonsensical independent variables into the mix willy-nilly. Rather, for the type of situation I’m describing, how best do I describe the type of curriculum-development approach we are trying to pursue, one in which whatever analytical rigor can be applied, is applied?

pallen123's avatar

And thank you.

LuckyGuy's avatar

This sounds a bit like the Taguchi Method. I’ve used that to determine the best solution when there are too many variables to check individually.
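
Roughly, the idea is to screen many factors at once with a small orthogonal array of trials instead of running every combination. Here’s a minimal sketch using the standard L4 array with three hypothetical two-level web-page factors; the response numbers are invented:

```python
import numpy as np

# L4 orthogonal array: 4 runs cover 3 two-level factors (0 = low, 1 = high)
# instead of the full 2**3 = 8 combinations.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
factors = ["headline", "button colour", "layout"]

# Invented measured response for each run (e.g. conversion rate in %).
response = np.array([2.1, 2.9, 3.4, 2.6])

# Main effect of each factor: mean response at level 1 minus mean at level 0.
for j, name in enumerate(factors):
    effect = response[L4[:, j] == 1].mean() - response[L4[:, j] == 0].mean()
    print(f"{name:>13}: main effect = {effect:+.2f}")
```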

LuckyGuy's avatar

Almost forgot. There is also the Monte Carlo Method, where you define a rule and throw random numbers at it to calculate the result. For example, you can use it to figure out the area of a random shape: throw thousands of random points at it and count the ones inside the perimeter. The more points you throw, the more precise your answer.
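
The area example looks something like this (estimating the area of a unit circle, whose true value is π ≈ 3.1416):

```python
import random

def monte_carlo_circle_area(n_points: int) -> float:
    """Estimate the area of a unit circle by sampling random points in the
    bounding square [-1, 1] x [-1, 1] and counting those that land inside."""
    inside = 0
    for _ in range(n_points):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            inside += 1
    square_area = 4.0                 # the bounding square is 2 x 2
    return square_area * inside / n_points

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} points -> area ≈ {monte_carlo_circle_area(n):.4f}")
```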
