# Statistics Discussion: Response to Questions

Please find below my responses to the questions from the previous discussion.

Discussion

The p-value and the confidence interval are the standard outputs of statistical tests, and consequently we find them in virtually every article describing original research. At the end of a study, data analysis is performed and its results are the p-value and the confidence interval, which express the well-known "statistical significance."

Why do we need statistics? Because we want to draw conclusions that are as valid as possible from a limited amount of data, and important differences are often masked by biological variability and/or experimental inaccuracy. On the other hand, the human mind excels at finding patterns and relationships and tends to overgeneralize. The population is assumed to be infinite, yet we always do our research on a finite sample, whether it is a few dozen subjects or tens of thousands, as in some large cardiology studies. We use statistics, in particular the p-value and the confidence interval, to judge whether the results from our sample hold for the entire population and can be extrapolated to it, or whether they are merely the product of chance in our sample.

Suppose we want to see whether smoking is a risk factor for myocardial infarction.

For this, we select a sample of n patients (the number is calculated according to: 1) the clinical significance of smoking, i.e., the relative risk and/or attributable risk that I consider worth detecting, and 2) the statistical significance I want to achieve). We follow the subjects and count how many smokers and how many non-smokers suffer an infarction, and we calculate a relative risk (RR) of 2; a statistical test (in this case a chi-squared test) yields p = 0.012. The lower the p-value, the smaller the probability that our result is due to chance alone. The confidence interval (usually 95%) gives us more information: in our example, it tells us that we can be 95% confident that, at the population level, the true relative risk lies between 1.3 and 4; in other words, smoking raises the risk of heart attack by a factor of 1.3 to 4 compared with not smoking.
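The calculation described above can be sketched in Python using only the standard library. The 2×2 counts below are hypothetical, chosen so that RR = 2 as in the example; the chi-squared statistic and its p-value (1 degree of freedom, no continuity correction) and the log-method 95% CI for the RR are standard formulas:

```python
import math

# Hypothetical 2x2 table (illustrative counts, not from a real study):
# rows = smokers / non-smokers, columns = infarction / no infarction
a, b = 30, 170   # smokers: events, non-events
c, d = 15, 185   # non-smokers: events, non-events
n = a + b + c + d

# Chi-squared statistic for a 2x2 table (no continuity correction)
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# For 1 degree of freedom, the p-value is the chi-squared survival function:
p = math.erfc(math.sqrt(chi2 / 2))

# Relative risk and its 95% confidence interval (log method)
rr = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p:.4f}")
```

With these made-up counts the RR is exactly 2 and p falls below 0.05; the CI quantifies how far the population-level RR could plausibly lie from the sample estimate.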

Reducing alpha to 0.01 lowers the chance of a false positive, but it also makes it harder to detect real differences with a t-test. Lower alpha levels are also used when carrying out multiple tests at the same time: with 5 tests at an overall 0.05 alpha level, you can divide alpha by 5 and test each at the 0.01 level (a Bonferroni correction), which keeps the overall type I error risk under control. One situation in which researchers might choose a 0.01 alpha level is when replicating a completed study, either to look for new results or to confirm its validity.
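The alpha-division step above can be sketched as follows; the five p-values are hypothetical, just to show which tests survive the corrected threshold:

```python
# Bonferroni correction: divide the overall alpha by the number of tests
# so the family-wise type I error rate stays at the chosen level.
alpha = 0.05
n_tests = 5
alpha_per_test = alpha / n_tests  # 0.05 / 5 = 0.01

# Hypothetical p-values from five t-tests (illustrative only)
p_values = [0.003, 0.020, 0.041, 0.0009, 0.300]

# Each test is declared significant only if it beats the corrected threshold
significant = [p < alpha_per_test for p in p_values]
print(alpha_per_test, significant)
```

Note that 0.020 and 0.041 would have passed at the uncorrected 0.05 level but not at the corrected 0.01 level, which is exactly how the correction reduces false positives across the family of tests.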

References

Banerjee, A., Chitnis, U. B., Jadhav, S. L., Bhawalkar, J. S., & Chaudhury, S. (2009). Hypothesis testing, type I and type II errors. Industrial Psychiatry Journal, 18(2), 127–131. http://doi.org/10.4103/0972-6748.62274

Have you ever noticed, in a medical study or in your nursing experience, specific details that indicate which alpha value is being used?

In what situations could accepting a higher alpha level prove to be erroneous? What is one simple way of finding this out?