### Visualization of Statistical Power Analysis

Thursday, July 28th, 2016

Visualization of Power Analysis http://amarder.github.io/power-analysis/ Useful sliders giving one a feel of the #statistics
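The page's sliders let you vary effect size, sample size, and significance level and watch power change. A minimal pure-Python sketch of the same calculation (a normal approximation for a two-sided, two-sample test; the Cohen's d parameterization is my assumption, chosen to match the usual slider knobs):

```python
from statistics import NormalDist

def power_two_sample(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample z-test.

    effect_size: Cohen's d (mean difference / pooled SD).
    n_per_group: sample size in each of the two groups.
    alpha: Type I error rate.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)                   # two-sided critical value
    ncp = abs(effect_size) * (n_per_group / 2) ** 0.5   # noncentrality of the test statistic
    # P(reject) = P(Z > z_crit - ncp) + P(Z < -z_crit - ncp)
    return 1 - z.cdf(z_crit - ncp) + z.cdf(-z_crit - ncp)

# The textbook landmark: d = 0.5 needs ~64 per group for ~80% power.
print(round(power_two_sample(0.5, 64), 3))
```

Moving any one slider while holding the others fixed reproduces the tradeoffs the visualization shows: power rises with n and with effect size, and falls as alpha is tightened.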

How does multiple-testing correction work?

http://www.nature.com/nbt/journal/v27/n12/abs/nbt1209-1135.html Intuition for teaching: the error rate for a single gene vs. the genome-wide (family-wise) error rate
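The single-gene-vs.-family intuition is easy to make concrete: at a per-test α of 0.05, one gene gives a 5% false-positive risk, but across a genome of ~20,000 genes at least one false positive is near-certain, which is why the threshold must be corrected. A small sketch (Bonferroni shown as one standard correction; the 20,000-gene figure is illustrative):

```python
def family_wise_error_rate(alpha: float, m: int) -> float:
    """P(at least one false positive) across m independent tests, each at level alpha."""
    return 1 - (1 - alpha) ** m

def bonferroni_threshold(alpha: float, m: int) -> float:
    """Per-test threshold guaranteeing family-wise error rate <= alpha."""
    return alpha / m

# One gene at alpha = 0.05: a 5% chance of a false positive.
print(family_wise_error_rate(0.05, 1))        # ~0.05
# 20,000 genes at the same uncorrected threshold: essentially certain to get hits by chance.
print(family_wise_error_rate(0.05, 20000))    # ~1.0
# Bonferroni-corrected per-gene cutoff for genome-wide alpha = 0.05:
print(bonferroni_threshold(0.05, 20000))      # 2.5e-06
```

The Bonferroni bound is conservative; the linked primer also covers less stringent alternatives such as false discovery rate control.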

Regulatory variation in complex traits by @LeonidKruglyak

http://www.nature.com/nrg/journal/v16/n4/full/nrg3891.html Nice teaching figure for #eQTLs, showing how most effects are cis, plus trans hotspots

Know it all: 10 secrets of successful learning http://www.newscientist.com/article/dn27187-know-it-all-10-secrets-of-successful-learning.html Including quizzes, practicing to teach, buddying up & even video games

Useful helpers for #teaching #bioinformatics: Biostars forum https://www.biostars.org & Rosalind assignment evaluator http://rosalind.info/about

Why Most Published Research Findings are False http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124 Evaluating 2×2 confusion matrix, effects of bias & multiple studies

PLoS Medicine, August 2005, Volume 2, Issue 8, e124

QT:{{"

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1–3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key

…

Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings. As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field. R

is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R⁄(R + 1). The probability of a study finding a true relationship reflects the power 1 − β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 − β)R⁄(R − βR + α). A research finding is thus

"}}

Was This #Student Dangerous?

http://opinionator.blogs.nytimes.com/2014/06/18/was-this-student-dangerous Raises issue: Should #professors be involved in mental health counseling? I think not.

Bioengineering & #systemsbiology. Classic def’n in terms of the “4 M’s”—Measurement, #Mining, Modeling & Manipulation

http://www.ncbi.nlm.nih.gov/pubmed/16474915

QT:{{"

Systems Biology can also be defined operationally, as by the MIT Computational & Systems Biology Initiative, in terms of the “4 M’s”—Measurement, Mining, Modeling, and Manipulation—illustrated schematically in Fig. 1 (see http://csbi.mit.edu/).

"}}

Re my listing of #bioinformatics programs, see the analysis of >2K CS profs at top U.S. universities: #crowdsourcing at its best!

http://jeffhuang.com/computer_science_professors.html

Inferring… #Networks Using Probabilistic Graphical Models. Nice intro to #Bayesian methods, useful for #teaching

http://www.sciencemag.org/content/303/5659/799.abs
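For teaching the Bayesian machinery behind such network models, the smallest possible example helps: a two-node network where one variable regulates another, and Bayes' rule updates belief in the hidden regulator from an observed target. A sketch with hypothetical conditional-probability values (the regulator/target framing and all numbers are illustrative, not from the paper):

```python
# Tiny Bayesian network: A -> B, e.g. a transcription factor A regulating
# target gene B. All probabilities below are made-up teaching values.
p_a = 0.3                              # prior P(A active)
p_b_given_a = {True: 0.9, False: 0.1}  # P(B expressed | A active / inactive)

def posterior_a_given_b(b_observed: bool) -> float:
    """P(A active | B = b) by Bayes' rule, enumerating both states of A."""
    like_a = p_b_given_a[True] if b_observed else 1 - p_b_given_a[True]
    like_not_a = p_b_given_a[False] if b_observed else 1 - p_b_given_a[False]
    numerator = like_a * p_a
    evidence = numerator + like_not_a * (1 - p_a)   # P(B = b)
    return numerator / evidence

# Observing the target expressed raises belief that the regulator is active.
print(round(posterior_a_given_b(True), 3))
```

Full network-inference methods extend exactly this computation: a directed graph encodes which conditional distributions exist, and inference marginalizes over the unobserved nodes.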