Posts Tagged ‘cbb752’

Commonly Taught Bioinformatics Topics, Derived from Syllabi of 19 Universities.

Thursday, September 24th, 2015

A Helpful Reference

https://apps.lis.illinois.edu/wiki/download/attachments/4369699/curriculum+analysis.doc?version=4

Introduction to Systems Modeling in Biology MCDB 261 S15

Saturday, September 19th, 2015

http://mcdb261s15.commons.yale.edu/

Rebooting MOOC Research

Friday, May 15th, 2015

Rebooting #MOOC Research https://www.sciencemag.org/content/347/6217/34.summary
Perspective from an #education institution: how do we measure student engagement?

Know it all: 10 secrets of successful learning – life – 25 March 2015 – New Scientist

Monday, April 13th, 2015

Know it all: 10 secrets of successful learning http://www.newscientist.com/article/dn27187-know-it-all-10-secrets-of-successful-learning.html Including quizzes, practicing to teach, buddying up & even video games

In Search of Bayesian Inference

Sunday, April 12th, 2015

In Search of #Bayesian Inference
http://cacm.acm.org/magazines/2015/1/181628-in-search-of-bayesian-inference/fulltext Nice intuition on priors in recovering air-crash wreckage & analyzing mammograms

QT:{{"

In its most basic form, Bayes’ Law is a simple method for updating beliefs in the light of new evidence. Suppose there is some statement A that you initially believe has a probability P(A) of being correct (what Bayesians call the “prior” probability). If a new piece of evidence, B, comes along, then the probability that A is true given that B has happened (what Bayesians call the “posterior” probability) is given by

P(A|B)=P(B|A) P(A) / P(B)

where P(B|A) is the likelihood that B would occur if A is true, and P(B) is the likelihood that B would occur under any circumstances.

Consider an example described in Silver’s book The Signal and the Noise: A woman in her forties has a positive mammogram, and wants to know the probability she has breast cancer. Bayes’ Law says that to answer this question, we need to know three things: the probability that a woman in her forties will have breast cancer (about 1.4%); the probability that if a woman has breast cancer, the mammogram will detect it (about 75%); and the probability that any random woman in her forties will have a positive mammogram (about 11%). Putting these figures together, Bayes’ Law—named after the Reverend Thomas Bayes, whose manuscript on the subject was published posthumously in 1763—says the probability the woman has cancer, given her positive mammogram result, is just under 10%; in other words, about 9 out of 10 such mammogram results are false positives.

In this simple setting, it is clear how to construct the prior, since there is plenty of data available on cancer rates. In such cases, the use of Bayes' Law is uncontroversial, and essentially a tautology—it simply says the woman's probability of having cancer, in light of her positive mammogram result, is given by the proportion of positive mammograms that are true positives. Things get murkier when statisticians use Bayes' rule to try to reason about one-time events, or other situations in which there is no clear consensus about what the prior probabilities are. For example, large passenger airplanes do not crash into the ocean very often, and when they do, the circumstances vary widely. In such cases, the very notion of prior probability is inherently subjective; it represents our best belief, based on previous experiences, about what is likely to be true in this particular case. If this initial belief is way off, we are likely to get bad inferences.

“}}
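The mammogram example quoted above can be checked directly with Bayes' Law. A minimal sketch in Python, using the three probabilities given in the excerpt (the function and variable names are mine, not from the article):

```python
def posterior(p_a: float, p_b_given_a: float, p_b: float) -> float:
    """Bayes' Law: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_cancer = 0.014           # prior: a woman in her forties has breast cancer
p_pos_given_cancer = 0.75  # likelihood: mammogram detects an existing cancer
p_pos = 0.11               # probability any such woman has a positive mammogram

p_cancer_given_pos = posterior(p_cancer, p_pos_given_cancer, p_pos)
print(f"P(cancer | positive mammogram) = {p_cancer_given_pos:.3f}")
```

This yields roughly 0.095, matching the article's "just under 10%"; equivalently, about 9 out of 10 positive results in this group are false positives.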

ROSALIND | About

Sunday, February 8th, 2015

Useful helpers for #teaching #bioinformatics: Biostars forum https://www.biostars.org & Rosalind assignment evaluator http://rosalind.info/about

Why Most Published Research Findings are False

Saturday, February 7th, 2015

Why Most Published Research Findings are False http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124 Evaluating 2×2 confusion matrix, effects of bias & multiple studies

PLoS Medicine | www.plosmedicine.org | August 2005 | Volume 2 | Issue 8 | e124

QT:{{"
Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1–3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key

Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. "Negative" research is also very useful. "Negative" is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings. As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of "true relationships" to "no relationships" among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R⁄(R + 1). The probability of a study finding a true relationship reflects the power 1 − β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 − β)R⁄(R − βR + α). A research finding is thus
"}}

Leaders in New York and New Jersey Defend Shutdown for a Blizzard That Wasn’t

Friday, January 30th, 2015

Leaders… Defend Shutdown for a #Blizzard that Wasn’t http://www.nytimes.com/2015/01/28/nyregion/new-york-blizzard.html Might’ve overreacted in canceling class but others did as well

Registration open for spring MOOCs: Free online learning with Yale experts

Friday, January 9th, 2015

Spring #MOOCs [open]…Online Learning w/ @Yale Experts
http://news.yale.edu/2014/12/22/registration-open-spring-moocs-free-online-learning-yale-experts 6 total: 4 soc science, 2 humanities, 0 natural science

Neural Networks Demystified Part 1: Data and Architecture – YouTube

Tuesday, December 30th, 2014

https://www.youtube.com/watch?v=bxe2T-V8XRs