Posts Tagged ‘deeplearning’

What to expect in 2018: science in the new year

Sunday, January 14th, 2018

What to expect in ’18: science in the new year Insights from cancer & ancient #genomes. Cures from #CRISPR. Progress in
#OpenAccess. Also, lots on outer space. But nothing on #cryoEM, #DeepLearning, #QuantumComputing or the brain connectome. HT @OBahcall

Google releases variant calling with deep learning

Sunday, December 17th, 2017

$GOOG Is Giving Away AI That Can Build Your Genome Seq. + GATK creators now doing a TensorFlow version. Release sounded a bit like IBM unveiling Deep Blue decades ago: “Today, we announce…DeepVariant, a #DeepLearning tech…”

Steven Salzberg’s response to DeepVariant:

QT:{{“On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to identify all the mutations that an individual inherits from their parents. Modeled loosely on the networks of neurons in the human brain, these massive mathematical models have learned how to do things like identify faces posted to your Facebook news feed, transcribe your inane requests to Siri, and even fight internet trolls. And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.”}}

Google Is Giving Away AI That Can Build Your Genome Sequence
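The quoted article describes DeepVariant's core idea: recast variant calling as image classification over read pileups. A toy sketch of just the pileup-encoding step, assuming a hypothetical `pileup_to_tensor` helper (this is an illustration, not Google's actual code):

```python
import numpy as np

# Hypothetical illustration: DeepVariant-style callers turn each
# candidate site into a "pileup image" -- one row per read, one column
# per reference position -- which a CNN then classifies. Here we encode
# only base identity as a single integer channel.

BASE_CODE = {"A": 1, "C": 2, "G": 3, "T": 4, "-": 0}  # '-' = no coverage

def pileup_to_tensor(reads, width):
    """Encode aligned read strings into a (n_reads, width) integer matrix."""
    mat = np.zeros((len(reads), width), dtype=np.int8)
    for i, read in enumerate(reads):
        for j, base in enumerate(read[:width]):
            mat[i, j] = BASE_CODE[base]
    return mat

# Three reads over a 6 bp window; column 2 shows a possible heterozygous
# C>T variant (one read supports C, two support T).
reads = ["ACCTGA",
         "ACTTGA",
         "ACTTG-"]
tensor = pileup_to_tensor(reads, width=6)
print(tensor.shape)           # (3, 6)
print(tensor[:, 2].tolist())  # [2, 4, 4] -- base codes at the candidate column
```

The real tool adds further channels (base quality, strand, mapping quality) before handing the tensor to a convolutional network.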

The Serial-Killer Detector | The New Yorker

Saturday, December 9th, 2017

The Serial-Killer Detector Journalist finds subtle yet predictive crime patterns with a computer. Wonder if #DeepLearning would be helpful here? Probably not // #CrimeMap

“A former journalist, equipped with an algorithm and the largest collection of murder records in the country, finds patterns in crime.”

New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine

Monday, November 13th, 2017

New Theory Cracks Open the Black Box of #DeepLearning Highlights the importance of a compression phase for generalization

“Then learning switches to the compression phase. The network starts to shed information about the input data, keeping track of only the strongest features — those correlations that are most relevant to the output label. This happens because, in each iteration of stochastic gradient descent, more or less accidental correlations in the training data tell the network to do different things, dialing the strengths of its neural connections up and down in a random walk. This randomization is effectively the same as compressing the system’s representation of the input data. As an example”
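The "random walk" the quote describes comes from minibatch-to-minibatch disagreement: the gradient of the same weight, computed on different minibatches, points in slightly different directions. A hedged toy illustration (not from the paper) on a one-parameter regression:

```python
import numpy as np

# Toy illustration of SGD gradient noise: for a fixed weight w, gradients
# computed on different minibatches agree in sign but differ in size.
# Over many updates this disagreement acts like the random-walk noise
# the article describes.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=1000)  # true slope = 3
w = 0.0  # current weight, deliberately far from the optimum

def minibatch_grad(idx):
    """Gradient of mean squared error w.r.t. w on one minibatch."""
    xb, yb = X[idx, 0], y[idx]
    return np.mean(2 * (w * xb - yb) * xb)

grads = [minibatch_grad(rng.choice(1000, size=32)) for _ in range(5)]
print([round(g, 2) for g in grads])  # all negative, but not identical
```

Every minibatch pushes `w` upward toward 3 (negative gradient), yet no two pushes are the same size; that residual scatter is the stochastic component of SGD.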

Interactive CNN

Monday, June 19th, 2017

An intuitive example of how a CNN works.
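The intuition can be sketched in a few lines: a CNN slides a small filter over the input and records where it matches. Here a hand-set vertical-edge filter (in a real CNN the filter values are learned) is convolved over a tiny binary image:

```python
import numpy as np

# Minimal sketch of the core CNN operation: valid-mode 2D
# cross-correlation of a small kernel over an image.

def conv2d(image, kernel):
    """Slide kernel over image; each output cell is the windowed dot product."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)  # responds to dark-to-bright edges

fmap = conv2d(image, kernel)
print(fmap)  # strongest response in the column where the 0s meet the 1s
```

The feature map is zero everywhere except over the edge, which is exactly what a learned edge-detecting filter in a first CNN layer does.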

Journal Club Paper

Sunday, June 18th, 2017

Zhou, J. and Troyanskaya, O.G. (2015). Predicting effects of noncoding variants with deep learning–based sequence model. Nature Methods, 12, 931–934.

Predicting (& prioritizing) effects of noncoding variants w. [DeepSEA] #DeepLearning…model Trained w #ENCODE data
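DeepSEA-style models take a one-hot-encoded DNA window around a variant and predict chromatin features; a variant is scored by comparing predictions for the reference and alternate sequence. A hedged sketch of the encoding and ref-vs-alt input construction (the trained network itself is omitted):

```python
import numpy as np

# Toy illustration of DeepSEA-style input preparation: one-hot encode a
# DNA window, then build a second input with the alternate allele
# substituted. The model's predicted-feature difference between the two
# is the variant-effect score.

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (len, 4) one-hot matrix."""
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for i, b in enumerate(seq):
        mat[i, BASES.index(b)] = 1.0
    return mat

ref = "ACGTACGT"
alt = ref[:4] + "G" + ref[5:]   # substitute position 4: A -> G

x_ref, x_alt = one_hot(ref), one_hot(alt)
diff_rows = np.where((x_ref != x_alt).any(axis=1))[0]
print(diff_rows.tolist())  # [4] -- the two inputs differ only at the variant
```

The actual model uses a much wider window (1 kb in the paper) and predicts hundreds of #ENCODE chromatin features per input.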

The Great A.I. Awakening – The New York Times

Monday, December 26th, 2016

The Great AI Awakening Quick history of #DeepLearning & its dramatic success in translation. Is med. diagnosis next?

Now flattered to have had 2 Hinton alumni in my lab…!

Journal Club

Saturday, July 23rd, 2016

Basset: #DeepLearning the regulatory code w/…NNs by @noncodarnia lab Has score for all possible SNVs in the genome

“Basset: learning the regulatory code of the accessible genome with deep convolutional neural networks”
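Scoring "all possible SNVs" works by in-silico saturation mutagenesis: mutate each position to each alternate base, rerun the model, and record the change in predicted accessibility. A hedged sketch with a stand-in toy scoring function (GC fraction) in place of Basset's trained network:

```python
# Toy illustration of Basset-style in-silico saturation mutagenesis.
# toy_model is a hypothetical stand-in for the trained accessibility
# predictor; only the mutate-and-rescore loop reflects the technique.

BASES = "ACGT"

def toy_model(seq):
    """Stand-in predictor: fraction of G/C bases in the sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def saturation_mutagenesis(seq):
    """Alt-minus-ref prediction for every (position, alternate base)."""
    ref_score = toy_model(seq)
    scores = {}
    for i, ref_base in enumerate(seq):
        for alt in BASES:
            if alt == ref_base:
                continue
            mutant = seq[:i] + alt + seq[i+1:]
            scores[(i, alt)] = toy_model(mutant) - ref_score
    return scores

scores = saturation_mutagenesis("ATGC")
print(len(scores))           # 4 positions x 3 alternates = 12 scores
print(scores[(0, "G")])      # A->G raises GC content, so the score is positive
```

Run genome-wide, this loop is what yields a score for every possible SNV; the expensive part in practice is the millions of model evaluations, not the bookkeeping.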

A deep learning framework for modeling structural features of RNA-binding protein targets

Tuesday, July 12th, 2016

#Deeplearning framework for modeling…RBP sites Uses topic models & determines key primary & tert. struct. features

Apple’s Deep Learning Curve

Friday, November 6th, 2015

$AAPL’s #DeepLearning Curve A very secretive culture controlling Siri & your phone