Posts Tagged ‘machinelearning’

What is the Difference Between a Parameter and a Hyperparameter?

Thursday, February 28th, 2019

https://machinelearningmastery.com/difference-between-a-parameter-and-a-hyperparameter/
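For context, a minimal sketch of the distinction (my own illustration in scikit-learn, not taken from the linked post): a hyperparameter is chosen before training, while parameters are learned from the data.

# Hypothetical toy example; the dataset and settings are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)

clf = LogisticRegression(C=0.5, max_iter=1000)  # C is a hyperparameter, set by us before fitting
clf.fit(X, y)
print("learned parameters (first 5 coefficients):", clf.coef_[0][:5])
print("learned parameter (intercept):", clf.intercept_)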

Why “Many-Model Thinkers” Make Better Decisions

Saturday, November 24th, 2018

Why “Many-Model Thinkers” Make Better Decisions
https://HBR.org/2018/11/why-many-model-thinkers-make-better-decisions Intuitive description of #MachineLearning concepts. Focuses on practical business contexts (eg hiring) & explains how #ensemble models & boosting can make better choices

QT:{{”
“The agent-based model is not necessarily better. Its value comes from focusing attention where the standard model does not.

The second guideline borrows the concept of boosting, …Rather than look for trees that predict with high accuracy in isolation, boosting looks for trees that perform well when the forest of current trees does not.

A boosting approach would take data from all past decisions and see where the first model failed. …The idea of boosting is to go searching for models that do best specifically when your other models fail.

To give a second example, several firms I have visited have hired computer scientists to apply techniques from artificial intelligence to identify past hiring mistakes. This is boosting in its purest form. Rather than try to use AI to simply beat their current hiring model, they use AI to build a second model that complements their current hiring model. They look for where their current model fails and build new models to complement it.”
“}}
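To make the boosting idea in the quote concrete, here is a minimal sketch (my own toy illustration on synthetic scikit-learn data, not the article's method, and much simplified relative to formal boosting algorithms like AdaBoost): train a second model that is deliberately weighted toward the cases the first model got wrong, then combine the two.

# Simplified boosting-style sketch on assumed synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# First model: a shallow tree trained with uniform weights.
model1 = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
wrong = model1.predict(X) != y

# Second model: upweight exactly the examples the first model misclassified,
# so it does best specifically where the first model fails.
weights = np.where(wrong, 5.0, 1.0)
model2 = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y, sample_weight=weights)

# Simple ensemble: average the two models' predicted class probabilities.
ensemble_prob = (model1.predict_proba(X) + model2.predict_proba(X)) / 2
ensemble_pred = ensemble_prob.argmax(axis=1)
print("model1 accuracy:  ", (model1.predict(X) == y).mean())
print("ensemble accuracy:", (ensemble_pred == y).mean())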

New algorithm can create movies from just a few snippets of text | Science | AAAS

Monday, March 19th, 2018

Interesting paper by alumnus Renqiang Min on “Video Generation from Text,” using a generative #MachineLearning model.
http://www.AAAI.org/GuideBook2018/16152-72279-GB.pdf (Press report by @SilverJacket: New algorithm can create movies from just a few snippets of text
http://www.ScienceMag.org/news/2018/02/new-algorithm-can-create-movies-just-few-snippets-text )

Video Generation from Text. Yitong Li, Martin Renqiang Min, Dinghan Shen, David Carlson, Lawrence Carin.

Points of significance: Machine learning: supervised methods

Sunday, March 4th, 2018

Points of significance – #MachineLearning: supervised methods https://www.Nature.com/articles/nmeth.4551 Nice discussion of the k in k-NN & the slack parameter C penalizing misclassified points in SVM, both of which act somewhat analogously as regularizers. Good for #teaching
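A quick sketch of the regularizing effect described above (my own toy example on synthetic scikit-learn data, not the paper's code): small k in k-NN and large C in SVM give flexible, lightly regularized fits; large k and small C give smoother, more heavily regularized ones.

# Assumed synthetic two-class data; the specific k and C values are arbitrary illustrations.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for k in (1, 15, 101):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(f"k-NN k={k:3d}   train={knn.score(X_tr, y_tr):.2f}  test={knn.score(X_te, y_te):.2f}")

for C in (0.01, 1.0, 100.0):
    svm = SVC(kernel="rbf", C=C).fit(X_tr, y_tr)
    print(f"SVM  C={C:6.2f} train={svm.score(X_tr, y_tr):.2f}  test={svm.score(X_te, y_te):.2f}")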

SVM movie

Saturday, March 5th, 2016

quick video for visualizing the power of kernels in SVM
https://www.youtube.com/watch?v=3liCbRZPrZA
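A rough sketch of the same point in code (my own example, not taken from the video): concentric-circles data that a linear SVM cannot separate becomes easy once an RBF kernel implicitly lifts the points into a higher-dimensional space.

# Assumed synthetic data; accuracies noted in the comments are approximate expectations.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.1, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)
print("linear kernel accuracy:", linear.score(X, y))  # roughly chance level
print("RBF kernel accuracy:   ", rbf.score(X, y))     # close to 1.0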

Amazon Web Services Announces Amazon Machine Learning

Sunday, June 21st, 2015

AWS Announces $AMZN #MachineLearning
http://www.marketwatch.com/story/amazon-web-services-announces-amazon-machine-learning-2015-04-09 Will it be useful for genomics? http://aws.amazon.com/machine-learning

Also, Redshift

Machine learning applications in genetics and genomics : Nature Reviews Genetics : Nature Publishing Group

Saturday, May 30th, 2015

#Machinelearning applications in…genomics
http://www.nature.com/nrg/journal/v16/n6/full/nrg3920.html Nice overview of key distinctions betw generative & discriminative models

In their review, “Machine learning applications in genetics and genomics”, Libbrecht and Noble survey important aspects of applying machine learning to genomic data. The review presents illustrative classical genomics problems where machine learning techniques have proven useful, and it describes the differences between supervised, semi-supervised and unsupervised learning, as well as between generative and discriminative models. The authors discuss the considerations that go into selecting the right machine learning approach for the biological problem and data at hand, provide general practical guidelines, and suggest possible solutions to common challenges.
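As a quick illustration of the generative vs. discriminative distinction (my own toy example, not from the review): a generative classifier such as Gaussian naive Bayes models the class-conditional densities p(x | y) together with the priors p(y), while a discriminative classifier such as logistic regression models p(y | x) directly.

# Assumed synthetic data; both models are stand-ins for the broader model families.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

generative = GaussianNB()                            # fits p(x | y) and p(y)
discriminative = LogisticRegression(max_iter=1000)   # fits p(y | x) directly

print("naive Bayes CV accuracy:        ", cross_val_score(generative, X, y, cv=5).mean())
print("logistic regression CV accuracy:", cross_val_score(discriminative, X, y, cv=5).mean())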

Neural Networks Demystified Part 1: Data and Architecture – YouTube

Tuesday, December 30th, 2014

https://www.youtube.com/watch?v=bxe2T-V8XRs

My public notes from KDD 2014

Sunday, August 31st, 2014

https://storify.com/markgerstein/tweets-related-to-kdd-2014-i0kdd-kdd2014

https://www.flickr.com/photos/mbgmbg/tags/seriesspacyworldofbloombergbldg

http://linkstream2.gerstein.info/tag/i0kdd/

http://archive.gersteinlab.org/meetings/s/2014/08.28/kdd2014-i0kdd-meeting-materials/ (need password)

http://www.kdd.org/kdd2014/

Ensemble Methods in Machine Learning. Proceedings of the First International Workshop on Multiple Classifier Systems

Sunday, July 13th, 2014

Caruana R, Niculescu-Mizil A, Crew G, Ksikes A (2004) Ensemble selection from libraries of models. Proceedings of the Twenty-First International Conference on Machine Learning. Banff, Alberta, Canada: ACM.
http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf
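For illustration, a much-simplified sketch of the greedy ensemble-selection idea in this paper (my own toy version with an assumed four-model library, not the authors' code): repeatedly add whichever library model most improves validation accuracy, allowing models to be picked more than once.

# Assumed synthetic data and a small hand-picked model library.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

library = [
    LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
    DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr),
    GaussianNB().fit(X_tr, y_tr),
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr),
]
probs = [m.predict_proba(X_val) for m in library]

# Greedy forward selection (with replacement) against the validation set.
ensemble_sum, selected = np.zeros_like(probs[0]), []
for _ in range(10):
    scores = [((ensemble_sum + p).argmax(axis=1) == y_val).mean() for p in probs]
    best = int(np.argmax(scores))
    selected.append(best)
    ensemble_sum += probs[best]

print("selected model indices:", selected)
print("ensemble validation accuracy:", (ensemble_sum.argmax(axis=1) == y_val).mean())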

Dietterich TG (2000) Ensemble Methods in Machine Learning. Proceedings of the First International Workshop on Multiple Classifier Systems: Springer-Verlag.
http://www.eecs.wsu.edu/~holder/courses/CptS570/fall07/papers/Dietterich00.pdf http://dl.acm.org/citation.cfm?id=743935

.@deniseOme Good ref is TG Dietterich #Ensemble Methods in
#MachineLearning MCS ’00
http://www.eecs.wsu.edu/~holder/courses/CptS570/fall07/papers/Dietterich00.pdf Not rel. to @ensembl #ismb #afp14

ref 17 & 18