Posts Tagged ‘excerptedquote’

Human brain samples yield a genomic trove | Science

Saturday, December 15th, 2018

The papers are out!
Using the tag pecrollout for this.
http://science.sciencemag.org/content/362/6420/1227

QT: {{”
The project’s namesake, ENCODE (Encyclopedia of DNA Elements), was a broader quest to map noncoding regions of the human genome. Its initial results, unveiled in 2012, stirred controversy. Scientists disputed the team’s claim that most of the genome was functional and questioned whether the project’s insights would be worth NIH’s $185 million investment (Science, 21 March 2014, p. 1306).
“}}


Why “Many-Model Thinkers” Make Better Decisions

Saturday, November 24th, 2018

Why “Many-Model Thinkers” Make Better Decisions
https://HBR.org/2018/11/why-many-model-thinkers-make-better-decisions Intuitive description of #MachineLearning concepts. Focuses on practical business contexts (eg hiring) & explains how #ensemble models & boosting can make better choices

QT:{{”
“The agent-based model is not necessarily better. Its value comes from focusing attention where the standard model does not.

The second guideline borrows the concept of boosting, …Rather than look for trees that predict with high accuracy in isolation, boosting looks for trees that perform well when the forest of current trees does not.

A boosting approach would take data from all past decisions and see where the first model failed. …The idea of boosting is to go searching for models that do best specifically when your other models fail.

To give a second example, several firms I have visited have hired computer scientists to apply techniques from artificial intelligence to identify past hiring mistakes. This is boosting in its purest form. Rather than try to use AI to simply beat their current hiring model, they use AI to build a second model that complements their current hiring model. They look for where their current model fails and build new models to complement it.”
“}}
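The boosting idea the excerpt describes, training a second model specifically on the cases where the first one fails, can be sketched in a few lines of toy Python. All data and decision rules below are invented for illustration, not taken from the article:

```python
# Toy sketch of the boosting idea from the excerpt: rather than train a
# second model to beat the first, train it on the cases the first model
# gets wrong, then combine the two.

# Hypothetical hiring records: (years_experience, has_referral) -> hired
data = [
    ((1, 0), 0), ((2, 0), 0), ((5, 0), 1), ((6, 0), 1),
    ((1, 1), 1), ((2, 1), 1), ((5, 1), 1), ((0, 0), 0),
]

def model_1(x):
    # First model: a simple experience threshold.
    return 1 if x[0] >= 5 else 0

# Boosting step: collect only the examples model_1 misclassifies ...
failures = [(x, y) for x, y in data if model_1(x) != y]

def model_2(x):
    # ... and build a complementary rule that handles them
    # (here: referrals matter even with little experience).
    return 1 if x[1] == 1 else 0

def ensemble(x):
    # Defer to model_2 exactly where model_1 was observed to fail.
    return model_2(x) if x in [f for f, _ in failures] else model_1(x)

accuracy = sum(ensemble(x) == y for x, y in data) / len(data)
print(accuracy)
```

Real boosting libraries (AdaBoost, gradient boosting) do this iteratively with reweighted training data rather than a hard lookup, but the division of labor is the same: each new model is judged by how well it does where the current ensemble does not.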

Ticked Off

Saturday, July 28th, 2018

QT:{{”

With the right precautions, there’s a lot you can do to prevent Lyme disease. Here’s what you need to know.

Know the Four Places Ticks Like to Hide

Dr. DeShaw stresses that a thorough daily check is key. Ticks, he said, particularly love to hide in these places: (1) behind the knees; (2) in the groin; (3) in the scalp; and (4) in the armpits. A tick should always be removed with tweezers. Grasp it as close to the skin as possible and pull out firmly and smoothly.

Use Repellents

DEET (on skin) and permethrin (on clothing) are the recommended repellents. Essential oils may have some repellent effect, but don’t rely on them, Dr. DeShaw said.

Assess Your Surroundings

If you live in an area where Lyme disease is prevalent, like Connecticut, you may want to treat your lawn or have it treated by a professional. Benjamin Asher, an ear, nose and throat specialist in Manhattan, emphasizes that spraying should be limited to natural options. “We should be respectful to the environment,” he said.


Katy Noble, a pediatrician in Stamford, Conn., recommends making “a daily shower or bath at night your nightly ritual, and use that time as a means to check your family for ticks. You may wash off any before they’ve really had a chance to really dig in.”

According to Dr. DeShaw, July and August are the peak months for Lyme disease. Don’t slouch on fall, though. He recommends staying vigilant through September and October.”
“}}

Ticked Off
https://www.nytimes.com/2018/07/18/style/ticks-lyme-disease-summer.html

Spam: A Shadow History of the Internet Excerpt, Part 2

Saturday, July 28th, 2018

Spam: A Shadow History of the Internet
https://www.Amazon.com/Spam-Shadow-History-Internet-Infrastructures/dp/026252757X Fascinating discussion of #LitSpam: how the #spam arms race led to the development of Bayesian filters & then, in response, a bizarre mash-up of free literary texts meant to evade them

QT:{{”

“Let us return to Turing, briefly, and introduce the fascinating Imitation Game, before we leave litspam and the world of robot-read/writable text. The idea of a quantifiable, machine-mediated method of describing qualities of human affect recurs in the literature of a variety of fields, including criminology, psychology, artificial intelligence, and computer science. Its applications often provide insight into the criteria by which different human states are determined—as described, for example, in Ken Alder’s fascinating work on polygraphs, or in the still understudied history of the “fruit machine,” ….is the so-called Turing Test. The goal of Turing’s 1950 thought experiment (which bears repeating, as it’s widely misunderstood today) was to “replace the question [of ‘Can machines think?’] by another, which is closely related to it and is expressed in relatively unambiguous words.” Turing considered the question of machines “thinking” or not to be “too meaningless to deserve discussion,” and, quite brilliantly, turned the question around to whether people think—or rather how we can be convinced that other people think. This project took the form of a parlor game: A and B, a man and a woman, communicate with an “interrogator,” C, by some intermediary such as a messenger or a teleprinter. C knows the two only as “X” and “Y”; after communicating with them, C is to render a verdict as to which is male and which female. A is tasked with convincing C that he, A, is female and B is male; B’s task is the same. “We now ask the question,” Turing continues, “‘What will happen when a machine takes the part of A in this game?’ …

What litspam has produced, remarkably, is a kind of parodic imitation game in which one set of algorithms is constantly trying to convince the other of their acceptable degree of salience—of being of interest and value to the humans. As Charles Stross puts it, “We have one faction that is attempting to write software that can generate messages that can pass a Turing test, and another faction that is attempting to write software that can administer an ad hoc Turing test.” …

Surrealist automatic writing has its particular associative rhythm, and the Burroughsian Cut-Up depends strongly on the taste for jarring juxtapositions favored by its authors (an article from Life, a sequence from The Waste Land, one of Burroughs’s “routines” in which mandrills from Venus kill Eisenhower). Litspam text, along with early comment spam and the strange spam blogs described in the next section, is the expression of an entirely different intentionality without the connotative structure produced by a human writer. The results returned by a probabilistically manipulated search engine, or the poisoned Bayesian spew of bot-generated spam, …
“}}

https://www.amazon.com/Spam-Shadow-History-Internet-Infrastructures/dp/026252757X

Spam: A Shadow History of the Internet [excerpt, Part 2]
https://www.scientificamerican.com/article/spam-shadow-history-of-internet-excerpt-part-two/
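The Bayesian filters mentioned in the blurb above, and the litspam trick of padding messages with free literary text to evade them, can both be seen in a minimal naive Bayes sketch. All training text here is invented for illustration:

```python
# Minimal naive Bayes spam-filter sketch, the kind of filter litspam was
# designed to evade: words seen in spam raise a message's spam score,
# while padding drawn from innocuous text drags it back down.
import math
from collections import Counter

spam_docs = ["cheap pills online", "cheap online degree", "win money online"]
ham_docs = ["meeting notes attached", "the whale swam on", "notes on the meeting"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    # Log-odds that the message is spam, with add-one smoothing.
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("cheap pills"))  # positive: looks like spam
# The litspam move: pad the same pitch with "literary" filler words
print(spam_score("cheap pills the whale swam on the meeting"))
```

The padded message scores lower than the bare pitch because every filler word contributes negative log-odds, which is exactly the arms-race dynamic the excerpt describes.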

Why American medicine still runs on fax machines

Wednesday, March 14th, 2018

Why American medicine still runs on fax machines
https://www.Vox.com/health-care/2017/10/30/16228054/american-medical-system-fax-machines-why Great article explains how the inability to kill the “cockroach of American medicine” illustrates the incentives or anti-incentives toward data sharing & interoperability HT @DShaywitz

QT:{{”
“Competitive pressure between the electronic record makers themselves only made things worse. The electronic record makers don’t have much incentive to connect well with other records, when they’d rather just convert that hospital on a different electronic platform into one of their own customers.

“When you want competing entities to share information, you have to realize that they’re sharing things that could help their competitors.” “If [electronic record vendors] expended all that time and effort to make it so anyone could plug into any other system, it’s reducing the advantage of staying on your particular network,” Mostashari says.

This is especially true for larger electronic medical record companies, which want to sell the advantages of joining a record that is used in lots of doctor offices. “You want to make it easier for people to say, ‘Hey, if you’re on [our electronic record], look how awesome it is! You can talk to any user, anywhere in the country,” he argues.

In short, economics gave hospitals plenty of reasons not to connect their records with other hospitals — to stick with a clunky technology, like fax, that makes it hard to transmit information. And the government didn’t give any incentives to connect — it stopped at digitizing medicine, falling short of the interoperability that patients actually want.
“}}

Thought experiments | The Economist

Sunday, February 25th, 2018

Thought experiments
https://www.Economist.com/technology-quarterly/2018-01-06/thought-experiments Amazing progress in Brain-computer interfaces (#BCIs): paralyzed patients manipulating silverware. Communicating w/ “locked-in” individuals. Will this scale?

QT:{{”
Brain-computer interfaces sound like the stuff of science fiction. Andrew Palmer sorts the reality from the hype

IN THE gleaming facilities of the Wyss Centre for Bio and Neuroengineering in Geneva, a lab technician takes a well plate out of an incubator. Each well contains a tiny piece of brain tissue derived from human stem cells and sitting on top of an array of electrodes. …
To see these signals emanating from disembodied tissue is weird. The firing of a neuron is the basic building block of intelligence. …

This symphony of signals is bewilderingly complex. There are as many as 85bn neurons in an adult human brain, and a typical neuron has 10,000 connections to other such cells. The job of mapping these connections is still in its early stages. But as the brain gives up its secrets, remarkable possibilities have opened up: of decoding neural activity and using that code to control external devices.

“}}
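The scale the excerpt describes is worth making concrete. As back-of-envelope arithmetic, using the figures quoted above:

```python
# The mapping challenge, in one multiplication (figures from the excerpt):
neurons = 85e9        # up to 85bn neurons in an adult human brain
connections = 10_000  # typical connections per neuron
synapses = neurons * connections
print(f"{synapses:.1e}")  # ~8.5e14 connections to map
```

Roughly 850 trillion connections, which is why the excerpt calls the mapping effort "still in its early stages."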

The iPhone, the Pixel, and the tragic anxiety of having to choose

Sunday, February 25th, 2018

The iPhone, the Pixel, & the tragic anxiety of having to choose, by
@vladSavov https://www.theVerge.com/2018/2/20/17029324/iphone-x-pixel-2-xl-apple-google-choice-anxiety Mostly agreed with the comparison (better iOS interface v fantastic $GOOG camera) but feel gmail is definitely better on Android. My solution: carry both!

QT:{{”
“Android’s way of consolidating notifications from the same person or app is vastly superior to iOS’s massive bubble for every single message, Twitter like, or email. When I wake in the morning with the Pixel, I get a complete account of what I’ve missed just from my lock screen: a dozen unread emails, three Telegram chats, …

When I want to actively use my phone, though, my hand tends to sneak toward the iPhone. Twitter, Slack, Telegram, and Speedtest each have meaningfully superior apps for iOS than Android….BBC iPlayer Radio consistently streams live content 30 seconds earlier on iOS, and Gmail for iOS fetches emails faster than Gmail on Android. And it looks better, dammit!”
“}}

Imaging Without Lenses

Sunday, February 4th, 2018

Imaging Without Lenses
https://www.AmericanScientist.org/article/imaging-without-lenses Computational #photography w. compressive sensing (reconstruction from arbitrary image bases) & diffractive imaging (forming an image via scattering from gratings) via @AmSciMag

QT:{{”

Computational Imaging

As its name suggests, the key advance in this new paradigm is the essential role played by computation in the formation of the final digital image. …
When the orbiting Hubble Space Telescope first sent its photos to Earth in the late 1980s, the images were far blurrier than expected; it quickly became apparent that something was wrong with the telescope optics. NASA scientists diagnosed the optical problems and, in the years before the unmanned telescope could be repaired, designed sophisticated digital processing algorithms to correct the images by compensating for many of the effects of flawed optics.

In the mid-1990s, W. Thomas Cathey and Edward R. Dowski, Jr., realized that one could go further still: One could intentionally design optics to produce blurry, “degraded” optical images, but degraded in such a way that special digital processing would produce a final digital image as good as, or even better than, those captured using traditional optics. …

Diffraction for Imaging

One class of lensless devices for imaging macroscopic objects relies on miniature gratings consisting of steps in thickness in a transparent material (glass or silicate) that delay one portion of the incident light wave with respect to another portion. The pattern of steps expresses special mathematical properties that uniquely ensure that the pattern of light in the material does not depend much on the wavelength of the light and thus upon the unintended variations in thickness arising during the manufacture of the glass. …The light from the scene diffracts through the grating, yielding a pattern of light on the array that does not appear like a traditional image—it does not “look good” but instead more like a diffuse blob, unintelligible to the human eye. Nevertheless, the blob contains enough visual information (albeit in an unusual distribution) such that the desired image can be reconstructed through a computational process called image convolution.

Compressive Sensing

….An optical image on a sensor is just a complicated signal that can be represented as a list of numbers and processed digitally. Just as a complicated sound can be built up from a large number of simpler sounds, each added in a proportion that depends on the sound in question, so too can an image be built up from lots of simpler images. …

Enter compressive sensing. Theoretical results from statisticians have shown that, as long as the information from the scene is redundant (and the image is thus compressible), one does not need to measure such mathematically elegant bases, but can use measurements from a suitably random one. If such “coded measurements” are available then one can still exploit the idea that the signal can be well represented in the elegant basis elements (such as cosines or wavelets) and recover the image through compressive sensing.
“}}
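The “coded measurements” idea in the last paragraph can be sketched numerically: a sparse signal measured through a random matrix can be recovered from far fewer measurements than signal entries. Everything below (the dimensions, the random sensing matrix, the iterative soft-thresholding solver) is an illustrative choice of mine, not taken from the article:

```python
# Compressive-sensing sketch: recover a sparse 100-entry signal from
# only 40 random ("coded") measurements via iterative soft-thresholding.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3             # signal length, measurements, nonzeros

x_true = np.zeros(n)             # sparse "scene": only k nonzero entries
x_true[rng.choice(n, k, replace=False)] = [2.0, -1.5, 3.0]

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                                # coded measurements (m < n)

# Recover x by iterative soft-thresholding (ISTA) on the lasso objective,
# which exploits the signal's sparsity in its representation basis.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(2000):
    x = x + step * A.T @ (y - A @ x)                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold

error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(error, 3))
```

The recovery works because the signal is compressible, exactly the redundancy condition the excerpt mentions; with a dense `x_true`, 40 equations in 100 unknowns would be hopeless.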

The Dark Bounty of Texas Oil

Sunday, January 28th, 2018

The Dark Bounty of Texas Oil
https://www.NewYorker.com/magazine/2018/01/01/the-dark-bounty-of-texas-oil The development of #fracking & horizontal drilling by Mitchell et al. is perhaps not appreciated as a major tech success of late 20th century (up there w/ the web & iPod!) but it did radically change the #energy economy

QT:{{”
“In 1954, Mitchell obtained a contract to supply ten per cent of Chicago’s natural-gas needs. However, the producing wells operated by his company, Mitchell Energy & Development, were declining. He needed to discover new sources of petroleum, or else.

A safer and more precise method, developed in the seventies, was to use jets of fluid, under intense pressure, to create micro-cracks in the strata, typically in limestone or sandstone. Expensive gels or foams were generally used to thicken the fluid, and biocide was added to kill the bacteria that can clog the cracks. A granular substance called “proppant,” made of sand or ceramics, was pumped into the cracks, keeping pathways open so that the hydrocarbons could make it to the surface. The process, which came to be known as hydraulic fracturing, or fracking, jostled loose the captured oil or gas molecules, but the technology had a fatal flaw: it was too costly to turn a profit in shale.

In 1981, Mitchell drilled his first fracked well in the Barnett shale, the C. W. Slay No. 1. It lost money, as did many wells that followed it.

To cut costs, one of Mitchell’s engineers, Nick Steinsberger, began tinkering with the fracking-fluid formula. He reduced the quantity of gels and chemicals, making the liquid more watery, and added a cheap lubricant, polyacrylamide…

Mitchell combined his new fracking formula with horizontal-drilling techniques that had been developed offshore; once you bored deep enough to reach a deposit, you could direct the bit into the oil- or gas-bearing seam, a far more efficient means of recovery. In 1998, one of Mitchell’s wells in the Barnett, S. H. Griffin No. 4, made a profit. The shale revolution was under way. Soon the same fracking techniques that Mitchell had pioneered in gas were applied to oil.”


The world economy was in danger of being held captive to oil states that were often intensely anti-American. Then, around the time that Barack Obama became President, U.S. production shot back up, approaching its all-time peak. On Fowler’s graph, it looked like a flagpole. “In the span of five years, we go from 5.5 million barrels a day to 9.5 million, almost doubling the U.S. output,”…The difference, Fowler said, was advanced fracking techniques and horizontal drilling. …
The town used to be called Clark, but a decade ago its mayor made a deal with a satellite network to provide ten years of free basic service to the two hundred residents, in return for renaming the town after the company. Satellite dishes still sit atop many houses there, and even though the agreement has expired the town’s name remains: DISH.

“}}