Posts Tagged ‘quote’

Health ROI as a measure of misalignment of biomedical needs and resources : Nature Biotechnology : Nature Publishing Group

Sunday, January 10th, 2016

Health ROI as a measure of misalignment of…needs & resources by @arzhetsky http://www.nature.com/nbt/journal/v33/n8/full/nbt.3276.html See funding decisions like stock trades

QT:{{"In a recently published letter to Nature Biotechnology, Lixia Yao,
IGSB core faculty Andrey Rzhetsky and colleagues dissect the decisions
made in funding choices. His team compares these choices by funding
agencies to trades in a financial market. In this communication, they
expand on the idea that there exists an imbalance between health needs
and biomedical research investment.

In order to fairly examine the relationship between biomedical need
and biomedical research, they validated a new, insurance based measure
of health burden that enables automatic evaluation of burden and
research investment for many more diseases than have been previously
assessed."}}

We Need a New Green Revolution

Saturday, January 9th, 2016

We Need a New Green Revolution http://www.nytimes.com/2016/01/04/opinion/we-need-a-new-green-revolution.html Advocates US agri-science funding to grow yields. Sensible given #obesity epidemic?

QT:{{"
“Today, farm production has stopped growing in the United States, and agriculture research is no longer a priority; it constitutes only 2 percent of federal research and development spending. And, according to the Department of Agriculture, total agricultural production has slowed significantly since the turn of the century. We need another ambitious surge in agricultural science.”
"}}

Given that eyes appear to have evolved multiple times independently through evolution, why has human-level intelligence not evolved more than once? – Quora

Thursday, January 7th, 2016

QT:{{”
Richard Dawkins (in “The Blind Watchmaker”) writes:

“Michael Land reckons that there are nine basic principles for image-forming that eyes use, and that most of them have evolved many times independently. For instance, the curved dish-reflector principle is radically different from our own camera-eye (we use it in radio telescopes, and also in our largest optical telescopes because it is easier to make a large mirror than a large lens), and it has been independently ‘invented’ by various molluscs and crustaceans. Other crustaceans have a compound eye like insects (really a bank of lots of tiny eyes), while other molluscs, as we have seen, have a lensed camera-eye like ours, or a pinhole camera-eye. For each of these types of eye, stages corresponding to evolutionary intermediates exist as working eyes among other modern animals.”

With all respect to Mr. Dawkins, to believe that a structure as complex as any brain has evolved more than once is stretching credulity too far.

Michael Land calls the eyes “the premier sensory outposts of the brain” (see The Evolution of Eyes (1992)), but he only mentions the eye/brain connection three times (and then only in passing), and not in any brain evolution context.
“}}

http://redwood.berkeley.edu/vs265/landfernald92.pdf
https://www.quora.com/Given-that-eyes-appear-to-have-evolved-multiple-times-independently-through-evolution-why-has-human-level-intelligence-not-evolved-more-than-once

Here’s Why Public Wifi is a Public Health Hazard — Matter

Wednesday, January 6th, 2016

Why Public #Wifi is a…Hazard
https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6 Exposes one’s past network usage; ergo, don’t put your street name into your home’s SSID

QT:{{”

Wouter removes his laptop from his backpack, puts the black device on the table, and hides it under a menu. A waitress passes by and we ask for two coffees and the password for the WiFi network. Meanwhile, Wouter switches on his laptop and device, launches some programs, and soon the screen starts to fill with green text lines. It gradually becomes clear that Wouter’s device is connecting to the laptops, smartphones, and tablets of cafe visitors.

On his screen, phrases like “iPhone Joris” and “Simone’s MacBook” start to appear. The device’s antenna is intercepting the signals that are being sent from the laptops, smartphones, and tablets around us.

More text starts to appear on the screen. We are able to see which WiFi networks the devices were previously connected to. Sometimes the names of the networks are composed of mostly numbers and random letters, making it hard to trace them to a definite location, but more often than not, these WiFi networks give away the place they belong to.

We learn that Joris had previously visited McDonald’s, probably spent his vacation in Spain (lots of Spanish-language network names), and had been kart-racing (he had connected to a network belonging to a well-known local kart-racing center). Martin, another café visitor, had been logged on to the network of Heathrow airport and the American airline Southwest. In Amsterdam, he’s probably staying at the White Tulip Hostel. He had also paid a visit to a coffee shop called The Bulldog.

“}}
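The exposure described above rests on a documented Wi-Fi behaviour: client devices broadcast "probe requests" that can name networks they have previously joined. As a rough sketch of how little is needed to observe this (my own illustration in Python with scapy; the article never says what software Wouter used), the snippet below prints the SSIDs named in probe requests, assuming a wireless card already in monitor mode on an interface called "wlan0mon" (both are assumptions):

# Hedged sketch: passively print the SSIDs named in Wi-Fi probe requests.
# Assumes scapy is installed and "wlan0mon" is a monitor-mode interface
# (neither detail comes from the article).
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq, Dot11Elt

def show_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        elt = pkt.getlayer(Dot11Elt)                   # first element carries the SSID
        ssid = elt.info.decode(errors="ignore") if elt else ""
        if ssid:                                       # a named probe reveals a past network
            print(f"{pkt.addr2} is looking for '{ssid}'")

sniff(iface="wlan0mon", prn=show_probe, store=False)   # store=False: don't buffer packets

Seeing your own device's probes this way makes the closing advice concrete: an SSID that encodes your street name is rebroadcast by your devices wherever you take them.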

For the Wealthiest, a Private Tax System That Saves Them Billions – NYTimes.com

Tuesday, January 5th, 2016

For Wealthiest…Tax System…Saves Them Billions http://www.nytimes.com/2015/12/30/business/economy/for-the-wealthiest-private-tax-system-saves-them-billions.html From Obama’s inauguration through ’12, #tax for 1%ers flat at ~24% v for top .1% down >3% to ~18%

QT:{{"From Mr. Obama’s inauguration through the end of 2012, federal income
tax rates on individuals did not change (excluding payroll taxes). But
the highest-earning one-thousandth of Americans went from paying an
average of 20.9 percent to 17.6 percent. By contrast, the top 1
percent, excluding the very wealthy, went from paying just under 24
percent on average to just over that level.

“We do have two different tax systems, one for normal wage-earners and
another for those who can afford sophisticated tax advice,” said
Victor Fleischer, a law professor at the University of San Diego who
studies the intersection of tax policy and inequality. “At the very
top of the income distribution, the effective rate of tax goes down,
contrary to the principles of a progressive income tax system.”
"}}

The Blind Watchmaker – Simplest selector is a hole

Monday, January 4th, 2016

QT:{{”
The waves and the pebbles together constitute a simple example of a system that automatically generates non-randomness. The world is full of such systems. The simplest example I can think of is a hole. Only objects smaller than the hole can pass through it. This means that if you start with a random collection of objects above the hole, and some force shakes and jostles them about at random, after a while the objects above and below the hole will come to be nonrandomly sorted. The space below the hole will tend to contain objects smaller than the hole, and the space above will tend to contain objects larger than the hole. Mankind has, of course, long exploited this simple principle for generating non-randomness, in the useful device known as the sieve. “}}

http://dbanach.com/dawkins3.htm
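As a toy illustration of the sieve principle (my own sketch in Python, not anything from the book), a fixed "hole" size splits randomly sized objects into two non-randomly sorted groups, even though the process generating them had no preferences of its own:

import random

HOLE_SIZE = 0.5
objects = [random.random() for _ in range(1000)]      # random "sizes" in [0, 1)

below = [s for s in objects if s < HOLE_SIZE]         # small enough to fall through
above = [s for s in objects if s >= HOLE_SIZE]        # retained above the hole

print(f"mean size below the hole: {sum(below) / len(below):.2f}")
print(f"mean size above the hole: {sum(above) / len(above):.2f}")

The two printed means differ systematically, which is all "generating non-randomness" requires here: a passive filter turns a random input into a sorted output.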

How Chris McCandless Died

Monday, December 28th, 2015

How Chris McCandless Died
http://www.newyorker.com/online/blogs/books/2013/09/how-chris-mccandless-died.html Interesting connection between plant #toxin used at a concentration camp & starved camper

QT:{{”
““I first learned about Vapniarca through a book whose title I’ve long forgotten,” Hamilton told me. “Only the barest account of Vapniarca appeared in one of its chapters …. But after reading ‘Into the Wild,’ I was able to track down a manuscript about Vapniarca that has been published online.” Later, in Romania, he located the son of a man who served as an administrative official at the camp, who sent Hamilton a trove of documents.

In 1942, as a macabre experiment, an officer at Vapniarca started feeding the Jewish inmates bread made from seeds of the grass pea, Lathyrus sativus, a common legume that has been known since the time of Hippocrates to be toxic. “Very quickly,” Hamilton writes in “The Silent Fire,”
“}}

Kodak Gallery closing; billions of images to be transferred to Shutterfly

Monday, December 28th, 2015

QT:{{”
There’s no charge for the transfer, but note that once photos are transferred, the only way to obtain full-resolution originals will be to purchase a Shutterfly archive DVD; the company doesn’t currently offer full-sized image downloads. It’s also important to note that gift certificates, credits, etc. from Kodak Gallery will not be transferred to Shutterfly, and so if you don’t use these before the closure of Kodak Gallery, their value will be lost.
“}}

http://www.imaging-resource.com/news/2012/05/07/kodak-gallery-closing-billions-of-images-to-be-transferred-to-shutterfly

The Doomsday Invention – The New Yorker

Sunday, December 27th, 2015

The Doomsday Invention
http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom View of Transhumanism, Extropianism, the Singularity &
Superintelligence in terms of 1 person

about:
http://www.nickbostrom.com/

QT:{{

I arrived before he did, and waited in a hallway between two conference rooms. A plaque indicated that one of them was the Arkhipov Room, honoring Vasili Arkhipov, a Soviet naval officer. During the Cuban missile crisis, Arkhipov was serving on a submarine in the Caribbean when U.S. destroyers set off depth charges nearby. His captain, unable to establish radio contact with Moscow, feared that the conflict had escalated and ordered a nuclear strike. But Arkhipov dissuaded him, and all-out atomic war was averted. Across the hallway was the Petrov Room, named for another Soviet officer who prevented a global nuclear catastrophe. Bostrom later told me, “They may have saved more lives than most of the statesmen we celebrate on stamps.”


Although Bostrom did not know it, a growing number of people around the world shared his intuition that technology could cause
transformative change, and they were finding one another in an online discussion group administered by an organization in California called the Extropy Institute. The term “extropy,” coined in 1967, is generally used to describe life’s capacity to reverse the spread of entropy across space and time. Extropianism is a libertarian strain of transhumanism that seeks “to direct human evolution,” hoping to eliminate disease, suffering, even death; the means might be genetic modification, or as yet uninvented nanotechnology, or perhaps dispensing with the body entirely and uploading minds into
supercomputers. (As one member noted, “Immortality is mathematical, not mystical.”) The Extropians advocated the development of artificial superintelligence to achieve these goals, and they envisioned humanity colonizing the universe, converting inert matter into engines of civilization.

He believes that the future can be studied with the same
meticulousness as the past, even if the conclusions are far less firm. “It may be highly unpredictable where a traveller will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination,” he once argued. “The very long-term future of humanity may be relatively easy to predict.” He offers an example: if history were reset, the industrial revolution might occur at a different time, or in a different place, or perhaps not at all, with innovation instead occurring in increments over hundreds of years. In the short term, predicting technological achievements in the counter-history might not be possible; but after, say, a hundred thousand years it is easier to imagine that all the same inventions would have emerged.

Bostrom calls this the Technological Completion Conjecture: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” In light of this, he suspects that the farther into the future one looks the less likely it seems that life will continue as it is. He favors the far ends of possibility: humanity becomes transcendent or it perishes. …
Unexpectedly, by dismissing its founding goals, the field of A.I. created space for outsiders to imagine more freely what the technology might look like. Bostrom wrote his first paper on artificial superintelligence in the nineteen-nineties, envisioning it as potentially perilous but irresistible to both commerce and government. “If there is a way of guaranteeing that superior artificial intellects will never harm human beings, then such intellects will be created,” he argued. “If there is no way to have such a guarantee, then they will probably be created nevertheless.” His audience at the time was primarily other transhumanists. But the movement was maturing. In 2005, an organization called the Singularity Institute for Artificial Intelligence began to operate out of Silicon Valley; its primary founder, a former member of the Extropian discussion group, published a stream of literature on the dangers of A.I. That same year, the futurist and inventor Ray Kurzweil wrote “The Singularity Is Near,” a best-seller that prophesied a merging of man and machine in the foreseeable future. Bostrom created his institute at Oxford. …
The parable is his way of introducing the book’s core question: Will an A.I., if realized, use its vast capability in a way that is beyond human control? One way to think about the concern is to begin with the familiar. Bostrom writes, “Artificial intelligence already
outperforms human intelligence in many domains.” The examples range from chess to Scrabble. One program from 1981, called Eurisko, was designed to teach itself a naval role-playing game. After playing ten thousand matches, it arrived at a morally grotesque strategy: to field thousands of small, immobile ships, the vast majority of which were intended as cannon fodder. In a national tournament, Eurisko demolished its human opponents, who insisted that the game’s rules be changed. The following year, Eurisko won again—by forcing its damaged ships to sink themselves.

….worries that solving the “control problem”—insuring that a superintelligent machine does what humans want it to do—will require more time than solving A.I. does. The intelligence explosion is not the only way that a superintelligence might be created suddenly. Bostrom once sketched out a decades-long process, in which researchers arduously improved their systems to equal the intelligence of a mouse, then a chimp, then—after incredible labor—the village idiot. “The difference between village idiot and genius-level intelligence might be trivial from the point of view of how hard it is to replicate the same functionality in a machine,” he said. “The brain of the village idiot and the brain of a scientific genius are almost identical. So we might very well see relatively slow and incremental progress that doesn’t really raise any alarm bells until we are just one step away from something that is radically superintelligent.”
….

Stuart Russell, the co-author of the textbook “Artificial
Intelligence: A Modern Approach” and one of Bostrom’s most vocal supporters in A.I., told me that he had been studying the physics community during the advent of nuclear weapons. At the turn of the twentieth century, Ernest Rutherford discovered that heavy elements produced radiation by atomic decay, confirming that vast reservoirs of energy were stored in the atom. Rutherford believed that the energy could not be harnessed, and in 1933 he proclaimed, “Anyone who expects a source of power from the transformation of these atoms is talking moonshine.” The next day, a former student of Einstein’s named Leo Szilard read the comment in the papers. Irritated, he took a walk, and the idea of a nuclear chain reaction occurred to him. He visited Rutherford to discuss it, but Rutherford threw him out. Einstein, too, was skeptical about nuclear energy—splitting atoms at will, he said, was “like shooting birds in the dark in a country where there are only a few birds.” A decade later, Szilard’s insight was used to build the bomb.
….
Between the two conferences, the field had experienced a revolution, built on an approach called deep learning—a type of neural network that can discern complex patterns in huge quantities of data. For decades, researchers, hampered by the limits of their hardware, struggled to get the technique to work well. But, beginning in 2010, the increasing availability of Big Data and cheap, powerful
video-game processors had a dramatic effect on performance. Without any profound theoretical breakthrough, deep learning suddenly offered breathtaking advances. “I have been talking to quite a few
contemporaries,” Stuart Russell told me. “Pretty much everyone sees examples of progress they just didn’t expect.” He cited a YouTube clip of a four-legged robot: one of its designers tries to kick it over, but it quickly regains its balance, scrambling with uncanny
naturalness. “A problem that had been viewed as very difficult, where progress was slow and incremental, was all of a sudden done. Locomotion: done.”

In recent years, Google has purchased seven robotics companies and several firms specializing in machine intelligence; it may now employ the world’s largest contingent of Ph.D.s in deep learning. Perhaps the most interesting acquisition is a British company called DeepMind, started in 2011 to build a general artificial intelligence. Its founders had made an early bet on deep learning, and sought to combine it with other A.I. mechanisms in a cohesive architecture. In 2013, they published the results of a test in which their system played seven classic Atari games, with no instruction other than to improve its score. For many people in A.I., the importance of the results was immediately evident. I.B.M.’s chess program had defeated Garry Kasparov, but it could not beat a three-year-old at tic-tac-toe. In six games, DeepMind’s system outperformed all previous algorithms; in three it was superhuman. In a boxing game, it learned to pin down its opponent and subdue him with a barrage of punches.

Weeks after the results were released, Google bought the company, reportedly for half a billion dollars. DeepMind placed two unusual conditions on the deal: its work could never be used for espionage or defense purposes, and an ethics board would oversee the research as it drew closer to achieving A.I. Anders Sandberg had told me, “We are happy that they are among the most likely to do it. They recognize there are some problems.”

“I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”

“}}

Startups Take Bite Out of Food Poisoning

Friday, December 18th, 2015

Startups Take Bite Out of Food Poisoning
http://www.wsj.com/articles/startups-take-bite-out-of-food-poisoning-1450069262 Chem #sensors for our food & more, eventually integrated into our phones

QT:{{”
“Among portable chemistry sets, electronic noses, visual sensors and a panoply of other technologies—including tools specific to, for example, detecting pesticide on produce—what all these companies have in common is that though they started with food, their sensors could someday add a new level of surveillance and awareness to nature and the built environment.”
“}}