Archive for the ‘tech’ Category

Google parent Alphabet passes Apple market cap at the open

Sunday, February 7th, 2016

Alphabet passes Apple market cap http://www.cnbc.com/2016/02/01/google-passes-apple-as-most-valuable-company.html 4 largest now in tech ($GOOG $547B, $AAPL 529, $MSFT 425, $FB 326) w/ big oil at #5
QT:{{"
Shares of Alphabet opened nearly 3 percent higher Tuesday, pushing the
technology giant’s market capitalization past Apple to become the
world’s most valuable public company.

Alphabet has a market cap of $547.1 billion, higher than Apple’s
$529.3 billion as of 9:45 a.m. ET.

Apple’s massive market cap is still trailed by Microsoft ($425.7
billion), Facebook ($326.2 billion) and Exxon Mobil ($310.1 billion) to
round out the list of the world’s five biggest companies.
"}}

Don’t Be Hacker Bait: Do This One-Hour Security Drill – WSJ

Sunday, February 7th, 2016

http://www.wsj.com/articles/do-this-one-hour-security-drill-5-steps-to-being-safer-online-1454528541

Light-bulb moment | The Economist

Saturday, February 6th, 2016

Light-bulb moment
http://www.economist.com/news/science-and-technology/21688375-bright-idea-save-beloved-technology-dustbin-light-bulb-moment Perhaps future homes will showcase classic incandescent #bulbs as they do fireplaces

Can Drone Pilots Be Heroes?

Monday, February 1st, 2016

Can Drone Pilots Be Heroes?
http://www.theatlantic.com/politics/archive/2016/01/can-drone-pilots-be-heroes/424830/ Even if safe from harm, they’re vital. But then this is also true for #drone programmers

QT:{{"

“There is a counter-history to the “sacrifice value” definition of heroism, one that emphasizes that the sacrifice needs to be in the service of something worthwhile. This second school of heroism calls for people to dedicate themselves to a purpose larger than themselves. As the writer Joseph Campbell put it: “A hero is someone who has given his or her life to something bigger than oneself.” And it is in this definition of heroism, one that accentuates participation in a larger project, that the military claims drone pilots should be included. As Colonel Eric Mathewson, himself a drone operator, told The Washington Post: “Valor, to me, is not risking your life … It is doing what is right for the right reasons.”

Stretching the definition of heroism to include following orders, while lopping off completely the parts about sacrifice and risk, might be indicative of a turn toward what the Center for Strategic and International Studies’ Edward Luttwak calls the “post-heroic.” Extending the kill chain to include more and more civilians, or even just noncombat arms warriors, takes people further and further away from the physical reality of their actions. Already, drone-targeting lists are almost completely determined by algorithm, with operators just there to pull the trigger. When even that human element is removed, what will the kill chain look like? What about the day pilots are completely replaced by artificial intelligence? According to experts, that moment may not be too far off. Grégoire Chamayou, the author of Drone Theory, says that a super-centralized handful of programmers and high-level generals will be constantly refining targeting for AI-operated drones.”
"}}

Here’s Why Public Wifi is a Public Health Hazard — Matter

Wednesday, January 6th, 2016

Why Public #Wifi is a…Hazard
https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6 Exposes one’s past network usage; ergo, don’t put your street into your home’s SSID

QT:{{"

Wouter removes his laptop from his backpack, puts the black device on the table, and hides it under a menu. A waitress passes by and we ask for two coffees and the password for the WiFi network. Meanwhile, Wouter switches on his laptop and device, launches some programs, and soon the screen starts to fill with green text lines. It gradually becomes clear that Wouter’s device is connecting to the laptops, smartphones, and tablets of cafe visitors.

On his screen, phrases like “iPhone Joris” and “Simone’s MacBook” start to appear. The device’s antenna is intercepting the signals that are being sent from the laptops, smartphones, and tablets around us.

More text starts to appear on the screen. We are able to see which WiFi networks the devices were previously connected to. Sometimes the names of the networks are composed of mostly numbers and random letters, making it hard to trace them to a definite location, but more often than not, these WiFi networks give away the place they belong to.

We learn that Joris had previously visited McDonald’s, probably spent his vacation in Spain (lots of Spanish-language network names), and had been kart-racing (he had connected to a network belonging to a well-known local kart-racing center). Martin, another café visitor, had been logged on to the network of Heathrow airport and the American airline Southwest. In Amsterdam, he’s probably staying at the White Tulip Hostel. He had also paid a visit to a coffee shop called The Bulldog.

"}}
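
The mechanism behind this is worth spelling out: phones and laptops periodically broadcast Wi-Fi probe requests naming the networks they remember, and anyone in radio range can passively log those frames. A minimal sketch of the idea, assuming Python with scapy installed and a wireless card already in monitor mode (the interface name "wlan0mon" is just a placeholder):

# Passively list the SSIDs that nearby devices are probing for.
# Assumes scapy is installed and the card is in monitor mode; run with root privileges.
from scapy.all import sniff, Dot11, Dot11Elt, Dot11ProbeReq

seen = set()

def show_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")  # first element of a probe request is the SSID
        mac = pkt[Dot11].addr2                              # source MAC of the probing device
        if ssid and (mac, ssid) not in seen:
            seen.add((mac, ssid))
            print(f"{mac} is looking for '{ssid}'")

sniff(iface="wlan0mon", prn=show_probe, store=False)

Which is why an SSID that names your street (or your employer) hands that history to a stranger the moment your phone starts looking for it.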

Mysteries of Sleep Lie Unsolved

Sunday, January 3rd, 2016

Mysteries of Sleep Lie Unsolved
http://www.nytimes.com/2015/02/26/technology/personaltech/despite-the-promise-of-technology-the-mysteries-of-sleep-lie-unsolved.html The Sense readily tracks #sleep w/o much effort but does it help one sleep better?

The Doomsday Invention – The New Yorker

Sunday, December 27th, 2015

The Doomsday Invention
http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom View of Transhumanism, Extropianism, the Singularity &
Superintelligence in terms of 1 person

about:
http://www.nickbostrom.com/

QT:{{"

I arrived before he did, and waited in a hallway between two conference rooms. A plaque indicated that one of them was the Arkhipov Room, honoring Vasili Arkhipov, a Soviet naval officer. During the Cuban missile crisis, Arkhipov was serving on a submarine in the Caribbean when U.S. destroyers set off depth charges nearby. His captain, unable to establish radio contact with Moscow, feared that the conflict had escalated and ordered a nuclear strike. But Arkhipov dissuaded him, and all-out atomic war was averted. Across the hallway was the Petrov Room, named for another Soviet officer who prevented a global nuclear catastrophe. Bostrom later told me, “They may have saved more lives than most of the statesmen we celebrate on stamps.”


Although Bostrom did not know it, a growing number of people around the world shared his intuition that technology could cause
transformative change, and they were finding one another in an online discussion group administered by an organization in California called the Extropy Institute. The term “extropy,” coined in 1967, is generally used to describe life’s capacity to reverse the spread of entropy across space and time. Extropianism is a libertarian strain of transhumanism that seeks “to direct human evolution,” hoping to eliminate disease, suffering, even death; the means might be genetic modification, or as yet uninvented nanotechnology, or perhaps dispensing with the body entirely and uploading minds into
supercomputers. (As one member noted, “Immortality is mathematical, not mystical.”) The Extropians advocated the development of artificial superintelligence to achieve these goals, and they envisioned humanity colonizing the universe, converting inert matter into engines of civilization.

He believes that the future can be studied with the same
meticulousness as the past, even if the conclusions are far less firm. “It may be highly unpredictable where a traveller will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination,” he once argued. “The very long-term future of humanity may be relatively easy to predict.” He offers an example: if history were reset, the industrial revolution might occur at a different time, or in a different place, or perhaps not at all, with innovation instead occurring in increments over hundreds of years. In the short term, predicting technological achievements in the counter-history might not be possible; but after, say, a hundred thousand years it is easier to imagine that all the same inventions would have emerged.

Bostrom calls this the Technological Completion Conjecture: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” In light of this, he suspects that the farther into the future one looks the less likely it seems that life will continue as it is. He favors the far ends of possibility: humanity becomes transcendent or it perishes. …
Unexpectedly, by dismissing its founding goals, the field of A.I. created space for outsiders to imagine more freely what the technology might look like. Bostrom wrote his first paper on artificial superintelligence in the nineteen-nineties, envisioning it as potentially perilous but irresistible to both commerce and government. “If there is a way of guaranteeing that superior artificial intellects will never harm human beings, then such intellects will be created,” he argued. “If there is no way to have such a guarantee, then they will probably be created nevertheless.” His audience at the time was primarily other transhumanists. But the movement was maturing. In 2005, an organization called the Singularity Institute for Artificial Intelligence began to operate out of Silicon Valley; its primary founder, a former member of the Extropian discussion group, published a stream of literature on the dangers of A.I. That same year, the futurist and inventor Ray Kurzweil wrote “The Singularity Is Near,” a best-seller that prophesied a merging of man and machine in the foreseeable future. Bostrom created his institute at Oxford. …
The parable is his way of introducing the book’s core question: Will an A.I., if realized, use its vast capability in a way that is beyond human control? One way to think about the concern is to begin with the familiar. Bostrom writes, “Artificial intelligence already
outperforms human intelligence in many domains.” The examples range from chess to Scrabble. One program from 1981, called Eurisko, was designed to teach itself a naval role-playing game. After playing ten thousand matches, it arrived at a morally grotesque strategy: to field thousands of small, immobile ships, the vast majority of which were intended as cannon fodder. In a national tournament, Eurisko demolished its human opponents, who insisted that the game’s rules be changed. The following year, Eurisko won again—by forcing its damaged ships to sink themselves.

….worries that solving the “control problem”—insuring that a superintelligent machine does what humans want it to do—will require more time than solving A.I. does. The intelligence explosion is not the only way that a superintelligence might be created suddenly. Bostrom once sketched out a decades-long process, in which researchers arduously improved their systems to equal the intelligence of a mouse, then a chimp, then—after incredible labor—the village idiot. “The difference between village idiot and genius-level intelligence might be trivial from the point of view of how hard it is to replicate the same functionality in a machine,” he said. “The brain of the village idiot and the brain of a scientific genius are almost identical. So we might very well see relatively slow and incremental progress that doesn’t really raise any alarm bells until we are just one step away from something that is radically superintelligent.”
….

Stuart Russell, the co-author of the textbook “Artificial
Intelligence: A Modern Approach” and one of Bostrom’s most vocal supporters in A.I., told me that he had been studying the physics community during the advent of nuclear weapons. At the turn of the twentieth century, Ernest Rutherford discovered that heavy elements produced radiation by atomic decay, confirming that vast reservoirs of energy were stored in the atom. Rutherford believed that the energy could not be harnessed, and in 1933 he proclaimed, “Anyone who expects a source of power from the transformation of these atoms is talking moonshine.” The next day, a former student of Einstein’s named Leo Szilard read the comment in the papers. Irritated, he took a walk, and the idea of a nuclear chain reaction occurred to him. He visited Rutherford to discuss it, but Rutherford threw him out. Einstein, too, was skeptical about nuclear energy—splitting atoms at will, he said, was “like shooting birds in the dark in a country where there are only a few birds.” A decade later, Szilard’s insight was used to build the bomb.
….
Between the two conferences, the field had experienced a revolution, built on an approach called deep learning—a type of neural network that can discern complex patterns in huge quantities of data. For decades, researchers, hampered by the limits of their hardware, struggled to get the technique to work well. But, beginning in 2010, the increasing availability of Big Data and cheap, powerful
video-game processors had a dramatic effect on performance. Without any profound theoretical breakthrough, deep learning suddenly offered breathtaking advances. “I have been talking to quite a few
contemporaries,” Stuart Russell told me. “Pretty much everyone sees examples of progress they just didn’t expect.” He cited a YouTube clip of a four-legged robot: one of its designers tries to kick it over, but it quickly regains its balance, scrambling with uncanny
naturalness. “A problem that had been viewed as very difficult, where progress was slow and incremental, was all of a sudden done. Locomotion: done.”

In recent years, Google has purchased seven robotics companies and several firms specializing in machine intelligence; it may now employ the world’s largest contingent of Ph.D.s in deep learning. Perhaps the most interesting acquisition is a British company called DeepMind, started in 2011 to build a general artificial intelligence. Its founders had made an early bet on deep learning, and sought to combine it with other A.I. mechanisms in a cohesive architecture. In 2013, they published the results of a test in which their system played seven classic Atari games, with no instruction other than to improve its score. For many people in A.I., the importance of the results was immediately evident. I.B.M.’s chess program had defeated Garry Kasparov, but it could not beat a three-year-old at tic-tac-toe. In six games, DeepMind’s system outperformed all previous algorithms; in three it was superhuman. In a boxing game, it learned to pin down its opponent and subdue him with a barrage of punches.

Weeks after the results were released, Google bought the company, reportedly for half a billion dollars. DeepMind placed two unusual conditions on the deal: its work could never be used for espionage or defense purposes, and an ethics board would oversee the research as it drew closer to achieving A.I. Anders Sandberg had told me, “We are happy that they are among the most likely to do it. They recognize there are some problems.”

“I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”

"}}
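
What made the Atari demonstration striking is the learning setup: the system is told nothing about each game except the score, and it discovers by trial and error which actions raise it. DeepMind's actual agent used a deep network reading raw pixels; as a toy illustration of the same reward-only idea, here is a tabular Q-learning sketch on a made-up one-dimensional grid world (the environment and all names are invented for the example):

import random

# Toy environment: agent starts at cell 0 and is rewarded (+1) only for reaching the last cell.
# Nothing about the "game" is hard-coded into the learner except this reward signal.
N_STATES = 5
ACTIONS = [-1, +1]                      # step left or right
GOAL = N_STATES - 1
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount factor, exploration rate

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit what the score has taught so far, occasionally explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward plus the best estimated future value
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the learned policy should be "move right" from every non-goal cell.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)})

Replace the table with a deep network over pixels and this loop is, very roughly, the shape of the system described above.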

Tiny Hardware Firewall VPN Client

Wednesday, December 23rd, 2015

Looks like an interesting security idea…
http://tinyhardwarefirewall.com/

Genetic Testing May Be Coming to Your Office – WSJ

Friday, December 18th, 2015

Genetic Testing May Be Coming to Your Office http://www.wsj.com/articles/genetic-testing-may-be-coming-to-your-office-1450227295 Insurance may monetize #PersonalGenomics as advertising did for the Web

Startups Take Bite Out of Food Poisoning

Friday, December 18th, 2015

Startups Take Bite Out of Food Poisoning
http://www.wsj.com/articles/startups-take-bite-out-of-food-poisoning-1450069262 Chem #sensors for our food & more, eventually integrated into our phones

QT:{{"
“Among portable chemistry sets, electronic noses, visual sensors and a panoply of other technologies—including tools specific to, for example, detecting pesticide on produce—what all these companies have in common is that though they started with food, their sensors could someday add a new level of surveillance and awareness to nature and the built environment.”
"}}