Posts Tagged ‘photography’

Custom T-Shirts – Design Your Own T-Shirts Online – Free Shipping!

Sunday, October 6th, 2019

https://www.customink.com/

Photopea and Polarr – browser-based editors

Tuesday, August 6th, 2019

The browser-based Photoshop editor is:

https://www.photopea.com

(It’s a lite version of Photoshop with all the functionality an amateur might ever use. It accepts Photoshop and GIMP file formats in addition to other standard file formats.)

For a web-based Lightroom alternative – https://v2.polarr.co/#.

Batch-convert iPhone HEIC photos to JPEG format – CNET

Monday, August 5th, 2019

https://www.cnet.com/how-to/batch-convert-iphone-heic-photos-to-jpeg-format/

Photo Renaming Options in Lightroom – Lightroom Killer Tips

Monday, December 31st, 2018

https://lightroomkillertips.com/photo-renaming-options-lightroom/

Imaging Without Lenses

Sunday, February 4th, 2018

Imaging Without Lenses
https://www.AmericanScientist.org/article/imaging-without-lenses Computational #photography w. compressive sensing (reconstruction from arbitrary image bases) & diffractive imaging (forming an image via scattering from gratings) via @AmSciMag
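The diffractive-imaging idea summarized above, and quoted below, boils down to this: the sensor records the scene convolved with the grating's light pattern, and the image is recovered by computationally inverting that convolution. Here is a minimal numpy sketch of the idea, with an invented random point-spread function standing in for the grating's carefully designed step pattern, and simple regularized inverse filtering standing in for the authors' actual reconstruction method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": a dark field with two bright points.
scene = np.zeros((32, 32))
scene[8, 8] = 1.0
scene[20, 14] = 0.5

# Invented stand-in for the grating's point-spread function: a diffuse
# random blob (the real pattern has a carefully designed step structure).
psf = rng.random((32, 32))
psf /= psf.sum()

# Forward model: the sensor records the (circular) convolution of the
# scene with the PSF -- an unintelligible blob, as the article describes.
blob = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

# Reconstruction: undo the convolution in the Fourier domain, with a
# small regularizer to avoid dividing by near-zero frequencies.
H = np.fft.fft2(psf)
recovered = np.real(np.fft.ifft2(
    np.fft.fft2(blob) * np.conj(H) / (np.abs(H) ** 2 + 1e-6)))

# The two point sources reappear at their original positions.
print(recovered[8, 8], recovered[20, 14])
```

The point is that the blob on the sensor "looks" like nothing, yet loses almost no information: as long as the PSF is known and well-conditioned, the scene is recoverable.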

QT:{{”

Computational Imaging

As its name suggests, the key advance in this new paradigm is the essential role played by computation in the formation of the final digital image. …
When the orbiting Hubble Space Telescope first sent its photos to Earth in the late 1980s, the images were far blurrier than expected; it quickly became apparent that something was wrong with the telescope optics. NASA scientists diagnosed the optical problems and, in the years before the unmanned telescope could be repaired, designed sophisticated digital processing algorithms to correct the images by compensating for many of the effects of flawed optics.

In the mid-1990s, W. Thomas Cathey and Edward R. Dowski, Jr., realized that one could go further still: One could intentionally design optics to produce blurry, “degraded” optical images, but degraded in such a way that special digital processing would produce a final digital image as good as, or even better than, those captured using traditional optics. …

Diffraction for Imaging

One class of lensless devices for imaging macroscopic objects relies on miniature gratings consisting of steps in thickness in a transparent material (glass or silicate) that delay one portion of the incident light wave with respect to another portion. The pattern of steps expresses special mathematical properties that uniquely ensure that the pattern of light in the material does not depend much on the wavelength of the light and thus upon the unintended variations in thickness arising during the manufacture of the glass. …The light from the scene diffracts through the grating, yielding a pattern of light on the array that does not appear like a traditional image—it does not “look good” but instead more like a diffuse blob, unintelligible to the human eye. Nevertheless, the blob contains enough visual information (albeit in an unusual distribution) such that the desired image can be reconstructed through a computational process called image convolution.

Compressive Sensing

….An optical image on a sensor is just a complicated signal that can be represented as a list of numbers and processed digitally. Just as a complicated sound can be built up from a large number of simpler sounds, each added in a proportion that depends on the sound in question, so too can an image be built up from lots of simpler images. …

Enter compressive sensing. Theoretical results from statisticians have shown that, as long as the information from the scene is redundant (and the image is thus compressible), one does not need to measure such mathematically elegant bases, but can use measurements from a suitably random one. If such “coded measurements” are available then one can still exploit the idea that the signal can be well represented in the elegant basis elements (such as cosines or wavelets) and recover the image through compressive sensing.
“}}
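The compressive-sensing recipe in the last quoted paragraph (take random "coded measurements" of a compressible signal, then recover it by exploiting its sparsity) can be sketched in a few lines of numpy. This is a toy: the signal is sparse in the identity basis for simplicity, the problem sizes are invented, and recovery uses orthogonal matching pursuit, one of several standard sparse solvers rather than any method specific to the article:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 128, 60, 3             # unknowns, measurements (m << n), sparsity
x = np.zeros(n)                  # a compressible "image": only k nonzeros
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# "Coded measurements" from a suitably random basis: a Gaussian matrix.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x                        # 60 numbers stand in for 128 unknowns

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the k columns of A
    that best explain y, then least-squares fit on those columns."""
    residual, picked = y.copy(), []
    for _ in range(k):
        picked.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, picked], y, rcond=None)
        residual = y - A[:, picked] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[picked] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x))   # recovery error; near zero on success
```

Fewer than half as many measurements as unknowns, yet exact recovery: that is the redundancy-implies-compressibility point the statisticians' results make precise.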

Google Sells A.I. for Building A.I. (Novices Welcome) – The New York Times

Sunday, January 28th, 2018

$GOOG Sells AI for Building #AI
https://www.NYTimes.com/2018/01/17/technology/google-sells-ai.html QT: “Humans must label the data before the system can learn…once images…labeled…[it] operates w/o human involvement…It can build a model from scratch.” How can one preview this? Will it be integrated into gphotos?

QT:{{”
Initially, Google will open this service only to a small group of businesses.

But sometimes, there is no substitute for good old human labor. With Google’s new service, humans must label the data before the system can learn from it. …

Google says that once images are labeled, its new service operates without human involvement….Given more time, it can build a model from scratch, specifically for the problem at hand.

If you are a zoologist who wants an algorithm that identifies jaguars and giraffes, said Fei-Fei Li, chief scientist inside the Google cloud group, all you have to do is supply the right images. “You upload jaguars and giraffes,” she said. “And you are done.”
“}}

Monitor: Picture imperfect | The Economist

Sunday, September 17th, 2017

Picture imperfect
http://www.Economist.com/news/technology-quarterly/21572915-digital-imaging-insurers-publishers-law-enforcement-agencies-and-dating-sites-are Photoshop add-ons for finding doctored photos – eg @Fourand6

QT:{{”
“Efforts to automate the detection of doctored images are bearing fruit. Last year Fourandsix Technologies, a start-up based in Silicon Valley, began selling an add-on for Photoshop, called FourMatch, that determines whether an image has come straight from a camera or has been manipulated. It compares the “metadata” associated with the image against a database of signatures that represent the characteristic ways in which different devices capture and compress image data, to ensure that the image is what it claims to be. …So a human analyst is still needed “in the loop”, says Mr Farid, one of the firm’s co-founders. A trained eye can spot inconsistencies in shadows, reflections and incorrect perspective.”
“}}
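The signature check described in the quote can be illustrated with a toy sketch. Everything here is invented for illustration (the field names, the two-entry "database"); FourMatch's real signature database and matching logic are its own:

```python
# Toy sketch of metadata signature matching: compare the metadata a file
# carries against known in-camera signatures. Hypothetical data throughout.
KNOWN_SIGNATURES = {
    ("Canon", "EOS 5D Mark III"): {"software": "Firmware 1.2.3", "jpeg_quality": "fine"},
    ("Apple", "iPhone 6"): {"software": "iOS 8.1", "jpeg_quality": "high"},
}

def straight_from_camera(metadata: dict) -> bool:
    """Return True only if the metadata matches a known in-camera signature."""
    sig = KNOWN_SIGNATURES.get((metadata.get("make"), metadata.get("model")))
    if sig is None:
        return False  # unknown device: flag for a human analyst
    return all(metadata.get(field) == value for field, value in sig.items())

original = {"make": "Apple", "model": "iPhone 6",
            "software": "iOS 8.1", "jpeg_quality": "high"}
edited = dict(original, software="Adobe Photoshop CS6")  # resaving rewrites metadata

print(straight_from_camera(original), straight_from_camera(edited))  # → True False
```

Note the asymmetry the article points out: a match only shows the file is consistent with coming straight from a camera, while a mismatch (or an unknown device) says "look closer", which is why a human analyst stays in the loop.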

Harvard is putting its photography classes online for free

Sunday, January 15th, 2017

http://www.konbini.com/us/inspiration/harvard-photography-classes-online-for-free/

Can You Tell if These Objects Are Real or Rendered?

Monday, December 26th, 2016

Can You Tell if These…Are…Rendered?
https://www.Wired.com/2016/12/skrekkogle-still-life/ @Skrekkogle makes the real appear simulated. Implications for photo evidence

QT:{{”
“The Norwegian design studio Skrekkogle played this game with Still File, a series of photos that look like renderings but aren’t. Instead of manipulating pixels on a screen, studio founders Lars Marcus Vedeler and Theo Zamudio-Tveterås created and photographed sets that look like scenes made with 3-D rendering software. “It’s a weirdly elaborate process,” Vedeler says.

In a particularly cool photo, they 3-D printed three wildly distorted teapots, gave them a flat finish, and glued them to the background before photographing them as a surrealist scene. In another, they placed a marble, a plastic cone, and a wood-lined cube atop checkered paper lacquered with acrylic. The camera’s flash reflected the checkerboard pattern onto the objects, creating a false sense of depth.”
“}}

Inside the Development of Light, the Tiny Digital Camera That Outperforms DSLRs – IEEE Spectrum

Wednesday, November 23rd, 2016

Light, the Tiny Digital Camera That Outperforms DSLRs
http://spectrum.ieee.org/consumer-electronics/gadgets/inside-the-development-of-light-the-tiny-digital-camera-that-outperforms-dslrs A multi-lens future made possible by computational photography [category tech]