Photography and Smart Cameras
- Photography and its Trends
The word "photography", translated literally, means "writing with light". Photography occurs when light from the front of the camera refracts through the lens and projects an image onto the back of the camera. In film photography, the image (shade) is held by a light-sensitive material, usually made of silver, which reacts to light to capture the image on silver particles of various sizes. In digital photography, electronic sensors respond to light to capture images with pixels of the same size.
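As a concrete illustration of how a lens projects an image onto the back of the camera, the thin-lens equation from standard geometric optics (general physics, not specific to this text) relates the lens focal length to the object and image distances:

```latex
% Thin-lens equation: focal length f, object distance d_o,
% image distance d_i (Gaussian optics approximation).
\[
  \frac{1}{f} \;=\; \frac{1}{d_o} + \frac{1}{d_i}
\]
% Worked example: a 50 mm lens focused on a subject 5 m away
% forms a sharp image about 50.5 mm behind the lens, since
% 1/d_i = 1/50 - 1/5000  =>  d_i = 5000/99 \approx 50.5 mm.
```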
A few decades ago, almost all photography was film-based. At that time, film could capture a much higher-quality image than a digital sensor, so most photographers shot on film and scanned the negatives.
Then, as digital cameras fell in price and improved in quality, people realized they could take more pictures with less effort, and most commercial and industrial applications went digital. Soon everyone had a digital camera on their phone, and those cameras, running complex algorithms modeled on film photography, began doing much of what film could do, but more easily and cheaply.
Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. It can sharpen images by reducing motion blur, add simulated depth of field, and improve color, dynamic range, and contrast using image processing algorithms.
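As a rough illustration of the kind of processing involved, the sketch below shows two simple enhancement steps of this sort: contrast stretching and unsharp-mask sharpening. It assumes a grayscale image already loaded as a float NumPy array in [0, 1]; the function names are illustrative, not taken from any particular camera pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stretch_contrast(img, low_pct=2, high_pct=98):
    """Linearly remap the low/high percentiles to [0, 1]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Sharpen by adding back the difference from a blurred copy."""
    blurred = gaussian_filter(img, sigma=sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Usage (raw_frame is a hypothetical grayscale float array):
# enhanced = unsharp_mask(stretch_contrast(raw_frame))
```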
Before digital photography, photographs were made by exposing film and paper, which were processed in liquid chemical solutions to develop and stabilize the image. Digital photographs are typically created solely through computer-based photoelectric and mechanical techniques, without wet-bath chemical processing.
As industry demand declined, film became a niche product and film prices continued to rise. Film processing, once available on nearly every street corner, has likewise become harder to find. At the same time, digital quality continues to improve.
- Digital Photography
Digital cameras don't have film; instead, they have a sensor. Unlike film, which is bought, processed, and printed separately from the camera, the sensor and the rest of the camera can be used over and over, turning the image projected onto the sensor into a picture that can be instantly and infinitely reproduced.
Digital photography, which accounts for the vast majority of photography today, was developed on the foundations of film photography. From the invention of photography in the early 1800s to the mid-2000s, film was how nearly all photographic images were created. Over the past decade, however, the situation has reversed: digital photography is now the default and film photography the exception.
Digital photography uses a camera containing an array of electronic photodetectors connected to an analog-to-digital converter (ADC) to capture an image focused by a lens, rather than exposing photographic film. The digitized image is stored as a computer file for further digital processing, viewing, electronic publishing, or digital printing. It is a form of digital imaging based on the collection of visible light (or, in scientific instruments, light in other ranges of the electromagnetic spectrum).
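The sketch below is a toy model of the readout step just described: analog photodetector signals are quantized by an ADC into discrete integer codes that form the stored image. The analog values are simulated here with random numbers; a real camera reads voltages from its photosite array.

```python
import numpy as np

def adc_quantize(analog, bit_depth=12, full_scale=1.0):
    """Map analog intensities in [0, full_scale] to integer codes."""
    levels = 2 ** bit_depth - 1
    codes = np.round(np.clip(analog / full_scale, 0.0, 1.0) * levels)
    return codes.astype(np.uint16)

# Simulate a 4x4 patch of photodetector readings and digitize it.
rng = np.random.default_rng(0)
analog_patch = rng.uniform(0.0, 1.0, size=(4, 4))
digital_patch = adc_quantize(analog_patch, bit_depth=12)
print(digital_patch)  # integer sensor values, ready to store as a file
```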
- Computational Photography
Computational photography is concerned with computationally overcoming the limitations of traditional photography: optics, sensors, and geometry, and even composition, style, and the human-machine interface. It can increase the capabilities of cameras, introduce capabilities that are simply not possible with film photography, or reduce the cost or size of camera components.
Marc Levoy, professor emeritus of computer science at Stanford University, defines computational photography as "a variety of computational imaging techniques that enhance or extend the capabilities of digital photography [in which] the output is an ordinary photograph, but one that could not have been taken by a traditional camera".
The definition of computational photography has grown to encompass many disciplines within computer graphics, computer vision, and applied optics.
Computational photography combines plentiful computing, digital sensors, modern optics, actuators, and smart lights to escape the limitations of traditional film cameras and to enable novel imaging applications. Unbounded dynamic range; variable focus, resolution, and depth of field; cues for shape, albedo, and lighting; and new interactive forms of photos that are part snapshot and part video are just some of the new applications in computational photography.
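To make the dynamic-range point concrete, here is a minimal sketch of exposure fusion, one common computational technique for extending dynamic range. It assumes a list of aligned grayscale exposures as float NumPy arrays in [0, 1], and its weighting scheme (favoring well-exposed pixels near mid-gray) is a simplified version of the approach popularized by Mertens et al., not a production pipeline.

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Blend multiple exposures, weighting pixels near mid-gray.

    exposures: list of aligned grayscale frames, float arrays in [0, 1].
    Returns one frame that keeps detail in both shadows and highlights.
    """
    stack = np.stack(exposures)  # shape (n, H, W)
    # Well-exposedness weight: high near 0.5, low near clipping.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights * stack).sum(axis=0)

# Usage with three hypothetical bracketed shots:
# fused = fuse_exposures([under_exposed, normal, over_exposed])
```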
- Smart Cameras
Smart cameras have storage and processing built into the camera housing, whereas traditional cameras capture images and send them to another device for storage and analysis. Smart cameras also have built-in I/O and communication capabilities that connect them directly to automation equipment. This makes for a more compact and robust installation.
Smart cameras typically have higher-resolution sensors than traditional machine-vision cameras; 2 MP or even 5 MP sensors are not uncommon. With these higher pixel counts, smart cameras can detect smaller features in images. Although smart cameras cost more than their simpler predecessors, they can reduce the number of cameras needed, as well as the amount of external processing equipment.
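As a sketch of the architecture just described: the camera captures a frame, analyzes it on board, and reports only a pass/fail result to the automation equipment over its built-in communications. Everything here (the simulated frame source, the defect threshold, the controller address) is hypothetical, standing in for a vendor-specific camera SDK.

```python
import socket
import numpy as np

DEFECT_THRESHOLD = 0.02            # hypothetical: max fraction of dark pixels
PLC_ADDR = ("192.168.0.10", 5000)  # hypothetical automation-controller endpoint

def capture_frame():
    """Stand-in for the on-camera sensor readout (here: random data)."""
    return np.random.default_rng().uniform(0.0, 1.0, size=(480, 640))

def inspect(frame):
    """On-board analysis: flag the part if too many pixels are dark."""
    dark_fraction = float((frame < 0.1).mean())
    return dark_fraction <= DEFECT_THRESHOLD

def report(passed):
    """Send only the pass/fail verdict to the automation equipment."""
    with socket.create_connection(PLC_ADDR, timeout=1.0) as conn:
        conn.sendall(b"PASS\n" if passed else b"FAIL\n")

if __name__ == "__main__":
    frame = capture_frame()    # the image never leaves the camera housing
    report(inspect(frame))     # only the result is sent downstream
```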
[More to come ...]