# Depth sensor could bring Kinect games outdoors

A new imaging technology could address a major drawback of depth-sensing cameras such as Microsoft's Kinect controller: their inability to work in bright light, especially sunlight. The key is to gather only the bits of light the camera actually needs.

The researchers who developed the new imaging system created a mathematical model that allows the camera and its light source to work together efficiently, eliminating extraneous light, or noise, that would otherwise wash out the signals needed to detect a scene's contours.

"We have a way of choosing the light rays we want to capture and only those rays," says Srinivasa Narasimhan, associate professor of robotics at Carnegie Mellon University. "We don't need new image-processing algorithms, and we don't need extra processing to eliminate the noise, because we don't collect the noise. This is all done by the sensor."

One prototype based on this model synchronizes a laser projector with a common rolling-shutter camera (the type of camera used in most smartphones) so that the camera detects light only from points being illuminated by the laser as it scans across the scene. This not only makes it possible for the camera to work under extremely bright light or amid highly reflected or diffused light (it can capture the shape of a lightbulb that has been turned on, for instance, and see through smoke), but also makes it extremely energy efficient.

This combination of features could make the imaging technology suitable for many applications, including medical imaging, inspection of shiny parts, and sensing for robots used to explore the moon and planets. It also could be incorporated readily into most smartphones.

Depth cameras work by projecting a pattern of dots or lines over a scene. From how these patterns are deformed, or from how much time light takes to reflect back to the camera, it is possible to calculate the 3D contours of the scene. The problem is that these devices use compact projectors that operate at low power, so their faint patterns are washed out and undetectable when the camera also captures ambient light from the scene.

But as a projector scans a laser across the scene, the spots illuminated by the laser beam are brighter, if only briefly, notes Kyros Kutulakos, a professor of computer science at the University of Toronto. "Even though we're not sending a huge amount of photons, at short time scales, we're sending a lot more energy to that spot than the energy sent by the sun," he explains. The trick is to record only the light from that spot as it is illuminated, rather than trying to pick out the spot from the entire bright scene.

In the prototype using a rolling-shutter camera, this is accomplished by synchronizing the projector with the camera so that as the laser scans a particular plane, the camera accepts light only from that plane. Alternatively, if other camera hardware is used, the mathematical framework developed by the team can compute energy-efficient codes that optimize the amount of energy that reaches the camera.

In addition to enabling Kinect-like devices to be used for playing videogames outdoors, the new approach could be used for medical imaging, such as imaging skin structures that otherwise would be obscured when light diffuses as it enters the skin. Likewise, the system can see through smoke despite the light scattering that usually makes it impenetrable to cameras.
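To make the synchronization idea concrete, here is a minimal sketch (our illustration, not the researchers' code) of why locking the camera's rolling shutter to the laser's sweep suppresses ambient light: each row of pixels stays open only for the brief moment its matching projector scanline is lit, so sunlight is integrated for a tiny fraction of the frame time while the laser's full dwell energy is still captured. All timing and photon-flux numbers below are invented for illustration, not measurements from the prototype.

```python
# Toy comparison: synchronized rolling-shutter exposure vs. a global
# exposure, for one pixel lit by a scanning laser under bright ambient
# light. All constants are assumed values, chosen only for illustration.

FRAME_TIME = 1 / 30               # one frame at an assumed 30 fps, seconds
NUM_ROWS = 480                    # camera rows = projector scanlines (assumed)
ROW_EXPOSURE = FRAME_TIME / NUM_ROWS   # each row stays open this long

AMBIENT_FLUX = 1_000.0            # photons/s from sunlight (made up)
LASER_FLUX = 50_000.0             # photons/s while the laser dwells on a point (made up)


def exposure(synchronized):
    """Return (signal, ambient) photon counts for one pixel per frame."""
    # The laser dwells on the pixel for one row time in either mode,
    # so the useful signal is the same.
    signal = LASER_FLUX * ROW_EXPOSURE
    if synchronized:
        # Row is open only while its matching laser line sweeps past,
        # so ambient light is collected for just one row time.
        ambient = AMBIENT_FLUX * ROW_EXPOSURE
    else:
        # Global exposure: ambient light accumulates for the whole
        # frame and swamps the faint laser pattern.
        ambient = AMBIENT_FLUX * FRAME_TIME
    return signal, ambient


for mode in (False, True):
    s, a = exposure(mode)
    print(f"synchronized={mode}: signal={s:.2f}, ambient={a:.2f}, "
          f"signal/ambient={s / a:.2f}")
```

With these assumed numbers, synchronization improves the signal-to-ambient ratio by a factor equal to the number of rows (480 here), which is why the faint laser pattern can survive even in direct sunlight.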
Manufacturers also could use the system to look for anomalies in shiny or mirrored components.

William "Red" Whittaker, a robotics professor at Carnegie Mellon, says the system offers a number of advantages for extraterrestrial robots. Because depth cameras actively illuminate scenes, they are suitable for use in darkness, such as inside craters, he notes. And in polar regions of the moon, where the sun is always at a low angle, a vision system that can eliminate glare is essential.

"Low-power sensing is very important," Whittaker says, noting that a robot's sensors expend a relatively large amount of energy because they are always on. "Every watt matters in a space mission."

Narasimhan says depth cameras that can operate outdoors also could be useful in automotive applications, such as maintaining spacing between self-driving cars that are "platooned," following each other at close intervals.

The National Science Foundation, the U.S. Army Research Laboratory, and the Natural Sciences and Engineering Research Council of Canada funded the work. The researchers will present their findings at SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles.