# Computer gets smarter by looking at online pics 24-7

Carnegie Mellon University
Posted by Byron Spice, Carnegie Mellon, on November 26, 2013

A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day, searching the internet for images and doing its best to understand them on its own. As NEIL's visual database grows, the program gains common sense on a massive scale.

NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes, and to recognize attributes such as colors, lighting, and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.

But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying: that cars often are found on roads, that buildings tend to be vertical, and that ducks look sort of like geese. Based on text references, it might seem that the color associated with sheep is black, but people, and NEIL, nevertheless know that sheep typically are white.

"Images are the best way to learn visual properties," says Abhinav Gupta, assistant research professor in Carnegie Mellon University's Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."

A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.

One motivation for the NEIL project is to create the world's largest visual structured knowledge base, where objects, scenes, actions, attributes, and contextual relationships are labeled and catalogued.
"What we have learned in the last 5 to 10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta says.

Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast (Facebook alone holds more than 200 billion images) that the only hope to analyze it all is to teach computers to do it largely by themselves.

Abhinav Shrivastava, a PhD student in robotics, says NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that "pink" is just the name of a singer rather than a color.

"People don't always know how or what to teach computers," he says. "But humans are good at telling computers when they are wrong."

People also tell NEIL what categories of objects, scenes, and so on to search and analyze. But sometimes what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for "apple" might return images of fruit as well as laptop computers. But Gupta and his team had no idea that a search for F-18 would identify not only images of a fighter jet but also of F18-class catamarans.

As its search proceeds, NEIL develops subcategories of objects: tricycles can be for kids or for adults, and can be motorized; cars come in a variety of brands and models. And it begins to notice associations: that zebras tend to be found in savannahs, for instance, and that stock trading floors typically are crowded.

NEIL is computationally intensive, the research team notes. The program runs on two clusters of computers that include 200 processing cores. The Office of Naval Research and Google Inc. support the project.

The research team will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.

Source: Carnegie Mellon University
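The article does not describe NEIL's internals, but the kind of association it reports (for example, "zebras tend to be found in savannahs") can be illustrated with a simple co-occurrence count over labeled images: if one label usually appears alongside another, a conditional-probability threshold surfaces the pair as an association. The labels, data, and threshold below are illustrative assumptions for a minimal sketch, not NEIL's actual method.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets (object and scene labels), standing in
# for the output of an image-labeling system.
images = [
    {"zebra", "savannah"},
    {"zebra", "savannah", "acacia"},
    {"zebra", "zoo"},
    {"car", "road"},
    {"car", "road", "building"},
    {"car", "parking_lot"},
]

# Count how often each label appears, and how often each pair co-occurs.
pair_counts = Counter()
label_counts = Counter()
for labels in images:
    label_counts.update(labels)
    pair_counts.update(frozenset(p) for p in combinations(sorted(labels), 2))

def associations(threshold=0.6):
    """Return (a, b, p) triples where p = P(b appears | a appears) >= threshold."""
    found = []
    for pair, n in pair_counts.items():
        a, b = sorted(pair)
        if n / label_counts[a] >= threshold:
            found.append((a, b, n / label_counts[a]))
        if n / label_counts[b] >= threshold:
            found.append((b, a, n / label_counts[b]))
    return found

for a, b, p in associations():
    print(f"{a} -> {b}: P = {p:.2f}")
```

On this toy data the sketch recovers associations such as "zebra -> savannah" and "road -> car"; a real system would also have to discount spurious pairs that co-occur only a handful of times.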