Now your face could be transformed instantly into a more memorable one, without the need for an expensive makeover, thanks to an algorithm developed by researchers in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).
The algorithm, which makes subtle changes to various points on the face to make it more memorable without changing a person's overall appearance, was unveiled earlier this month at the International Conference on Computer Vision in Sydney.
which people will actually remember a face," says lead author Aditya Khosla, a graduate student in the Computer Vision group within CSAIL. "We do not want to take your face and replace it with the most memorable one in our database; we want your face to still look like you."
More memorable, or less
The system could ultimately be used in a smartphone app to allow people to modify a digital image of their face before uploading it to their social networking pages.
It could also be used for job applications, to create a digital version of an applicant's face that will more readily stick in the minds of potential employers, says Khosla, who developed the algorithm with CSAIL principal research scientist Aude Oliva, the senior author of the paper; Antonio Torralba, an associate professor of electrical engineering and computer science; and graduate student Wilma Bainbridge. Conversely, it could also be used to make faces appear less memorable.
To develop the memorability algorithm, the team first fed the software a database of more than 2,000 images. In this way, the software was able to analyze the information to detect subtle trends in the features of these faces that made them more or less memorable to people.
The researchers then programmed the algorithm with a set of objectives: to make the face as memorable as possible, but without distorting it so much that it no longer looked like the same person, which would fail to meet the algorithm's objectives. When the system has a new face to modify, it first takes the image and generates many copies of it, each containing slight modifications.
The algorithm then analyzes how well each of these samples meets its objectives. Once the algorithm finds a sample that succeeds in making the face look more memorable without significantly altering the person's appearance, it makes yet more copies of this new image, each containing further alterations. It then keeps repeating this process until it finds a version that best meets its objectives.
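The generate-score-keep loop described above is, in essence, a hill-climbing search. The sketch below is a toy illustration of that idea, not the paper's actual method: `toy_score` is an invented stand-in for the real objectives (a learned memorability score plus an identity-preservation penalty), and all names and parameters here are illustrative assumptions.

```python
import random

def hill_climb(face, score, n_copies=20, step=0.05, max_rounds=200, seed=0):
    """Repeatedly perturb a feature vector, keeping the best-scoring copy.

    Mirrors the loop in the article: make many slightly modified copies,
    score each against the objectives, keep the winner, and repeat until
    no copy improves on the current best.
    """
    rng = random.Random(seed)
    current, best = list(face), score(face)
    for _ in range(max_rounds):
        # Many copies, each with tiny random modifications.
        copies = [[x + rng.uniform(-step, step) for x in current]
                  for _ in range(n_copies)]
        candidate = max(copies, key=score)
        if score(candidate) <= best:  # no copy meets the objectives better
            break
        current, best = candidate, score(candidate)
    return current, best

# Stand-in objective: reward moving features toward a "memorable" target
# while penalizing drift away from the original appearance.
START = [0.0, 0.2, -0.1]

def toy_score(v, target=1.0, lam=0.5):
    memorability = -sum((x - target) ** 2 for x in v)
    identity_penalty = lam * sum((x - s) ** 2 for x, s in zip(v, START))
    return memorability - identity_penalty

tuned, final_score = hill_climb(START, toy_score)
```

The stopping rule matters: because each accepted candidate must strictly improve the combined objective, the identity penalty keeps the search from wandering into a face that no longer resembles the original.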
When they tested these images on a group of volunteers, they found that the algorithm succeeded in making the faces more or less memorable, as required, in around 75 percent of cases.
"We all wish to use a photo that makes us more visible to our audience," says Aleix Martinez, an associate professor of electrical and computer engineering at Ohio State University. "Now Oliva and her team have developed a computational algorithm that can do this for us," he says.
The research was funded by grants from Xerox, Google, Facebook, and the Office of Naval Research.
#Study finds piece-by-piece approach to emissions policies can be effective
Discussions on curbing climate change tend to focus on comprehensive, emissions-focused measures: a cap on total emissions, or a tax on all carbon emissions. But a new study by researchers at MIT finds that a "segmental" approach, involving separate targeting of energy choices and energy consumption through regulations or incentives, can play an important role in achieving emission reductions. The new study, by assistant professor of engineering systems Jessika Trancik, is being published this week in the journal Environmental Science and Technology.
Trancik is joined on the paper by three MIT graduate students: Michael Chang and Christina Karapataki of the Engineering Systems Division and Leah Stokes of the Department of Urban Studies and Planning. "A policy that focuses on controlling carbon emissions is a different kind of policy than one that focuses on the underlying demand-side and supply-side technology drivers," Trancik says. And while those calling for sweeping, emission-focused policies have often faced uphill battles in regions, states, and nations, a wide variety of segmental policies have been adopted by such jurisdictions, making it important to understand the effectiveness of such approaches, she says. "There are some things that these segmental policies do very well," Trancik says, in particular dealing with the inertia associated with existing infrastructure. "It will be expensive to retire new power plants early, and so with each power plant built we are committing to emissions not just today, but in future years," she says. "Compliance with a carbon-focused policy can come either from changes in energy consumption levels or technological change, and a set of segmental policies can ensure that both types of change happen concurrently," Trancik says. Comprehensive, carbon-limiting policies would not allow that kind of targeted approach, she adds.
The issue is urgent, Trancik says: The paper shows that, when accounting for infrastructural inertia, the carbon intensity of new plants built over the coming decade (that is, the amount of carbon dioxide emitted per megawatt-hour of power produced) will need to be reduced by 50 percent in order to meet the emissions-reduction commitments that have been made by most nations. Many nations are generally moving in the direction of segmental policies, she says. One approach would involve capping carbon dioxide emissions, but then using these segmental policies to address particular areas of concern. A global agreement on carbon emissions would be most effective at reducing the risks of climate change, she notes.
An added benefit, Trancik notes, is that discussing segmental approaches is likely to lead to a greater understanding of where emissions reductions might come from, which may eventually make it easier to reach an agreement on limiting carbon emissions directly. Decisions made over the next decade will have long-lasting effects on overall emissions, and raise the question of how policies targeting energy supply and energy efficiency work together. Doug Arent, a research scientist at the National Renewable Energy Laboratory in Golden, Colo., who was not involved in this work, says this study "evaluates a number of examples of policies that contribute to a portfolio effort of reducing greenhouse gas emissions, but from multiple different approaches." This line of research is valuable, he says, given already-existing policies and multiple approaches, and continued challenges with global agreements.
#3-D images with only one photon per pixel
Lidar rangefinders are common tools in surveying. In this week's issue of the journal Science, researchers from MIT's Research Laboratory of Electronics (RLE) describe a new lidar-like system that can gauge depth from only a single detected photon per pixel.
Since a conventional lidar system would require about 100 times as many photons to make depth estimates of similar accuracy under comparable conditions, the new system could yield substantial savings in energy and time, and it works much more reliably than lidar in bright sunlight, when ambient light can yield misleading readings.
All the hardware it requires can already be found in commercial lidar systems; the new system just deploys that hardware in a manner more in tune with the physics of low light-level imaging and natural scenes.
Count the photons
As Ahmed Kirmani, a graduate student in MIT's Department of Electrical Engineering and Computer Science and lead author on the new paper, explains, the very idea of forming an image with only a single photon detected at each pixel location is counterintuitive. "The way a camera senses images is through different numbers of detected photons at different pixels," Kirmani says.
"Darker regions would have fewer photons, and therefore accumulate less charge in the detector, while brighter regions would reflect more light and lead to more detected photons and more charge accumulation." In a conventional lidar system, the laser fires pulses of light toward a sequence of discrete positions, which collectively form a grid; each location in the grid corresponds to a pixel in the final image.
The technique, known as raster scanning, is how old cathode-ray-tube televisions produced images, illuminating one phosphor dot on the screen at a time. The laser will generally fire a large number of times at each grid position, until it gets consistent enough measurements between the times at which pulses of light are emitted and reflected photons are detected that it can rule out the misleading signals produced by stray photons.
The MIT researchers' system, by contrast, fires repeated bursts of light from each position in the grid only until it detects a single reflected photon; then it moves on to the next position. A highly reflective surface, one that would show up as light rather than dark in a conventional image, should yield a detected photon after fewer bursts than a less-reflective surface would. So the MIT researchers' system produces an initial, provisional map of the scene based simply on the number of times the laser has to fire to get a photon back.
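The link between reflectivity and burst count can be sketched with a toy simulation. This illustrates only the statistical intuition, not the actual RLE hardware model: assuming each burst independently yields a detection with probability equal to the patch's reflectivity, the burst count is geometrically distributed with mean 1/p, so brighter patches report back sooner.

```python
import random

def bursts_until_photon(reflectivity, rng):
    """Fire bursts until one reflected photon is detected.

    Simplification: each burst independently yields a detection with
    probability equal to the patch's reflectivity, so the burst count
    follows a geometric distribution with mean 1/reflectivity.
    """
    n = 1
    while rng.random() > reflectivity:
        n += 1
    return n

rng = random.Random(42)
trials = 2000
# A bright patch (p = 0.8) needs about 1.25 bursts on average, while a
# dark patch (p = 0.1) needs about 10; the provisional reflectivity map
# is built from exactly this kind of count.
bright_avg = sum(bursts_until_photon(0.8, rng) for _ in range(trials)) / trials
dark_avg = sum(bursts_until_photon(0.1, rng) for _ in range(trials)) / trials
```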
Filtering out noise
The photon registered by the detector could, however, be a stray photodetection generated by background light. Fortunately, such stray detections follow a pattern known in signal processing as Poisson noise. Simply filtering out noise according to the Poisson statistics would produce an image that would probably be intelligible to a human observer, but the researchers' algorithm goes further: It guides the filtering process by assuming that adjacent pixels will more often than not have similar reflective properties.
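A crude stand-in for that neighbor-based filtering is a median filter, shown below. This is not the paper's regularized reconstruction, only an illustration of the same assumption: because adjacent pixels usually share reflective properties, an isolated outlier produced by a stray detection can be suppressed by looking at its neighborhood.

```python
def median_filter(img):
    """Replace each pixel by the median of its 3x3 neighborhood.

    Isolated outliers (stray detections) are suppressed, while regions
    where adjacent pixels genuinely agree are left unchanged.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 neighborhood, clipped at the image border.
            block = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            block.sort()
            out[y][x] = block[len(block) // 2]
    return out

# A flat patch of reflectivity 5 with one spurious "bright" reading.
noisy = [[5] * 5 for _ in range(5)]
noisy[2][2] = 50  # stray detection masquerading as a bright pixel
clean = median_filter(noisy)
```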
Researchers in the Optical and Quantum Communications Group, which is led by Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, also contributed to the work.
which is also impressive," says John Howell, a professor of physics at the University of Rochester. "Or it could be that you're interrogating a biological sample, and too much light could damage it," he says, "but other biological systems are the same. There could also be remote-sensing applications where you may want to look at something."
The camera could be used in medical imaging and collision-avoidance detectors for cars, and to improve the accuracy of motion tracking. It is based on "Time of Flight" technology, like that used in Microsoft's recently launched second-generation Kinect device, in which the location of objects is calculated by how long it takes a light signal to reflect off a surface and return to the sensor.
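The core time-of-flight arithmetic is simple: with the speed of light known, half the round-trip time gives the range. A minimal sketch (the constant and function names are mine, not from any camera SDK):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Convert a measured round-trip time into a one-way distance.

    The pulse travels to the surface and back, so the distance to the
    surface is half of (speed of light x elapsed time).
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of 10 nanoseconds corresponds to roughly 1.5 meters.
d = tof_distance(10e-9)
```

The nanosecond timescale in this example is why time-of-flight sensors need very fast strobing and detection hardware.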
"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," says Achuta Kadambi, a graduate student at MIT. "That is because the light that bounces off the transparent object and the background smear into one pixel on the camera."
In a conventional Time of Flight camera, a light signal bounces off the object and returns to strike the pixel. Since the speed of light is known, it is then simple for the camera to calculate the distance the signal has travelled. Instead, the new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled,
says Ramesh Raskar, an associate professor of media arts and sciences and leader of the Camera Culture group within the Media Lab, and Christopher Barsi at MIT and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand. "We use a new method that allows us to encode information in time," Raskar says. "When the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal." The idea is similar to existing techniques that clear blurring in photographs,
says Ayush Bhandari, a graduate student in the Media Lab. "People with shaky hands tend to take blurry photographs with their cellphones."
This allows the team to use inexpensive hardware: off-the-shelf light-emitting diodes (LEDs) can strobe at nanosecond periods.
Conventional cameras see an average of the light arriving at the sensor, much like the human eye, says James Davis, an associate professor of computer science at the University of California at Santa Cruz. In contrast, the researchers in Raskar's laboratory are investigating what happens when they take a camera fast enough to see that some light makes it from the flash back to the camera sooner, and apply sophisticated computation to the resulting data, Davis says. "Normally the computer scientists who could invent the processing on this data cannot build the devices, and the people who can build the devices cannot really do the computation," he says. "This combination of skills and techniques is really unique in the work going on at MIT right now." What's more, the basic technology needed for the team's approach is very similar to that already being shipped in devices such as the new version of Kinect, Davis says. "So it's going to go from expensive to cheap thanks to video games, and that should shorten the time before people start wondering what it can be used for," he says. "And by the time that happens, the MIT group will have a whole toolbox of methods available for people to use to realize those dreams."
#Creating synthetic antibodies
MIT chemical engineers have developed a novel way to generate nanoparticles that can recognize specific molecules, opening up a new approach to building durable sensors for many different compounds. To create these "synthetic antibodies," the researchers used carbon nanotubes: hollow, nanometer-thick cylinders made of carbon that naturally fluoresce
when exposed to laser light. In the past, researchers have exploited this phenomenon to create sensors by coating the nanotubes with molecules,
such as natural antibodies, that bind to a particular target. When the target is encountered, the carbon nanotube fluorescence brightens
or dims. The MIT team found that they could create novel sensors by coating the nanotubes with specifically designed amphiphilic polymers: polymers that are drawn to both oil and water, like soap.
This approach offers a huge array of recognition sites specific to different targets, and could be used to create sensors to monitor diseases such as cancer, inflammation,
or diabetes in living systems. "This new technique gives us an unprecedented ability to recognize any target molecule by screening nanotube-polymer complexes to create synthetic analogs to antibody function," says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and senior author of the study, which appears in the Nov. 24 online edition of Nature Nanotechnology. Lead authors of the paper are recent PhD recipient Jingqing Zhang,
postdoc Markita Landry, and former postdocs Paul Barone and Jong-Ho Kim.
Synthetic antibodies
The new polymer-based sensors offer a synthetic design approach to the production of molecular recognition sites, enabling, among other applications, the detection of a potentially infinite library of targets.
Moreover, this approach can provide a more durable alternative to coating sensors such as carbon nanotubes with actual antibodies,
which can break down inside living cells and tissues. Another family of commonly used recognition molecules are DNA aptamers,
which are short pieces of DNA that interact with specific targets, depending on the aptamer sequence.
However, there are not aptamers specific to many of the molecules that one might want to detect, Strano says.
In the new paper, the researchers describe molecular recognition sites that enable the creation of sensors specific to riboflavin and estradiol (a form of estrogen),
but they are now working on sites for many other types of molecules, including neurotransmitters, carbohydrates, and proteins.
Their approach takes advantage of a phenomenon that occurs when certain types of polymers bind to a carbon nanotube.
These polymers, known as amphiphilic polymers, have both hydrophobic and hydrophilic regions. They are designed and synthesized such that
when the polymers are exposed to carbon nanotubes, the hydrophobic regions latch onto the tubes like anchors
and the hydrophilic regions form a series of loops extending away from the tubes. These loops form a new layer surrounding the nanotube, known as a corona.
The MIT researchers found that the loops within the corona are arranged very precisely along the tube,
and alter the carbon nanotube's fluorescence.
Molecular interactions
What is unique about this approach, the researchers say, is that the recognition site emerges from the interaction between the nanotube and the polymer; it does not exist in the polymer before it attaches to the nanotube. "The idea is that a chemist could not look at the polymer and predict what it will bind, because the polymer itself cannot selectively recognize these molecules. It has to adsorb onto the nanotube and then, by having certain sections of the polymer exposed, it forms a binding site," Strano says.
Laurent Cognet, a senior scientist at the Institute of Optics at the University of Bordeaux, says this approach should prove useful for many applications requiring reliable detection of specific molecules. "This new concept, being based on the molecular recognition from the adsorbed phase itself, does not require the use of antibodies or equivalent molecules to achieve specific molecule recognitions, and thus provides a promising alternative route for 'on demand' molecular sensing,"
says Cognet, who was not part of the research team. The researchers used an automated, robot-assisted trial-and-error procedure to test about 30 polymer-coated nanotubes against three dozen possible targets, yielding three hits.
They are now working on a way to predict such polymer-nanotube interactions based on the structure of the corona layers,
using data generated from a new type of microscope that Landry built to image the interactions between the carbon nanotube coronas
and their targets. "What's happening to the polymer and the corona phase has been a bit of a mystery, so this is a step forward in getting more data to address the problem of how to design a target for a specific molecule," Landry says. The research was funded by the National Science Foundation and the Army Research Office through MIT's Institute for Soldier Nanotechnologies.
#Droplets break a theoretical time barrier on bouncing
Those who study hydrophobic materials, water-shedding surfaces such as those found in nature, have long sought ways to reduce how long bouncing droplets stay in contact with a surface.
Their finding is reported in a paper in the journal Nature co-authored by Kripa Varanasi, the Doherty Associate Professor of Mechanical Engineering at MIT, along with James Bird, a former MIT postdoc who is now an assistant professor of mechanical engineering at Boston University, and former MIT postdoc Rajeev Dhiman.
The contact time controls the exchange of mass and energy between the drop and the surface, Varanasi says: "If you can get the drops to bounce faster," that exchange can be reduced. The time a droplet takes spreading out on impact, then pulling back inward due to surface tension and bouncing away, depends on the time period of oscillations in a vibrating drop, also known as the Rayleigh time.
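For reference, the Rayleigh contact time mentioned here is conventionally written as an inertial-capillary timescale (standard notation, not taken from the paper):

```latex
\tau_R \sim \sqrt{\frac{\rho R^{3}}{\gamma}}
```

where $\rho$ is the liquid density, $R$ the droplet radius, and $\gamma$ the surface tension. The reported finding is that suitably textured surfaces can push contact times below this nominal limit.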
The findings of Varanasi's team may also have implications for ecology: The researchers found that some butterfly wings naturally produce the same effect. Similarly, the veins of nasturtium leaves, unlike those of most leaves, are on top, where they serve to break up droplets that land there. In the team's tests, droplets bounced off butterfly wings and nasturtium leaves faster than they bounced off lotus leaves, which are often considered the "gold standard" of nonwetting surfaces.
Howard Stone, a professor of mechanical and aerospace engineering at Princeton University who was not involved in this work, notes possible applications. For example, the turbine blades in electric power plants become less efficient if water builds up on their surfaces; if the blades stay dry longer, that loss is reduced.
The research received support from the Defense Advanced Research Projects Agency, the MIT Energy Initiative, and the National Science Foundation.
#Biologists ID new cancer weakness
About half of all cancer patients have a mutation in a gene called p53, which allows tumors to survive and continue growing even after chemotherapy severely damages their DNA.
A new study from MIT biologists has found that tumor cells with mutated p53 can be made much more vulnerable to chemotherapy by blocking another gene called MK2.
In a study of mice, tumors lacking both p53 and MK2 shrank dramatically when treated with the drug cisplatin, while tumors with functional MK2 kept growing after treatment. The findings suggest that giving cancer patients a combination of a DNA-damaging drug and an MK2 inhibitor could be very effective, says Michael Yaffe, the David H. Koch Professor in Science and senior author of a paper describing the research in the Nov. 14 issue of the journal Cell Reports.
Several drugs that inhibit MK2 are now in clinical trials to treat inflammatory diseases such as arthritis and colitis, but the drugs have never been tested as possible cancer treatments. "What our study really says is that these drugs could have an entirely new second life, in combination with chemotherapy," says Yaffe, who is a member of MIT's Koch Institute for Integrative Cancer Research. "We're very much hoping it will go into clinical trials for cancer." Sandra Morandell, a postdoc at the Koch Institute, is the paper's lead author.
To kill a tumor
p53 is a tumor suppressor protein that controls cell division. Before cell division begins, p53 checks the cell's DNA and initiates repair if necessary. If DNA damage is too extensive, p53 forces the cell to undergo programmed cell death, or apoptosis. Tumors that lack p53 can avoid this fate; usually, p53 is the main driver of cell death in response to DNA-damaging chemotherapy.
"Our data suggested if you block the MK2 pathway, tumor cells wouldn't recognize that they had DNA damage, and they would keep trying to divide despite having DNA damage, and they would end up committing suicide," Yaffe says.
The researchers wanted to see if this would hold true in tumors in living animals, as well as in cells grown in a lab dish. To do that, they used a strain of mice that are genetically programmed to develop non-small-cell lung tumors, further engineered so that the MK2 gene could be turned on or off, allowing them to study tumors with and without MK2 in the same animal. This new approach allows them, for the first time, to compare different types of tumors in the same mice, where all genetic factors are identical except for MK2 expression.
It is a new and potentially useful approach for others to use, says Titia de Lange, a professor of cell biology and genetics at Rockefeller University, who was not part of the research team. Using these mice, the researchers found that before treatment, tumors lacking both MK2 and p53 grow faster than tumors that have MK2. This suggests that treating tumors with an MK2 inhibitor alone would actually do more harm than good, possibly increasing the tumor's growth rate by taking the brake off the cell cycle.
However, when these tumors are treated with cisplatin, the tumors lacking MK2 shrink dramatically, while those with MK2 continue growing.
A nonobvious combination
The potential combination of cisplatin and MK2 inhibitors is unlike other chemotherapy combinations that have been approved by the Food and Drug Administration, which consist of pairs of drugs that each show benefit on their own. "What we found is a combination that you would never have arrived at otherwise," Yaffe says. While this study focused on non-small-cell lung tumors, the researchers have gotten similar results in cancer cells grown in the lab from bone, cervical, and ovarian tumors. They are now studying mouse models of colon and ovarian cancer. The research was funded by the Austrian Science Fund, the National Institutes of Health, Janssen Pharmaceuticals Inc., the Koch Institute, MIT's Center for Environmental Health Sciences, the Volkswagenstiftung, and the Deutsche Forschungsgemeinschaft.
When an earthquake and tsunami struck Japan's Fukushima nuclear power plant in 2011, knocking out emergency power supplies, crews sprayed seawater on the reactors to cool them, to no avail. One possible reason: the water was unable to wet and cool the overheated surfaces, says Kripa Varanasi, the Doherty Associate Professor of Ocean Utilization in MIT's Department of Mechanical Engineering and the lead author of the study.
who is now an assistant professor of mechanical engineering at Boston University. Common knowledge suggests that the closely spaced posts would provide greater surface area. To decouple those two effects, the researchers coated the surface featuring spaced-out microscale posts with nanoscale particles. Under the same conditions, the droplets did not wet the surfaces of samples with either the microscale posts or the nanoscale texture.
In addition to nuclear safety systems, this work has important implications for systems such as steam generators, industrial boilers, and fire suppression, as well as for processes such as spray cooling of hot metal. One application now being considered by Varanasi and his colleagues is electronics cooling. "The heat fluxes in electronics cooling are skyrocketing," Varanasi says. It might be a job for efficient spray cooling, "if we can figure out how to fit a system into the small space inside electronic devices."
A professor of mechanical, aerospace, and nuclear engineering at Rensselaer Polytechnic Institute who was not involved in this research says, "Extending the surface temperature at which this phenomenon occurs is a challenging task that has been a century-long research effort." The research was supported by a Young Faculty Award from the Defense Advanced Research Projects Agency and the MIT Energy Initiative.
Such particles could make it more feasible to design lab-on-a-chip devices, which hold potential as portable diagnostic devices for cancer and other diseases.
These devices consist of microfluidic channels engraved on tiny chips, but current versions usually require a great deal of extra instrumentation attached to the chip,
limiting their portability. Much of that extra instrumentation is needed to keep the particles flowing single file through the center of the channel,
where they can be analyzed. This can be done by applying a magnetic or electric field, or by flowing two streams of liquid along the outer edges of the channel, forcing the particles to stay in the center.
The new MIT approach, described in Nature Communications, requires no external forces and takes advantage of hydrodynamic principles that can be exploited simply by altering the shapes of the particles.
Patrick Doyle, the Singapore Research Professor of Chemical Engineering at MIT, is the senior author of the paper.
The work builds on previous research showing that when a particle is confined in a narrow channel,
it has strong hydrodynamic interactions with both the confining walls and any neighboring particles. These interactions can be harnessed to position the particles. As a particle approaches the wall, the perturbation it creates in the fluid is reflected back by the wall,
just as waves in a pool reflect from its wall. This reflection forces the particle to flip its orientation and move toward the center of the channel.
Slightly asymmetrical particles will overshoot the center and move toward the other wall, then come back toward the center again until they gradually achieve a straight path.
Very asymmetrical particles will approach the center without crossing it, but very slowly. But with just the right amount of asymmetry, a particle will move directly to the centerline in the shortest possible time. "Now that we understand how the asymmetry plays a role," says Patrick Tabeling, a professor at the École Supérieure de Physique et de Chimie Industrielles in Paris.
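The three behaviors described above mirror a damped oscillator: slight asymmetry acts like underdamping (overshoot and oscillation), strong asymmetry like overdamping (slow creep toward the centerline), and the optimal shape like critical damping. The toy model below illustrates only that analogy; it is not the paper's hydrodynamic model, and the damping parameters are invented for illustration.

```python
def steps_to_settle(zeta, omega=1.0, x0=1.0, dt=0.001, tol=0.02, steps=200_000):
    """Euler-integrate x'' + 2*zeta*omega*x' + omega^2*x = 0 and return
    the last step at which the trajectory is still more than `tol` away
    from the centerline (x = 0)."""
    x, v = x0, 0.0
    last_outside = 0
    for i in range(steps):
        a = -2.0 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        if abs(x) > tol:
            last_outside = i
    return last_outside

underdamped = steps_to_settle(0.2)  # overshoots the center and oscillates
critical = steps_to_settle(1.0)     # reaches the centerline fastest
overdamped = steps_to_settle(4.0)   # approaches slowly, without crossing
```

Running the three cases shows the critically damped trajectory settling well before either extreme, the same qualitative ranking the researchers report for particle asymmetry.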
In 2006, Doyle's lab developed a way to create huge batches of identical particles made of hydrogel, a spongy polymer. To make the particles, the researchers shine ultraviolet light through a mask onto a stream of flowing building blocks, or oligomers. Wherever the light strikes, solid polymeric particles are formed in the shape of the mask, in a process called photopolymerization.
During this process, the researchers can also load a fluorescent probe such as an antibody at one end of the dumbbell.
This type of particle can be useful for diagnosing cancer and other diseases, following customization to detect proteins or DNA sequences in blood samples that can be signs of disease. Using a cytometer, such particles can be analyzed, and they "also provide a new toolkit from which one can develop other novel bioassays," Doyle says.
The research was funded in part by the Institute for Collaborative Biotechnologies through the U.S. Army Research Office.