Now MIT researchers have developed an algorithm for bounding that they've successfully implemented in a robotic cheetah, a sleek, four-legged assemblage of gears, batteries, and electric motors.
The key to the bounding algorithm is in programming each of the robot's legs to exert a certain amount of force in the split second during which it hits the ground.
and graduate student Meng Yee Chuah will present details of the bounding algorithm this month at the IEEE/RSJ International Conference on Intelligent Robots and Systems in Chicago.
Kim and his colleagues developed an algorithm that determines the amount of force a leg should exert in the short period of each cycle that it spends on the ground.
In experiments, the team ran the robot at progressively smaller duty cycles, finding that, following the algorithm's force prescriptions, the robot was able to run at higher speeds without falling.
Kim says the team's algorithm enables precise control over the forces a robot can exert while running.
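The underlying force balance is easy to sketch: averaged over a full stride, the legs' vertical forces must support the robot's weight, so a smaller duty cycle demands a proportionally larger force from each stance leg. A minimal Python sketch of that relationship (illustrative numbers, not the MIT controller or the cheetah's actual specs):

```python
# Minimal sketch of the force balance behind force-prescribed bounding
# (illustrative numbers, not the MIT controller or the cheetah's specs).
# Averaged over a stride, leg forces must support body weight, so shrinking
# the duty cycle raises the force each stance leg must exert.
M, G = 30.0, 9.81   # assumed mass (kg) and gravitational acceleration (m/s^2)

def required_stance_force(duty_cycle, legs_in_stance=2):
    """Average vertical force per grounded leg during stance.

    duty_cycle: fraction of the stride period a leg spends on the ground.
    legs_in_stance: legs sharing the load while grounded (2 for bounding).
    """
    if not 0.0 < duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in (0, 1]")
    # Impulse balance over one stride: F * duty_cycle * legs = M * g
    return M * G / (duty_cycle * legs_in_stance)

for duty in (0.6, 0.4, 0.2):
    print(f"duty {duty:.1f} -> {required_stance_force(duty):6.1f} N per leg")
```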
has developed a platform (hardware, software, and cloud services) that lets manufacturers pick and choose various components
and application-specific software to add to commercial drones for multiple purposes. The key component is the startup's Linux-based autopilot device,
a small red box that is installed into all of a client's drones. "This is responsible for flying the vehicle in a safe, reliable manner,
and acts as a hub for the components, so it can collect all that data and display that info to a user," says Downey, Airware's CEO,
who researched and built drones throughout his time at MIT. To customize the drones, customers use software to select third-party drone vehicles and components, such as sensors, cameras, actuators,
and communication devices; configure settings; and apply their configuration to a fleet. Other software helps them plan
and monitor missions in real time (and make midflight adjustments), and collects and displays data. Airware then pushes all data to the cloud,
where it is aggregated and analyzed, and made available to designated users. If a company decides to use a surveillance drone for crop management, for instance,
it can easily add software that stitches together different images to determine which areas of a field are overwatered
or underwatered. "They don't have to know the flight algorithms or underlying hardware; they just need to connect their software or piece of hardware to the platform,"
Downey says. "The entire industry can leverage that." Clients have trialed Airware's platform over the past year, including researchers at MIT,
who are demonstrating delivery of vaccines in Africa. Delta Drone in France is using the platform for open-air mining operations,
search-and-rescue missions, and agricultural applications. Another UAV maker, Cyber Technology in Australia, is using the platform for drones responding to car crashes and other disasters,
and inspecting offshore oil rigs. Now, with its most recent $25 million funding round, Airware plans to launch the platform for general adoption later this year,
viewing companies that monitor crops and infrastructure with drones that require specific cameras and sensors as potential early customers.
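As an illustration of the kind of application-specific analysis such customers would bolt onto the platform, here is a hypothetical sketch of the overwatered/underwatered classification described above; the moisture index, thresholds, and names are invented for the example and are not Airware's API:

```python
# Hypothetical sketch of a third-party crop analysis: classify a stitched
# field mosaic into over- and underwatered zones. All names and thresholds
# here are invented for illustration.
import numpy as np

def water_zones(moisture, low=0.2, high=0.8):
    """Boolean masks of under- and overwatered pixels in a moisture map."""
    return moisture < low, moisture > high

# Toy mosaic: a normalized per-pixel moisture index, standing in for data
# derived from stitched aerial images.
rng = np.random.default_rng(0)
mosaic = rng.random((40, 60))
under, over = water_zones(mosaic)
print(f"{under.mean():.0%} underwatered, {over.mean():.0%} overwatered")
```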
when Downey, who studied electrical engineering and computer science, organized an MIT student team including Airware's chief technology officer, Buddy Michini '07, SM '09,
even though you could change the software, Downey says. A five-year stretch at Boeing, as an engineer for the U.S. military's A160 Hummingbird UAV and as a commercial pilot, put Downey in contact with drone manufacturers, who,
he found, were still using black boxes or open-source designs. "They were basically facing the same challenges we faced as undergrads at MIT,"
Downey says. For him, the development of a standard operating system for drones is analogous to Intel processors and Microsoft DOS paving the way for personal computers in the 1980s.
Before those components became available, hobbyists built computers using software that didn't work with different computers.
At the same time, powerful mainframes were only available to a select few and still suffered software-incompatibility issues.
Then came Intel processors and DOS. Suddenly engineers could build computers around the standard processor and create software on the operating system,
without needing to know details of the underlying hardware. "We're doing the same thing for the drone space,"
Downey says. "There are 600 companies building differing versions of drone hardware. We think they need the Intel processor of the drones,
if you will, and that operating-system-level software component, too, like the DOS for drones."
The benefits are far-reaching, Downey says: "Drone companies, for instance, want to build drones and tailor them for different applications without having to build everything from scratch,"
he says. But companies developing cameras, sensors, and communication links for drones also stand to benefit,
he adds, as their components will need only to be compatible with a single platform. Additionally, it could help the Federal Aviation Administration (FAA) better assess the reliability of drones;
Congress recently tasked the agency with compiling UAV rules and regulations by 2015. This could also help promote commercial drone use in the United States: when every drone has different software and electronics, it's good for the FAA if all of them have reliable and common hardware and software, he says.
"We think it's valuable for everybody."
#Manual control
When you imagine the future of gesture-control interfaces, you might think of the popular science-fiction films Minority Report (2002) or Iron Man (2008).
or wireless gloves to seamlessly scroll through and manipulate visual data on a wall-sized, panoramic screen.
and a collaborative-conferencing system called Mezzanine that allows multiple users to simultaneously share and control digital content across multiple screens,
from any device, using gesture control. Overall, the major benefit in such a system lies in boosting productivity during meetings,
rather than checking email surreptitiously under the table. "That can be an electrifying force for the enterprise."
Mezzanine surrounds a conference room with multiple screens, as well as the "brains" of the system (a small server) that controls and syncs everything.
Several Wii-like wands, with six degrees of freedom, allow users to manipulate content such as text, photos, videos, maps, charts, spreadsheets,
and PDFs, depending on certain gestures they make with the wand. That system is built on g-speak,
a type of operating system, or so-called "spatial operating environment," used by developers to create their own programs that run like Mezzanine.
G-speak programs run in a distributed way across multiple machines and allow concurrent interactions for multiple people,
and IBM, as well as government agencies and academic institutions, such as the Harvard University Graduate School of Design. Architects and real-estate firms are also using the system for structural design.
#Putting pixels in the room
G-speak has its roots in a 1999 MIT Media Lab project co-invented by Underkoffler in Professor Hiroshi Ishii's Tangible Media Group, called "Luminous Room,"
which enabled all surfaces to hold data that could be manipulated with gestures. "It literally put pixels in the room with you,"
That meant the team could make any projected surface a veritable computer screen, and the data could interact with,
and be controlled by, physical objects. They also assigned pixels three-dimensional coordinates. Imagine, for example, if you sat down in a chair at a table,
and tried to describe where the front, left corner of that table was located in physical space. "You say that corner is this far off the floor, this far to the right of my chair,
and this much in front of me, among other things," Underkoffler explains. "We started doing that with pixels."
"And the pixels surrounded the model," Underkoffler says. This provided three-dimensional spatial information, from which the program cast accurate digital shadows from the models onto the table.
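The idea of giving pixels room coordinates is easy to make concrete. A tiny sketch (illustrative frame and numbers, not g-speak's actual API):

```python
# Sketch of "pixels with room coordinates": give a pixel a 3-D position in
# the room's frame, then describe it relative to a viewer, exactly like
# locating a table corner from a chair. All values are illustrative.
import numpy as np

pixel = np.array([1.2, 0.4, 0.75])   # meters: right, forward, up from room origin
chair = np.array([0.5, -0.6, 0.0])   # where the viewer sits

rel = pixel - chair
print(f"{rel[2]:.2f} m off the floor, {rel[0]:.2f} m to the right, "
      f"{rel[1]:.2f} m in front of the chair")
```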
Seeing this technology on the big screen inspired Underkoffler to refine his MIT technology, launch Oblong in 2006,
Having tens of millions of viewers seeing the technology on the big screen, however, offered a couple of surprising perks for Oblong,
Additionally, being part of a big-screen production helped Underkoffler and Oblong better explain their own technology to clients,
so they're instantly legible on screen is really close to the refinement you need to undertake
and leave a really compact core of user-interface ideas we have today. After years of writing custom projects for clients on g-speak,
and multiple-user capabilities into Mezzanine. "It was the first killer application we could write on top of g-speak," Underkoffler says. "A
shared-pixel workspace has enormous value no matter what your business is." Today, Oblong is shooting for greater ubiquity of its technology.
"But we really hope to radically tilt the whole landscape of how we think about computers and user interfaces."
#Ride sharing could cut cabs' road time by 30 percent
Cellphone apps that find users car rides in real time are exploding in popularity:
What if the taxi-service app on your cellphone had a button on it that let you indicate that you were willing to share a ride with another passenger?
Authoritatively answering that question requires analyzing huge volumes of data, which hasn't been computationally feasible with traditional methods.
and the Italian National Research Council's Institute for Informatics and Telematics present a new technique that enabled them to exhaustively analyze 150 million trip records collected from more than 13,000 New York City cabs over the course of a year.
if the passengers are using cellphone apps. So the researchers also analyzed the data on the assumption that only trips starting within a minute of each other could be combined.
Even then, they still found a 32 percent reduction in total travel time. "We think that with the potential of a 30 percent reduction in operational costs, there is plenty of room for redistributing these benefits to customers
and is now at Northeastern University, and Giovanni Resta, a researcher at Santi's home institution, the Institute for Informatics and Telematics.
In analyzing taxi data for ride-sharing opportunities, "typically, the approach that was taken was a variation of the so-called traveling-salesman problem," Santi explains.
Then, for each trip, their algorithm identifies the set of other trips that overlap with it: the ones that begin before it ends.
Next, the algorithm represents the shareability of all 150 million trips in the database as a graph.
The graphical representation itself was the key to the researchers' analysis. With that in hand, well-known algorithms can efficiently find the optimal matchings to maximize sharing.
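A toy version of that pipeline is straightforward to sketch. The trip records, the overlap rule, and the use of an off-the-shelf matching routine below are illustrative, not the authors' code:

```python
# Illustrative sketch of a shareability graph: nodes are trips; an edge
# links two trips that could plausibly be combined. A standard maximum
# matching then pairs up as many trips as possible.
import networkx as nx

# (trip_id, start_minute, end_minute) -- toy records standing in for the
# 150 million GPS-derived trips.
trips = [("a", 0, 12), ("b", 3, 15), ("c", 20, 31), ("d", 22, 28), ("e", 40, 50)]

G = nx.Graph()
G.add_nodes_from(t[0] for t in trips)
for (i, s1, e1) in trips:
    for (j, s2, e2) in trips:
        # Overlap test: each trip begins before the other ends.
        if i < j and s2 < e1 and s1 < e2:
            G.add_edge(i, j)

# Maximum matching = largest set of disjoint shared pairs.
print(nx.max_weight_matching(G, maxcardinality=True))
```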
The researchers also conducted experiments to ensure that their matching algorithm would work in real time if it ran on a server used to coordinate data from cellphones running a taxi-sharing app.
They found that, even running on a single Linux box, it could find optimal matchings for about 100,000 trips in a tenth of a second,
whereas the GPS data indicated that, on average, about 300 new taxi trips were initiated in New York every minute.
Finally, an online application designed by Szell, HubCab, allows people to explore the taxi data themselves, using a map of New York as an interface.
David Mahfouda, the CEO of the car- and taxi-hailing company Bandwagon, whose business model is built specifically around ride sharing, says that his company hired analysts to examine the same data set that Santi
and his colleagues did. "We did analysis of rides from LaGuardia Airport and were able to build really detailed maps around where passengers were headed from that high-density departure point," he says.
"Making the entire data set available on a queryable basis does seem like a significantly larger lift."
or lab bench, but the team is also working on a portable version that is about the size of a small electronic tablet.
and devised MedEye, a bedside medication-scanning system that uses computer vision to identify pills
Algorithms distinguish the pills by matching them against a database of nearly all pills in circulation.
Although the hardware is impressive, much of the innovation is in MedEye's software, which cross-references (and updates) the results in the patient's records.
If a pill isn't in MedEye's database, because it's new, for instance, the system alerts the nurse, who adds the information into the software for next time.
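That matching-and-fallback loop might be sketched as follows; the feature vectors, names, and threshold are invented for illustration and are not MedEye's algorithm:

```python
# Toy sketch of pill matching: represent each pill by simple visual
# features, find the nearest database entry, and fall back to the nurse
# when nothing is close enough. All values here are invented.
import math

# feature vector: (diameter_mm, elongation, color_hue)
database = {
    "aspirin 81mg": (7.9, 1.0, 0.12),
    "metformin 500mg": (11.0, 1.6, 0.10),
}

def identify(features, threshold=1.5):
    """Return the closest pill name, or None if no entry is close enough."""
    name, dist = min(((n, math.dist(features, f)) for n, f in database.items()),
                     key=lambda item: item[1])
    return name if dist <= threshold else None

seen = (8.0, 1.05, 0.11)
match = identify(seen)
if match is None:
    # Unknown pill: alert the nurse, who registers it for future scans.
    database["pill registered by nurse"] = seen
    print("unknown pill; added to database")
else:
    print("matched:", match)
```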
"It does all the querying for the right medication for the right patient, and takes care of the paperwork," Helgason says.
Companies sell medications with barcodes; others sell software or barcode scanners. Hospitals have to make all these things work together
In a computer-vision class in the Computer Science and Artificial Intelligence Laboratory, he saw that advances in 3-D object-recognition technology meant computers could learn objects based on various characteristics.
"Everyone's starting companies," says Reynisson, a trained programmer who wrote early object-recognition code for the MedEye.
Seeking a change of pace from computer science, Reynisson enrolled in the MIT Sloan School of Management, where he saw that Helgason was right.
"That's when we realized what a change it would be for a hospital to collect data
At the core of the startup is this belief that better information technology in hospitals can both increase efficiency
First, they used their engram-labeling protocol to tag neurons associated with either a rewarding experience (for male mice, socializing with a female mouse) or an unpleasant experience (a mild electrical shock).
They also devised a computer simulation that can predict a cell's trajectory through the channel based on its size
whether the primary cancer has moved to a new site to generate metastatic tumors, Dao says. "This method is a step forward for detection of circulating tumor cells in the body."
#Unlocking the potential of simulation software
With a method known as finite element analysis (FEA), engineers can generate 3-D digital models of large structures to simulate how they'll fare under stress, vibrations, heat,
and oil rigs, these simulations require intensive computation done by powerful computers over many hours, costing engineering firms much time and money.
Now MIT spinout Akselos has developed novel software, based on years of research at the Institute, that uses precalculated supercomputer data for structural components, like simulated "Legos," to solve FEA models in seconds.
A simulation that could take hours with conventional FEA software, for instance, could be done in seconds with Akselos' platform.
Hundreds of engineers in the mining, power-generation, and oil and gas industries are now using the Akselos software.
The startup is also providing software for an MITx course on structural engineering. With its technology, Akselos aims to make 3-D simulations more accessible worldwide to promote efficient engineering design,
says David Knezevic, Akselos' chief technology officer, who co-founded the startup with former MIT postdoc Phuong Huynh
and alumnus Thomas Leurent SM '01. "We're trying to unlock the value of simulation software, since for many engineers current simulation software is far too slow
and labor-intensive, especially for large models," Knezevic says. "High-fidelity simulation enables more cost-effective designs, better use of energy and materials,
which enables users to build large and complex 3-D models out of a set of parameterized components,
that used that technique to create a mobile app that displayed supercomputer simulations, in seconds, on a smartphone.
A supercomputer first presolved problems, such as fluid flow around a spherical obstacle in a pipe, that had a known form.
When app users plugged in custom parameters, such as the diameter of that spherical obstacle, the app would compute a solution for the new parameters by referencing the precomputed data.
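A minimal sketch of that precompute-then-reuse idea, in the reduced-basis style of the MIT research the app grew out of (toy diagonal matrices stand in for assembled FEA operators; this is illustrative, not Akselos' code):

```python
# Offline: an expensive machine solves a parameterized system A(mu) x = b
# for sample parameter values and compresses the solutions into a basis.
# Online: any new mu needs only a tiny projected solve.
import numpy as np

n = 200
rng = np.random.default_rng(1)
A0 = np.diag(np.linspace(1.0, 2.0, n))          # stand-ins for assembled
A1 = np.diag(np.linspace(0.5, 1.5, n)[::-1])    # FEA operators
b = rng.random(n)

def full_solve(mu):                   # the "supercomputer" solve
    return np.linalg.solve(A0 + mu * A1, b)

# --- offline: snapshot solutions for sample parameters, compressed by SVD ---
snapshots = np.column_stack([full_solve(mu) for mu in (0.1, 0.5, 1.0, 2.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)

# --- online: cheap solve in the small reduced space for a new parameter ---
def reduced_solve(mu):
    Ar = V.T @ (A0 + mu * A1) @ V     # 4x4 system instead of 200x200
    return V @ np.linalg.solve(Ar, V.T @ b)

mu = 0.7
err = np.linalg.norm(reduced_solve(mu) - full_solve(mu)) / np.linalg.norm(full_solve(mu))
print(f"relative error at mu={mu}: {err:.2e}")
```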
Today, Akselos' software runs on a similar principle, but with new software and a cloud-based service.
A supercomputer precalculates individual components, such as, say, a simple tube or a complex mechanical part. "And this creates a big data footprint for each one of these components,
which we push to the cloud," Knezevic says. These components contain adjustable parameters, which enable users to vary properties,
such as geometry, density, and stiffness. Engineers can then access and customize a library of precalculated components,
After that, the software will reference the precomputed data to create a highly detailed 3-D simulation in seconds.
and created modified simulations within a few minutes. "The software also allows people to model the machinery in its true state,"
since with other software it's not feasible to simulate large structures in full 3-D detail.
Ultimately, pushing the data to the cloud has helped Akselos by leveraging the age-old tradeoff between speed and storage:
By storing and reusing more data, algorithms can do less work and hence finish more quickly. "These days,
with cloud technology, storing lots of data is no big deal. We store a lot more data than other methods,
but that data, in turn, allows us to go faster, because we're able to reuse as much precomputed data as possible,"
he says.
#Bringing technology to the world
Akselos was founded in 2012, after Knezevic and Huynh,
along with Leurent, who actually started FEA work with Patera's group back in 2000, earned a Deshpande innovation grant for their "supercomputing-on-a-smartphone" innovation. "That was a trigger,"
Knezevic says. "Our passion and goal has always been to bring new technology to the world."
sales, opening a Web platform to users, and hiring. "We needed a sounding board," Knezevic says. "We go into meetings
who is using the startup's software, albeit a limited version, in her MITx class, 2.01x (Elements of Structures).
Primarily, he hears that the software is allowing students to build intuition for the physics of structures beyond
Vietnam, and Switzerland, building a community of users, and planning to continue its involvement with edX classes.
On Knezevic's end, at the Boston office, it's all about software development, tailoring features to customer needs, a welcome challenge for the longtime researcher. "In academia,
typically only you and a few colleagues use the software," he says. "But in a company you have people all over the world playing with it
and reuse it in photovoltaic panels that could go on producing power for decades. Amazingly, because the perovskite photovoltaic material takes the form of a thin film just half a micrometer thick,
When the panels are eventually retired, the lead can simply be recycled into new solar panels. "The process to encapsulate them will be the same as for polymer cells today,"
Some companies are already gearing up for commercial production of perovskite photovoltaic panels, which could otherwise require new sources of lead.
This week, in the journal Proceedings of the National Academy of Sciences, researchers at the Koch Institute for Integrative Cancer Research at MIT report that they have successfully delivered small RNA therapies, in a clinically relevant mouse model of lung cancer, to slow
This mouse model reflects many of the hallmarks of human lung cancer and is often used in preclinical trials.
Researchers then compared mouse survival time among four treatment options: no treatment; treatment with cisplatin, a small-molecule standard-care chemotherapy drug;
"We took the best mouse model for lung cancer we could find; we found the best nanoparticle we could use
As soon as researchers successfully demonstrated that this system could work in cells other than bacteria, Niles started to think about using it to manipulate Plasmodium falciparum.
"The exciting thing here is that you create this device that has embedded computation in the flat, printed version," says Daniela Rus, the Andrew
and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the Science paper's co-authors.
Rus is joined on the paper by Erik Demaine, an MIT professor of computer science and engineering, and by three researchers at Harvard's Wyss Institute for Biologically Inspired Engineering and School of Engineering and Applied Sciences:
In prior work, Rus, Demaine, and Wood developed an algorithm that could automatically convert any digitally specified 3-D shape into an origami folding pattern.
what's called a cyclic fold, where you have a bunch of panels connected together in a cycle
But as Demaine explains, in origami, 180-degree folds are generally used to join panels together.
With 150-degree folds, the panels won't quite touch, but that's probably tolerable for many applications. In the meantime, Demaine is planning to revisit the theoretical analysis that was the basis of the researchers' original folding algorithm to determine
whether it's still possible to produce arbitrary three-dimensional shapes with folds no sharper than 150 degrees.
and computer science at the University of California at Berkeley, who has been following the MIT and Harvard researchers' work.
To investigate the potential usefulness of CRISPR for creating mouse models of cancer, the researchers first used it to knock out p53 and pten
#Many models possible
The researchers also used CRISPR to create a mouse model with an oncogene called beta-catenin
#Extracting audio from visual information
Algorithm recovers speech from the vibrations of a potato-chip bag filmed through soundproof glass.
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video.
In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass. In other experiments,
a graduate student in electrical engineering and computer science at MIT and first author on the new paper. "The motion of this vibration creates a very subtle visual signal that's usually invisible to the naked eye."
Joining Davis on the Siggraph paper are Frédo Durand and Bill Freeman, both MIT professors of computer science and engineering;
Michael Rubinstein of Microsoft Research, who did his PhD with Freeman; and Gautham Mysore of Adobe Research.
Reconstructing audio from video requires that the frequency of the video samples, the number of frames of video captured per second, be higher than the frequency of the audio signal.
That's much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras,
#Commodity hardware
In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras' sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second.
That corresponds to five thousandths of a pixel in a close-up image, but from the change of a single pixel's color value over time,
it's possible to infer motions smaller than a pixel. Suppose, for instance, that an image has a clear boundary between two regions:
Everything on one side of the boundary is blue; everything on the other is red. But at the boundary itself, the camera's sensor receives both red and blue light, so it averages them out to produce purple.
If, over successive frames of video, the blue region encroaches into the red region, even by less than the width of a pixel, the purple will grow slightly bluer.
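That inference can be captured in a few lines. A toy sketch (illustrative, not the authors' pipeline):

```python
# Toy version of the sub-pixel inference: a boundary pixel mixes the colors
# on either side in proportion to how far the edge has moved, so a tiny
# change in its value reveals a motion much smaller than one pixel.
blue, red = 0.9, 0.1            # channel intensities on the two sides

def boundary_pixel(edge_pos):
    """Value of the pixel the edge passes through; edge_pos in [0, 1] is
    the fraction of the pixel covered by the blue region."""
    return edge_pos * blue + (1 - edge_pos) * red

frame0 = boundary_pixel(0.50)
frame1 = boundary_pixel(0.505)          # edge moved by 5/1000 of a pixel

estimated_motion = (frame1 - frame0) / (blue - red)
print(f"estimated motion: {estimated_motion:.4f} px")   # ~0.0050
```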
#Putting it together
Some boundaries in an image are fuzzier than a single pixel in width, however.
So the researchers borrowed a technique from earlier work on algorithms that amplify minuscule variations in video
The researchers developed an algorithm that combines the output of the filters to infer the motions of an object as a whole
so the algorithm first aligns all the measurements so that they won't cancel each other out. And it gives greater weight to measurements made at very distinct edges: clear boundaries between different color values.
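A sketch of that combination step, with simulated local measurements (illustrative, not the paper's code):

```python
# Local motion signals can disagree in sign depending on edge orientation,
# so flip each to agree with a reference before averaging, and weight
# stronger edges more heavily. Measurements here are simulated.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
true_signal = np.sin(2 * np.pi * 8 * t)

# Per-location measurements: random sign, random edge strength, plus noise.
signs = rng.choice([-1.0, 1.0], size=10)
strengths = rng.random(10)
measurements = (signs[:, None] * strengths[:, None] * true_signal
                + 0.05 * rng.standard_normal((10, 500)))

reference = measurements[0]
aligned = measurements * np.sign(measurements @ reference)[:, None]
recovered = np.average(aligned, axis=0, weights=strengths)

# Correlate up to a global sign flip (the overall sign is ambiguous).
corr = np.corrcoef(recovered, true_signal)[0, 1]
print(f"correlation with true signal: {abs(corr):.3f}")
```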
The researchers also produced a variation on the algorithm for analyzing conventional video. The sensor of a digital camera consists of an array of photodetectors, millions of them, even in commodity devices.
It's less expensive to design the sensor hardware so that it reads off the measurements of one row of photodetectors at a time.
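That row-by-row readout, a rolling shutter, is what turns a 60-fps camera into a much faster vibration sensor: each row is a separate time sample. Back-of-the-envelope, with assumed values:

```python
# Rolling-shutter arithmetic (assumed values): rows are read out
# sequentially, so each frame contributes many row-level time samples,
# pushing the effective sampling rate far above the frame rate.
frame_rate = 60         # frames per second
rows = 1000             # rows read sequentially per frame (assumed)
readout_fraction = 0.9  # fraction of each frame period spent reading rows

effective_rate = frame_rate * rows * readout_fraction
print(f"~{effective_rate:,.0f} row samples/s vs {frame_rate} frames/s")
```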
says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. "We're scientists,
The data shows a loss of almost 1 percent of efficiency per week. But at present, even in desert locations, the only way to counter this fouling is to hose the arrays down, a labor- and water-intensive method.
#Vision-correcting displays
Researchers at the MIT Media Laboratory and the University of California at Berkeley have developed a new display technology that automatically corrects for vision defects, no glasses (or contact lenses) required.
The technique could lead to dashboard-mounted GPS displays that farsighted drivers can consult without putting their glasses on
"The first spectacles were invented in the 13th century," says Gordon Wetzstein, a research scientist at the Media Lab and one of the display's co-creators.
"We have a different solution that basically puts the glasses on the display, rather than on your head."
Wetzstein and his colleagues describe their display in a paper they're presenting in August at Siggraph, the premier graphics conference.
The display is a variation on a glasses-free 3-D technology also developed by the Camera Culture group.
Essentially, the new display simulates an image at the correct focal distance, somewhere between the display and the viewer's eye.
The difficulty with this approach is that simulating a single pixel in the virtual image requires multiple pixels of the physical display.
which light would arrive from the same image displayed on the screen. So the physical pixels projecting light to the right side of the pupil have to be offset to the left
and the pixels projecting light to the left side of the pupil have to be offset to the right.
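Those offsets follow from simple ray geometry. A sketch with assumed distances (not the Siggraph algorithm itself):

```python
# Geometry sketch: to make a pixel appear on a nearer virtual focal plane,
# trace the ray from each pupil point through the virtual pixel out to the
# physical screen. Rays entering the right side of the pupil land left of
# center, and vice versa. All distances are assumed for illustration.
d_virtual = 0.25    # meters: simulated focal plane, between eye and screen
d_screen = 0.50     # meters: physical display
x_virtual = 0.0     # virtual pixel on the optical axis

def screen_position(pupil_offset):
    """Where the screen must light up so the ray through pupil_offset
    (meters from pupil center) passes through the virtual pixel."""
    return pupil_offset + (x_virtual - pupil_offset) * d_screen / d_virtual

for p in (-0.002, 0.0, 0.002):   # left edge, center, right edge of pupil
    print(f"pupil {p * 1000:+.0f} mm -> screen {screen_position(p) * 1000:+.1f} mm")
```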
The use of multiple on-screen pixels to simulate a single virtual pixel would drastically reduce the image resolution.
But this problem turns out to be very similar to a problem that Wetzstein, Raskar, and colleagues solved in their 3-D displays,
which also had to project different images at different angles. The researchers discovered that there is, in fact, a great deal of redundancy between the images required to simulate different viewing angles.
The algorithm that computes the image to be displayed onscreen can exploit that redundancy, allowing individual screen pixels to participate simultaneously in the projection of different viewing angles.
The MIT and Berkeley researchers were able to adapt that algorithm to the problem of vision correction so the new display incurs only a modest loss in resolution.
In the researchers' prototype, however, display pixels do have to be masked from the parts of the pupil for which they're not intended.
That requires that a transparency patterned with an array of pinholes be laid over the screen, blocking more than half the light it emits.
instead using two liquid-crystal displays (LCDs) in parallel. Carefully tailoring the images displayed on the LCDs to each other allows the system to mask perspectives
Wetzstein envisions that commercial versions of a vision-correcting screen would use the same technique.
Indeed, he says, the same screens could both display 3-D content and correct for vision defects, all glasses-free.
So the same device could, in effect, determine the user's prescription and automatically correct for it.
"The key thing is they seem to have cracked the contrast problem," Dainty adds.
Dainty believes that the most intriguing application of the technology is in dashboard displays. "Most people over 50 or 55 quite probably see in the distance fine