has developed a platform of hardware, software, and cloud services that lets manufacturers pick and choose various components
and application-specific software to add to commercial drones for multiple purposes. The key component is the startup's Linux-based autopilot device,
a small red box that is installed into all of a client's drones. "This is responsible for flying the vehicle in a safe, reliable manner,
and acts as a hub for the components, so it can collect all that data and display that info to a user," says Downey, Airware's CEO,
who researched and built drones throughout his time at MIT. To customize the drones, customers use software to select third-party drone vehicles and components, such as sensors, cameras, actuators,
and communication devices; configure settings; and apply their configuration to a fleet. Other software helps them plan
and monitor missions in real time (and make midflight adjustments), and collects and displays data. Airware then pushes all data to the cloud,
where it is aggregated and analyzed, and made available to designated users. If a company decides to use a surveillance drone for crop management, for instance,
it can easily add software that stitches together different images to determine which areas of a field are overwatered
or underwatered. "They don't have to know the flight algorithms, or underlying hardware; they just need to connect their software or piece of hardware to the platform,"
Downey says. "The entire industry can leverage that." Clients have trialed Airware's platform over the past year, including researchers at MIT,
who are demonstrating delivery of vaccines in Africa. Delta Drone in France is using the platform for open-air mining operations.
Airware views companies that monitor crops and infrastructure with drones that require specific cameras and sensors as potential early customers.
when Downey, who studied electrical engineering and computer science, organized an MIT student team that included Airware's chief technology officer, Buddy Michini '07, SM '09,
"even though you could change the software," Downey says. A five-year stretch at Boeing, as an engineer for the U.S. military's A160 Hummingbird UAV and as a commercial pilot, put Downey in contact with drone manufacturers, who,
he found, were still using black boxes or open-source designs. "They were basically facing the same challenges we faced as undergrads at MIT," Downey says.
He likens the development of a standard operating system for drones to Intel processors and Microsoft's DOS paving the way for personal computers in the 1980s.
Before those components became available, hobbyists built computers using software that didn't work with different computers.
At the same time, powerful mainframes were only available to a select few and still suffered software-incompatibility issues.
Then came Intel processors and DOS. Suddenly engineers could build computers around the standard processor and create software on the operating system,
without needing to know details of the underlying hardware. "We're doing the same thing for the drone space,"
Downey says. "There are 600 companies building differing versions of drone hardware. We think they need the Intel processor of the drones,
if you will, and that operating-system-level software component, too, like the DOS for drones."
The benefits are far-reaching, Downey says: "Drone companies, for instance, want to build drones and tailor them for different applications without having to build everything from scratch,"
he says. But companies developing cameras, sensors, and communication links for drones also stand to benefit,
he adds, as their components will only need to be compatible with a single platform. Additionally, it could help the Federal Aviation Administration (FAA) better assess the reliability of drones;
Congress recently tasked the agency with compiling UAV rules and regulations by 2015. This could also help promote commercial drone use in the United States,
and every drone has different software and electronics; "it's good for the FAA if all of them had reliable and common hardware and software,"
he says. "We think it's valuable for everybody."
#Manual control
When you imagine the future of gesture-control interfaces, you might think of the popular science-fiction films Minority Report (2002) or Iron Man (2008).
or wireless gloves to seamlessly scroll through and manipulate visual data on a wall-sized, panoramic screen.
and a collaborative-conferencing system called Mezzanine that allows multiple users to simultaneously share and control digital content across multiple screens,
from any device, using gesture control. Overall, the major benefit of such a system lies in boosting productivity during meetings,
or check email surreptitiously under the table. "That can be an electrifying force for the enterprise,"
Mezzanine surrounds a conference room with multiple screens, as well as the "brains" of the system (a small server) that controls and syncs everything.
Several Wii-like wands, with six degrees of freedom, allow users to manipulate content such as text, photos, videos, maps, charts, spreadsheets,
and PDFs, depending on certain gestures they make with the wand. That system is built on g-speak,
a type of operating system, or a so-called "spatial operating environment," used by developers to create their own programs that run like Mezzanine.
G-speak programs run in a distributed way across multiple machines and allow concurrent interactions for multiple people,
and IBM, as well as government agencies and academic institutions, such as Harvard University's Graduate School of Design. Architects and real estate firms are also using the system for structural design.
That meant the team could make any projected surface a veritable computer screen, and the data could be interacted with.
Seeing this technology on the big screen inspired Underkoffler to refine his MIT technology, launch Oblong in 2006,
Having tens of millions of viewers seeing the technology on the big screen, however, offered a couple of surprising perks for Oblong,
Additionally, being part of a big-screen production helped Underkoffler and Oblong better explain their own technology to clients,
so they're instantly legible on screen is really close to the refinement you need to undertake
and multiple-user capabilities into Mezzanine. "It was the first killer application we could write on top of g-speak,
but we really hope to radically tilt the whole landscape of how we think about computers and user interfaces."
#Ride sharing could cut cabs' road time by 30 percent
Cellphone apps that find car rides for users in real time are exploding in popularity:
What if the taxi-service app on your cellphone had a button on it that let you indicate that you were willing to share a ride with another passenger?
and the Italian National Research Council's Institute for Informatics and Telematics present a new technique that enabled them to exhaustively analyze 150 million trip records collected from more than 13,000 New York City cabs over the course of a year.
if the passengers are using cellphone apps. So the researchers also analyzed the data on the assumption that only trips starting within a minute of each other could be combined.
and is now at Northeastern University, and Giovanni Resta, a researcher at Santi's home institution, the Institute for Informatics and Telematics.
Then, for each trip, their algorithm identifies the set of other trips that overlap with it: the ones that begin before it ends.
Next, the algorithm represents the shareability of all 150 million trips in the database as a graph.
The graphical representation itself was the key to the researchers' analysis. With that in hand, well-known algorithms can efficiently find the optimal matchings to either maximize sharing
The researchers also conducted experiments to ensure that their matching algorithm would work in real time if it ran on a server used to coordinate data from cellphones running a taxi-sharing app.
They found that, even running on a single Linux box, it could find optimal matchings for about 100,000 trips in a tenth of a second,
whereas the GPS data indicated that, on average, about 300 new taxi trips were initiated in New York every minute.
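As a rough illustration of that pipeline, the sketch below builds a toy shareability graph with the Python networkx library and finds a maximum matching; the trip data, the one-minute start-gap rule, and the pairing criterion are simplified stand-ins for the study's actual data and constraints, not the researchers' code.

```python
# Minimal sketch of a "shareability graph": nodes are trips, edges connect
# pairs that could plausibly be combined (here: the later trip starts before
# the earlier one ends, and within 60 seconds of it). Maximum-cardinality
# matching then pairs up as many trips as possible.
import networkx as nx

# (trip_id, start_time_sec, end_time_sec) -- toy data
trips = [
    (0, 0,   600),
    (1, 30,  500),
    (2, 45,  700),
    (3, 620, 900),
]

def shareable(a, b, max_start_gap=60):
    """Two trips are candidate shares if the later one starts before the
    earlier one ends, and their start times differ by at most max_start_gap."""
    (_, s1, e1), (_, s2, e2) = sorted([a, b], key=lambda t: t[1])
    return s2 < e1 and (s2 - s1) <= max_start_gap

G = nx.Graph()
G.add_nodes_from(t[0] for t in trips)
for i in range(len(trips)):
    for j in range(i + 1, len(trips)):
        if shareable(trips[i], trips[j]):
            G.add_edge(trips[i][0], trips[j][0])

# Maximize the number of shared rides (maximum-cardinality matching).
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)  # pairs of trips that would be combined
```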
or lab bench, but the team is also working on a portable version that is about the size of a small electronic tablet.
and devised MedEye, a bedside medication-scanning system that uses computer vision to identify pills.
Algorithms distinguish the pills by matching them against a database of nearly all pills in circulation.
Although the hardware is impressive, much of the innovation is in MedEye's software, which cross-references (and updates) the results in the patient's records.
If a pill isn't recognized, because it's new, for instance, the system alerts the nurse, who adds the information into the software for next time.
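The core matching idea can be pictured with a toy sketch like the one below, which compares a scanned pill's feature vector (size, color, roundness) against a small reference database and flags unknown pills for the nurse; the feature set, thresholds, and database entries here are hypothetical, not MedEye's actual design.

```python
# Toy illustration of matching a scanned pill against a reference database by
# simple visual features (diameter in mm, RGB color, roundness).
import numpy as np

# Hypothetical reference database: name -> [diameter, R, G, B, roundness]
PILL_DB = {
    "aspirin_325mg":   np.array([10.0, 250, 250, 250, 0.98]),
    "ibuprofen_200mg": np.array([ 9.0, 200,  90,  40, 0.95]),
    "metformin_500mg": np.array([12.0, 245, 245, 240, 0.60]),  # oval tablet
}

def identify(features, max_distance=20.0):
    """Return the closest database entry, or None if nothing is close enough
    (in which case the nurse is asked to register the new pill)."""
    best_name, best_dist = None, float("inf")
    for name, ref in PILL_DB.items():
        dist = np.linalg.norm(features - ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

scanned = np.array([9.8, 248, 252, 249, 0.97])   # features from the camera
match = identify(scanned)
if match is None:
    print("Unknown pill -- alert nurse to add it to the database")
else:
    print("Identified:", match)   # cross-reference against the patient's record
```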
Companies sell medications with barcodes; others sell software or barcode scanners. Hospitals have to make all these things work together
In a computer-vision class in the Computer Science and Artificial Intelligence Laboratory, he saw that advances in 3-D object-recognition technology meant computers could learn objects based on various characteristics.
"Everyone's starting companies," says Reynisson, a trained programmer who wrote early object-recognition code for the MedEye.
Seeking a change of pace from computer science, Reynisson enrolled in the MIT Sloan School of Management, where he saw that Helgason was right.
At the core of the startup is this belief that better information technology in hospitals can both increase efficiency
First, they used their engram-labeling protocol to tag neurons associated with either a rewarding experience (for male mice, socializing with a female mouse) or an unpleasant experience (a mild electrical shock).
They also devised a computer simulation that can predict a cell's trajectory through the channel based on its size
whether the primary cancer has moved to a new site to generate metastatic tumors, Dao says. "This method is a step forward for detection of circulating tumor cells in the body."
#Unlocking the potential of simulation software
With a method known as finite element analysis (FEA), engineers can generate 3-D digital models of large structures to simulate how they'll fare under stress, vibrations, heat,
and oil rigs, these simulations require intensive computation, done by powerful computers over many hours, costing engineering firms much time and money.
Now MIT spinout Akselos has developed novel software, based on years of research at the Institute, that uses precalculated supercomputer data for structural components, like simulated Legos, to solve FEA models in seconds.
A simulation that could take hours with conventional FEA software, for instance, could be done in seconds with Akselos' platform.
Hundreds of engineers in the mining, power-generation, and oil and gas industries are now using the Akselos software.
The startup is also providing software for an MITx course on structural engineering. With its technology, Akselos aims to make 3-D simulations more accessible worldwide to promote efficient engineering design,
says David Knezevic, Akselos' chief technology officer, who co-founded the startup with former MIT postdoc Phuong Huynh
and alumnus Thomas Leurent SM '01. "We're trying to unlock the value of simulation software, since for many engineers current simulation software is far too slow
and labor-intensive, especially for large models," Knezevic says. "High-fidelity simulation enables more cost-effective designs, better use of energy and materials,
which enables users to build large and complex 3-D models out of a set of parameterized components,
that used that technique to create a mobile app that displayed supercomputer simulations, in seconds, on a smartphone.
A supercomputer first presolved problems, such as fluid flow around a spherical obstacle in a pipe, that had a known form.
When app users plugged in custom parameters, such as the diameter of that spherical obstacle, the app would compute a solution for the new parameters by referencing the precomputed data.
Today, Akselos' software runs on a similar principle, but with new software and a cloud-based service.
A supercomputer precalculates individual components, such as, say, a simple tube or a complex mechanical part. "And this creates a big data footprint for each one of these components,
which we push to the cloud," Knezevic says. These components contain adjustable parameters, which enable users to vary properties,
such as geometry, density, and stiffness. Engineers can then access and customize a library of precalculated components,
After that, the software will reference the precomputed data to create a highly detailed 3-D simulation in seconds.
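A minimal sketch of that offline/online split, assuming a made-up component library and a toy affine parameter dependence for each component's reduced stiffness, might look like the following; it illustrates the general precompute-then-assemble idea rather than Akselos' actual algorithms.

```python
# Sketch of "precompute offline, solve online": each component type ships with
# small precomputed reduced matrices; at solve time the user's parameters just
# weight and assemble them, so the online solve is tiny. All matrices and
# parameters here are made-up placeholders.
import numpy as np

# Offline (conceptually done once on a supercomputer): reduced operators per
# component, expressed so that K(mu) = K_base + mu * K_param.
COMPONENT_LIBRARY = {
    "tube":    {"K_base": np.array([[ 2.0, -1.0], [-1.0,  2.0]]),
                "K_param": np.eye(2)},
    "bracket": {"K_base": np.array([[ 3.0, -1.5], [-1.5,  3.0]]),
                "K_param": 0.5 * np.eye(2)},
}

def solve_model(components, load):
    """Assemble reduced stiffness from precomputed pieces and solve K u = f."""
    n = 2 * len(components)               # 2 reduced unknowns per component
    K = np.zeros((n, n))
    for i, (ctype, mu) in enumerate(components):
        lib = COMPONENT_LIBRARY[ctype]
        K_c = lib["K_base"] + mu * lib["K_param"]   # apply user's parameter
        K[2*i:2*i+2, 2*i:2*i+2] += K_c
    # Weakly couple neighbouring components along shared interfaces (toy coupling).
    for i in range(len(components) - 1):
        K[2*i+1, 2*i+2] = K[2*i+2, 2*i+1] = -0.5
    return np.linalg.solve(K, load)

# Online: an engineer picks components and parameters, gets an answer instantly.
model = [("tube", 1.2), ("bracket", 0.8), ("tube", 1.0)]
u = solve_model(model, load=np.ones(6))
print(u)
```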
and created modified simulations within a few minutes. The software also allows people to model the machinery in its true state,
since with other software it's not feasible to simulate large structures in full 3-D detail.
algorithms can do less work and hence finish more quickly. These days, with cloud technology, storing lots of data is no big deal.
sales, opening a Web platform to users, and hiring. "We needed a sounding board," Knezevic says. "We go into meetings
who is using the startup's software, albeit a limited version, in her MITx class, 2.01x (Elements of Structures).
Primarily, he hears that the software is allowing students to build intuition for the physics of structures beyond
Vietnam, and Switzerland, building a community of users, and planning to continue its involvement with edX classes.
On Knezevic's end, at the Boston office, it's all about software development, tailoring features to customer needs, a welcome challenge for the longtime researcher. "In academia,
typically only you and a few colleagues use the software," he says. "But in a company you have people all over the world playing with it
and reuse it in photovoltaic panels that could go on producing power for decades. Amazingly, because the perovskite photovoltaic material takes the form of a thin film just half a micrometer thick,
When the panels are retired eventually, the lead can simply be recycled into new solar panels. "The process to encapsulate them will be the same as for polymer cells today,
Some companies are already gearing up for commercial production of perovskite photovoltaic panels, which could otherwise require new sources of lead.
This week, in the journal Proceedings of the National Academy of Sciences, researchers at the Koch Institute for Integrative Cancer Research at MIT report that they have successfully delivered small RNA therapies in a clinically relevant mouse model of lung cancer to slow
This mouse model reflects many of the hallmarks of human lung cancer and is often used in preclinical trials.
Researchers then compared mouse survival time among four treatment options: no treatment; treatment with cisplatin, a small-molecule standard-care chemotherapy drug;
"We took the best mouse model for lung cancer we could find; we found the best nanoparticle we could use,
As soon as researchers successfully demonstrated that this system could work in cells other than bacteria, Niles started to think about using it to manipulate Plasmodium falciparum.
"The exciting thing here is that you create this device that has embedded computation in the flat, printed version," says Daniela Rus, the Andrew
and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the Science paper's co-authors.
Rus is joined on the paper by Erik Demaine, an MIT professor of computer science and engineering, and by three researchers at Harvard's Wyss Institute for Biologically Inspired Engineering and School of Engineering and Applied Sciences:
In prior work, Rus, Demaine, and Wood developed an algorithm that could automatically convert any digitally specified 3-D shape into an origami folding pattern.
what's called a cyclic fold, where you have a bunch of panels connected together in a cycle
But as Demaine explains, in origami, 180-degree folds are generally used to join panels together.
With 150-degree folds, the panels won't quite touch, but that's probably tolerable for many applications. In the meantime, Demaine is planning to revisit the theoretical analysis that was the basis of the researchers' original folding algorithm to determine
whether it's still possible to produce arbitrary three-dimensional shapes with folds no sharper than 150 degrees.
and computer science at the University of California at Berkeley, who has been following the MIT and Harvard researchers' work.
To investigate the potential usefulness of CRISPR for creating mouse models of cancer, the researchers first used it to knock out p53 and pten
Many models possible: The researchers also used CRISPR to create a mouse model with an oncogene called beta catenin
#Extracting audio from visual information
Algorithm recovers speech from the vibrations of a potato-chip bag filmed through soundproof glass.
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video.
In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass. In other experiments,
a graduate student in electrical engineering and computer science at MIT and first author on the new paper. "The motion of this vibration creates a very subtle visual signal that's usually invisible to the naked eye."
Joining Davis on the Siggraph paper are Frédo Durand and Bill Freeman, both MIT professors of computer science and engineering;
Michael Rubinstein of Microsoft Research, who did his PhD with Freeman; and Gautham Mysore of Adobe Research.
Reconstructing audio from video requires that the frequency of the video samples, the number of frames of video captured per second, be higher than the frequency of the audio signal.
That's much faster than the 60 frames per second possible with some smartphones, but well below the frame rates of the best commercial high-speed cameras,
Commodity hardware: In other experiments, however, they used an ordinary digital camera. Because of a quirk in the design of most cameras' sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second.
So the researchers borrowed a technique from earlier work on algorithms that amplify minuscule variations in video
The researchers developed an algorithm that combines the output of the filters to infer the motions of an object as a whole
so the algorithm first aligns all the measurements so that they won't cancel each other out. And it gives greater weight to measurements made at very distinct edges: clear boundaries between different color values.
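A much-simplified sketch of those two steps, sign alignment and edge weighting, applied to a crude per-pixel motion proxy (not the complex steerable-pyramid filters the researchers actually use) could look like this:

```python
# Simplified illustration: weight each local motion measurement by how strong
# an edge it sits on, and flip signs so the signals reinforce rather than
# cancel before averaging them into one global motion signal.
import numpy as np

def combine_motion_signals(frames):
    """frames: array of shape (T, H, W), grayscale video.
    Returns one 1-D signal (length T-1) summarizing the object's motion."""
    frames = frames.astype(float)
    # Per-pixel "motion" proxy: frame-to-frame intensity change.
    local = np.diff(frames, axis=0)            # (T-1, H, W)
    # Edge strength: spatial gradient magnitude of the first frame.
    gy, gx = np.gradient(frames[0])
    weight = np.sqrt(gx**2 + gy**2)            # (H, W)

    # Use the strongest-edge pixel as a sign reference; flip each pixel's
    # signal so it correlates positively with the reference, then average.
    ref_idx = np.unravel_index(np.argmax(weight), weight.shape)
    ref = local[:, ref_idx[0], ref_idx[1]]
    corr = np.tensordot(local, ref, axes=([0], [0]))   # (H, W) correlations
    sign = np.where(corr >= 0, 1.0, -1.0)
    aligned = local * sign                     # broadcast over the time axis

    w = weight / (weight.sum() + 1e-9)
    return np.tensordot(aligned, w, axes=([1, 2], [0, 1]))  # weighted average

# Example with random data standing in for video frames:
signal = combine_motion_signals(np.random.rand(100, 32, 32))
print(signal.shape)   # (99,)
```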
The researchers also produced a variation on the algorithm for analyzing conventional video. The sensor of a digital camera consists of an array of photodetectors, millions of them, even in commodity devices.
it's less expensive to design the sensor hardware so that it reads off the measurements of one row of photodetectors at a time.
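The payoff of that row-by-row readout can be sketched as follows: if each row of each frame is treated as its own sample, taken at its own readout time, a 60-frames-per-second video yields tens of thousands of samples per second. The timing constants below are illustrative, not measured from any particular camera.

```python
# Sketch of why rolling-shutter readout helps: each row of each frame is
# exposed at a slightly different time, so a 60 fps video with H rows yields
# roughly 60*H samples per second instead of 60.
import numpy as np

def row_time_series(frames, fps=60.0, line_delay=1.0 / 60000):
    """frames: (T, H, W) video. Returns (times, values) where each row's mean
    intensity is treated as one sample taken at its own readout time."""
    T, H, W = frames.shape
    times, values = [], []
    for t in range(T):
        frame_start = t / fps
        for r in range(H):
            times.append(frame_start + r * line_delay)   # this row's readout time
            values.append(frames[t, r, :].mean())         # one sample per row
    return np.array(times), np.array(values)

times, values = row_time_series(np.random.rand(60, 480, 640))
print(len(values), "samples from one second of 60 fps video")  # 60*480 = 28800
```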
says Alexei Efros, an associate professor of electrical engineering and computer science at the University of California at Berkeley. "We're scientists,
#Vision-correcting displays
Researchers at the MIT Media Laboratory and the University of California at Berkeley have developed a new display technology that automatically corrects for vision defects, no glasses (or contact lenses) required.
The technique could lead to dashboard-mounted GPS displays that farsighted drivers can consult without putting their glasses on
"The first spectacles were invented in the 13th century," says Gordon Wetzstein, a research scientist at the Media Lab and one of the display's co-creators.
"We have a different solution that basically puts the glasses on the display, rather than on your head."
Wetzstein and his colleagues describe their display in a paper they're presenting in August at Siggraph, the premier graphics conference.
The display is a variation on a glasses-free 3-D technology also developed by the Camera Culture group.
Essentially, the new display simulates an image at the correct focal distance, somewhere between the display and the viewer's eye.
The difficulty with this approach is that simulating a single pixel in the virtual image requires multiple pixels of the physical display.
which light would arrive from the same image displayed on the screen. So the physical pixels projecting light to the right side of the pupil have to be offset to the left
The use of multiple on-screen pixels to simulate a single virtual pixel would drastically reduce the image resolution.
and colleagues solved in their 3-D displays, which also had to project different images at different angles.
The algorithm that computes the image to be displayed onscreen can exploit that redundancy, allowing individual screen pixels to participate simultaneously in the projection of different viewing angles.
The MIT and Berkeley researchers were able to adapt that algorithm to the problem of vision correction so the new display incurs only a modest loss in resolution.
In the researchers' prototype, however, display pixels do have to be masked from the parts of the pupil for which they're not intended.
That requires that a transparency patterned with an array of pinholes be laid over the screen, blocking more than half the light it emits.
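The geometry behind those per-pupil offsets can be illustrated with a one-dimensional sketch, assuming example distances and treating the pupil as a few sample points; it shows why several physical pixels must cooperate per virtual pixel, but it is not the researchers' actual rendering algorithm.

```python
# 1-D sketch of the geometric idea: to make light appear to come from a
# virtual image plane the viewer *can* focus on, each part of the pupil must
# receive that light from a slightly different position on the physical screen.
# Distances here are arbitrary examples, not the prototype's actual optics.
def screen_position(virtual_x, pupil_x, d_virtual, d_screen):
    """Where on the screen (1-D coordinate, same units) a pixel must light up
    so that a ray through pupil point `pupil_x` appears to originate from
    `virtual_x` on a virtual plane at distance `d_virtual` from the eye.
    The physical screen sits at distance `d_screen` (< d_virtual)."""
    # Parametrize the ray from the pupil point toward the virtual pixel and
    # intersect it with the screen plane (similar triangles).
    return pupil_x + (virtual_x - pupil_x) * (d_screen / d_virtual)

d_screen, d_virtual = 0.30, 0.50      # meters: screen closer than the eye can focus
virtual_pixel = 0.010                 # 1 cm off-axis in the virtual image
for pupil_x in (-0.002, 0.0, +0.002): # left edge, center, right edge of pupil
    print(pupil_x, "->", round(screen_position(virtual_pixel, pupil_x, d_virtual, d_screen), 5))
# Different pupil positions need different (offset) screen pixels -- which is
# why several physical pixels cooperate to render one virtual pixel, and why a
# pinhole mask (or second LCD) must keep each one visible only to its intended
# part of the pupil.
```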
instead using two liquid-crystal displays (LCDs) in parallel. Carefully tailoring the images displayed on the LCDs to each other allows the system to mask perspectives
Wetzstein envisions that commercial versions of a vision-correcting screen would use the same technique.
Indeed, he says, the same screens could both display 3-D content and correct for vision defects, all glasses-free.
So the same device could, in effect, determine the user's prescription and automatically correct for it.
"The key thing is they seem to have cracked the contrast problem," Dainty adds.
Dainty believes that the most intriguing application of the technology is in dashboard displays. Most people over 50, 55 quite probably see in the distance fine
whose advanced emotion-tracking software, called Affdex, is based on years of MIT Media Lab research.
which has amassed a vast facial-expression database, is also setting its sights on a mood-aware Internet that reads a user's emotions to shape content.
"The broad goal is to become the emotion layer of the Internet," says Affectiva cofounder Rana el Kaliouby, a former MIT postdoc who invented the technology.
"We believe there's an opportunity to sit between any human-to-computer or human-to-human interaction point, capture data,
and use it to enrich the user experience." In using Affdex, Affectiva recruits participants to watch advertisements in front of their computer webcams, tablets, and smartphones.
Machine-learning algorithms track facial cues, focusing prominently on the eyes, eyebrows, and mouth. A smile, for instance, would mean the corners of the lips curl upward and outward, teeth flash, and the skin around the eyes wrinkles.
Affdex then infers the viewer's emotions, such as enjoyment, surprise, anger, disgust, or confusion, and pushes the data to a cloud server, where Affdex aggregates the results from all the facial videos (sometimes hundreds),
which it publishes on a dashboard. But determining whether a person likes or dislikes an advertisement takes advanced analytics.
Importantly, the software looks for "hooking" the viewers in the first third of an advertisement by noting increased attention
But if a smirk, a subtle, asymmetric lip curl separate from a smile, comes at a moment when information appears on the screen, it may indicate skepticism or doubt.
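As a toy illustration of the kind of cues just described, the sketch below classifies a mouth as smiling or smirking from hypothetical landmark positions and thresholds; Affdex's real classifiers are learned from large volumes of labeled video rather than hand-written rules like these.

```python
# Toy illustration of the facial cues described above: lip corners curling
# upward suggest a smile; a clearly asymmetric curl suggests a smirk. The
# landmark format and thresholds are hypothetical, not Affdex's classifiers.
def classify_mouth(landmarks, curl_thresh=2.0, asym_thresh=3.0):
    """landmarks: dict of (x, y) pixel positions, y increasing downward.
    Needs 'mouth_left', 'mouth_right', and 'mouth_center'."""
    cy = landmarks["mouth_center"][1]
    # Positive lift = corner sits above the mouth midline (y points down).
    lift_l = cy - landmarks["mouth_left"][1]
    lift_r = cy - landmarks["mouth_right"][1]
    if abs(lift_l - lift_r) > asym_thresh and max(lift_l, lift_r) > curl_thresh:
        return "smirk"       # one corner raised much more than the other
    if min(lift_l, lift_r) > curl_thresh:
        return "smile"       # both corners raised
    return "neutral"

print(classify_mouth({"mouth_left": (40, 98), "mouth_right": (80, 97),
                      "mouth_center": (60, 102)}))          # -> smile
print(classify_mouth({"mouth_left": (40, 103), "mouth_right": (80, 95),
                      "mouth_center": (60, 102)}))          # -> smirk
```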
Still in their early stages, some of the apps are designed for entertainment, such as people submitting selfies to analyze their moods and sharing them across social media.
Years of data-gathering have trained the algorithms to be very discerning. As a PhD student at Cambridge University in the early 2000s, el Kaliouby began developing facial-coding software.
She was inspired in part by her future collaborator and Affectiva cofounder Rosalind Picard, an MIT professor who pioneered the field of affective computing, where machines can recognize, interpret, process,
and simulate human affects. Back then, the data that el Kaliouby had access to consisted of about 100 facial expressions gathered from photos
if you showed the computer an expression of a person that's somewhat surprised or subtly shocked, it wouldn't recognize it, el Kaliouby says.
and training the algorithms by collecting vast stores of data. Coming from a traditional research background, the Media Lab was completely different, el Kaliouby says.
and provide real-time feedback to the wearer via a Bluetooth headset. For instance, auditory cues would provide feedback such as "This person is bored"
and, with a big push by Frank Moss, then the Media Lab's director, they soon ditched the wearable prototype to build a cloud-based version of the software, founding Affectiva in 2009.
el Kaliouby says, training its software's algorithms to discern expressions from all different face types and skin colors.
and can avoid tracking any other movement on screen. One of Affectiva's long-term goals is to usher in a mood-aware Internet to improve users' experiences.
"Imagine an Internet that's like walking into a large outlet store with sales representatives," el Kaliouby says.
"At the store, the salespeople are reading your physical cues in real time and assessing whether to approach you.
Websites and connected devices of the future should be like this: very mood-aware." Sometime in the future, this could mean computer games that adapt in difficulty and other game variables based on user reaction.
But more immediately, it could work for online learning. Already, Affectiva has conducted pilot work for online learning, where it captured data on facial engagement to predict learning outcomes.
For this, the software indicates, for instance, if a student is bored, frustrated, or focused, which is especially valuable for prerecorded lectures, el Kaliouby says.
#Making the cut
Diode lasers, used in laser pointers, barcode scanners, DVD players, and other low-power applications, are perhaps the most efficient, compact, and low-cost lasers available.
a 3-foot cube that comes with multiple laser engines, a control computer, power supplies, and an output head for welding