#Bringing the world reboot-less updates
It's an annoyance for the individual computer user: You've updated your operating system, and now you need to reboot so the computer can switch to the modified code.
Imagine, however, having to update and reboot hundreds or thousands of computers operating in large companies and organizations:
It can have a significant impact in lost time and money as computers and online services shut down, sometimes for hours.
To avoid downtime, organizations will usually wait for low-traffic periods to update, but this can leave their servers outdated or vulnerable to cyberattacks.
In 2008, Jeff Arnold '07, MEng '08, along with a team of MIT computer scientists and engineers, began solving this issue by developing
and commercializing software called Ksplice, which automatically applies patches (security updates or bug fixes) to an operating system on the fly, without requiring a reboot.
Based on Arnold's award-winning MIT master's thesis, the novel software compares changes between the old
and updated code and implements those changes in a running Linux kernel, an operating system's core data-processing component.
In essence, it does something that could normally be achieved only by shutting down the operating system. The software also incorporates novel techniques that remove the need for programmer intervention with the code (a hallmark of performing updates without Ksplice),
which decreases the cost and risk of error, Arnold says. "The aim is to allow administrators the benefit of the update
while eliminating both the cost and downtime for the users," Arnold says. After winning the 2009 MIT $100K Entrepreneurship Competition for the software, Arnold co-founded Ksplice Inc. with Waseem Daher '07, MEng '08, Tim Abbott '07, SM '08,
and Anders Kaseorg '08 in Cambridge to launch it as a commercial product. Arnold served as the company's CEO.
In just 18 months, Ksplice accumulated 700 customers, among them independent firms, government agencies, and Fortune 500 companies, that were running the software on more than 100,000 servers.
Then the startup sold for an undisclosed amount to technology giant Oracle, which is now providing the software to its Oracle Linux customers,
which include banks, retail firms, and telecommunications companies worldwide. After the purchase, the Ksplice team joined Oracle to help the company integrate the software into its products.
To date, Ksplice has only ever run on Linux operating systems. But Daher says the code is written in a way that should make it potentially expandable to other products, such as the Mac and Windows operating systems.
Object focused
The process of updating running kernels is called hot updating or hot patching, and it predates Ksplice.
But Ksplice's novelty is that it constructs hot patches using the object code (the binary code that a computer can understand) instead of the source code (computer instructions written
and modified as text by a programmer, such as in C++ or Java). Hot patching a program without Ksplice requires a programmer to construct replacement source code
or manually inspect the code to create an update. Programmers might also need to resolve ambiguity in the code, say, choosing the correct location in computer memory
when two or more software components have the same name. Ksplice, however, hot patches the object code using two novel techniques invented by Arnold.
The first, called pre-post differencing, creates object code before a patch (pre) and object code modified by the patch (post) on the fly.
It then compares the pre and post code to determine what code has been modified, extracts the changed code,
and puts the code into its own updated object file, which it plugs into the running kernel.
Essentially, it makes changes to functions modified by the patch and points to relocated, updated versions of those functions.
The second technique, called run-pre matching, computes the address in computer memory of ambiguous code by using custom computation to compare the pre code with the finalized running kernel (the run code).
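To make the first technique concrete, here is a minimal sketch in C of the pre-post differencing idea: compare each function's compiled bytes in the pre and post builds and collect the functions that changed. The structures and helper names are hypothetical illustrations, not Ksplice's actual code, which must also discount benign byte differences such as linker-assigned addresses (the problem run-pre matching addresses).

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical view of one compiled function in an object file. */
struct func {
    const char *name;           /* symbol name, e.g. "sys_open" */
    const unsigned char *code;  /* the function's object code */
    size_t len;                 /* length of that code in bytes */
};

/* Pre-post differencing (sketch): flag every function whose object
 * code differs between the pre (unpatched) and post (patched) builds.
 * Real pre-post differencing must also ignore benign differences,
 * such as addresses chosen by the linker, which is what run-pre
 * matching helps resolve. */
size_t diff_functions(const struct func *pre, const struct func *post,
                      size_t n, const struct func **changed)
{
    size_t nchanged = 0;
    for (size_t i = 0; i < n; i++) {
        if (pre[i].len != post[i].len ||
            memcmp(pre[i].code, post[i].code, pre[i].len) != 0)
            changed[nchanged++] = &post[i];  /* ship the new version */
    }
    return nchanged;
}
```

In the running kernel, each changed function would then be redirected to its relocated replacement, for instance by rewriting its entry point with a jump to the new code.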
while the servers were in heavy use, he delayed installing the update until the weekend.
This wait unfortunately resulted in a cyberattack that required reinstalling all the system software. That's what motivated
"You can't bring servers down right away and can't wait until you have a chance to update
Under the tutelage of Frans Kaashoek, the Charles A. Piper Professor of Computer Science and Engineering, Arnold started developing Ksplice for his graduate thesis
and accounting, challenging for people with strictly computer science backgrounds, Daher says. For help, they turned to MIT's Venture Mentoring Service (VMS)
Arnold and Daher are now working on another software startup at the Cambridge Business Center and still keep in touch with the VMS, they say.
Mobile phone usage is far more prevalent in Kenya than traditional banking is, and the system lets people transfer money by text message.
users requested an ascent rate of 10 feet per second. But Atlas found that as soon as you maneuver over,
Also, the APA can function as a backup for a helicopter if something goes awry with the primary hoist:
as well as for first responders. "There's a broad spectrum of users, people who use rope access as part of their work, for
Now your face could be transformed instantly into a more memorable one, without the need for an expensive makeover, thanks to an algorithm developed by researchers in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).
The algorithm, which makes subtle changes to various points on the face to make it more memorable without changing a person's overall appearance, was unveiled earlier this month at the International Conference on Computer Vision in Sydney.
which people will actually remember a face," says lead author Aditya Khosla, a graduate student in the Computer Vision group within CSAIL.
and replace it with the most memorable one in our database; we want your face to still look like you."
More memorable, or less
The system could ultimately be used in a smartphone app to allow people to modify a digital image of their face before uploading it to their social networking pages.
It could also be used for job applications, to create a digital version of an applicant's face that will more readily stick in the minds of potential employers, says Khosla, who developed the algorithm with CSAIL principal research scientist Aude Oliva, the senior author of the paper; Antonio
Torralba, an associate professor of electrical engineering and computer science; and graduate student Wilma Bainbridge. Conversely, it could also be used to make faces appear less memorable
To develop the memorability algorithm, the team first fed the software a database of more than 2,000 images.
In this way, the software was able to analyze the information to detect subtle trends in the features of these faces that made them more or less memorable to people.
The researchers then programmed the algorithm with a set of objectives to make the face as memorable as possible
and so would fail to meet the algorithm's objectives. When the system has a new face to modify, it first takes the image
The algorithm then analyzes how well each of these samples meets its objectives. Once the algorithm finds a sample that succeeds in making the face look more memorable without significantly altering the person's appearance, it makes yet more copies of this new image, each containing further alterations.
It then keeps repeating this process until it finds the version that best meets its objectives.
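The search loop described above is, in essence, hill climbing: generate slightly perturbed copies, score each, keep the best, and repeat. Below is a minimal sketch in C under stated assumptions, a stand-in feature vector and a toy scoring function, since the real system scores memorability and identity preservation with learned models; all names are ours, not the researchers'.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NFEATURES 64   /* stand-in for facial feature coordinates */
#define NCOPIES   16   /* perturbed copies generated per round */
#define NROUNDS   200

/* Toy objective for illustration only: rewards proximity to a
 * hypothetical "memorable" template at the origin. The real system
 * scores memorability with a learned model and penalizes changes
 * that would alter the person's identity. */
static double score(const double f[NFEATURES])
{
    double s = 0;
    for (int i = 0; i < NFEATURES; i++)
        s -= f[i] * f[i];
    return s;
}

static double nudge(void)
{
    return 0.01 * (rand() / (double)RAND_MAX - 0.5);
}

/* Hill climbing: copy the best face so far, perturb it slightly,
 * keep the copy only if it scores better, and repeat. */
static void make_memorable(double face[NFEATURES])
{
    double best[NFEATURES], trial[NFEATURES];
    memcpy(best, face, sizeof best);
    double best_score = score(best);

    for (int round = 0; round < NROUNDS; round++) {
        for (int c = 0; c < NCOPIES; c++) {
            memcpy(trial, best, sizeof trial);
            for (int i = 0; i < NFEATURES; i++)
                trial[i] += nudge();
            double s = score(trial);
            if (s > best_score) {          /* keep improvements only */
                best_score = s;
                memcpy(best, trial, sizeof best);
            }
        }
    }
    memcpy(face, best, sizeof best);
}

int main(void)
{
    double face[NFEATURES];
    for (int i = 0; i < NFEATURES; i++)
        face[i] = 1.0;                     /* arbitrary starting face */
    make_memorable(face);
    printf("final score: %f\n", score(face));
    return 0;
}
```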
When they tested these images on a group of volunteers, they found that the algorithm succeeded in making the faces more or less memorable, as required, in around 75 percent of cases.
"We all wish to use a photo that makes us more visible to our audience," says Aleix Martinez, an associate professor of electrical and computer engineering at Ohio State University.
"Now Oliva and her team have developed a computational algorithm that can do this for us," he says.
The research was funded by grants from Xerox, Google, Facebook, and the Office of Naval Research.
#3-D images with only one photon per pixel
Lidar rangefinders, which are common tools in surveying,
All the hardware it requires can already be found in commercial lidar systems; the new system just deploys that hardware in a manner more in tune with the physics of low light-level imaging and natural scenes.
Count the photons
As Ahmed Kirmani, a graduate student in MIT's Department of Electrical Engineering and Computer Science and lead author on the new paper, explains, the very idea of forming an image with only a single photon detected at each pixel location is counterintuitive.
"The way a camera senses images is through different numbers of detected photons at different pixels," Kirmani says.
"Darker regions would have fewer photons and therefore accumulate less charge in the detector, while brighter regions would reflect more light
each location in the grid corresponds to a pixel in the final image. The technique, known as raster scanning, is how old cathode-ray-tube televisions produced images, illuminating one phosphor dot on the screen at a time.
The laser will generally fire a large number of times at each grid position until it gets consistent enough measurements between the times at
It guides the filtering process by assuming that adjacent pixels will more often than not have similar reflective properties
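As a simple illustration of that neighborhood assumption (not the MIT group's actual estimator, which is a more sophisticated probabilistic reconstruction), one could smooth a noisy single-photon depth map with a median of each pixel's neighbors, suppressing spurious detections from stray ambient photons:

```c
#include <stdlib.h>

/* 3x3 median filter over a depth map (sketch). Assumes noisy
 * per-pixel depths derived from single-photon detections; the
 * published method uses a more sophisticated reconstruction. */
static int cmp_float(const void *a, const void *b)
{
    float x = *(const float *)a, y = *(const float *)b;
    return (x > y) - (x < y);
}

void median_filter(const float *in, float *out, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float window[9];
            int n = 0;
            /* Gather the pixel and its valid neighbors. */
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w)
                        window[n++] = in[yy * w + xx];
                }
            qsort(window, n, sizeof *window, cmp_float);
            out[y * w + x] = window[n / 2];  /* median of neighbors */
        }
    }
}
```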
The camera is based on "Time of Flight" technology, like that used in Microsoft's recently launched second-generation Kinect device, in
Kadambi says. "That is because the light that bounces off the transparent object and the background smear into one pixel on the camera."
and returns to strike the pixel. Since the speed of light is known, it is then simple for the camera to calculate the distance the signal has travelled
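The round-trip arithmetic behind conventional Time of Flight is simple enough to state directly; a minimal sketch (the variable names are ours):

```c
#include <stdio.h>

#define SPEED_OF_LIGHT 299792458.0  /* meters per second */

/* Time of flight: the light travels to the object and back, so the
 * one-way distance is half the round trip. */
double tof_distance_m(double round_trip_seconds)
{
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0;
}

int main(void)
{
    /* A 10-nanosecond round trip corresponds to about 1.5 meters. */
    printf("%.3f m\n", tof_distance_m(10e-9));
    return 0;
}
```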
Instead, the new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled
"when the data comes back, we can do calculations that are very common in the telecommunications world,"
to estimate different distances from the single signal. The idea is similar to existing techniques that clear blurring in photographs
a graduate student in the Media Lab. "People with shaky hands tend to take blurry photographs with their cellphones
This allows the team to use inexpensive hardware: off-the-shelf light-emitting diodes (LEDs) can strobe at nanosecond periods,
much like the human eye, says James Davis, an associate professor of computer science at the University of California at Santa Cruz. In contrast,
and apply sophisticated computation to the resulting data, Davis says. "Normally, the computer scientists who could invent the processing on this data can't build the devices,
and the people who can build the devices cannot really do the computation," he says. "This combination of skills
and techniques is really unique in the work going on at MIT right now." What's more, the basic technology needed for the team's approach is very similar to that already being shipped in devices such as the new version of Kinect,
Davis says. "So it's going to go from expensive to cheap thanks to video games, and that should shorten the time before people start wondering what it can be used for,"
This approach offers a huge array of recognition sites specific to different targets, and could be used to create sensors to monitor diseases such as cancer, inflammation,
Synthetic antibodies
The new polymer-based sensors offer a synthetic design approach to the production of molecular recognition sites, enabling, among other applications, the detection of a potentially infinite library of targets.
In the new paper, the researchers describe molecular recognition sites that enable the creation of sensors specific to riboflavin, estradiol (a form of estrogen),
but they are now working on sites for many other types of molecules, including neurotransmitters, carbohydrates, and proteins.
it forms a binding site, Strano says. Laurent Cognet, a senior scientist at the Institute of Optics at the University of Bordeaux, says this approach should prove useful for many applications requiring reliable detection of specific molecules. "This new concept,
using data generated from a new type of microscope that Landry built to image the interactions between the carbon nanotube coronas
so this is a step forward in getting more data to address the problem of how to design a target for a specific molecule,
"Our data suggested that if you block the MK2 pathway, tumor cells wouldn't recognize that they had DNA damage
They are now studying mouse models of colon and ovarian cancer. The research was funded by the Austrian Science Fund, the National Institutes of Health, Janssen Pharmaceuticals Inc., the Koch Institute, MIT's Center for Environmental Health Sciences, the Volkswagenstiftung, the Deutsche Forschungsgemeinschaft, the German
and Dust Environment Explorer (LADEE) spacecraft had made history by using a pulsed laser beam to transmit data over the 239,000 miles from the moon to Earth at a record-breaking data-download speed of 622 megabits per second (Mbps). This download speed is more than six times faster than the speed achieved by the best
LLCD also demonstrated a data-upload speed of 20 Mbps on a laser beam transmitted from a ground station in New Mexico to the LADEE spacecraft in lunar orbit;
which was developed by MIT Lincoln Laboratory researchers led by Don Boroson, a laboratory fellow in MIT LL's Communication Systems Division.
and exploration missions to deep space are constrained by the amount of data they can get back to Earth.
and power on their spacecraft for the much higher data return they can get. Q:
and delivered these various parts to the spacecraft and to the ground site. Finally we designed
"You can know something about the identity of a person from the sound of their voice, so this technology is keying in to that type of information," says Jim Glass, a senior research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and head
To create a sonic portrait of a single speaker, Glass explains, a computer system will generally have to analyze more than 2,000 different speech sounds;
A new algorithm that determines who speaks when in audio recordings represents every second of speech as a point in a three-dimensional space.
Stephen Shum, a graduate student in MIT's Department of Electrical Engineering and Computer Science and lead author on the new paper, found that a 100-variable i-vector, a 100-dimension approximation of the 120,000-dimension space, was an adequate
According to Patrick Kenny, a principal research scientist at the Computer Research Institute of Montreal, i-vectors were devised originally to solve the problem of speaker recognition, or determining whether the same speaker features on multiple recordings.
"It's really an order of magnitude less than the recordings that are used in text-dependent speech recognition." What was completely not obvious
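To see how "every second of speech as a point in three-dimensional space" can yield who-spoke-when labels, consider clustering those points so that seconds landing near one another share a speaker label. The k-means sketch below is a deliberately simplified stand-in for the published method, and it assumes the number of speakers k is known; all names are ours.

```c
#include <string.h>

#define DIM  3    /* each second of speech reduced to a 3-D point */
#define MAXK 16   /* maximum number of speakers this sketch supports */

static double dist2(const double *a, const double *b)
{
    double s = 0;
    for (int d = 0; d < DIM; d++)
        s += (a[d] - b[d]) * (a[d] - b[d]);
    return s;
}

/* k-means sketch: assign each second's point to the nearest speaker
 * centroid, recompute the centroids, and repeat. labels[i] ends up
 * holding the speaker id for second i. The caller seeds centroids
 * with k distinct points (k <= MAXK), e.g. k well-separated seconds. */
void diarize(const double (*pts)[DIM], int n, int k,
             double (*centroids)[DIM], int *labels)
{
    for (int iter = 0; iter < 50; iter++) {
        /* Assignment step: nearest centroid wins. */
        for (int i = 0; i < n; i++) {
            int best = 0;
            for (int c = 1; c < k; c++)
                if (dist2(pts[i], centroids[c]) <
                    dist2(pts[i], centroids[best]))
                    best = c;
            labels[i] = best;
        }
        /* Update step: move each centroid to the mean of its points. */
        double sum[MAXK][DIM] = {{0}};
        int count[MAXK] = {0};
        for (int i = 0; i < n; i++) {
            count[labels[i]]++;
            for (int d = 0; d < DIM; d++)
                sum[labels[i]][d] += pts[i][d];
        }
        for (int c = 0; c < k; c++)
            if (count[c] > 0)
                for (int d = 0; d < DIM; d++)
                    centroids[c][d] = sum[c][d] / count[c];
    }
}
```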
"We think that in this mouse model we may have some kind of indication that there's a disorganized thinking process going on," says Junghyup Suh, a research scientist at the Picower Institute
This mutant mouse doesn't seem to have that kind of replay of a previous experience.
when a person (or mouse) is resting between goal-oriented tasks. When the brain is focusing on a specific goal
Compilers are computer programs that translate high-level instructions written in human-readable languages like Java or C into low-level instructions that machines can execute.
Most compilers also streamline the code they produce, modifying algorithms specified by programmers so that they'll run more efficiently.
Sometimes that means simply discarding lines of code that appear to serve no purpose. But as it turns out,
compilers can be overaggressive, dispensing not only with functional code but also with code that actually performs vital security checks.
At the ACM Symposium on Operating Systems Principles in November, MIT researchers will present a new system
that automatically combs through programmers' code, identifying just those lines that compilers might discard but which could, in fact, be functional.
commercial software engineers have already downloaded Stack and begun using it, with encouraging results. As strange as it may seem to nonprogrammers or people
and compilers should remove it. Problems arise when compilers also remove code that leads to "undefined behavior."
"For some things, this is obvious," says Frans Kaashoek, the Charles A. Piper Professor in the Department of Electrical Engineering and Computer Science (EECS).
"If you're a programmer, you should not write a statement where you take some number and divide it by zero.
You never expect that to work. So the compiler will just remove that. It's pointless to execute it anyway,
because there's not going to be any sensible result."
Defining moments
Over time, however, "compiler writers got a little more aggressive,"
Kaashoek says. "It turns out that the C programming language has a lot of subtle corners to the language specification,
and there are things that are undefined behavior that most programmers don't realize are undefined behavior." A classic example
the computer will lop off the bits that don't fit. On machines, integers have a limit,
Seasoned C programmers will actually exploit this behavior to verify that program inputs don't exceed some threshold.
According to Wang, programmers give a range of explanations for this practice. Some say that the intent of the comparison, an overflow check, is clearer
according to the C language specification, undefined for signed integers (integers that can be either positive or negative).
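A sketch of the pattern at issue (illustrating the general class of check discussed here, not Stack's own test cases): the programmer intends `x + y < x` to detect wraparound, but because signed overflow is undefined in C, an optimizing compiler may assume it cannot happen and delete the test.

```c
#include <limits.h>
#include <stdio.h>

/* BROKEN overflow check (for positive y): evaluating x + y already
 * overflows in exactly the case the test is meant to catch, which is
 * undefined behavior for signed ints. An optimizing compiler may
 * reason "x + y < x is impossible unless overflow occurred, and
 * overflow is undefined" and remove the test entirely. */
int add_checked_broken(int x, int y)
{
    if (x + y < x)            /* undefined when x + y overflows */
        return -1;            /* "never happens," says the compiler */
    return x + y;
}

/* Well-defined version: test against the limits before adding. */
int add_checked(int x, int y)
{
    if ((y > 0 && x > INT_MAX - y) || (y < 0 && x < INT_MIN - y))
        return -1;            /* would overflow; refuse */
    return x + y;
}

int main(void)
{
    printf("%d\n", add_checked(INT_MAX, 1));  /* prints -1 */
    return 0;
}
```

Some compilers may indeed drop the first check at higher optimization levels; the second version expresses the same intent using only well-defined comparisons.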
The fine print
Complicating things further is the fact that different compilers will dispense with different undefined behaviors:
but prohibit other programming shortcuts; some might impose exactly the opposite restrictions. So Wang combed through the C language specifications
and identified every undefined behavior that he and his coauthors, Kaashoek and his fellow EECS professors Nickolai Zeldovich and Armando Solar-Lezama, imagined that a programmer might ever inadvertently invoke.
Stack, in effect, compiles a program twice: once just looking to excise dead code, and a second time to excise dead code and undefined behavior.
but not the first, and warns the programmer that it could pose problems. The MIT researchers tested their system on several open-source programs.
In one case, the developers of a program that performs database searches refused to believe that their code had bugs,
"I sent them a one-line SQL statement that basically crashed their application, by exploiting their 'correct' code,"
Mattias Engdegård, an engineer at Intel, is one of the developers who found Stack online
Such a system could be used to monitor patients who are at high risk for blood clots, says Sangeeta Bhatia, senior author of the paper and the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science.
By creating a computer model of that microstructure and studying its response to various conditions, "we found that there is a mechanism that can, in principle, close cracks under any applied stress,"
A computer simulation of the molecular structure of a metal alloy, showing the boundaries between microcrystalline grains (white lines forming hexagons),
#Better robot vision
Object recognition is one of the most widely studied problems in computer vision.
and Computer Science is exploiting a statistical construct called the Bingham distribution. In a paper they're presenting in November at the International Conference on Intelligent Robots
and Systems, Glover and MIT alumna Sanja Popovic '12, MEng '13, who is now at Google, describe a new robot-vision algorithm based on the Bingham distribution that is 15 percent better than its best
That algorithm, however, is for analyzing high-quality visual data in familiar settings. Because the Bingham distribution is a tool for reasoning probabilistically, it promises even greater advantages in contexts where information is patchy or unreliable.
In cases where visual information is particularly poor, his algorithm offers an improvement of more than 50 percent over the best alternatives.
because it allows the algorithm to get more information out of each ambiguous local feature.
Because Bingham distributions are so central to his work, Glover has also developed a suite of software tools that greatly speed up calculations involving them.
The software is freely available online for other researchers to use.
In the rotation
One reason the Bingham distribution is so useful for robot vision is that it provides a way to combine information from different sources.
Generally, determining an object's orientation entails trying to superimpose a geometric model of the object over visual data captured by a camera, in the case of Glover's work, a Microsoft Kinect camera
Imagine, too, that software has identified four locations in an image where color or depth values change abruptly, likely to be the corners of an object.
Most algorithms, Glover's included, will take a first stab at aligning the points. In the case of the tetrahedron, assume that after that provisional alignment every point in the model is near a point in the object, but not perfectly coincident with it.
and Popovic's algorithm to explore possible rotations in a principled way, quickly converging on the one that provides the best fit between points.
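As a deliberately simplified stand-in for that search (the real algorithm reasons over 3-D orientations with Bingham distributions rather than brute force), the sketch below scores candidate planar rotations by the summed squared distance between rotated model points and their observed counterparts, and keeps the best; all names are ours.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Score one planar rotation: the sum of squared distances between
 * each rotated model point and its corresponding observed point. */
static double fit_error(const double (*model)[2], const double (*obs)[2],
                        int n, double theta)
{
    double c = cos(theta), s = sin(theta), err = 0;
    for (int i = 0; i < n; i++) {
        double rx = c * model[i][0] - s * model[i][1];
        double ry = s * model[i][0] + c * model[i][1];
        err += (rx - obs[i][0]) * (rx - obs[i][0]) +
               (ry - obs[i][1]) * (ry - obs[i][1]);
    }
    return err;
}

/* Brute-force search over candidate rotations. The real algorithm
 * replaces this uniform grid with a Bingham distribution over
 * orientations, concentrating effort where the probability mass is. */
double best_rotation(const double (*model)[2], const double (*obs)[2], int n)
{
    double best_theta = 0, best_err = INFINITY;
    for (int k = 0; k < 3600; k++) {            /* 0.1-degree steps */
        double theta = k * (2 * M_PI / 3600);
        double err = fit_error(model, obs, n, theta);
        if (err < best_err) {
            best_err = err;
            best_theta = theta;
        }
    }
    return best_theta;
}
```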
The current version of Glover and Popovic's algorithm integrates point-rotation probabilities with several other such probabilities.
In experiments involving visual data about particularly cluttered scenes, depicting the kinds of environments in which a household robot would operate, Glover's algorithm had about the same false-positive rate as the best existing algorithm:
Glover argues that that difference is because of his algorithm's better ability to determine object orientations.
He also believes that additional sources of information could improve the algorithm's performance even further.
Gary Bradski, vice president of computer vision and machine learning at Magic Leap and president and CEO of OpenCV, the nonprofit that oversees the most widely used open-source computer-vision software library, believes that the Bingham
In November, Romanishin, now a research scientist in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Rus,
a professor of electrical engineering and computer science and director of CSAIL. "We just needed a creative insight
The sliding-cube model simplifies the development of self-assembly algorithms, but the robots that implement them tend to be much more complex devices.
and designing algorithms to guide them. "We want hundreds of cubes, scattered randomly across the floor,
an associate professor of electrical engineering and computer science at the University of Illinois at Urbana-Champaign, who was not part of the research team. "The possibilities are endless:
because we have the recording data to show how this compulsive sugar-seeking happens," Nieh says,
In previous studies using mouse models of fragile X, Bear and others discovered that the loss of this gene results in exaggerated protein synthesis at synapses, the specialized sites of communication between neurons.
Of particular interest, they found that this protein synthesis was stimulated by the neurotransmitter glutamate, downstream of a glutamate receptor called mGluR5.
the researchers used a mouse model of 16p11.2 microdeletion, created by Alea Mills at Cold Spring Harbor Laboratory.
biochemical, and behavioral analyses, the MIT team compared this 16p11.2 mouse with what they had already established in the fragile X mouse.
Synaptic protein synthesis was indeed disrupted in the hippocampus, a part of the brain important for memory formation.
GCS began going to the villages and selling solar-powered lamps, which also charge cellphones. Suddenly, its product started moving, and fast. "That
and other devices, such as the cellphone charger that GCS later developed. "We called it our universal adapter,"
#Drive-by heat mapping
In 2007, Google unleashed a fleet of cars with roof-mounted cameras to provide street-level images of roads around the world.
An onboard control system has software to track the route and manage the cameras. On the software side, computer vision and machine-learning algorithms stitch together the images, extract features,
and filter out background objects. In one night, the cars can generate more than 3 terabytes of data,
which is downloaded to an onboard system and processed at the startup's Boston headquarters. Combining those heat maps with novel analytics, Essess shows utility companies
But there were many challenges. "Very expensive thermal cameras had lower resolution than your smartphone camera,"
Among other things, this included an algorithm called Kinetic Super Resolution, co-invented with Sarma and MIT postdoc Jonathan Jesneck, that computationally combines many different images taken with an inexpensive, low-resolution
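The general idea of combining many cheap low-resolution frames can be sketched as "shift-and-add": splat each frame onto a finer grid at its known sub-pixel offset and average. This is a generic multi-frame super-resolution illustration under our own assumptions, not the Kinetic Super Resolution algorithm itself.

```c
/* Shift-and-add super-resolution sketch: accumulate several low-res
 * frames, each with a known sub-pixel offset (dx[f], dy[f] in low-res
 * pixel units), onto a grid SCALE times finer, then average. The
 * caller allocates hi and weight with (w*SCALE)*(h*SCALE) entries. */
#define SCALE 4

void shift_and_add(const float **frames, const float *dx, const float *dy,
                   int nframes, int w, int h, float *hi, float *weight)
{
    int W = w * SCALE, H = h * SCALE;
    for (int i = 0; i < W * H; i++) {
        hi[i] = 0.0f;
        weight[i] = 0.0f;
    }

    for (int f = 0; f < nframes; f++) {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                /* Place this low-res sample at its sub-pixel position
                 * on the high-res grid (nearest-neighbor splat). */
                int X = (int)((x + dx[f]) * SCALE);
                int Y = (int)((y + dy[f]) * SCALE);
                if (X >= 0 && X < W && Y >= 0 && Y < H) {
                    hi[Y * W + X] += frames[f][y * w + x];
                    weight[Y * W + X] += 1.0f;
                }
            }
        }
    }

    /* Average wherever multiple samples landed. */
    for (int i = 0; i < W * H; i++)
        if (weight[i] > 0.0f)
            hi[i] /= weight[i];
}
```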
Not just finding the culprits
These early innovations to the hardware have "enabled Essess to have this large-scale,
software-analytics approach," says Sarma, who is now Essess's board director. For utility companies, this means pinpointing home and building owners who are more or less likely to implement energy-efficient measures.
To do so, Sarma helped develop software that brings in household and demographic data, such as information on households' mortgage payments
Based on data from across the United States, for example, a household with three children is about 8 percent more likely to seal up leaks than a household with two children,
and using other data, have no building-envelope scans, so they can't really determine if the envelope is indeed the culprit.
And constant tweaks had to be made to the GPS system, which required more sophisticated software. "When you're driving around
There's also the software. "You get the system running and realize there's a tree in front of the building and,
was finding how closely coupled the hardware was to the software. "This is truly mechatronic,"
he says. "A small change to the hardware could have profound effects on the software. You may say,
'We'll switch the frame rate of the cameras to catch more data,' but that changes everything else in the software.
You really have to think about everything together." Now in its fourth iteration, the technology, constantly refined for real-world applications, has helped Essess develop a very sophisticated system,