

newsoffice 00202.txt

Affectiva, whose advanced emotion-tracking software, Affdex, is based on years of MIT Media Lab research, has amassed a vast facial-expression database. The startup is also setting its sights on a mood-aware Internet that reads a user's emotions to shape content.

"The broad goal is to become the emotion layer of the Internet," says Affectiva cofounder Rana el Kaliouby, a former MIT postdoc who invented the technology.

"We believe there's an opportunity to sit between any human-to-computer or human-to-human interaction point, capture data, and use it to enrich the user experience." In using Affdex, Affectiva recruits participants to watch advertisements in front of their computer webcams, tablets, and smartphones.

Machine learning algorithms track facial cues, focusing prominently on the eyes, eyebrows, and mouth. A smile, for instance, would mean the corners of the lips curl upward and outward, teeth flash, and the skin around the eyes wrinkles.
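Affectiva's classifiers are not public, but a toy illustration of the lip-corner cue described above, with hypothetical landmark coordinates and a hand-picked threshold, might look like this:

```python
import numpy as np

def smile_score(landmarks):
    """Crude smile cue from 2-D mouth landmarks (pixel coordinates).

    Returns a positive score when the lip corners sit above the
    vertical center of the mouth, a rough proxy for upward curl.
    (Note: in image coordinates, y grows downward.)
    """
    left = np.array(landmarks["mouth_left"], dtype=float)
    right = np.array(landmarks["mouth_right"], dtype=float)
    top = np.array(landmarks["mouth_top"], dtype=float)
    bottom = np.array(landmarks["mouth_bottom"], dtype=float)

    midline_y = (top[1] + bottom[1]) / 2.0
    corner_lift = midline_y - (left[1] + right[1]) / 2.0
    width = np.linalg.norm(right - left)      # normalize by mouth width
    return corner_lift / width

# Hypothetical frame with lip corners curled upward (smaller y values).
frame = {"mouth_left": (110, 204), "mouth_right": (170, 202),
         "mouth_top": (140, 200), "mouth_bottom": (140, 220)}
print("smiling" if smile_score(frame) > 0.05 else "neutral")
```

A production system would feed many such geometric and texture features to a trained classifier rather than a single threshold.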

Affdex then infers the viewer's emotions, such as enjoyment, surprise, anger, disgust, or confusion, and pushes the data to a cloud server, where Affdex aggregates the results from all the facial videos (sometimes hundreds) and publishes them on a dashboard. But determining whether a person likes or dislikes an advertisement takes advanced analytics.

Importantly, the software checks whether viewers are hooked in the first third of an advertisement by noting increased attention and focus, signaled in part by less fidgeting and fixated gazes. Smiles can indicate that a commercial designed to be humorous is indeed funny.

But if a smirk (a subtle, asymmetric lip curl, distinct from a smile) comes at a moment when information appears on the screen, it may indicate skepticism or doubt.

Still in their early stages, some of the apps are designed for entertainment, such as letting people submit selfies to analyze their moods and share them across social media.

Years of data-gathering have trained the algorithms to be very discerning. As a PhD student at Cambridge University in the early 2000s, el Kaliouby began developing facial-coding software.

She was inspired in part by her future collaborator and Affectiva cofounder Rosalind Picard, an MIT professor who pioneered the field of affective computing, in which machines can recognize, interpret, process, and simulate human affects. Back then, the data el Kaliouby had access to consisted of about 100 facial expressions gathered from photos,

and those 100 expressions were fairly prototypical. "To recognize surprise, for example, we had this humongous surprise expression. If you showed the computer an expression of a person that's somewhat surprised or subtly shocked, it wouldn't recognize it," el Kaliouby says.

She continued refining the software at MIT, training the algorithms by collecting vast stores of data. "Coming from a traditional research background, the Media Lab was completely different," el Kaliouby says.

An early prototype was a wearable device that would read a conversation partner's expressions and provide real-time feedback to the wearer via a Bluetooth headset; for instance, auditory cues would provide feedback such as "This person is bored." But with a big push by Frank Moss, then the Media Lab's director, they soon ditched the wearable prototype to build a cloud-based version of the software, founding Affectiva in 2009.

The company has since gathered facial-expression data at a much larger scale, el Kaliouby says, training its software's algorithms to discern expressions from all different face types and skin colors.

The software can also lock onto a viewer's face and avoid tracking any other movement on screen. One of Affectiva's long-term goals is to usher in a mood-aware Internet to improve users' experiences.

"Imagine an Internet that's like walking into a large outlet store with sales representatives," el Kaliouby says. At the store, the salespeople read your physical cues in real time and assess whether to approach you. "Websites and connected devices of the future should be like this: very mood-aware." Sometime in the future, this could mean computer games that adapt in difficulty and other game variables based on user reaction.

But more immediately, it could work for online learning. Affectiva has already conducted pilot work for online learning, capturing data on facial engagement to predict learning outcomes. For this, the software indicates, for instance, if a student is bored, frustrated, or focused, which is especially valuable for prerecorded lectures, el Kaliouby says. "To be able to capture that data in real time means educators can adapt that learning experience and change the content to better engage students, making it, say, more or less difficult, and change feedback to maximize learning outcomes," she says.
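The pilot systems are not described in detail; a minimal sketch of the adaptation loop the quote describes, assuming a hypothetical engagement score in [0, 1] already produced by facial coding:

```python
def adapt_difficulty(engagement, difficulty, step=0.1):
    """Nudge lesson difficulty from a hypothetical engagement score.

    Low engagement with hard material suggests frustration (ease off);
    low engagement with easy material suggests boredom (ramp up).
    """
    if engagement < 0.4:
        difficulty += -step if difficulty > 0.5 else step
    return min(max(difficulty, 0.0), 1.0)

difficulty = 0.7
for score in [0.8, 0.35, 0.3]:   # engagement per lesson segment (made up)
    difficulty = adapt_difficulty(score, difficulty)
    print(f"engagement={score:.2f} -> difficulty={difficulty:.2f}")
```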


newsoffice 00213.txt

#Making the cut Diode lasers, used in laser pointers, barcode scanners, DVD players, and other low-power applications, are perhaps the most efficient, compact, and low-cost lasers available.

At the core of the TeraBlade is a power-scaling technique known as wavelength beam combining (WBC). The system is a 3-foot cube that comes with multiple laser engines, a control computer, power supplies, and an output head for welding.


newsoffice 00214.txt

Its scientists played a central role in the Human Genome Project and generated one-third of the DNA sequence data for that project, the single largest contribution to the effort.

Celebrating its 10th anniversary this month, the Broad Institute is today home to a community of more than 2,000 members, including physicians, biologists, chemists, computer scientists, engineers, staff, and representatives of many other disciplines.

In the spirit of the Human Genome Project, the Broad makes its genomic data freely available to researchers around the world and disseminates discoveries, tools, methods, and data openly to the entire scientific community. Founded by MIT, Harvard,


newsoffice 00219.txt

A novel control algorithm enables it to move in sync with the wearer's fingers to grasp objects of various shapes and sizes. Wearing the robot, a user could, for instance, use one hand to hold the base of a bottle while twisting off its cap.

To develop an algorithm to coordinate the robotic fingers with a human hand, the researchers first looked to the physiology of hand gestures, learning that a hand's five fingers are highly coordinated. They recorded human and robotic joint angles multiple times with various objects, then analyzed the data and found that every grasp could be explained by a combination of two or three general patterns among all seven fingers.

The researchers used this information to develop a control algorithm to correlate the postures of the two robotic fingers with those of the five human fingers.

Asada explains that the algorithm essentially teaches the robot to assume a certain posture that the human expects the robot to take.
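The paper's actual controller is not given in the article, but the "two or three general patterns" finding suggests a synergy-style decomposition such as principal component analysis. A minimal sketch under that assumption, with made-up joint-angle recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up recordings: 50 grasps x 14 joint angles (10 human + 4 robotic),
# generated from two underlying "grasp patterns" plus noise.
patterns = rng.normal(size=(2, 14))
activations = rng.normal(size=(50, 2))
angles = activations @ patterns + 0.01 * rng.normal(size=(50, 14))

mean = angles.mean(axis=0)
_, s, vt = np.linalg.svd(angles - mean, full_matrices=False)
basis = vt[:2]                                   # top-2 synergy patterns
print("variance explained:", ((s**2) / (s**2).sum())[:2].round(3))

# Control sketch: estimate synergy activations from the observed human
# joints alone, then read off the robotic joints of the reconstruction.
human, robot = slice(0, 10), slice(10, 14)
observed_human = angles[0, human]                # a new human hand posture
coeffs, *_ = np.linalg.lstsq(basis[:, human].T,
                             observed_human - mean[human], rcond=None)
robot_targets = coeffs @ basis[:, robot] + mean[robot]
print("robotic joint targets:", robot_targets.round(2))
```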

As a user works with the robot, it could learn to adapt to match his


newsoffice 00227.txt

This approach could lead to devices that charge cellphones or other electronics using just the humidity in the air.

For example, Miljkovic has calculated that at 1 microwatt per square centimeter, a cube measuring about 50 centimeters on a side (about the size of a typical camping cooler) could be sufficient to fully charge a cellphone in about 12 hours.
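The article does not show the arithmetic, but a rough sanity check is possible if one assumes the cube is filled with stacked condenser plates rather than relying on its outer faces alone; the battery capacity and plate spacing below are illustrative guesses:

```python
# Rough feasibility check for the 50 cm harvesting cube. All assumptions
# are illustrative: battery size, plate spacing, both-face collection.
power_density_w_per_cm2 = 1e-6          # 1 microwatt per square centimeter
battery_wh = 5.0                         # typical smartphone battery, ~5 Wh
hours = 12.0

required_power_w = battery_wh / hours            # ~0.42 W average
required_area_cm2 = required_power_w / power_density_w_per_cm2

side_cm = 50.0
plate_spacing_cm = 0.3                           # guessed gap between plates
plates = int(side_cm / plate_spacing_cm)         # ~166 stacked plates
area_cm2 = plates * (side_cm ** 2) * 2           # both faces of each plate

print(f"needed: {required_area_cm2:,.0f} cm^2, available: {area_cm2:,.0f} cm^2")
# needed: ~417,000 cm^2; available: ~830,000 cm^2 -> plausible
```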


newsoffice 00231.txt

Near the end of the last decade, however, a team of MIT researchers led by professor of physics Marin Soljacic took definitive steps toward more practical wireless charging.

Now this wireless electricity (or WiTricity) technology, licensed through the researchers' startup WiTricity Corp., is coming to mobile devices, electric vehicles, and potentially a host of other applications.

But it could also lead to benefits such as smaller batteries and less hardware, which would lower costs for manufacturers and consumers.

"We believe wireless charging has the potential to do that." He is not alone. Last month, WiTricity signed a licensing agreement with Intel to integrate WiTricity technology into computing devices powered by Intel.

Back in December, Toyota licensed WiTricity technology for a future line of electric cars. Several more publicized

At present, WiTricity technology charges devices at a range of around 6 to 12 inches with roughly 95 percent efficiency: 12 watts for mobile devices and up to 6.6 kilowatts for cars.
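Those figures imply straightforward charging-time estimates; the battery capacities below are illustrative assumptions, not numbers from the article:

```python
# Illustrative charging-time estimates from the quoted power figures.
efficiency = 0.95
ev_battery_kwh = 24.0                      # assumed EV pack size
ev_delivered_kw = 6.6 * efficiency         # treating 6.6 kW as input power
print(f"EV: ~{ev_battery_kwh / ev_delivered_kw:.1f} hours to full")

phone_battery_wh = 5.0                     # assumed phone battery
phone_delivered_w = 12 * efficiency
print(f"Phone: ~{phone_battery_wh / phone_delivered_w * 60:.0f} minutes")
```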

WiTricity Corp. recently unveiled a design for a smartphone and wireless charger powered by its technology. The charger can charge two phones simultaneously and can be placed on top of a table or mounted underneath a table or desk.

[Image: WiTricity's technology charging an electric car parked about a foot above the transmitting pad. Courtesy of WiTricity Corp.]

Stronger coupling

Similar wireless charging technologies have been around for some time. For instance, traditional induction charging

or a radio antenna tuning into a single station out of hundreds. The concept took shape in the early 2000s

Frustrated and half awake, he contemplated ways to harness power from all around to charge the phone.

At the time, he was working on various photonics projects (lasers, solar cells, and optical fiber) that all involved a phenomenon called resonant coupling.
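Resonant coupling depends on the source and receiver sharing a natural frequency. A minimal sketch with illustrative component values (the ~6.78 MHz target is a band commonly used for resonant wireless power, not a figure from the article):

```python
import math

def resonant_freq_hz(inductance_h, capacitance_f):
    """Natural frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: two coils tuned to the same ~6.78 MHz band.
L_source, C_source = 1.0e-6, 550e-12
L_device, C_device = 2.0e-6, 275e-12
print(f"source: {resonant_freq_hz(L_source, C_source)/1e6:.2f} MHz")
print(f"device: {resonant_freq_hz(L_device, C_device)/1e6:.2f} MHz")
# Efficient transfer requires these to match; detuning kills coupling.
```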

Wireless charging: An expectation

These days, Gruzen sees wireless charging as analogous to the evolution of a similar technology, Wi-Fi, which he witnessed in the early 2000s as senior vice president of the global notebook business at Hewlett-Packard.

At the time, Wi-Fi capabilities were rarely implemented in laptops; this didn't change until companies began bringing wireless Internet access into hotel lobbies, libraries, airports, and other public places.

Now, having helped establish a wireless charging standard for consumer devices, known as Rezence, with the A4WP (Alliance for Wireless Power), WiTricity aims to be the driving force behind wireless charging.

Soon, Gruzen says, it will be an expectation, much like Wi-Fi. "You can have a charging surface wherever you go, from a kitchen counter to your workplace to airport lounges and hotel lobbies," he says. "In this future, you're not worried about carrying cords. Casual access to topping off power in your devices just becomes an expected thing."

This is where we're going

With an expected rise of wireless charging, one promising future application Soljacic sees is in medical devices, especially implanted ventricular assist devices (or heart pumps) that support blood flow.

Currently, a patient who has experienced a heart attack or weakening of the heart has wires running from the implant to a charger


newsoffice 00232.txt

#Own your own data Cellphone metadata has been in the news quite a bit lately, but the National Security Agency isn't the only organization that collects information about people's online behavior.

Newly downloaded cellphone apps routinely ask to access your location information, your address book, or other apps, and of course websites like Amazon or Netflix track your browsing history in the interest of making personalized recommendations.

At the same time, a host of recent studies have demonstrated that it's shockingly easy to identify unnamed individuals in supposedly anonymized data sets, even ones containing millions of records. So if we want the benefits of data mining, like personalized recommendations or localized services, how can we protect our privacy?

Their prototype system, openPDS (short for personal data store), stores data from your digital devices in a single location that you specify: It could be an encrypted server in the cloud, but it could also be a computer in a locked box under your desk.

Any cellphone app, online service, or big-data research team that wants to use your data has to query your data store, which returns only as much information as is required.

Sharing code, not data

"The example I like to use is personalized music," says Yves-Alexandre de Montjoye, a graduate student in media arts and sciences and first author on the new paper.

"Pandora, for example, comes down to this thing that they call the music genome, which contains a summary of your musical tastes." With openPDS, "you don't share data. Instead of you sending data to Pandora, for Pandora to define what your musical preferences are, it's Pandora sending a piece of code to you, for you to define your musical preferences and send it back to them." De Montjoye is joined on the paper by his thesis advisor, Alex "Sandy" Pentland, the Toshiba Professor of Media Arts and Sciences;

and Samuel Wang, a software engineer at Foursquare who was a graduate student in the Department of Electrical Engineering and Computer Science when the research was done. After an initial deployment involving 21 people who used openPDS to regulate access to their medical records, the researchers are now testing the system with several telecommunications companies in Italy and Denmark.

Although openPDS can, in principle, run on any machine of the user's choosing, in the trials data is being stored in the cloud.

Meaningful permissions

One of the benefits of openPDS, de Montjoye says, is that it requires applications to specify what information they need. Today's coarse app permissions, by contrast, tell you little: "You as a user have absolutely no way of knowing what that means. The permissions don't tell you anything."

In fact, applications frequently collect much more data than they really need. Service providers and application developers don't always know in advance what data will prove most useful, so they store as much as they can against the possibility that they may want it later.

openPDS preserves all that potentially useful data, but in a repository controlled by the end user, not the application developer or service provider.
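The paper's actual interfaces aren't reproduced in the article; a minimal sketch of the query-answering model described above, with hypothetical question names and a toy permission table:

```python
# Sketch of an openPDS-style store: apps submit named questions, the
# store runs them locally and returns only the computed answer.
# All names and the permission model here are hypothetical.

RAW_DATA = {"gps_trace": [(42.3601, -71.0942), (42.3656, -71.1032)]}

def coarse_location(data):
    """Answer with a neighborhood-level centroid; the raw trace never leaves."""
    lats = [p[0] for p in data["gps_trace"]]
    lons = [p[1] for p in data["gps_trace"]]
    return (round(sum(lats) / len(lats), 2), round(sum(lons) / len(lons), 2))

QUESTIONS = {"coarse_location": coarse_location}
GRANTS = {"nav_app": {"coarse_location"}}         # user-approved questions

def query(app_id, question):
    """Run an approved question inside the store; return only its answer."""
    if question not in GRANTS.get(app_id, set()):
        raise PermissionError(f"{app_id} may not ask {question!r}")
    return QUESTIONS[question](RAW_DATA)

print(query("nav_app", "coarse_location"))        # -> (42.36, -71.1)
```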

A developer who discovers that a previously unused bit of information is useful must request access to it from the user. If the request seems unnecessarily invasive, the user can simply deny it. Of course, a nefarious developer could try to game the system, constructing requests that elicit more information than the user intends to disclose.

A navigation application might, for instance, be authorized to identify the subway stop or the parking garage nearest the user. But it shouldn't need both pieces of information at once, and by requesting them together, it could infer more detailed location information than the user wishes to reveal.

Creating safeguards against such information leaks will have to be done on a case-by-case, application-by-application basis, de Montjoye acknowledges. "If we manage to get people to have access to most of their data, and if we can get the overall state of the art to move from anonymization to interactive systems, that would be such a huge win," he says, because it allows users to control their data and at the same time open up its potential, both at the economic level


newsoffice 00242.txt

Optogenetics typically requires implanting a light source, such as an optical fiber, into the brain to control the selected neurons. Such implants can be difficult to insert,

The result of this screen, Jaws, retained its red-light sensitivity but had a much stronger photocurrent, enough to shut down neural activity. "This exemplifies how the genomic diversity of the natural world can yield powerful reagents that can be of use in biology and neuroscience,"

Using Jaws, the researchers were able to shut down neuronal activity in the mouse brain with a light source outside the animal's head.

Roska and Busskamp tested the Jaws protein in the mouse retina and found that it more closely resembled the eye's natural opsins


newsoffice 00250.txt

where the ability to adjust the texture of panels to minimize drag at different speeds could increase fuel efficiency,


newsoffice 00252.txt

#Researchers unveil experimental 36-core chip The more cores, or processing units, a computer chip has, the bigger the problem of communication between cores becomes. For years, Li-Shiuan Peh, the Singapore Research Professor of Electrical Engineering and Computer Science at MIT, has argued that the massively multicore chips of the future will need to resemble little Internets,

where each core has an associated router, and data travels between cores in packets of fixed size.

This week, at the International Symposium on Computer Architecture, Peh's group unveiled a 36-core chip that features just such a network-on-chip. In addition to implementing many of the group's earlier ideas, it also solves one of the problems that has bedeviled previous attempts to design networks-on-chip: maintaining cache coherence, or ensuring that cores' locally stored copies of globally accessible data remain up to date.

In today's chips, all the cores (typically somewhere between two and six) are connected by a single wire, called a bus. When two cores need to communicate, they're granted exclusive access to the bus. But that approach won't work as the core count mounts: Cores will spend all their time waiting for the bus to free up, rather than performing computations.

In a network-on-chip, each core is connected only to those immediately adjacent to it. "You can reach your neighbors really quickly," says Bhavya Daya, an MIT graduate student in electrical engineering and computer science, and first author on the new paper. "You can also have multiple paths to your destination. So if you're going way across, rather than having one congested path, you could have multiple ones."
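The article doesn't say which routing scheme the chip uses, but a common choice for such meshes is dimension-ordered (XY) routing; a minimal sketch on a 6-by-6 grid:

```python
def xy_route(src, dst):
    """Dimension-ordered routing on a 6x6 mesh: move along x, then y.

    Cores are (col, row) pairs; this is a common network-on-chip scheme,
    not necessarily the one used on the 36-core chip.
    """
    x, y = src
    path = [src]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# A packet crossing the chip makes several short neighbor-to-neighbor hops.
print(xy_route((0, 0), (5, 3)))   # 8 hops instead of one long shared bus
```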

One advantage of a bus, however, is that it makes it easier to maintain cache coherence. Every core on a chip has its own cache, a local, high-speed memory bank in which it stores frequently used data. As it performs computations, it updates the data in its cache, and every so often, it undertakes the relatively time-consuming chore of shipping the data back to main memory.

But what happens if another core needs the data before it's been shipped? Most chips address this question with a protocol called "snoopy," because it involves snooping on other cores' communications. When a core needs a particular chunk of data, it broadcasts a request to all the other cores,

and whichever one has the data ships it back. If all the cores share a bus, then when one of them receives a data request, it knows that it's the most recent request that's been issued. Similarly, when the requesting core gets data back, it knows that it's the most recent version of the data.

But in a network-on-chip, data is flying everywhere, and packets will frequently arrive at different cores in different sequences. The implicit ordering that the snoopy protocol relies on breaks down. Daya, Peh, and their colleagues solve this problem by equipping their chips with a second network, which shadows the first.

The circuits connected to this network are very simple: All they can do is declare that their associated cores have sent requests for data over the main network.

But precisely because those declarations are so simple, nodes in the shadow network can combine them

and pass them on without incurring delays. Groups of declarations reach the routers associated with the cores at discrete intervals, which correspond to the time it takes a signal to pass from one end of the shadow network to the other.

Each router can thus tabulate exactly how many requests were issued during which interval, and by which other cores.

The requests themselves may still take a while to arrive, but their recipients know that they've been issued.
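A toy rendering of the declaration scheme (the interval mechanics here are simplified, not the chip's actual logic):

```python
from collections import defaultdict

# Toy shadow network: each interval, cores declare "I sent a request"
# on a cheap side network; routers tally declarations per interval so
# they know how many main-network requests to expect, and from whom.
declarations = [
    (0, 1), (0, 10),      # interval 0: cores 1 and 10 sent requests
    (1, 7),               # interval 1: core 7 sent a request
]

tally = defaultdict(list)
for interval, core in declarations:
    tally[interval].append(core)

for interval in sorted(tally):
    cores = sorted(tally[interval])
    print(f"interval {interval}: expect requests from cores {cores}")
    # The actual requests may arrive later and out of order on the main
    # network; the tally lets each router wait and order them consistently.
```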

During each interval, the chip's 36 cores are given different, hierarchical priorities. Say, for instance, that during one interval,

both core 1 and core 10 issue requests, but core 1 has a higher priority.

Core 32's router may receive core 10's request well before it receives core 1's. But it will hold core 10's request until it has passed along core 1's. This hierarchical ordering simulates the chronological ordering of requests sent over a bus,

so the snoopy protocol still works. The hierarchy is shuffled during every interval, however, to ensure that in the long run, all the cores receive equal weight. "Cache coherence in multicore chips is a big problem, and it's one that gets larger all the time,"

says Todd Austin, a professor of electrical engineering and computer science at the University of Michigan. "Their contribution is an interesting one: They're saying, 'Let's get rid of a lot of the complexity that's in existing networks. That will create more avenues for communication, and our clever communication protocol will sort out all the details.' It's a much simpler approach and a faster approach. It's a really clever idea."

"One of the challenges in academia is convincing industry that our ideas are practical and useful," Austin adds. "They've really taken the best approach to demonstrating that, in that they've built a working chip. I'd be surprised if these technologies didn't find their way into commercial products."

After testing the prototype chips to ensure that they're operational,

Daya intends to load them with a version of the Linux operating system, modified to run on 36 cores, and evaluate the performance of real applications, to determine the accuracy of the group's theoretical projections. At that point, she plans to release the blueprints for the chip, written in the hardware description language Verilog,


newsoffice 00261.txt

pulling it slightly toward the leak site. That distortion can be detected by force-resistive sensors via a carefully designed mechanical system (similar to the sensors used in computer trackpads),

and the information is sent back via wireless communications. Detecting leaks by sensing a pressure gradient close to leak openings is a novel idea, Chatzigeorgiou says, and key to the effectiveness of this method: the approach can sense a rapid change in pressure close to the leak itself, providing pinpoint accuracy in locating leaks.


newsoffice 00270.txt

#Who's using your data? By now, most people feel comfortable conducting financial transactions on the Web. The cryptographic schemes that protect online banking and credit card purchases have proven their reliability over decades.

As more of our data moves online, a more pressing concern may be its inadvertent misuse by people authorized to access it. At the same time, tighter restrictions on access could undermine the whole point of sharing data. Coordination across agencies and providers could be the key to quality medical care, and you may want your family to be able to share the pictures you post on a social-networking site.

Researchers in the Decentralized Information Group (DIG) at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) believe the solution may be transparency rather than obscurity. They have developed a protocol, HTTP with Accountability (HTTPA), which will automatically monitor the transmission of private data and allow the data owner to examine how it's being used.

At the IEEE's Conference on Privacy, Security and Trust in July, Oshani Seneviratne, an MIT graduate student in electrical engineering and computer science, and Lalana Kagal, a principal research scientist at CSAIL, will present a paper that gives an overview of HTTPA and presents a sample application, involving a health-care records system, that Seneviratne implemented on the experimental network PlanetLab.

DIG is directed by Tim Berners-Lee, the inventor of the Web and the 3Com Founders Professor of Engineering at MIT, and it shares office space with the World Wide Web Consortium (W3C), the organization, also led by Berners-Lee, that oversees the development of Web protocols like HTTP, XML, and CSS. DIG's role is to develop new technologies that exploit those protocols. With HTTPA, each item of private data would be assigned its own uniform resource identifier (URI), a key component of the Semantic Web, a new set of technologies, championed by the W3C, that would convert the Web from essentially a collection of searchable

text files into a giant database. Remote access to a Web server would be controlled much the way it is now, through passwords and encryption. But every time the server transmitted a piece of sensitive data, it would also send a description of the restrictions on the data's use. And it would log the transaction, using only the URI, somewhere in a network of encrypted, special-purpose servers.

HTTPA would be voluntary: It would be up to software developers to adhere to its specifications when designing their systems. But HTTPA compliance could become a selling point for companies offering services that handle private data. "It's not that difficult to transform an existing website into an HTTPA-aware website," Seneviratne says. "On every HTTP request, the server should say, 'OK, here are the usage restrictions for this resource,' and log the transaction in the network of special-purpose servers."
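The article does not specify a wire format; the header name and log fields in this sketch are invented for illustration:

```python
import hashlib, json, time

# Hypothetical HTTPA-aware handler: attach usage restrictions to the
# response and log the transaction against the resource's URI only.
AUDIT_LOG = []   # stand-in for the network of special-purpose log servers

def serve(resource_uri, requester, payload):
    restrictions = ["no-redistribution", "research-use-only"]  # invented
    AUDIT_LOG.append({
        "uri": resource_uri,                      # only the URI is logged,
        "requester": requester,                   # never the data itself
        "time": time.time(),
        "entry_hash": hashlib.sha256(
            f"{resource_uri}{requester}".encode()).hexdigest(),
    })
    headers = {"Usage-Restrictions": ", ".join(restrictions)}  # invented header
    return headers, payload

headers, body = serve("https://example.org/records/123", "dr-smith", "...")
print(json.dumps(headers, indent=2))
print(f"{len(AUDIT_LOG)} audit entries recorded")
```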

An HTTPA-compliant program also incurs certain responsibilities if it reuses data supplied by another HTTPA-compliant source.

Suppose, for instance, that a consulting specialist in a network of physicians wishes to access data created by a patient's primary-care physician, and suppose that she wishes to augment the data with her own notes. Her system would then create its own record, with its own URI.

But using standard Semantic Web techniques, it would mark that record as derived from the PCP's record and label it with the same usage restrictions. The network of servers is where the heavy lifting happens.

When the data owner requests an audit, the servers work through the chain of derivations, identifying all the people who have accessed the data and what they've done with it.

Seneviratne uses a technology known as distributed hash tables (the technology at the heart of peer-to-peer networks like BitTorrent) to distribute the transaction logs among the servers.

Redundant storage of the same data on multiple servers serves two purposes: First, it ensures that if some servers go down, data will remain accessible. And second, it provides a way to determine whether anyone has tried to tamper with the transaction logs for a particular data item, such as to delete the record of an illicit use. A server whose logs differ from those of its peers would be easy to ferret out.
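A small sketch of hash-based log placement with redundancy and a majority-vote tamper check (the server count and replication factor are illustrative, not from the paper):

```python
import hashlib

SERVERS = [f"server-{i}" for i in range(8)]
REPLICAS = 3

def replica_set(data_uri):
    """Pick REPLICAS servers deterministically from a hash of the URI."""
    digest = int(hashlib.sha256(data_uri.encode()).hexdigest(), 16)
    start = digest % len(SERVERS)
    return [SERVERS[(start + i) % len(SERVERS)] for i in range(REPLICAS)]

def detect_tampering(logs_by_server):
    """Majority vote across replicas: a deviating server stands out."""
    counts = {}
    for log in logs_by_server.values():
        counts[tuple(log)] = counts.get(tuple(log), 0) + 1
    majority = max(counts, key=counts.get)
    return [s for s, log in logs_by_server.items() if tuple(log) != majority]

uri = "https://example.org/records/123"
servers = replica_set(uri)
logs = {s: ["access:dr-smith", "access:pharmacy"] for s in servers}
logs[servers[1]] = ["access:dr-smith"]            # one replica "loses" an entry
print("replicas:", servers)
print("suspicious:", detect_tampering(logs))      # -> the tampered server
```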

To test the system, Seneviratne built a rudimentary health-care records system from scratch and filled it with data supplied by 25 volunteers. She then simulated a set of transactions (pharmacy visits, referrals to specialists, use of anonymized data for research purposes, and the like) that the volunteers reported as having occurred over the course of a year.

Seneviratne used 300 servers on PlanetLab to store the transaction logs; in experiments, the system efficiently tracked down data stored across the network and handled the chains of inference necessary to audit the propagation of data across multiple providers. In practice, audit servers could be maintained by a grassroots network, much like the servers that host BitTorrent files or log Bitcoin transactions.



