

R_www.technology.org 2015 00002135.txt

#Wearable Device Slows Deadly Brain Tumors, Clinical Trial Finds A wearable device that emits low-level electrical fields can slow the progression of glioblastoma, the deadliest form of brain cancer,

and extend patients' lifespans, a major clinical trial at the University of Virginia School of Medicine and more than 80 other institutions has found.

Dr. David Schiff of the U.Va. Department of Neurology said the results of the trial have come as "a real shocker" to the field,

noting that glioblastoma is notoriously difficult to treat. "This is a tumor type that it has been very hard to make real progress against."

From the 1960s to the present, we haven't improved the average survival by more than a handful of months.

"So anybody who has been in the field for a while has seen a lot of bright ideas fail," he said. "But this trial in newly diagnosed disease is a different kettle of fish,

because this trial clearly shows an improvement both in time until the tumor starts growing and, more importantly, in overall survival.

And if you can make a difference in overall survival, you're really doing something." Prolonged Survival Median survival among the 210 newly diagnosed patients who wore the device was 19.6 months,

more than three months longer than for the 105 patients who didn't. (Both groups otherwise received the same treatment,

including surgery and chemotherapy.) Forty-three percent of device wearers survived two years; only 29 percent of those who didn't wear the device lived that long.

The device is worn like a skullcap, with electrode pads affixed to the shaved scalp. It is powered by a separate,

portable battery attached by long wires. The cosmetic effect of the device was often a concern for patients who declined to participate in the trial, noted Schiff,

the principal investigator at U.Va. "There are patients who love the idea that this isn't chemotherapy, but not

putting electrode pads on their head and being hooked up to a battery about the size of a small laptop computer

whenever they want to go out and about." One Participant's Experience Trial participant Violet Horst had no such concerns.


R_www.technology.org 2015 00002164.txt

#Brain Imaging Explains Reasons for Good and Poor Language Outcomes in ASD Toddlers Using functional magnetic resonance imaging (fMRI), University of California,

San Diego School of Medicine researchers say it may be possible to predict future language development outcomes in toddlers with autism spectrum disorder (ASD),

even before they've been formally diagnosed with the condition. Image depicts patterns of brain activation in typically developing, ASD "good" and ASD "poor" language ability toddlers in response to speech sounds during their earliest brain scan (ages 12-29 months).

A major challenge of ASD diagnosis and treatment is that the neurological condition, which affects 1 in 68 children in the United States, varies widely in its outcomes,

in part because the underlying causes for different subtypes of autism are diverse and not well understood. "There is no better example than early language development,"

said Eric Courchesne, PhD, professor of neurosciences and co-director of the Autism Center of Excellence at UC San Diego. "Some individuals are minimally verbal throughout life."

New biological ways to identify and stratify the ASD population into clinical subtypes could enable

more individualized treatments, said co-author Karen Pierce, PhD, associate professor of neurosciences and co-director of the Autism Center of Excellence.

Courchesne, first author Michael V. Lombardo, PhD, a senior researcher at the University of Cambridge and assistant professor at the University of Cyprus, Pierce and colleagues describe the first effort to create a process capable

The researchers combined prospective fMRI measurements of neural systems' response to speech in children at the earliest ages at which risk of ASD can be detected clinically in a general pediatric population (at approximately ages 1-2 years).

They found that pre-diagnosis fMRI response to speech in ASD toddlers with relatively good language outcomes was highly similar to that of non-ASD comparison groups, with robust responses to language in superior temporal cortices,

and that such responses underlie later good versus poor language outcomes in autism. These findings, said researchers, will open new avenues of progress towards identifying the causes and best treatments for these two very different types of autism. "For the first time,

our study shows a strong relationship between irregularities in speech-activation in the language-critical superior temporal cortex and actual,

memory and motor skills were involved. "Our work represents one of the first attempts using fMRI to define a neurofunctional biomarker of a subtype in very young ASD toddlers,

but not all, on the autism spectrum."


R_www.technology.org 2015 00002225.txt

#Graphics in reverse Most recent advances in artificial intelligence, such as mobile apps that convert speech to text, are the result of machine learning, in

which computers are turned loose on huge data sets to look for patterns. To make machine-learning applications easier to build,

computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts.

In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.

At the Computer Vision and Pattern Recognition conference in June, MIT researchers will demonstrate that on some standard computer-vision tasks,

short programs, less than 50 lines long, written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code. "This is the first time that we're introducing probabilistic programming in the vision area,

says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. "The whole hope is to write very flexible models, both generative and discriminative models,

as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems."

By the standards of conventional computer programs, those "models" can seem absurdly vague. One of the tasks that the researchers investigate,

for instance, is constructing a 3-D model of a human face from 2-D images. Their program describes the principal features of the face as two symmetrically distributed objects (eyes) with two more centrally positioned objects beneath them (the nose and mouth).

It requires a little work to translate that description into the syntax of the probabilistic programming language.
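The inverse-graphics idea behind such a program can be sketched in a few lines. The toy example below is in Python rather than Picture's Julia-based syntax, and the renderer, the latent parameter, and all names are invented for illustration: a deterministic "graphics" routine draws a 1-D image, and inference inverts it by sampling guesses from a prior and weighting each by how well its rendering matches the observation.

```python
import math
import random

# Toy "inverse graphics" (illustrative only; not Picture's actual API).
# Forward direction: render a scene. Inverse direction: infer the scene
# parameter from an image by likelihood weighting.

def render(width, size=20):
    """Forward graphics: a bright bar of `width` pixels on a dark strip."""
    return [1.0 if i < width else 0.0 for i in range(size)]

def likelihood(image, observed, noise=0.5):
    """Unnormalized Gaussian pixel-match score."""
    sq_err = sum((a - b) ** 2 for a, b in zip(image, observed))
    return math.exp(-sq_err / (2 * noise ** 2))

def infer_width(observed, samples=5000, size=20):
    """Likelihood weighting: posterior mean of the latent bar width."""
    total_weight = weighted_sum = 0.0
    for _ in range(samples):
        w = random.randint(0, size)  # uniform prior over bar widths
        weight = likelihood(render(w, size), observed)
        total_weight += weight
        weighted_sum += weight * w
    return weighted_sum / total_weight

random.seed(0)
observed = render(8)  # the "photo": a bar 8 pixels wide
print(round(infer_width(observed)))  # recovers the latent width: 8
```

The point of the sketch is the division of labor the article describes: the model is just the forward renderer plus a prior, and a general-purpose inference scheme does the rest.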

Joining Kulkarni on the paper are his adviser, professor of brain and cognitive sciences Josh Tenenbaum;

and Pushmeet Kohli of Microsoft Research Cambridge. For their experiments, they created a probabilistic programming language they call Picture,

which is an extension of Julia, another language developed at MIT. What's old is new The new work,

Kulkarni says, revives an idea known as inverse graphics, which dates from the infancy of artificial-intelligence research.

Even though their computers were painfully slow by today's standards, the artificial-intelligence pioneers saw that graphics programs would soon be able to synthesize realistic images by calculating the way in

which light reflected off of virtual objects. This is, essentially, how Pixar makes movies. Some researchers,

like the MIT graduate student Larry Roberts, argued that deducing objects' three-dimensional shapes from visual information was simply the same problem in reverse.

Calculating the color value of the pixels in a single frame of "Toy Story" is a huge computation, but it's deterministic. Inferring shapes from an image, by contrast, requires reasoning under uncertainty, which is

what probabilistic programming languages are designed to do. Kulkarni and his colleagues considered four different problems in computer vision,

each of which involves inferring the three-dimensional shape of an object from 2-D information. On some tasks, their simple programs actually outperformed prior systems.

Learning to learn In a probabilistic programming language, the heavy lifting is done by the inference algorithm, the algorithm that continuously readjusts probabilities on the basis of new pieces of training data.

In that respect, Kulkarni and his colleagues had the advantage of decades of machine-learning research. Built into Picture are several different inference algorithms that have fared well on computer-vision tasks.

Time permitting, it can try all of them out on any given problem, to see which works best.
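That "try them all" strategy is simple to sketch. In the hypothetical Python fragment below, the algorithm names, the canned results, and the scoring function are stand-ins invented for illustration, not Picture's real API; a real system would run actual inference rather than return stub scores.

```python
# Hypothetical sketch: run several inference algorithms on one problem
# and keep whichever scores best. The "algorithms" here are stubs that
# return canned log-likelihoods.

def score(result):
    """Rank results by model fit (higher log-likelihood is better)."""
    return result["log_likelihood"]

def run_all(problem, algorithms):
    """Run every algorithm on `problem`; return the best name and result."""
    results = {name: algo(problem) for name, algo in algorithms.items()}
    best = max(results, key=lambda name: score(results[name]))
    return best, results[best]

algorithms = {
    "metropolis_hastings": lambda p: {"log_likelihood": -12.4},
    "gradient_based":      lambda p: {"log_likelihood": -7.1},
    "particle_filter":     lambda p: {"log_likelihood": -9.8},
}

best_name, best_result = run_all(problem=None, algorithms=algorithms)
print(best_name)  # gradient_based
```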

Moreover, Kulkarni says, Picture is designed so that its inference algorithms can themselves benefit from machine learning, modifying themselves as they go to emphasize strategies that seem to lead to good results. "Using learning to improve inference will be task-specific,

but probabilistic programming may alleviate rewriting code across different problems," he says. "The code can be generic

if the learning machinery is powerful enough to learn different strategies for different tasks." "Picture provides a general framework that aims to solve nearly all tasks in computer vision,

says Jianxiong Xiao, an assistant professor of computer science at Princeton University, who was not involved in the work. "It goes beyond image classification, the most popular task in computer vision,

and tries to answer one of the most fundamental questions in computer vision: What is the right representation of visual scenes?

It is the beginning of a modern revisit of inverse-graphics reasoning."


R_www.technology.org 2015 00002227.txt

#The Brazilian deforestation puzzle Brazil's rate of deforestation went down dramatically over the last ten years.

It's not completely clear why that happened. The trend now seems to be reversing (or at least encountering an upward blip).

But it's not clear why that's happening either. I wish I had a clear explanation to give you.

A big part of the story seems to involve national policy shifts, but there are some complications that don't seem to have obvious explanations.

Aerial view of the Amazon Rainforest, near Manaus. Image credit: Neil Palmer/CIAT via Wikimedia Commons

These developments are hugely important. The Amazon, most of which is in Brazil,

is one of the world's great centers of biodiversity. There are places where you can see as many kinds of butterflies in a day as live in all of the eastern United States.

And the Amazon is also an enormous repository of carbon. The destruction of the Amazon is bad news for the whole planet.

Let's start with the good-news part of the story, the decade-long decrease in deforestation.

What was behind that? One possible reason, advanced by ecologist Philip Fearnside in an interview on Yale 360, is economic:

Commodity prices went down, and so did the value of Brazilian currency, making food exports unprofitable.

So people stopped cutting down rain forest. There could be something to that, but there is also reason to be skeptical.

That's because Brazil was an exception to the trend, both in other Amazonian countries and globally,

which was in the opposite direction of increasing deforestation. Both economic circumstances and the drivers of deforestation vary from place to place,

but it's not obvious why Brazil, living in the same world economy as Peru,

Bolivia, and other countries with tropical forest should have responded to global economic changes with a big dip in deforestation

while deforestation in the other countries grew sharply. Also, deforestation decreases in different Brazilian states weren't in lockstep,

which also suggests the possibility that uniform national forces applying to the whole country may not be the complete explanation.

Finally, Brazilian beef and soybean production were rising during much of the same time when deforestation was falling.

The other plausible cause involves Brazil's forest policy. Brazil adopted a number of anti-deforestation measures,

including increased enforcement. In a particularly interesting move, the government used its control of state banks to deter agricultural

and logging use of forests. These policies are being studied carefully in other countries with deforestation problems.

It's frustrating that we don't have better empirical analysis of the role of policy versus economics in driving the declining trend.

Whatever the reasons, however, the downward trend was very good news. Unfortunately, as Dr. Fearnside points out, there are indications that the trend is reversing.

Satellite evidence suggests that deforestation has doubled in the past year. The government has confirmed this shift. The reasons for this change are also unclear.

Fearnside attributes some of the change to a shift in exchange rates favoring exporters. (I'm not sure about that as a cause: Brazilian beef exports are actually down this year,

and so are soy exports.) But Fearnside also lays considerable blame on policy changes by the current government that have weakened forest protections, in particular an amnesty that gives violators reason to hope that there will be future episodes of forgiveness of their current sins.

There have been other changes that have also lowered protections. An additional reason may be that the Brazilian economy as a whole has been faltering,

which may have made profits from deforestation more appealing compared to the alternatives. There's an oddity about where deforestation is increasing.

In 2013, deforestation in Pará was about as great as in Rondônia and Mato Grosso combined

(see this chart). But in 2014, each of those states had more deforestation than Pará (see here).

There seem to be two possible explanations. One is that the drivers of deforestation in those states are much different,

so countrywide economic or legal changes have very different effects in each state. Mato Grosso, for example, is a far bigger soybean producer than the other two states.

The other is that there are differences in policy or enforcement (either by the states themselves or by the local federal authorities) that are driving the outcomes.

There are some indications that state governments and independent prosecutors made a difference, along with national policy, in the earlier decrease in deforestation,

so maybe changes at those levels are part of the current increase. The most obvious explanation for the overall trends is the change in national policy first to sharply restrict deforestation,

then to loosen the restrictions a bit. It seems very likely that this obvious explanation is a big part of the story.

But the story is complicated by the economic changes that were happening in the background and by the differences in the way the trends have played out at the state level.

So there are some empirical puzzles waiting to be resolved. Whatever the causes for this year's reversal in trends,

we can only hope that Brazil gets back on track in its efforts to control deforestation.


R_www.technology.org 2015 00002275.txt

#New Way to Fight Cancer Targeted cancer therapies work by blocking a single oncogenic pathway to halt tumor growth.

But because cancerous tumors have the unique ability to activate alternative pathways, they are often able to evade these therapies and regrow.

Moreover, tumors contain a small portion of cancer stem cells that are believed to be responsible for tumor initiation, metastasis and drug resistance.

Thus, eradicating cancer stem cells may be critical for achieving long-lasting remission, but there are no drugs available that specifically attack cancer stem cells.

A Surprising Discovery Now a research team led by investigators at Harvard Medical School and the Cancer Center at Beth Israel Deaconess Medical Center

has identified an inhibitor of the Pin1 enzyme that can address both these challenges in acute promyelocytic leukemia (APL) and triple-negative breast cancer.

Their surprising discovery demonstrates that the vitamin A derivative ATRA (all-trans retinoic acid), a treatment for APL that is considered to be the first example of modern targeted cancer therapy,

can block multiple cancer-driving pathways and, at the same time, eliminate cancer stem cells by degrading the Pin1 enzyme.

Reported online in Nature Medicine, these novel findings suggest a promising new way to fight cancer, particularly cancers that are aggressive

or drug resistant. "Pin1 changes protein shape through proline-directed phosphorylation, which is a major control mechanism for disease,

explained co-senior author Kun Ping Lu, HMS professor of medicine and director of translational therapeutics in the Cancer Research Institute at Beth Israel Deaconess. "Pin1 is a common key regulator in many types of cancer

and as a result can control over 50 oncogenes and tumor suppressors, many of which are known to also control cancer stem cells."

Lu also co-discovered the enzyme in 1996. A Different Approach Until now, agents that inhibit Pin1 have been developed mainly through rational drug design.

Although these inhibitors have proven active against Pin1 in the test tube, when they are tested in vitro in a cell model

or in vivo in a living animal, they are unable to efficiently enter cells to successfully inhibit Pin1 function.

In this new work, co-senior author Xiao Zhen Zhou, HMS assistant professor of medicine and an investigator in the Division of Translational Therapeutics at Beth Israel Deaconess, decided to take a different

approach to identifying Pin1 inhibitors: She developed a mechanism-based high-throughput screen to identify compounds targeting active Pin1. "We had previously identified Pin1 substrate-mimicking peptide inhibitors,

explained Zhou. "We therefore used these as a probe in a competition binding assay and screened approximately 8,

Zhou chose a probe that specifically binds to the Pin1 enzyme active site very tightly,

an approach that is not commonly used for this kind of screen. "Initially, the screening results appeared to have no positive hits,

in order to bind Pin1. "While it has been shown previously that ATRA's ability to degrade the leukemia-causing fusion oncogene PML-RAR causes ATRA to stop the leukemia stem cells that drive APL,

when two tumor suppressors fuse together to become an oncogene, added co-author Pier Paolo Pandolfi, the HMS George C. Reisman Professor of Medicine and director of the Cancer Genetics Program at Beth Israel Deaconess,

whose own pioneering work revealed the molecular underpinnings of APL and led to its cure. "These new findings demonstrate that by inhibiting Pin1,

you can degrade this fusion oncogene, thereby stopping cancer stem cells from replicating. This is a critically important discovery that will impact the treatment of other forms of cancer

since Pin1 inhibition also affects other key oncogenes." To that end, the authors also tested ATRA in triple-negative breast cancer, one of the most aggressive types of breast cancer.

They discovered that ATRA-induced Pin1 ablation also potently inhibits triple-negative breast cancer growth in human cells and in animal models by simultaneously turning off many oncogenes and turning on many tumor suppressors.

Aiming at "Dream" Targets These new results, say the authors, provide a rationale for developing longer half-life ATRA

or more potent and specific Pin1-targeted ATRA variants for cancer treatment. "The current ATRA drug has a very short half-life of only 45 minutes in humans,

explained Lu. "We think that a more potent Pin1 inhibitor will be able to target many 'dream' targets,

and offers a promising new approach for targeting a Pin1-dependent common oncogenic mechanism in numerous cancer-driving pathways in cancer

and cancer stem cells. This is especially critical for treating aggressive or drug-resistant cancers."


R_www.technology.org 2015 00002280.txt

#Computer-Designed Rocker Protein World's First to Biomimic Ion Transport For the first time, scientists recreated the biological function of substrate transport across cell membranes by computationally designing a transporter protein.

The designed protein, dubbed Rocker, was shown to transport ions across the membrane, a process crucial to cell and organismal survival in various functions,

such as nutrient intake, efflux of waste or drugs, and cell signaling, for instance, between nerve cells in the brain and spinal cord. "To our knowledge, this is the first transport protein designed from scratch, that is,

said study co-author Michael Grabe, an associate professor in the Department of Pharmaceutical Chemistry and the Cardiovascular Research Institute at the University of California, San Francisco. The advance could have applications

such as targeting medicines more specifically into cancer cells and driving charge separation, potentially for harvesting energy for batteries.

The engineered Rocker protein acts like a tiny gate, designed so that zinc ions and protons can flow in a controlled way across the lipid-membrane barrier around the cell-like vesicle,

and the engineered system works like natural transport proteins found throughout the human body. Rocker was designed to perform this function by doing one thing: "rocking" between two different shapes, or conformations,

said Jean Chin of the National Institutes of Health's National Institute of General Medical Sciences, which funded the work through grants to several of the paper's co-authors. "The highly collaborative team used its deep knowledge of the structures,

Simulations on the Stampede supercomputer of the Texas Advanced Computing Center (TACC) bridged the gap between the drawing board and the laboratory.

The molecular dynamics simulations comprised about a microsecond of aggregate simulation time with classical force fields, using the NAMD software program.

The computer allocation was made through XSEDE, the Extreme Science and Engineering Discovery Environment, a single virtual system funded by the National Science Foundation (NSF) that scientists use to interactively share computing resources,

data and expertise. "For my research lab, my XSEDE resources are absolutely essential," Grabe said.

Protein engineering lags far behind the genetic engineering of modifying DNA that has been around since the 1970s.

Grabe views it as a first step in advancing the new discipline of protein engineering to the level of genetic engineering,

000 cores of the Stampede supercomputer. Grabe was cautious about the potential application of this research,

but he did say there could be benefits to society in areas such as energy and medicine. "We envision that this protein can create an electrochemical gradient using things like pH,

using protons. One can imagine in a totally noncellular case that one could potentially harvest this kind of pumping to create things like batteries.

This might not happen in the very near future, but things like this could be used to do some kind of energy harvesting,

Grabe said. Moving things in and out of living cells would be even harder, Grabe said. "You have a very hard problem of spatial and temporal specificity,

say, a cancerous cell with a stereotypical membrane environment, and load it full of zinc, or run down a chemical gradient that's already there.

But if you could transport something into the cell, such as a toxic ion or small molecule, that could be quite interesting."


R_www.technology.org 2015 00002314.txt

According to the American Heart Association, African Americans are at a higher risk than whites of dying of heart disease.

Research over the last decade has demonstrated that discrimination against African Americans may be a direct cause of many of these health problems.

Goosby recruited 44 low-income African American adolescents between 10 and 15 years old from an Omaha health fair she organized with funding from the National Institutes of Health.

Blood samples and blood pressure readings were collected, as well as information about stress and perceived discrimination through interviews.

Goosby found a strong correlation between perceived discrimination and levels of C-reactive protein, a marker of inflammation, as well as blood pressure.

"I didn't expect that discrimination would have such a strong relationship with these particular markers of cardiovascular disease risk in kids this young," she said.
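The association Goosby reports is, at its core, a correlation between survey scores and biomarker levels. As a purely illustrative sketch (the numbers below are invented and are not the study's data), a Pearson correlation between a perceived-discrimination score and a C-reactive protein level can be computed like this:

```python
# Illustrative only: invented toy data, not the study's measurements.
# Computes the Pearson correlation coefficient between a survey score
# and a biomarker level.

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical values: discrimination score (0-6) vs. CRP level (mg/L).
discrimination = [1, 2, 2, 3, 4, 5, 5, 6]
crp = [0.4, 0.6, 0.5, 0.9, 1.1, 1.3, 1.2, 1.6]
print(round(pearson(discrimination, crp), 2))  # 0.99: a strong correlation
```

A value near 1 indicates the strong positive relationship the article describes; correlation alone, of course, does not establish causation.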

Goosby is creating a report of her findings for Omaha city leaders, education leaders and advocates.

The study was co-authored by Jacob Cheadle and Deadric T. Williams and was published online in the American Journal of Human Biology.


R_www.technology.org 2015 00002374.txt

#Electrolyte Genome Could Be Battery Game-Changer A new breakthrough battery, one that has significantly higher energy, would require discovering new materials, and that discovery process has long involved guesswork.

But Lawrence Berkeley National Laboratory (Berkeley Lab) scientist Kristin Persson says she can take some of the guesswork out of the discovery process with her Electrolyte Genome.

Think of it as a Google-like database of molecules. A battery scientist looking for a new electrolyte would specify the desired parameters and properties

and the Electrolyte Genome would return a short list of promising candidate molecules, dramatically speeding up the discovery timeline. "Electrolytes are a stumbling block for many battery technologies,

whether the platform is designed for electric vehicles or a flow battery for grid applications," Persson said. "What we can do is calculate the properties of a large number of molecules

and give experimentalists a much better set of materials to work with than if they were to explore all possible combinations."

The electrolyte is a chemical substance that carries electrical charge between the battery anode and cathode to charge

and discharge the cell. It consists of a salt and solvent, possibly additives, and, not by design, impurities.

Persson's Electrolyte Genome, launched more than two years ago, uses high-throughput computer screening to calculate the properties

not only of these three components but also of their interactions with each other. "If we can come up with an electrolyte that has a higher electrochemical window for multivalent batteries,

or with larger solubility for certain redox molecules, if we can solve either of these, you suddenly enable the whole industry,"

Persson said. "It could be a game-changer." Faster, smarter, better Besides being faster and more efficient in screening out bad candidates,

the Electrolyte Genome offers two other significant advantages to battery scientists. The first is that it could generate novel ideas. "While there are some amazing organic chemists out there,

this allows us to be agnostic in how we search for novel ideas instead of relying purely on chemical intuition,

The second advantage of the Electrolyte Genome is that it can add to scientists' fundamental understanding of chemical interactions. "It adds explanations of why certain things work

or don't work," Persson said. "Frequently we rely on trial and error. If something doesn't work, we throw it away

and go to the next thing, but we don't understand why it didn't work. Having an explanation becomes very useful; we can apply the principles we've learned to future guesses.

So the process becomes knowledge-driven rather than trial and error. How it works: the funnel method The Electrolyte Genome uses the infrastructure of the Materials Project, a database of calculated properties of thousands of known materials,

co-founded by Persson and Gerbrand Ceder of MIT. The researchers apply a funnel idea: cheap, fast calculations screen out most candidates first, and more expensive calculations are reserved for the most promising survivors.
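A funnel screen of this kind can be sketched in a few lines. In the Python sketch below, the molecule records, property names, and thresholds are all invented for illustration; a real pipeline would use quantum-chemistry calculations rather than toy scores.

```python
# Hedged sketch of a screening "funnel": a cheap filter prunes most
# candidates, and an expensive calculation runs only on the survivors.
# All molecule data, property names, and thresholds here are made up.

candidates = [
    {"name": "mol_a", "est_window_v": 4.2, "est_solubility": 0.80},
    {"name": "mol_b", "est_window_v": 2.1, "est_solubility": 0.90},
    {"name": "mol_c", "est_window_v": 4.5, "est_solubility": 0.10},
    {"name": "mol_d", "est_window_v": 4.8, "est_solubility": 0.75},
]

def cheap_filter(mol):
    """Stage 1: fast estimated-property thresholds prune the pool."""
    return mol["est_window_v"] >= 4.0 and mol["est_solubility"] >= 0.5

def expensive_score(mol):
    """Stage 2: stand-in for a costly first-principles calculation."""
    return mol["est_window_v"] * mol["est_solubility"]

survivors = [m for m in candidates if cheap_filter(m)]
best = max(survivors, key=expensive_score)
print([m["name"] for m in survivors], "->", best["name"])
# ['mol_a', 'mol_d'] -> mol_d
```

The funnel shape is the point: the expensive stage runs on two molecules instead of four, and in a real screen on dozens instead of thousands.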

The methodology has been validated with known electrolytes. Using the supercomputers at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC),

the researchers can screen hundreds of molecules per day. To date, more than 15,000 molecules for electrolytes have been calculated, including 10,000 redox-active molecules and hundreds of conductive network molecules.

Early success stories The Electrolyte Genome's first major scientific finding was that magnesium electrolytes are very prone to forming ion pairs.

They had another success screening molecules for redox capabilities for flow batteries for fellow Berkeley Lab scientist Brett Helms. "He basically gave us a chemical space of organogelator molecules and asked,

'If I want a voltage window that's precisely here?'" Persson said. "We filtered down about a hundred candidates to one."



