in order to deal with these roadblocks and transmit accurate genetic data. Lesions in DNA can occur as often as 100,000 times per cell per day.
#Tablet for 2 waiting at an Olive Garden near you soon
Olive Garden, owned by Florida's Darden Restaurants Inc.,
started using Ziosk tablets in some of its restaurants last year. The chain said Tuesday that locations using the devices have experienced faster dining times and increased tip percentages for wait staff.
and we're excited to give our guests the ability to customize their visit by leveraging the technology of Ziosk's tabletop tablets," Dave George,
Tablets have made appearances in airports, where travelers can have food delivered to where they sit, but are still limited in the traditional restaurant scene.
Ziosk tablets are in use at Chili's restaurants and are in the process of launching nationwide at Red Robin.
In 2012 it was paused for an extensive upgrade. The new, upgraded and supercharged LHC restarts at almost twice the energy,
so it will deliver much more data. "I think that in the LHC Run 2 we will sieve through more data than in all particle physics experiments in the world together for the past 50 years,"
Stroynowski said. "Nature would be really strange if we do not find something new." SMU is active on the LHC's ATLAS detector experiment within the big LHC tunnel,
That's a lot of collision data, says SMU physicist Robert Kehoe, a member of the ATLAS particle detector experiment with Stroynowski and other SMU physicists.
Evaluating that much data isn't humanly possible, so a computerized ATLAS hardware "trigger system" grabs the data,
makes a fast evaluation, decides if it might hold something of interest to physicists, then quickly discards the rest.
"That gets rid of 99.999 percent of the data," Kehoe said. "This trigger hardware system makes measurements,
but they are very crude, fast and primitive." To further pare down the data, a custom-designed software program culls even more data from each nanosecond grab,
reducing 40 million events down to 200. Two groups from SMU, one led by Kehoe, helped develop software to monitor the performance of the trigger systems' thousands of computer processors.
"The software program has to be accurate in deciding which 200 to keep. We must be very careful that it's the right 200, the 200 that might tell us more about the Higgs boson, for example.
If it's not the right 200, then we can't achieve our scientific goals."
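As a rough schematic of that two-stage selection, here is a toy Python sketch; the event stream, "interest" scores and cut rates are invented for illustration and this is not ATLAS trigger code, which makes far harsher cuts in custom hardware:

```python
import random

def hardware_trigger(event):
    # Stage 1: a fast, crude cut. The real ATLAS hardware trigger discards
    # about 99.999 percent of events; this toy cut keeps roughly 1 in 1,000.
    return event["interest"] > 0.999

def software_trigger(events, budget=200):
    # Stage 2: a slower, more careful ranking that keeps only a small budget
    # of the most promising events (the article's "right 200").
    return sorted(events, key=lambda e: e["interest"], reverse=True)[:budget]

# A simulated stream of collision "events" with a made-up interest score.
stream = [{"id": i, "interest": random.random()} for i in range(1_000_000)]

passed_hardware = [e for e in stream if hardware_trigger(e)]
kept = software_trigger(passed_hardware)
print(f"{len(stream)} events -> {len(passed_hardware)} after hardware cut -> {len(kept)} kept")
```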
"The ATLAS computers are part of CERN's computing center, which stores more than 30 petabytes of data from the LHC experiments every year, the equivalent of 1. 2 million Blu-ray discs.
Flood of data from ATLAS transmitted via tiny electronics designed at SMU to withstand harsh conditions
An SMU physics team also collaborates on the design,
construction and delivery of the ATLAS "readout" system, an electronic system within the ATLAS trigger system that sends collision data from ATLAS to its data processing farm.
Data from the ATLAS particle detector's Liquid Argon Calorimeter is transmitted via 1,524 small fiber-optic transmitters.
A powerful and reliable workhorse, the link is one of thousands of critical components on the LHC that contributed to discovery and precision measurement of the Higgs boson.
The custom-made high-speed data transmitters were designed to withstand extremely harsh conditions: low temperature and high radiation. "It's not always a smooth ride operating electronics in such a harsh environment.
Failure of any transmitter results in the loss of a chunk of valuable data. We're working to improve the design for future detectors because by 2017 and 2018,
the existing optical data-link design won't be able to carry all the data." Each electrical-to-optical and optical-to-electrical signal converter transmits 1.6 gigabytes of data per second.
Lighter and smaller than their widely used commercial counterpart, the tiny, wickedly fast transmitters have been transmitting from the Liquid Argon Calorimeter for about 10 years.
Upgraded optical data link is now in the works to accommodate beefed-up data flow
A more powerful data link, much smaller and faster than the current one, is in research and development now.
It has the capacity to deliver 5.2 gigabytes of data per second. The new link's design has been even more challenging than the first
but handles more data, while at the same time maintaining the existing power supply and heat exchanger now in the ATLAS detector.
The link will have the highest data density in the world of any data link based on the transmitter optical subassembly (TOSA), a standard industrial package,
Fine-tuning the new, upgraded machine will take several weeks
The world's most powerful machine for smashing protons together will require some "tuning" before physicists from around the world are ready to take data,
so physicists on shifts in the control room can take high-quality data under stable operating conditions.
Sekula said.
10 times as many Higgs particles means a flood of data to sift for gems
LHC Run 2 will collide particles at a staggering 13 teraelectronvolts (TeV),
"On paper, Run 2 will give us four times more data than we took on Run 1."
"But each of those multiples of data are actually worth more. Because not only are we going to take more collisions,
SMU's ManeFrame supercomputer plays a key role in helping physicists from the Large Hadron Collider experiments.
One of the fastest academic supercomputers in the nation, it allows physicists at SMU and around the world to sift through the flood of data,
quickly analyze massive amounts of information, and deliver results to the collaborating scientists. During Run 1, the LHC delivered about 8,500 Higgs particles a week to the scientists,
but some will be more important than others. There are a handful of major new discoveries that could emerge from Run 2 data,
and Professor Lennart Lindfors, of AstraZeneca, Sweden, have mapped out in diagram format the actual movements made by chemical molecules on their breeding journey using computer simulations.
T. C. Chang Professor of Computer Science at Columbia Engineering, has invented a prototype video camera that is the first to be fully self-powered. It can produce an image each second, indefinitely, of a well-lit indoor scene.
They designed a pixel that can not only measure incident light but also convert the incident light into electric power.
who directs the Computer Vision Laboratory at Columbia Engineering. He notes that in the last year alone,
At the heart of any digital camera is an image sensor, a chip with millions of pixels.
The key enabling device in a pixel is the photodiode, which produces an electric current when exposed to light.
This mechanism enables each pixel to measure the intensity of light falling on it. The same photodiode is used also in solar panels to convert incident light to electric power.
The photodiode in a camera pixel is used in the photoconductive mode, while in a solar cell it is used in the photovoltaic mode.
Nayar, working with research engineer Daniel Sims BS'14 and consultant Mikhail Fridberg of ADSP Consulting, used off-the-shelf components to fabricate an image sensor with 30x40 pixels.
Each pixel's photodiode is always operated in the photovoltaic mode. The pixel design is very simple,
and uses just two transistors. During each image capture cycle, the pixels are used first to record
and read out the image and then to harvest energy and charge the sensor's power supply. The image sensor continuously toggles between image capture and power harvesting modes.
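A minimal sketch of that duty cycle, assuming a toy energy budget; the numbers and the function below are invented for illustration and are not Nayar's design:

```python
def run_sensor(cycles, harvest_per_cycle=1.2, cost_per_frame=1.0, charge=5.0):
    # Alternate between an image-capture phase and an energy-harvesting phase,
    # tracking a made-up charge level for the sensor's power supply.
    frames = 0
    for _ in range(cycles):
        if charge >= cost_per_frame:
            charge -= cost_per_frame   # capture phase: photodiodes measure light
            frames += 1
        charge += harvest_per_cycle    # harvest phase: the same photodiodes act as tiny solar cells
    return frames, charge

frames, charge = run_sensor(cycles=60)
print(f"{frames} frames captured, {charge:.1f} units of charge left")
```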
such as a phone or a watch. Nayar notes that the image sensor could use a rechargeable battery and charge it via its harvesting capability:"
What the EU says Google is doing wrong
The European Union slapped Google Inc. with antitrust charges Wednesday,
saying it is abusing its dominance in Web search to promote its own products. Here are the major allegations:
The EU says Google is unfairly favoring its own comparison-shopping service in general search results.
a coffee mug would find Google Shopping results for mugs displayed at the top of the search page,
The EU says part of the reason for competing sites' low rankings is that Google applied different parameters to comparison-shopping services,
The EU pointed out that a previous Google shopping site called Froogle did not use a favorable system
The current Google Shopping product, which allegedly uses the favorable system, is experiencing higher growth.
Also under investigation is Google's smartphone operating system, Android. The EU is looking into whether the company is giving smartphone makers unfair incentives for preinstalling Google's applications,
such as the Chrome Web browser and YouTube. The EU is also continuing a formal investigation into concerns that Google copies rivals' Web content and places undue restrictions on advertisers
#Using composite material samples, NRL scientists predict aspects of F/A-18 performance
The U.S. Naval Research Laboratory (NRL) has built a robot to pull, bend,
and twist samples of the composite materials used to build F/A-18s and other aircraft.
and wing panel composite skin abnormalities, engineers have had to do extensive analysis to develop repairs. "So the need for certifying a new material comes in,
A custom-developed machine vision system, with four cameras, captures digital images of what's happening in real time.
the scientists use custom-developed full-field measurement algorithms Michopoulos' group has now patented to "take those digital images
"So we do have a very rich database," says Michopoulos.
Computations from robot data predict how materials behave in aircraft
A snapped composite specimen is one thing;
a theory of how that material will behave when formed into a jet wing is another.
they were still getting the same results as CRC-ACS because of all the data they'd captured with NRL66.3. "For me,
NRL collected 12 terabytes of data during the testing period. "Just to tell you how much richness there is in these data,"
he says, "out of the 72 loading paths we applied, an MIT student based one dissertation out of one of these paths."
Michopoulos wants to offer design engineers a simulation environment based on actual data. "Instead of asking the question,
A robot that has a process to optimize its own performance based on how well it collects data for material characterization."
we can have the machine decide where it wants to go to get the best possible data for characterizing the material in real time."
Professor Shu Kobayashi's group at the Graduate School of Science has developed highly active immobilized catalysts (heterogeneous catalysts)
and demonstrated simple and highly efficient synthesis of (R)- and (S)- rolipram by an eight-step continuous flow reaction using multiple column reactors containing the immobilized catalysts.
Professor Kobayashi's application of flow chemistry techniques to the production of fine chemicals using heterogeneous catalysts has resulted in a simple method to synthesize (R)
and without purification of products from catalysts. Professor Kobayashi says, "This new technology can be applied not only to other gamma-aminobutyric acids and medicines but also to various chemicals such as flavors, agricultural chemicals,
"The core transcription machinery of RNA polymerase copies the information found in DNA genes onto MESSENGER RNA molecules that then govern the production of proteins.
She and her team then devised special binary codes to encode individual RNAS, and labeling and imaging schemes to decode these RNA codes.
and act as easily accessed landing sites for fluorescently labeled"readout probes"that are applied to the cells in subsequent rounds.
Those fluorescent spots are translated to the first bit of the binary code: any RNAs that fluoresce at this step are assigned a 1,
Although 16 rounds of imaging could yield more than 60,000 unique binary codes, the team used only a special subset of these codes to encode their RNAs.
her team borrowed an error-correction strategy from the field of digital communications. Instead of assigning all possible codes as identifiers of specific RNAs, they use only codes that differ from all others by more than one bit.
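To illustrate the flavor of that strategy, the toy Python sketch below greedily builds a 16-bit codebook in which every pair of codewords differs in at least two bits, so a single mis-read bit cannot turn one valid code into another; it mirrors only the "differ by more than one bit" rule stated above, not the team's actual codebook:

```python
from itertools import product

def hamming(a, b):
    # Number of bit positions in which two codewords differ.
    return sum(x != y for x, y in zip(a, b))

def build_codebook(n_bits=16, min_distance=2, limit=140):
    # Greedily keep only codewords that differ from every previously chosen
    # codeword by at least two bits (i.e., by "more than one bit").
    codebook = []
    for word in product((0, 1), repeat=n_bits):
        if all(hamming(word, chosen) >= min_distance for chosen in codebook):
            codebook.append(word)
            if len(codebook) >= limit:
                break
    return codebook

codes = build_codebook()
print(len(codes), "codewords; example:", "".join(map(str, codes[1])))
```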
#New software analyses the effect of climate change on buildings from the cloud
Large Spanish construction companies have begun to use a simulation software package,
and Technology at the Polytechnic University of Catalonia (CIT UPC) have developed the first-ever software to analyse the entire life of a building,
Certain details regarding the technology have been published in the journal Advances in Engineering Software. Even before its commercialization, Spanish construction groups such as Acciona and VIA had already used it in some of their projects.
"Another of the software's most original features is the ability to simulate models to show how global warming may affect construction.
Materials database
Furthermore, NECADA has integrated a database which can be used to improve construction and design buildings that are adapted to European legislation.
The algorithms that are currently implemented in the system are some of the heuristic classics, such as hill-climbing, simulated annealing, NSGA-II,
Pau Fonseca indicates that the software consists of a core made up of an engine referred to as SDLPS that enables simulations to be carried out from complete representations of the models using languages such as SDL
"This core can be executed in a computer or be combined in a distributed way in a cluster, speeding up the extraction of results."
enabling a user who is not familiar with simulation to customise the model and execute it in a distributed way,
Furthermore, the software integrates "key factors such as the price of materials, and their transportation, assembly and disassembly, so the construction company can calculate the total cost of the building."
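As a minimal illustration of the heuristic search mentioned above, here is a hill-climbing sketch in Python; the single design variable and the cost model are invented placeholders, not NECADA's actual formulation:

```python
import random

def total_cost(insulation_cm):
    # Toy objective: thicker insulation raises material cost but lowers
    # lifetime energy cost (all numbers are made up for illustration).
    material = 120 * insulation_cm
    energy = 9000 / (1 + insulation_cm)
    return material + energy

def hill_climb(x=1.0, step=0.5, iterations=300):
    # Repeatedly try a small random change and keep it only if it improves the objective.
    best = total_cost(x)
    for _ in range(iterations):
        candidate = max(0.1, x + random.uniform(-step, step))
        cost = total_cost(candidate)
        if cost < best:
            x, best = candidate, cost
    return x, best

thickness, cost = hill_climb()
print(f"best insulation thickness ~ {thickness:.1f} cm, total cost ~ {cost:.0f}")
```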
The technology, using a large screen to enhance the sense of reality and interactive avatars, synchronises the different realities so they all coordinate as one.
The project also involves Essex computer science alumnus Victor Zamudio whose Mexican company Fortito produced the intelligent home prototype Buzzbox used in the experiment.
As an open-access tool, Polymarker lets researchers and crop breeders submit their own data,
In one new software tool, we have applied expertise in advanced algorithm development, knowledge on genetics and principles of genome architecture."
Germany and Canada have built a miniature particle accelerator that uses terahertz radiation instead of radio waves to create pulses of high-energy electrons.
and some physicists are keen on using the radiation in much the same way that radio waves
which causes them to emit intense flashes of X-ray light. Currently, access to large-scale FELs is limited,
receivers and other hardware used to implement it. According to Vadim Makarov of the University of Waterloo and colleagues, many scientists assume that
so that it misses the core of three of the four fibres leading to Bob's polarization detectors.
says this idea of actively damaging QKD components was "not previously on the radar screen" of scientists working on quantum-communication technologies.
the development of QKD is "always a cat and mouse game
#China Stocks Extend Slide Amid Warning of Severe Trade Pressure
Shanghai/Beijing: Chinese stock markets tumbled for a second straight day on Wednesday as investors crowded the exits,
within a few days of the poor July export data and other official figures showing factory-gate prices continued their three-year slide in July, touching a six-year
according to data from National Securities Depository Limited (NSDL). The sales helped push the Nifty down 6.6 per cent in August, its worst monthly performance since November 2011.
Data on Tuesday showed India's economy grew 7 per cent in the April-June quarter from a year earlier, much slower than expected,
"said Davish Jain, chairman of the Soybean Processors Association of India.""Our oilseed and edible oil production will not rise
#E-commerce Players Betting Big on Offline Presence
Many e-commerce players have started foraying into physical retail space
some e-tailers are setting up shops offline. According to property consultant JLL India, Pepperfry is the latest to go 'hybrid'.
"A significant share of our customers still prefer to buy offline and consider online website as a research tool.
The bigger-ticket items are generally bought offline." "Therefore, in the race to provide a complete solution,
such a phenomenon of online players going offline should stay," said Vikas Bhasin, CFO, Pine Labs, an integrated payment solutions company.
"A larger portion of the Indian society is dominated by the middle class. This clientele is very skeptical about what,
For this strong bastion of middle class milieu, offline models like retailer shops have mushroomed all over," said R P Yadav, CMD, Genius Consultants.
and offline models of e-commerce," said Dinesh Gulati, Director, IndiaMART. According to Tripti Lochan, CEO of VML, a leading digital marketing agency, "pureplay online retail will continue to live as not every e-commerce player has need either the
One of our apps is built specifically to help users visualise our sofas in their home with augmented reality."
#Dell to Acquire EMC in $67 Billion Record Tech Deal
Computer maker Dell Inc said on Monday it had agreed to buy data storage company EMC Corp in a $67 billion record technology
and storing data for enterprises." "Dell wants to become the old IBM Corp, a one-stop shop for corporate clients.
That model fell apart a couple of decades ago. Reviving it would be a stunning coup for Dell,
and will also give EMC shareholders a special stock that tracks the share price in virtualization software provider VMware Inc. "The combination of Dell
While IBM Corp, Cisco Systems Inc and Hewlett-Packard Co could theoretically be potential suitors for EMC,
where the data collected is fed directly to robots for interpretation. Zecca explores issues such as how people feel about interacting with the robot on an emotional level,
Massimiliano Zecca holds a PhD in Biomedical Robotics from the Scuola Superiore Sant'Anna
An offboard computer runs the algorithms and sends commands out to the flying machines via a customized wireless infrastructure.
Khazendar's team used data on ice surface elevations and bedrock depths from instrumented aircraft participating in NASA's Operation IceBridge,
Data on flow speeds came from spaceborne synthetic aperture radars operating since 1997. Khazendar noted his estimate of the remnant's remaining life span was based on the likely scenario that a huge,
and study Earth's interconnected natural systems with long-term data records. The agency freely shares this unique knowledge
On a flat lattice, atoms can easily move around from site to site. However, in a tilted lattice, the atoms would have to work against gravity.
However, these molecules can also cause collateral damage to healthy tissue around the infection site:
or site of damage in the structure of DNA, called 5-chlorocytosine (5ClC), in the inflamed tissues of mice infected with the pathogen Helicobacter hepaticus.
the researchers first placed the 5ClC lesion at a specific site within the genome of a bacterial virus. They then replicated the virus within the cell.
when triggered by infection, fires hypochlorous acid at the site, damaging cytosines in the DNA of the surrounding healthy tissue.
The former have a shell that is bonded directly to the core, but yolk-shell particles feature a void between the two, equivalent to where the white of an egg would be.
the aluminum core continuously shrinks to become a 30-nm-across "yolk," which shows that small ions can get through the shell.
This research outcome potentially allows for great flexibility in the design and optimization of electronic and optoelectronic devices like solar panels and telecommunication lasers.
If you pry open one of today's ubiquitous high-tech devices, whether a cellphone, a laptop,
the technology now used in everything from cellphones to electric cars. The electrolyte in such batteries is typically a liquid organic solvent whose function is to transport charged particles from one of a battery's two electrodes to the other during charging
Such batteries provide a 20 to 30 percent improvement in power density with a corresponding increase in how long a battery of a given size could power a phone, a computer,
The finding suggests that quasars, the brilliant cores of active galaxies, may commonly host two central supermassive black holes,
Like a pair of whirling skaters, the black-hole duo generates tremendous amounts of energy that makes the core of the host galaxy outshine the glow of its population of billions of stars
The results were published in the August 14, 2015 edition of The Astrophysical Journal.
Applications of these devices include advanced microscopes, displays, sensors, and cameras that can be mass-produced using the same techniques used to manufacture computer microchips. "These flat lenses will help us to make more compact and robust imaging assemblies,"
said Mahmood Bagheri, a microdevices engineer at JPL and co-author of a new Nature Nanotechnology study describing the devices. "Currently,
Manipulating the polarization of light is essential for the operation of advanced microscopes, cameras and displays;
#New Technique Could Enable Chips with Thousands of Cores
Researchers from MIT have unveiled the first fundamentally new approach to cache coherence in more than three decades,
a memory-management scheme that could help enable chips with thousands of cores. In a modern, multicore chip, every core or processor has its own small memory cache, where it stores frequently used data.
But the chip also has a larger, shared cache, which all the cores can access. If one core tries to update data in the shared cache,
other cores working on the same data need to know. So the shared cache keeps a directory
of which cores have copies of which data. That directory takes up a significant chunk of memory:
In a 64-core chip, it might be 12 percent of the shared cache. And that percentage will only increase with the core count.
Envisioned chips with 128, 256, or even 1,000 cores will need a more efficient way of maintaining cache coherence.
At the International Conference on Parallel Architectures and Compilation Techniques in October, MIT researchers unveil the first fundamentally new approach to cache coherence in more than three decades.
Whereas with existing techniques, the directory memory allotment increases in direct proportion to the number of cores, with the new approach, it increases according to the logarithm of the number of cores.
In a 128-core chip, that means that the new technique would require only one-third as much memory as its predecessor.
With Intel set to release a 72-core high-performance chip in the near future, that's a more than hypothetical advantage.
But with a 256-core chip, the space savings rises to 80 percent, and with a 1,000-core chip, 96 percent.
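A toy calculation makes the scaling difference concrete. The Python model below assumes a conventional full-map directory stores one sharer bit per core for each cache line, while the new approach stores a couple of counters whose width grows only with the logarithm of the core count; it illustrates the trend described above, not the article's exact percentages:

```python
import math

def directory_bits(cores):
    # Conventional full-map directory: one sharer bit per core per cache line.
    return cores

def timestamp_bits(cores, counters=2):
    # Logical-timestamp scheme: a fixed number of counters whose width
    # grows with log2(cores) rather than with the core count itself.
    return counters * math.ceil(math.log2(cores))

for cores in (64, 128, 256, 1024):
    d, t = directory_bits(cores), timestamp_bits(cores)
    print(f"{cores:5d} cores: {d:5d} directory bits/line vs {t:3d} timestamp bits/line "
          f"({100 * (1 - t / d):.0f}% smaller)")
```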
When multiple cores are simply reading data stored at the same location, there's no problem.
Conflicts arise only when one of the cores needs to update the shared data. With a directory system, the chip looks up
which cores are working on that data and sends them messages invalidating their locally stored copies of it. "Directories guarantee that
when a write happens, no stale copies of the data exist," says Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and first author on the new paper. "After this write happens,
no read to the previous version should happen. So this write is ordered after all the previous reads in physical-time order."
Time travel
What Yu and his thesis advisor Srini Devadas,
the Edwin Sibley Webster Professor in MIT's Department of Electrical Engineering and Computer Science, realized was that the physical-time order of distributed computations doesn't really matter,
so long as their logical-time order is preserved. That is, core A can keep working away on a piece of data that core B has
since overwritten, provided that the rest of the system treats core A's work as having preceded core B's. The ingenuity of Yu
and Devadas' approach is in finding a simple and efficient means of enforcing a global logical-time ordering. "What we do is we just assign time stamps to each operation,
and we make sure that all the operations follow that time stamp order," Yu says. With Yu and Devadas' system, each core has its own counter,
and each data item in memory has an associated counter, too. When a program launches, all the counters are set to zero.
When a core reads a piece of data, it takes out a "lease" on it, meaning that it increments the data item's counter to,
say, 10. As long as the core's internal counter doesn't exceed 10, its copy of the data is valid.
The particular numbers don't matter much; what matters is their relative value. When a core needs to overwrite the data,
however, it takes "ownership" of it. Other cores can continue working on their locally stored copies of the data,
but if they want to extend their leases, they have to coordinate with the data item's owner.
The core that's doing the writing increments its internal counter to a value that's higher than the last value of the data item's counter.
Say, for instance, that cores A through D have all read the same data, setting their internal counters to 1
and incrementing the data counter to 10. Core E needs to overwrite the data, so it takes ownership of it
and sets its internal counter to 11. Its internal counter now designates it as operating at a later logical time than the other cores:
They're way back at 1, and it's ahead at 11. The idea of leaping forward in time is
what gives the system its name, Tardis, after the time-traveling spaceship of the British science fiction hero Dr. Who.
Now, if core A tries to take out a new lease on the data, it will find it owned by core E, to
which it sends a message. Core E writes the data back to the shared cache,
and core A reads it, incrementing its internal counter to 11 or higher.
Unexplored potential
In addition to saving space in memory,
Tardis also eliminates the need to broadcast invalidation messages to all the cores that are sharing a data item.
In massively multicore chips, Yu says, this could lead to performance improvements as well. "We didn't see performance gains from that in these experiments,"
Yu says. "But that may depend on the benchmarks," the industry-standard programs on which Yu and Devadas tested Tardis. "They're highly optimized,
so maybe they already removed this bottleneck," Yu says. "There have been other people who have looked at this sort of lease idea,"
says Christopher Hughes, a principal engineer at Intel Labs, "but at least to my knowledge, they tend to use physical time.
you can use this data for, say, 100 cycles, and I guarantee that nobody else is going to touch it in that amount of time.
because if somebody else immediately afterward wants to change the data, then they've got to wait 100 cycles before they can do so.
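The lease-and-ownership bookkeeping described above can be sketched compactly. The Python below is only an illustrative model, not the Tardis implementation; it walks through the cores-A-through-E example from the article:

```python
class SharedLine:
    """One shared data item, with a last-write timestamp and a read-lease timestamp."""
    def __init__(self, value):
        self.value = value
        self.wts = 0   # logical time of the last write
        self.rts = 0   # logical time up to which read leases on this value are valid

class Core:
    def __init__(self, name):
        self.name = name
        self.clock = 0   # this core's logical time

    def read(self, line, lease=10):
        # Reading takes out a lease: the core's clock moves past the last write,
        # and the data item's counter is extended so the local copy stays valid.
        self.clock = max(self.clock, line.wts, 1)
        line.rts = max(line.rts, lease, self.clock)
        return line.value

    def write(self, line, value):
        # Writing takes ownership: the core jumps to a logical time beyond every
        # outstanding lease, so earlier reads are ordered before this write
        # without broadcasting any invalidation messages.
        self.clock = max(self.clock, line.rts) + 1
        line.value = value
        line.wts = line.rts = self.clock

line = SharedLine(value=0)
a, b, c, d, e = (Core(n) for n in "ABCDE")
for core in (a, b, c, d):
    core.read(line)            # A through D read, sharing a lease valid up to time 10
e.write(line, 42)              # E takes ownership and moves to logical time 11
print(a.read(line), a.clock)   # A re-reads: it sees 42 and its clock jumps to 11
```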