

techcrunch.com 2015 06131.txt.txt

#Google, Microsoft, Mozilla And Others Team Up To Launch WebAssembly, A New Binary Format For The Web

Some of these projects focus on adding new features to the language (like Microsoft's TypeScript) or speeding up JavaScript (like Mozilla's asm.js project).

The new format is meant to allow programmers to compile their code for the browser (currently the focus is on C/C++,

JavaScript files are simple text files that are downloaded from the server and then parsed and compiled by the JavaScript engine in the browser.


techcrunch.com 2015 06252.txt.txt

It's a piece of hardware that attaches to an iPhone to provide mobile vision exams.

the technology is impressive in that it opens up yet more avenues for mobile devices. Everyone talks about the phenomenal computing power available in mobile devices,

but Smart Vision's tech is one of the few that harnesses mobile computing in a novel way that makes the device a new technology platform for healthcare.


techcrunch.com 2015 06327.txt.txt

the readings it takes are more scientifically rigorous than those achieved by the current crop of Android Wear-powered devices,

and the dedicated medical wearable unveiled today also monitors and reports information continuously, for better delivery of real-time actionable info to researchers and medical professionals.

This isn't Google's first move in building medical hardware; Google X is also creating contact lenses that can monitor blood glucose levels to help in managing conditions like diabetes.

The competition is also eager to contribute to the medical research community: Apple has introduced ResearchKit,

which allows studies to use iPhones and iPads to gather participant data from a wider potential user pool, for instance.


techcrunch.com 2015 06738.txt.txt

Compelation's iOS app beta is normally invite-only, but it's opening signups for the next 24 hours to let TechCrunch readers give it a try.


techcrunch.com 2015 07417.txt.txt

All you need is a computer, smartphone, Wi-Fi, and 25 minutes to take its test about

#Opternative's Test Software Eats The Eye Exam

After graduating optometry school, Dr. Steven Lee was sure that computers

and mobile phones had to offer an alternative to traditional autorefractor machines used for vision tests.

You calibrate your screen by measuring a credit card, and sync your phone as a remote control for your computer over Wi-Fi and an SMS confirmation.

The test takes about 25 minutes. You follow the dictated and written instructions to cover one eye at a time,

look at your computer screen, and answer corresponding visual acuity questions on your phone. How many lines are in a symbol?

and be told to walk a certain number of heel-to-toe steps away from your computer before answering.
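The screen-calibration step lends itself to a short worked example. The sketch below illustrates the general approach rather than Opternative's actual code; apart from the standard 85.6 mm card width and the 5-arc-minute convention for a 20/20 optotype, the names and numbers are assumptions for illustration.

    import math

    CARD_WIDTH_MM = 85.60  # standard ID-1 payment card width (ISO/IEC 7810)

    def pixels_per_mm(card_width_px):
        # The user resizes an on-screen box until it matches a real credit
        # card held against the display; the box width in pixels divided by
        # the known card width gives the screen's scale.
        return card_width_px / CARD_WIDTH_MM

    def optotype_height_px(distance_mm, scale_px_per_mm, arc_minutes=5.0):
        # A 20/20 optotype subtends about 5 arc minutes at the viewing
        # distance; convert that visual angle to millimeters, then pixels.
        angle_rad = math.radians(arc_minutes / 60.0)
        height_mm = 2.0 * distance_mm * math.tan(angle_rad / 2.0)
        return height_mm * scale_px_per_mm

    # Example: card measured at 325 px on screen, viewer about 3 m away
    scale = pixels_per_mm(325)
    print(round(optotype_height_px(3000, scale), 1), "px")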

Vision For The Future: Until now, the only ways to get eye exams were the doctor's office,

It's also building out a touch screen kiosk that could fit inside physical eyewear stores. Seeing clearly can help people learn,


techcrunch.com 2015 07480.txt.txt

However, that will change later this year as Subway begins to advertise its mobile ordering and payments app for iOS and Android.

it's moving to integrate PayPal's One Touch mobile checkout into the Subway application as another checkout option, alongside the app's support for Apple Pay and Android Pay.

including Apple Pay and Android Pay, that the businesses themselves will want to support. At the end of the day,


tech_review 00005.txt

#Google's Brain-Inspired Software Describes What It Sees in Complex Images

Experimental Google software that can describe a complex scene could lead to better image search

Researchers at Google have created software that can use complete sentences to accurately describe scenes shown in photos, a significant advance in the field of computer vision.

the software responded with the description "group of young people playing a game of frisbee."

The software can even count, giving answers such as "two pizzas sitting on top of a stove top oven."

most efforts to create software that understands images have focused on the easier task of identifying single objects. "It's very exciting,"

The new software is the latest product of Google's research into using large collections of simulated neurons to process data (see 10 Breakthrough Technologies 2013:

No one at Google programmed the new software with rules for how to interpret scenes. Instead, its networks "learned" by consuming data.

Google researchers created the software through a kind of digital brain surgery, plugging together two neural networks developed separately for different tasks.
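A rough sketch of that plugging-together may make the idea concrete. The model below is an assumed, minimal PyTorch-style illustration, not Google's implementation; the class name, layer sizes, and vocabulary size are invented for the example. Features from a separately trained image network are projected into the language network's space and used to seed a sentence generator.

    import torch
    import torch.nn as nn

    class CaptionSketch(nn.Module):
        def __init__(self, vocab_size=10000, feat_dim=2048, hidden=512):
            super().__init__()
            self.project = nn.Linear(feat_dim, hidden)  # adapt image features
            self.embed = nn.Embedding(vocab_size, hidden)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, image_features, caption_tokens):
            # Features from a separately trained vision network become the
            # first "word" the language network sees.
            start = self.project(image_features).unsqueeze(1)
            words = self.embed(caption_tokens)
            seq = torch.cat([start, words], dim=1)
            hidden_states, _ = self.lstm(seq)
            return self.out(hidden_states)  # next-word scores at each step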

The other had been trained to generate full English sentences as part of automated translation software. When the networks are combined,

After that training process, the software was set loose on several large data sets of images from Flickr

The accuracy of its descriptions was then judged with an automated test used to benchmark computer-vision software.
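The article does not name the automated benchmark. BLEU is one widely used automatic score for comparing a generated sentence against human-written references, so the snippet below, using NLTK, is an assumption about the kind of metric involved rather than the one Google actually used.

    from nltk.translate.bleu_score import sentence_bleu

    reference = ["group of young people playing a game of frisbee".split()]
    candidate = "young people playing frisbee in a park".split()

    # Bigram BLEU for this tiny example; the result is between 0 and 1,
    # so multiplying by 100 puts it on the 100-point scale quoted below.
    print(100 * sentence_bleu(reference, candidate, weights=(0.5, 0.5)))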

Google's software posted scores in the 60s on a 100-point scale. Humans taking the test typically score in the 70s,

That result suggests Google is far ahead of other researchers working to create scene-describing software.

and test this kind of software. When Google asked humans to rate its software's descriptions of images on a scale of 1 to 4,

it averaged only 2.5, suggesting that it still has a long way to go.

though large databases of hand-labeled images have been created to train software to recognize individual objects,

Microsoft this year launched a database called COCO to try to fix that. Google used COCO in its new research,


tech_review 00007.txt

and are moving towards using the smartphone/tablet hardware and software to perform more advanced functions.

An emerging example of this is Setpoint Medical's implantable neurostimulation device (currently in development), configured via an iPad app.

This device is aimed at treating patients with debilitating inflammatory diseases. It consists of an implantable microregulator, a wireless charger, and the iPad prescription-pad application.

Addressing Two Critical Questions: Sagentia believes there are two critical questions for medical device companies entering this space:

"It's not just a software tool," he says. "MMAs should be treated like any other medical device."

You then need to map your core user requirements so that you understand what information is needed, how it should be presented


tech_review 00021.txt

Paying search engines for stimulating clicks that led to purchases was fine, but most consumers take a more circuitous route to their final decisions.


tech_review 00022.txt

will use AOptix technology in New Jersey to shave nanoseconds off the time it takes data to travel between the computers of the Nasdaq Stock Market and the New York Stock Exchange.


tech_review 00026.txt

Yardarm plans to start selling the hardware and tracking service in mid-2015. The next goal is to capture the direction in


tech_review 00033.txt

and imaging technologies assembled into a single workstation. It combines a touch screen, camera, infrared depth sensors, projector, touch-sensitive whiteboard,

and a conventional printer and scanner. You're encouraged to hook it up to a 3-D printer,

like the one HP launched alongside the Sprout. All that is supposed to make Sprout into a powerful new tool for designers and other creatives.

You might use the device to scan, say, a Buddha statuette in 3-D, and then use a stylus to modify the digital scan once it is projected onto the workstation's touch-sensitive surface.

After you've made your change, you could print the new design out in 3-D. Sprout shows signs of HP's history of making PCs and printers,

with matte grey casing and the bulbous contours of a Ford Taurus. But it is clearly the product of some very clever engineering and an ambitious product strategy.

While computer processors and memory have advanced over the decades, we have continued to interface with them via monitor, keyboard, and mouse.

More recently, tools for making things in the physical world have changed a lot too, with the advent of maker spaces and affordable, computer-controlled lathes, mills,

and 3-D printers. But in neither of these cases do you have the opportunity to take control of the world of physical outputs

and software-based design and computing together. Sprout is a clunky device to gaze upon,

but it's dreaming in a big way about the very nature of work. You can do things with Sprout that had previously been possible only by piecing together at least a half dozen different devices.


tech_review 00035.txt

The power unit is a rectangular slab about the size of a movie theater screen. It's mounted on a thick steel post,


tech_review 00036.txt

At the same time, computer scientists would dearly love to reproduce the same kind of memory in silico. Today Google's secretive DeepMind startup, which it bought for $400 million earlier this year, unveils a prototype computer that attempts to mimic some of the properties of the human brain's short-term working memory.

The new computer is a type of neural network that has been adapted to work with an external memory.

The result is a computer that learns as it stores memories and can later retrieve them to perform logical tasks beyond those it has been trained to do.

DeepMind's breakthrough follows a long history of work on short-term memory. In the 1950s, the American cognitive psychologist George Miller carried out one of the more famous experiments in the history of brain science.

During the 1990s and 2000s, computer scientists repeatedly attempted to design algorithms, circuits, and neural networks that could perform this trick.

Such a computer should be able to parse a simple sentence like "Mary spoke to John" by dividing it into its component parts: actor, action, and the receiver of the action.

In Turing's famous description of a computer, the memory is the ticker tape that passes back and forth through the machine and stores symbols of various kinds for later processing.

This is similar to the way an ordinary computer might put the number 3 and the number 4 inside registers and later add them to make 7. The difference is that the neural network might store more complex patterns of variables representing, for example, the word Mary
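The contrast with ordinary registers can be made concrete with a toy. The snippet below is an assumed illustration of the soft, differentiable memory read this class of models uses, not DeepMind's code: instead of addressing one slot exactly, the network blends all slots according to how well they match a query.

    import numpy as np

    memory = np.random.randn(8, 16)   # 8 slots, each a 16-dim pattern (e.g. "Mary")
    key = np.random.randn(16)         # what the controller wants to recall

    scores = memory @ key                              # similarity to each slot
    weights = np.exp(scores) / np.exp(scores).sum()    # softmax over slots
    read_vector = weights @ memory                     # blended, differentiable recall

    print(read_vector.shape)          # (16,) -- passed back to the controller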

He believed that until a computer could reproduce this ability it could never match the performance of the human brain.


tech_review 00037.txt

Poynt's terminal is dominated by two touch screens that meet at an angle: a seven-inch display that a store employee will use to ring up sales

and there's a built-in receipt printer that will spit out paper from an opening below the customer's touch screen.

me how it works during an interview conducted via Skype video. The Poynt terminal he used said "Welcome to Main St. Bakery" on the customer screen

Merchants could use the screen for ads or store specials when not taking payments, Bedier says.

and Kabbage, and releasing a software development kit in hopes of attracting other developers too.


tech_review 00046.txt

Right now Roost has a working prototype in a plastic box about the size of an external hard drive;


tech_review 00049.txt

#How Magic Leap's Augmented Reality Works

A Florida startup called Magic Leap announced Tuesday that it had received $542 million in funding from major Silicon Valley investors led by Google to develop hardware

for a new kind of augmented reality. The secretive startup has yet to publicly describe or demonstrate its technology,

The filings describe sophisticated display technology that can trick the human visual system into perceiving virtual objects as real, better than existing virtual reality displays (such as the Oculus Rift) can.

The display technology used in most devices can show only flat, 2-D images. Headsets like the Oculus Rift trick your brain into perceiving depth by showing different images to each eye,

but your eyes are always focused on the flat screen right in front of them. When you look at a real 3-D scene,

They describe displays that can create the same kind of 3-D patterns of light rays, known as "light fields,"

Earlier this year, Wetzstein and colleagues used that technique to create a display that allows text to be read clearly by people not wearing their usual corrective lenses (see "Prototype Display Lets You Say Goodbye to Reading Glasses").

He previously worked on glasses-free 3-D displays based on similar methods. And last year, researchers at chip company Nvidia demonstrated a basic wearable display based on light fields.

A trademark filing from July describes Magic Leap's technology as "wearable computer hardware, namely, an optical display system incorporating a dynamic light-field display."

One of Magic Leap's patents describes how such a device, dubbed a WRAP, for "waveguide reflector array projector," would operate.

The display would be made up of an array of many small curved mirrors; light would be delivered to that array via optical fiber,

Multiple layers of such tiny mirrors would allow the display to produce the illusion of virtual objects at different distances.

That would allow the mirrors to be reprogrammed using a magnetic field to rapidly display points at different depths fast enough to fool the eye

Magic Leap's greatest challenge may be to find a way to seamlessly integrate virtual 3-D objects created by that display with

and eye-tracking cameras on a wearable display to figure out at what depth a person's eyes are focused.
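One plausible way eye tracking yields focus depth is vergence geometry: the two eyes rotate inward to converge on near objects, and the convergence angle implies a distance. The sketch below is generic trigonometry with assumed numbers, not a description of Magic Leap's actual method.

    import math

    def focus_depth_m(convergence_deg, ipd_m=0.063):
        # convergence_deg: total angle between the two gaze directions;
        # ipd_m: interpupillary distance, roughly 63 mm on average.
        half = math.radians(convergence_deg) / 2.0
        return ipd_m / (2.0 * math.tan(half)) if half > 0 else float("inf")

    for angle in (0.5, 2.0, 7.0):
        print(angle, "deg ->", round(focus_depth_m(angle), 2), "m")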

Depth-sensing cameras are now relatively cheap and compact (see "Intel Says Tablets and Laptops with 3-D Vision Are Coming Soon").

But Wetzstein says Magic Leap will likely need to make major breakthroughs in computer vision software for a wearable device to make sense of the world well enough for very rich augmented reality. "They will require very powerful 3-D image recognition,

running on your head-mounted display," he says. The company is recruiting experts in chip design and fabrication

apparently with a view to creating custom chips to process image data. Dedicated chips could make that work more energy-efficient, something important for a wearable device.

Magic Leap already employs Gary Bradski, a pioneer of computer vision research and software, notes Wetzstein.

and video game development. Altogether, many of the underlying techniques Magic Leap needs to realize highly realistic augmented reality have been demonstrated,


tech_review 00061.txt

Twenty minutes away in San Jose, the largest city in the Valley, a camp of homeless people known as the Jungle, reputed to be the largest in the country, has taken root along a creek within walking distance of Adobe

The coauthor, with fellow MIT academic Andrew McAfee, of The Second Machine Age, Brynjolfsson, like Piketty, has recently gained unlikely prominence for an academic economist.

and thanks to software and other digital technologies. Why hire a local tax consultant when you can use a cheap,

The ability to copy software and distribute digital products anywhere means customers will buy the top one.

Why use a search engine that is almost as good as Google? Such economic logic now rules a growing share of the marketplace;

and building a business becomes less capital-intensive; you don't need a printing plant to produce an online news site,

In an article called "New World Order," published this summer in Foreign Affairs, Brynjolfsson, McAfee, and Michael Spence, a Nobel laureate and professor at New York University, argued that "superstar-based technical change" is upending the global economy.

and McAfee's argument that the transformation of work is speeding up as technological change accelerates.

nor is such growth concentrated in computer-intensive sectors. According to Autor, the changes wrought by digital technologies are transforming the economy,


tech_review 00065.txt

Herr worked with Pratt to develop a computer-controlled knee joint that uses a magnetorheological fluid a fluid

and postdocs working on projects is strewn with computer parts, coffee cups, wires, rolls of tape, random tools

This science, he says, is critical for designing the hardware and software control systems of bionic devices.


tech_review 00071.txt

and Mike Cariaso, a computer programmer. It works by comparing a person's DNA data with entries in SNPedia,

the FDA said it has authority to regulate software that interprets genomes, even if such services are given away free.

After all, they named their software after Prometheus, the titan who defied the gods by stealing fire from Mt. Olympus and giving it to mankind.


tech_review 00074.txt

If you happened to pore over the details added to Apple's website yesterday about its new iPads,

You will be able to use a setting in iOS to quickly switch from carrier to carrier right on the iPad

I asked Apple why the company didn't mention the feature during its iPad news event Thursday in Cupertino.

For now, it's more of an intriguing footnote to Apple's refresh of its iPad line


tech_review 00080.txt

and the prospect of far worse floods, the nation is developing sophisticated computer models of climate, precipitation, hydrology, sea level

Its strategies, guided by sophisticated computer models, include building some inland water barriers as a second line of defense;


tech_review 00081.txt

But one shopper tries it by taking out his Android phone, clicking on Google Wallet, an app intended to allow instant payment, and tapping the terminal.

Behind the scenes, a payment processor such as Visa recognizes an encrypted version of your credit card, such as the one in an iTunes account,

because card numbers aren't stored directly on the phone or on Apple's servers. Instead, digital tokens, encrypted numbers that look like card numbers,

says David Brudnicki, chief technology officer for Sequent Software, which provides mobile wallet services to banks, retailers,

AT&T, and Verizon is touting its support of more than 80 Android phones and the ability to pay at retailers including McDonald's, Subway, and Walgreens.

Payments experts think the company will allow outside software developers to create apps that can add such features to Apple Pay.
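The tokenization scheme described above can be illustrated with a deliberately simplified toy. Real systems involve cryptographic derivation, expiry, and per-transaction cryptograms; the functions below are assumptions for illustration, not Apple's or any payment network's implementation.

    import secrets

    vault = {}  # token -> real card number, held only by the payment network

    def issue_token(card_number):
        # Mint a random number that looks like a card number and record the
        # mapping in the network's vault; only the token reaches the phone.
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        vault[token] = card_number
        return token

    def detokenize(token):
        # Only the network can turn the token back into the real card number.
        return vault[token]

    device_token = issue_token("4111111111111111")
    print(device_token != "4111111111111111", detokenize(device_token)[-4:])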


tech_review 00086.txt

Tibbits then uses a 3-D printer to apply materials that are known to shrink or grow under certain conditions.

or twist in various ways depending on the pattern produced by the printer. He and his colleagues are developing design software that simulates the way different patterns of these materials printed onto different kinds of composite materials will behave under different conditions.

So far Tibbits has demonstrated materials that respond to light, water, and heat. But he says it should be possible to make ones that respond to air pressure and other stimuli.


tech_review 00100.txt

because manufacturers typically use equipment developed for making high-resolution displays, says Michael Boroson, the chief technology officer of OLED Works.

The factory will be able to produce a million 15-centimeter-wide panels per month. Even with such advances, it will take years to bring costs low enough to make OLED lighting widely used.


tech_review 00122.txt

as if the German industrial designer Dieter Rams had created a more social version of Tumblr is probably not causing many people to ditch Facebook

Apps for iPhone and Android are in the offing, but for now the only way to use it on a smartphone

or tablet is via a mobile browser. Despite the long to-do list, Ello is off to an intriguing start.


tech_review 00132.txt

The catalysts built on previous work showing that nickel hydroxide is a promising catalyst, and that adding iron could improve it.


tech_review 00154.txt

Izhikevich's startup Brain Corporation, based in San Diego, has developed an operating system for robots called BrainOS to make that possible.

To teach a robot running the software to pick up trash, for example, you would use a remote control to repeatedly guide its gripper to perform that task.
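That teach-by-demonstration workflow resembles what the robotics literature calls learning from demonstration, or behavior cloning. The sketch below is an assumed toy in that spirit, with made-up array shapes, not BrainOS code: log sensor readings and the operator's remote-control commands, then fit a model that imitates the mapping.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Stand-ins for logged demonstrations: each row pairs a sensor snapshot
    # (e.g. camera features) with the gripper command the operator gave.
    demo_observations = np.random.rand(500, 32)
    demo_actions = np.random.rand(500, 4)

    policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    policy.fit(demo_observations, demo_actions)

    # At run time the robot replays the learned mapping on fresh sensor data.
    action = policy.predict(np.random.rand(1, 32))
    print(action.shape)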

Brain Corporation hopes to make money by providing its software to entrepreneurs and companies that want to bring intelligent low-cost robots to market.

Later this year Brain Corporation will start offering a ready-made circuit board with a smartphone processor

The chip on that board is made by mobile processor company Qualcomm, which is an investor in Brain Corporation.

and could then copy its software to new robots with the same design before they headed to store shelves.

Brain Corporation's software is based on a combination of several different artificial intelligence techniques. Much of the power comes from using artificial neural networks

But they might eventually offer a more powerful and efficient way to run software like BrainOS.


tech_review 00157.txt

Researchers from Carnegie Mellon and Intel developed the prototype headlight, which scans the road ahead using an infrared camera

The Carnegie Mellon-Intel prototype includes a camera, a computer, and a digital projector. Information from the infrared camera is processed by a computer that tries to identify relevant objects on the road, such as cars, pedestrians, or road signs.

The projector uses a light source that is 4,700 lumens (much brighter than a halogen headlight) with an array of almost 800,000 micromirrors that can be controlled individually by the computer.

The ability to control the light with so many micromirrors provides a high-resolution, highly tunable system that can also turn on
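A toy sketch of how detections might drive the mirror array follows; the dimensions and function are assumptions for illustration, not the Carnegie Mellon-Intel implementation. Objects found in the camera frame map to regions of the array whose mirrors are switched off, so the beam avoids them.

    import numpy as np

    MIRROR_ROWS, MIRROR_COLS = 768, 1024   # roughly 800,000 addressable mirrors

    def beam_mask(detections):
        # detections: list of (row0, col0, row1, col1) boxes in mirror coordinates.
        mask = np.ones((MIRROR_ROWS, MIRROR_COLS), dtype=bool)  # True = light on
        for r0, c0, r1, c1 in detections:
            mask[r0:r1, c0:c1] = False   # darken rays headed for this object
        return mask

    # e.g. an oncoming driver's windshield detected in part of the view
    print(beam_mask([(100, 50, 220, 260)]).sum(), "mirrors remain on")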

This is a great example of taking ideas from computer vision and applying them to a challenging real-world problem,

which recently presented its findings at the European Conference on Computer Vision in Zurich, Switzerland, is still modifying the prototype


tech_review 00165.txt

and gets its power from a USB port on a computer. Unlike other commercial sequencing machines

if the sequencer was vaporware. By this spring, Oxford had worked the bugs out enough, at any rate, to start mailing out beta versions of the nanopore sequencer to 500 hand-picked labs it is collaborating with.

Even with a supercomputer, the puzzle often can't be solved; there can be too many repeated sequences


tech_review 00174.txt

#Intel Says Laptops and Tablets with 3-D Vision Are Coming Soon

Laptops with 3-D sensors in place of conventional webcams will go on sale before the end of this year, according to chip maker Intel,

which is providing the sensing technology to manufacturers. And tablets with 3-D sensors will hit the market in 2015, the company said at its annual developers conference in San Francisco on Wednesday.

Intel first announced its 3-D sensing technology at the Consumer Electronics Show in January (see "Intel's 3-D Camera Heads to Laptops and Tablets").

It has developed two different types of depth sensor. One is designed for use in place of a front-facing webcam to sense human movement such as gestures.

The other is designed for use on the back of a device to scan objects as far as four meters away.

Both sensors allow a device to capture the color and 3-D shape of a scene making it possible for a computer to recognize gestures

Intel is working with software companies to develop applications that use the technology. In the next few weeks the chip maker will release free software that any software developer can use to build apps for the sensors.

Partners already working with Intel include Microsoft's Skype unit, the movie and gaming studio DreamWorks, and the 3-D design company Autodesk, according to Achin Bhowmik, general manager for Intel's

perceptual computing business unit. None of those partners showed off what they're working on at the event this week.

But Intel showed several demonstrations of its own. One, developed with a startup called Volumental, lets you snap a 3-D photo of your foot to get an accurate shoe size measurement, something that could help with online shopping.

Bhowmik also showed how data from a tablet's 3-D sensor can be used to build very accurate augmented reality games, where a virtual character viewed on a device's screen integrates into the real environment.

As the tablet showing the character was moved it stayed perched on the tabletop and even disappeared behind occluding objects.

Intel also showed how the front-facing 3-D sensors can be used to recognize gestures to play games on a laptop

Those demonstrations were reminiscent of Microsoft's Kinect sensor for its Xbox gaming console, which introduced gamers to depth sensing

Microsoft launched a version of Kinect aimed at Windows PCs in 2012 and significantly upgraded its depth-sensing technology in 2013

but Kinect devices are too large to fit inside a laptop or tablet. Some of Intel's demos were rough around the edges, suggesting that its compact sensors are less accurate than Microsoft's larger ones.

However Bhowmik said that any such glitches would be unnoticeable in the fully polished apps that will appear on commercial devices.

Intel's two sensors work in slightly different ways. The front sensor calculates the position of objects by observing how they distort an invisible pattern of infrared light cast by a tiny projector in the sensor.
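The underlying triangulation can be sketched in a few lines. This is the generic structured-light relationship with an assumed focal length and baseline, not Intel's calibration: the projected pattern appears shifted in the camera image by an amount inversely proportional to depth.

    def depth_from_shift(shift_px, focal_px=600.0, baseline_m=0.05):
        # shift_px: observed displacement of a projected dot between where the
        # projector would place it and where the camera actually sees it.
        if shift_px <= 0:
            return float("inf")   # no measurable shift means very far away
        return focal_px * baseline_m / shift_px

    for shift in (5, 20, 80):
        print(shift, "px ->", round(depth_from_shift(shift), 2), "m")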

Intel's new sensors are roughly the same size as the camera components used in existing devices, says Bhowmik.

On Monday, Dell announced that the sensors will appear later this year in its Venue 8 7000 tablet, which is only six millimeters thick, thinner than any other tablet on the market.


tech_review 00179.txt

however, this doesn't mean you can actually bend the screen. As with other devices featuring flexible displays,

such as those from LG and Samsung, the display has been laminated onto a stiff pane, fixing it in place to prevent the damage that would come from repeated flexing.

Even so, the appearance of the first few flexible screens in commercial devices may be a sign of things to come. In fact,

fully flexible electronic gadgets, with full-color displays that wrap around a wrist or fold up, may be just a few years away,

chief marketing officer for Applied Materials, a company whose equipment is used to make displays. It is also extremely difficult to make a flexible backlight, the component needed to illuminate LCD pixels.

So the screen in the Apple Watch is almost certainly an OLED display. Rather than the pixels being illuminated by a backlight,

Manufacturers can already make OLED displays flexible. They first laminate a sheet of plastic to glass and then deposit the materials for the pixels and the electronics on top of both.

and afterwards the plastic, together with display and electronic components, is lifted off the glass. Manufacturers have known how to do this for years.

so you have to seal the display within robust, high-quality, flexible materials. This is costly, and there are challenges with ensuring that the seal survives being bent hundreds or thousands of times over the lifetime of a device.

Novel materials for touch screens that use flexible nanomaterials could also help. One patent application suggests Apple is already looking at this issue.

