and has the hardware capabilities to organize into complex shapes with its neighbors thanks to accurate range and bearing.
Using a two-step approximation, we show how this data can be combined with accurate sensor and receiver characteristics to calculate not only the bearing to the other robots but also their heading and, eventually, the range.
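As a rough illustration of how bearing can be recovered from such hardware (this is a sketch, not the authors' actual two-step approximation; the receiver count and the vector-sum method are assumptions), consider IR receivers arranged evenly around the robot's body:

```python
import math

# Hypothetical sketch: estimate the bearing to a neighbor from the
# intensities measured by n IR receivers spaced evenly around the body.
# Each receiver contributes a vector in its facing direction, weighted
# by its measured intensity; the resultant points toward the neighbor.
def estimate_bearing(intensities):
    n = len(intensities)
    x = sum(v * math.cos(2 * math.pi * i / n) for i, v in enumerate(intensities))
    y = sum(v * math.sin(2 * math.pi * i / n) for i, v in enumerate(intensities))
    return math.atan2(y, x)  # radians, in the robot's body frame

# Example: strongest signal on receiver 1 of 4 (facing +90 degrees)
print(estimate_bearing([0.1, 0.9, 0.1, 0.1]))  # ≈ pi/2
```

A second robot's heading can then be inferred by comparing which of *its* emitters is seen most strongly, which is the spirit of the two-step idea described above.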
However, omnidirectional motion using three motors has been thought possible only by carefully synchronizing all motor phases, something close to impossible with cheap hardware.
We are currently working on using this feature for data transmission, enabling an entire swarm to be programmed not only via infrared but directly via the floor.
#Google's new purpose-built self-driving car Google completed a major step in its long and extensive self-driving car project by presenting its first purpose-built autonomous car
or brake pedals, and you can ride it strictly as a passenger, which is probably a strange feeling but, according to Google's video, not entirely unpleasant.
Google will make about a hundred prototypes, and it looks like a very serious effort. It is equipped with multiple sensors (a big LIDAR on its roof is probably doing most of the work)
Thanks to Google's previous experience with self-driving cars, one can expect very good performance in real environments;
other Google self-driving cars have completed hundreds of thousands of miles with no major incidents. The car itself may look like a toy
and, above it, the whole front panel is made of foam, so the car is not only safe for its passengers but for pedestrians as well.
This is a very clever move from Google, which aspires to overcome the legal and ethical problems of who should be able to drive.
Google will launch a small pilot program in California in the next couple of years. The prototypes released on public roads will have manual controls for obvious legal reasons.
Read the full post on Google's official blog here. It is worth mentioning that similar concepts
but its main feature was the interior, where, similarly to Google's car, no manual controls were present
and contains three motors (for rotational movements), onboard computation, and a battery that allows for at least one hour of operating time.
The team behind the Roombots have had to overcome a series of research challenges in terms of hardware and control.
and technical challenges, such as compensating unwanted bending in the mechanical structure (related to building larger complex 3D structures) and developing the best-suited algorithms for reconfiguration
and often writes about robots on her blog (in Italian). In our ageing society, many elderly people are in the same situation
and monitor health indicators such as blood pressure or sugar levels. They also allow the person's caregivers to monitor their wellbeing remotely and to check for falls.
A robot moves around the home and allows family, friends and carers to virtually visit the person.
but we see that various aspects of the system are appreciated differently by the different users.
and technology should be both adaptable and tailored to user needs. Current plans are to put the system into commercial production next year, based on an upfront fee
#Gzweb for mobile platforms Gzweb Mobile from OSRF on Vimeo. During her GNOME Outreach Program for Women internship with OSRF, Louise Poubel made Gzweb work on mobile platforms by designing a mobile-friendly interface
and implementing lighter graphics. Until recently, Gazebo was only accessible on the desktop. Gzweb, Gazebo's web client, allows visualization of simulations in a web browser.
Louise implemented the graphics using WebGL. The interface includes menus suitable for mobile devices and multi-touch interactions to navigate the 3D scene.
Louise conducted usability tests throughout the development phase in order to improve the user experience and quickly discover
and resolve bugs. To optimize 3D rendering performance on mobile platforms, she also implemented a mesh simplification tool
which allows users to choose how much to simplify 3D models in the database during the deployment stage
and generate coarse versions of meshes to be used by Gzweb. Mobile devices have been and will continue to be a big part of our lives.
With Gzweb Mobile, users can visualize simulations on mobile phones and tablets and interact with the scene, inserting shapes and moving models around.
References: http://www.gazebosim.org, Gzweb wiki. Repositories: Gzweb Bitbucket repository.
#The People's Bot: Robotic telepresence for the public good Have you ever wanted to attend a conference that was too far away, too expensive or sold out?
Whether you're a penniless researcher, an interested youth, or a group of elderly people who want to live in other people's bodies (like in that weird movie Being John Malkovich), your wish may be granted.
and support public-interest media-making through scholarships, media fellowships and auctions. And by public good they mean, for example, purchasing participant registration for The People's Bot at the highly popular Computer-Human Interaction conference
but if, like many people, you can't afford to attend, it looks like you can still bid for a much cheaper telepresence opportunity on eBay.
Bid on eBay to attend the History of Wearables & Google Glass exhibit at CHI (proceeds go to the CHI student travel grants).
Rumor has it The People's Bot is currently wandering around the Theorizing the Web conference in New York City.
and locked-in syndrome to interact with the world around them by controlling a computer mouse with their mind.
Moving a mouse is something that most of us take for granted, but an individual with locked-in syndrome, though fully awake and conscious, has no other means of producing speech or moving their limbs.
Through a device implanted in the skull, BrainGate interfaces with a computer, allowing the individual to interact with their environment.
but revolutionary hardware designed by engineers like Bacher has also been key. Bacher began working on the BrainGate technology by conducting brain-machine interface research on nonhuman primates.
Or, as Bacher describes it, "basically having monkeys play video games with their brains." This research helped Bacher
Bacher's approach also takes worldwide emerging technology trends in hands-free computing, robotics and other areas and finds ways to translate them into helping disabled individuals.
Says Bacher: "If we develop a piece of technology for someone with a spinal cord injury who is moving their head to control a computer, there's no reason that can't help someone
Human-computer interface technology is still largely limited to touch, keyboard and mouse. The research conducted by Bacher
Says Bacher: "Clearly a mouse and a keyboard, even today, seem like an antiquated way to interact with technology.
We've seen developments like touch-screen technology and voice control come a long way, but it's still not mainstream."
Smartphones can provide the "brains" for assistive devices. Eye motion-capture technologies developed for paralyzed individuals open up new ways for the general public to interact with technology
The latest in soft-bodied robots was created by a team of engineers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology.
and Computer Science and Director of CSAIL, Cagdas Onal, Assistant Professor of Mechanical Engineering at Worcester Polytechnic Institute, and Andrew Marchese, a doctoral candidate in engineering at MIT, created the robot to be autonomous.
This means it has all the necessary sensing, actuation and computation on board. Its flexible body is made of silicone rubber.
In the not-so-distant future, the fish-bot could be put to use for covert science missions, where it might be able to infiltrate schools of real fish to collect data about their behavior.
and at the other end to sensors) and an algorithm to convert signals, the team has produced a hand that sends information back to the brain so detailed that the wearer could even tell the hardness of objects he was given to hold, as reported in a paper published in Science Translational Medicine in Feb. 2014.
and are used to decode the intentions of the user, and a power source is used to activate the hand in one of four different grasping motions.
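As a loose illustration of intention decoding into a small set of grasps (the real decoder is far more sophisticated; the feature vectors and prototypes below are invented for this sketch), one could classify a muscle-signal feature vector by nearest prototype:

```python
# Hedged sketch: map a decoded muscle-signal feature vector to one of
# four grasp motions by nearest prototype. Features (two EMG channel
# intensities) and prototypes are illustrative assumptions only.
GRASPS = ["pinch", "tripod", "power", "lateral"]
PROTOTYPES = [(0.9, 0.1), (0.6, 0.4), (0.2, 0.9), (0.5, 0.8)]

def decode_grasp(features):
    def dist2(a, b):
        # squared Euclidean distance between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    i = min(range(len(PROTOTYPES)), key=lambda k: dist2(features, PROTOTYPES[k]))
    return GRASPS[i]

print(decode_grasp((0.85, 0.15)))  # closest to the "pinch" prototype
```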
and it was found that the brain automatically assimilated data from multiple sensors in the palm of the hand.
However, the tests indicated that this was not the case, thus opening up the technology to a wide range of potential users.
Unfortunately there is no super-software that can be uploaded to the robot to make it become an instant medical expert.
Software may be upgraded in the future to allow R2 to move around the station's interior and perform routine maintenance tasks such as cleaning filters.
Other upgrades may enable the robot to work outside the space station to perform repairs and maintenance checks
The most defining feature of the system on display at the company's showroom in Yokohama is its three-dimensional use of space. This is a 5-tiered cultivation system.
if playing music was as simple as looking at your laptop screen. Now it is, thanks to Kenneth Camilleri
The user can control the music player simply by looking at a series of flickering boxes on a computer screen.
and as the user looks at them, their brain synchronizes at the same rate. This brain-pattern-reading system, developed by Rosanne Zerafa, relies on steady state visually evoked potentials (SSVEPs).
This process, known as electroencephalography (EEG), records the brain responses and converts the brain activity into a series of computer commands.
As the user looks at the boxes on the screen, the computer program is able to figure out the commands, allowing the music player to be controlled without the need for any physical movement.
or change the song, the user just has to look at the corresponding box. The command takes effect in just seconds.
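The frequency-tagging idea behind SSVEP interfaces can be sketched in a few lines; the flicker frequencies, sample rate, and simple FFT peak-picking below are illustrative assumptions, not details of Zerafa's system:

```python
import numpy as np

# Sketch of SSVEP detection: each on-screen box flickers at its own
# frequency; the EEG spectrum peaks at the frequency of the box the
# user is watching, so we compare spectral power at each frequency.
FS = 256                              # sample rate (Hz), assumed
FLICKER_HZ = [8.0, 10.0, 12.0, 15.0]  # one flicker frequency per box

def classify_ssvep(eeg):
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    # power at the bin nearest each stimulus frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in FLICKER_HZ]
    return int(np.argmax(powers))     # index of the box being watched

# Synthetic check: 2 s of a 12 Hz oscillation plus noise
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(classify_ssvep(eeg))  # expect index 2 (the 12 Hz box)
```

Real systems add filtering and more robust statistics (e.g. canonical correlation), but the principle is the same: the command is read off the dominant stimulus frequency.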
This particular brain-computer interface exploits one of these: the oculomotor nerve, which is responsible for the eye's movements.
This means that even an individual with complete body paralysis can still move their eyes over images on a screen.
This cutting-edge brain-computer interface system could lead the way for the development of similar user interfaces for tablets and smartphones.
PackBot's other attributes include state-of-the-art GPS, a video image display system, an electronic compass and temperature sensors.
The robot is manipulated with an integrated Pentium-based computer.
#Microsoft pays 5m CHF to ETHZ and EPFL for research on flying robots and new memory architectures ETH Zurich and EPFL are jointly entering into a new research partnership with Microsoft Research.
Over five years Microsoft Research will provide five million Swiss francs of funding to support IT research projects.
Microsoft researchers will also work closely with the scientists at the two universities. Microsoft has been investing in Swiss research for years
and now the US technology company is renewing its longstanding collaboration with EPFL and ETH Zurich.
Microsoft will provide one million Swiss francs per year in funding for IT-related research projects at the two universities over a period of five years.
The collaboration is a continuation of a project launched in 2008 that focused on embedded technology, software solutions and prototypes.
The new collaboration will now broaden the areas of computer science and deepen the collaboration between the three partners.
Scientists in Lausanne and Zurich submitted 27 proposals, seven of which were selected by the steering committee. Among the successful projects:
The young computer scientist from ETH Zurich studies the interaction between people and computers. For his project he and Dr. Shahram Izadi from Microsoft Research are investigating how flying robots can work
and interact with people in active scenarios. Specifically they want to develop a platform that enables flying robots to do more than just recognise people
Thanks to their algorithms the robots should be able to react to gestures and touch as well.
At the EcoCloud Center at EPFL, Dr. Edouard Bugnion and Professor Babak Falsafi are carrying out research into energy-efficient memory architectures for data centres that can handle huge amounts of data.
To do this they are combining thousands of energy-efficient micro-servers in a way that enables them to access the memories of the other servers with a minimal time delay.
The two computer scientists are working with Dr. Dushyanth Narayanan and other scientists from Microsoft Research in Cambridge to develop new applications for this system, known as Scale-out NUMA.
#Robot tourism coming soon to Korea: Masan Robot Land project finally breaks ground With permission finally given to occupy lands that include an existing road
#Google getting more roboticists with Nest acquisition Is Google getting a robot company? Or just another source of roboticists with the Nest acquisition that was announced today?
The small sampling of roboticists I've spoken to who are employed at Google have shed little light on future plans
Perhaps Google is collecting libraries and IP instead in the same way that Wolfram is talking about owning the database of the internet of things.
Google is to buy Nest Labs Inc. for $3.2 billion in cash. Nest launched in 2011 with a smart thermostat
and recently launched a smoke alarm. Both products are doing very well in sales. Nest will continue to operate under the leadership of Tony Fadell
and maintain a distinct brand identity, but as a part of the Google stable. More information on the transition is available from Nest.
Larry Page, CEO of Google, said: "Nest's founders Tony Fadell and Matt Rogers have built a tremendous team that we are excited to welcome into the Google family.
They're already delivering amazing products you can buy right now: thermostats that save energy
"We're thrilled to join Google. With their support, Nest will be even better placed to build simple, thoughtful devices that make life easier at home
and that have a positive impact on the world." Colin Angle, CEO of iRobot, described Google's recent acquisitions as a logistics play around self-driving vehicles
and that when Google solved the first mile/last mile problem, iRobot was planning on meeting them at the house door.
There are plenty more robotics companies left for Google to acquire, but if I were looking it would be interesting to consider Unbounded and Otherlab.
We are currently developing learning algorithms that allow the Cubli to automatically learn and adjust the necessary parameters
Furthermore, the same momentum wheels can be used to implement a reaction-torque-based control algorithm for balancing by exploiting the reaction torques on the cube body.
Determining the required hardware specs. Specs and hardware design: given the required velocities and torques determined above, it was clear that the momentum wheel motor
and gearbox would be a major challenge for creating the robot. Using the mathematical model allowed us to systematically tackle this problem through a quantitative analysis of the trade-offs between higher velocities
This mathematics-driven hardware design resulted in detailed specs for the robot's core hardware components (momentum wheels, motors, gears
and batteries) and allowed a CAD design of the entire system. Part of this step was the design of a special brake to suddenly stop a momentum wheel and transfer its energy to the entire cube.
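A back-of-the-envelope sketch shows why the brake forces such careful spec work: when the wheel is suddenly stopped, angular momentum is conserved about the pivot edge, and the resulting body rotation must carry enough energy to lift the centre of mass over the balance point. All numbers below are invented for illustration, not the Cubli's real parameters:

```python
import math

# Illustrative jump-up sizing for a cube with a momentum wheel and brake.
m = 1.0                          # cube mass (kg), assumed
l = 0.15                         # cube edge length (m), assumed
I_cube = 2.0 / 3.0 * m * l**2    # inertia about an edge (solid-cube approx.)
I_wheel = 5e-4                   # wheel inertia (kg m^2), assumed
g = 9.81

# Energy needed to lift the centre of mass from the flat rest position
# (height l/2) to the balance point on the edge (height l/sqrt(2)).
dE = m * g * (l / math.sqrt(2) - l / 2)

# Body angular velocity needed just after braking, then the wheel speed
# that delivers it via momentum transfer: I_wheel * w_wheel = I_cube * w_cube.
w_cube = math.sqrt(2 * dE / I_cube)
w_wheel = I_cube * w_cube / I_wheel
print(f"required wheel speed ≈ {w_wheel * 60 / (2 * math.pi):.0f} rpm")
```

Even with these toy numbers the wheel must spin at thousands of rpm, which is why motor and gearbox selection dominated the design.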
It expects a compound growth rate of 25%, said Wang Weiming, deputy director of the Ministry of Industry and Information Technology.
DHL and Microdrones are testing drones that could be used to deliver urgently needed goods to hard-to-reach places, e.g. medicines to remote sites.
After the workflow has been designed, customer-specific hardware components would be 3D printed and installed on the standardized robot grippers.
Once on site, the robot would be connected to the machinery software through a brand-independent software system.
A set of predefined skills, similar to apps for smartphones, would help speed the process.
The consortium aims to have the workflow process designed within the first year and to have the software prototype tested for the first use case by the third year.
#Rehabilitation support robot R-cloud makes muscle movement visible Associate Professor Toshiaki Tsuji's laboratory at Saitama University has developed R-cloud, a rehabilitation support robot that enables users to view how their own muscles
and quantifies this into data, which enables physical therapists to provide accurate instruction on movement and patients to confirm their own movements. This robot has a force sensor
Based on data collected from these sensors, calculations are made on the force of muscle contraction within the arm, as well as on the amount of calories consumed by each muscle during training.
In addition, we use augmented reality technology to make these results visible. By databasing measurement data collected during training, the Tsuji Lab is constructing a rehabilitation cloud system
#Google buys up robotics companies from DRC If you've recently wondered where all the roboticists were going, the answer is: to an unassuming complex in Palo Alto. Google has backed Android developer Andy Rubin to acquire at least 7 major robotics
Other companies include Holomni, makers of high-tech wheels, and Bot & Dolly and Autofuss, the super cool robotics software companies behind the special effects in the film Gravity.
"I feel with robotics it's a green field," he said. "We're building hardware, we're building software.
We're building systems, so one team will be able to understand the whole stack." Google hasn't announced yet
The vision is clearly to capitalize on the DARPA Grand Challenges as Google has done so thoroughly with the automobile/self driving challenge.
Google has purchased a number of teams and people involved in the upcoming DARPA Robotics Challenge
and note that D'Andrea is the tech wizard behind Kiva's robotic warehouses. The video shows a novel failsafe algorithm that allows an unmanned aerial vehicle to recover
According to Mueller, the algorithm allows the vehicle to remain in flight despite the loss of one, two, or possibly even three propellers.
Per trajectory this requires less than two microseconds on a modern laptop computer. The trajectory generator is used here to generate trajectories to hit a ball towards a target and determines:
This video demonstrates an iterative learning algorithm that allows accurate trajectory tracking for quadrocopters executing periodic maneuvers.
The algorithm uses measurements from past executions in order to find corrections that lead to better tracking performance.
and the algorithm provides a means to then transfer the learned corrections from the lower execution speed to higher speeds.
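The core learning update can be sketched in a few lines: after each execution, the measured tracking error corrects the feedforward input for the next run. The toy plant and learning gain below are assumptions for illustration, not the team's actual quadrocopter model:

```python
import numpy as np

def run_plant(u):
    # toy stand-in for one execution of the maneuver: the "vehicle"
    # responds with a scaled version of the commanded input
    return 0.8 * u

def ilc(reference, iterations=20, gain=0.5):
    # iterative learning control: refine the feedforward input run by run
    u = np.zeros_like(reference)
    for _ in range(iterations):
        y = run_plant(u)          # execute the periodic maneuver
        e = reference - y         # measure tracking error from this run
        u = u + gain * e          # correct next run's input with the error
    return u

ref = np.sin(np.linspace(0, 2 * np.pi, 50))   # one period of the maneuver
u = ilc(ref)
print(np.max(np.abs(ref - run_plant(u))))     # residual error shrinks toward 0
```

With this linear toy plant the error contracts by a constant factor each iteration; the quadrocopter work applies the same idea with a proper dynamics model and noise handling.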
#Play-i want to bring a robot to every child After receiving a $1 million seed funding round from investors like Google Ventures, many of us have been waiting for Play
Play-i is a Silicon Valley-based startup with a founding team from Amazon, Google, frog design and Apple;
Bo is a mobile, multi-ball robot, whereas Yana is designed to be a storyteller. Both robots come with different skins
and Blockly, and can be programmed wirelessly from a device like an iPad. The robots are designed to be programmed by early readers
The Play-i team are parents who want to address the shortfall of computer scientists in an increasingly digital world. "What makes Play-i's robots so unique and special is that they really connect with younger kids on an emotional level
and now a VP of Product and Business Development at iRobot and adviser for Play-i. "They are leveraging a legacy of ideas from research on computing, robotics
or 2nd grade can easily play with programming and in the process construct rich models for understanding the world.
"Play-i gets how a developmentally appropriate introduction to programming can pave the way towards a lifelong interest and aptitude in computer science," said Vibha Sazawal, Lecturer and Visiting Research Scientist at the University
of Interactive Computing at Georgia Tech, Kiki Prottsman, Educator and Executive Director of Thinkersmith, Vibha Sazawal, Scientist at the University of Maryland, and Prashant Kanhere, who led development of the iPod at Apple.
However, Play-i may be the best-funded company with a focus on programming robots at the moment.
#Sharp's futuristic Health Care Support Chair: a proactive health care solution Sharp has developed a health care support chair that combines a range of sensors for checking the user's health.
and pulse waveform, storing the data in the cloud. When we talked to people, they often said that measuring each health indicator separately is time-consuming and bothersome.
If possible, people want to obtain all their health data in one go. The equipment needed to do that is still large, but we've made it as compact as possible.
Our idea is that people could check their health data regularly in places they often visit
We're considering a system that enables people to videoconference with a physician if they've detected anything abnormal.
Because all the data is saved, the physician can give suitable advice while looking at the information the user has obtained so far.
Rather than people who are ill going to the doctor, our idea is for healthy people to think about how to stay healthy and prepare for any emergencies
Nowadays they're used for applications that require extremely high precision, such as mounting smartphone components and coating the glass panels in LCD TVs.
If the timing is correct, the braking energy of one axis can be used as energy for another, in the same way as a regenerative braking system, reducing energy usage. This system is currently used in the automotive, chip-making, printing and food industries.
Once inspection is complete the data is analysed and suspicious areas are marked. Available data from earlier inspections is used to detect changes
and the damage map is updated. The next time a ship is due for inspection, the damage map is checked;
since the inspectors already have access to the data, they can identify critical problems in advance
and control tasks (with the exception of a microcontroller for bundling sensor data) are executed on a base station that runs the Robot Operating System.
The robot uses four Sharp infrared distance sensors for edge and obstacle detection, and an InvenSense MPU-6050 IMU provides the operator with orientation data.
These sensors are currently used only for user feedback. The camera used on the robot is a GoPro Hero 3, which serves as a demonstrator for a visual sensor.
Ethicon Endo-Surgery, a Johnson & Johnson subsidiary, created SEDASYS, a computer-assisted device that administers the prescription drug propofol into the bloodstream via intravenous (IV) infusion.
#OAK, a Kinect-based active support system for the severely disabled OAK, which stands for "Observation and Access with Kinect", is a software application for use by people with severe disabilities.
It has been developed by RCAST at the University of Tokyo in collaboration with Microsoft Japan and is available from assist
So for people who've been thought of as totally immobile until now, this system could open up a lot more possibilities based on scientific data, such as shoulder movements for example. Another option is called Face Switch
Through the Open Hand Project, an open source project with the goal of making robotic prosthetic hands more accessible to amputees, a fully-functional prototype has already been developed.
Usually they need to be custom-fitted to the user's remaining arm, which can rack up medical bills with consultations and fittings.
It uses stick-on electrodes to read signals from the user's remaining muscles, which can control the hand, telling it to open or close.
Because the user can select any colour, it's easy to switch from a right hand to a left hand
#Researchers use single joystick to control swarm of RC robots What can you do with 12 RC robots all slaved to the same joystick remote control?
Common sense might say you need 11 more remotes but our video demonstrates you can steer all the robots to any desired final position by using an algorithm we designed.
The algorithm exploits rotational noise: each time the joystick tells the robots to turn every robot turns a slightly different amount due to random wheel slip.
We use these differences to slowly push the robots to goal positions. The current algorithm is slow
so we're designing new algorithms that are 200x faster. You can help by playing our online game:
www.swarmcontrol.net. The algorithm extends to any number of robots; this video shows a simulation with 120 robots and a more complicated goal pattern.
Our research is motivated by real-world challenges in microrobotics and nanorobotics where often all the robots are steered by the same control signal (IROS 2012 paper).
Our colleagues Yan Ou and Agung Julius at RPI and Paul Kim and Minjun Kim at Drexel use an external magnetic field to steer single-celled protozoa swimming in a Petri dish.
We can then either drive the robots around with a simple joystick or let a computer apply the control.
Regardless the commands are the same and consist merely of go forwards/backwards x seconds or turn left for 2 seconds.
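A minimal simulation of this shared-command model shows how rotational noise makes initially identical robots diverge, which is exactly what the steering algorithm exploits. The noise level and command sequence here are invented for the demo:

```python
import math
import random

# Every robot receives the SAME forward/turn commands, but each turn is
# perturbed by random wheel slip (rotational noise), so headings drift apart.
def step(pose, cmd, noise=0.1, rng=random):
    x, y, th = pose
    if cmd[0] == "forward":
        # forward motion is assumed noise-free in this sketch
        return (x + cmd[1] * math.cos(th), y + cmd[1] * math.sin(th), th)
    # turn: actual turn angle differs slightly for each robot
    return (x, y, th + cmd[1] * (1 + rng.gauss(0, noise)))

robots = [(0.0, 0.0, 0.0)] * 12                    # all robots start identically
commands = [("turn", math.pi / 2), ("forward", 1.0)] * 5
for cmd in commands:
    robots = [step(p, cmd) for p in robots]

# After a few shared commands the headings (and hence positions) differ
# robot-to-robot; the algorithm uses these differences to push each robot
# toward its own goal position.
print(len({round(p[2], 6) for p in robots}))
```

The same loop with noise set to zero would keep all twelve robots perfectly superimposed, which is why noiseless shared control cannot separate them.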
The computer has an advantage over human players because it can precisely measure the position
This steering algorithm is based on piecewise-constant inputs and Taylor series approximations. Taylor series approximations give us a clear method for increasing precision.
You can test this out by purchasing several RC cars tuned to the same radio frequency.
The algorithm published in another 2012 IROS article shows that rotational noise improves control but translational noise impairs control.
Our algorithm allowed us to control the final position of n robots but we could not control the final orientation.
The video for our upcoming IROS 2013 paper illustrates this algorithm using robots equipped with laser turrets.
and to gather quantitative data about what mechanisms help people work with these swarms most effectively.
Secondarily, we've created a simple platform for publishing these academic user-experiment projects online: why settle merely for a handful of undergrads to sample with?