The rocket and general hardware are also at least partially covered and, as stated at the press conference, NASA
#Open source 3D printed Poppy humanoid enables experimentation in robot design
Funded by the European Research Council (ERC), Poppy is a 3D printed open source humanoid designed by the Inria Flowers lab, a French research facility that creates computer and robotic models as tools for understanding developmental processes in humans. Inria research director Pierre-Yves Oudeyer, who developed Poppy as a tool for studying the science of learning and development, hopes that the open source and 3D printable aspects of the project will enable researchers to quickly print custom body parts and experiment with different robot morphologies in order to study their impact on behaviour and learning.
While other humanoid projects such as iCub are also open source, the 3D-printable aspect of Poppy is significant: with 3D printing, changing the body shape is easier. Another platform that is similar to Poppy in that it is both open source and 3D printed is Jimmy the Robot.
The 3D printing and open source aspects of the project are part of that vision. To facilitate collaboration and experimentation, the team has created a dedicated open source web platform that combines hardware, software and web tools.
However, very little has been done to explore the benefits of 3D printing and its interaction with computer science in classrooms.
and teachers an adequate tool to cultivate the creativity of students studying in fields such as mechanics, computer science, electronics and 3D printing.
Both hardware and software are open source. There is not one single Poppy humanoid robot, but as many as there are users. This makes it very attractive, as it has grown from a purely technological tool into a real social platform.
#Google adds to DeepMind, acquiring 2 UK startups and partnering with Oxford U
In January 2014 Google acquired London-based DeepMind Technologies for $643 million. Now it is adding to that purchase with two more companies, ten new hires and a substantial contribution to Oxford University. DeepMind was a British AI startup with some high-profile investors and a whiz-kid founder (Demis Hassabis), and its forte was smart recommendations for online commerce.
The two acquisitions are Dark Blue Labs (which is creating systems to understand natural language that would allow computers to comprehend the meaning of sentences
According to the Financial Times, the two acquisitions are estimated to have cost Google DeepMind $50 million. The Google-Oxford partnership will cost more: a substantial contribution to Oxford University is forthcoming to expand Google's AI and deep learning capabilities.
"Google DeepMind has hired all seven founders of these startups, with the three professors holding joint appointments at Oxford University, where they will continue to spend part of their time," Hassabis notes in his blog post. "These exciting partnerships underline how committed Google DeepMind is to supporting the development of UK academia and the growth of strong scientific research labs."
#Japan holds first "robotics revolution" council meeting
This fall the Japanese government held its first meeting of a new panel focused on its goal of a "robotics revolution", a key item in the government's
The panel is tasked with promoting measures to increase the use of robots and related technologies in various fields, extending out of the manufacturing sector and into hotel, distribution, medical and elderly nursing-care services.
According to Prime Minister Shinzo Abe, who instigated the robot panel, determining the appropriate use of robots will be key to solving these problems. The panel, chaired by Mitsubishi Electric Corp. consultant Tamotsu Nomakuchi, will work out a five-year plan, to be presented by the end of 2014, with details on how it will achieve the numerical targets.
In our laboratory we develop various types of high-speed vision hardware and algorithms that can implement high-speed image processing with a sampling time from 10 ms down to 1 ms. High-speed vision can provide control data at the same sampling rate as that of the servo controller used for the robot actuators. The running algorithm used in the ACHIRES robot is different from those typically used in other running robots. While most running robots use a method based on ZMP criteria for maintaining a stable and balanced posture, we introduced a very simple algorithm that exploits the high-speed performance of the sensory-motor system, without relying on ZMP criteria.
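The idea of a sensory-motor loop running at the servo rate can be sketched as follows. This is a hypothetical toy, not the actual ACHIRES controller; the camera interface, the pitch estimate and the gain are invented placeholders:

```python
SAMPLE_PERIOD = 0.001  # 1 ms loop period, matching the servo controller rate

def pitch_from_image(image):
    """Hypothetical stand-in for high-speed image processing:
    estimate body pitch deviation (radians) from one camera frame."""
    return image["pitch"]

def control_step(image, gain=2.0):
    """Simple reactive correction: command a torque proportional to the
    deviation of body pitch from upright, with no ZMP computation."""
    pitch_error = pitch_from_image(image)
    return -gain * pitch_error  # torque command sent to the leg actuators

# One iteration of the 1 kHz loop with a dummy frame:
torque = control_step({"pitch": 0.05})
```

The point is simply that at a 1 ms sampling period, a proportional correction computed directly from vision can stand in for explicit ZMP-based posture planning.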
and is equipped with computers for navigation and a satellite communication system. Three Wave Gliders have been deployed for this project.
because Hellospoon is driven by a smartphone app: this relegates the processor, speaker and microphone to the user's smartphone, drastically cutting the cost of production. If a robot is intended to improve people's lives it shouldn't be expensive, right? Also, unlike other mealtime assistants, Hellospoon is meant to be a companion and not just a feeding machine.
Users can talk to Hellospoon using voice commands, and Hellospoon can answer back, play songs while scooping food, and do a little dance when the user decides to stop eating.
It's interactions like these that make mealtime more entertaining and help to establish a bond between the user and the robot. We know that every user is different, and so Hellospoon's behaviours can be customized to suit the user's individual needs and preferences.
For example, we are developing a profile system where Hellospoon will behave playfully when interacting with kids and respectfully when interacting with elderly users. My name is Luis Garcia and I'm a recently graduated 23-year-old mechatronics engineer from Sinaloa, Mexico; I'm also the developer, designer, programmer and crazy guy behind the Hellospoon robot. I have been developing Hellospoon for the past year and a half from my bedroom, proving that sometimes you don't need a big robotics laboratory to start something great!
Hellospoon YouTube channel | Hellospoon blog | Hellospoon Twitter. Please support the Hellospoon robot campaign: inviting other people to support this development will be amazing
#Open Brain-Computer Interface: An Interview with Conor Russomanno
Brain-computer interfacing (BCI) is a rapidly growing field that offers huge potential for many applications, such as medical-grade BCIs to help people with sensory-motor disabilities.
Currently a number of researchers are developing more affordable BCI systems designed to address a wider range of neurotherapeutic applications.
and research developer of OpenBCI, a low-cost open-source hardware platform that records the brain's electrical signals and uses devices and software languages to make the data easily accessible.
Russomanno and cofounder Joel Murphy aim to accelerate the advancement of BCI through collaborative hardware and software development.
Another of OpenBCI's exciting potentials is the fact that it is fully open source. Russomanno
This enables users to share tutorials, getting-started guides and also their own individual projects.
In reality, however, in the current state, "mind-controlled robots", at least using OpenBCI, translates to looking at a flashing screen
and helping him train for the All Star Games through a sequence of animated storybooks that play on a screen in his belly.
Jerry is built on the Android platform and has 19 sensors sewn throughout his fur. These sensors let children check Jerry's glucose levels, feed him foods and give him insulin
Builder and all-around maker of mechanical and electrical systems with a passion for user interaction. Wire-framer of ideas and framer of ideas out of sculpture wire.
Andrew Berkowitz, Software engineer: Android developer, hacker, hustler and resident diabetes expert. If our bears ever become sentient, you'll know whom to blame.
Joel Schwartz, Hardware engineer: previously building rehabilitative toys for kids with cerebral palsy. Happiest in a machine shop, with a well-made spreadsheet, or fly fishing.
Brian Oley, Software engineer: an acrobatic, unicycling photographer and developer of mobile games and medical software. A well-balanced member of the team.
#Has Dyson's robotic dream become a reality?
Has Dyson finally cracked its 16-year mission to create a robot vacuum cleaner? The company's teaser video hints at a new release that looks a lot like a rival for the best-selling Roomba. Dyson's mysterious video, posted on YouTube (below), heralds a 4 September release. The only piece of information to accompany it reads:
and would use computer vision to navigate a room. Imperial's Professor Davison has also spoken about the importance of location tracking:
Dyson is also planning a £250m expansion of one of its UK sites at Malmesbury
In a great summary published on TechCrunch of why this is happening now, computer scientists Rudina Seseri and Wan Li Zhu, who both work for VC firm Fairhaven Capital, explain why the conditions are now perfect. "In recent years," they say, "we have seen accelerated levels of innovation in both software and hardware that are now driving new possibilities for consumer readiness and adoption of personal robotics." This has happened on a number of levels. Proliferation of devices, compute and bandwidth: instead of requiring expensive custom onboard computing, robots are now able to leverage smartphone hardware and cloud computing for processing and storage. This has implications for both the cost and the availability of data for machine perception and learning. Progress in natural language processing, speech, vision and machine learning is a self-reinforcing loop: as machines get better at understanding the real world, they learn at a faster rate.
To increase the range of a small UAV, one idea is to pair it with an unmanned ground vehicle (UGV) that can carry it to a site of operation and transport heavier cargo.
and experimental results for landing a quadrotor on a ground rover. The two robots communicate their positions, converge to a common docking location, and then dock successfully both indoors and out.
and is stable in the face of multi-second time delays, allowing unreliable WiFi communication to be used during the landing.
For more information you can read the paper Coordinated Landing of a Quadrotor on a Skid-Steered Ground Vehicle in the Presence of Time Delays (John M. Daly, Yan Ma, Steven L. Waslander, Autonomous
With the robots ready, the Nagpal team had to develop an algorithm that could guarantee that large numbers of robots with limited capabilities and local communication could cooperatively self-assemble into user-specified shapes. This is what they came up with. The algorithm had to account for unreliable robots that are pushed out of their desired location or that block other robots performing their functions. Nagpal's team overcame this challenge by implementing strategies that allowed robots to rely on their neighbours to cooperatively monitor for faults.
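The published algorithm combines edge-following, gradient formation and distributed localization. As a simplified illustration of the gradient step alone (a toy sketch, not the team's code), each robot repeatedly adopts one more than the minimum value heard from its neighbours, yielding a hop-count distance to a seed robot:

```python
def gradient_step(my_value, neighbor_values, is_seed=False):
    """One round of distributed gradient formation.
    The seed holds value 0; every other robot adopts
    min(neighbors) + 1, giving a hop-count distance to the seed."""
    if is_seed:
        return 0
    if not neighbor_values:
        return my_value  # no neighbors in range: keep current estimate
    return min(neighbor_values) + 1

# Synchronous simulation on a line of 4 robots (seed at index 0):
INF = 10**6
values = [0, INF, INF, INF]
for _ in range(len(values)):
    values = [gradient_step(values[i],
                            [values[j] for j in (i - 1, i + 1)
                             if 0 <= j < len(values)],
                            is_seed=(i == 0))
              for i in range(len(values))]
# values converges to [0, 1, 2, 3]
```

On real kilobots the same kind of rule runs asynchronously over noisy infrared messaging, which is exactly where the neighbour-based fault monitoring described above becomes necessary.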
After years of research in this area it looks like we are finally reaching a tipping point where both hardware and algorithms can make large-scale robotic swarms possible, at least in the lab. These swarms have the potential to help us understand natural self-organised systems by providing fully engineered physical systems on which to do experiments.
Aloft hotels are a boutique brand of Starwood Hotels that focuses on techie features, such as Apple TV in-room services, and the needs of the "always on" generation of traveler.
by expanding cooperation with telecommunications. The 1st Vice-Minister Kim Jae-hong of the Ministry of Trade, Industry and Energy said
Sebastian Cagnon, a French behavior architect at Aldebaran Robotics who blogs at About Robots, recapped the following governmental programs.
Programming Edison involves dragging and dropping icons to form a program. The software, EdWare, is open source and compatible with Windows, Mac and Linux. Programs download into the robot via the supplied EdComm cable, which plugs into the headphone jack of your computer.
But programming isn't necessary to start using Edison, as the robot can read special barcodes that activate preprogrammed features such as line-following and obstacle avoidance. The barcodes also allow Edison to learn commands from a standard TV or DVD remote control, which can then be used to drive the robot remotely.
Two or more Edisons can communicate via infrared light, so combined with the low cost, creating your own robot swarm becomes feasible. The Edison robot is also LEGO-compatible, allowing you to build onto the robot and create something new.
#Global trends in robotics from patent analysis
120,000 robotics patents have been published in the last 10 years, with the rate tripling from 2004 to 2013, according to the UK Intellectual Property Office Informatics Team.
"the big data revolution and energy-efficient computing"; "satellites and commercial applications of space"; "robotics and autonomous systems";
Google's automotive patent portfolio is relatively small at 35 families. Most of Google's patents were published very recently, in 2013, with the earliest being only in 2010. The rate of publishing for Google shows a clear increase, so further patents should be anticipated. The report also shows collaboration within industry groups. For example, Google and Honda are very self-contained, in contrast to other automotive companies. Most of the patents in the UK dataset are in the field of autonomous vehicles, including road vehicles, unmanned aerial vehicles and unmanned underwater vehicles. Robotics companies in the UK dataset have very small portfolios, with the largest being Notetry (5 families)
The cap then deciphers the intention of the user and transmits the instruction to the exoskeleton
The exoskeleton user wears sleeves created by a team led by Prof. Hannes Bleuler at LSRO, EPFL and NCCR Robotics, stemming from the ongoing PhD work of Simon Gallo.
and allowing the user to walk upright. Thus the wearer can sense what feels like an artificial footprint on their arm as the sensors roll from heel to toe.
and an avatar to allow the user to see real-time limb positioning. One of the major challenges of the project was the training process for the eight volunteers (males
The second worry for the project was that electromagnetic fields from cell phones carried by people in the stadium would block electronic signals in the exoskeleton
and has the hardware capabilities to organize into complex shapes with its neighbors due to accurate range and bearing.
one that has a built-in 38 kHz demodulation circuit (as used in TV remote controls) and another that provides us with the raw amplitude.
however, omnidirectional motion using three motors has been thought to be possible only by carefully synchronizing all motor phases, something close to impossible with cheap hardware.
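For context, this is the standard way wheel speeds are derived for a three-omniwheel ("kiwi drive") platform. It is a generic textbook sketch, not this robot's firmware, and the wheel and body radii below are made-up values:

```python
import math

def wheel_speeds(vx, vy, omega, wheel_radius=0.02, body_radius=0.05):
    """Wheel angular speeds for a three-omniwheel robot whose wheels are
    mounted at 90, 210 and 330 degrees around the body center.
    (vx, vy) is the desired body velocity (m/s), omega the spin rate (rad/s)."""
    angles = [math.radians(a) for a in (90, 210, 330)]
    return [(-math.sin(a) * vx + math.cos(a) * vy + body_radius * omega)
            / wheel_radius
            for a in angles]

# Pure rotation requires the same speed on all three wheels:
speeds = wheel_speeds(0.0, 0.0, 1.0)
```

The geometry itself is simple; the hard part alluded to above is executing the three commanded phases precisely enough on low-cost motors.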
#Google's new purpose-built self-driving car
Google completed a major step in its long and extensive self-driving car project by presenting its first purpose-built autonomous car
or brake pedals, and you can ride it strictly as a passenger, which is probably a strange feeling but, according to Google's video, not entirely unpleasant.
Google will make about a hundred prototypes, and it looks like a very serious effort. It is equipped with multiple sensors (a big LIDAR on its roof is probably doing most of the work)
Thanks to Google's previous experience with self-driving cars, one can expect very good performance in real environments; other Google self-driving cars have completed hundreds of thousands of miles with no major incidents. The car itself may look like a toy
and above it the whole front panel is made of foam so the car is not only safe for its passengers but for pedestrians as well.
This is a very clever move by Google, which aspires to overcome the legal and ethical problems of who should be able to drive
Google will launch a small pilot program in California in the next couple of years. The prototypes released on public roads will have manual controls for obvious legal reasons
Read the full post on Google's official blog here. It's worth mentioning that similar concepts
but its main feature was the interior, where, similarly to Google's car, no manual controls were present
and contains three motors (for rotational movements), onboard computation and a battery that allows for at least one hour of operating time.
The team behind the Roombots have had to overcome a series of research challenges in terms of hardware and control.
and technical challenges such as compensating for unwanted bending in the mechanical structure (related to building larger complex 3D structures) and developing the best-suited algorithms for reconfiguration
which uses a combination of wearable devices, sensors throughout the home and a mobile robot to assist older people in their homes
and often writes about Mr. Robin on her blog (in Italian). In our ageing society many elderly people are in the same situation
or watching television, and monitor health: blood pressure or sugar levels, for example. They also allow the person's caregivers to monitor their wellbeing remotely and to check for falls.
A robot moves around the home and allows family, friends and carers to virtually visit the person.
Silver generation and economy: Vice-President of the European Commission Neelie Kroes (@NeelieKroesEU), responsible for the Digital Agenda, says: "None of us is getting any younger."
but we see that various aspects of the system are appreciated differently by the different users.
and technology should be both adaptable and tailored to user needs. Current plans are to put the system into commercial production next year, based on an upfront fee
#Gzweb for mobile platforms
Gzweb Mobile from OSRF on Vimeo. During her GNOME Outreach Program for Women internship with OSRF, Louise Poubel made Gzweb work on mobile platforms by designing a mobile-friendly interface and implementing lighter graphics. Until recently, Gazebo was only accessible on the desktop. Gzweb, Gazebo's web client, allows visualization of simulations in a web browser.
Louise implemented the graphics using WebGL. The interface includes menus suitable for mobile devices and multi-touch interactions to navigate the 3D scene.
Louise conducted usability tests throughout the development phase in order to improve user experience and quickly discover
and resolve bugs. To optimize 3D rendering performance on mobile platforms, she also implemented a mesh simplification tool, which allows users to choose how much to simplify the 3D models in the database during the deployment stage and generate coarse versions of meshes to be used by Gzweb. Mobile devices have been, and will continue to be, a big part of our lives.
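As a toy illustration of what a mesh simplification step can look like (this is not the actual Gzweb tool, just a sketch of the general idea), vertex clustering snaps vertices to a coarse grid, merges those that coincide, and drops triangles that collapse:

```python
def simplify(vertices, triangles, cell=0.1):
    """Toy vertex-clustering mesh simplification: snap each (x, y, z)
    vertex to a grid of spacing `cell`, merge vertices that land in the
    same cell, and keep only triangles with three distinct corners."""
    cell_to_index, index_map, new_vertices = {}, {}, []
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cell_to_index:
            cell_to_index[key] = len(new_vertices)
            new_vertices.append(tuple(k * cell for k in key))
        index_map[i] = cell_to_index[key]
    new_triangles = [(index_map[a], index_map[b], index_map[c])
                     for a, b, c in triangles
                     if len({index_map[a], index_map[b], index_map[c]}) == 3]
    return new_vertices, new_triangles
```

A larger `cell` gives a coarser mesh, which is the same trade-off a user makes when choosing a simplification level at deployment time.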
With Gzweb Mobile, users can visualize simulations on mobile phones and tablets and interact with the scene, inserting shapes and moving models around.
References: http://www.gazebosim.org; Gzweb wiki; Gzweb Bitbucket repository
#The People's Bot: Robotic telepresence for the public good
Have you ever wanted to attend a conference that was too far away, too expensive or sold out?
Whether you're a penniless researcher, an interested youth or a group of elderly people who want to live in other people's bodies (like in that weird movie Being John Malkovich), your wish may be granted.
& support public interest media-making through scholarships, media fellowships and auctions. And by public good they mean, for example, sneaking into the highly popular Computer-Human Interaction conference by purchasing participant registration for The People's Bot
but, like many people, can't afford to attend, it looks like you can still bid for a much cheaper telepresence opportunity on eBay.
Bid on eBay to attend the History of Wearables & Google Glass exhibit at CHI (proceeds go to the CHI student travel grants).
Rumor has it The People's Bot is currently wandering around the Theorizing the Web conference in New York City.
and locked-in syndrome to interact with the world around them by controlling a computer mouse with their mind.
Moving a mouse is something that most of us take for granted, but an individual with locked-in syndrome who is fully awake and conscious has no other means of producing speech or moving limbs
Through a device implanted in the skull, the BrainGate device interfaces with a computer, allowing the individual to interact with their environment.
but revolutionary hardware designed by engineers like Bacher has also been key. Bacher began working on the BrainGate technology by conducting brain-machine interface research on nonhuman primates. Or, as Bacher describes it, "basically having monkeys playing video games with their brains." This research helped Bacher
Bacher's approach also takes worldwide emerging technology trends in hands-free computing, robotics and other areas and finds ways to translate them into helping disabled individuals. Says Bacher: "If we develop a piece of technology for someone with a spinal cord injury who is moving their head to control a computer, there's no reason that can't help someone
Human-computer interface technology is still limited largely to touch, keyboard and mouse. The research conducted by Bacher
Says Bacher: "Clearly a mouse and a keyboard, even today, seem like an antiquated way to interact with technology. We've seen developments like touch-screen technology and voice control come a long way, but it's still not mainstream."
Smartphones can provide the "brains" for assistive devices. Eye motion-capture technologies developed for paralyzed individuals open up new ways for the general public to interact with technology
The latest in soft-bodied robots was created by a team of engineers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology.
and Computer Science and Director of CSAIL, Cagdas Onal, Assistant Professor of Mechanical Engineering at the Worcester Polytechnic Institute, and Andrew Marchese, a doctoral candidate in engineering at MIT, created the robot to be autonomous.
This means it has all the necessary sensing, actuation and computation on board. Its flexible body is made of silicone rubber.
and at the other end to sensors) and an algorithm to convert signals, the team has produced a hand that sends information back to the brain that is so detailed that the wearer could even tell the hardness of the objects he was given to hold, as reported in a paper published in Science Translational Medicine in February 2014
and are used to decode the intentions of the user, and a power source is used to activate the hand in one of four different grasping motions.
However the tests indicated that this was not the case thus opening up the technology to a wide range of potential users.
Unfortunately there is no super-software that can be uploaded to the robot to make it become an instant medical expert.
Software may be upgraded in the future to allow R2 to move around the station's interior and perform routine maintenance tasks, such as cleaning filters.
Other upgrades may enable the robot to work outside the space station to perform repairs and maintenance checks
The most defining feature of the system on display at the company showroom in Yokohama is its three-dimensional use of space: this is a 5-tiered cultivation system.
if playing music was as simple as looking at your laptop screen. Now it is, thanks to Kenneth Camilleri
The user can control the music player simply by looking at a series of flickering boxes on a computer screen.
and as the user looks at them, their brain synchronizes at the same rate. This brain pattern reading system, developed by Rosanne Zerafa, relies on Steady State Visually Evoked Potentials (SSVEPs).
This process, known as electroencephalography (EEG), records the brain's responses and converts the brain activity into a series of computer commands. As the user looks at the boxes on the screen, the computer program is able to figure out the commands, allowing the music player to be controlled without the need for any physical movement.
or change the song, the user just has to look at the corresponding box. The command takes effect in just seconds.
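A minimal sketch of how SSVEP detection can work (an illustrative toy, not Zerafa's actual system) is to compare EEG spectral power in narrow bands around each box's flicker frequency and pick the strongest:

```python
import numpy as np

def detect_ssvep(eeg, fs, target_freqs):
    """Toy SSVEP classifier: return the candidate stimulus frequency
    whose spectral power in the EEG signal is largest.
    eeg: 1-D signal, fs: sampling rate in Hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = [spectrum[(freqs > f - 0.5) & (freqs < f + 0.5)].sum()
              for f in target_freqs]
    return target_freqs[int(np.argmax(powers))]

# Simulated 4 s recording at 250 Hz of a user watching a 12 Hz flicker:
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
choice = detect_ssvep(eeg, fs, [8, 10, 12, 15])  # picks 12
```

Mapping each detected frequency to a command (play, pause, next song) then gives the kind of gaze-driven control described above.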
This particular brain-computer interface exploits one of these: the oculomotor nerve, which is responsible for the eye's movements.
This means that even an individual with complete body paralysis can still move their eyes over images on a screen.
This cutting-edge brain-computer interface system could lead the way for the development of similar user interfaces for tablets and smartphones.
PackBot's other attributes include state-of-the-art GPS, a video image display system, monitoring, an electronic compass and temperature sensors.
The robot is manipulated via an integrated Pentium-based computer
#Microsoft pays 5m CHF to ETHZ and EPFL for research on flying robots and new memory architectures
ETH Zurich and EPFL are jointly entering into a new research partnership with Microsoft Research.
Over five years Microsoft Research will provide five million Swiss francs of funding to support IT research projects.
Microsoft researchers will also work closely with the scientists at the two universities. Microsoft has been investing in Swiss research for years
and now the US technology company is renewing its longstanding collaboration with EPFL and ETH Zurich.
Microsoft will provide one million Swiss francs per year in funding for IT-related research projects at the two universities over a period of five years.
The collaboration is a continuation of a project launched in 2008 that focused on embedded technology software solutions and prototypes.
The new collaboration will broaden the areas of computer science covered and deepen the cooperation between the three partners.
Scientists in Lausanne and Zurich submitted 27 proposals seven of which were selected by the steering committee. Among the successful projects:
The young computer scientist from ETH Zurich studies the interaction between people and computers. For his project he and Dr. Shahram Izadi from Microsoft Research are investigating how flying robots can work
and interact with people in active scenarios. Specifically they want to develop a platform that enables flying robots to do more than just recognise people
Thanks to their algorithms the robots should be able to react to gestures and touch as well.
To do this they are combining thousands of energy-efficient micro-servers in a way that enables them to access the memories of the other servers with a minimal time delay.
The two computer scientists are working with Dr. Dushyanth Narayanan and other scientists from Microsoft Research in Cambridge to develop new applications for this system, known as Scale-out NUMA.