# Parrot AR.Drone app harnesses crowd power to fast-track vision learning in robotic spacecraft

Astrodrone is both a simulation game app for the Parrot AR.Drone and a scientific crowdsourcing experiment that aims to improve landing, obstacle avoidance, and docking capabilities in autonomous space probes.

As researchers at the European Space Agency's Advanced Concepts Team, we wanted to study how visual cues could be used by robotic spacecraft to help them navigate unknown, extraterrestrial environments. One of our main research goals was to explore how robots can share knowledge about their environments and behaviors to speed up this visual learning process. Similar to the RoboEarth project, the central idea is that a group of robots sharing visual information, such as raw camera images or abstracted mathematical image features, would have a much broader visual experience to learn from than a single robot operating on its own.

We had already studied the use of motion-based cues in navigation, such as optic flow and the size changes of visually salient features like SURF, but we wanted to verify our findings in this area and at the same time investigate whether the appearance of visual features could be used to aid navigation as well. To this end, we needed a very large robotic data set of visually salient image features paired with corresponding robotic state estimates, and it was simply infeasible to gather all this data by ourselves.

That is where the Parrot AR.Drone came into play. The AR stands for augmented reality, and the drone is indeed a toy quadrotor meant for playing games. We thought we could develop a game for the Parrot that would serve as a means for crowdsourcing the data we needed. And because the Parrot was designed to be used in augmented reality games and can be controlled by an iOS device, data gathering could be done in a gaming context, making it fun for people to participate in the experiment.

To this end, we designed a mission-based game where players simulate docking the Parrot with the International Space Station as quickly as possible while maintaining good control. Bonus points are given for correct orientation and low speed on the final approach. At the end of the mission, players can log their high scores on a score board and, at the same time, contribute to the experiment by anonymously sharing abstract mathematical image features and velocity readings.

The Parrot AR.Drone is uniquely suitable for this purpose because:

- It is widely available to the public, with over half a million units sold worldwide.
- It is a real robot with many useful sensors, including two cameras (one at the front and one on the bottom), sonar, accelerometers, gyroscopes, and (on the Parrot AR.Drone 2.0) a pressure sensor and magnetometer. Further, it already uses these sensors to achieve accurate onboard state estimation and, consequently, stable hovering.
- It can be controlled by smart devices such as iPhones or Android devices, which can be used to send data over the internet.
- The code for communicating with the Parrot is open source.

## Methodology

The question we faced experimentally was how to couple a real-world object to the virtual space in which the drone would be flying. Having a reference object of known size was a goal, since it would provide some form of ground-truth measurement.
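To see why a reference object of known size yields metric information at all, consider a simple pinhole-camera relation: the object's apparent size in pixels, together with its known physical size and the camera's focal length, fixes its distance. The sketch below is purely illustrative; the marker width and focal length are assumed values, not the drone's actual calibration.

```python
# Hedged sketch: distance from a reference object of known size under a
# pinhole-camera model. Marker width and focal length are assumptions,
# not the AR.Drone's real calibration values.

def distance_from_known_size(apparent_width_px: float,
                             real_width_m: float = 0.20,      # assumed marker width
                             focal_length_px: float = 700.0   # assumed focal length
                             ) -> float:
    """Pinhole model: apparent_width_px = focal_length_px * real_width_m / distance."""
    return focal_length_px * real_width_m / apparent_width_px

# Example: a 0.20 m marker spanning 70 pixels would be roughly 2 m away.
print(distance_from_known_size(70.0))  # -> 2.0
```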
As it happens, the Parrot AR.Drone is delivered with a marker that is recognized in the images by the onboard firmware. The recognition gives an image position (x, y) and a distance to the marker. Note, however, that this information is not enough to disambiguate both the position and the orientation of the drone with respect to the marker. Figure 1 shows two configurations that give exactly the same marker detection data. In the red case, the marker is in the left part of the image because the drone is located to the right of the marker. In the blue case, the marker is in the left part of the image because the drone has turned to the right. Note that the distances to the marker are the same.

In order to render the virtual space environment, we perform state estimation on the basis of the drone's estimates of its height, speeds, and angles, and its detection of the marker. The docking port of the ISS (the marker) is assumed to be at location (0, 0, 0). The state estimation uses an extended Kalman filter (EKF), and in light of the marker ambiguity discussed above, we decided to update only the position on the basis of the marker readings, and not the heading (a rough sketch of such a position-only update is given at the end of this section). The heading is already estimated quite well on board the drone, and the Parrot AR.Drone 2.0 also uses a magnetometer for this.

Figure 2 shows which computational processes happen on which device during the game. The marker detection and the estimation of the drone's attitude, height, and speeds are performed by the firmware on board the drone. The app runs the drone control interface, sends data to and receives data from the drone, performs state estimation (x, y, z) with respect to the marker, and renders the 3D world.

Of course, for the experiment we also need to process the drone's onboard images. Because of the iPhone's computational constraints, we decided to save five consecutive images while the player is trying to dock and to process them after the flight. This processing now happens when the player visits the high-score table and agrees to join the experiment. After extracting the vision data and concatenating it with the state estimates, the data is encrypted for transmission. Finally, since some players may use 3G to send their data, we wanted the data to be as compact as possible. Therefore, the data is compressed on board the iPhone before being sent. The data from five textured images takes around 77 KB on average.
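As promised above, here is a minimal sketch of a position-only EKF-style update, assuming a constant-velocity state and a measurement that directly observes the (x, y, z) position relative to the marker. With these linear models the update reduces to a plain Kalman step; the state layout, noise values, and matrices are illustrative assumptions, not the actual Astrodrone implementation.

```python
# Minimal sketch of a position-only Kalman update (illustrative values only).
import numpy as np

# State: [x, y, z, vx, vy, vz]; the marker / ISS docking port sits at the origin.
x = np.zeros(6)        # state estimate
P = np.eye(6)          # state covariance

def predict(x, P, dt, q=0.1):
    """Constant-velocity prediction step."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)     # position += velocity * dt
    Q = q * np.eye(6)              # simplified process noise (assumption)
    return F @ x, F @ P @ F.T + Q

def update_position(x, P, z_meas, r=0.05):
    """Update only the position with a marker-derived (x, y, z) measurement;
    heading is deliberately left to the drone's onboard estimate."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we observe position only
    R = r * np.eye(3)                               # measurement noise (assumption)
    y = z_meas - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    return x + K @ y, (np.eye(6) - K @ H) @ P

x, P = predict(x, P, dt=0.05)
x, P = update_position(x, P, np.array([0.4, -0.1, 2.0]))
```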
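To give a feel for the packaging and compression step, the following sketch concatenates per-image feature data with state estimates and compresses the result before transmission. The field names and feature layout are invented for illustration; the actual on-device format (and the encryption step) is not described in detail in this article.

```python
# Hedged sketch of post-flight data packaging: features and state estimates
# are concatenated, serialized, and compressed so the payload stays small
# enough to send over 3G. Layout and field names are assumptions.
import json
import zlib

def package_flight_data(features_per_image, state_estimates):
    """Serialize per-image feature descriptors and state estimates, then compress."""
    payload = {
        "features": features_per_image,   # e.g. lists of descriptor vectors
        "states": state_estimates,        # e.g. [x, y, z, vx, vy, vz] per image
    }
    raw = json.dumps(payload).encode("utf-8")
    return zlib.compress(raw, 9)

# Example with dummy data for five images:
dummy_features = [[[0.1] * 64 for _ in range(50)] for _ in range(5)]
dummy_states = [[0.0, 0.0, 2.0, 0.1, 0.0, -0.05] for _ in range(5)]
blob = package_flight_data(dummy_features, dummy_states)
print(len(blob), "bytes after compression")
```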
## Testing and Launch

Our original plan was to work on the game from February to April 2012, launch it in May or June, and analyze the data from August to September. In reality, we didn't converge on the final code until November 2012. Of course, when you are developing an app, you know the program inside out, and by habit you may avoid playing the game in a way that would make the program malfunction. So in November we invited other people to try out and rate the app. Their ratings and comments were used to make some final adjustments, and we sent the app to the Apple App Store for review in January 2013.

At the same time, we began to execute our plans for the launch and the related PR activities. We saw this as an essential part of the project, as the crowd first has to know about a project in order to contribute to it. The app was launched on 15 March 2013, accompanied by a press release about Astrodrone from the European Space Agency and the video shown below. Parrot also promoted our app on their site.

The news was soon picked up by BBC Technology, Wired, and The Verge, as well as newspapers and television shows such as ARD Nachtmagazin. This (ongoing) media attention has helped us to reach the public: just one week after the launch, some 4,000 people had downloaded the Astrodrone app, and 458 of them had already contributed to our experiment by sending their data. This is a nice start, but of course we hope there is more to come.

## Next Steps

We have not arrived at the end of this project, but rather at the beginning. We are first solving some small issues by means of app updates. For example, if an iPad or iPod is connected to the drone's Wi-Fi, it cannot send its data, and there was no message indicating the absence of an internet connection (solved in version 1.2). At the same time, we have started analyzing all the data that is coming in. In the future, we may also improve our crowdsourcing game in various ways, in order to make it as fun and easy as possible for players and as useful as possible for our research. In any case, it is an adventure to work with this exciting new way of gathering robotic data.