Tuesday 14 December 2010

British Museum + Rovio + children = ...

It's been a very long time since I've had time to post, and a lot has happened in the interim. Unfortunately, the Mars Missioneers project didn't get past the finalist stage in the DMLC, but some of the ideas have been carried forward and developed.
I have blogged previously about a project which was planned for children from a primary school in Worcestershire to connect via Rovio to the British Museum. As it turned out, bringing that project to fruition presented many a technical challenge, not least the fact that the robot had issues connecting to the British Museum's wireless network, which left us to fall back on a temporary wireless network run from a 3G router. This led to further problems with bandwidth, signal strength, static IPs, etc., but the staff I was working with were admirably tenacious and got through the hitches.
After a few false starts, the kids and teachers were together, the British Museum staff were prepared and the technology decided to work. The children spent around 40 minutes exploring the museum's Enlightenment Gallery, navigating their way to 10 different cases containing ancient Egyptian artifacts. Judging by their responses in interviews after the activity, it was a success: the children got a lot from the experience. The video below shows some footage of the activity itself and some of the feedback received during the interviews.
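As a technical aside for anyone attempting something similar: the fiddliest part on the day was simply knowing whether the robot was still reachable at its address over the temporary 3G network. Below is a minimal sketch, in Python, of the kind of pre-session check that would have saved us some nerves. The IP address is a placeholder, and the rev.cgi status command is an assumption based on Rovio's HTTP CGI interface, so check the API spec for your firmware before relying on it.

    # Pre-session reachability check for a Rovio on a temporary 3G-routed network.
    # The IP is whatever static address the 3G router assigns the robot (placeholder
    # here), and the rev.cgi GetReport command is assumed from the Rovio API spec.
    import time
    import urllib.request

    ROVIO_URL = "http://192.168.1.100/rev.cgi?Cmd=nav&action=1"  # assumed status command

    def rovio_reachable(timeout=5):
        """True if the robot answers its status CGI within `timeout` seconds."""
        try:
            with urllib.request.urlopen(ROVIO_URL, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    while True:
        # Poll every ten seconds so a flaky 3G signal shows up before the class does.
        print("robot reachable" if rovio_reachable() else "no response - check signal/IP")
        time.sleep(10)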



I think the activity produced some interesting insights into what happens when videoconferencing is done with a moving camera instead of a static one. In addition, the fact that the camera was controllable by the other party added a sense of presence which is otherwise absent from videoconferencing.
The experience was not as fluid as I would have liked - the video feed kept choking, and the inconsistent connection meant we couldn't use the robot's built-in audio features, so we had to rely on a mobile phone link alongside the robot. Most of this was down to the 3G connection; we have since run an activity between two schools with a solid data connection, and that experience was much more robust, enabling the level of real-time interaction we were hoping for with the British Museum linkup.
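Some rough arithmetic shows why 3G was never going to be comfortable. The figures below are assumptions rather than measurements from the day (including the assumption of a motion-JPEG-style stream), but even a modest live camera feed outstrips a nominal 3G uplink several times over:

    # Back-of-envelope bandwidth check (all figures assumed, not measured).
    frame_kb = 15      # rough size of one 320x240 JPEG frame, in kilobytes
    fps = 10           # assumed frame rate of the robot's camera stream
    uplink_kbps = 384  # nominal UMTS 3G uplink, in kilobits per second

    needed_kbps = frame_kb * 8 * fps  # kilobytes/frame -> kilobits/second
    print(f"stream needs ~{needed_kbps} kbit/s, uplink offers ~{uplink_kbps} kbit/s")
    # -> stream needs ~1200 kbit/s, uplink offers ~384 kbit/s: frames have to be
    #    dropped or the feed stalls, which is exactly the choking we saw.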
I would like to run a videoconference using something like Drahtwerk's iWebcamera, which allows the webcam to be mobile but without being under the control of the calling party. This would help to figure out what it is about this type of videoconference that makes it more engaging - is it the mobility of the camera, or the calling party's control of it? If the control aspect is not significant to the user's experience, then the activity becomes much more technically manageable. Taking the robot out of the loop would remove countless technical hurdles, but would it also remove the fun?
I think that once network speeds catch up, this concept has enormous potential for learning. Serendipitously, the day after we filmed one of the children suggesting we send the robot up the north face of the Eiger, I got a tweet from The Guardian reporting that Nokia had just installed a 3G network around Mount Everest. Virtual field trip to Everest, anyone?

Thursday 15 April 2010

Digital Media and Learning Competition

After a tip-off from FutureLab's Flux blog, I decided to enter this year's Digital Media and Learning Competition with a version of the epic cross-curricular Mars Mission-based project I blogged about a while back. I called it Mars Missioneers and shoehorned as much detail as I could into the meagre 300-word limit on the application.
The judges obviously saw its potential and recently chose it as one of the 50 projects, out of many hundreds of submissions, to go through to the final round of the competition, amongst such big players as FutureLab themselves and Mitchel Resnick of MIT Scratch fame! After I had picked myself up off the floor, I set about the (surprisingly mammoth) effort of creating a three-minute video describing the concept behind the project. That effort came to fruition today! Here is the finished product:



I used a number of free tools in the creation of the video - Blender for the title sequence, Audacity for the audio recording and mixing, Prezi for the panning and zooming graphics portions, and the ingenious Xtranormal for the animated narrator sequences. I discovered that Xtranormal allows you to render the character on a green screen, which means you can composite the footage onto whatever background you desire. Combine that with iMovie '09 and you have the tools to make something that looks halfway decent for very little investment (other than the time involved!).
If you would like to comment on the project, I would appreciate it enormously if you could take the time to register at the DMLC site and leave a comment on the project page. Comments close on the 22nd April, so time is rather short!
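As a postscript for the command-line inclined: the green-screen compositing step doesn't strictly need iMovie. A sketch along the following lines, shelling out from Python to ffmpeg's chromakey filter, would do the same job - it assumes ffmpeg is installed, and the file names and key values are placeholders to tune.

    # Illustrative green-screen composite: key out the Xtranormal narrator's
    # green background and overlay them on a backdrop of your choice.
    # Assumes ffmpeg is on the PATH; file names and filter values are placeholders.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "narrator_greenscreen.mp4",  # character rendered on green
        "-i", "backdrop.mp4",              # whatever background you desire
        "-filter_complex",
        # key out pure green, then overlay the narrator on the backdrop
        "[0:v]chromakey=0x00FF00:0.15:0.05[fg];[1:v][fg]overlay[out]",
        "-map", "[out]",
        "composite.mp4",
    ], check=True)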