It's been a very long time since I've had time to post, and a lot has happened in the interim. Unfortunately, the Mars Missioneers project didn't get past the finalist stage in the DMLC, but some of the ideas have been carried forward and developed.
I have blogged previously about a project in which children from a primary school in Worcestershire were to connect to the British Museum via a Rovio robot. As it turned out, bringing that project to fruition presented many technical challenges, not least that the robot had trouble connecting to the British Museum's wireless network, which left us with the option of creating a temporary wireless network using a 3G router. This led to further problems with bandwidth, signal strength, static IPs and so on, but the staff I was working with were admirably tenacious and got through the hitches. After a few false starts, the kids and teachers were together, the British Museum staff were prepared and the technology decided to work. The children spent around 40 minutes exploring the museum's Enlightenment Gallery, navigating their way to ten different cases containing ancient Egyptian artifacts. Judging by their responses in interviews afterwards, the children got a lot from the experience. The video below shows some footage of the activity itself and some of the feedback received during the interviews.
I think the activity had some interesting outcomes in terms of what happens when videoconferencing is done with a moving camera instead of a static one. In addition, the fact that the camera was controllable by the other party added a sense of presence which is otherwise absent from videoconferencing.
The experience was not as fluid as I would have liked: the video feed kept choking, and the inconsistent connection meant we couldn't use the robot's built-in audio features, so we had to rely on a mobile phone link alongside the robot. Most of this was down to our reliance on 3G, and we have since run an activity between two schools with a solid data connection. That experience was much more robust and enabled the level of real-time interaction we had hoped for with the British Museum linkup.
I would like to run a videoconference using something like Drahtwerk's iWebcamera, which makes the webcam mobile but not under the control of the calling party. This would help to figure out what it is about this type of videoconference that makes it more engaging: is it the mobility of the camera, or the fact that the calling party controls it? If the control aspect is not significant to the user's experience, then the activity becomes much more technically manageable. Taking the robot out of the loop would remove countless technical hurdles, but would it also remove the fun?
I think that once network speeds catch up, this concept has enormous potential for learning. Serendipitously, the day after we filmed one of the children suggesting we send the robot up the North Eiger, I got a tweet from The Guardian reporting that Nokia had just installed a 3G network around Mount Everest. Virtual field trip to Everest, anyone?