Tuesday 22 March 2011

Creative tools

In the course of research for an ongoing project, I've discovered a number of new (to me at least!) creative tools for working with or generating video content and presentations, which I think show real potential for learning:

Muvizu - This is a standalone application for Windows which links in with an online community and allows for the quick and easy creation of 3D animations. The potential sophistication of the final production in terms of lighting, motion and camera direction is quite astounding to behold, and the interface looks incredibly intuitive. The closest analogy I can draw is a super-charged xtranormal.com, without the charging for assets. As far as I can tell, it does not do text-to-speech, but it does do lip-synching to recorded dialogue. The online community seems to be pretty well thought out and is developing steadily, allowing creators to take on the role of a director or producer and gather participants from within the community to complete their project.

Kerpoof - This is a web-based creativity tool which allows users to create animated short films, still comics, pictures and story books in an anime style, with a very graphical, infant-friendly drag-and-drop interface.

Google SearchStories - I have of course been familiar with Google's Search Stories ad format for a while, but never realised that they have a tool which allows members of the community to create SearchStories of their own: you enter a list of search terms, highlight the information in the search results which tells your story particularly well, then choose a soundtrack for the story. This is a fantastic format for a new take on storytelling and narrative and has great potential as a medium for literacy activities.

Stupeflix - This is an online video editing and production tool in the vein of Animoto, though you can incorporate video clips as well as photos into slick templates and have more control over transitions, etc. than Animoto gives you. The end result looks like something which has come out of iMovie, which is really saying something for an online video editing tool.

Hope you find one or more of these tools useful, and please let me know of any others along a similar theme!

Tuesday 14 December 2010

British Museum + Rovio + children = ...

It has been a very long time since I've had time to post, and a lot has happened in the interim. Unfortunately, the Mars Missioneers project didn't get past the finalist stage in the DMLC, but some of the ideas have been carried forward and developed.
I have blogged previously about a project which was planned for children from a primary school in Worcestershire to connect via Rovio to the British Museum. As it turned out, bringing that project to fruition presented many a technical challenge, not least the fact that the robot had issues connecting to the British Museum's wireless network, which left us with the option of creating a temporary wireless network using a 3G router. This led to further problems with bandwidth, signal strength, static IPs, etc., but the staff I was working with were admirably tenacious and got through the hitches. After a few false starts, the kids and teachers were together, the British Museum staff were prepared and the technology decided to work. The children spent around 40 minutes exploring the museum's Enlightenment Gallery, navigating their way to 10 different cases containing ancient Egyptian artifacts. Judging by their responses during interviews afterwards, the activity was a success and the children got a lot from the experience. The video below shows some footage of the activity itself and some of the feedback received during the interviews.



I think the activity had some interesting outcomes in terms of what happens when videoconferencing is done with a moving camera instead of a static one. In addition, the fact that the camera was controllable by the other party added a sense of presence which is otherwise absent from videoconference situations.
The experience was not as fluid as I would have liked - the video feed kept choking and we weren't able to use the built-in audio features of the robot due to the inconsistent connection, which meant we had to rely on a mobile phone link in addition to the robot. Most of this was down to the fact that we were using 3G, and we have since run an activity between two schools with a solid data connection. That experience was much more robust and enabled the level of real-time interaction we were hoping for with the British Museum linkup.
I would like to run a videoconference using something like Drahtwerk's iWebcamera which allows the webcam to be mobile but without being under the control of the calling party. This would help to figure out what it is about this type of videoconference that makes it more engaging - is it the mobility of the camera or the control of the camera by the calling party? If the control aspect is not significant to the user's experience, then the activity becomes much more technically manageable and achievable. Taking the robot out of the loop would remove countless technical hurdles, but would it also remove the fun?
I think that once network speeds catch up, this concept has enormous potential for learning. Serendipitously, the day after filming one of the children suggesting we send the robot up the North Eiger, I got a tweet from The Guardian reporting that Nokia have just installed a 3G network around Mount Everest. Virtual field trip to Everest, anyone?

Thursday 15 April 2010

Digital Media and Learning Competition

After a tipoff from FutureLab's Flux blog, I decided to enter this year's Digital Media and Learning Competition with a version of the epic cross-curricular Mars Mission based project I blogged about a while back. I called it Mars Missioneers and shoehorned as much detail as I could into the meagre 300 word limit on the application.
The judges obviously saw the potential of it and recently chose it as one of 50 finalists from the many hundreds of submissions, amongst such big players as FutureLab themselves and Mitchel Resnick of MIT Scratch fame! After I had picked myself up off the floor, I set about the (surprisingly mammoth) effort of creating a three minute video describing the concept behind the project. That effort today came to fruition! Here is the finished product:



I used a number of free tools in the creation of the video - Blender for the title sequence, Audacity for the audio recording and mixing, Prezi for the panning and zooming graphics portions and the ingenious xtranormal for the animated narrator sequences. I discovered that they allow you to render the character on a green screen, which means you can composite the footage onto whatever background you desire. Combine that with iMovie '09 and you have the tools to make something that looks halfway decent for very little investment (other than the time involved!). If you would like to comment on the project, I would appreciate it enormously if you could take the time to register at the DMLC site and leave a comment on the project page. Comments close on the 22nd April, so time is rather short!

Friday 30 October 2009

Adventures in OpenSim

I have been playing about with OpenSim for some time now, initially with the intention of using it for teacher CPD, but I have been bowled over by the educational possibilities for students in the process.
My recent experiments have included an educational resource to teach local history about the cholera epidemic through the reconstruction of an abandoned mill which was used as a cholera hospital in the 19th century. Students are presented with the building and an inventory of items which they must place in the rooms to recreate how they think the hospital would have looked. From this they can deduce what measures were used to prevent the spread of the disease in such infirmaries.
The picture on the left shows the upstairs bedroom before the task begins, with no items yet placed.


This activity was previously done on paper, with students creating sketches of how they thought the hospital would have looked. The advantage here is that they get a physical sense of what it would have felt like to be there, amongst the paraphernalia associated with cholera victims. It also allows a new degree of collaboration, since the objects are shared and movable by any member of a group of students. Once complete, it can act as a physical tour for other students, who can roam around the scene at will. This version of the activity lends itself to cross-curricular working, whereby the recreated scene could act as inspiration for writing or the set of a machinima directed, 'acted' and shot by students.

I included in the environment a 'reflection area' just outside the building where the processes undergone by students in the completion of the task can be discussed. I intend to add tools for the recording of thought processes such as Salahzar Stenvaag's Wiki3D scripts which allow the building of rich 3D wikis and mind maps in-world.





The other experiment I have been cooking up struck me when I discovered the Environment Editor in Windlight-enabled Second Life/OpenSim viewers. I have been considering for some time a possible epic cross-curricular activity based around the idea of a mission to Mars, and recently came across the NASA MOLA Mars terrain data and the freeware 3DEM and Terragen, which enable you to take real terrain data and import it into OpenSim. So I did a quick test to see whether a realistic Martian environment could be created from real data. Below is the result.
Now from what I know of the Martian environment, this looks very convincing and has the potential to give students an idea of what it would really be like to stand on the surface of Mars. What is more, they can collaboratively build a settlement together!
Combine this with other free tools and you have the bones of a completely immersive, cross-curricular, epic learning experience. Google Mars could be used by students to select an appropriate landing site based on criteria decided upon with the aid of an expert from NASA brought in by videoconference to the school. The site could be recreated in OpenSim and used to build a settlement. The settlement could form the set for a machinima which determines the progress of the mission, putting the students in charge of the direction the experience takes. Using the MIT Outdoor AR Toolkit you could overlay the Martian environment on a region of the school grounds, placing clues at strategic points as simulated scientific 'finds', similar to FutureLab's Savannah. I have been playing with the idea of doing remote field trips using a Rovio mobile webcam over Skype which is controlled by the students in school. This could be used as a way of experiencing what it is like to explore using robots, and how much more challenging that is than exploring in person. All this could be glued together with the school's learning platform as a forum for discussion across teams, sharing of created resources and inter-team communication. I shall blog further as this idea develops.
If anyone is interested in recreating the scene above in their own OpenSim, it uses a 2x2 megaregion and I will provide you with the terrain files and Environment Editor presets if you contact me or leave your details in a comment.

Monday 7 September 2009

Augmented Reality

Just reading Ollie Bray's musings about Augmented Reality and its emergence as a gaming technology. I've been working on this over the summer, looking at ways it could be harnessed for educational purposes and discovering some interesting history - such as the fact that the BBC released a set of AR story books as part of their Jam programme, which has now disappeared off the face of the digital Earth (anyone who's still got a copy PLEASE get in touch!!).
I have actually been surprised at how easy it is to develop with AR and how many different software frameworks there are out there for building resources. One particularly interesting discovery was the integration of the ARTag library with vvvv, a visual programming language. With no previous knowledge, I was quite quickly (within 1-2 hours) able to put together a 'patch' (the vvvv equivalent of a program) which took a camera input, detected an AR tag within the video stream and reported its 3D orientation in real time.
ARTag itself has numerous applications as a standalone tool just using the demo apps - I showed it to some colleagues in the office (set it up in 15 mins flat), which elicited the comment "It's like magic!", and it really is magical to see in action. A search on YouTube will yield numerous demos of AR applications, such as this one from Georgia Tech:

Don't be misled by the use of beefed-up mobile hardware to make this demo possible either. There are at least two software libraries available which allow the implementation of AR applications on standard mobile phones, namely Studierstube Tracker and d-touch. AR has definitely arrived. All that remains is for us to make use of it!

Tuesday 2 June 2009

Microsoft Visioning

I thought I would share a video recently posted on Microsoft's site detailing their vision for the future of education (plus other stuff). Unsurprisingly, interactive surfaces feature heavily as well as videoconferencing, real-time translation and other technologies to do with inter-device communication.

Productivity Future Vision

I was surprised that gesture interfaces did not feature, considering the recent announcement about Project Natal, their gesture/voice/image hybrid interface for the Xbox 360. I can think of 10 educational applications of that right now...

Friday 1 May 2009

10 tools in 10 minutes

Just thought I'd share a short presentation I did on web-based tools for learning recently.