The Open Education Conference 2010 is almost here, and I'll have the opportunity to present some of the work I've done over the last year with the first open online courses offered in Colombia. I talked a bit about the proposal that was accepted for presentation here.
Here is a draft of the paper submitted for publication, so please keep in mind that there's a lot of room for improvement, and it's quite likely some errors are still lingering around. As usual, if you have suggestions or comments, I'll be quite happy to hear them.
For a downloadable PDF version, please click here.
A couple of months ago, I sent a proposal to the Open Education Conference 2010, which was accepted for presentation. Now I'm working on a paper expanding these ideas, based on the results I've gotten so far. I finally found time to put this online (should have done it weeks ago), so here's the text I submitted. Please note that the text was written in May, so there have been a lot of changes since then, especially after the beginning of DocTIC:
Open online courses in Colombia: Lessons from an educational and technological experiment
An examination of lessons and implications from the first open online courses offered in Colombia, based on free, replicable technology.
In September 2009, in line with experiences described by Fini et al. (2008), Fini (2009) and Wiley & Hilton (2009), the first Colombian Open Online Course was launched as a local educational and technological experiment. The course, concerned with the exploration of the present and future of e-Learning in Colombia (ELRN), was offered as part of the master's program in Educational Informatics at Universidad de la Sabana, including students in both tuition-paying and open modalities. Two more courses were offered, based on the instructional ideas and technological infrastructure used in this first experience: one by Universidad EAFIT called Groups, Networks and Communities, and a new offer of ELRN at Universidad de la Sabana in 2010.
These courses, among the first of their kind offered in Spanish, were based on the use of blogs for reflective writing, and were supported by a technological infrastructure designed to be free, replicable, public and as simple as possible. This mattered because the use of blogs and other social software tools in education is still incipient in Colombia: e-mail is still the most widely used communication tool, technologies such as RSS are unknown to most people, and not every teacher has access to an LMS installation. There was also the challenge of providing a common ‘course’ experience while allowing participants to keep control of their own information during and after it.
A basic mash-up, which could be adapted and reused in new courses, aggregated and redistributed information (using Google Docs, Yahoo Pipes and Google FeedBurner). Over eight weeks, participants published their reflections, opinions and findings on their personal blogs, which were compiled into a single RSS feed or an e-mail subscription. Participants were also asked to save their own resources using social bookmarking tools, and to keep track of their learning tasks using wikis.
The technology used has evolved to make its use easier, now including a set of parameterized Yahoo Pipes, which compile both posts and comments related to the course across several online platforms, making it easier to track the distributed conversation (an issue in open online courses). The conversation generated has also been compiled into social graphs that help both teachers and students see the evolution of participation in the course.
The experience exposed local difficulties tied to this kind of educational experience, such as an unexpectedly low skill level among most participants in the use of some apparently basic tools, as well as qualms about creating a personal, public writing space. At the same time, participants reported their satisfaction with the exercise and described how challenging the experience of an open course was, its richness in terms of learning, and the value of having open, non-LMS courses available.
In total, 128 participants (34 tuition-paying / 94 non-credit) registered in the courses offered, with a completion rate of 30% (90% tuition-paying / 11% non-credit). This experience is intended to open new local discussions about the possibilities and challenges of open education when going beyond the mere provision of OER.
- Fini, A. (2009). The Technological Dimension of a Massive Open Online Course: The Case of the CCK08 Course Tools. The International Review of Research in Open and Distance Learning, 10(5). ISSN: 1492-3831. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/643
- Fini, A., Formiconi, A., Giorni, A., Pirruccello, N., Spadavecchia, E., & Zibordi, E. (2008). IntroOpenEd 2007: An experience on Open Education by a virtual community of teachers. Journal of e-Learning and Knowledge Society, 4(1), 231-239. Retrieved from http://www.je-lks.it/en/08_01/11Apfini_en.pdf
- Wiley, D., & Hilton III, J. (2009). Openness, Dynamic Specialization, and the Disaggregated Future of Higher Education. The International Review of Research in Open and Distance Learning, 10(5). ISSN: 1492-3831. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/768/1414
A few months ago, taking as a starting point the presentation I did at OpenEd'09, I started to write an account of the work I did from 2007 to 2009 with the EduCamp workshops. It took longer than expected, but I finally have a full draft (a release candidate, let's say) in English, which complements the chapter I wrote last year for this book, edited by Alejandro Piscitelli and published by Espacio Fundación Telefónica of Argentina.
I'm deeply grateful to Stephen Downes, Scott Leslie and Linda Ashworth, who were extremely kind and took (a lot of) time to read the initial draft. I learned a lot from their suggestions, which definitely improved this version. It's important to say, though, that any error lingering in the document is my responsibility, not theirs.
So far, the document doesn't include any info about the workshop I did with secondary school students a few months ago, so it's an account of things that happened in 2007-2009. I'll have to find an opportunity to talk a bit more about it.
If you distribute this draft, please keep in mind that it is exactly that, a draft, and that there's a lot of room for improvement. Of course, if you have suggestions and comments, I'll be quite happy to hear them.
UPDATE (2010/10/13): I uploaded a second draft of this document, which includes additional information and reads a little better. The title was changed to better reflect the content.
UPDATE (2011/03): The final version of this document was published in Vol 12, No 3 of the International Review of Research in Open and Distance Learning.
Well, this "thinking out loud" thing is something I'm not really used to (I'm trying to learn), so please bear with me. I'm not even sure if I'll be able to say what I'm thinking, for that matter... That said...
Last year I offered my first open course ever, called e-Learning (ELRN). It was supposed to be an exploration of the present and future (at a local level) of technology in education. I decided I wanted it to be open, and also that I wouldn't use an LMS. I wanted to do something similar to what Stephen and George did with CCK08, but I didn't have the infrastructure to make that happen.
Following David, Alec, George and Stephen, I chose a wiki as the platform to publish the weekly activities of the course, and blogs as the main reflective tool for students. The decision didn't have to do with the technology itself, but with the reflective processes it allows, based on my own experience. Blogs would be the publishing platform of choice. The discussion wouldn't happen in centralized discussion fora, but on each participant's blog. It would be decentralized.
Also, given the characteristics of my own context (Colombia), where there are not that many education blogs online and most of us are not power users, I wanted to make access to the information produced in the course as easy as possible. Participants would be able to get information by e-mail and, for those who felt comfortable with it, by RSS. Participating in the course would not require logging in to a specific platform to access content, and sending and receiving information by e-mail should be a real possibility.
So all these intentions led me to find ways to collect all the info generated in the blogs, and distribute it by e-mail. That led me to work a bit with Pipes, something I hadn't done before. The first product of that work looked like this, and I talked about it in another post:
At the end of that post, I noticed some limitations that I was still trying to figure out:
- How do I analyze all the data coming out of the course? If I wanted to see the progress/evolution of different participants, what kind of tools should I use? Is it possible to do it with the pipe I have now?
- Feedburner is not flexible enough with the mail subscriptions. I'd like every participant to decide whether she gets a daily or real-time notification.
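Since Pipes is a visual tool, here's a rough stand-in in Python (standard library only) for what that first pipe does: merge every participant's blog feed into a single course feed, newest first. The feed contents and URLs below are invented examples; a real run would fetch each participant's feed over HTTP instead of using inline strings.

```python
# Sketch of the aggregation step: several blog RSS feeds -> one course feed.
# Sample feeds are hypothetical; in practice they'd be fetched from each blog.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEED_A = """<rss version="2.0"><channel><title>Blog A</title>
<item><title>Week 1 reflection</title><link>http://a.example/1</link>
<pubDate>Mon, 07 Sep 2009 10:00:00 +0000</pubDate></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Blog B</title>
<item><title>My first post</title><link>http://b.example/1</link>
<pubDate>Tue, 08 Sep 2009 09:30:00 +0000</pubDate></item>
</channel></rss>"""

def items(feed_xml):
    """Yield (date, title, link, source_blog) for every item in a feed."""
    root = ET.fromstring(feed_xml)
    blog = root.findtext("channel/title")
    for item in root.iter("item"):
        yield (parsedate_to_datetime(item.findtext("pubDate")),
               item.findtext("title"), item.findtext("link"), blog)

def merge(feeds):
    """Combine all items into one list, newest first (the 'course feed')."""
    merged = [entry for feed in feeds for entry in items(feed)]
    return sorted(merged, key=lambda e: e[0], reverse=True)

course_feed = merge([FEED_A, FEED_B])
for date, title, link, blog in course_feed:
    print(f"{date:%Y-%m-%d} [{blog}] {title} -> {link}")
```

The merged list is what Feedburner would then redistribute by e-mail or RSS.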
As I went through ELRN, something else proved difficult: How could I (as a facilitator) keep tabs on the comments of formal students? Monitoring every comment in every post was, clearly, a daunting task...
At the end of the course I tried to generate, by hand, some sort of analysis of what happened, including post frequency and the blog comment network generated along the course, which was possible thanks to the small number of participants. That network looked like this:
So I realized that such a graph could be very useful for monitoring participation in the course, and for detecting (and maybe doing something about) people who were not taking part in the conversation. It could be used as a "lurker detector", so to speak. Nevertheless, I told myself, for it to be useful you would have to get this kind of info not at the end of the course, but along the road. And doing that by hand would be very time-consuming. And that leads me, finally, to this post.
So, what if you could generate such a graph (described as a GraphML file) from an RSS feed? How could you do that?
Last year I started another open course about groups, networks and communities (called GRYC in Spanish), which for several reasons was postponed for this year. That gave me some time to think about the problem (but not to solve it). Here's where I am right now:
If we go back to the first diagram, we see that I have a pipe compiling feeds from different blogs. So I wondered how I could get comments from those blogs, given that many people were using an already existing blog for the course and tagging their posts, and many different platforms were being used at the same time, each one with its own approach to comments (WP doesn't have specific feeds for comments in a category, while Blogger does; some people use Feedburner for their feeds, which makes it impossible to discover a comment feed; and so on).
What I did was create a new pipe (the second one in the sequence), which takes the first one as input, extracts the comment feed associated with each post, and then gets the items included in each of those feeds. I'm also getting (using YQL) Twitter messages and comments made in the wiki. Everything is put together, and at the end I have a feed indicating who said what, and where it was said (blogs, Twitter, wiki). It's quite easy to extend this to include other sources (Google Groups, Moodle fora, or anything that can be imported into Pipes). Now, maybe there's a simpler way to do this but, well, I'm still learning to use Pipes (for example, can all this be parameterized?).
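A hedged sketch of the second pipe's core step, outside Pipes: WordPress-style feeds advertise the per-post comment feed in a `wfw:commentRss` element, so discovery amounts to reading that element per item; when it's missing (as with some of the Feedburner feeds mentioned above) there's nothing to discover. The sample XML is invented for illustration.

```python
# Comment-feed discovery per post, via the wfw:commentRss element.
# The aggregated feed below is a made-up example.
import xml.etree.ElementTree as ET

WFW = "{http://wellformedweb.org/CommentAPI/}"

SAMPLE = """<rss version="2.0" xmlns:wfw="http://wellformedweb.org/CommentAPI/">
<channel><title>Aggregated course feed</title>
<item><title>Post with comments</title>
  <wfw:commentRss>http://a.example/1/feed</wfw:commentRss></item>
<item><title>Post without a discoverable comment feed</title></item>
</channel></rss>"""

def comment_feeds(feed_xml):
    """Return (post title, comment-feed URL) pairs; URL is None if undiscoverable."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext(f"{WFW}commentRss"))
            for item in root.iter("item")]

for title, url in comment_feeds(SAMPLE):
    print(title, "->", url)
```

Each discovered URL would then be fetched in turn to pull in the actual comment items, as the second pipe does.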
There is a "dark side" to this. I still have to do a lot of "maintenance" on the original data, and I have to handle specific cases that make the pipes not as simple as I'd like them to be. Say someone is using Disqus for comments, or someone else uses different display names for posts and comments on different platforms. Some comment feeds are not being discovered by Pipes... It's not nice. It's not 'clean'...
My basic graph, as shown above, includes people as nodes and comments as edges. So I still need to know who said what to whom. I could do this in the second pipe, but I don't want to add any more complexity (that is, processing time) to it. So, in a third pipe, I get results from the second pipe and process each item, and at the end I have what I need: each item's title carries the source and target of the comment (that is, their names) and the date. Each item also still contains the link to the comment.
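Consuming the third pipe's output then becomes trivial. The exact title format is my own assumption for this sketch ("source|target|date", with invented names); whatever delimiter the pipe actually uses, the idea is the same: each title splits into a directed edge.

```python
# Turning the third pipe's item titles into graph edges.
# The "source|target|YYYY-MM-DD" title layout and the names are hypothetical.
titles = [
    "Ana|Carlos|2009-09-14",
    "Carlos|Ana|2009-09-15",
    "Ana|Diego|2009-09-15",
]

def to_edges(item_titles):
    """Turn pipe-delimited titles into (source, target, date) edge tuples."""
    return [tuple(t.split("|")) for t in item_titles]

edges = to_edges(titles)
nodes = sorted({name for s, t, _ in edges for name in (s, t)})
print(nodes)   # the people in the graph
print(edges)   # the comments, as directed edges
```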
But, where do I go from here? Some ideas and issues related to the first (blue) bubble:
- I could write some PHP code to get the RSS feed, process each item, and generate a GraphML text file on the fly. Of course, it could be Java as well. After all, a framework such as Jung includes a lot of utilities now, even for generating the visual representation of a graph... But I'm not sure if it processes RSS...
- Um, but what happens when I do the same process the next day? Would it make sense, then, to put all the RSS info inside a DB, so I can have even more flexibility later?
- Maybe having things in a DB will let me include more info in the graph. Let's say, what if the size of the nodes is tied to the amount of posts generated? That could be useful too...
- In the end, having a DB will allow me to split tasks. One task is to add info from the feed (a daily process, let's say); another is to generate a GraphML file from that info. This could be done on demand, and cached for later use.
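For the "blue bubble" itself, here's a minimal sketch (in Python rather than the PHP or Java considered above) of going from an edge list to a GraphML document: one node per person, one directed edge per comment. Names and edges are invented; only the GraphML skeleton matters.

```python
# Edge list -> GraphML, the last step of the "blue bubble".
# Names and edges are hypothetical examples.
import xml.etree.ElementTree as ET

edges = [("Ana", "Carlos"), ("Carlos", "Ana"), ("Ana", "Diego")]

def to_graphml(edge_list):
    """Build a GraphML document with one node per person, one edge per comment."""
    NS = "http://graphml.graphdrawing.org/xmlns"
    root = ET.Element("graphml", xmlns=NS)
    graph = ET.SubElement(root, "graph", edgedefault="directed")
    for name in sorted({n for edge in edge_list for n in edge}):
        ET.SubElement(graph, "node", id=name)
    for i, (src, tgt) in enumerate(edge_list):
        ET.SubElement(graph, "edge", id=f"e{i}", source=src, target=tgt)
    return ET.tostring(root, encoding="unicode")

doc = to_graphml(edges)
print(doc)
```

Regenerating this file from the DB on demand, as suggested above, would just mean re-running `to_graphml` over whatever edges have accumulated so far.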
But what about the second (green) bubble? The idea here is to go from the text file to a useful visualization. We could even think about (automatically) creating animations showing the daily progress of a course, discovering when new nodes (that is, new people posting) appear, and showing new edges as they emerge.
- The weapon of choice would be Jung, clearly (I still don't know if something like Cyclone has anything to do with this problem). With that we can get from GraphML to images, I think. Now, if we want to create PNG animations, well, I still have no idea how to do that.
- In any case, I'd have to go back to Java (long time no see!) and learn a lot about a lot of things. And time seems to be a quite scarce resource...
So, where does that leave us? You get to extract info from the pipes in "real time" and generate GraphML files from it (or whatever you want) to show the status of the graph at any given time. This could make it easy to see who's being left behind in the community (because they're not writing or commenting, for example), which would help in massive courses. Actually, you could even send specific e-mail messages based on the configuration of the graph (nice!).
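The "lurker detector" idea is easy to sketch once the edge list exists: anyone registered in the course with no outgoing edges (no posts, no comments) surfaces immediately as a candidate for one of those targeted e-mails. Participant names here are invented examples.

```python
# Lurker detection over the comment graph: registered people with no
# outgoing edges haven't written or commented. Names are hypothetical.
participants = {"Ana", "Carlos", "Diego", "Elena"}
edges = [("Ana", "Carlos"), ("Carlos", "Ana"), ("Ana", "Diego")]

def lurkers(registered, edge_list):
    """Return participants with no outgoing edges, i.e. no authored activity."""
    active = {source for source, _ in edge_list}
    return sorted(registered - active)

print(lurkers(participants, edges))  # candidates for a nudge e-mail
```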
And, where do we go from here? Well, what if you applied the logic of an open course to a whole academic program? What if participating in a course meant just tagging your blog posts with a code for that course? What if we aggregated everything in something such as Elgg, and kept using pipes to distribute content around (as we see fit)? Would that look like a more decentralized approach to an LMS? With new monitoring tools, more focused on interactions? With students having more control over their info?
I just don't know. What I do know is that this approach, as much as I like it, is not scalable. And if we want to get more serious about being open, we will eventually need to provide alternative solutions that are easy for administrators, teachers and students to use, along with analytical tools focused on the kinds of things we'd like to observe and foster in our students.
Anyway, keep in mind that I'm thinking out loud here. This is the second time I'm trying this architecture, so there are a lot of things to improve and many others that make no sense at all. I'm just trying to figure out if it makes sense to work more on this. So thanks in advance for your thoughts!
12/31/09, 09:19:23 am
Filed under Thoughts
This is the third year I'm sending out a holiday message. This time is very special, though, because 2009 was a turning point in many ways. I got the chance to meet amazing people from all around the world, and to enjoy their hospitality, which in itself was a humbling experience.
So I want to say "Thank you" to all of you, for all the learning you've made possible, and for helping me make so many personal and professional dreams come true.
May 2010 be a time to ask ourselves what would happen if we're wrong, and to rethink (from there) the things we do, so we can change our world in a positive way.
As usual, this is a small "homemade" message. A short video, including some quotes and ideas important to me:
Thank you everyone for making 2009 a year even more memorable than the previous one! Here's to a 2010 full of great surprises for each and every one of you!