Unfortunately I missed the tweetorial; however, as a non-participant, this gives me the chance to examine the findings and statistics from the outside. What can I learn from the statistics and graphs? Firstly, the tweetorial seems male-dominated: I doubt this was actually the case, but the first graphic places male participants at 90%. Geographically, the tweets came mainly from the UK and North America, with the larger concentration in the UK. Looking at Tweet Archivist, the key discussion focused on algorithms, as set out by the questions on the EDC site.
These mined topics fit nicely into the themes and questions set out on the EDC site.
The mined data links to the questions, but what do we actually learn from it? Can we say the session was successful in its aims or outcomes? We don't really get an idea from the summary alone; we would have to take a deeper look at the actual conversations. We can at least measure whether or not the participants stayed 'on topic' rather than focussing on something that wasn't the order of the day.
What about grading? When it comes to the most active users, should we reward them? And should 'most active' be judged purely on frequency of contribution?
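Measuring activity by raw frequency is trivially easy to automate, which is perhaps part of the worry. As a minimal sketch (the author names and tweets below are invented; a real analysis would work from an archive export such as Tweet Archivist's):

```python
from collections import Counter

# Hypothetical sample of (author, tweet) pairs standing in for an
# exported tweet archive.
tweets = [
    ("alice", "Algorithms shape what we see online."),
    ("bob",   "Do algorithms have politics?"),
    ("alice", "My search results differ from yours."),
    ("carol", "Filter bubbles again!"),
    ("alice", "Who audits the algorithm?"),
]

# Rank participants purely by how often they tweeted.
counts = Counter(author for author, _ in tweets)
print(counts.most_common(3))  # → [('alice', 3), ('bob', 1), ('carol', 1)]
```

The point the sketch makes is that this ranking says nothing about the quality or relevance of a contribution, only its volume.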
The information gathered is certainly interesting, yet I feel we need to look at it in a more detailed and methodical way. The algorithmic data gives us the broad brush strokes and an overview of what went on, but for a more detailed picture we would have to go to the raw data. These tools would be extremely useful in large MOOCs with 1000+ users, where tracking specific themes and topics by hand would be too huge a task to take on.