Having participated in the Tweetorial session during Week 9, here are some of my reflections on the process:
Q: ‘How has the Tweet Archivist service represented our Tweetorial? (Use the drop-down arrows on the menu items on the right of the page: ‘Top users’, ‘Top words’, ‘Top URLs’, ‘Source of Tweet’ etc.)’
Tweet Archivist presents a series of numeric, textual and visual summaries of the Twitter activity. On the top bar is a series of overall summaries: total tweets (n=325); ‘impressions’ (not sure what this refers to!); the date range of the archive (3rd-16th March 2015); and whether the archive is active/inactive.
On the right-hand side of the page is a series of representations: a pie chart of ‘top users’; a word cloud of ‘top words’; a list of top URL links; a pie chart depicting the source of the tweets; a pie chart depicting the languages used in tweets; a line graph depicting spikes in activity; a word cloud of Twitter mentions and hashtags; a link to images; and finally (and interestingly!) an ‘influencer index’.
Following Williamson (2014), these summaries are a selection from all possible summaries that could have been produced. Additional summaries could, for example, have included the timing of tweets or the number of replies to tweets. Another pragmatic choice is the medium in which these data are represented, e.g. word clouds and pie charts rather than bar charts. The statistics are purely descriptive, i.e. no analytical statistics are presented (e.g. associations between top users and top words). The designer of this algorithm has therefore had to make a series of pragmatic choices, yet the rationale for these decisions is not visible on the website (so the reader cannot evaluate the quality of the data presented).
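To make the point concrete, here is a minimal sketch (not Tweet Archivist's actual code, and the tweet data are invented) of how a ‘top users’ or ‘top words’ summary might be computed. Even this tiny example embeds pragmatic choices that are invisible in the final chart: the top-N cutoff, the stop-word list, and case folding.

```python
from collections import Counter

# Assumed, minimal stop-word list -- a pragmatic choice in itself.
STOP_WORDS = {"the", "a", "to", "as", "and", "of"}

def top_users(tweets, n=5):
    """Count tweets per user and keep only the n most frequent."""
    return Counter(t["user"] for t in tweets).most_common(n)

def top_words(tweets, n=5):
    """Lower-case, split on whitespace, drop stop words, keep the top n."""
    words = (
        w
        for t in tweets
        for w in t["text"].lower().split()
        if w not in STOP_WORDS
    )
    return Counter(words).most_common(n)

# Invented example data, purely for illustration.
tweets = [
    {"user": "alice", "text": "Analytics shape what we see"},
    {"user": "bob", "text": "Analytics are constructed"},
    {"user": "alice", "text": "Word clouds hide much"},
]
print(top_users(tweets, 2))  # alice has two tweets, bob one
print(top_words(tweets, 1))  # "analytics" appears twice
```

Change the stop-word list or the cutoff `n` and a different ‘summary’ of the same activity emerges, which is exactly the kind of hidden decision-making Williamson (2014) draws attention to.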
Q: ‘What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?’
I would take issue with the notion of ‘accuracy’, as this implies there is a ‘right’ and a ‘wrong’ way to present these data. From an ethnostatistics perspective (Gephart, 2006), these data should not be taken as a series of truth statements but, instead, as artefacts produced through a process of meaning-making, by which the tweetorial and its impact on learning are socially constructed.
These data do, however, highlight some interesting findings. Top users, for example, were primarily male, whereas the less active participants were predominantly female. Does this point to the potential for Twitter to reinforce gendered divisions in learning? Twitter in general, and Twitter storms in particular, have a strong association with discussing controversy and with thesis-versus-antithesis style learning (in the Hegelian sense), rather than co-operative inquiry. My impression of the tweetorial was that it was based on challenging users' statements and beliefs, rather than building a shared consensus (e.g. through a Delphi-style process). In this respect, the archive data lend support to my initial impressions.
Q: ‘What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?’
As above, I think the key to the educational value of such analytics lies in the approach taken to their interpretation. If educators adopt a quasi-positivist perspective, for example, and treat learning analytics as objective ‘facts’, then their value will be limited. This is because treating analytic data as real ‘things’ ignores the role of pragmatic decision-making (e.g. by programmers and software designers) in how data are mined, interpreted and represented. If, however, learning analytics are treated as social constructions, then they do have a useful role to play in highlighting patterns of social interaction, through which (following a networked learning perspective) learning takes place. Facet methodology, as outlined by Mason (2011), I think offers an ideal approach to interpreting learning analytics. This approach treats data and data sources (such as Twitter archives) as facets (akin to the sides of a cut gemstone), each of which illuminates certain aspects of a social phenomenon whilst simultaneously obscuring others. Thus, from this perspective, learning analytics represent one of many possible facets through which to understand the learning process.
- Gephart, R. (2006) Ethnostatistics and organizational research methodologies. Organizational Research Methods 9(4): 417-431.
- Mason, J. (2011) Facet methodology: the case for an inventive research orientation. Methodological Innovations Online 6(3): 75-92.
- Williamson, B. (2014) Calculating Academics: theorising the algorithmic organization of the digital university.