A critical look at the tweet archive

This post is a reaction to the visualizations of the course activity on Twitter created with tools such as Tweet Archivist and Keyhole, and a critical look at how those tools represent that activity.

I think the visualizations are helpful because they take advantage of people’s ability to process visual images and because they can uncover patterns in the tweets, a considerable benefit given the volume of data and the tedium of processing it (Burkhard, 2004).

That said, what first struck me was how the visualizations were presented without any introduction, as if they were self-explanatory, like photographs that stand on their own. This relates to what Borer and Lawn (2013) said about how numbers and data have been instituted to make claims about objective ways of seeing reality. The centrality of the numbers and the visualizations contrasts with how images are used in, for example, journalism, where text accompanies or works together with images.

Putting the data visualizations at center stage hides the social negotiations and subjective processes behind this purportedly objective view (Gillespie, 2012). Also, given the different types of data that can be captured (note that Tweet Archivist and Keyhole display different data points) and the different ways they are represented, the lack of text explaining how the data were captured and represented, and what they might mean, becomes a dangerous and misleading omission.

For example, the two tools show different results for top hashtags and top users, even when the dates are accounted for. Furthermore, both seem to calculate impact through proprietary, unexplained algorithms, called Influencer Index (Tweet Archivist) and Klout (Keyhole), yet they show different results. The difference, then, is hidden somewhere in the algorithms.
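To make this concrete, here is a minimal sketch of how even a simple "top hashtags" count can diverge between tools depending on hidden processing choices, such as whether retweets are counted or hashtags are case-folded. This is not either tool's actual method; the sample tweets, field names, and choices are assumptions purely for illustration.

```python
from collections import Counter

# Hypothetical sample of archived tweets; the fields are assumptions for illustration.
tweets = [
    {"text": "Exploring #MOOC pedagogy #edtech", "is_retweet": False},
    {"text": "RT @someone: Exploring #MOOC pedagogy #edtech", "is_retweet": True},
    {"text": "Great session today #mooc", "is_retweet": False},
]

def top_hashtags(tweets, include_retweets, case_fold):
    """Count hashtags under two of the many possible processing choices."""
    counts = Counter()
    for tweet in tweets:
        if not include_retweets and tweet["is_retweet"]:
            continue
        for word in tweet["text"].split():
            if word.startswith("#"):
                tag = word.lower() if case_fold else word
                counts[tag] += 1
    return counts.most_common()

# The same archive yields different "top hashtags" depending on choices
# that the tools do not document.
print(top_hashtags(tweets, include_retweets=True, case_fold=True))
# [('#mooc', 3), ('#edtech', 2)]
print(top_hashtags(tweets, include_retweets=False, case_fold=False))
# [('#MOOC', 1), ('#edtech', 1), ('#mooc', 1)]
```

If two counts of something as unambiguous as a hashtag can disagree like this, it is unsurprising that two proprietary impact scores, built on many more such undocumented choices, disagree as well.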

Extrapolating from Borer and Lawn (2013), the visualizations not only make claims to objectivity but also legitimise specific perspectives over others. Relating this to learning, the data visualizations could be misleading because they could influence how learning is defined: is learning a matter of frequency of participation, volume of resources shared, or number of people reached? In other words, is learning quantifiable?

I certainly do not think so, because the visualizations, in their current state, do not account for context and meaning. The question in my mind, however, is whether this is all a matter of technology. If, in the future, the visualization tools could not only count but also semantically understand the Twitter discussions, would their value be different? Would a data interpretation by a human be different from that of a machine?

References

Borer, V. L., & Lawn, M. (2013). Governing education systems by shaping data: From the past to the present, from national to international perspectives. European Educational Research Journal, 12(1), 48-52.

Burkhard, R. A. (2004). Learning from architects: The difference between knowledge visualization and information visualization. In Proceedings of the Eighth International Conference on Information Visualisation (IV 2004) (pp. 519-524). IEEE.

Gillespie, T. (2012). The relevance of algorithms. Forthcoming in T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media Technologies. Cambridge, MA: MIT Press.
