Tagged algorithms

Week 8 summary: algorithms

(Note I am posting this summary to belatedly capture my engagement with the course during the period. Because of work pressures, I was unable to publish this on time. This post is based on a review of the course calendar and my Twitter activity.)

When the introductory post to Block 3 mentioned how the presence of algorithms is acknowledged only when they produce unintended results (as in the case of Kim Jong-un’s mistaken photo), my first reaction was to look for a list of face-recognition fails I had previously seen on the web. Although teaching computers to recognise images is a rapidly evolving field, its current state, especially as implemented in consumer equipment such as cameras, can lead to hilarious results. The list of funny face-recognition fails was my first tweet for the block.

We are surrounded by algorithms and it was easy to find other examples. During the week, I also tweeted about how the Siri algorithm on iPhones works. As with other examples, how the algorithm works is rarely known because it is purposely kept hidden by commercial interests, and perhaps because its technological sophistication can be understood only by specialists. Either way, it is ironic that we know so little about a device we are intimate with (we carry it around at all times, we keep it close). It is, in a sense, like sleeping with the enemy.

Algorithms in retail shopping show how invasive algorithms can be. I tweeted a New York Times video on how algorithms can determine that female shoppers are pregnant based on changes in their shopping patterns, and how algorithms recommend related products accordingly. This uncanny accuracy makes shoppers uncomfortable, so retailers changed their strategy by purposely including seemingly random items in the mix: a tablecloth with diapers, a familiar jam with infant formula. The video is a good example of the privacy concerns raised by algorithms.
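The strategy described in the video can be sketched in a few lines: dilute the targeted recommendations with unrelated filler items so the targeting is less obvious. This is only a hypothetical sketch; the function and product names are my own invention, not the retailer’s actual system.

```python
import random

def soften_recommendations(targeted, fillers, n_fillers=2, seed=0):
    """Mix a few unrelated filler products into a targeted list so the
    recommendations feel less obviously targeted (hypothetical sketch)."""
    rng = random.Random(seed)
    mixed = targeted + rng.sample(fillers, n_fillers)
    rng.shuffle(mixed)
    return mixed

targeted = ["diapers", "infant formula", "baby wipes"]
fillers = ["tablecloth", "jam", "lawn mower", "wine glasses"]
print(soften_recommendations(targeted, fillers))
```

The shopper still sees the targeted products, but the shuffled fillers make the list look like a coincidence rather than an inference.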

Other concerns include the reliability of data sources, disclosure, and the ongoing maintenance (not just development) of the algorithm. These issues form part of the ethical guidelines for news articles written by algorithms, another link I tweeted during the block.

Icons for algorithms

What icon symbolises an algorithm? Unlike cyborgs, algorithms are difficult to illustrate or visualise because they are defined not by what they are, but by what they do, and by how they do it away from public view.

The poster below is an attempt to find a visual equivalent and a mnemonic aid for these hidden influences as enumerated by Gillespie (2014).


Critical look on the tweet archive

This post is a reaction to the visualisations of the course activity on Twitter, as created through tools such as Tweet Archivist and Keyhole, and a critical look at how the tools represent the course activity.

I think the visualizations are helpful because they take advantage of people’s ability to process visual images and because they can uncover patterns in the tweets, a considerable benefit given the volume of data and the tedium of processing it (Burkhard, 2004).

That said, what first struck me was how the visualizations were presented without any introduction, as if they were self-explanatory, like photographs meant to stand on their own. This relates to what Borer and Lawn (2013) said about how numbers and data have been instituted to make claims about objective ways of seeing reality. The centrality of the numbers and the visualizations contrasts with how images are used, for example, in journalism, where text accompanies or works together with images.

Putting the data visualizations at center stage hides the social negotiations, the subjective processes behind this purportedly objective view (Gillespie, 2012). Also, given the different types of data that can be captured (note that Tweet Archivist and Keyhole display different data points) and the different ways they are represented, the lack of text about how the data was captured and represented, and what it might mean, becomes a dangerous and misleading omission.

For example, the two tools show different results for top hashtags and top users, even when dates are accounted for. Furthermore, both seem to calculate impact through proprietary, unexplained algorithms, called Influencer Index (Tweet Archivist) and Klout (Keyhole), yet they show different results. The difference, then, is hidden somewhere in the algorithms.
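Part of the discrepancy may simply come down to definitions. Even without proprietary scoring, two perfectly reasonable ways of counting can crown different “top users” from the same archive. A toy illustration (the tweet data is invented):

```python
from collections import Counter

# Invented mini-archive: (author, accounts mentioned in that tweet)
tweets = [
    ("ana", ["ben"]), ("ana", ["ben"]), ("ana", []),
    ("ben", ["cai"]), ("cai", ["ben"]),
]

# Definition 1: top user = whoever tweeted the most
by_volume = Counter(author for author, _ in tweets)
# Definition 2: top user = whoever was mentioned the most
by_mentions = Counter(m for _, mentions in tweets for m in mentions)

print(by_volume.most_common(1))    # [('ana', 3)]
print(by_mentions.most_common(1))  # [('ben', 3)]
```

Both answers are defensible, which is exactly why a visualization shown without any note on its counting method invites misreading.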

Extrapolating from Borer and Lawn (2013), the visualizations not only make claims about objectivity, they also legitimise specific perspectives over others. Relating this to learning, the data visualizations could be misleading because they could influence how learning is defined: is learning a matter of frequency of participation, volume of resources shared, number of people reached? In other words, is learning quantifiable?

I certainly do not think so because the visualisations, in their current state, do not account for context and meaning. The question in my mind, however, is whether this is all a matter of technology. If, in the future, the visualisation tools could not only count but also semantically understand the Twitter discussions, would their value be different? Would a data interpretation by a human be different from that of a machine?


Borer, V. L., & Lawn, M. (2013). Governing Education Systems by Shaping Data: from the past to the present, from national to international perspectives. European Educational Research Journal, 12(1), 48-52.

Burkhard, R. A. (2004). Learning from architects: the difference between knowledge visualization and information visualization. In Information Visualisation, 2004. IV 2004. Proceedings. Eighth International Conference on (pp. 519-524). IEEE.

Gillespie, T. (2012). The Relevance of Algorithms. Forthcoming in Media Technologies, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press.


Playing with algorithms

Where can you find the world’s best pizza? What is the most popular curry recipe, or the best headache relief? The answer, according to Google, is: it depends. It depends on where you are.

This video shows how the Google search algorithm tracks geolocation and changes keyword suggestions, search results and displayed ads accordingly. Using VPN software, I pretended that I was in several different countries (Germany, the United Kingdom, Japan, the United Arab Emirates and Switzerland). The same search phrases yielded different results depending on location.
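One plausible mechanism behind this behaviour (my own assumption; Google does not disclose how its ranking works) is a locality bonus added to each result’s base relevance score. A minimal sketch with invented data:

```python
def rank_results(results, user_country):
    """Rank results, boosting pages that match the user's country.
    Hypothetical scoring: base relevance plus a flat locality bonus."""
    def score(r):
        return r["relevance"] + (1.0 if r["country"] == user_country else 0.0)
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "best-pizza-rome.example", "country": "IT", "relevance": 0.9},
    {"url": "best-pizza-tokyo.example", "country": "JP", "relevance": 0.8},
]
print([r["url"] for r in rank_results(results, "JP")])
# The Japanese page outranks the Italian one despite lower base relevance
```

The same query produces a different ordering for every country the VPN claims you are in, which matches what the video demonstrates.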

While the algorithm’s influence on recommendations about restaurants and recipes is relatively harmless, its influence on medical information seems to be heading into grey territory. Results from some countries are preceded by ads, pushing the actual information downwards. In fact, it is quite ironic that the ads for headache relief are followed by search results for homemade remedies.