Lifestream summary

Several themes emerged as I tried to summarize and reflect on my participation in this course.

Learning in the open: blogging, tweeting and exchanging resources
Blogging anchored my learning throughout the course, giving me a structure and a schedule to follow. I was initially hesitant to post half-formed ideas publicly, but as the course progressed and I found myself returning to themes and topics I had written about earlier, blogging proved a good way to track my progress. The openness was most helpful when my classmates and I responded to each other's block assignments: I learned from how they interpreted the same assignment and worked with the same ideas.

Tweeting allowed us to exchange resources, but IFTTT, for my taste, cluttered rather than clarified my lifestream, so I am glad that categorising the posts struck a balance between capturing my engagement and showing my progress.

Multimodal opportunities
The opportunity to improve multimodal skills through practice was one of the main reasons I enrolled in this course. While the tutors encouraged us to submit digital artefacts for our block assignments, the richness of the course topics themselves provided plenty of visual inspiration, practically inviting multimodal play. This was especially true of the cyborgs and robots we discussed in Block 1 and the algorithms and analytics of Block 3. This richness allowed me to play with different formats: sketch notes, videos and data visualisations.

Although it came early in the course, I consider creating the video Danbo's day out one of my personal highlights: an example of not just illustrating an idea visually but also interpreting it. By making the video as interesting as I could, I was able to invite multiple interpretations from the audience. Combining this kind of appeal with an academic argument is one thing I am keen to explore in the final assignment for the course.

A critical attitude towards technology
Looking at cyber, community and algorithmic cultures gave me a different perspective on technology. In Block 1, I learned that we humans are not separate from our tools, and that humans and technology are mutually determining.

As an online trainer, I found it tempting to focus solely on the affordances of technology and the constant hype about what technology could offer. How could we use technology to make learning more effective and engaging? However, there is an alternative to this instrumentalist view. Technology is not just a set of tools but also the product of toolmaking; that is, it represents a particular set of human values, judgements, decisions and negotiations. This idea was introduced in Block 1 and exemplified for me in Block 3 during the discussion on algorithms and data visualisations. Commercial interests cloak the mechanics of algorithms in secrecy, and left unquestioned, algorithms could mislead. What kind of data gets captured, and how it is analysed and represented, is based on social processes. Prying into these hidden social processes leads to a more nuanced understanding of technology: it allows us to see where technology fails to meet our aspirations and values as educators, and where it leads to unintended consequences.

Week 8 summary: algorithms

(Note: I am posting this summary belatedly to capture my engagement with the course during this period. Because of work pressures, I was unable to publish it on time. This post is based on a review of the course calendar and my Twitter activity.)

When the introductory post for Block 3 mentioned how the presence of algorithms is acknowledged only when they produce unintended results (as in the case of Kim Jong-un's mistaken photo), my first reaction was to look for a list of face-recognition fails I had previously seen on the web. Although teaching computers to recognise images is a rapidly evolving field, its current state, especially as implemented in consumer equipment such as cameras, can lead to hilarious results. The list of funny face-recognition fails was my first tweet for the block.
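Part of what makes those fails so easy to produce is how sensitive common detectors are to their tuning parameters. As a reminder to myself, here is a minimal sketch using OpenCV's stock face detector (the image file name is a placeholder of my own):

    import cv2

    # Load OpenCV's bundled Haar-cascade face detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("group_photo.jpg")  # placeholder file name
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # scaleFactor and minNeighbors tune the detector's sensitivity;
    # loosen minNeighbors and clouds, doorknobs and patterned shirts
    # start being detected as faces, which is where many such fails come from.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    print(f"Detected {len(faces)} face(s)")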

We are surrounded by algorithms, and it was easy to find other examples. During the week, I also tweeted about how the Siri algorithm on iPhones works. As with other examples, how the algorithm works is rarely known, because it is purposely kept hidden by commercial interests and perhaps because its technological sophistication can be understood only by specialists. Either way, it is ironic that we are unaware of the workings of a device we are so intimate with (we carry it around at all times, we keep it close). It is, in a sense, like sleeping with the enemy.

Algorithms in retail shopping show how invasive algorithms can be. I tweeted a New York Times video on how algorithms can determine that shoppers are pregnant based on changes in their shopping patterns, and how they recommend related products accordingly. This uncanny accuracy made shoppers uncomfortable, so retailers changed their strategy and purposely included seemingly random items in the mix: a tablecloth with diapers, a familiar jam with infant formula. The video is a good example of the privacy concerns raised by how algorithms work.
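To make that camouflage strategy concrete for myself, here is a toy sketch; the product lists and blending ratio are my own invented placeholders, not the retailer's actual method:

    import random

    # Items the algorithm actually wants to promote,
    # based on the shopper's predicted pregnancy.
    targeted = ["diapers", "infant formula", "unscented lotion"]

    # Innocuous items added only to camouflage the targeting.
    camouflage = ["tablecloth", "jam", "lawn mower", "wine glasses"]

    def soften(targeted, camouflage, n_camouflage=2):
        """Blend targeted recommendations with random unrelated items
        so the result no longer looks uncannily well-informed."""
        mix = targeted + random.sample(camouflage, n_camouflage)
        random.shuffle(mix)
        return mix

    print(soften(targeted, camouflage))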

Other concerns include the reliability of data sources, disclosure, and the ongoing maintenance (not just development) of the algorithm. These issues form part of the ethical guidelines for news articles written by algorithms, another link I tweeted during the block.

Week 10 summary: wrapping up

The course is in its last few weeks, and there is a lot I have learned, and maybe even unlearned, about algorithms. I think it is fitting that in this last week we tried to review how algorithms and learning analytics influence educational concerns.

One of the week's highlights for me was the hangout session. I was struck by what Jeremy said about the exercise on algorithmic play. In many instances, our deliberate actions to influence what information the algorithms display did not make a substantial impact. Logging in and out, or clicking one video over another, made some difference but not a lot. Jeremy's point was that our individual impact on how the algorithms work is tiny, while the data sets on which algorithms are based are huge. The scale of Big Data therefore seems to me dehumanising. Despite claims about personalising information for individuals, algorithms cannot truly be customised for individuals; as Gillespie pointed out, algorithms can only approximate. This contrasts with my experience as a trainer, where I learned to care for learners both collectively and individually. This aspect of caring seems to be drowned out in the sea of data.
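A back-of-the-envelope calculation helped me see the scale; the numbers below are invented purely for illustration:

    # Suppose a recommendation algorithm learns from a billion events.
    total_events = 1_000_000_000

    # One determined user "playing" the algorithm contributes,
    # say, fifty deliberate clicks during the exercise.
    my_clicks = 50

    # The user's share of the signal the algorithm sees:
    share = my_clicks / total_events
    print(f"My share of the data: {share:.10f}")  # 0.0000000500

Any frequency the algorithm estimates shifts by at most that tiny share, which may be why logging in and out barely moved our results.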

This week I also posted my reaction to the Tweetorial visualisations. My main concern was that their veneer of objectivity (seen in the way the images and numbers were presented without introduction, as if they could stand alone) is misleading. The fact that the algorithms behind them are also deliberately hidden is dangerous: conclusions based on incomplete information can lead to wrong decisions.

I also created a poster based on the Gillespie article as a way of processing my understanding of the reading. Reviewing the icons now, I think I should have represented both subject and object, to move away from illustration and towards interpretation, a reminder as I work on my final assignment idea for the course.

Icons for algorithms

What icon symbolises an algorithm? Unlike cyborgs, algorithms are difficult to illustrate or visualise because they are defined not by what they are but by what they do, and by how they do it away from public view.

The poster below is an attempt to find a visual equivalent and a mnemonic aid for these hidden influences as enumerated by Gillespie (2014).

[Poster: algorithms_icons]

Critical look at the tweet archive

This post is a reaction to the visualisations of the course activity on Twitter, created with tools such as Tweet Archivist and Keyhole, and a critical look at how these tools represent that activity.

I think the visualisations are helpful because they take advantage of people's ability to process visual images and because they can uncover patterns in the tweets, a considerable benefit given the volume of data and the tedium of processing it (Burkhard, 2004).

That said, what first struck me was how the visualisations were presented without any introduction, as if they were self-explanatory, like photographs that could stand alone. This relates to what Borer and Lawn (2013) said about how numbers and data have been instituted to make claims about objective ways of seeing reality. The centrality of the numbers and visualisations contrasts with how images are used in, for example, journalism, where text accompanies or works together with images.

Putting the data visualisations at centre stage hides the social negotiations, the subjective processes, behind this purportedly objective view (Gillespie, 2014). Also, given the different types of data that can be captured (note that Tweet Archivist and Keyhole display different data points) and the different ways they are represented, the lack of text about how the data was captured and represented, and what it might mean, becomes a dangerous and misleading omission.

For example, the two tools show different results for top hashtags and top users, even when dates are accounted for. Furthermore, both seem to calculate impact through proprietary, unexplained algorithms, called Influencer Index (Tweet Archivist) and Klout (Keyhole), yet they show different results. The difference, then, is hidden somewhere in the algorithms.
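To convince myself that such differences need not be mysterious, here is a small sketch of how two perfectly reasonable counting choices already diverge; the sample tweets are invented:

    from collections import Counter

    # Invented sample of course tweets, one of them a retweet.
    tweets = [
        {"text": "#edc loving the readings", "is_retweet": False},
        {"text": "RT #edc loving the readings", "is_retweet": True},
        {"text": "#EDC algorithmic play", "is_retweet": False},
        {"text": "#analytics wrapping up", "is_retweet": False},
    ]

    def top_hashtags(tweets, include_retweets, case_fold):
        """Count hashtags under one particular methodology."""
        counts = Counter()
        for t in tweets:
            if t["is_retweet"] and not include_retweets:
                continue
            for word in t["text"].split():
                if word.startswith("#"):
                    counts[word.lower() if case_fold else word] += 1
        return counts.most_common()

    # Two defensible methodologies, two different "top hashtags" lists.
    print(top_hashtags(tweets, include_retweets=True, case_fold=True))
    print(top_hashtags(tweets, include_retweets=False, case_fold=False))

If two tools make choices like these differently, and never say so, their rankings will disagree in exactly the way Tweet Archivist and Keyhole do.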

Extrapolating from Borer and Lawn (2013), the visualisations not only make claims about objectivity, they also legitimise specific perspectives over others. Relating this to learning, the data visualisations could be misleading because they could influence how learning is defined: is learning a matter of frequency of participation, volume of resources shared, number of people reached? In other words, is learning quantifiable?

I certainly do not think so, because the visualisations, in their current state, do not account for context and meaning. The question in my mind, however, is whether this is all a matter of technology. If, in the future, visualisation tools could not only count but also semantically understand the Twitter discussions, would their value be different? Would a data interpretation by a human differ from that of a machine?

References

Borer, V. L., & Lawn, M. (2013). Governing Education Systems by Shaping Data: from the past to the present, from national to international perspectives. European Educational Research Journal, 12(1), 48-52.

Burkhard, R. A. (2004). Learning from architects: the difference between knowledge visualization and information visualization. In Proceedings of the Eighth International Conference on Information Visualisation (IV 2004) (pp. 519-524). IEEE.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press.

 

Possible topic: What’s the matter with data-enhanced learning?

Quite suddenly tonight, an idea for a possible assignment topic struck me. The choice of final topic has been simmering in my head for the past week, but only now did it bubble up, so I thought I'd write the idea down, as a first attempt at least at thinking about the final assignment for the course.

The title will be: What's the matter with data-enhanced learning? The title is a riff on Sian's critique of TEL. The plan is to apply the three main critiques Sian raised in her article to some of the claims made about learning analytics.

The paper will try to argue that the process of data-gathering needs to be questioned; the values behind the application of learning analytics, clarified; and the assumptions about learning, uncovered.

I am uncertain about two things: Are the terms learning analytics and data-enhanced learning synonymous? And is the use of the term data-enhanced learning justifiable in the title, though perhaps not in the body of the paper?

For references, I will rely mainly on Sian’s article for the framework of the critique of TEL. For the other part, I will cite the main claims made about learning analytics as mentioned by Ben Williamson in his presentation.

For the multimodal part, I will try to draw a sketch note that outlines the main arguments, similar to the one I created earlier. I have seen several videos of animated data visualisations, but I do not have the skills to pull off something similar. Although I will try to minimise the amount of text in the sketch note, the flexibility of the format comes in handy. The need to minimise text in the drawing suggests that the tone may need to be somewhat punchy, or manifesto-like. There may not be enough time to pull that off for the assignment, but I am writing it down here anyway to keep it on my radar. The sketch note cannot stand alone and still relies on text to flesh out the argument. I am unsure whether I will combine the images and text into a web page or a PDF document.

A song called Android's lament, which I tweeted in the early weeks of the course, seems appropriate for the topic. Its opening lyric is "I will not be pushed, filed, stamped, debriefed or numbered", which sounds like a rallying cry against the depersonalisation that happens as a by-product of the processes around big data. I do not know yet how to incorporate it into the multimodal format, but the tone of its argument reminds me of Haraway's strong and clear voice, and of her imaginative approach.

These are just preliminary ideas, some of which may not make it into my final assignment. However, I am interested in finding out which ideas I keep or discard in the end, and why: creating a multimodal artefact is its own learning.