The course is in its last few weeks, and there is a lot I have learned, and maybe even unlearned, about algorithms. I think it is fitting that in this final week we tried to review how algorithms and learning analytics influence educational concerns.
One of the week’s highlights for me was the hangout session. I was struck by what Jeremy said about the exercise on algorithmic play. In many instances, our deliberate attempts to influence what information the algorithms displayed did not make a substantial impact. Logging in and out, or clicking one video over another, made some difference, but not much. Jeremy’s point was that our individual impact on how the algorithms work is tiny, while the data sets on which algorithms are based are huge. The scale of Big Data therefore seems dehumanising to me. Despite claims about personalising information for individuals, algorithms cannot truly be customised for individuals. As Gillespie pointed out, algorithms can only approximate. This contrasts with my experience as a trainer, where I learned to care for learners both collectively and individually. That aspect of caring seems to be drowned out in the sea of data.
This week I also posted my reaction to the Tweetorial visualisations. My main concern was that their veneer of objectivity, seen in the way the images and numbers were presented without introduction, as if they could stand alone, is misleading. The fact that the algorithms behind them are also deliberately hidden is dangerous: conclusions based on incomplete information can lead to wrong decisions.
I also created a poster based on the Gillespie article as a way of processing my understanding of the reading. Reviewing the icons now, I think I should have represented both subject and object, to move away from illustration and towards interpretation, a reminder to keep in mind as I work on my final assignment idea for the course.