I had always imagined algorithms to be sturdy, reliable mathematical formulae, so it has been surprising these past two weeks to discover that they are generally considered to be out of control, unpredictable, fragile and buggy. It therefore seems strange that, at the same time, we also consider these same algorithms sophisticated enough to explain, simulate and predict human life.
I thought it was interesting that Kitchin describes the view that data can speak for itself, free from human bias or framing, so that patterns and relationships within big data are inherently meaningful and truthful. Surely the fact that algorithms are created by humans, and need to be analysed and interpreted by humans, means that they can never be totally free from human bias?
In his presentation, Williamson talks about statisticians and data scientists doing the job of sociologists. While they may be able to do this to a certain extent, it would also seem vital for sociologists to be trained in analysing and interpreting big data, and for the two groups to work together.
In ‘Learning Analytics’ Siemens states that LA is concerned with ‘sense making and action’ – but clearly this comes with ethical issues. Are we obliged to act on student LA data if the student looks to be ‘at risk’? If ‘at risk’ means low engagement on a VLE, how do we know that the student is not studying outside the VLE? What are the consequences of intervention? Are teachers/course providers sufficiently well trained in LA to be able to correctly interpret the data before making any kind of decision?
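The interpretation problem above can be made concrete with a small, entirely hypothetical sketch (this is not any real LA system; the function name and threshold are invented) of the kind of crude rule that might sit behind an ‘at risk’ flag:

```python
# Hypothetical illustration only: a naive 'at risk' rule that flags
# students purely on recorded VLE login counts. The threshold of 2
# logins per week is an invented figure for the sketch.

def flag_at_risk(vle_logins_per_week, threshold=2):
    """Flag a student as 'at risk' if their average weekly VLE logins
    fall below the threshold. Note everything this proxy misses:
    reading offline, studying outside the VLE, face-to-face group work."""
    if not vle_logins_per_week:
        return True  # no recorded activity at all
    avg = sum(vle_logins_per_week) / len(vle_logins_per_week)
    return avg < threshold

# A student studying mostly outside the VLE looks identical to a
# disengaged one under this rule:
print(flag_at_risk([0, 1, 0, 1]))  # True - but is this student really at risk?
print(flag_at_risk([3, 4, 5, 2]))  # False
```

The point of the sketch is how much the rule cannot see: the ‘True’ output says nothing about why the logins are low, which is exactly why a teacher would need training to interpret the flag before intervening.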