George Siemens, ‘Learning Analytics: The Emergence of a Discipline’, American Behavioral Scientist 57(10), 1380–1400. DOI: 10.1177/0002764213498851
The view that data and analytics offer a new mode of thinking and a new model of discovery is at least partially rooted in the artificial intelligence and machine learning fields. Halevy, Norvig, and Pereira (2009) argue for the “unreasonable effectiveness of data” (p. 8), stating that machine learning and analytics can help computers to tackle even the most challenging knowledge tasks, such as understanding human language. Hey, Tansley, and Tolle (2009) are more bold in their assertions, arguing that data analytics represent the emergence of a new approach to science. (pp. 1381–1382)
Here we have a connection between the android/cyborg of section 1, the post-human ethnography of section 2, and the algorithms of section 3.
Siemens covers many fields, but the developments he describes in e-learning are less speculative: they are happening now in universities.
E-learning: The growth of online learning, particularly in higher education (T. Anderson, 2008; Andrews & Haythornthwaite, 2007; Haythornthwaite & Andrews, 2011), has contributed to the advancement of LA as student data can be captured and made available for analysis. When learners use an LMS, social media, or similar online tools, their clicks, navigation patterns, time on task, social networks, information flow, and concept development through discussions can be tracked. The rapid development of massive open online courses offers additional data for researchers to evaluate teaching and learning in online environments (Chronicle of Higher Education, 2012). (p. 1384)
I personally have an issue with using learning analytics to tell us anything except about the interface with machine learning. We can look at our data, considering bounce rates, load times, click-throughs etc., and we can decide that students are using the pages as intended… or not. But we really can’t know anything about how much students learn or understand. Even if we test students, we can only know how students respond to the tests—from my own experience, this is often more useful for demonstrating that I designed the test badly, or phrased the questions ambiguously, than for demonstrating students’ understanding or recall. (Frequent small-stakes questions are very useful to prompt students to recall information and reproduce it, helping them to reflect and aiding memory. But what really matters are the actions and reactions prompted by the learning interface that take place outside of it.)
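To make concrete what this kind of interface-level analysis can and cannot show, here is a minimal sketch using invented event data (the student IDs, page names, and the five-second bounce threshold are all my assumptions, not anything from Siemens). It computes exactly the metrics I mean—time on task, click-through, bounce rate—and nothing more:

```python
from collections import defaultdict

# Hypothetical LMS event log: (student_id, page, seconds_on_page, clicked_next)
events = [
    ("s1", "intro", 40, True),
    ("s1", "quiz", 300, True),
    ("s2", "intro", 3, False),   # left almost immediately: a "bounce"
    ("s3", "intro", 55, True),
    ("s3", "quiz", 10, False),
]

def page_metrics(events, bounce_threshold=5):
    """Aggregate per-page time on task, click-through rate, and bounce rate."""
    stats = defaultdict(lambda: {"visits": 0, "seconds": 0, "clicks": 0, "bounces": 0})
    for _, page, seconds, clicked in events:
        s = stats[page]
        s["visits"] += 1
        s["seconds"] += seconds
        s["clicks"] += clicked                      # True counts as 1
        s["bounces"] += seconds < bounce_threshold
    return {
        page: {
            "avg_seconds": s["seconds"] / s["visits"],
            "click_through": s["clicks"] / s["visits"],
            "bounce_rate": s["bounces"] / s["visits"],
        }
        for page, s in stats.items()
    }

m = page_metrics(events)
# m["intro"] tells us how the page was used—not what anyone learned from it
```

Every number this produces is about the interface: it can flag a page students abandon, but it cannot distinguish a student who lingered because they were absorbed from one who lingered because they were lost.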
Siemens also notes that this kind of data is “academic analytics” not “research challenges in learning”, that is, it “involved the adoption of business intelligence (BI) to the academic sector (Goldstein, 2005).” For this reason “Commercial tools are the most developed… Research and open analytics tools are not as developed” (p. 1386).
Siemens reminds us that algorithmic or machine data is more useful when it is supporting “human effort”:
To be effective, holistic, and transferable, future analytics projects must afford the capacity to include additional data through observation and human manipulation of the existing data sets. (p. 1387)
I was horrified to read:
Curriculum in schools and higher education is generally preplanned. Designers create course content, interaction, and support resources well before any learner arrives in a course (online or on campus). Through the use of analytics, educational institutions can restructure learning design processes. When learning designers have access to information about learner success following a tutorial or the impact of explanatory text on student performance during assessment, they can incorporate that feedback into future design of learning content.
Online teaching and blended learning require hours of preparation, which makes it hard to change a class mid-lesson. I now lecture without PowerPoint, or with only images if possible, as then I can read the room, take questions, realise the lecture is not meeting the students’ needs, and draw on my knowledge to re-form the teaching content. In small group teaching, I do the same. One semester I taught four tutorials in a row: I would plan a single lesson, but each time I delivered it, I would amend, react, expand or contract parts of the class. I sometimes gave four radically different classes (because I wasn’t delivering pre-planned content, but the group and I were exploring the content together).
For this reason, the ‘personalisation’ of student learning is also problematic. Human beings are social animals, and we learn through social interaction. This means that personalisation alone is insufficient for effective learning. We need to be able to alter teaching to support groups, and to recognise that students will behave differently in different cohorts. The personality of the teacher has been demonstrated to have an enormous influence on student engagement (Shevlin et al., 2000; Williams & Ceci, 1997; Murray, Rushton, & Paunonen, 1990), and the personality of other students in a class is also significant. While there are ‘lone wolf’ students, this does not invalidate the need to explore group dynamics, but rather demonstrates that there are multiple factors that should be explored in learning analytics for them to be useful for learners.
An empathetic human teacher doesn’t need to wait till after a tutorial to incorporate feedback into the design of learning content, but can (and should) be redesigning the learning content in the classroom. Every human teacher should be looking at “the impact of explanatory text on student performance during assessment” as they mark, and this should be, as I understand teaching (and have been doing for the last decade), incorporated into the next iteration. The question that 30% of students clearly misunderstood? The LMS quiz that was aced by native English speakers and tanked by international students? These things we are already quantifying without big data and without E-learning. In fact, we’re waiting for E-learning to catch up with the speed that an experienced teacher can manage with a whiteboard and marker, or document camera (another modern technology).
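The informal quantifying I describe above—which question did 30% of the class miss, which quiz split along language lines—can be sketched in a few lines. This is my own illustration with invented responses and cohort labels, not a tool Siemens describes:

```python
from collections import defaultdict

# Hypothetical quiz responses: (student_id, cohort, question_id, correct)
responses = [
    ("s1", "domestic", "q1", True),      ("s1", "domestic", "q2", True),
    ("s2", "domestic", "q1", False),     ("s2", "domestic", "q2", True),
    ("s3", "international", "q1", True), ("s3", "international", "q2", False),
    ("s4", "international", "q1", False),("s4", "international", "q2", False),
]

def item_analysis(responses):
    """Per-question correct rate, overall and by cohort.

    A low overall rate flags a question many students missed; a large
    gap between cohorts flags possibly ambiguous wording (e.g. idioms
    that disadvantage non-native English speakers).
    """
    totals = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # q -> cohort -> [correct, n]
    for _, cohort, q, correct in responses:
        for key in (cohort, "overall"):
            cell = totals[q][key]
            cell[0] += correct
            cell[1] += 1
    return {
        q: {key: correct / n for key, (correct, n) in cohorts.items()}
        for q, cohorts in totals.items()
    }

report = item_analysis(responses)
# Here q2 is aced by one cohort and missed by the other: review the wording
```

The point is not that this is sophisticated—it is that an experienced teacher already does this kind of analysis in their head, from a pile of marked scripts, faster than most e-learning platforms surface it.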
*rant the second*
Concerns about data quality, sufficient scope of the data captured to reflect accurately the learning experience, privacy, and ethics of analytics are among the most significant concerns (see Slade & Prinsloo, 2013). (p. 1392)
Privacy is, I think, one of the most significant issues with modern learning analytics. I used to teach a class that required the production of a weekly ‘learning journal’. However, the journal was only viewed twice in the semester. If a student decided to write in larger chunks, or individually, if they journalled before or after class, if they did it all in one big go just before handing it in… I didn’t know. I could tell which students did it badly—and it was designed to help students reflect and engage as they went. But students had the privacy to produce assessment at their own pace.
This meant that students didn’t have to tell me they were sick, they were busy at work, that their children were home from school with chicken pox. They did have to attend my tutorials, but that was 1 hour a week. Other than that, they could learn privately.
I work in blocks and then distribute my learning across the week through scheduling and backdating. I found IFTTT meant that learning intruded too far into my private space. I had to act to suit the assessment and the algorithm, rather than learn. I chose to learn.
A public blog means that I have to be public about my health, my busyness, my business. Or at least the markers of those interruptions are publicly available.
Siemens quotes two scholars writing in 1964; 50 years later, their concerns remain pertinent:
Ellul (1964) stated that technique and technical processes strive for the “mechanization of everything it encounters” (p. 12).
“Most of our institutions of higher learning are as thoroughly automated as a modern steel plant” (Mumford, 1964, p. 274). (p. 1395)
The learning process is creative, requiring the generation of new ideas, approaches, and concepts. Analytics, in contrast, is about identifying and revealing what already exists. (p. 1395)
Ellul and Mumford might also remind us that the purpose of the mechanised plant is to repeat the same process over and over again, as quickly and consistently as possible. Humans, as I have argued above, are not very good at being consistent, and as Adorno and Foucault might argue, any attempt to make students into machine-goods is an act of oppression. What’s more, Foucault would suggest that the assessing gaze of the teacher is another form of coercion.
Unlike Facebook or Amazon, who track our clicks, absences and contributions to give us more of what we want, teachers judge our clicks, absences and contributions against a regime and rubric of value judgments.
*rant trails off, need to go make tea*