Disciplined by Algorithms

Note to the reader: I’m still working on a visualized version of the narrative below.

On posting my QuillConnect Twitter report (in the previous post), Sian presented me with the salient question of how exactly I feel about the profiling:


‘What did you feel about this report Miles? To me feels like the QuillConnect algorithm is designed to make us feel ‘good’ about our Twitter presence, as long as we’re oriented around some kind of notion of ‘positivity’ and playing the game as a ‘successful’ Twitter user (is your profile filled in? are you using the most influential hashtags?). Are we being ‘disciplined by algorithms’?’

In truth, I actually felt a bit guilty. Although, as Sian says, the QuillConnect algorithm sets out in one way to make us feel good about our Twitter presence, there are some precepts in terms of a kind of corporate expectation regarding engagement. Principally there is the uncompleted profile. This is a default habit of mine, deriving from a past practice of wanting to use social media platforms for personal rather than professional reasons, and of wishing to maintain privacy. (Another example of this is that my Facebook profile does not show up on any search engine; I have obliterated my SEO here, as I have written about previously on my IDEL blog.)

At the opposite extreme, my Google+ profile has upwards of 51,000 hits. From an SEO (and therefore an algorithmic) perspective, I have previously asked: what terms in my Google+ profile created the visibility in searches that produced the resultant hit rate? Is it ‘Augustine’, ‘John Peel’, ‘New Order’ or ‘Hokkaido’ (which I have also written about previously in my IDEL blog)? Or is it that these terms are factored by the twenty-four Google+ communities that I am a member of? This EDC blog is to an extent an act of automated curation, but I also have my own personal act of curation taking place in the background: over the past year I have curated over 7,500 Google+ Community posts, and I have a limited range of Google Alerts set up as well (these cover digital film and, interestingly, SEO-related communities). Combing through the latter this week, as the posts are potentially highly relevant for this blog, I found nothing particularly revelatory; disappointingly, there was no significant ‘backstage access’ (Gillespie, 2012).

Returning to my QuillConnect Twitter report and that uncompleted profile then:

‘We notice that your bio is blank. This is one of the first things that potential followers look at. Try to make a bio that explains who you are and what you do. Also, include a city name.’

Common sense says, on the one hand, that actually filling in the profile is a good idea; on the other hand, the Twitter algorithm hungers for this information as its fuel, as do SEO algorithms generally (for showing in search results), and needless to say so do advertisers and other ‘vendors’, the ultimate clients (or top feeders) in the social media algorithmic food chain. Gillespie (2012, no page no.), in ‘The Relevance of Algorithms’, indicates how well-trained, even compliant, users are necessary for algorithms to function effectively, and that algorithms are embedded into the lived practices of users, especially when they are instruments of business:

‘…for whom the information it delivers (or the advertisements it pairs with it) is the commodity. This means we must not consider their ‘effect’ on people but a multidimensional ‘entanglement’ between algorithms put into practice and the social tactics of those who take them up.’

To answer Sian’s question, then: yes, we are being disciplined by algorithms. My Facebook page prominently displays my incomplete profile (and by what percentage), pushing me with suggestions of how many friends live in certain proximities, and whether I would like to choose these as a current location, or as a previous one, and so on. As Gillespie (2012) points out, and as social media providers such as Twitter and Facebook, along with any number of corporate organizations, are aware, the entanglement talked of here is not one-way but a ‘recursive loop’ (2012), including the calculations of users as much as those of algorithms.
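The ‘percentage complete’ nudge is easy to imagine as nothing more than a rule-based audit. A minimal sketch in Python (the field names, equal weighting and the extra suggestion texts are my own invention, not QuillConnect’s or Facebook’s actual logic; the bio and city lines are lifted from the report quoted above):

```python
# Hypothetical profile audit in the spirit of the QuillConnect report.
# Field names, equal weights and some suggestion texts are invented.
PROFILE_FIELDS = {
    "bio": "Try to make a bio that explains who you are and what you do.",
    "location": "Also, include a city name.",
    "photo": "Add a profile photo.",          # invented nudge
    "website": "Link to your website or blog.",  # invented nudge
}

def audit_profile(profile: dict) -> tuple[int, list[str]]:
    """Return a completeness percentage and a list of nudges for empty fields."""
    missing = [field for field in PROFILE_FIELDS if not profile.get(field)]
    percent = round(100 * (len(PROFILE_FIELDS) - len(missing)) / len(PROFILE_FIELDS))
    suggestions = [PROFILE_FIELDS[field] for field in missing]
    return percent, suggestions

# A photo but a blank bio and no location or website: 25% complete, three nudges.
percent, nudges = audit_profile({"photo": "me.jpg", "bio": ""})
```

The point of the caricature is how little ‘judgement’ is involved: the discipline is a checklist, and the percentage exists to be completed.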

I feel drawn, therefore, at the point of this conclusion, to reflect upon Jeremy Knox’s (2014) ‘Active Algorithms: Sociomaterial Spaces in the E-learning and Digital Cultures MOOC’ and his idea of ‘sociomaterial theory’ (2014, p. 45): of moving away from talk of determining factors altogether, and instead thinking about that which is created from ‘co-constitutive relations’ (Fenwick et al., 2011, p. 3, cited in Knox, 2014, p. 45). It is a fascinating insight (Scott and Orlikowski, 2013, cited in Knox, 2014, p. 46) that:

‘Code is considered active, generative and performative in mapping on-line space.’

What is interesting to note here is that when Knox comes to critique the YouTube ‘recommended videos’ section (the column adjacent to the current video being viewed, depending on the page layout selected), he observes that its algorithm is driven not just by information about the prior behaviour of the logged-in user and video metadata (and of course other collateral data), but by data regarding the behaviour of other users en masse who have viewed the current video being watched (Knox, 2014, p. 49). This page’s display of ‘recommended videos’ ‘for you’ is dependent equally upon other people’s behaviour as much as your own, and this is what is to be understood as an algorithm which is ‘generative and performative’, and indeed ‘co-constitutive’ (Knox, 2014, pp. 45-46). As Knox observes:

‘The overall Youtube page is thus not fixed, but produced through relations between the operation of algorithms and the activity of users.’
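That co-constitutive blending can be caricatured in a few lines of Python: a score built from what everyone else who watched the current video went on to watch, folded together with the viewer’s own history. This is a toy sketch of the idea, with invented video names and weighting, not YouTube’s actual algorithm:

```python
from collections import Counter

# Toy "recommended for you" scorer: blends the logged-in user's own viewing
# history with the aggregate behaviour of everyone who watched the current
# video. Data, names and weighting are invented for illustration only.
def recommend(current_video, my_history, co_views, top_n=3):
    """co_views maps a video to a Counter of videos its viewers also watched."""
    scores = Counter()
    # Collective signal: what other viewers of the current video watched en masse.
    scores.update(co_views.get(current_video, Counter()))
    # Personal signal: what viewers of *my* past videos went on to watch.
    for past in my_history:
        scores.update(co_views.get(past, Counter()))
    # Never re-recommend what is playing now or already watched.
    for seen in [current_video, *my_history]:
        scores.pop(seen, None)
    return [video for video, _ in scores.most_common(top_n)]

co_views = {
    "new-order-live":  Counter({"john-peel-session": 5, "joy-division-doc": 3}),
    "hokkaido-travel": Counter({"sapporo-guide": 2, "john-peel-session": 1}),
}
print(recommend("new-order-live", ["hokkaido-travel"], co_views))
# → ['john-peel-session', 'joy-division-doc', 'sapporo-guide']
```

Even in this caricature, the output list is ‘produced through relations’: change the crowd’s behaviour or my history and the ‘for you’ page changes, without any single determining factor.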

The omission of my completed profile is perhaps a triviality when compared to the way data can be scraped in a continual cycle of online actions (or ‘acts’, implying completion in an action and cognisance), interactions and behaviour, taken as something which is co-constitutive and performative (Knox, 2014) as actual practice, and inculcated as part of a kind of intentionality which has an ontological as well as an epistemic or cognitive value independent of, or at least not reducible to, me, or any other single individual, or indeed potentially any single algorithm.

What I see as significant in terms of my algorithmic profile (and this is not mentioned in the QuillConnect report, but backstage access was never going to be a part of the deal (Gillespie, 2012)), and what was necessary for me to run the QuillConnect report at all, is that I needed to be signed in, as is the case for pretty much all of the things that I want to do online. It would be interesting to see how long I have been signed in to certain services, how many days I have run operations consecutively through these sign-ins, and how these ‘acts’ are represented (as a held, transformative schematic value). These operations often involve proxies (from Edu to Google Scholar, for example, or, as is most often the case, from Google to other services).

Hands up: I’m signed in pretty much 24/7, so here is the real extent to which I am disciplined by algorithms in an attitudinal way. And of course there’s a trade-off here: my data tracks in return for the optimized services I avail myself of in these ‘sociomaterial spaces’ online (Knox, 2014).

4 responses

  1. Fascinating stuff Miles, in particular the way you foreground this imperative to ‘complete the profile’: it makes me think of the visualisation of ‘completion’ via progress bars and the like, which I guess tap into a psychological need of users (this seems to be quite well documented in the HCI literature, and perhaps relates to the feeling of ‘guilt’ you mention in relation to a ‘failure’ to complete it).

    I wonder what the implications might be for the representation of data in learning analytics, as I share a general concern that the oversimplification of data visualisation in things like the famous Purdue ‘traffic lights’ system (http://www.educause.edu/ero/article/signals-applying-academic-analytics) reifies data that is already reductive, and reduces our capacity to interrogate and understand how that data is worked on algorithmically.

  2. ‘there are some precepts in terms of a kind of corporate expectation regarding engagement’

    This is a really important point I think. QuillConnect has clearly predetermined a particular kind of ‘normal’ Twitter user, against which you have been judged. However, a corporate use of social media would seem to be more about marketing and being seen to be present, rather than participating with some kind of educational agenda?

    ‘my Facebook page prominently displays my incomplete profile (and by what percentage), pushing me with suggestions of how many friends live in certain proximities, and whether I would like to choose these as a current location, or as a previous one, and so on.’

    Indeed, and the QuillConnect report must have also made suggestions about the kind of future conduct you should be aiming for. Perhaps the number of tweets you should send a week, or the ‘sentiment’ with which you write?

    ‘part of a kind of intentionality which has an ontological as well as an epistemic or cognitive value independent of, or at least not reducible to, me, or any other single individual, or indeed potentially any single algorithm.’

    Really liked this summary Miles. I guess I wouldn’t use the term ‘intentionality’, due to its focus on the human mind and its ability to represent, however I agree that a useful way to look at this is to consider a kind of distributed, relational ontology. I particularly like how you highlight the ‘for you’ – great point. It seems to emphasise a position that is entirely the opposite: that these algorithms are somehow discovering deep truths about us as individuals, rather than integrating our activity data with much broader societal behaviours.

    I think your final point here is excellent too: what we ‘get back’ for our personal data (particularly with the Google suite of services) is ease of transition between things. Life is just easier when you are automatically signed in, and I think things are increasingly designed that way.

  3. Jeremy, Thank you for your thoughtful, detailed comments, a great help. I certainly agree that I went too far with the reference to an ‘intentionality’ (maybe far-future-looking here), part of the process of thinking my way through things I suppose, and yes, the focus was really on distributed relational ontology, epistemic and cognitive value. On the point of my future conduct, QuillConnect clearly encourages me to be ‘positive’ in my tweeting (or to continue being so, perhaps, such flattery!), and is not afraid of making comparisons between me and my followers. Reminds me of yours and Sian’s jokes about HAL and performance appraisals during our film festival Hangout, which I suppose is quite apt and on topic at this juncture! Thanks once again.

  4. Sian, Wonderful insights as always. The traffic light system (I took a look at the article you mention) is not for me, partly for the reasons of oversimplification you mention, but also because of a feeling of a kind of ‘punitive imperative’ which it implies; it sort of takes us back to ‘Follow the Judas sheep: materializing post-qualitative methodology in zooethnographic space’ (Pedersen, 2013). Surely an authentic extent of reflexivity (for learners and educators) within the field of activity and interaction is required to input sufficient values and parameters for an algorithmic application, which might then be open to proper interrogation by both parties, equally? Pity I can’t fit this thought into a tweet! Might try now (I am too wordy). Thanks once again.
