<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Disciplined by Algorithms</title>
	<atom:link href="https://edc15.education.ed.ac.uk/mprowse/2015/03/07/disciplined-by-algorithms/feed/" rel="self" type="application/rss+xml" />
	<link>https://edc15.education.ed.ac.uk/mprowse/2015/03/07/disciplined-by-algorithms/</link>
	<description>Just another Education and digital culture 2015 site</description>
	<lastBuildDate>Mon, 30 Mar 2015 10:45:10 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.2.38</generator>
	<item>
		<title>By: mprowse</title>
		<link>https://edc15.education.ed.ac.uk/mprowse/2015/03/07/disciplined-by-algorithms/#comment-191</link>
		<dc:creator><![CDATA[mprowse]]></dc:creator>
		<pubDate>Fri, 13 Mar 2015 14:10:16 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mprowse/?p=327#comment-191</guid>
		<description><![CDATA[Sian, Wonderful insights as always. The traffic light system (I took a look at the article you mention) is not for me, partly for the reasons of oversimplification you mention, but also because of the feeling of a kind of &#039;punitive imperative&#039; it implies; it rather takes us back to &#039;Follow the Judas sheep: materializing post-qualitative methodology in zooethnographic space&#039; (Pedersen, 2013). Surely an authentic extent of reflexivity (for learners and educators) within the field of activity and interaction is required to input sufficient values and parameters for an algorithmic application, which might then be amenable to proper interrogation by both parties, equally? Pity I can&#039;t fit this thought into a Tweet! Might try now (I am too wordy). Thanks once again.]]></description>
		<content:encoded><![CDATA[<p>Sian, Wonderful insights as always. The traffic light system (I took a look at the article you mention) is not for me, partly for the reasons of oversimplification you mention, but also because of the feeling of a kind of &#8216;punitive imperative&#8217; it implies; it rather takes us back to &#8216;Follow the Judas sheep: materializing post-qualitative methodology in zooethnographic space&#8217; (Pedersen, 2013). Surely an authentic extent of reflexivity (for learners and educators) within the field of activity and interaction is required to input sufficient values and parameters for an algorithmic application, which might then be amenable to proper interrogation by both parties, equally? Pity I can&#8217;t fit this thought into a Tweet! Might try now (I am too wordy). Thanks once again.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mprowse</title>
		<link>https://edc15.education.ed.ac.uk/mprowse/2015/03/07/disciplined-by-algorithms/#comment-190</link>
		<dc:creator><![CDATA[mprowse]]></dc:creator>
		<pubDate>Fri, 13 Mar 2015 13:37:47 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mprowse/?p=327#comment-190</guid>
		<description><![CDATA[Jeremy, Thank you for your thoughtful, detailed comments, a great help. I certainly agree that I went too far with the reference to an &#039;intentionality&#039; (maybe looking too far into the future here), part of the process of thinking my way through things I suppose, and yes, the focus was really on distributed relational ontology and epistemic and cognitive value. On the point of my future conduct, Quillconnect clearly encourages me to be &#039;positive&#039; in my Tweeting (or to continue being so, perhaps, such flattery!), and is not afraid of making comparisons between me and my followers. It reminds me of your and Sian&#039;s jokes about HAL and performance appraisals during our film festival Hangout, which I suppose is quite apt and on topic at this juncture! Thanks once again.]]></description>
		<content:encoded><![CDATA[<p>Jeremy, Thank you for your thoughtful, detailed comments, a great help. I certainly agree that I went too far with the reference to an &#8216;intentionality&#8217; (maybe looking too far into the future here), part of the process of thinking my way through things I suppose, and yes, the focus was really on distributed relational ontology and epistemic and cognitive value. On the point of my future conduct, Quillconnect clearly encourages me to be &#8216;positive&#8217; in my Tweeting (or to continue being so, perhaps, such flattery!), and is not afraid of making comparisons between me and my followers. It reminds me of your and Sian&#8217;s jokes about HAL and performance appraisals during our film festival Hangout, which I suppose is quite apt and on topic at this juncture! Thanks once again.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jeremy Knox</title>
		<link>https://edc15.education.ed.ac.uk/mprowse/2015/03/07/disciplined-by-algorithms/#comment-186</link>
		<dc:creator><![CDATA[Jeremy Knox]]></dc:creator>
		<pubDate>Wed, 11 Mar 2015 11:43:17 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mprowse/?p=327#comment-186</guid>
		<description><![CDATA[&#039;there are some precepts in terms of a kind of corporate expectation regarding engagement&#039;

This is a really important point I think. QuillConnect has clearly predetermined a particular kind of &#039;normal&#039; Twitter user, against which you have been judged. However, a corporate use of social media would seem to be more about marketing and being seen to be present, rather than participating with some kind of educational agenda?

&#039;my Facebook page displays prominently to me my incomplete profile (and by what percentage) - pushing me with suggestions of how many friends live in certain proximities and should I like to choose these as a current location, or as a previous life and so on.&#039;

Indeed, and the QuillConnect report must have also made suggestions about the kind of future conduct you should be aiming for. Perhaps the number of tweets you should send a week, or the &#039;sentiment&#039; with which you write?

&#039;part of a kind of intentionality which has an ontological as well as epistemic or cognitive value independent, or at least not reducible to me, or any other single individual or indeed potentially any single algorithm perhaps.&#039;

Really liked this summary, Miles. I guess I wouldn&#039;t use the term &#039;intentionality&#039;, due to its focus on the human mind and its ability to represent; however, I agree that a useful way to look at this is to consider a kind of distributed, relational ontology. I particularly like how you highlight the &#039;for you&#039; - great point. It seems to emphasise a position that is entirely the opposite: that these algorithms are somehow discovering deep truths about us as individuals, rather than integrating our activity data with much broader societal behaviours.

I think your final point here is excellent too: what we &#039;get back&#039; for our personal data (particularly with the Google suite of services) is ease of transition between things. Life is just easier when you are automatically signed in, and I think things are increasingly designed that way.]]></description>
		<content:encoded><![CDATA[<p>&#8216;there are some precepts in terms of a kind of corporate expectation regarding engagement&#8217;</p>
<p>This is a really important point I think. QuillConnect has clearly predetermined a particular kind of &#8216;normal&#8217; Twitter user, against which you have been judged. However, a corporate use of social media would seem to be more about marketing and being seen to be present, rather than participating with some kind of educational agenda?</p>
<p>&#8216;my Facebook page displays prominently to me my incomplete profile (and by what percentage) &#8211; pushing me with suggestions of how many friends live in certain proximities and should I like to choose these as a current location, or as a previous life and so on.&#8217;</p>
<p>Indeed, and the QuillConnect report must have also made suggestions about the kind of future conduct you should be aiming for. Perhaps the number of tweets you should send a week, or the &#8216;sentiment&#8217; with which you write?</p>
<p>&#8216;part of a kind of intentionality which has an ontological as well as epistemic or cognitive value independent, or at least not reducible to me, or any other single individual or indeed potentially any single algorithm perhaps.&#8217;</p>
<p>Really liked this summary, Miles. I guess I wouldn&#8217;t use the term &#8216;intentionality&#8217;, due to its focus on the human mind and its ability to represent; however, I agree that a useful way to look at this is to consider a kind of distributed, relational ontology. I particularly like how you highlight the &#8216;for you&#8217; &#8211; great point. It seems to emphasise a position that is entirely the opposite: that these algorithms are somehow discovering deep truths about us as individuals, rather than integrating our activity data with much broader societal behaviours.</p>
<p>I think your final point here is excellent too: what we &#8216;get back&#8217; for our personal data (particularly with the Google suite of services) is ease of transition between things. Life is just easier when you are automatically signed in, and I think things are increasingly designed that way.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: sbayne</title>
		<link>https://edc15.education.ed.ac.uk/mprowse/2015/03/07/disciplined-by-algorithms/#comment-185</link>
		<dc:creator><![CDATA[sbayne]]></dc:creator>
		<pubDate>Wed, 11 Mar 2015 11:35:39 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mprowse/?p=327#comment-185</guid>
		<description><![CDATA[Fascinating stuff Miles, in particular the way you foreground this imperative to &#039;complete the profile&#039;: it makes me think of the visualisation of &#039;completion&#039; via progress bars and the like, which I guess tap into a psychological need of users (this seems to be quite well documented in the HCI literature, and perhaps relates to the feeling of &#039;guilt&#039; you mention in relation to a &#039;failure&#039; to complete it).

I wonder what the implications might be for the representation of data in learning analytics, as I share a general concern that the oversimplification of data visualisation in things like the famous Purdue &#039;traffic lights&#039; system (http://www.educause.edu/ero/article/signals-applying-academic-analytics) reifies data that is already reductive, and reduces our capacity to interrogate and understand how that data is worked on algorithmically.]]></description>
		<content:encoded><![CDATA[<p>Fascinating stuff Miles, in particular the way you foreground this imperative to &#8216;complete the profile&#8217;: it makes me think of the visualisation of &#8216;completion&#8217; via progress bars and the like, which I guess tap into a psychological need of users (this seems to be quite well documented in the HCI literature, and perhaps relates to the feeling of &#8216;guilt&#8217; you mention in relation to a &#8216;failure&#8217; to complete it).</p>
<p>I wonder what the implications might be for the representation of data in learning analytics, as I share a general concern that the oversimplification of data visualisation in things like the famous Purdue &#8216;traffic lights&#8217; system (<a href="http://www.educause.edu/ero/article/signals-applying-academic-analytics" rel="nofollow">http://www.educause.edu/ero/article/signals-applying-academic-analytics</a>) reifies data that is already reductive, and reduces our capacity to interrogate and understand how that data is worked on algorithmically.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
