<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Adventures in algorithms</title>
	<atom:link href="https://edc15.education.ed.ac.uk/mpeters/2015/03/09/adventures-in-algorithms/feed/" rel="self" type="application/rss+xml" />
	<link>https://edc15.education.ed.ac.uk/mpeters/2015/03/09/adventures-in-algorithms/</link>
	<description>An education and digital culture site - #MSCEDC 2015</description>
	<lastBuildDate>Wed, 11 Mar 2015 04:07:32 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.2.38</generator>
	<item>
		<title>By: Martyn</title>
		<link>https://edc15.education.ed.ac.uk/mpeters/2015/03/09/adventures-in-algorithms/#comment-112</link>
		<dc:creator><![CDATA[Martyn]]></dc:creator>
		<pubDate>Wed, 11 Mar 2015 04:07:32 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mpeters/?p=138#comment-112</guid>
		<description><![CDATA[Looking through Rebecca Eynon&#039;s paper on the rise of big data, I found she really highlights the more concerning issues related to algorithms and their application. When she writes about using data to predict whether or not a university student will drop out, and whether universities should accept them, these issues seem really tough ethically. Also, the idea that a student&#039;s progress can be tracked throughout school could be damning and condemning if used for anything other than remedial work and support. The tracking of progress could give way to a kind of modern-day 11-plus exam. One quote that also really stood out was:

&quot;What happens to serendipity in a system where all educational choices are based on recommender systems?&quot; (Eynon, 2013, 238)

I think this really echoes the points you were making about Netflix above, Jeremy, and, as much as I love structured studying, I think stumbling upon ideas and concepts through education is also really important.

Rebecca Eynon (2013) The rise of Big Data: what does it mean for education,
technology, and media research?, Learning, Media and Technology, 38:3, 237-240, DOI:
10.1080/17439884.2013.771783]]></description>
		<content:encoded><![CDATA[<p>Looking through Rebecca Eynon&#8217;s paper on the rise of big data, I found she really highlights the more concerning issues related to algorithms and their application. When she writes about using data to predict whether or not a university student will drop out, and whether universities should accept them, these issues seem really tough ethically. Also, the idea that a student&#8217;s progress can be tracked throughout school could be damning and condemning if used for anything other than remedial work and support. The tracking of progress could give way to a kind of modern-day 11-plus exam. One quote that also really stood out was:</p>
<p>&#8220;What happens to serendipity in a system where all educational choices are based on recommender systems?&#8221; (Eynon, 2013, 238)</p>
<p>I think this really echoes the points you were making about Netflix above, Jeremy, and, as much as I love structured studying, I think stumbling upon ideas and concepts through education is also really important.</p>
<p>Rebecca Eynon (2013) The rise of Big Data: what does it mean for education,<br />
technology, and media research?, Learning, Media and Technology, 38:3, 237-240, DOI:<br />
10.1080/17439884.2013.771783</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jeremy Knox</title>
		<link>https://edc15.education.ed.ac.uk/mpeters/2015/03/09/adventures-in-algorithms/#comment-110</link>
		<dc:creator><![CDATA[Jeremy Knox]]></dc:creator>
		<pubDate>Tue, 10 Mar 2015 12:03:29 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mpeters/?p=138#comment-110</guid>
		<description><![CDATA[This is a super outline of your algorithmic play, Martyn, and fascinating to see the Netflix recommendation system at work.

&#039;My choices, right from the beginning, were limited to recent viewing folly&#039;

Yes, really interesting, particularly as Netflix seem to claim a somewhat more sophisticated algorithm, and also highlight the differences between what people say they watch and what they actually watch (http://www.wired.com/2013/08/qq_netflix-algorithm/). Where is the scope to be surprised by something new in your recommended films? This is the crux of the &#039;you loop&#039; issue.

&#039;Harmless here, but what if it was a political view being reinforced? What if it was a perspective on a current affair or report on a certain issue I may not have heard about.&#039;

Great point here. While one might argue that Hollywood films are relatively simple to categorise into genres, there is a gap between how the algorithm might classify something and how any one individual might interpret that category, isn&#039;t there? So, staying with the film example, can we assume that people actually like films according to genre in all cases? Don&#039;t people have more nuanced interpretations of films, and of what they might like about them? For example, I think The Shining (http://en.wikipedia.org/wiki/The_Shining_%28film%29) is probably one of my favourite films, but I wouldn&#039;t say I&#039;m a fan of the &#039;horror&#039; genre. There are other things I like about the film, though I&#039;m not sure I can fully articulate what those things are or *why* I like it - so can an algorithm really do that for me?

That relates to your point here, because there are perhaps more serious implications around the kind of knowledge we have access to through algorithms (Google Search being a prime example). So, my argument would be that when we try to model the &#039;likes&#039; and &#039;dislikes&#039; of people, it is not only done in a hugely simplistic way, but the results also feed back formative ideas about our selves that are quite powerful (Oh, so I must be a horror fan!). Rather than think about what it is we like about films, the algorithm gives us an answer. When one thinks about this in terms of politics, as you say, the implications may be pretty significant.

		You also raise a really good point about the automated &#039;personalisation&#039; of education and the loss of diversity. The assumed transparency and authenticity of the algorithm is central here again, I think: if an algorithm tells you that you are a &#039;visual learner&#039; and that you prefer mathematics-related subjects, that is a pretty powerful message to somebody who might be feeling pressured to find direction and focus in their choice of educational pathway.]]></description>
		<content:encoded><![CDATA[<p>This is a super outline of your algorithmic play, Martyn, and fascinating to see the Netflix recommendation system at work.</p>
<p>&#8216;My choices, right from the beginning, were limited to recent viewing folly&#8217;</p>
<p>Yes, really interesting, particularly as Netflix seem to claim a somewhat more sophisticated algorithm, and also highlight the differences between what people say they watch and what they actually watch (<a href="http://www.wired.com/2013/08/qq_netflix-algorithm/" rel="nofollow">http://www.wired.com/2013/08/qq_netflix-algorithm/</a>). Where is the scope to be surprised by something new in your recommended films? This is the crux of the &#8216;you loop&#8217; issue.</p>
<p>&#8216;Harmless here, but what if it was a political view being reinforced? What if it was a perspective on a current affair or report on a certain issue I may not have heard about.&#8217;</p>
<p>Great point here. While one might argue that Hollywood films are relatively simple to categorise into genres, there is a gap between how the algorithm might classify something and how any one individual might interpret that category, isn&#8217;t there? So, staying with the film example, can we assume that people actually like films according to genre in all cases? Don&#8217;t people have more nuanced interpretations of films, and of what they might like about them? For example, I think The Shining (<a href="http://en.wikipedia.org/wiki/The_Shining_%28film%29" rel="nofollow">http://en.wikipedia.org/wiki/The_Shining_%28film%29</a>) is probably one of my favourite films, but I wouldn&#8217;t say I&#8217;m a fan of the &#8216;horror&#8217; genre. There are other things I like about the film, though I&#8217;m not sure I can fully articulate what those things are or *why* I like it &#8211; so can an algorithm really do that for me?</p>
<p>That relates to your point here, because there are perhaps more serious implications around the kind of knowledge we have access to through algorithms (Google Search being a prime example). So, my argument would be that when we try to model the &#8216;likes&#8217; and &#8216;dislikes&#8217; of people, it is not only done in a hugely simplistic way, but the results also feed back formative ideas about our selves that are quite powerful (Oh, so I must be a horror fan!). Rather than think about what it is we like about films, the algorithm gives us an answer. When one thinks about this in terms of politics, as you say, the implications may be pretty significant.</p>
<p>You also raise a really good point about the automated &#8216;personalisation&#8217; of education and the loss of diversity. The assumed transparency and authenticity of the algorithm is central here again, I think: if an algorithm tells you that you are a &#8216;visual learner&#8217; and that you prefer mathematics-related subjects, that is a pretty powerful message to somebody who might be feeling pressured to find direction and focus in their choice of educational pathway.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: PJ Fameli</title>
		<link>https://edc15.education.ed.ac.uk/mpeters/2015/03/09/adventures-in-algorithms/#comment-109</link>
		<dc:creator><![CDATA[PJ Fameli]]></dc:creator>
		<pubDate>Mon, 09 Mar 2015 13:24:53 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mpeters/?p=138#comment-109</guid>
		<description><![CDATA[Martyn, thanks for a concise, straightforward and instructive algorithmic play. I have been delaying signing up for Netflix for the past couple of years, as I have clung to my portable DVD player. But now that you have educated me on what to watch out for, I will probably break down and start watching Scandinavian films on it. Cheers, PJ]]></description>
		<content:encoded><![CDATA[<p>Martyn, thanks for a concise, straightforward and instructive algorithmic play. I have been delaying signing up for Netflix for the past couple of years, as I have clung to my portable DVD player. But now that you have educated me on what to watch out for, I will probably break down and start watching Scandinavian films on it. Cheers, PJ</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Week 9: learning analytics and calculating academics &#124; MSc in Digital Education at the University of Edinburgh</title>
		<link>https://edc15.education.ed.ac.uk/mpeters/2015/03/09/adventures-in-algorithms/#comment-107</link>
		<dc:creator><![CDATA[Week 9: learning analytics and calculating academics &#124; MSc in Digital Education at the University of Edinburgh]]></dc:creator>
		<pubDate>Mon, 09 Mar 2015 10:51:35 +0000</pubDate>
		<guid isPermaLink="false">http://edc15.education.ed.ac.uk/mpeters/?p=138#comment-107</guid>
		<description><![CDATA[[&#8230;] in a way which limits not only the scope of our activity but the construction of our identity. Martyn did a great job of analysing this in relation to Netflix, Clare’s tiki-toki timeline gave a terrific sense of her wider social media ecology, Jin applied [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] in a way which limits not only the scope of our activity but the construction of our identity. Martyn did a great job of analysing this in relation to Netflix, Clare’s tiki-toki timeline gave a terrific sense of her wider social media ecology, Jin applied [&#8230;]</p>
]]></content:encoded>
	</item>
</channel>
</rss>
