Category Archives: lifestream

Recap: Week 9

Week 9 has ended. We continued exploring the world of algorithms and turned our attention to their specific applications in education, in the form of learning analytics.

It was fascinating to see what my colleagues came up with for last week’s exercise in exploring the algorithms used by internet companies to give their users individual suggestions and recommendations. Following up on my own explorations I delved into some issues raised by Jeremy, especially concerning privacy implications – in my mind the most pervasive and pressing issue surrounding the growing use of algorithms in our lives.

Later in the week I turned my attention to Twitter, reading and replying to my colleagues’ tweets as well as participating in this week’s scheduled EDC tweetstorm.

One particular question we were asked was what we give to algorithms and what they give us, and I tweeted “We give them our history, they give us our future.” This statement was deliberately ambiguous. On the surface level, one could interpret it as: we give algorithms our search or viewing history, and they recommend sites or videos we will watch in the future, thus shaping that future. In my opinion it goes even deeper, however. Algorithms that predict the weather incorporate not just historical weather data but also our current understanding of maths, systems dynamics and meteorology – all developed over time. Especially in light of artificial intelligence, it seems more and more likely to me that we will soon pass on the torch of knowledge creation to entities that are not limited by our physical and biological boundaries.

Considering the issues of learning analytics, I believe that in the future they will be able to considerably help people in their learning endeavours in a variety of ways. My prediction is that models based on biofeedback – heart rate, skin conductance, pupil dilation, blink frequency, brainwave measurements and so on – will one day be used to guide students to maximise learning, perhaps by signalling ideal learning windows. As previously mentioned, such massive tracking carries its own set of problems, particularly with regard to privacy and data security.

To round things off this week, I stumbled upon a fun little game by Google that lets you play around with its autocomplete suggestion engine, scoring points for correctly guessing its guesses.

Comment on Exploring Algorithms by mkiseloski

Thanks Martyn and Jeremy for your comments!

@ Martyn
Thank you, I will check out Ghostery. It seems like one more useful tool to protect privacy on the internet.

@ Jeremy
You raise some very interesting points here! I wholeheartedly agree that the issue of privacy needs to be taken much more seriously in our public discourse, regardless of whether our data are collected by a government entity or a private business. You are right when you say that pieces of information like “likes Latin American music” or “likes winter sports” seem rather inconspicuous on their own, but the point to be made here is that the more such seemingly useless factoids accumulate, the more they merge into a stunningly accurate profile. This reminds me of how, after the Snowden leaks, people tried to justify the warrantless NSA surveillance programs by citing that they only collected metadata – when in fact metadata (who you talked to, when you were in what place, where and for how much you used your credit card, where you went regularly, who was with you at those times, etc.) can potentially paint a much more accurate picture than the content of phone calls.
I read recently that Uber could easily infer from their user data how likely someone was to be having an affair (and where) simply by looking at the driving patterns of people regularly travelling to some place in the evening and driving back home in the early hours of the morning. Knowing that a private company can so easily obtain such sensitive information feels quite unsettling to me.
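The kind of inference described above takes surprisingly little code. A toy sketch with entirely made-up trip records – not Uber's actual method, just an illustration of how telling raw trip metadata can be:

```python
from collections import Counter
from datetime import datetime

# Hypothetical trip records for one user: (departure time, destination label).
trips = [
    ("2015-03-02 22:30", "apartment_x"),
    ("2015-03-03 04:10", "home"),
    ("2015-03-09 23:05", "apartment_x"),
    ("2015-03-10 03:45", "home"),
    ("2015-03-11 18:20", "gym"),
    ("2015-03-16 22:50", "apartment_x"),
    ("2015-03-17 04:30", "home"),
]

def suspicious_destinations(trips, min_visits=3):
    """Count destinations reached late in the evening that are
    followed by a trip back home in the early morning hours."""
    counts = Counter()
    # Pair each trip with the one immediately after it.
    for (dep, dest), (ret_dep, ret_dest) in zip(trips, trips[1:]):
        dep_t = datetime.strptime(dep, "%Y-%m-%d %H:%M")
        ret_t = datetime.strptime(ret_dep, "%Y-%m-%d %H:%M")
        # Outbound after 21:00, return home before 06:00.
        if dep_t.hour >= 21 and ret_dest == "home" and ret_t.hour < 6:
            counts[dest] += 1
    return [d for d, n in counts.items() if n >= min_visits]

print(suspicious_destinations(trips))  # ['apartment_x']
```

The unsettling part is that nothing here looks at *content* at all – times and places alone are enough to single out a recurring pattern.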
I think that algorithms first discover identity, but as they become more aware of your existing identity and form a filter bubble around you, they start to influence you with their suggestions, possibly shaping your identity as you interact with their services more and more. In any case, I think that privacy has to be protected if we want to live in a free society.

from Comments for Mihael’s EDC blog http://edc15.education.ed.ac.uk/mkiseloski/2015/03/07/exploring-algorithms/#comment-609
via IFTTT

Recap: Week 8

The first week of block 3 has ended, and as expected it was a very informative experience delving into the depths of algorithmic cultures.

To finish up last week’s ethnography, I started this week’s lifestream with some more detailed observations about the songwriting MOOC’s community culture.

Next, I dove right into this week’s topic of algorithms with a fantastic talk by Kevin Slavin on how algorithms shape our world. Slavin shows the fascinating world of financial algorithms that have largely taken over control of the global financial system in the last decade. For anyone wanting to expand on this trend, I can wholeheartedly recommend last year’s bestseller “Flash Boys” by Michael Lewis.

Staying on the topic of algorithms and the financial sector, I started to explore cryptocurrencies such as Bitcoin, which is based on the blockchain: a cryptographically protected, trustless accounting ledger that many people see as one of the biggest revolutions of the 21st century. Building on the principle of the blockchain, developers have already started to expand the framework beyond currencies to social contracts, financial markets, property rights and e-government. A truly fascinating Harvard lecture on Ethereum, the platform on which these blockchain contracts are executed, shows the dangers and possibilities of these brand-new developments.
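The core idea behind that ledger – a chain of hashes in which each block commits to the one before it – can be illustrated in a minimal sketch. This toy deliberately omits proof-of-work, networking and consensus; it only shows why tampering is detectable:

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Create a block whose hash covers its contents and the
    previous block's hash, chaining the ledger together."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and link; changing any block's data
    breaks its own hash check."""
    for prev, block in zip(chain, chain[1:]):
        payload = json.dumps(
            {"index": block["index"], "data": block["data"],
             "prev_hash": block["prev_hash"]}, sort_keys=True).encode()
        if block["prev_hash"] != prev["hash"]:
            return False
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

# Build a tiny chain of transactions.
genesis = make_block(0, "genesis", "0" * 64)
b1 = make_block(1, "Alice pays Bob 5", genesis["hash"])
b2 = make_block(2, "Bob pays Carol 2", b1["hash"])

print(chain_is_valid([genesis, b1, b2]))  # True
b1["data"] = "Alice pays Bob 500"         # tamper with the ledger
print(chain_is_valid([genesis, b1, b2]))  # False
```

The “trustless” part comes from exactly this property: anyone can recompute the hashes and verify the ledger for themselves, without trusting a central bookkeeper.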

The following day I once more turned my attention to this exceptionally well written and thoroughly researched longform article about artificial intelligence – an incredibly long, but nonetheless fascinating, thought provoking and even frightening elaboration on the implications of artificial superintelligence. Expanding on that, I posted a video by Nick Bostrom, one of the more prominent thinkers in this field.

Finally, I posted my findings on this week’s task of exploring and playing around with the algorithms employed by technology companies in their web offerings. I used the YouTube recommendation engine to examine this topic, finding plenty of evidence for the often-cited “filter bubble”.

It will be interesting to see next week how algorithms can be used to specifically improve education outcomes.

Exploring Algorithms

Algorithms play an increasing role in our online lives. In their never-ending quest to accurately profile their users in order to maximise ad revenue, IT companies employ ever more sophisticated data-mining methods, incorporating information from your activity history, your friends, your location and even strangers with interests similar to yours.

Ever since I watched Gary Kovacs’s shocking TED talk “Tracking the Trackers” I have become increasingly privacy-conscious and have taken several precautions to make myself harder to track: from activating the ‘Do Not Track’ option in my browser, to installing privacy-enhancing browser extensions such as Disconnect, to opting out of targeted ads in the Google settings.

Thanks to these steps my advertisement profile with Google is now rather unspecific. Without taking such measures, however, Google has been able to profile people surprisingly well.

I actively try to avoid the personalised features that sites present me with. On Facebook I never use the EdgeRank algorithm that sorts my feed by “Top Stories” – I use “Most Recent” instead – simply because most of the Top Stories are, unsurprisingly, paid content from pages I subscribed to, not posts from my friends. Another reason is that I prefer to keep an open mind, and personalised filters tend to create a filter bubble which not only distorts people’s view of the outside world according to their own preferences and beliefs, but does so invisibly.

For this week’s exercise of exploring algorithms I decided to take a look at YouTube, since I have a long-standing history of using the site. My main interaction with it is through the “My Subscriptions” tab, which is always more relevant (and more recent) than the algorithmically populated “What to watch” feature.

Logged into my account, this is what my front page looks like:

[Screenshot: what_to_watch]

I can immediately tell why YouTube is recommending these videos to me: all of them are closely related to videos I watched on YouTube within the last 48 hours. I watched one “CinemaSins” video, one Pink Floyd song, one Kygo song, a fail video and a Strokes song. While I actively sought out the songs, the other videos showed up in my subscription feed, which made me click them.

If I scroll further down, it seems that YouTube still takes the same three or four videos from earlier and shows more related videos. Additionally, it suggests videos by la belle musique, a channel I am already subscribed to.

[Screenshot: what_to_watch2]

While the suggested videos generally match my taste, they don’t necessarily entice me to watch them now, especially since they don’t lead me to interesting new channels I might want to subscribe to.
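The pattern I am seeing – suggestions drawn almost entirely from a handful of recently watched videos – is roughly what a naive content-based recommender produces. A toy sketch, with all titles and tags invented for illustration (not YouTube’s actual algorithm):

```python
# Hypothetical recent watch history, each video tagged by topic.
history = {
    "CinemaSins: Everything Wrong With ...": {"movies", "comedy"},
    "Pink Floyd - Time": {"music", "rock"},
    "Kygo - Firestone": {"music", "electronic"},
}

# Hypothetical candidate pool the site could suggest from.
candidates = {
    "The Strokes - Reptilia": {"music", "rock"},
    "Honest Trailers": {"movies", "comedy"},
    "Cooking pasta": {"food"},
}

def recommend(history, candidates, top_n=2):
    """Score each candidate by how many tags it shares with
    recently watched videos and return the best matches."""
    watched_tags = set().union(*history.values())
    scored = sorted(
        candidates.items(),
        key=lambda item: len(item[1] & watched_tags),
        reverse=True,
    )
    return [title for title, tags in scored[:top_n]]

print(recommend(history, candidates))
```

The filter bubble falls straight out of the scoring rule: anything with no overlap with what you already watched ("Cooking pasta" here) scores zero and never surfaces, so the system keeps you inside your existing tastes.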

If I log out and visit YouTube in incognito mode, I am greeted with the following suggestions:

[Screenshot: what_to_watch3]

None of these videos have any relevance to my search or watch history, but looking at the channel names (Ad Council, RadioKRONEHIT) one can assume that these videos were placed on the front page because someone paid for the placement.

Let’s take a look at YouTube’s comments section, which has long been notorious for its disastrous quality. Apparently Google sorts the comments according to your Google+ profile, which I, however, never use. This shows in the comparison between the comments pages of a random video I clicked on.

Logged in:

[Screenshot: comments1]

Logged out:

[Screenshot: comments2]

As we can see, probably due to the lack of usable profile data in my Google+ account, the comments shown are the same whether I am logged in or logged out. The combative nature of YouTube comments shines through once more. It seems that no matter how sophisticated the algorithms in place, unless YouTube actively censors comments it will always fight an uphill battle against the culture that has developed within the YouTube comments universe.

In conclusion, suggestion algorithms like the one used by YouTube can enhance your user experience to a certain extent, provided that you are okay with sharing enough data about yourself. Given the problems associated with filter bubbles and privacy, however, at present I still prefer a carefully hand-picked subscription list to algorithmically derived suggestions.