
Recap: Week 12 – Final Lifestream Summary

The Education and Digital Cultures 2015 course has finally come to an end, and I would like to take this opportunity to look back at the last 12 weeks and thank my colleagues and my tutors Sian and Jeremy for this highly interesting journey into the land of cyborgs, algorithms and artificial intelligence.

Keeping a lifestream blog has been a new experience for me, and while it took some time, effort and, dare I say, frustration to set up correctly, over the duration of the course it has grown into an excellent collection of resources for me to come back to.

Adding tags to every post was an incredibly helpful way to organise my lifestream according to different parameters such as source, type or topic. The tag cloud on the right gives a nice visual representation of this endeavour and offers plenty of insight into the lifestream as a whole.

As we can see, IFTTT is the biggest tag, meaning that around three quarters of my posts were automatically populated from social media sites like Twitter, YouTube, Vimeo and Pinterest with the help of the IFTTT service.

In terms of types of content, I shared a variety of videos and articles that I thought were really interesting and relevant to the discussions we were having during the course. In addition to the weekly recaps and my comments on my peers’ blogs and my own, I also posted two how-to guides to help my fellow students set up their lifestreams. Furthermore, there are one-off postings like the digital artefact from block 1, the MOOC ethnography from block 2 and my reflections on the tweetorial from block 3 of our course.

I didn’t really know a lot about the topics covered in this course before the semester started. It was therefore a very pleasant surprise for me that the themes we discussed were extremely fascinating and that they challenged me to think about issues I had never considered before. I’ve been hearing about artificial intelligence all my life, but I had never actually contemplated how vast its implications are going to be, not just in the field of digital education but for humankind in general and society as a whole.

This course has made me think about what it means to be human in an age where the lines between biology and technology are being increasingly blurred, with biohackers substituting and even adding new senses to our biology. We are living in a time where computer algorithms are not just taking over more and more tasks that humans used to do (such as trading in the financial markets) but, thanks to big data, are now able to do things that weren’t even possible before, like personalised search results and video suggestions based on profiles of people similar to you.
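As a concrete illustration of that last point, here is a toy Python sketch of the idea behind “video suggestions based on profiles of people similar to you”, known as user-based collaborative filtering. The ratings matrix and numbers are entirely invented; real services work at vastly larger scale with far more sophisticated models:

```python
import numpy as np

# Toy sketch of "suggestions based on profiles of people similar to you"
# (user-based collaborative filtering). The ratings matrix is invented:
# rows are users, columns are videos, 0 means "not watched yet".
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],   # me
    [5.0, 5.0, 3.0, 1.0],   # a user with similar taste
    [1.0, 1.0, 5.0, 4.0],   # a user with different taste
])

def cosine(a, b):
    # Similarity between two taste profiles.
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

me, others = ratings[0], ratings[1:]
weights = np.array([cosine(me, other) for other in others])

# Predict my interest in each video as a similarity-weighted average
# of what the other users rated it.
predictions = (weights @ others) / weights.sum()

unwatched = np.where(me == 0)[0]
best = unwatched[np.argmax(predictions[unwatched])]
print(f"Suggested video: {best}, predicted rating {predictions[best]:.1f}")
```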

Technology does an exceptional job of connecting people, and I see a lot of potential in it to facilitate education for everyone, as shown by the development of MOOCs. Moreover, thanks to learning analytics, algorithms could uncover as yet unknown patterns in how the mind works. All this, however, comes at a hefty price: privacy. The more information we are willing to quantify about ourselves, the more we allow certain entities to know about us. If we don’t ever want that knowledge to be used against us, we have to become much more conscious of privacy and data security going forward. How we react to these issues will be one of the defining moments of the 21st century.

Recap: Week 11

Now that the taught section of the course has come to an end, my main objectives for the last two weeks are to clean up the lifestream blog for final submission and to come up with a research question for my final essay.

Although I hadn’t planned to add additional content this week, I couldn’t pass up the opportunity to share this excellent TED talk on YouTube by Stanford professor Fei-Fei Li on the newest advancements in machine learning, an overarching theme in this course and one of the most exciting topics I’ve learned about in a long time. Throughout this course I’ve been wondering where it will leave us humans and our human education once artificial intelligence surpasses our own. Might such a takeover happen before education (at least in its institutionalised form) even embraces some of the radical changes promised by Technology Enhanced Learning? Will there even be a need for education in a world where all important cognitive tasks are performed by sentient machines, or will education be an optional activity for those so inclined, similar to what learning a musical instrument is in today’s society?

I’m currently in the process of going through every post in my lifestream, fixing links, embeds and tags. Once I’m finished with that, I will post a final summary by the end of next week.

Recap: Week 10

Another week has gone by far too quickly, and looking at the content in my lifestream this week, the main theme of “putting it all together” seems rather fitting, as I’ve been collecting interesting material that covers not just block 3 on algorithmic cultures but topics from the whole course.

The first post this week was an incredibly well-made sci-fi short film called “Sight” that I saw on Vimeo, about how augmented reality and gamification might drastically change the way we live and interact with each other in the future.

Next I linked to an interesting article I found on Twitter, published in the International Business Times, that discusses the influence of content-curation algorithms and their inherent biases. It shows that people are often unaware of the algorithms working in the background, and when they learn about them they often exhibit quite “visceral” reactions, followed by a change in their behaviour to accommodate the algorithms.

Another great longform article I found on The Verge discusses the possibility that memories might be able to survive outside of the brain, which reminded me of the discussions we had when we explored posthumanism in the earlier weeks of the course.

The following post was an in-depth reflection on last week’s tweetorial, where I looked at what we can learn from tools like Tweet Archivist and Keyhole, which algorithmically analysed the conversation we had on Twitter.

This week a couple of new talks from the latest TED conference showed up in my YouTube feed, and one of them in particular caught my attention: a new talk by neuroscientist David Eagleman, whom I had previously written about in this post. While his main talking points were the same as in his previous video, he offered some new results that look very promising. His sensory substitution vest, for example, seems to work very well in teaching a deaf person to hear. I am still just as excited as the first time I heard about this research. Maybe sensory addition really is just around the corner.

Finally I linked to a short animated TED-Ed video on whether robots can be creative. The video explores algorithms that come up with pieces of music and then iteratively compare them with music that humans consider “beautiful”, discarding the patterns that do not match and keeping the ones that do. The results are remarkable, to say the least. To an outsider, the music these algorithms create sounds very much like it was composed by a human being.
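To get a feel for such a generate-compare-discard loop, here is a minimal sketch in Python. It is my own toy illustration rather than the video’s actual method: melodies are random note sequences, and a hard-coded target pattern stands in for the statistical comparison against music humans find beautiful:

```python
import random

# Minimal sketch of an evolutionary generate-compare-discard loop.
# NOTES and TARGET are toy assumptions, not anything from the video.
NOTES = list(range(60, 73))                  # MIDI pitches C4..C5
TARGET = [60, 62, 64, 65, 67, 69, 71, 72]   # stand-in for a "pleasing" pattern

def random_melody(length=8):
    return [random.choice(NOTES) for _ in range(length)]

def fitness(melody):
    # Toy comparison: fraction of notes matching the target pattern.
    # A real system would score against models of human-made music.
    return sum(a == b for a, b in zip(melody, TARGET)) / len(TARGET)

def mutate(melody, rate=0.2):
    return [random.choice(NOTES) if random.random() < rate else n for n in melody]

def evolve(generations=200, population_size=50):
    population = [random_melody() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]    # keep what matches
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + offspring                # discard the rest
    return max(population, key=fitness)

print(evolve())  # after enough generations, converges towards TARGET
```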

Now that the end of the course is drawing closer, it is time to turn my attention to the final assignment. In the meantime I would like to say thank you to our exceptional course tutors Sian and Jeremy, and to my wonderful colleagues, for the many thought-provoking and highly engaging discussions I’ve been blessed to be a part of over the last three months. :)

Reflections on the Tweetorial

Last week’s tweetorial was the first time I participated in a so-called “tweetstorm” on a topic. As I am not really an active Twitter user outside of this course, it was a new experience for me. Reflecting on it, I’ve noticed that the 140-character limit per tweet has very interesting and very real consequences for a discussion and for my own participation within it. Obviously, the limit causes people to express their opinions in very brief statements, which can leave room for interpretation. To counteract the limitation, one can keep sending out tweets to get one’s message across – in my mind a rather inefficient approach compared to other mediums such as blog posts, and one that will likely clog up the Twitter feed and potentially drown out other voices. Another way is to think hard about how to come up with an answer that is deliberately vague and open to interpretation yet still conveys meaning. I wasn’t too comfortable with overshadowing the conversation with too many messages (and I unfortunately couldn’t participate on Friday), but I tried to come up with messages that were appropriate for the medium.

[Image: tweet]

The Tweet Archivist and Keyhole analyses of our tweetorial show a wide spread in the number of posts people were willing to share. While I was on the lower end with 6 tweets, the top tweeter by far was my colleague PJ with 59 contributions.

[Image: pie chart of tweet counts]

As I was unavailable for most of Friday, I unfortunately missed the peak of the discussion.

[Image: timeline of tweet activity]

Once I came back, I felt overwhelmed by the fragmented nature of the tweets and retweets on a variety of topics. People hadn’t just stuck to the questions Sian and Jeremy had prepared but had branched off into other topics as well, such as learning to code – as seen in this keyword cloud:

[Image: keyword cloud of discussion topics]

Looking at the data sets that these analytical tools generate, I can’t help but question their value in terms of how they can help us learn. The word cloud is the best example of how data needs to be interpreted to create information, let alone to generate knowledge or wisdom. Atomising the conversation and displaying the frequency distribution of words visually might give an outsider a quick overview of the topics discussed, but it doesn’t seem to promote much learning. Perhaps analytic algorithms will in the future be able to extract the meaning of such conversations and assist the learner in getting the gist of them, but in their current state these analytical tools don’t seem to offer much value in terms of content.
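To illustrate the point, the “atomising” step behind a word cloud is little more than tokenising and counting, as this quick Python sketch shows (the sample tweets and stop-word list are invented):

```python
from collections import Counter
import re

# Sketch of the "atomising" behind a word cloud: split tweets into
# words and count their frequency. The sample tweets are invented.
tweets = [
    "Should everyone learn to code? #mscedc",
    "Learning analytics can only count what we choose to quantify #mscedc",
]

STOPWORDS = {"to", "can", "what", "we", "the", "a", "of", "only"}

words = Counter(
    w for tweet in tweets
    for w in re.findall(r"[a-z#]+", tweet.lower())
    if w not in STOPWORDS
)

# The raw counts are all a word cloud encodes; interpreting them
# into information, let alone knowledge, is left entirely to us.
print(words.most_common(5))
```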

There is, however, an interesting observation that we can glean from the analysis at a meta level: the social dynamics of the conversation.

[Image: chart of user mentions]

Compared to the tweet counts from earlier, we can see that Sian, even though she posted only half as many tweets as PJ, was addressed the most. As she is the tutor of this course this does not seem all too surprising, but considering the scale that learning analytics could be applied at, identifying such influencers might turn out to be valuable metadata.
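Surfacing this kind of social metadata only requires counting @-handles across the archive. Here is a hypothetical sketch (the tweets and handles below are invented placeholders, not our actual conversation):

```python
from collections import Counter
import re

# Hypothetical sketch of mining "who gets addressed" from a tweet archive.
# The tweets and handles are invented placeholders.
tweets = [
    "@tutor what would a MOOC without a teacher look like? #mscedc",
    "Agreed @tutor, though @colleague might disagree!",
    "@tutor @colleague could an algorithm ever grade this discussion?",
]

mentions = Counter(handle for tweet in tweets
                   for handle in re.findall(r"@\w+", tweet))

# Most-addressed users first – a crude proxy for influence.
print(mentions.most_common())
```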

Overall I have to say that the algorithmic snapshots of our tweetorial have not given me any particularly valuable insight that could significantly support me in learning from the tweets. Perhaps more context-aware algorithms will one day be able to better distill the meaning of such conversations. For now, the metadata, particularly regarding the social structure of the conversation, is the most useful output of these analytic tools.