In my post on the Twitter Discovery timeline, I created a list of things that I thought Twitter might be counting as ‘interest’ categories. When I read my QuillConnect report, I initially felt it was reductive and not particularly helpful. Sian tweeted something similar:
— Sian Bayne (@sbayne) March 6, 2015
Reading PJ’s blog, however, inspired me to go back and try again.
To start, some numbers:
You have been a Twitter user for four years and you tweet more than most of your followers. You post 66 tweets a week while your followers average 7 per week. Further, you have 916 followers listening to you … You are in the 96th percentile of Twitter users measured by followers.
During MScEDC I’ve been tweeting more than normal–this week I tweeted 86 times; last week, over 150. The numbers are correct, as far as I can measure them. I was surprised that someone with fewer than 1000 followers was so influential, but it would explain why some accounts with just over 1000 followers were rated influential enough to appear alongside major platforms like The New York Times or The Economist on my Discovery timeline.
I have manually created my own ‘youloop’–most of the people I follow and those who follow me are interested in the same topics as me: “Your important topics match those most tweeted about by followers who are similar to you.” “The hashtags most often used by your followers similar to you have been #mscedc, #phdchat, and #talkhe.” This is unsurprising!
Second, issues of categorisation. Some of my categories are much more specific than QuillConnect’s (History, Writing, Academia probably all count as ‘Education’). I made no distinction between sources of interest, whereas QuillConnect lists ‘Entertainment, Arts, Music, Television, Celebrity’. On the other hand, I make a distinction between ‘Geek culture’, ‘Popular culture’, and ‘Writing’. QuillConnect does not list ‘Food’ as a category, which is surprising (though perhaps less so on Twitter than Instagram or Facebook).
Third, issues of sentiment. QuillConnect is interested in ‘positive’, ‘negative’ or ‘neutral’ language. My own coding was a content analysis of genre. However, it would not be hard to suggest that Diversity, Hard News, and Online Security are likely to skew negative, whereas Humour, Inspiration, and Visually attractive are likely to skew positive.
I therefore recoded my tweets to try to ‘reverse engineer’ my Discovery timeline analysis. I then compared them to the QuillConnect report.
According to QuillConnect, my most common topic is politics (8%). From my own content analysis, Twitter reads this as more likely to mean ‘diversity politics’ than ‘hard news politics’.
I regularly tweet about Education, and education tweets are most likely to be served to me in my Discovery timeline (both mobile and desktop).
Twitter thinks I’m more interested in Technology than Science, while QuillConnect thinks it’s the other way around. (I tend to agree with Twitter).
I tweet less than most people about Entertainment, Television, Music and Celebrity. As I don’t own a television and films give me migraines, this is not surprising. Discovery tends to agree–my tweets that did reference television tended to do so obliquely. A gif referencing Star Trek was really about technology, a tweet mentioning television marathons was really a joke about tenure (education), a tweet about HBO was more about structures of technology companies than the content of any shows.
I tweet about ‘Arts’ according to QuillConnect. Discovery suggests that this means sculpture, historical artifacts, photography, and literature.
My Twitter is regarded as ‘neutral’ by QuillConnect, and “for #mscedc, tweets containing the hashtag are predominantly neutral in tone.” When I re-coded my Discovery timeline, I counted 6 positive and 7 negative tweets on mobile (some of the tweets, for example, were humorous tweets about a negative situation, and so were counted twice). This suggests a balance. However, only 3 tweets in the Desktop timeline were truly neutral (neither positive nor negative), and none of the mobile tweets were.
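The double-counting in my re-coding can be sketched as a tiny script. The individual tweet labels below are hypothetical (my actual coding was done by hand), but the totals match my mobile count of 6 positive and 7 negative:

```python
from collections import Counter

# Hypothetical re-coding of the mobile Discovery timeline: each tweet
# can carry more than one sentiment label, so a humorous tweet about a
# negative situation is counted under both 'positive' and 'negative'.
coded_tweets = (
    [{"positive"}] * 5
    + [{"positive", "negative"}]  # humour about a bad situation: counted twice
    + [{"negative"}] * 6
)

# Tally every label across every tweet.
counts = Counter(label for tweet in coded_tweets for label in tweet)
print(counts["positive"], counts["negative"])  # 6 7
```

This kind of multi-label coding is exactly what a single ‘positive/negative/neutral’ score flattens out.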
So QuillConnect wasn’t wrong, but it was reductive. It was limited by a lack of detail that a human user or Twitter’s own algorithm seems able to deliver. The difference between ‘neutral’ meaning a balance of positive and negative, and ‘neutral’ meaning expressing neither, for example, is significant.
QuillConnect is unable to analyse sophisticated or contextual content. It suggests: “For example, the most retweeted of your followers have utilized #westconnex in their recent tweets.” This is because I follow a number of accounts campaigning against the East-West Link (an unpopular, expensive proposed toll road through Melbourne), and WestConnex is a similar road in Sydney. Some of the people I follow (who follow me back) are active campaigners against both roads. Me jumping on that bandwagon to increase my reach would be strange and inappropriate.
Twitter’s Discovery timeline algorithm, on the other hand, assumes I am more interested in historical artifacts, and more likely to click on an article about social justice or online security than on a campaign going on in my backyard. It is probably right.
The QuillConnect algorithm is still too far off, and therefore provokes a negative response.
Uncanny Valley, via Wikimedia. See http://www.androidscience.com/theuncannyvalley/proceedings2005/uncannyvalley.html
The ‘uncanny valley’ suggests that a ‘bunraku puppet’ is clearly non-human but would still elicit a ‘positive’ familiarity response.
Discovery on mobile is also somewhat creepy–these would not have been the tweets I picked out for myself as being the ones I’d like most to see. But the Discovery timeline (when it has enough data) is starting to climb out of the uncanny valley.
It is not yet human and I still probably won’t use it, but it didn’t annoy me much and its analysis of my interests was ‘close enough’.