This arrangement has been going on for about two years now, and we have not yet had a chance to evaluate what we have done. In essence, we have conducted a two-year experiment in using Twitter for science communication. A brief look around online suggests that #microtwjc is the longest-running, still-functional Twitter journal club. The original Twitter journal club (http://www.twitjc.com/) ran for two and a half years (June 2011 to December 2013), with a half-year break between June 2013 and December 2013 (other breaks were also taken). This makes #microtwjc a good candidate for getting something useful out of a study.
For the last few months we have been gathering as much data about #microtwjc as we can: the number of tweeters at each session, the number of tweets per tweeter per session, the papers discussed, the journals most often chosen, and so on. These data can be used to assess the worth of #microtwjc and its potential to improve in the future. They also serve as a model for understanding the use of Twitter and social media for science communication and public engagement with science.
In a series of posts I will try to analyse these data in the hope of finding some useful knowledge and wisdom about #microtwjc. You can view the data here.
A brief description of the methods used to get the numbers discussed in this post: 1) each session was logged in Storify or Topsy; 2) we logged on to each site and searched for each session by date, name, etc.; 3) we went through and counted the number of tweeters and the number of tweets each made. *This data collection was carried out by a number of the moderators of #microtwjc.* You know who you are. If any of you want to explore these data further, go and do it. It's all open.
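The counting itself was done by hand from the session logs, but step 3 is easy to automate. As a minimal sketch (assuming a hypothetical list of usernames, one entry per tweet, extracted from a Storify/Topsy log), the per-session tally could look like this in Python:

```python
from collections import Counter

def tally_session(tweet_authors):
    """Count tweets per tweeter for one session.

    `tweet_authors` is a list of usernames, one entry per tweet
    (hypothetical input; the real counts were collected by hand
    from the Storify/Topsy logs).
    """
    return Counter(tweet_authors)

# Toy session log, not real #microtwjc data
session_log = ["alice", "bob", "alice", "carol", "alice", "bob"]
counts = tally_session(session_log)

print(counts["alice"])        # tweets by one tweeter: 3
print(len(counts))            # number of distinct tweeters: 3
print(sum(counts.values()))   # total tweets in the session: 6
```

The same `Counter` can then feed the engagement filtering described below.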
First, I took the numbers and divided them up per session (fig. 1). What I quickly realised is that many tweeters tweeted only once, either advertising the session or apologising for not being there. These people were arguably not 'engaged' with the session, and as we are primarily interested in engagement I decided to clean the data by removing them from later analyses (posts to come). The engaged tweeters are shown in magenta (fig. 1). As expected, the average number of tweeters per session was higher when the non-engaged tweeters were included (~8) than for the engaged group alone (~6). The spread of tweeter numbers was also greater, mostly accounted for by session 1 (fig. 2, green).
|fig. 1. number of tweeters: total (green); 'engaged' tweeters, i.e. those tweeting more than once (magenta).|
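The clean-up rule is simply "keep anyone with more than one tweet in the session". A toy version of that filter (using made-up counts, not the real data) might look like:

```python
from collections import Counter

def engaged_tweeters(counts):
    """Keep only tweeters with more than one tweet in the session."""
    return {user: n for user, n in counts.items() if n > 1}

# Hypothetical per-tweeter counts for one session
counts = Counter({"alice": 5, "bob": 2, "carol": 1, "dave": 1})
engaged = engaged_tweeters(counts)

# carol and dave tweeted only once (e.g. advertising the session
# or apologising for absence) and so are dropped
print(sorted(engaged))  # ['alice', 'bob']
```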
Comparing the number of 'engaged' tweeters with the total number of tweeters for each session gives a picture of engagement across the sessions (fig. 3). This largely parallels fig. 2, but shows the pattern more clearly.
|fig. 3. percentage of tweeters who tweeted more than once per session (engaged) (orange).|
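The per-session engagement figure is just the engaged count as a percentage of the total. A short sketch, using hypothetical session numbers rather than the real #microtwjc counts:

```python
def engagement_percentage(total, engaged):
    """Percentage of tweeters in a session who tweeted more than once."""
    return 100.0 * engaged / total if total else 0.0

# Toy (total, engaged) pairs per session, not the real data
sessions = [(10, 7), (8, 6), (12, 6)]
for i, (total, engaged) in enumerate(sessions, start=1):
    print(f"session {i}: {engagement_percentage(total, engaged):.0f}% engaged")
```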
The hope is that these data can be compiled, analysed, uploaded to a data repository (figshare) and then used as the basis for a publication (in PLoS, for example). This process will be completely open and collaborative.