It’s been an incredibly busy year, at work and at home, in professional circles and in the wider public, political arena. As a keen advocate of blogging as a key component of scholarly communications and the research life cycle, I’ve regretted being too busy (or too exhausted) to blog more frequently. As the academic term draws to an end and we approach the Christmas holidays, I feel I have a lot of engagement and dissemination work to catch up on. This post is one attempt at doing so.
I am very pleased to share that this year I joined the Centre for Human-Computer Interaction Design (HCID) at City, University of London. The Centre has a strong track record of research into accessible and interactive technologies and methods for people with disabilities, and into support for creativity in mental health care (particularly dementia care).
My own recent collaborative research has focused on Graphic Medicine, i.e. the study, design and delivery of creative, therapeutic and educational uses of graphic narratives (comics, cartoons) for mental health care provision and public engagement.
I am very pleased that my application for internal funding from the School was successful. This is a modest internal award to support activities and strategies to enhance the ‘public impact’ of HCID’s previous and ongoing research in these areas, focusing on recent academic outputs (2013-2016). Our proposal seeks to connect the dots between previous and ongoing work on dementia care and graphic medicine.
We will be organising knowledge exchange workshops with the participation of HCID researchers, mental health professionals, comics scholars and comics artists. The workshops will focus on the exploration, discussion, reuse and adaptation into comics of the dementia care best practice data collected and made available by the Care’N’Share project, which crowdsourced, curated and aggregated a significant dataset of case studies of best practices for dementia care (Zachos et al, 2013; Maiden et al, 2016).
Our ongoing study on ‘Graphic Medicine’ as a Mental Health Information Resource engaged members of the creative industries involved in creating and publishing comic books on mental health topics, as well as mental health care students and professionals, in partnership with the Tavistock and Portman NHS Foundation Trust (Priego & Farthing, 2016; Farthing & Priego, 2016). The research shows the need for further knowledge exchange between academics, those creating graphic medicine materials, mental health care practitioners and members of the public.
Our proposal seeks to address and respond to these findings through graphic medicine workshops and the creation of deliverables in comics form (print and online). Initially, we will host comics workshops at City, University of London between late February and April 2017. We will focus primarily on working together to explore and discuss the Care’N’Share dataset and the different ways in which the data can be adapted into comics form, leading to the creation, distribution and user testing of a professional comics publication, under the artistic direction of Dr Simon Grennan. We will be sending out public and personalised invitations to participate in the workshops and to provide feedback in early 2017.
The end users will be those interested in dementia care (carers, mental health professionals, patients, relatives, members of the public interested in comics and/or mental health). They will benefit by gaining knowledge about the best practices for dementia care collected, and from the affordances of graphic medicine to communicate these practices more widely and distribute them in an accessible form.
Carers and people with dementia, care homes and health trusts are logical beneficiaries of enhanced impact of dementia care research, but so is society at large: an estimated 750,000 people suffer from dementia in the UK alone. It is predicted that by 2051 dementia will affect “a third of the population either as a sufferer, relative or carer” (Zachos et al, 2013; Wimo and Prince, 2010).
Research shows that comics have the potential to have a positive impact on the health and quality of life of people who engage in comics creation (for example by participating in workshops) or reading (publications), helping to transform attitudes, awareness and behaviour around illness and to create new opportunities for empowerment and more positive behaviour (Cardiff University 2014).
Ours is a small initiative that seeks to make a contribution to enhancing the public impact of the best practice data resulting from research, by exploring and embracing the communicative affordances of graphic storytelling in general and graphic medicine in particular. We hope that by enabling stronger links between academia, dementia care practice and comics scholars and practitioners, we will be taking steps in the right direction.
Zachos, K., Maiden, N., Pitts, K., Jones, S., Turner, I., Rose, M., Pudney, K. & MacManus, J. (2013). A software app to support creativity in dementia care. Paper presented at the 9th ACM Conference on Creativity & Cognition, 17-20 June 2013, Sydney, Australia. http://openaccess.city.ac.uk/3837/
Maiden, N., Schubmann, M., McHugh, M., Lai, A.Y. & Sulley, R. (2016). Evaluating the Impact of a New Interactive Digital Solution for Collecting Care Quality Information for Residential Homes. Paper presented at the 30th British Human Computer Interaction Conference, 11-15 July 2016, Bournemouth, UK. http://openaccess.city.ac.uk/15127/
Priego, E. & Farthing, A. (2016). ‘Graphic Medicine’ as a Mental Health Information Resource: Insights from Comics Producers. The Comics Grid: Journal of Comics Scholarship, 6. doi: 10.16995/cg.74. http://openaccess.city.ac.uk/13441/. This research was presented at the Graphic Medicine Conference 2016, 7-9 July 2016, University of Dundee, UK.
Farthing, A. & Priego, E. (2016). Data from ‘Graphic Medicine’ as a Mental Health Information Resource: Insights from Comics Producers. Journal of Open Health Data, 4(1), e3. doi: 10.5334/ohd.25. http://openaccess.city.ac.uk/15251/
Traditionally, two main forms of metrics have been used to measure the “impact” of academic outputs: usage statistics and citations.
“Usage statistics” usually refers mainly to two things: downloads and page views (though they are often much more than that). These statistics are typically sourced from individual platforms through their web logs and Google Analytics. Beyond downloads and page views, platform administrators have collected data such as the operating systems and devices used to access content, and the landing pages of the most popular content. This data is often presented in custom-made reports that collate the different sources, and the methods of collection and collation vary from platform to platform and user to user. The methods of collection are not transparent and often not reproducible.
Citations, on the other hand, can be obtained from proprietary databases like Scopus and Web of Knowledge, or from platforms like PubMed (in the sciences), Google Scholar and CrossRef. These platforms have traditionally favoured content from the sciences (not the arts and humanities). Part of the reason is that citations are more easily tracked when content is published with a Digital Object Identifier (DOI), a term that remains largely obscure and esoteric to many in the arts and humanities. Citations also accrue slowly, and therefore take longer to collect. Again, the methods for their collection are not always transparent, and the source data is more often than not closed rather than open. Citations privilege more ‘authoritative’ content from publishers that possess the necessary infrastructure and whose content has been available for longer.
Altmetrics is “the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship” (Priem et al 2010). Altmetrics services normally employ APIs and algorithms to track and create metrics from activity on the web (normally social media platforms such as Twitter and Facebook, but also online reference managers like Mendeley and tracked news sources) around the ‘mentioning’ (i.e. linking) of scholarly content. Scholarly content is recognised by its having an identifier such as a DOI, PubMed ID, arXiv ID, or Handle. This means that outputs without these identifiers cannot be tracked and/or measured. Altmetrics are so far obtained through third-party commercial services such as Altmetric.com, PlumX and ImpactStory.
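As an illustration of how such identifiers are recognised, here is a minimal sketch (not any particular service’s actual implementation) of matching DOI-like strings in free text, using a pattern based on Crossref’s published guidance:

```python
import re

# A common pattern for matching modern DOIs in free text (based on
# Crossref's guidance); it catches the vast majority of DOIs issued
# since 2000, though not every legacy form.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b')

def find_dois(text):
    """Return all DOI-like strings found in a piece of text."""
    return DOI_PATTERN.findall(text)

sample = "Read our paper at doi: 10.16995/cg.74 and the data at doi:10.5334/ohd.25."
print(find_dois(sample))
# → ['10.16995/cg.74', '10.5334/ohd.25']
```

An altmetrics service applies this kind of recognition at scale, which is why outputs published without an identifier simply never enter the tracked pool.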
Unlike citations, altmetrics (also known as “alternative metrics”, or “article-level metrics” when usage statistics are included too) can be obtained almost immediately, and since in some cases online activity can be hectic, the numbers can grow quite quickly. Altmetrics providers do not claim to measure “research quality” but “attention”; they agree that the metrics alone are not sufficient indicators and that context is therefore always required. Services like Altmetric, ImpactStory and PlumX have interfaces that collect the tracked activity in one single platform (which can also be linked to via widgets embeddable on other web pages). This means that these platforms also function as search and discovery tools where users can explore the “conversations” happening around an output online.
The rise of altmetrics, and a focus on their role as a form or even branch of bibliometrics, infometrics, webometrics or scientometrics (Cronin, 2014), has taken place in the historical and techno-sociocultural context of larger transformations in scholarly communications. The San Francisco Declaration on Research Assessment (DORA, 2012) [PDF], for example, was developed with the participation of altmetrics tool developers, researchers and open access publishers, and makes the general recommendation not to use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual researcher’s contributions, or in hiring, promotion, or funding decisions.
The technical and cultural premise of altmetrics services is that if academics are using social media (web services such as Twitter and Facebook, whose interconnection is made possible by APIs) to link to (“mention”) online academic outputs, then a service “tapping” into those APIs would allow users such as authors, publishers, libraries, researchers and the general public to conduct searches across information sources from a single platform (in the form of a Graphical User Interface) and obtain results from all of them. Through an algorithm, it is possible to quantify, summarise and visualise the results of those searches.
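A toy sketch of that aggregation step, with invented per-source counts standing in for what a service might assemble after querying several platform APIs (no real API is queried here):

```python
from collections import Counter

# Hypothetical per-source mention counts for one scholarly output.
mentions = [
    {"source": "twitter", "count": 42},
    {"source": "facebook", "count": 5},
    {"source": "mendeley", "count": 17},
    {"source": "news", "count": 2},
]

def summarise(mentions):
    """Aggregate per-source counts into a summary dict plus a grand total."""
    by_source = Counter()
    for m in mentions:
        by_source[m["source"]] += m["count"]
    return dict(by_source), sum(by_source.values())

by_source, total = summarise(mentions)
print(by_source, total)
```

The headline “score” a service displays is, in essence, a weighted version of this kind of sum; the weighting schemes are proprietary, which is one more reason the numbers need context.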
The prerequisites for altmetrics involve a complex set of cultural and technological factors. Three infrastructural factors are essential:
Unlike traditional usage statistics, altmetrics can only be obtained if the scholarly outputs have been published online with Digital Object Identifiers or other permanent identifiers.
The online platforms that might link to these outputs need to be known, predicted and located by the service providing the metrics.
Communities of users must exist using the social media platforms tracked by altmetrics services linking to these outputs.
The scholarly, institutional, technological, economic and social variables are multiple and platform and culture-dependent, and will vary from discipline to discipline and country to country.
As open access mandates and the REF make “impact” case studies more of a priority for researchers, publishers and institutions, it is important to insist that any metrics and their analysis, provided by either authors, publishers, libraries or funding bodies, should be openly available “for reuse under as permissive a license as possible” (Dalmau, Scherer and Konkiel).
Arts and Humanities
If altmetrics are to be used in some way for research assessment, the stakeholders involved in arts and humanities scholarly publishing need to understand the technical and cultural prerequisites for altmetrics to work. A series of important limitations justify scepticism towards altmetrics as an objective “impact” assessment method. A bias towards Anglo-American and European sources, as well as towards STEM disciplines, casts a shadow on the growth of altmetrics for non-STEM disciplines (Chimes, 2014). Many academic journals, particularly in the arts and humanities, have yet to establish a significant, sustainable online presence, and many still lack DOIs to enable their automated and transparent tracking.
At their best, altmetrics tools are meant to encourage scholarly activity around published papers online. It can seem, indeed, like a chicken-and-egg situation: without healthy, collegial, reciprocal cultures of scholarly interaction on the web, mentions of scholarly content will not be significant. Simultaneously, if publications do not provide identifiers like DOIs, and authors, publishers and/or institutions do not perceive any value in sharing their content, altmetrics will again be less significant. Altmetrics can work as search and discovery tools for scholarly communities around academic outputs on the web, but they cannot and should not be thought of as unquestionable proxies for either “impact” or “quality”. The value of these metrics lies in providing us with indicators of activity; any value obtained from them can only be the result of asking the right questions, providing context and doing the leg work: assessing outputs in their own right and in their own context.
Libraries could do more to create awareness of the potential for altmetrics within the arts and humanities. The role of the library, through its Institutional Repository (IR), in encouraging online mentioning and the development of impact case studies should be revisited, particularly if ‘Green’ open access is going to be the mandated form of access. Some open access repositories are already using them (City University London’s open access repository has had Altmetric widgets for its items since January 2013), but the institution-wide capabilities of some of the altmetrics services are fairly recent (Altmetric for Institutions was officially launched in June 2014). There is much work to be done, but the opportunity for cultural change that altmetrics can contribute to seems too good to waste.
The file contains approximately 31,855 unique Tweets published publicly and tagged with #REF2014 during a 12-day period between 08/12/2014 11:18 and 20/12/2014 10:13 GMT.
For some context and an initial partial analysis, please see my previous blog post from 18 December 2014.
As always, this dataset is shared to encourage open research into scholarly activity on Twitter. If you use or refer to this data in any way please cite and link back using the citation information above.
As everyone with any awareness of UK higher education knows, the results of REF 2014 were announced in the first minute of 18 December 2014. Two main hashtags have been used to refer to it on Twitter: #REF and the more popular (“official”?) #REF2014.
There have been, of course, other variations on these hashtags, including discussion about not hashtagging the term REF at all. Here I share a quick first look at a sample corpus of texts from Tweets publicly tagged with #REF2014.
This is just a quick update of a work in progress. No qualitative conclusions are offered, and the quantitative data shared and analysed is provisional. Complete data sets will be published openly once the collection has been completed and the data has been further refined.
I looked at a sample corpus of 23,791 #REF2014 Tweets published by 10,654 unique users between 08/12/2014 11:18 GMT and 18/12/2014 16:32 GMT.
The sample corpus only included Tweets from users with a minimum of two followers.
The sample corpus consists of 1 document with a total of 454,425 words and 16,968 unique words.
The range of Tweets per user varied between 70 and 1, with the average being 2.3 Tweets per user.
Only 8 of the total of 10,654 unique users in the corpus published between 50 and 80 Tweets; 30 users published more than 30 Tweets, with 9,473 users publishing between 1 and 5 Tweets only.
6,585 users in the corpus published one Tweet only.
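The per-user figures above can be derived from a tweet archive with a simple frequency count; here is a minimal sketch over a toy corpus (the usernames and texts below are invented, not drawn from the actual dataset):

```python
from collections import Counter

# Hypothetical (username, tweet_text) rows standing in for the #REF2014 corpus.
tweets = [
    ("alice", "REF results are out"),
    ("alice", "More REF analysis"),
    ("bob", "One tweet only"),
    ("carol", "Tweet 1"),
    ("carol", "Tweet 2"),
    ("carol", "Tweet 3"),
]

# Count how many tweets each user published.
per_user = Counter(user for user, _ in tweets)

unique_users = len(per_user)
most, least = max(per_user.values()), min(per_user.values())
average = len(tweets) / unique_users
single_tweeters = sum(1 for n in per_user.values() if n == 1)

print(unique_users, most, least, average, single_tweeters)
```

Run over the real archive, this yields exactly the figures reported: the tweet range per user, the average, and the count of users who tweeted only once.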
A Quick Text Analysis
Voyant Tools was used to analyse the corpus of 23,791 Tweet texts. A customised English stop words list was applied globally. The most frequent word was “research”, repeated 8,760 times in the corpus; it was then added to the stop-word list (as, logically, was #REF2014).
A word cloud of the whole corpus using the Voyant Cirrus tool looked like this (you can click on the image to enlarge it):
#REF2014 Top 50 Most frequent words so far
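The frequency analysis behind the cloud can be sketched in a few lines; the toy corpus and stop-word list below are illustrative, not the actual Voyant configuration:

```python
from collections import Counter
import re

# Toy corpus standing in for the 23,791 tweet texts; the stop-word list
# mirrors the idea of the customised list used in Voyant.
corpus = "research impact excellent research impact case study research #ref2014"
stop_words = {"research", "#ref2014", "the", "a", "of"}

# Tokenise (keeping hashtags), lower-case, and drop stop words.
words = re.findall(r"[#\w]+", corpus.lower())
frequencies = Counter(w for w in words if w not in stop_words)

print(frequencies.most_common(3))
```

The same pipeline (tokenise, normalise case, filter stop words, count) is what produces the top-50 list and the Cirrus word cloud.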
The map is not the territory. Please note that both research and experience show that the Twitter search API isn’t 100% reliable. Large tweet volumes affect the search collection process. The API might “over-represent the more central users”, not offering “an accurate picture of peripheral activity” (González-Bailón, Sandra, et al. 2012). It is not guaranteed that this file contains each and every Tweet tagged with the archived hashtag during the indicated period. Further deduplication of the dataset will be required to validate this initial look at the data, and it is shared now merely as an update of a work in progress.
[Updated. I replaced the spreadsheet on figshare twice as a couple of publisher names had to be corrected. This left a version with 101 unique publisher names; note that some might still be subsumable under other publisher names in the set.
I have also corrected the first bar chart and added two more to this post. Please bear in mind there might still be errors in the source data. The spreadsheet, write-up and charts are shared “as is” and “as available”; the information presented reflects the data as manually curated and refined in the latest dataset version at http://dx.doi.org/10.6084/m9.figshare.966427.
This means that the number of publishers, and the costs and outputs associated with each publisher, will depend on how the Publisher field has been refined; other quantifications and visualisations of the original dataset, or of other versions refined differently, are therefore likely to differ. I do not work for, nor am I currently associated with, the Wellcome Trust or any of the publishers mentioned here; these are not “official” figures. They are openly shared here as research work in progress and should be taken in that spirit.]
In March 2014 the Wellcome Trust released a dataset via figshare giving information on their funding of Article Processing Charges in 2012/13.
The dataset includes all papers for which the Trust is aware of having paid an APC.
Cameron Neylon subsequently shared a dataset on figshare (and GitHub) with some of the inconsistencies refined.
I worked with his version of the dataset and manually refined inconsistencies in the Publisher field (same publishers appeared under different names and spellings and other text formatting issues). I did not refine the journal titles.
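A minimal sketch of that kind of reconciliation: variant spellings of the same publisher are mapped onto one canonical name. The variants below are invented for illustration; the actual refinement was done manually on the real dataset:

```python
# Illustrative lookup table of variant spellings -> canonical publisher name.
CANONICAL = {
    "wiley": "Wiley",
    "wiley-blackwell": "Wiley",
    "john wiley & sons": "Wiley",
    "elsevier": "Elsevier",
    "elsevier ltd": "Elsevier",
}

def normalise_publisher(name):
    """Map a raw publisher string onto its canonical form where known."""
    key = name.strip().lower()
    return CANONICAL.get(key, name.strip())

raw = ["Wiley-Blackwell", "ELSEVIER", "Elsevier Ltd ", "PLOS"]
print([normalise_publisher(n) for n in raw])
# → ['Wiley', 'Elsevier', 'Elsevier', 'PLOS']
```

Whether two names get merged (is “Elsevier Ltd” the same entity as “Elsevier”?) is a judgement call, which is exactly why differently refined versions of the dataset will yield different publisher counts.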
I also focused on 11 publishers from the dataset and obtained totals as well as their maximum and minimum APCs.
Total amount paid in APCs according to the dataset: £3,884,787.52
Highest APC in the dataset: £13,200.00, for the monograph ‘Fungal Disease in Britain and the United States 1850-2000’ (Palgrave Macmillan)
Highest APC payment for an article in the dataset: £6,000.00 for ‘Laboratory Science in Tropical Medicine’, in the Public Service Review journal.
Lowest APC in the dataset: £45.94 for the journal article ‘The association between breastfeeding and HIV on postpartum maternal weight changes over 24 months in rural South Africa’ in an American Society for Nutrition journal.
APC average (excluding the £13,200.00 charge for the monograph): £1,820.01
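The summary figures above were derived from the refined spreadsheet; here is a sketch of the same calculations over invented toy records (the publishers and amounts below are illustrative, not the actual data):

```python
# Hypothetical APC records (publisher, amount in GBP) standing in for the
# refined Wellcome Trust dataset.
apcs = [
    ("Palgrave Macmillan", 13200.00),  # a monograph charge, the outlier
    ("Elsevier", 2000.00),
    ("Wiley", 1500.00),
    ("PLOS", 1000.00),
]

total = sum(amount for _, amount in apcs)
highest = max(apcs, key=lambda r: r[1])
lowest = min(apcs, key=lambda r: r[1])

# Average excluding the monograph outlier, as in the write-up above.
articles = [amount for _, amount in apcs if amount != 13200.00]
average = sum(articles) / len(articles)

print(total, highest[0], lowest[1], average)
# → 17700.0 Palgrave Macmillan 1000.0 1500.0
```

Excluding the outlier before averaging matters here: a single monograph charge several times the typical article APC would otherwise pull the mean up noticeably.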
With many thanks to Cameron Neylon.
Hopefully this helps in some way to provide a quicker idea of the average cost of APCs from the major for-profit publishers.