The June thematic issue appears to have been quite a success: it was picked up by a number of blogs and had numerous social media mentions. It was probably this latter phenomenon (plus the fact that all of the papers went into our Flipboard magazine, News on e-books, which is now read by more than 35,000 subscribers) that resulted in my own paper having more than 3,000 hits in the first month of publication. I suspect that many who read that paper will never read anything else in the journal (except perhaps other papers on e-books), but to have them discover it at all is a bonus.

After the thematic issue of June, we are back in September with what we might call a 'normal' issue: i.e., no thematic papers, no conference papers, but the usual submissions to the journal. There are eleven papers in all, with a total of twenty-seven authors, from fourteen countries: Australia, Belgium, Canada, Chile, Denmark, Finland, Italy, The Netherlands, Poland, Slovenia, South Africa, Spain, the UK, and the USA. Given the numbers, it is obvious that some papers are authored by 'international teams'; in fact, one paper has six authors from five countries, so it seems that international collaboration, fostered in this case, as with so much more, by the European Union, is thriving.

It may interest readers to know how the papers get to this stage. First, I check all submissions, and I can illustrate what happens here by reference to the thirteen submissions I examined this week: five were rejected as being outside the scope of the journal, too weak, lacking any real interest, or more properly directed to a different journal; three were returned to the authors because the journal's instructions for authors had not been observed; one was within the scope of the journal but might be more usefully published in a journal devoted to the subject area examined bibliometrically; and four were passed on to the Associate Editors to go through the review process. We use a double-blind review process with two referees (sometimes three) for each paper, and this usually results in a recommendation for the author either to make substantive changes and resubmit, or to make minor changes which the Editor or Associate Editor can review and approve.

Once approved, the paper goes through two phases of copy-editing before the html version is produced: this is usually done by the author, but can be done for a fee through the journal—something which authors are increasingly opting to do, since they can usually recoup the cost through their institution's support for publication in open access journals. The html version is then checked either by myself or by one of the volunteer layout editors (there are only three such, so I get to do most of them!) and I do a final copy-edit and check of the references. The final preparation of the paper for publication involves checking that all internal links work; validating the code with the W3C html validator; inserting the counter and the search terms for the search engines at the bottom of the page; checking that the link to Google for future citations is correct; entering the paper in the author index and the subject index; listing everything on the contents page and, now, the RSS feed; and uploading and then archiving everything to WebCite. Not surprisingly, I have a workflow spreadsheet for each issue to ensure that all of the steps are completed for each paper.
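As an aside for the technically curious: the first of those checks, that all internal links on a page resolve, is easily mechanised. The sketch below is purely illustrative (it is not the journal's actual tooling) and uses Python's standard html.parser to compare the fragment targets a page links to against the anchor ids it actually defines.

```python
from html.parser import HTMLParser


class LinkChecker(HTMLParser):
    """Collects fragment links (href="#...") and anchor targets from one HTML page."""

    def __init__(self):
        super().__init__()
        self.fragments = set()  # targets referenced by internal links
        self.targets = set()    # targets defined via id= or name= attributes

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "href" and value and value.startswith("#"):
                self.fragments.add(value[1:])
            elif name in ("id", "name") and value:
                self.targets.add(value)


def broken_internal_links(html: str) -> set:
    """Return fragment targets that are linked to but never defined on the page."""
    checker = LinkChecker()
    checker.feed(html)
    return checker.fragments - checker.targets
```

For example, feeding it a page where a citation links to `#ref2` but no element carries that id would return `{'ref2'}`, flagging the dangling link before publication.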

The papers in this issue

The eleven papers in this issue that have survived that process are a diverse set—so much so that it is very difficult to create groups; however, three papers deal with some aspect or other of searching for information. First, Lori McCay-Peet and her colleagues report on Development and assessment of the content validity of a scale to measure how well a digital environment facilitates serendipity. The information searching literature has very little in the way of scale development, because of the preference researchers have for qualitative methods, and the development of a rating scale is clearly positivist, so it is good to see some attention turning in this direction. Secondly, Heather O'Brien of the University of British Columbia, and her collaborators, in What motivates the online news browser? News item selection in a social information seeking scenario investigate the discovery of news that might support social interaction. The motivation to search, of course, underlies every attempt to discover information, and here uses and gratifications theory underpins the research in a very effective way; even so, the authors recognize its limitations and suggest other theoretical frameworks, particularly social cognitive theory, that might provide further insights. Finally, of these three, Rebekah Willson and Lisa M. Given use the ideas of mental models and 'self-concept' in Student search behaviour in an online public access catalogue. The authors point out that both of the theoretical concepts are mental representations and that the effect of both on search behaviour needs further research.

We then have a couple of papers that deal with information behaviour in a more general sense than searching: Barbara Niedźwiedzka and her five collaborators in a European Union project report on Determinants of information behaviour and information literacy related to healthy eating among Internet users in five European countries. This was a massive quantitative study involving 3,003 respondents across five countries. One of the findings demonstrates how ubiquitous the technology now is: in each of the five countries, the main source of information on healthy eating is the Web, obtained through the use of a search engine. Not so long ago, I suspect that the main source would have been family members and the family doctor. Constance Bitso and Ina Fourie present a paper on Information-seeking behaviour of prospective geography teachers at the National University of Lesotho, in which one of the more startling findings (when we compare with a developed country) is that the prospective teachers used any technological means of communication either infrequently or not at all during their teaching practice, e.g., 70% never used the Internet. Clearly, there is a long way to go before such countries can emulate the developed world.

Two papers relate to the take-up of information systems and technology: Patricio Esteban Ramírez-Correa and colleagues provide An empirical analysis of mobile Internet acceptance in Chile, using the unified theory of acceptance and use of technology. The findings suggest that the take-up of the mobile Internet is more complex than even this complex theoretical framework can explain, since six factors explain only 29% of the variance in the relevant variable. Either many more factors are needed to explain the phenomenon completely, or there is some key factor which is not included in the model. Secondly, Nicolai Pogrebnyakov and Mikael Buchmann discuss The role of perceived substitution and individual culture in the adoption of electronic newspapers in Scandinavia. This is another well-grounded quantitative investigation, with 1,804 respondents from Denmark, Norway and Sweden, using Hofstede's well-known cultural variables. One of the authors' conclusions is:

The results suggest that while the intent to use electronic newspaper is driven by its perceived usefulness, this usefulness is in turn influenced by the significant perceived substitution effect. In other words, whether a new technology is seen as a good substitute for the existing one is a precursor to how useful it is perceived to be. This suggests that perceived usefulness may not be the ultimate determinant of the intent to use technology and the comparative benefits of the new technology compared to the existing one may be precursors to the perceived usefulness of the new technology.

Finally, we have four papers that are not associated with one another in any way: Olha Buchel and Kamran Sedig present a study on Making sense of document collections with map-based visualisations: the role of interaction with representations; Ann Gillespie reports on Untangling the evidence: introducing an empirical model for evidence-based library and information practice; Kevin McCormack and Peter Trkman discuss The influence of information processing needs on the continuous use of business intelligence; and, finally, Ronald Snijder investigates Modes of access: the influence of dissemination channels on the use of open access monographs. Most research into open access deals with the journal literature, so this exploration of the world of monographs, which are still central to research in the humanities and social sciences, is welcome.


My thanks to my colleagues in the University of Murcia, José-Vicente Rodríguez Muñoz and Pedro Díaz, who prepare the abstracts in Spanish, and to the Associate Editors, copy-editors and layout editors who help to keep the journal alive. You can read about them here.

Professor Tom Wilson, Publisher/Editor-in-Chief
August, 2014