Information Research

Vol. 28 No. 4 2023

Editorial

With this issue we come to the end of the 28th year of publication of Information Research. The journal was founded at a time of significant change in the information world: the Internet and the World Wide Web were in the early stages of development, and the iPhone, the iPad, and social media had yet to be invented. This year has seen the emergence of generative artificial intelligence and, just as 1995 was a transition point in the move from print to digital, so this year is a transition point – in the move from services generated, still very largely, by a combination of human and computer, to a future of service delivery by human and AI system together. And, perhaps, before too long, by artificial intelligence acting alone.

I pasted that paragraph into Google Bard and asked it for an assessment. It replied:

“There are a number of reasons why AI could eventually be able to generate services on its own. First, AI is becoming increasingly sophisticated in its ability to process and understand information. This means that it can now learn from large amounts of data and make its own decisions about what to do with that data. Second, AI is becoming increasingly powerful in terms of its computational capabilities. This means that it can now handle more complex tasks and generate more sophisticated results.”

With Google’s recent announcement of Gemini, its more advanced large language model, which Bard will use in the future, AI-developed information services become even more likely. At present, the implications of this for the information professions are unclear, but one development is obvious from Google’s statement about the highest-performing version of Gemini:

“Ultra will now power a new code-writing tool called AlphaCode2, which Google claimed could outperform 85% of competition-level human computer programmers.” (Milmo, 2023).

I don’t imagine that this will put programmers out of jobs, at least not completely: some will always be needed to check the output of any coding AI system, and to input the correct prompts to generate the code. However, the resulting code will probably be generated in seconds rather than the days or weeks of human effort now required, and this must have an effect on the number of programmers needed.

Information retrieval is already changing, and one can guess that existing systems will fall into decline, to be replaced by the question-answering mode of generative AI. There is little point, for example, in using a search engine that delivers possible links to an answer to your question when ChatGPT or Google Bard can answer the question itself. I wanted to know, recently, how to transfer the contacts on my iPad to my iMac: Google gave me the usual output of helpful links, but Bard gave me instructions on how to do it. That capability will find its way into organizational systems designed to search internal documentation, and not only retrieval but also help systems and chatbots (which I rarely find helpful) may be transformed if, through generative AI, they are trained on an organization’s complete online documentation.

Information literacy training also faces a transition point, with a need, now, to address the appropriate use of generative AI and the ethical issues that arise. Such training will also have to cover new methods of assessment in education and how to satisfy those methods ethically.

Information behaviour research also has a new situation to deal with: we have seen a great deal of research on existing question-and-answer systems, some of which involve human input, but generative AI systems are now, in effect, generalised question-and-answer systems, probably far more powerful than those that came before. How people formulate prompts, the kinds of questions they ask, and how successfully those questions are answered are obvious research areas – and I imagine that researchers are already engaged in such work.

The implications of these developments for Information Research are yet to be fully understood, but it is obvious that we will be hospitable to any area of research involving generative AI systems, since they are engaged in manipulating information. We already have one paper on ChatGPT in process, and I imagine that we will have many more over the next few years.

I had intended to have the usual Editorial, with comments on the papers in the issue, but you can see these on the contents page and I’ve already filled a page! It simply remains for me to thank all of those engaged in work for the journal, referees, regional editors, and copy-editors, for their continued engagement with the journal.

Prof. Tom Wilson
Editor in Chief
December 2023

References

Milmo, D. (2023, December 7). Google says new AI model Gemini outperforms ChatGPT in most tests. The Guardian. https://www.theguardian.com/technology/2023/dec/06/google-new-ai-model-gemini-bard-upgrade