Vol. 9 No. 2, January 2004

 

The Library Visit Study: user experiences at the virtual reference desk

Kirsti Nilsen
Faculty of Information and Media Studies
The University of Western Ontario
London, Ontario, Canada



Abstract
This paper discusses the methodology and reports on initial findings of a study examining the perceptions of users of digital reference services. It is part of a long-term research project, The Library Visit Study, which has been conducted in three phases at the University of Western Ontario for more than a decade. Phases One and Two examined perceptions of users who approached physical reference desks in libraries with reference questions. Phase Three of the research considers reference encounters at virtual reference desks and compares users' experiences at the physical reference desk with experiences at the virtual reference desk. The findings suggest that, from the viewpoint of the enquirer, the virtual reference desk suffers from the same problems as the physical reference desk: inadequate reference interviewing, referral to alternative sources without a subsequent check on their suitability, and a lack of follow-up to determine satisfaction in general.


Introduction

For more than a decade, researchers at the University of Western Ontario have conducted a long-term research project that examines the perceptions of users who ask reference questions in libraries. This research project, which is called the Library Visit Study, has been conducted in three phases. Phase I of the study considered perceptions of users who approached physical reference desks in libraries with reference questions between 1991 and 1993. Findings were published in a number of articles by Ross and Dewdney in the 1990s (Dewdney & Ross, 1994; Ross & Dewdney, 1994, 1998). Phase II compared the earlier research with more recent user perception data gathered between fall 1998 and spring 2000 and also examined the extent to which digital reference sources and the Internet were used to answer reference questions asked at the physical reference desk. The first report of Phase II findings was published by Ross and Nilsen (2000). Findings from these and other studies have been published in a book entitled Conducting the reference interview (Ross, et al., 2002). Chapter Six of the book discusses the reference interview in the electronic environment.

Because of a lack of empirical data comparing user perceptions of the interview experience in the physical and virtual environments, the focus of the Library Visit Study has expanded. While continuing to examine reference encounters at physical reference desks, Phase III of the research is also looking at reference encounters at virtual reference desks. This phase of the research is designed to elicit users' perceptions of their experiences when asking questions using either e-mail or chat services. This ongoing study also compares the data on users' experiences at the physical reference desk with experiences at the virtual reference desk. Phase III of the Library Visit Study began in 2003 and is currently under way. This paper discusses the research method used and presents initial findings.

Defining the terms

Information sources in electronic formats have been made available for many years by libraries and other information providers. However, library reference services that allow users to ask questions in the digital environment are a relatively new phenomenon. The terms used to describe such services vary widely, including, among others, virtual reference, digital reference, electronic reference, remote reference, and real-time reference.

Definition of the service also varies. The definition used here is that of the Machine Assisted Reference Section (MARS) committee of the American Library Association (ALA), which is preparing guidelines for implementing and maintaining virtual reference services:

Virtual Reference is reference service initiated electronically, often in real-time, where users employ computers or other Internet technology to communicate with librarians, without being physically present. Communication channels used frequently in virtual reference include chat, videoconferencing, Voice over IP, e-mail and instant messaging. While online sources are often utilized in provision of virtual reference, use of electronic sources in seeking answers is not of itself virtual reference. Virtual reference queries are often followed-up by telephone, fax, and regular e-mail, even though these modes of communication are not considered virtual. (American Library Association, 2003)

The term 'virtual reference' as defined above is used throughout this paper. The term 'chat' in this paper generically covers all real time, synchronous services, including instant messaging. The term e-mail refers to e-mail used for virtual reference purposes (that is, to ask and answer reference questions). Other e-mail messages are identified as 'regular e-mail'. The terms 'physical reference desk' and 'virtual reference desk' are usually abbreviated to PRD and VRD.

Review of the literature

The literature on virtual reference is exploding, as evidenced by the number of items added to an online 'digital reference service bibliography' (Sloan, 2003). While e-mail reference services have been available at some libraries since the mid-1980s, the number of users of e-mail reference has been limited (Goetsch, et al., 1999; Gray, 2000). Until 1999, most libraries reported quite low use, though anecdotal evidence, reported on the Dig_Ref discussion list, suggests that use of e-mail reference services has been holding its own and even growing in the last three or four years. Beginning in the late 1990s (Sloan, 2001), real-time chat services have been introduced to either supplement or replace existing e-mail reference services at a number of public and academic (college and university) libraries. The Association of Research Libraries (Ronan & Turner, 2003) surveyed its members in 2002; of the 62 (53%) respondents, 54% reported that they offered chat services. According to the literature reviewed here, the use of such services is not high: Ruppel and Fagan (2002) report 9.5 questions per day, Sears (2001) indicates 9.6 chat sessions per week, while Kibbee, et al. (2002) had 'over 600' transcripts for a twelve-week period. The number of hours such services are open will, of course, affect the number of questions received. Because of improved software capabilities and the popularity of chat technology with younger users, it is anticipated that the use of this type of electronic service will grow quickly (Breeding, 2001; Francoeur, 2000).

Reports on the types of users and the types of questions asked are readily available (Cunningham, 1998; Diamond & Pease, 2001; Granfield, 2002; Sears, 2001). There have not yet been many studies of success in answering questions. However, Kaske and Arnold (2002) describe an unobtrusive study in which 12 questions were posed both to chat services and e-mail services at a random sample of 36 libraries. Success rates (correctly answered questions) were higher for e-mail services (59.8%) than for chat services (54.8%).

The research described in this paper is concerned with user perceptions of reference service received, specifically with respect to reference transactions using e-mail or chat services, and compares those with perceptions of in-person transactions. The literature on user perceptions of virtual services generally appears in case studies describing and evaluating the services offered by individual university libraries. User satisfaction data included in these studies are typically obtained using online questionnaires that pop up at the end of a chat session. User satisfaction rates in such studies are high. Foley (2002) noted that 45% of respondents to the University of Buffalo library's questionnaire reported being 'very satisfied', while 79% were 'satisfied' or better, and she notes that patron comments regarding the chat service were 'unexpectedly positive and very rewarding'. At Carnegie Mellon University, Marsteller and Neuhaus (2001) noted that of 78 respondents, 58 indicated that they received the information they needed, 12 indicated that they received 'partial' information, and only eight indicated that they did not receive the information they needed. Sixty-nine of the 78 respondents said that they would use the service again. Kibbee, et al. (2002) at the University of Illinois at Urbana-Champaign also reported high satisfaction rates, noting that "...[n]early 90 per cent of the respondents reported the completeness of the answer to their question was very good or excellent. Nearly 85 per cent found the service easy to use and would use it again." Response times for both e-mail and chat services are often described, and most authors note that using chat is more time-consuming for librarians than are in-person transactions. Time spent using chat services can vary widely; Kibbee, et al. (2002) report an average of 9.8 minutes per session, with a range from 40 seconds to 58.5 minutes.

Virtual reference in public libraries is relatively new and evaluations of user perceptions of these services are few and far between. A number of projects can be identified on the Web, and evaluation plans are described, but results are not yet available. An evaluation that includes public libraries is provided by Saskatchewan Libraries, a multitype consortium of public, school, academic and special libraries in that province. The evaluation of the Saskatchewan Libraries: Ask Us! Pilot Project indicated that 34 of 39 feedback messages were positive and five related to temporary technical difficulties (Saskatchewan Libraries, 2003).

User comparisons of PRD and VRD

Ruppel and Fagan (2002) set out specifically to compare user perceptions of PRD and VRD experiences at Southern Illinois University using two surveys. A short survey about the chat (instant messaging (IM)) service appeared immediately after patrons disconnected from the service, resulting in 340 completed surveys. A long survey (15 questions) was distributed to students in a library skills course, with 52 questionnaires completed. Ruppel and Fagan write that patrons were 'overwhelmingly positive' about the service. 'Of [the 340 respondents] 82 percent said IM reference is a 'very' good method of getting help, while 7 percent said it was a fairly good method of getting help.' Answers received were judged to be 'very' helpful by 82 percent and 12 percent said the answers were 'somewhat' helpful.

The long survey administered by Ruppel and Fagan allowed the 52 respondents to evaluate both the virtual reference and the physical reference services. These students were also very positive about the chat (IM) service. When asked why they usually do not ask for help at traditional library help desks, the responses were similar to those long reported in the literature. Twenty-nine percent noted that staff 'did not look like they want to help or they look too busy', 23 percent said they felt stupid for not knowing already, and 17 percent did not want to bother going to the library building, while 10 percent did not think the person at the desk would know the answer. Another reason for not using the physical reference desk, identified by 23 percent of respondents, was that they did not want to get up from the computer (Ruppel & Fagan, 2002). This last reason supports other findings that show that many of the questions come from on-campus locations and even from patrons who are sitting in the library at workstations (see Foley, 2002).

Advantages and disadvantages of virtual reference desk (VRD) versus physical reference desk (PRD) service were identified by respondents to the long survey used in the Ruppel and Fagan (2002) study. The advantage of the VRD most frequently identified by respondents was that they did not have to get up from the computer; other advantages included anonymity and speed of responses. The leading advantage of the PRD was the personal touch: respondents noted the one-on-one, person-to-person service and direct help provided (Ruppel & Fagan, 2002).

All of the studies of user perceptions identified above were conducted at university libraries. A search of the literature found no such studies conducted in public libraries. In addition, other than the Ruppel and Fagan study, there are few studies of user perceptions that provide any comparative data for PRD services and VRD services. The study discussed below provides user perceptions of both public and university library services and provides comparative data for the physical and the virtual reference environments and services.

Methodology

Many aspects of virtual reference can be examined. This paper does not consider the types of question asked, or how 'correct' the answers were. User evaluations of the library interface and of the access provided to the service are gathered but are not analyzed in this paper. Here we are interested in user satisfaction with the service provided. Because the reference interview, monitored referrals, and follow-up are important to the success or failure of reference transactions, the extent to which these behaviours were evident in user accounts is also reported.

As noted in the literature review, most examinations of user perceptions of digital reference conducted to date have used surveys, usually administered at the point of contact (i.e., pop-up questionnaires) or, as with Ruppel and Fagan, by means of long questionnaires in a classroom setting. While surveys are useful for determining user perceptions, they can be problematic. Kibbee, et al. (2002) warn that their results are 'skewed by the fact that the online survey only went to users who completed a session and did not reach users who terminated sessions or were unable to connect.' It has been noted that many users of these services do not complete the questionnaires. Of 600 sessions analyzed by Kibbee, et al., only 130 provided completed questionnaires. Marsteller and Neuhaus (2001) based their data on logs of the sessions in the first seven months of the service, and only 20% of the logs contained completed questionnaires. Foley (2002) appears to have had more success, with only 11 (of 262) failing to indicate any level of satisfaction or dissatisfaction. She notes that response was encouraged by offering a chance to win a $25 gift certificate to an entertainment store near campus. The satisfaction information in the Saskatchewan Libraries (2003) evaluation notes that the 39 feedback messages received account for only 6% of users. Pop-up questionnaires do not reflect users who have disappeared during the electronic transaction; unhappy or annoyed users may not bother to fill in the questionnaires, while students completing questionnaires in a class might not be unbiased. The research described here used another approach, one that has worked well in examining physical reference desk encounters.

Phase III of the Library Visit Study is using the same method for gathering data that was used in the previous phases of the research. The study originally developed from a teaching exercise designed to help beginning reference students understand what it feels like to be a library user. Students in a first term MLIS reference course are required to approach a physical reference desk in a Canadian library of their choice and ask a question of interest to them unrelated to their course of study. We use the assignment because we want beginning librarians to have a vivid sense of the experience of being a user. For this assignment, students do three things:

  1. They produce a detailed step-by-step account of exactly what happened in the reference transaction.
  2. They reflect on their experience by summarizing which aspects of their experience they found helpful and which aspects they found unhelpful.
  3. They complete a questionnaire evaluating their experience as a user of the reference service.

In Phase III of this research, students in consecutive offerings of an advanced course in information services are asked to approach a virtual reference desk provided by a Canadian library with a question that interests them. They may choose virtual reference desks at university or public libraries and they may use e-mail or chat services. The data are collected in the same way as in earlier phases of the study, that is, by written accounts and completed questionnaires that mirror those used in earlier phases of the research (with minor changes in wording reflecting the virtual environment).

The written reports provide qualitative data on user perceptions, while the questionnaires provide for quantitative data comparisons. In addition, students submit copies of e-mails exchanged or transcripts of chat sessions, allowing for objective consideration of the reference transaction.

Accounts of 261 Phase I and Phase II library visits to physical reference desks were analyzed and reported in the literature (Dewdney & Ross, 1994; Ross & Dewdney, 1994, 1998; Ross & Nilsen, 2000). Phase III findings reported here are based on 42 reports and questionnaires submitted by students who asked questions at a virtual reference desk between February and May 2003. The findings of visits to virtual reference desks are compared to the Phase I and Phase II accounts of visits to physical reference desks. This study is continuing; therefore the findings on the virtual reference experience are necessarily preliminary. However, based on previous phases of this research, it is likely that these initial findings will be reflected in later results with larger numbers of participants.

User perceptions of success

In 1989, Durrance identified 'willingness to return' as a user-centered indicator for evaluating reference service, and this measure has been used throughout the Library Visit research project. As mentioned above, once the users asked their questions, they filled in a questionnaire in which they rated their experience and they also wrote an account in which they reported step-by-step what happened. For the PRD evaluations, the measure used to assess user satisfaction was the question: 'Given the nature of this interaction, if you had the option, would you return to this staff member again with another question?' For the VRD evaluations, the question was changed slightly to read: 'Given the nature of this interaction, if you had the option, would you return to this digital reference site again with another question?' Respondents were given the option of saying 'Yes', 'No', or 'Not Sure'. Transactions were counted as successful where the users said 'Yes', that they would be willing to return. We counted as unsuccessful those transactions where the users said 'No' or 'Not Sure'. Table 1 compares the success rates of Phases I and II with those reported in the first 42 questionnaires submitted for Phase III.


Table 1: Success rates, Phases I, II and III - Would you be willing to return?
Phase/Type of Library                    No. of Library Visits    % Reporting Yes
Phases 1 & 2 PRD visits combined:
   Public & university libraries                 261                    65
   Public libraries                              182                    61
   University libraries                           79                    75
Phase 3 VRD visits:
   Public & university libraries                  42                    62
   Public libraries                               18                    67
   University libraries                           24                    58

As reported in Ross and Nilsen (2000), there had been improvement in the success rate between Phase I (1991-1993) and Phase II (1998-2000), with an overall success rate of 60 percent in the earlier period and 69 percent in the later period. The overall rate for VRD visits of 62% willing to return shown in Table 1 suggests that providing virtual reference does not necessarily improve success rates. It should be noted, however, that the users in the Phase I and II components were beginning MLIS students in their first term, with no previous introduction to reference interviewing skills. They are more typical of the general public than are the users participating in the Phase III virtual reference component, who are advanced students and hence possibly more critical than members of the public would be.

Chat versus e-mail

The choice of visiting a chat service or an e-mail service was left to students. Some students had not yet used chat services and were uncomfortable with trying them as part of a course assignment, so opted for the more familiar e-mail environment. Others tried to use chat services but faced university library restrictions on types of users (i.e., they needed to have some affiliation with the institution). Even when a Web site indicated that non-affiliated users could not use the service, those who tried were generally not questioned and received good responses. However, they were not comfortable doing so, as one user noted,

'I also had a fear in the back of my mind that she was going to ask me for my student number or staff ID and I would be in some kind of trouble for using an exclusive system.'

When restrictions were noted on the Web site, some users felt sufficiently intimidated to move to another service. Of the 42 accounts of visits to VRD sites, 15 (36%) used chat services, while 27 (64%) used e-mail services.

How well are users satisfied with e-mail versus chat reference? Table 2 shows that, from this small number of cases, chat users were more likely to be willing to return to the virtual reference site again.


Table 2: Success rates for virtual reference. Willingness to return to e-mail and chat services
Type of Library and Service      No. of Visits    % Reporting Yes
Chat overall                           15            73 (n=11)
   Public chat                          4            75 (n=3)
   University chat                     11            73 (n=8)
E-mail overall                         27            56 (n=15)
   Public e-mail                       14            64 (n=9)
   University e-mail                   13            46 (n=6)

The number of virtual reference desk visits is too low to calculate statistical significance or to assume that the percentages will hold up over a larger sample. However, these initial findings suggest that e-mail services were less likely to please these users.

Unless they had considerable experience using chat to communicate with friends, users found the whole process quite intimidating. As one student noted, 'I felt panicky and rushed, as though I could not type the question out fast enough and feared the librarian would lose his/her patience.' Having used the service, however, most users became enthusiastic, though cautious, about its potential:

Being able to receive pages pushed by the librarian is exciting and should represent a boon to research and reference service (though in a rushed chat environment, one could question whether the patron receives the highest quality service in this type of transaction).

Reference behaviour resulting in user dissatisfaction

In Phases I and II of this research project, we found that when users were dissatisfied with the service they received and expressed unwillingness to return to the same staff member, the underlying problems were:

  1. Bypassing the reference interview and simply taking at face value the user's initial statement.
  2. Unmonitored referrals, that is, failing to determine whether an alternative source referred to was useful.
  3. Failure to ask the follow-up question, to determine whether or not the enquiry had been answered.

The Phase III accounts, e-mails, and chat transcripts were examined for evidence of these problems. Table 3 illustrates the findings comparing Phases I and II PRD visits with Phase III VRD visits.


Table 3: Percent of library visits in which selected types of behaviour were reported
Type of behaviour              Phase 1 & 2 - 261 visits to PRD    Phase 3 - 40 VRD visits*
Reference interview given              49% (n=129)                      20% (n=8)
Unmonitored referral                   37% (n=96)                       28% (n=11)
Follow-up question asked               36% (n=94)                       30% (n=12)
*Of the 42 accounts, 2 received no response to their e-mail queries to university VRDs.

Reference interviews

As is described in Ross and Nilsen (2000), to be counted as conducting a reference interview at the physical reference desk the staff member needed to ask at least one question intended to find out more about the user's information need. A reference interview was counted as having occurred if a clarifying question was asked at any time during the entire transaction by any staff member including on a second attempt when the user started over with a second librarian. We counted not only well-formed open questions such as, 'What kind of information do you want on L.M. Montgomery/used computers/pine trees?' or, 'How much information do you want on this?' but also closed questions such as, 'Are you writing a paper on this topic?' (but not, 'Do you know how to use the catalogue?'). We also counted responses that were not formally questions but had the performative function of a question, such as repeating the key words of the user's statement and pausing strategically to encourage further elaboration.

In the chat VRD accounts, the same criteria were used to count reference interviews as were used for the PRD accounts, described above, along with questions noted in Richardson's (2002) 'Checklist for model reference transactions', which is designed for use with LSSI transcripts. Among the evaluation questions included are: 'Are open-ended questions asked at the outset of the transaction to clarify the information need?' and 'Is there a closed-ended question at the end of the initial interview confirming that the librarian understands the user's inquiry?'

To be counted as a reference interview in e-mail VRD accounts, the e-mail exchange had to include some sort of question negotiation or summary of the request. Question negotiation in this context was identified in Abels' (1996) description of a model remote [e-mail] reference interview as occurring when the intermediary [library staff member], 'asks the necessary questions using open-ended, closed-ended and follow-up questions as needed to clarify the need based on the information provided in the request form.' She identified a summary as occurring when the library staff member 'summarized the information need and the characteristics of a desired answer.' Abels noted that a 'summary should always be presented in a remote [e-mail] reference interview since the lack of a real time interactive medium inevitably results in a time lag and the information received in various messages must be consolidated.' In both chat and e-mail accounts, the equivalent question to 'Do you know how to use the catalogue?' is 'Do you know how to search the Internet?' This was not counted as a reference interview.

As Table 3 shows, at physical reference desks, library staff members conducted a reference interview only about half the time. At the virtual reference desk, reference interviews occurred in only eight accounts of the 40 completed transactions. There were no interviews conducted with patrons who used e-mail; the eight interviews recorded were all conducted using chat in university libraries.

In the Phase III accounts, users often expressed surprise that they were not interviewed. While these users had all completed a basic course in reference and had learned interviewing techniques, most had never used a virtual reference service. Their responses suggested that they were just as uneasy about using virtual reference as any other users would be.

Unmonitored referrals

In the unmonitored referral, the staff member refers the user to a source, either inside or outside the library, but does not take any steps to check whether or not the user eventually gets a helpful answer. At the PRD, the most common example is when a staff member gives a user a call number and recommends browsing. The equivalent at the VRD is when the user receives a list of URLs and is urged to try them.

As seen in Table 3, the unmonitored referral occurred in more than one-third of the PRD library visits, while in the 40 accounts of VRD visits it occurred in eleven (28%) accounts. This suggests that, at least with this one reference activity, VRD staff are providing better reference service than are PRD staff. However, there is a caveat: coding of unmonitored referrals is more difficult when analyzing e-mail and chat transaction records. For example, when forwarding URLs, a staff member might say, 'Here's a good site,' without indicating whether or not an answer to the question is on the site. It is not always clear that the staff member has checked the site before forwarding it to the user but, in coding such statements, it was assumed that the staff member had verified the site, and the referral was not counted as unmonitored.

As with PRD visits in which the user receives a few call numbers, simply referring VRD users to URLs without checking their usefulness is not good reference practice. The implication is that this is all the patron can expect. After receiving an unmonitored referral, one user asked a further question,

I felt as though my additional question annoyed them and created a hassle for their system... I felt as though my opportunity for asking questions was over and that I had to be satisfied with the answer I received.

For some users, the unmonitored referral is sufficient and they are able to find what they need. However, when the referral does not lead to answers to their queries, they can become unhappy patrons. One user wrote,

The virtual reference interview left the user cold. In point of fact, the librarian's answer was incorrect. The user feels that the [virtual] librarian really just wanted the user to go away. In truth this experience depreciates the entire Virtual Chat process.

Follow-up

The literature on reference interviews has long noted that asking a follow-up question is the 'single most important' behaviour in the reference transaction (Gers & Seward, 1985). ALA's guidelines for reference behaviour note that, 'The reference transaction does not end when the librarian walks away from the patron. The librarian is responsible for determining if the patron is satisfied with the results of the search and is also responsible for referring the patrons to other sources, even when those sources are not available in the local library.' (American Library Association, 1996) Follow-up provides an opportunity to recover from previous deficiencies. At the PRD, follow-up is achieved when the staff member extends an invitation to return for further help or makes an effort to check on the helpfulness of the answer. Richardson's checklist for evaluating chat transactions includes the question, 'Did the librarian use some variation of this closed-ended question, "Did this answer your questions?" at the end of the transaction?' (Richardson, 2002) Failure to provide follow-up can leave users hanging: 'I was disappointed by [the] lack of follow up or closing remarks and this left me feeling neglected as a user of the XXX Library.'

In virtual reference, particularly in chat services, recorded messages are frequently used at the end of the transaction, suggesting that the patron use the service again. These messages are not counted as follow-up because they have no relationship to the previous transaction, and often appear after the session is completed, when the staff member has already logged off. In e-mail transactions, the staff member frequently ends the exchange with, 'Hope this helps!' before disappearing. This does not encourage the user to e-mail a reply such as, 'Well no, actually, it didn't.' As Table 3 shows, a follow-up question was asked in about 36% of PRD transactions, and in the VRD transactions discussed here it occurred in 12 (30%) of the accounts.

Writing vs. speaking

Virtual reference requires both the library staff member and the user to type out their messages. This is time-consuming and causes anxiety on both sides. Library staff and some patrons are concerned with correct grammar and spelling: 'I took more time to compose my question than I would were I simply asking someone face-to-face.' Written messages carry no tone of voice or other nonverbal cues, so the words themselves must convey the tone. One user commented,

I was surprised that the tone of the e-mail made such a difference to me, and suspect that this could be a substantive factor in how well an e-mail reference service is received by patrons.

Users can interpret relatively innocuous statements as negative or critical. In one exchange, a library staff member responded to a request for information with 'There should be some information on... [the topic].' The user commented,

This seemed to be a very abrupt response, which threw me slightly off-balance.... I got the impression that the librarian was slightly exasperated with me, since the phrase, 'there should be some', seemed to suggest the material was there, I just hadn't bothered looking for it.

Users frequently read annoyance or irritation into staff responses that, had the same statements been spoken, probably would not have been interpreted that way.

A long chat exchange can make the user feel uncomfortable, and the written words can exacerbate this, especially if the staff member doesn't explain what is going on. As one user wrote,

Even [though] this person had been polite, if a tad curt, and remained polite until they logged out, I could not escape the sense that I was taking up their time. This is because the process is slow for the amount of information passed and because the patron is unaware of what is transpiring at the other end of the transaction.

Conclusion

This comparison of physical reference desk transactions and virtual reference desk transactions relies on a research method that has proved successful in identifying user perceptions of the services offered. The data on the VRD visits presented here are necessarily very preliminary and additional accounts will be collected in coming years. However, these initial accounts do suggest that virtual reference service is not necessarily going to be more successful than traditional service at the PRD. Users often expressed frustration at the poor service received, just as they do after visiting libraries in person. In both cases, satisfaction depended on many factors and not solely on the answers received. One user noted, 'I realized that digital reference is similar to in person reference in that much of my satisfaction was determined by my assessment of how well I had been treated, as much as by my reaction to the answer I received.' The reference behaviour shown in these accounts indicates a failure to translate good reference practices from the physical to the virtual environment. The failure to conduct even minimal reference interviews in the e-mail services is astounding, and the fact that only 20% of the completed transactions involved a reference interview is depressing. In addition, the failure to ask follow-up questions in 70% of the VRD transactions means that VRD services are not verifying that their users are satisfied with the service. One user suggests a possible reason for this,

As the closing greeting and follow up instructions were left out I felt as though the e-branch of library focuses on the questions, whereas the in-person reference is forced to focus on the user and the question within the encounter.

Reasons for overlooking the reference interview and follow-up should be examined. One cause of failure may lie in the nature of the format itself. Typing is time-consuming and tedious, and might be a conscious or subconscious reason that staff members skip important steps in the reference transaction. One user noted:

I believe that... having to type in real time while working a potentially busy Reference/Info desk, militates against an involved interview process. It's as if, as Reference Librarian, the tendency is to say, 'Let's get to it; there's no time for small talk, no time for foreplay.'

Additionally, virtual reference offers the possibility of efficient and speedy responses to reference queries, and taking the time to practise good reference behaviour might therefore be seen as counter-productive.

One user provided this thoughtful summary:

'The XXX library was not helpful to the user. Factually the reference answer given was incorrect, the source offered provided misleading, incomplete, ultimately wrong information. Emotionally the reference librarian was nasty and short. She did not do her job. The [service] must be very costly to offer and operate. It is money poorly spent, if this interaction is typical. There needs to be a rededication to the reference interview process. The distance between user and librarian cannot be eliminated by technology alone. The human distance between the needy and the helper must be bridged. The helper must always be aware of the need in the user, and tend to that need through thoughtful, effective, library service epitomized in the reference interview.' (User W03 AC6).

It is worth noting that, while most users were satisfied with the service received at both the physical and virtual reference desks, the rate of success, as measured by willingness to return, was not improved by the advent of virtual reference services. By examining the documentation provided by the users and their comments, it is possible to identify similar problems at both types of services. Moving reference service delivery into the virtual environment does expand the answering capabilities of library staff. However, simply answering user queries is not enough. User satisfaction with reference services depends on consistent use of best reference behaviour.

References






How to cite this paper:

Nilsen, K. (2004). The Library Visit Study: user experiences at the virtual reference desk. Information Research, 9(2), paper 171 [Available at http://InformationR.net/ir/9-2/paper171.html]



© the author, 2004.
Last updated: 29 December, 2003