Researchers and practitioners talk about users and each other. Making user and audience studies matter—paper 1
Brenda Dervin and CarrieLynn D. Reinhard
School of Communication, Ohio State University, 154 N. Oval Mall, Columbus, Ohio 43210, USA
Purpose
This paper is an initial report from an empirical study that is part of a continuing project for which the ultimate goal is to make recommendations for structural and procedural changes in the communicative aspects of the user and audience studies enterprise.
What we mean by communicative aspects is that research consists of many acts of internal and external communicating and, hence, can be examined systematically as communication activity. It is communication activity that at least in part builds and destroys bridges between disciplines and perspectives; it is communication activity that allows us to advance our work; it is also communication activity that deters us from advancing our work. It is communication activity that drives every step of the research process. Rarely, however, does one find systematic attention paid to how communicating is done in the research enterprise. This, at the most general level, is the purpose of this project. (1)
This paper is the first of two interconnected papers appearing in this issue of Information Research. This paper reports on a qualitative study that compared what experts in three fields (library and information science, human computer interaction and communication and media studies) described as their big unanswered questions about users and audiences of information, library, electronic, communication and media systems and texts. It also compares what these experts thought about each other and the difficulties of crossing the divides between disciplines or fields and between research and practice.
The second paper consists of a philosophic commentary focusing on research as communicating, drawing out implications for collaboration across disciplinary and research-practice divides for those of us who conduct and apply user and audience studies. It is in the second paper, for example, that we attend to the philosophic legacies that are at one and the same time strengths of scholarly efforts to acquire more useful pictures of the world and weaknesses inhibiting our attempts to learn from each other and build on each other's efforts.
The impetus for the first paper came from the authors' attempts to bring to bear on central communication questions literatures from different fields that were, to our minds, addressing the same questions. Thus, for example, if we look at studies on information seeking and use in the fields of communication and library and information science, we find that the two rarely meet. Or, if we examine the empirical work derived from different theories of what audiences get from media, again these literatures rarely meet. Or, if we look at work on users' uses of information systems versus audiences' uses of media systems, again the two do not meet. One does not have to look far to find researchers lamenting how we are drowning in avalanches of disconnected empirical findings and producing work that 'piles up, but does not add up'. Further, one does not have to look far to find researchers lamenting the fact that even our many attempts at building theories, models and syntheses to bring order out of the chaos are, in fact, adding to the cacophony. The intent of this paper is to document in the terrain of user studies the fact that, while individually some of us may feel as if we are making progress, collectively we appear to be in disarray. Worse, as we will note below, that disarray embeds within itself some dangers to the user and audience studies enterprises. (2)
The purpose of the Keynote paper is to bring to bear on the portrait we develop from this paper a perspective we call a communicative view of communication. Along with the endless calls for more collaboration across our divides there are, of course, endless calls for more communication, as if these processes magically happen by intention alone. Evidence clearly suggests that they do not, that communicating and collaborating across divides is hard. Further, evidence suggests that many of the normative views we have of communication prevent us from looking at communication as process rather than outcome. In that paper, we propose a way of looking at communication that refocuses our attention away from the idea that communication is something we can do to do research better, to the idea that research fundamentally involves a large repertoire of potential communications, some often used, some originally mandated for good scholarship but now too often ignored and some in need of invention.
Rationale and background
Why user and audience studies: given a user by any other name; given information by any other name
In this paper we will apply the term users to refer to a wide variety of persons who have been conceptualized in one way or another as the voluntary or intended users of information, media or communication systems: citizens, employees, patients, patrons, audiences, students, clients, customers, constituencies, recipients and so on. Every institution whose efforts point in some way at groups of such users is building electronic interfaces for doing so; users, too, are building their own interfaces that, in turn, impact upon institutions. For simplicity's sake, we will use the term users throughout this article although we are deliberately enlarging the intended meanings for the term as discussed in this section.
The largest cut across all these persons has traditionally been the focus on users versus audiences, with user and audience studies being historically disparate enterprises. Few researchers have focused on both; few have defined the two as synonymous. Users have traditionally been those persons who voluntarily made use of information and communication systems; e.g., library patrons, museum attendees. Or, they have been those persons that an information or communication system was in some way mandated to serve; e.g., the citizens 'served' by a city's library or museum system; the students at a university served by an academic library; the employees intended to use an organization's knowledge management system. Users have been thought of traditionally as individuals, each of whom at least conceptually was served as an individual and for whom the bottom line has been whether that person was served. Systems have traditionally cared about meeting individual user needs. (3)
The literature is filled, of course, with nuanced arguments which trouble this attention to users. Two primary ones are: a) whether we should be focusing on users, use, uses, usability and so on and what differences these differences make; and b) whether these user-oriented terms privilege systems over people by rhetorically making systems and use of them the center of attention. We acknowledge these complexities although for our purposes here we will focus on the primary overarching polarization. (4)
In contrast, audiences have traditionally been defined as amorphous groups of individuals that communication, media and information systems attract or entice with arrays of offerings of particular genres, programme types, or content. Traditionally, systems have not worried about whether any given specific member of an audience was served but rather how many paid attention. The bottom line has traditionally been audience counts even to the extent to which political economists have argued that such systems have little interest in serving individual audience member needs. Rather, the focus has been on audiences as commodities; as units of economic value to the system. Sometimes this economic value is profit oriented, as in television network programming; sometimes it is oriented towards societal cost reductions, as in public communication campaigns. (5)
While, clearly, there have been researchers who have defined these divisions between users and audiences as system-imposed artificialities, there is little doubt that, traditionally, these divisions have driven research in the library and information science versus communication fields, with the former focusing on users, the latter on audiences. Human-computer interaction is a 'Johnny-come-lately' to user studies, transitioning its attentions from physiological and functional usability per se to users in response to its enlarged and diversified role in the face of the remarkable diffusion of electronic technologies.
Of course, another polarity, that between texts and messages defined as information versus those defined in some way as not information, is a large part of the foundation on which the distinctions between users and audiences have rested. Traditionally, users were voluntarily getting informed, while audiences were either voluntarily getting entertained or being persuaded or educated to become informed in particular and sometimes deliberately biased ways. These traditional facile distinctions between kinds of information, kinds of media content and kinds of utilities or gratifications users and audiences get from systems have long been challenged both by information science and communication theorists as well as by a host of philosophers. (6)
In the fast-moving maelstrom of the advance of the electronic technologies, these traditionally neat divisions between users and audiences, information and not-information are clearly falling apart. Audiences and users are becoming one. What was traditionally defined as information and its opposite not-information have, much to the distress of those interested in information authority, become, in the practical world of everyday experience, a jumble.
Presenting a detailed review of the literature supporting these premises is beyond our purpose here. Suffice it to say that evidence supports three observations:
That users by any other name, citizens, lay persons, patients, patrons, participants, attendees, viewers, game players and so on, increasingly have greater and greater control over their access to and use of all manner of information and entertainment systems. In this sense they are no longer best conceptualized as users or as audiences but rather as persons with agency. While in the past access was highly constrained both in space and in time by institutional availability, now these independent agents may surf the planet and beyond at their own whims. This is so even when we acknowledge that all users are unwittingly or wittingly constrained by political economies that limit arrays of offerings, modes of access and availabilities of specific contents.
That information, communication and media institutions are themselves undergoing a maelstrom of change whose directions and ends are predictable in only the most limited of ways. The traditional task divisions between institutions are collapsing; the traditional safe market 'niches' are being eroded; the traditional divisions between kinds of content (e.g., entertainment versus information) are collapsing.
That in actuality, when examined from a communication perspective, the historical divisions between kinds of users, kinds of systems and kinds of content were never tenable. The distinctions looked as if they were tenable given the constraints of the designs of information, media and communication structures. But individual sense-makers have always struggled to navigate the crisp divides that institutions have attempted to maintain between their edifices. And individual sense-makers have always had to struggle both with these gaps and with the ways in which societally imposed beliefs and normative forms have been less than useful to their lived experiences. What is happening as the result of the maelstrom of changes impelled by the emerging electronic confluence is that these communication realities have become more obvious. They were in the past relatively quiescent. Now they bubble to the surface in numerous ways. (7)
It is small wonder that, in the face of these changes, all the professions that create systems designed to serve and/or attract users place ever increasing emphasis on understanding users, by whatever names they may be called, in order to construct more useful communication systems, designs and messages and in order to find a competitive niche, whether for purposes of service and/or profit. (8)
It is also small wonder that researchers in virtually every field that supports these professions are jumping on the user studies bandwagon. This is as true, for example, of the fields of education, art and social work as it is for the three fields, library and information science, human computer interaction and communication, that form our primary focus in this paper. (9)
Do user and audience studies matter? To whom?
The issue of whether user and audience studies matter has for purposes of this paper three interrelated manifestations. One is whether user and audience studies conducted in one discipline matter to researchers in different disciplines. The second is whether these studies matter to practitioners who are the ones designing and implementing systems. The third is whether they matter to society as a whole.
The first two divides are the empirical focus of this paper. There is much informal talk, and a smattering of empirical work, suggesting that user studies completed in one discipline matter little to researchers in other disciplines and that practitioners find little value in the results obtained from user studies. Because of this dearth of evidence amid much informal talk, we explicitly asked both researchers and practitioners in the three fields most involved with user studies, library and information science, human computer interaction and communication, what they saw as the barriers hindering collaboration and research utility across disciplinary and research-practice divides. (10)
While it is beyond our empirical purpose here, it is important to focus attention on the third divide, that between user research and society. We found no commentary that addressed this issue specifically in the context of user studies, but we found a great deal addressing it in the wider context of the social sciences. In general, it can be said that there is an increasing lack of interest in, and increasing distrust of, the social sciences worldwide among policy makers and the general citizenry.
Indeed, society does ask social scientists to rise to instrumental challenges: find out why bad events happen so we can fix or eliminate them; figure out how to persuade people to do x and stop doing y. Even here, however, the fact is that the social sciences produce an avalanche of studies on virtually every topic and that these often disagree not only in method and approach but in results. The plethora of studies and the disagreements, coupled with the highly specialized vocabularies in which social scientists speak within their discourse communities, further exacerbate the problem of the gap between social science researchers and society.
The situation is further compounded by an increase in commentary arguing fervently either for or against the social sciences. Amongst the former, we see arguments for a return to the traditional emphases of the social sciences on more humanistic issues of equity and human struggle and for less engagement with nation states attempting to control and manipulate citizens. Among the latter, by far the loudest and most fervent, are those who are against the social sciences for any but the most instrumental purposes and those who are entirely against them. A sub-argument within this debate is between those who see value in the social sciences as tools of social engineering and those for whom the goals of societal engineering are seen as irrelevant, even detrimental, to society. (11)
Increasingly, then, the question is overtly being raised: 'Do the social sciences matter?' While most generally manifested as an argument between more liberal versus more conservative observers, the increasing frequency and fervour of the debate cannot be ignored. It is itself a maelstrom in which user studies are caught. It is certainly a maelstrom which has impacted the availability of funding for user studies as well as restricted the range of questions considered fundable.
It is both the fortune and misfortune of those interested in conducting and applying user studies to be doing so at this juncture in history. The assumption behind this current study is that the forces and tensions enumerated above plague user studies and have given rise to a pulse of interest in finding ways to make user studies matter by fostering collaboration between researchers in multiple fields and practitioners in multiple fields. While there are rampant disagreements on what it might mean for user studies to matter, there is clearly widespread agreement that those interested in user studies want them to matter more. It is this well-meaning pulse, with which we agree, that was the impetus for this study.
Methods
The dialogue project
The study reported here is part of a larger multi-stage project, a dialogic surround of how researchers and practitioners in three fields look at: a) the big unanswered questions about users; b) the gaps that stand between them in finding value from each other's work; and c) the barriers to collaborating in the application of user research to system development, implementation and design.
In order to fully explain why this study and analysis have been structured as they have, it is necessary to set these efforts in the context of the larger project. The project has been designed as a dialogic surround in six stages. The concept of a dialogic surround is illustrated first by describing the project's stages. In the next section, a dialogic surround is defined conceptually in terms of the particular relevance of the concept to the analysis presented here.
Round 1: the interviews that we are analyzing here constitute the first of the six stages.
Round 2: volunteers from among the interviewees (working with graduate students and invited colleagues) wrote impressionistic essays focusing on their interpretations of divergences and convergences in the interviews. The opportunity to write impressionistic essays will continue to be open for at least a year. As of this writing, forty-eight impressionistic essays have been written and are posted online. Stage 2 essayists were asked to identify convergences and divergences between the interviews. They were asked not to evaluate or propose their own solutions.
Round 3: this article is the first systematic thematic analysis drawn from the Stage 1 interviews. It has been designed explicitly to serve the purposes of early dialogic stages. Like the stage 2 essays, it does not propose 'right' answers or take sides. Rather, it focuses very generally on what we learned from our interviewees about their convergences and the extent and diversity of their divergences.
Round 4: panels focusing on stages 1 and 2 are already included in the programme for the 2006 meeting of the American Society for Information Science and Technology and tentatively planned as a submission for the 2008 meeting of the Association for Computing Machinery's Special Interest Group on Computer-Human Interaction. Full-day working symposia are planned for the 2007 meetings of both the International Communication Association and the American Society for Information Science and Technology. Working titles for all, with relevant field adaptations, focus on: Making user, audience and usability studies matter: Relevance and irrelevance to the designs, policies and practices of library, information, communication, media and electronic systems.
Round 5: organizing a team to design and seek funding for an international multi-disciplinary dialogue and world conference. (12)
Intertwined between these stages are planned systematic analyses of the more traditional scholarly kind, i.e., those that qualitatively and/or quantitatively critique, evaluate and propose solutions. These will be executed separately and referenced back to the stages above. The stage 2 impressionistic essays and stage 3 thematic analysis focusing on communication issues are defined as necessary for later stages.
The nature of a dialogic surround and its relationship to this study
Drawn from Dervin's development of the Sense-Making Methodology, dialogic surround, as used in this article, is a concept intended to capture in one term the need to procedurally implement repeated rounds of communicating in dialogues where the intent is to enable people to hear how others construct their worlds in such a way that the hearing can become fodder for active sense-making rather than knee-jerk argument and resistance. (13)
It is beyond the purpose of this paper to explicate fully the empirical, theoretic and philosophic underpinnings of the Sense-Making approach to dialogue. Sufficient for our purposes here is specification of these ten procedural mandates which serve as underpinnings for all stages of a Sense-Making informed dialogue. A central assumption behind these mandates is that communicating activities, the verbs of communication, can be explicitly tailored to communicative purposes. Evidence from Sense-Making studies of dialogue suggests that there are at least three basic needs in dialogue: 1) understanding and thinking about self; 2) understanding and thinking about others; 3) sharing and advancing one's own views to others. Our research suggests that these rarely can happen simultaneously. The procedural mandates in a Sense-Making-informed dialogue are:
Include both intra- and inter- personal communicating as legitimate aspects of total dialogic processes.
Mandate participants to strengthen their communicative repertoire capacities so they can use different behaviour when purposes are to understand and comprehend others versus when purposes are to critique, evaluate, teach, or propose solutions.
Implement specific procedures for some rounds in the dialogue in which the only purposes are listening and analyzing as free of judgment as possible.
Bracket, for the purposes of dialogue, the traditional emphases in social science interviewing on controlling the assumed biases of selectivity processes (selective perception, retention and recall) in favour of procedures that invite talkers to explain fully how they came to their views.
Break out of habitual and often hegemonic modes of communicating by asking different kinds of questions based on human universals of movement through time-space rather than based on the nouns of any given discourse community or institutional formation.
Build enforced anonymity into some communicating processes, particularly at the earliest stages of dialogue, so participants will be able to speak freely about both their struggles within systems and their successes.
Allow participants (including interviewees) to discuss multiple facets and be free to express inconsistent views which they are invited to navigate with their own explanations.
Build in questions that iteratively invite participants to make connections between their internal sense-making and their understandings of the material and structural conditions in which they are embedded.
Implement multiple rounds of communicating both within single interviews and within collective dialogues to facilitate the interplay between the conscious and the unconscious in human sense-making.
Invite participants to talk not only about what is but also about what might be, particularly in the context of the ways in which they have struggled with specific hindrances they have identified.
In the project for which the study reported here serves as stage 1, these procedural mandates were applied. Specifically, they have informed both our approaches for interviewing and analyzing the stage 1 interviews as reported in this paper.
The stage 1 informants
Two groups of informants were interviewed in stage 1. One group, consisting of eighty-three informants, is called the international expert sample. The second, consisting of thirty-one informants, is called the local expert sample. Each is described briefly here. For more detail, readers may consult the dialogue project Website.
International expert sample
The international expert sample is a sample that must be admittedly described as having come together by implosion. The original plan was to use a set of ten to twelve interviews with national and international expert researchers and practitioners in our three focal fields to inform an IMLS-funded research project focusing on the hows and whys of user satisficing of their information needs. By the time interviewing was completed, there were, instead, eighty-three interviews with an effort supported about 10% by grant funds and 90% by extensive volunteer efforts and contributions of resources by some 180 people at multiple institutions. (14)
By the time the first twelve interviews were completed, it had become obvious that there was both more agreement than we expected as well as more disagreement. We did not expect informants, for example, to concur so much on the struggles of crossing field and research-practice divides in user studies. At the same time, we did not expect such widespread divergence in what informants said, the premises from which they started and the very ways in which they talked. Nor did we expect that the divergences would not link clearly to particular fields or even to practitioner versus researcher differences.
In addition, unexpectedly, the first twelve informants were extraordinarily excited by their interviews and suggested other possible informants. Given a lack of funds for paying informants and our desire to interview luminaries in the three fields, our primary means of identifying possible informants was by nomination and personal appeal, primarily from the senior author but also by each successive group of informants. Every effort was made, within resource limits, to achieve a reasonably balanced sample. Further, the standard qualitative rule for interviewing cessation was applied; interviewing continued until we concluded that additional effort could yield little additional diversity in input given available resources.
There were some obvious limitations in the process. While the final sample consisted primarily of well known top-level experts, many considered international luminaries, only ten were not current residents of the U.S. On the other hand, an examination of dossiers showed that an additional twelve were recent residents and substantially more than half had had non-US experiences, some quite extensive. Informants insisted we extend our interviewing outside the U.S. because of the obvious globalization of the electronic confluence and known differences in how researchers on different continents conceptualize users and audiences. We accomplished this to a small extent given budget constraints.
The sample also clearly favoured academics, most of whom had current or former practitioner experience. However, only 17% of the sample consisted of experts currently employed full-time as practitioners. While the communication and library and information science field samples were relatively 'pure', i.e., peopled mostly by individuals with advanced degrees at least in these core subjects, the human computer interaction sample was less so. Only about two-thirds of these informants came from computer science and engineering backgrounds. The rest were individuals highly involved in system design who had gravitated there from other backgrounds.
We see these limitations as having relatively little impact on the analyses we present here which focus on communication issues rather than substantive differences. These limitations will need to be actively engaged in planned further analyses of this data. Table 1 describes the resulting sample of international experts.
Table 1: Distribution of the eighty-three international expert informants by field, place of employment, work activity and advanced education focus.
% of informants (n=83)

REPRESENTATION FROM THE THREE FOCAL FIELDS
  library and information science: 34.9
  human computer interaction: 31.3
  communication/media studies: 33.7

PLACE OF PRIMARY EMPLOYMENT
  academia: 83.1
  non-academic institution: 16.9
    corporations: 2.4
    government agencies: 4.8
    consulting firms: 8.4
    other non-profit: 1.2

CURRENT EMPLOYMENT INVOLVED THESE ACTIVITIES*
  research: 81.9
  design: 26.5
  service planning & implementation: 12.0

ADVANCED EDUCATION FOCUSED ON*
  anthropology: 2.4
  business management, administration: 3.6
  communication & media studies: 30.1
  computer science, engineering, human computer interaction: 20.4
  education: 3.6
  humanities: English, history, philosophy, cultural studies

* % do not add to 100.0 for these two sets because informants were recorded in multiple categories
Local expert sample
The local expert interviews consisted of a total of thirty-one individual and seven focus group interviews with directors of the academic and public libraries serving the forty-four colleges and universities in central Ohio, the designated institutions from which the user sample was drawn for the IMLS-funded research project. (15)
A total of seventy-nine academic and public libraries served these forty-four institutions. Of these, 31 (39.2%) directors or their representatives attended a day-long session at which both the individual and group interviews were collected. Of the 31, 24 (77.4%) were academic librarians; 7 (22.6%) were public librarians. Table 2 compares the academic librarian distribution by the institution's student enrollment focus and by institution support base. Other than academic rank (faculty, graduate student, undergraduate), an institution's student enrollment focus was the primary factor used in constructing the stratified proportionate sample of informants in the user study. Institution support base was also an important consideration in representing institutional diversity, particularly in the central Ohio context.
Table 2: Distribution of the thirty-one local expert informants by type of library, institution enrolment focus and institution support base compared to the percentages of the total student enrolments in central Ohio in each category
% of informants (n=31)

ACADEMIC OR PUBLIC LIBRARY
  academic library: 77.4
  public library: 22.6

For the remaining rows, the first figure is the % of academic library informants (n=24); the second is the % of the total student enrolment in central Ohio (n=249,272).

INSTITUTION ENROLLS
  undergraduates only*: 70.8 | 81.8
  undergraduates and master's: 16.6 | 9.1
  undergraduates, master's and doctoral: 12.5 | 9.1

INSTITUTION IS SUPPORTED BY
  public funds: 54.2 | 40.9
  private secular funds: 16.6 | 9.1
  private religious funds: 29.2 | 50.0

* These sometimes have small specialized entry-level master's degree offerings designed to meet specific employment needs, e.g., in nursing
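For readers who wish to trace the figures, the percentages in Table 2 follow directly from the counts reported above. The short Python sketch below is ours, added for illustration only; it is not part of the original study procedures.

    # Illustrative only: the percentages reported in Table 2 are simple shares
    # of the counts given in the text (79 libraries; 31 attending directors or
    # representatives, of whom 24 were academic and 7 public librarians).

    def share(part: int, whole: int) -> float:
        """Percentage of `whole` accounted for by `part`, to one decimal."""
        return round(100 * part / whole, 1)

    print(share(31, 79))  # 39.2 -> directors (or representatives) who attended
    print(share(24, 31))  # 77.4 -> academic librarians among the 31 informants
    print(share(7, 31))   # 22.6 -> public librarians among the 31 informants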
Stage 1 interviews
Interviewing for both the international and local experts was informed by Sense-Making Methodology's interviewing approach. While the venues for interviewing differed, the focal questions aligned as closely as possible. Both sets of interviews focused on how informants viewed gaps: a) in understanding users; b) between researchers in different fields; and, c) between researchers and practitioners. They were also asked for their magic wands for research or procedures that would be especially helpful in alleviating gaps. Each set of interviewing approaches is described briefly here.
Interviews with international experts
A few of the international experts were interviewed in person; most by phone; and all at times set by prior appointment. Interviews were recorded and took, on average, forty-five minutes. Interviewers included both senior project personnel as well as Ohio State University students at all levels, undergraduate, Master's and doctoral.
Interviewers were trained in navigating the Sense-Making approach to interviewing with its emphasis on gaps, how gaps hinder, how participants struggle with gaps, how gaps are or might be bridged and how bridging gaps might help. In Sense-Making's theory of dialogue, it is answers to these gap identifying and bridging queries that allow people from different discourse communities to begin to understand each other. The interview script provided the essential structure organized in four sets of questions as follows:
About users and audiences
What in your judgment are the big unanswered questions about users and audiences and their uses of library, information and communication systems? For each big question: How would an answer to that question help or facilitate your work or research? In your judgment, what explains why this question is not yet answered?
What are the biggest challenges the advance of electronic information systems presents to library/ information/ communication systems in serving users/audiences well? For each challenge: What would help or facilitate facing this challenge?
What would make user research more useful to library/ information/ communication system design and practice? For each suggestion: How would this in particular help?
About gaps between fields and disciplines
Have you found the many different approaches and fields focusing on user and audience research hindering in any way? For each hindrance: How has it hindered? If you could wave a magic wand, what would help?
Have you found the many different approaches and fields focusing on user/ audience research helpful in any way? For each help: How has it helped?
About gaps between research and practice
Have you found the different ways in which practitioners vs researchers look at users/ audiences hindering in any way? For each hindrance: How has it hindered? If you could wave a magic wand, what would help?
Have you found these differences in how practitioners vs researchers look at users/ audiences helpful in any way? For each help: How has it helped?
About the ideal study
If you could wave a magic wand and had any amount of money, what would you like to see a big study of users or audiences do?
The interviewer's goal was to cover all the focal questions without behaving like an interviewing robot because persons operating in different fields do not think about users and audiences in the same way. Often the interviews started with a negotiation about what 'users' or 'audiences' to talk about and what 'systems' and how to name these entities. This was a particular requirement in interviewing some communication academics who defined themselves not as focusing on users or audiences but rather on human beings (in various sub-categories) who happen to use systems in their life struggles and journeys.
Interviews with local experts
In the Sense-Making approach to focus group interviews, informants are simultaneously involved in self-interviewing (through keeping diaries or journals) and group discussions. For this study, on the day the local experts convened, informants were given individual workbooks with one page for each focal question. As each focal question was discussed, informants entered their 'sense-makings' as they came to them in four different boxes:
My thoughts in answer to these questions.
What I strongly agreed with and why.
What I strongly disagreed with and why.
What I would most like to see your project do [in its user study].
The ten focal questions replicated the focal questions used for the international expert interviews although stated somewhat differently so as to apply more directly to the practice context.
Big unanswered questions about users
What are the big unanswered questions about users?
How would answers help?
What explains why these questions are not yet answered?
What are the biggest challenges the advance of information systems presents to library and information science in serving users well?
What understandings of users would help us handle challenges better?
Gaps between research and practice
What would need to be done to make knowledge about users more useful?
What would make the mountains of conflicting evidence about users more useful?
Are differences between how practitioners and researchers look at users a barrier and how?
Are there strengths in the diversity of understandings about users and how have these been useful?
The ideal help from research
If you could wave a magic wand, what would help you to apply different ideas and research about users to your work?
Participation then proceeded in rounds, first focusing on questions 1-5, then 6-9 and finally 10. Each round consisted of solo journal keeping, group discussion coupled with self-journals and group breakout reports coupled with self-journals. At the end of the day, informants handed in their workbooks and the posters from the group breakout reports were collected. These, once transcribed and edited, formed the local expert database.
Interview transcriptions
Both sources of interviewing input, the audio tapes from the international expert interviews and the written input from the local expert interviews, were initially transcribed using the smooth verbatim approach, eliminating repetitions and non-fluencies. Standard quality control procedures usually used in qualitative research were applied. The final transcriptions were content edited to remove the non-fluencies, grammatical errors and repetitions common to oral talk while still retaining the informant's words. Final versions removed all nouns and descriptors that might identify informants, their institutions, their specific projects, or their specific specialties when these were unique to them. This was done for two reasons: 1) because the attention to communication gaps, which was the mandate for this project, meant that many of the informants, particularly some of the international expert luminaries, required deep anonymity; 2) because the first stage of a Sense-Making-informed dialogue almost always involves anonymity. In this study, this was particularly important because it was vital that interpreters in later rounds would not spend their time focusing on which luminary in which field said what and then using stereotypes to frame their interpretations of what informants said. All informants were sent their transcriptions for final consent to participate. All gave consent. Contact procedures, consent forms and informant protections for both sets of interviews were duly reviewed and approved through institutional review at Ohio State University.
The thematic analysis
The thematic analysis presented here was developed to highlight issues of communication as these informants described them. It was not our intent to provide a systematic description of what informants saw as the substantive nature of their gaps and solutions to them, or to compare what was said between fields, work foci, or institutional contexts. These analyses are planned for later efforts. Our purpose was to capture a broad impressionistic portrait of the struggles these informants reported having across disciplinary and research-practice divides in their internal and external sense-making efforts, in short, in their communicatings.
As an example of our emphasis, readers will see that in the thematic analysis we do lay out a number of themes enumerating various ways in which informants disagreed with each other on what they thought would make user studies better. Our intent in developing this enumeration was to provide a portrait of the quantitative extent to which informants disagreed and the large variety of differing solutions they were playing with. Having a survey of these kinds of differences is an essential starting place for beginning to design more advanced stages of dialogue.
Our approach to extracting themes was based on intersecting the inductive procedures of the method of constant comparative coding developed in early grounded theory with the deductive, meta-theoretic emphases of Sense-Making on gaps and gap-bridgings. Because it is polarizations, both explicit and implicit, enduring or temporary, justified and imagined, that most hinder communication, we adopted the approach often used in applications of the Delphi Method and applied a dialectical lens to zero in on polarizations that leapt off the transcription pages in three senses: 1) those where informants were explicitly comparing two different approaches; 2) those where informants were comparing what is to what might be; 3) those involving absences versus presences in talk, as, for example, where there was much emphasis in the interviews on problems but despite specific interviewing queries relatively little attention to solutions. (16)
Because our emphasis was placed on communication struggles, it was vital that we not homogenize differences. For this reason, we chose the quotable quote as our unit of analysis and selected quotable quotes representing themes roughly in proportion to their presence in the 602 single-spaced manuscript pages of transcription (545 for the international experts; 57 for the local experts).
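As a rough illustration of what proportional selection implies, the Python sketch below allocates the 320 quotable quotes across the two corpora by page share. It is our own simplified illustration, assuming allocation by corpus page counts alone; the actual selection was done by hand, theme by theme, and did not use this procedure.

    # Illustrative only: a proportional allocation of a quote quota by corpus
    # page counts (545 international expert pages, 57 local expert pages).

    def proportional_quota(pages, total_quotes):
        """Split total_quotes across corpora in proportion to page counts."""
        total_pages = sum(pages.values())
        return {name: round(total_quotes * n / total_pages)
                for name, n in pages.items()}

    pages = {"international experts": 545, "local experts": 57}
    print(proportional_quota(pages, 320))
    # {'international experts': 290, 'local experts': 30}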
Arriving at an approach for presenting the results was not straightforward because our specification of themes was necessarily conceptual and informed by a body of work on the structuring of dialogues for bridging gaps. Because of this our attention to the transcripts was necessarily pointed several abstraction levels above where the usual substantive thematic analyses start. In judging both what major themes to use and what sub-themes to nest within them, we required 100% agreement between the two authors.
The resulting thematic analysis is organized into twelve major themes with from three to thirteen sub-themes each, for a total of seventy-five sub-themes. In order to emphasize the ways in which the themes address our central foci on research as communication and communication as research, we present the themes as if the 114 informants are speaking to us personally, describing themselves in terms of their agreements, disagreements and struggles.
While at a meta-level our themes were necessarily anchored on abstract communication principles, we also deliberately allowed the warrant of the data to mandate detailed sub-themes. We chose this strategy because we wanted at one and the same time to present a thematic analysis which would allow us to draw out implications for communication and to show the diversity within. For this reason we have separated our thematic presentation into two parts. The first is Table 3 below, which lists the themes and sub-themes. Deliberately, in the main body of this paper, we did not attempt to explain and explore the substantive meanings from which we gleaned these themes. In the Appendix we again list the themes and sub-themes, this time illustrated with the 320 quotable quote units which we mined and ascribed to each of them. The Table 3 items are linked to the Appendix so that readers may navigate between the two. Deliberately again, we did not apply another level of conceptual definition and analysis above the exemplar quotes and the themes. In line with the procedures mandated for the dialogue project of which this thematic analysis is a part, the intent is to present the portrait of themes in as non-judgmental and dialogic a manner as possible and to retain not only the most centrally relevant quotes within a sub-theme but those at the margins as well.
Table 3: Narratively structured thematic analysis of what local and international experts had to say about gaps and potential bridges, focusing on disciplinary and research-practice divides in thinking about the uses of user studies
Most of us said we want to make a difference (link)
—by serving society, being a public good (link)
—by designing and implementing services that serve people (users/ audiences) better (link)
—by having an impact on system design (link)
—by serving the 'bottom lines' of our institutional employers (link)
—by having proven value when the 'rubber meets the road' (link)
—we still struggle with the theoretical versus applied research divide (link)
Most of us agreed that user research is not doing the job (link)
—we don't understand users, well enough, in the right ways, in ways that matter (link)
—user research is scattered, shallow, incoherent, not very good (link)
—it consists of endless itty-bitty unconnected pieces (link)
—we are re-creating the wheel without making progress (link)
—we are not building on each other's work, on what exists (link)
—we don't agree on the meanings of our terms (link)
—we still struggle with the quantitative-qualitative divide (link)
—we desperately need integration and synthesis (link)
—we may not even know what the questions are (link)
While most of us said we cared about being useful to users, we had some fundamental disagreements about users and user studies (link)
—some of us said user voices are being systematically left out (link)
—some said trying to understand the elusive user is a seriously challenged mission (link)
—some challenged the focus on users (link)
—some challenged whether user self-reports can provide useful data (link)
—some said we need to trust and listen to users even more (link)
—many concurred that studying users is very, very hard (link)
—and some said studying users is expensive (link)
For the majority of us who favoured user studies, we had some fundamental disagreements about our purposes (link)
—there were tensions expressed between profit versus service orientations (link)
—and within service, between serving user goals versus enticing users to our goals (link)
—we disagreed on whether the results of our work can or must be one system that fits all (link)
Most of us pointed to environmental factors that make executing and applying user studies difficult (link)
—the speed of changes in technology, society and people (link)
—the resulting generational gap (link)
—the lack of funding for studying users (link)
—and the constraints imposed by restrictions on conducting human subjects research (link)
Those of us who favoured user studies had an unending list of different suggestions for improving the user study enterprise, including (link)
—more theories and models (link)
—better research designs (link)
—better samples (link)
—more direct observing and inductive qualitative work (link)
—less qualitative work (link)
—more segmentation of users into different sub-groups (link)
—less emphasis on user segmentation and sub-groups (link)
—more studies of users interacting with specific systems and technologies (link)
—fewer studies that focus on users interacting with specific systems and technologies (link)
—more emphasis on contexts and situations (link)
—to study specific moments of information seeking and using (link)
—to get outside university labs (link)
—more longitudinal studies (link)
Most of us said that interdisciplinary communicating across the three fields that do user studies is not going well (link)
—we just ignore each other (link)
—we have no respect for each other (link)
—there are simply no rewards for interdisciplinary contact (link)
—there's no funding of mechanisms to support interdisciplinary work (link)
—and few publishing opportunities (link)
Most of us concurred that interdisciplinary contact is hard, hard, hard (link)
—the isolated silos of academic disciplines make interdisciplinary contact very difficult (link)
—this is compounded by fierce turf wars (link)
—disciplines and fields are separated by different worldviews, assumptions and vocabularies (link)
—it's hard to know the rules on the other side of the fence (link)
—academic reward structures force us to be non-collaborative (link)
—as a result, we seem to all live inside our disciplinary blinders (link)
Most of us agreed that communication across the research-practice divide is not going well either (link)
—researchers and practitioners too often ignore each other (link)
—they have radically different priorities (link)
—there's little reward or incentive for researcher-practitioner collaboration (link)
—there are few structures to support research-practice collaboration and translation (link)
Some of us, both practitioners and researchers, saw academic researchers as the problem (link)
—academic researchers work on toy problems (link)
—they see things in non-human terms (link)
—they are hyper-critical (link)
—they live in ivory towers, disconnected from the everyday (link)
—their research foci are driven too much by self-interest and money (link)
—their research is not useful to system design and practice (link)
Some of us, both researchers and practitioners, saw practitioners as the problem (link)
—too many practitioners are anti-intellectual and hyper-critical (link)
—they are forced to focus obsessively on the bottom-line (link)
—they are institution-centric (link)
—they, too, have rules and standards they must meet (link)
—they are too often research-illiterate (link)
—they have to meet deadlines that preclude rigorous research (link)
Most of us agreed we would benefit from contact across our divides (link)
—between researchers in different fields (link)
—between researchers and practitioners (link)
—communicating across our divides will help us do better work (link)
—some among us would relish the clash of competing ideas (link)
—but many expressed worries about 'slash and burn' approaches that dominate our divides (link)
—nevertheless, many of us expressed a readiness to pursue communicating in different ways (link)
Because the theory of dialogue on which this project is based mandates several rounds focusing on listening and understanding before any attempt to engage in substantive comparisons and evaluations, the thematic analysis presented here is considered one of many. As described above, readers can find on the dialogue project Website a series of stage 2 impressionistic essays mandated to essentially the same task that this thematic analysis addressed, with the exception that this thematic analysis represents 100% coverage of the stage 1 interviews and an imposed requirement of 100% agreement between the co-authors. Any reader who wishes to may join the dialogue by applying to become a stage 2 essayist. Instructions are given on the project Website. (17)
We already know, because as of this writing some forty-eight impressionistic essays have been contributed, that there are mind-boggling differences in what essayists see as the convergences and divergences between our stage 1 informants. For this reason, we do not present our thematic analysis as the thematic analysis but rather as one analysis of potentially many, one that is at one and the same time both highly informed by and highly prejudiced by an anchoring in interpretive communication theorizings as well as by untold biases in our own perspectives of which we are not aware.
Conclusion
For those of us interested in user and audience studies, many of us in the three communities studied here naturally feel impelled to intervene with our many suggestions, our well thought-out theories and our arduously collected data explaining why this or that concept or approach will resolve this or that difficulty or bridge this or that gap.
But, alas, that is communicatively the crux of our problem. Yes, our informants were calling for synthesis, but at the same time they were decrying their inability to comprehend syntheses written in other tongues by alien beings. Yes, some wanted more theories and models, but some, and sometimes the very same persons, said they were drowning in disconnected theories and models. Yes, they found many things to criticize about communication across divides, but in actuality they had few solutions. Yes, most of our informants wanted to communicate and share with each other, but they saw formidable barriers of power, structure, tradition and habit standing in the way.
Yes, one could see evidence of stereotypes between our divides: how practitioners were more likely to distrust researchers and how researchers were more likely to think practitioners uninformed; how HCI people were accused of caring only about their machines and user studies people were accused of being fluffy; how communication researchers seemed hardly to talk to practitioners at all and were so internally divided that they formed disparate sub-fields within themselves; or how people in librarianship and information science were accused of being too obsessed with their library institutions.
But, the fact is that these stereotypes were a minor key. In fact, perhaps what struck us most was how more often than not we could not predict what field an informant was from by what the informant said; or how often informants were self-reflexively critical of the constraints embedded in their own discourse communities and purposes. Likewise, we were struck by an overall sense of a collision between negatives and positives. On the negative side, there was befuddlement about the chaos of the state of affairs in user studies, an almost resigned disappointment that communication was not going well either across fields or across the research-practice divide and a sense of hopelessness about the speed and demands of current conditions that could prevent things from improving. Yet, on the positive side was the extraordinary committed engagement of the entire informant pool and the palpable wish to make things go better.
We can offer no magic wands for this scenario. But as communication specialists focusing on the requirements of dialogue, we can propose that the traditional modes of communication serving the user studies research enterprise and, in fact, all social science research enterprises, are not doing the job that needs doing.
It is at this juncture that we turn to our second article, a commentary that focuses on research as communicating. To project ahead to our bottom line, we draw in our second essay on a body of work primarily from the fields of philosophy and communication that says we must find ways to re-establish the importance of some communicating practices that used to be normative in the social sciences and have become marginalized in current research practice; and we must find ways to invent new communicating practices so that the communicating we do about and for our research is more fruitful.
In the stage 1 interviews, we could sense the two polar dialectics that confound human efforts to communicate. One is the polarity between uniformity and diversity. On the uniformity side, there is the emphasis on homogenization, on orienting towards correct and right answers and on achieving consensus and avoiding dissensus. On the diversity side is the emphasis on hearing different interpretations so often challenged as ending in the chaos of solipsistic individuality.
The second polarity is between humanistic approaches to communication and instrumental approaches. The former focus on building empathy and deeper listening; the latter on structuring communications so as to achieve explicit ends, either manipulative (as in propaganda and much advertising) or well-intended (as in public communication campaigns and some management leadership styles).
We will develop an argument in the Keynote Paper which suggests that the choices offered at the ends of these dialectical polarities do not address communication in communicative ways. Rather, we propose that we need to focus on how symbolic, interpreting humans move between the individual sense-making and sense-unmaking necessary for collective effort; and on how humans are capable of flexibly designing and employing their communications so that these serve different ends.
Acknowledgements
Resources supporting the dialogue project as reported here came primarily from: a) senior author Dervin's Joan N. Huber Fellowship fund; b) the Ohio State University School of Communication; and c) the volunteer efforts of some 180 faculty, students, consultants, administrators and practitioners in three fields (library and information science, human computer interaction and communication and media studies) located at some seventy-five institutions (universities, libraries, corporations, consulting firms, governmental agencies) in twenty US states and eight countries. About 10% of the support came from the Sense-making the information confluence project, which was funded by a grant from the Institute of Museum and Library Services to Ohio State University and by in-kind contributions from Ohio State University and the Online Computer Library Center. That project was implemented by Brenda Dervin (Professor of Communication and Joan N. Huber Fellow of Social & Behavioral Science, Ohio State University) as Principal Investigator, with Lynn Silipigni Connaway (OCLC Consulting Research Scientist III) and Chandra Prabha (OCLC Senior Research Scientist) as Co-Investigators. More information on the IMLS project may be found at its Website. The authors owe special thanks to: a) the some 190 students and volunteers who assisted in various ways on the international-local expert dialogue; their names are listed at the project Website; b) OCLC Consulting Research Scientist Lynn Silipigni Connaway, who completed six of the international expert interviews; c) the Online Computer Library Center for providing the venue for the local expert focus group meetings and Connaway for facilitating them. Special thanks are due to Noelle Karnolt and Tingting Lu, Ohio State University students, for assistance in preparing the final manuscript.
Notes
(1) This paper's impetus rests on extensive reviews and the senior author's continuing 20-year emphasis on the problematics of effective communicating in different contexts, in particular the intersections between all manner of systems (e.g., media, information, library, communication, government, medical, service) and their users (by any other name, e.g., audiences, patients, patrons, participants, citizens). A recent turn in that emphasis focuses on dialogue between experts, across disciplinary and research-practice divides. The project reported here is one outgrowth of the article 'Human studies and user studies: a call for methodological inter-disciplinarity' (Dervin 2003), which appeared in this journal. Other recent works in this emphasis include: Dervin (1999); Dervin (2001); Romanello et al. (2003); and Schaefer & Dervin (2003). In addition, Dervin & Foreman-Wernet (2003) compiled a series of earlier works focusing on dialogue as a central requisite of the design of effective: democratic systems, education campaigns, programmes to address literacy issues, Third World development, information systems design and organizational-constituency relationships. Central to this effort has been a focus on conceptualizing communication as communicative, as internal and external activities that humans pursue to make and unmake sense of their worlds and to both fall in line with and fall out of line with the collective structures in which they find themselves. This view of communication mandates that we look at communication not as an outcome but rather as a series of activities or step-takings (conscious or unconscious, habitual or capricious, designed or duplicated) that may have different outcomes. This formulation suggests thinking about communication as repertoires of internal and external behaviour. Theorists who have informed this position include, in particular: Carter (1974, 2003), Stephenson (1967) and Thayer (1987, 1997). For comprehensive literature reviews, see Dervin & Foreman-Wernet (2003).
(2) One example of a comparison between the library and information science and communication literatures on information seeking and use in the context of health information seeking is in a paper commissioned by the US National Library of Medicine (Dervin, 2001). An example of a comparison of twelve theories of media uses/effects is in Dervin et al. (2005). Examples of authors who have referred to the growing body of disconnected offerings in the social sciences include: Carter (2003: 360), who has admonished for years that we need to stop doing research that 'piles up, but does not add up'; McGuire (1999: 386), who called for senior scholars to stop adding more undigested pieces to the pile and instead to turn their attention to observation, synthesis and interpretation; and Hjørland (1996: 52), who challenged that 'We must cease the overproduction of unrelated facts'.
(3) As usual, the academic literature is not a good indicator of trends. The current emphasis in the different fields on the concepts user(s) versus audience(s) supports the traditional divisions of attention. We did a title search of two top-tier journals in each of the three fields for two time periods (1990-95 and 2000-05), looking for use of the terms user(s) or audience(s). In the library and information science field (tapped with the Journal of the American Society for Information Science and Technology and Information Processing and Management), 100% of 48 articles in the earlier time period used the term user(s) as opposed to audience(s), as did 98% of 86 articles in the later period. For the human-computer interaction field (tapped with Human Computer Interaction and the International Journal of Human-Computer Interaction), the comparable figures were 100% of 7 articles for the earlier period and 100% of 40 articles for the later period. For the communication and media studies field (tapped with the Journal of Communication and the Journal of Broadcasting and Electronic Media), the comparable figures were 10% of 30 articles in the earlier period and 14% of 21 articles in the later. For the communication and media studies field, it may seem that emphasis on users and audiences has gone down but, in fact, what has happened is that an array of new journals in the field has diffused attention to users and audiences and the dust has not yet settled on the impact. Two examples include New Media & Society and the Journal of Computer-Mediated Communication. In general, however, communication and media studies articles have overwhelmingly focused on audiences, and library and information science and human-computer interaction articles on users. Where the implosion of these tidy divisions shows is in the trade press: for example, the special section in The Nation (2006) on the 'national entertainment state', which documents not only the increased control over all information, communication and media systems by a few mega-multinationals but also the qualitative blurring of the previously separated functions of information versus entertainment. At the same time, the seeming freedoms which the exploding internet offers mean we see average citizens taking on functions as journalists, 'a journalism without journalists' as Lemann (2006: 44) called it; or as their own librarians, what Bates (2006a: 29) termed the 'disintermediation of information'.
(4) Numerous authors have interrogated the fundamental concepts: use, uses, users. Especially helpful have been: Capurro et al. (2002); Frohmann (1992); Julien (1999); Pettigrew et al. (2001); Savolainen (2000); Talja (1997); Vakkari (1997); Wilson (1994, 2000). Also helpful has been this edited compilation: Olaisen et al. (1996).
(5) This line of inquiry has a robust tradition. These chapters provide a helpful overview: Murdock & Golding (1995); Smythe (1995).
(6) Considerations of the terms information and knowledge have been fodder for continuing debate in the library and information science field since its origins. Some particularly useful recent writings: Bates (2006, 2005); Hjørland (2002a); Capurro et al. (2002); Olsson (1999). In the communication and media studies field, the distinction between being informed and being entertained has been maintained with relative constancy in a genre of research usually labeled 'media uses and gratifications'. In this work, information or being informed is considered a function of media (usually conceptualized as channels) separate from escape, diversion, or entertainment. A recent example is Song et al. (2004); a seminal example is Blumler & Katz (1974). Signs that this distinction is being eroded show, however, in a number of terrains. Just as librarians worry about how to maintain an address of information authority, the media world is focusing more intently on such issues as 'the future of fact' (Strange & Katz, 1998).
(7) The literature support for these three premises is given in the notes above. An important point is that from a communication perspective the world as it has been constructed and divided into pieces by our disciplinary boundaries and the systems they support is not the world in which everyday actors make and unmake sense. The structures created by collectivities (e.g., governments, cultures, systems) may be seen, depending on your theoretical perspective, as tight or elusive boundaries within which humans experience the everyday. Most extant approaches to communication focus on one or the other end of this structure-agency polarity. This point will be addressed in our second paper in this series.
(8) Of the three fields that are the foci of this paper, two have had a traditional direct relationship with users or audiences, library and information science and communication and media studies. The impact of the exploding electronic confluence on these fields is awesomely visible. A Google search completed October 8, 2006 using the term 'future of libraries' yielded 60,000 hits; 'future of newspapers', 100,000; 'future of media', 260,000. Examples of articles from library and information science addressing issues relating to whether and how libraries and information systems can compete: D'Elia et al. (2002); Hjørland (2002b). Examples from communication and media studies focusing on media systems: Cohen (2002); Webster (2005).
(9) Examples of studies from other fields that have been hopping on the user study bandwagon: education, McDonald (2004); medicine, Chapple et al. (2002); nursing, Edward & Staniszewska (2000); art, Hsi (2003) and Marty (2006).
(10) All three of our focal fields have had a long-time emphasis on applying research to practice, on designing and implementing systems and services as well as campaigns and education efforts aimed at users and audiences. All three have also been traditionally embroiled in tensions that characterize the research-practice divide along essentially four polarities: a) research versus practice; b) research versus design; c) design versus service; and d) theoretical research versus applied research. Examples of recent articles in library and information studies and communication and media studies attending to these issues include: Booth (2003); Bryant (2004). Of particular relevance to this study is an article by Smith (2006), which proposes that the standard divide between positivism and interpretivism feeds theory-practice inconsistencies; he proposes critical realism as a resolution. In the earlier stages of our dialogue project, we have deliberately asked interpreters not to propose solutions. The practice-research divide plagues all the applied social sciences; see, for example, Hammersley (2003) and Small (2005). A literature review is available in Dervin et al. (2003). Our study reported here focuses on polarities a, b and d as listed above. We did not explicitly ask informants about gaps between design and service needs, although some frontline practitioners did speak directly to this issue. For most, however, the issue of the utility of design to service was foundational to their other attentions.
(11) This, we submit, is the divide to which social scientists and their applied fields have paid the least attention but which is probably having the largest impact on our fates. We provide documentation for this argument and discussion of the issues it raises in our second article. We have been informed particularly by: Dreyfus & Dreyfus (1986); Flyvbjerg (2001); Johnson (1991).
(12) The stage two essays are posted at the project Website. Descriptions of panels, workshops, symposia and other project developments are available at the same location.
(13) See references listed in Note (1). The Sense-Making approach to dialogue draws upon many foundational sources, particularly as referenced in Dervin & Foreman-Wernet (2003). Three deserve special mention. 1) The Delphi method, which has been used in research although it was developed by the Rand Corporation as a technique aimed at building consensus in policy groups during the cold war (Turoff 2002). A major thrust of its application, in addition to utilizing the idea of rounds, was the bracketing of power relationships by mandating anonymity among participants, an aspect Sense-Making has adopted for early dialogic rounds. As normatively used, Delphi focuses on building consensus and partitioning off dissensus. A later development by Tapio (2002) focuses on establishing not a central consensus but diverse scenarios, in an approach he calls 'disaggregative policy Delphi', although the goal is still to resolve all inconsistencies. Because studies (e.g., Schaefer and Dervin 2003) show that an emphasis on achieving consensus and avoiding dissensus may inhibit communication processes necessary for effective community dialogue, the Sense-Making approach to dialogue deliberately attends to both convergences and divergences and does not demand homogeneity either between or within participants except as it emerges naturally from rounds of sense-makings and unmakings. 2) Flanagan's (1954) 'critical incident technique', a cutting-edge approach for its time for anchoring interviews to the material situations of interviewees' lives. In a Sense-Making dialogue this is implemented by asking participants to articulate the connections between their own views and opinions and how these relate to their material circumstances. 3) The Freirean theory of dialogue (Freire, 1970, 1983) as developed in his approaches to literacy training and Third World development, with its emphasis on the role of talking in generating raised consciousness (or what Freire calls conscientizing) and participatory readiness. In Sense-Making interviews, this is implemented primarily by asking informants not only to explain what they see but also the bridges, gaps and inconsistencies in what they see and how these connect to their material conditions.
(14) A complete list of persons who served as advisors and volunteers for the international expert dialogue is available at the project Website.
(15) A list of the institutions sampled and persons involved in the local expert dialogue is available at the same Website.
(16) There are numerous debates and interpretations of the Glaser & Strauss (1967) 'grounded theory' approach, from its original statement, most popularly known as the 'constant comparative method', to more recent debates (e.g., Strauss & Corbin, 1990). Since we did not intend to develop theory but rather a descriptive portrait of the discourse convergences and divergences most applicable to considering problems of communication, we applied the method of constant comparison to our reading of the expert interviews, intersecting these readings with Sense-Making's emphasis on gaps and gap-bridgings. Because Sense-Making's approach to dialogue mandates attention to absences as well as presences, our readings attempted to focus both on what was said and what was not said, as well as the ways in which informant responses represented contradictory aspects of the same themes.
What researchers and practitioners in three fields said about users, audiences, research, service, design and each other: convergences and divergences focusing on research as communication and communication as research
EXPLANATION:
Please see article for explanations of purpose, sampling and approaches to analysis. In the thematic analysis below, each thematic unit, a quotable quote relevant to a particular theme, is identified with the informant from whose transcript it was drawn. Informants are referred to by anonymous informant identification numbers. Identification for the two sub-samples differs as follows:
INTERNATIONAL EXPERT SAMPLE: The informants in this sub-sample are identified as INT#301 through INT#383, as in the following example: (INT#364, HCI; academic & corporate; research & design). After the identification number are three terms or phrases which identify in turn the informant's field, institutional employer and professional activity focus.
FIELD (LIS, HCI, COMM): Informants were assigned to only one field, the field they were sampled to represent. In most cases, this field also represented the foci of their advanced degrees. In a few cases in the HCI sample, it involved persons with different educational backgrounds who had spent years focusing on human computer interaction/information technology concerns.
INSTITUTIONAL EMPLOYER (academic, consulting, corporate, governmental, non-profit): This category tapped the informant's current institutional employer. Most informants were identified with only one employer category, although 6 of the 82 were identified with some combination of academic and consulting/corporate. One informant was identified with three employment institutions: academic, corporate and governmental.
PROFESSIONAL ACTIVITY FOCUS (design, research, service planning & implementation): This category scheme tapped what each informant saw themselves as emphasizing in their current work. Most were identified with only one emphasis: 48 for research and 6 for service planning & implementation. The remaining 29 were identified with research combined with another emphasis: service planning & implementation for 5 informants and design for 24.
LOCAL EXPERT SAMPLE: The informants in this sub-sample included 31 public and academic librarians serving faculty and students at higher education institutions in central Ohio. Data collection involved a total of 38 interviews: 31 collected as informant journalings during attendance at focus groups and 7 consisting of wall poster reports from the focus groups. Interview identification numbers ranged from LOC#401 to LOC#438. Only journaling interviews were drawn on for this analysis. Local expert informants were identified by categories describing their institutions. Informants representing public libraries are simply listed as: (LOC#421, public library).
Informants representing academic libraries are listed in this format: (LOC#425, academic library; private religious; low enrollment; undergraduate)
where the three categories after academic library indicate in turn:
SUPPORT BASE OF THE INSTITUTION: public, private secular, private religious
STUDENT ENROLLMENT SIZE: low (669 to 4,591 students); medium (4,592 to 20,548); and high (20,549 or more).
PROGRAM EMPHASIS: undergraduate, masters, doctorate
MOST OF US SAID WE WANT TO MAKE A DIFFERENCE....
....by serving society, being a public good
...we need to start thinking of the work we're producing as more of a public good than a property that I create and own. We need to check our egos at the door, to say "This isn't my work, this isn't me, this is for the good of society, this [is] for the good of all." (INT#317, COMM; academic; research)
I would also say that having practitioners see my work keeps me a little more honest and focused on what's practical, making sure that this isn't just for my own amusement, that I'm doing something that is really going to affect the world in some way... (INT#360, HCI; academic & corporate; research & design)
....by designing and implementing services that serve people (users/ audiences) better
It [communication across our divides] would really allow us for the first time to answer the question of what do people want, under what conditions and how you provide it in terms...that are optimally satisfactory for them and enjoyable? (INT#329, COMM; academic; research)
And so I think that, again, for a long time I've shouted the advantages of action research as a way of becoming involved with the user communities so that what is discovered has some sort of impact on the performance of systems to help. (INT#319, LIS; academic & consulting; research)
Well, I guess I believe in approaches in which we go out and observe users and we seek to understand how they do things and we bring back that information into a process of design, whereby we seek to invent things that improve the situation. (INT#364, HCI; academic & corporate; research & design)
....by having an impact on system design
And then once we figure out ways to infer this sort of information, then we've got to figure out ways to use it effectively in the system design. (INT#305, LIS; academic, corporate & governmental; research & design)
We've got to be reminded, regularly, that this is where the rubber hits the road, for professionals at least. Of course, professionals have got to be convinced by us of the value of what we are doing and the merits of the application of what it is that we're recommending. We've got to be better at identifying what it is that we're recommending, perhaps publishing that in places, or promoting that, or advocating that in places where we're actually connecting more with the practitioner communities. (INT#369, LIS; academic; research)
....by serving the "bottom lines" of our institutional employers
I guess research always needs to be focused on the fact that ultimately we need to make money someday or else we're not going to get paid. Even for academics, ultimately you're trying to change the world in some fashion. (INT#360, HCI; academic & corporate; research & design)
....by having proven value when the "rubber meets the road"
So the people who are not engaged, who want to just be isolated and think about either the ideas or some theoretical system that you can prove mathematically, are semi-interesting. But it's an added value service to society when the cuts come and when the rubber meets the road... (INT#327, LIS; academic; research & design)
....we still struggle with the theoretical versus applied research divide
I love research for the sake of [research] and part of me says heck with practice, I love playing with ideas. Let's do something and see what happens and I actually, firmly believe that we as researcher[s] have that responsibility to just do pure research for the sake of research. (INT#318, LIS; academic; research)
I believe the university is in peril. And I believe it's important to protect what [has been] called the 'economy of knowledge'. It's not a 'screw you and leave me alone to do my science' attitude, but it's an attitude which says that there ought to be an institution in society for which the pursuit of knowledge is the top priority.... In a way, that sort of argues for keeping theory and practice separate. But that's only one model of what theory is and otherwise I think that the way theory and practice have been conceptualized in the past has been quite a problem for advancing the development of procedural approaches to change. (INT#379, COMM; academic; research)
MOST OF US AGREED THAT USER RESEARCH ISN'T DOING THE JOB
....we don't understand users, well enough, in the right ways, in ways that matter
I am not convinced we know a lot about users! I think we know a lot about librarian perceptions of users, which is not the same! (LOC#415, public library)
I think that sometimes designers have an egocentric view of users. They think that users are like themselves. (INT#337, HCI; academic; research & design)
Because we are so wrapped up in our own work that we haven't taken the time to go to them. I guess it is time. (INT#323, LIS; academic; service planning & implementation)
Well, both the systems themselves and the designers of systems enforce this rational model of hyper information seeking that very few users actually have. (INT#313, COMM; academic; research)
...we still have a limited understanding of users. Not necessarily the user in terms of their demographics, their education, or the things that people say have been studied. I think there are some deeper, more fundamental things we don't understand about users, especially using the context within different tasks and focuses within different contexts and different cultures. Many studies focus on surface matters. (INT#366 HCI; academic; research & design)
...we don't understand what users really want and they themselves can't express their needs in terms that a system can understand. Thus, we have this mismatch between how the user talks about, describes, or maybe cannot articulate a need in systems that constrict people, because their vocabulary and representation is different, perhaps, than what the user's is. So we get this constant mismatch of communication system error, user error, confusion between system and user because they don't talk the same language. (INT#342, HCI; academic; research & design)
The history of user studies has been the history of getting closer and closer to the actual use of anything.... We haven't got close enough to those yet. (INT#382, LIS; academic; research)
We know some things about users in general but not always specific... (LOC#425, academic library; private religious; low enrollment; undergraduate)
....user research is scattered, shallow, incoherent, not very good
...I do think that we lack an adequate understanding because we haven't put together all of the pieces of which there are so many in people's activities and information seeking and use of technology. (INT#356, LIS; academic; research)
Theoretically, we are rather naive.... Very few scholars stand back and try to get a big picture, a real understanding, of what this is doing for us in the long haul and where we're actually getting deeper rather than just riding along in a very shallow way. (INT#378, HCI; academic; research & design)
Theoretically it [user research] is not very good. (INT#349, LIS; academic; research)
I think it's about using static criteria. People conducting research, replicating in different contexts between situations and at the end not being able to pull everything together into something meaningful. I've seen so many research studies talking about specific information-seeking behavior, but what do they all add up and tell us about? Because every situation seems to be so different. (INT#357, LIS; governmental; service planning & implementation)
....it consists of endless itty, bitty unconnected pieces
...there are so many little, itty bitty, not terribly useful efforts going on...I'm just saying that they turn out nice little academic exercises, but I don't get the sense of them building on the field. They add maybe a data point. (INT#361, LIS; academic; research, service planning & implementation)
But they are all focusing on a particular attribute or aspect. They all need to be combined in order to make the information accessible across the spectrum as well as making it more relevant to address user needs. Work together, make links with other groups, disciplines. (LOC#412, academic library; public; low enrollment; undergraduate)
I would say that most of the...implementation is too micro-scaled or too narrow. It's not a concerted effort. Individual bubbles come up and there isn't an organized effort of doing research to have it come from a [standard] theme or platform. (INT#350, LIS; corporate; research)
[Researchers need to...] develop a common ground upon which all researchers build, vocabulary, respect, communication. (LOC#416, academic library; public; low enrollment; undergraduate)
....we are re-creating the wheel without making progress
Well, partly because of the language problems and partly because of other sort of structural barriers, one of the problems is that people spend a lot of effort reinventing the wheel and duplicating research that's been done in other disciplines simply because they're unaware of the work that's been done in other disciplines. So, there's a lot of wasted energy because people, and this is a problem with the dissemination of findings among practitioners as well, don't really understand what other people have done in other areas. (INT#313, COMM; academic; research)
There is not a sense of a cumulative nature of scientific inquiry as much in user [research] as there ought to be: further testing, further validation, further questioning and accumulation and building upon the work of others. (INT#369, LIS; academic; research)
I see a lot of replication rather than progression in the study of users. We just look at different environments. Environments shift a little and people say "Well, this must be new. Let's study the same issues in this new world." They don't phrase it that way, but in essence, that's what's happening. (INT#378, HCI; academic; research & design)
What I think is happening is that there are a lot of people...doing the same kind of [work] and they are doing minuscule little pieces of it that don't really match up with other people's except that they use the same buzzwords. (INT#314, COMM; academic; research)
Some of us don't have this practice of replicating research. Our ego gets in the way. (INT#350, LIS; corporate; research)
....we are not building on each others work, on what exists
It's mostly a matter of the inefficiency of reinventing of the wheel, people not using tools and ideas that would help them. It's a lack of intellectual exchange. (INT#334, COMM; academic; research)
We don't want to change anybody, but we want to open people's eyes to see, "look, hey! I'm actually doing something very similar to you, except we take different approaches." (INT#366, HCI; academic; research & design)
...you get a little frustrated at people going off and doing something, making mistakes that you may have made ten or twenty years ago. Without knowing the literature, without thinking through the issues, without getting advice and then coming to you to clean up the mess. (INT#327, LIS; academic; research & design)
When [user research] is not replicated so that we can further validate it and when it is not applied. User research brings us to some potential conclusions and then there are not opportunities to test those conclusions. It's really, really frustrating. (INT#383, LIS; academic; service planning & implementation)
....we don't agree on the meanings of our terms
You've got to define your subject and make clear what it is that you deal with. There is quite a wide inconsistency in the use of terms, methods and in the weakness or even absence of methodology. What results are interesting, but self-contained pieces, not a body of knowledge. That's not the way for progress. (INT#301, LIS; consulting; research & design)
What consensus do we have on what we call those people, because the literature uses a number of terms. Maybe those are just some semantic differences or maybe there is no difference.... It would be a real advance if we could build a consensus; it just seems that everybody that does research comes up with their own operational definition. Doing a meta-analysis is a really useful concept to build cumulative, quantitative results, but only if you have approximately the same conceptual definition for the phenomenon you're looking at. Everybody goes out and defines the concept a little differently, so fill-in-the-blank accuracy, satisfaction, is a problem. (INT#363, LIS; academic; research)
...what do we mean by use, because I think a lot of the uses in people's heads and how they are using the material is not tapped well in a lot of research. That is part of it, the definition of what is use. (INT#347, COMM; academic; research)
We haven't defined who the user is and we don't have a common understanding of what the "understandings" are. (LOC#428, academic library; public; low enrollment; undergraduate)
....we still struggle with the quantitative-qualitative divide
We [people in user research] all use very much the same techniques - from psychological research to market research. But how do we differentiate between research goals. We get into arguments framed around "metrics versus qualitative". But that's not really the argument we're having. We're having an argument about whether the goal of that work is to produce a statistically valid result or whether the goal of that work is to produce insight. (INT#375, HCI; consulting; research & design)
There are different orientations in our discipline, one that actually seeks explanations and one that seeks understanding. The one seeking understanding is more based in the humanities and the one that seeks explanations is more based in the science.... So in a way we are in this hybrid situation with having to deal with something that is both a part of the sciences and part of the humanities...and this struggle to find the right way to approach things has not been solved. (INT#339, COMM; academic; research)
We should be trying to move towards fundamental understanding, patterns that operate across different situations, things that don't operate across different situations...and we've got plenty of people who are really thinking like humanity folks and want to observe the uniquenesses of every individual and every thing, but that's not my particular interest. I pick science. (INT#356, LIS; academic; research)
...trying to get the new crop of students to really be respectful and knowledgeable about this other binary. There's a cultural, there's a quantitative. Because they're not going to be able to be sophisticated scholars if they're only seeing the lit review from one. If they don't understand what's being said.. and they don't understand there is a conversation that can be had and that there is much to be gained from that conversation and that research will improve that conversation. (INT#358, COMM; academic; research)
A person who's a humanist has a different worldview. They fundamentally believe that it's wrong to look for common patterns in people. They believe that each person is a great individual, endowed by the creator with beautiful abilities to be different. To them, it's very repulsive that social scientists try to put all these people together and look for a common thread. It's not just the disciplinary differences.... (INT#316, COMM; academic; research)
The interdisciplinary approaches would be very helpful. Taking very rigid, quantitative methods and surrounding them with much more enriching qualitative methodologies would be very helpful. (INT#376, LIS; academic; research)
I think what is hindering is to have a narrow view of methodology. I think researchers have to look, consider different perspectives. Like understand, for example, that qualitative data can be informative and can be powerful and not just quantitative data is useful. (INT#337, HCI; academic; research & design)
...requiring multidimensional training of graduate students. Not requiring that they take the quantitative course and then letting the qualitative course be an option. (INT#381, COMM; academic; research)
One camp assumes that the answer to the question is simply given in terms of attitudes and behavior changes and [the other] camp assumes that the answer to the question is in terms of meanings and interpretations. (INT#331, COMM; academic; research)
....we desperately need integration and synthesis
In some ways, I think that we've got to stop these endless individual studies and start integrating and start aggregating and start accumulating. It seems that everyone wants to try to find a foothold in something. (INT#318, LIS; academic; research)
[We] need time to assimilate [the research]. We all tend to "re-invent the wheel" rather than use what others have determined. Are our libraries so different that we need to all do something different? (LOC#425, academic library; private religious; low enrollment; undergraduate)
There needs to be some settling out of different approaches and some work at agreeing on some common boundaries and terms, language, taxonomies and so forth. Just so we can communicate to each other and divide up the labor and not necessarily even agree on things, but decide where it is we differ so we can have a dialogue about it. (INT#314, COMM; academic; research)
Again, it's time for the meta, that's what time it is. We've got millions of individual studies from multiple theoretical perspectives and employing multiple methodologies and I think some kind of critical, mutual analysis of this would really help. We seem to be so constantly reinventing the wheel, over and over. (INT#318, LIS; academic; research)
....we may not even know what the questions are
The hardest part of [it], I believe, is figuring out what the questions are and I think too often we take it for granted that we know what the questions are. (INT#331, COMM; academic; research)
Also, I think there's a tendency to do more number crunching and that is great, but it doesn't answer the questions that we cannot formulate yet. In other words, in order to collect the numbers, you have to have a very clear question in your mind. And most of the phenomena that happens between users and [information and systems] we don't understand, so we can't formulate a clear question. (INT#374, COMM; academic; research)
WHILE MOST OF US SAID WE CARED ABOUT BEING USEFUL TO USERS, WE HAD SOME FUNDAMENTAL DISAGREEMENTS ABOUT USERS AND USER STUDIES
....some of us said user voices are being systematically left out
I believe that the citizens and the users are being massively left out. ...it's not really in the best interest of the folks who have power and money; that's a fairly obvious one. (INT#379, COMM; academic; research)
The finance director isn't so keen to promote knowledge sharing or understanding users' needs. It's ultimately about whether we're meeting the financial targets. And sometimes those financial targets are more important than users. (INT#357, LIS; governmental; service planning & implementation)
In some sense I feel like the public doesn't have much voice. Industry has an agenda of selling more stuff and they're inventing stuff and it gets deployed and there become expectations for access and use and a large assumption that access everywhere all the time is a good thing, which I actually think is not true. And I think if we could somehow begin to get a more differentiated worldview on when do we want access and of what sort and in what ways as a national agenda. If we could actually put some of that research on the table and, as researchers, industry is not going to do that, because it's not in their best interest. It has to come from the research community if that set of questions is going to come to the floor and be visible and then can push back or provide direction to industry and say "hey look this is where people really want you to be doing your work". (INT#372, HCI; academic & consulting; research & design)
That is another reason why [we] stay away from the users a little bit. Most people who do research in mass communication don't really like the industry and don't really like the manipulation of people. (INT#347, COMM; academic; research)
[But when it comes to incorporating the user's voice...] even when people want to, they don't know. Even when organizations want to do it, they don't know how to do it. They don't know how to do it, they don't know how to assess it when it happens and they don't know how to think about it. (INT#379, COMM; academic; research)
....some said trying to understand the elusive "user" is a seriously challenged mission
The definition of who our "users" are is a process that takes too much time and effort because the "user" is dynamic and will have changed by the time it is defined. (LOC#410, public library)
I think when we take into account, [when] we evaluate and we tend to look at the direct, observable things, we don't see the stuff that's moving underneath. As I mentioned, eighty to ninety percent [of an iceberg] is underneath and you cannot see [it] so easily. You can't measure it. It's really hard to quantify it. It's the invisible stuff. (INT#377, LIS; academic; research)
We know some things about users in general but not always specific. Also user needs are constantly changing, [you] need to keep 'hitting' a moving target. Not all users are alike; therefore if we categorize users in a group, we are missing needs of many of our users. (LOC#425, academic library; private religious; low enrollment; undergraduate)
You have X billion people and everybody has a different life, so I think that part of the reason that we don't really get into that very deeply is because it becomes so individualized that it would become pretty much worthless. (INT#347, COMM; academic; research)
I don't think it can be answered. It's an on-going process. It's not a destination. It's the journey, to use those nice clichés. But I think it's also because it is something that is unanswerable. It really is, you're trying to measure change. (INT#327, LIS; academic; research & design)
People are always changing the way they live and do things a certain way. How do you predict that? (INT#353, HCI; academic & corporate; research & design)
....some challenged the focus on users
I stick to my basic assertion that the very concept of user is a misleading concept. A user is only "one": one moment, in one time, of one person who in reality has several roles. (INT#301, LIS; consulting; research & design)
...as soon as you say it's the "user," you focus on a person instead of an activity. First we have to focus on an activity in the world and how we grow the activity in the world. This is an example of where we've [got] the tension between calling things human-centered, when in fact it really is more appropriate to call it "use-centered" or "practice-centered." (INT#322, HCI; academic & consulting; research & design)
I wonder if there was any real user research at all before systems were developed. (INT#383, LIS; academic; service planning & implementation)
We rarely look at the non-users... We tend to go with existing user groups. Also, we tend to fine tune whatever it is we're designing for those existing user groups, not recognizing that we have generations that are going to come along with very different practices. Any assumptions we have been making about user groups are going to be seriously challenged when the next generations come forward. (INT#376, LIS; academic; research)
You do not have a means of getting to your non-users. We don't stand in the malls, like the banks do. And who are our non-users? In academic institutions, we know who potential users are and, therefore, who our non-users are and we have a means of contacting them, of offering them a carrot. But in many other situations, we haven't a clue who our non-users are. (INT#343, LIS; academic; research)
If we're only modeling and designing for people that we already know are comfortable using the systems and services we design, it's a self-fulfilling prophecy that people will use them. The people who use them are the people who find themselves able to use them, whether it's because it's useful or easy for their purposes. Then we go to those people, find out what they need and refine our systems and services. We're probably not addressing the diffusion of access that we always talk about. We don't build as much critical mass as we could. We tend still to work with specific populations. (INT#376, LIS; academic; research)
I think also if you just focus too much on the user, you're missing the whole picture, because it's not just the user. It is the interaction with the system. Just focusing on users is not necessarily going to have all the answers. (INT#349, LIS; academic; research)
...if you carry the user as the unit of analysis too far, you end up in this world where nothing is real, it's all a projection from inside... (INT#344, LIS; academic; research)
....some challenged whether user self-reports can provide useful data
... often people do not know why they do what they do, or they may not have an unbiased perception of what makes them [interact with media]...people often do not fully have access to their inner motives, motivations and so on...if people do have access and know why they do what they do, they may not tell you...people might lie to you or might tell you half of the truth because they want to create a positive impression of themselves. (INT#339, COMM; academic; research)
...a lot of it might be in someone's head and they might not either be comfortable sharing that with us, or they might not know how to talk about it or even that it's happening. (INT#360, HCI; academic & corporate; research & design)
The answer to that question is more valuable in a lot of ways, but it is also essentially impossible to get by asking the naive user what they need. (INT#359, LIS; academic; research & design)
The problem is that it does make an assumption about human nature, in that it makes a very enlightened assumption about the human animal as a very rational being that knows precisely what it wants. It knows what it needs, so it only needs to go out and find answers to these questions. But that's not the case. We are far more irrational and far more disorganized and far more chaotic... Also, not only are we not as rational as you might want us to be, we also have far less understanding about ourselves and about our motives and needs and we always think that we are so clever that we know why we do things. (INT#325, COMM; academic; research)
....some said we need to trust and listen to users even more
So I would like to see research that begins from the presumption that people have unsung ways of managing too few resources and that those historical strategies will be brought to new practices, including digital media. (INT#381, COMM; academic; research)
I think it makes a lot of assumptions about people. Perhaps correct assumptions, but often unkind assumptions that they're lazy, that they're not critical enough, that they don't know what they're doing. People are probably more resourceful than most of us imagine. (INT#314, COMM; academic; research)
It seems the [other] groups focus on the difficulty [that] continues to lie with the users, not our inability to understand them... Our attitudes towards users is that they need to learn from us; of course, we have nothing to learn from users... (LOC#415, public library)
Because I think a lot of the research is predicated on the idea that, especially in areas like computer-human interaction and so on, that people are experiencing too much information or that people are reacting in a certain way. And I'm not certain that that's always the case. (INT#370, HCI; academic & corporate; research & design)
What I think the challenge is, that rather than thinking about what a person needs in terms of sources and facts, the challenge for the system is to actually understand more the nature of the internal constructings of [what] that user wants to do. And how do you enable a user to talk to a system about his or her internal constructings rather than saying I want information on the [topic]? (INT#318, LIS; academic; research)
....many concurred that studying "users" is very, very hard
It's very difficult to study. You can study some part of it, but it tends to be only a very small part of the bigger picture. (INT#366, HCI; academic; research & design)
The studies are incredibly complex; there aren't well defined methodologies for doing it.... (INT#326, COMM; academic; research)
It's just more complex than people have been willing to face and people who fund it, say the network perspective or the internet users perspective, they haven't really wanted to deal with that level of complexity because they think they'd have trouble understanding it. (INT#329, COMM; academic; research)
I think different situations create different needs. I think the same person in different situations would have different needs. So I just think it's really hard to study. (INT#360, HCI; academic & corporate; research & design)
Another reason why it hasn't been answered is that there are so many confounding variables. Particularly in naturalistic inquiries, it's very difficult to control all of the confounding variables. It's a very complex question with lots of things that influence the way people interact and it is difficult to isolate those things. (INT#303, LIS; academic; research)
Everybody thinks they understand people, after all, they are people. And that's wrong. First of all, we don't even understand ourselves. Most of our behavior is subconscious, so we are often unaware of the behavior, let alone the causes. (INT#371, HCI; academic & consulting; research & design)
They...[users] are very messy and they require a great deal of ambiguity and tolerance for ambiguity. (INT#343, LIS; academic; research)
Understanding internal states and reconstructing them, observing them and understanding them, is a very tough thing because there's nothing you can directly observe. You cannot look into the minds and hearts of people and determine what it is that makes them do what they do. (INT#339, COMM; academic; research)
...I think the fundamental difficulty and this might seem very simplistic to say, but the notion of peering into people's minds to see what's there and to see how what is there has changed, is very, very complex. And while I think over the years our field and other fields have developed theories and approaches to knowledge representation and conceptual change, my sense is they're relatively weakly developed and I just think that our area in the study of human information behavior hasn't really addressed that. (INT#318, LIS; academic; research)
....and some said studying users is expensive
User studies are really expensive. And so in many ways and they're typically in our field very limited, they have to be. Because they're expensive, they're limited to small samples. (INT#305, LIS; academic, corporate & governmental; research & design)
It's just that doing one study takes a lot of time and effort and, of course, huge amounts of money would be good, too. (INT#326, COMM; academic; research)
FOR THE MAJORITY OF US WHO FAVOURED USER STUDIES, WE HAD SOME FUNDAMENTAL DISAGREEMENTS ABOUT OUR PURPOSES
....there were tensions expressed between profit versus service orientations
It's just that those people all see each other as adversaries for one reason or another and historically, they have been in various ways intellectual or economic adversaries. They have very different missions, each of which is important, but they don't use the same vocabulary. They are suspicious of each other. (INT#359, LIS; academic; research & design)
It includes the interests as well as the whole manipulation issue, but what [companies] want to do is provide the product basically at mass...it's getting the product to consumers and getting them to buy the product. Whereas with the mass communication researchers, I would say, most of them are concerned about whether the social system is functioning correctly... (INT#347, COMM; academic; research)
...because the designer is strictly looking at being able to sell the product and making the user happy. (INT#355, HCI; academic; research & design)
The only people who think about the user are people who do commercial products because they have to. Otherwise they may not be competitive enough. They push it as far as they can without looking at the user, but they get to a point when they have no choice. In our language rhetoric, the user is the center, that's the most important thing...[But]...they're not there to help users or to make life better for other people. (INT#340, LIS; academic; research)
....and within service, between serving user goals versus enticing users to our goals
What we need to do is, we need to train those patrons better to learn the system and to learn library language, rather than us trying to change the language to something they understand. (INT#315, HCI; corporate; research & design)
That to my mind is the number one problem, there is a disconnect between what people design as opposed to what people need. (INT#343, LIS; academic; research)
We tend to look at those issues in terms of ergonomic factors or design factors after we create a system rather than have people intimately involved in a development of systems. (INT#313, COMM; academic; research)
But from the user's point of view [regardless of designer intent] it's a very rich environment to get what you want, whatever that is. (INT#336, COMM; academic; research)
Most librarians' attitudes towards virtual or digital reference is the field-of-dreams syndrome. Build it and they will come. In my opinion, that is not true. (INT#363, LIS; academic; research)
The one thing I think that bugs me is that users, with the new system, they don't really understand all of the things that you're trying to do for them. But of course they will once they've had an opportunity to use it. (INT#304, HCI; governmental; research, service planning & implementation)
We tend to think that everybody should be using the IT that's available, rather than first trying to understand how people perceive it and what they're really trying to do. What we push them towards may or may not fit. (INT#376, LIS; academic; research)
[Users] don't know what they don't know because we can't show it to them. They're never going to find it with the way things are being designed today. We need to find ways of exposing things that users don't know are out there. (INT#310, HCI; consulting; research & design)
And we can, in any of these complex activities, distill it down to a simple principle of access to the right information and then an individual will do the right things. But our studies of human behavior and users are continually telling us that that isn't the case, that it's more complicated than that. (INT#369, LIS; academic; research)
The thing that's been really interesting in the area of say digital libraries is how we get people to look at resources that are more curated, or at least how we get them to evaluate what they are using a little more carefully than they are right now. Because you look at the typical undergraduate, they might use something like Wikipedia without thinking twice about the veracity of what they're seeing. (INT#370, HCI; academic & corporate; research & design)
Librarians are kind of obsessed with factual accuracy and there's nothing wrong with that, but I think that it may be that some users are actually fully aware that things are not completely accurate but they find the source friendly or entertaining or whatever. (INT#314, COMM; academic; research)
....we disagreed on whether the results of our work can or must be one-system-that-fits-all
You need to figure out one good way that would work for everybody and it has to be pragmatic. [Computer corporation] cannot keep designing an interface for every different user. What they have to do is look for the common denominator and come up with something that most of the people can use. And there is no other way around that. (INT#316, COMM; academic; research)
But what I am going to say, is that if there is not a translation of this theoretical background, constant and almost an agreed upon translation of that theory into an application, then there will be confusion, false starts, [the] right hand won't know what the left hand is doing. (INT#332, LIS; governmental; research, service planning & implementation)
I think users mature in their usage of systems. So what you design one year might be old hat the next. (INT#304, HCI; governmental; research, service planning & implementation)
Even though every user thinks he is different and every discipline thinks it is different... If the world was chaotic, we would not be functioning. Underneath there are some kinds of basic needs and basic approaches. (INT#350, LIS; corporate; research)
MOST OF US POINTED TO ENVIRONMENTAL FACTORS THAT MAKE EXECUTING AND APPLYING USER STUDIES DIFFICULT
....the speed of changes in technology, society and people
I think the environment has changed so rapidly that even if we have some answers from the past, they are no longer applicable. (INT#332, LIS; governmental; research, service planning & implementation)
...to some extent I think the technology and developments in technology were much more rapid than research can keep up with. Technology is developing constantly at a very unprecedented, fast rate and it's hard to harness it. (INT#337, HCI; academic; research & design)
I really think that the information environment has changed so radically with the advent of these kinds of searching capabilities, with the internet, that it's a whole new ballgame really, anyway, as far as users. (INT#332, LIS; governmental; research, service planning & implementation)
I think it's remarkable that we are able to even talk about it given the speed at which technological and cultural change is taking place. No sooner do we release a technology than all kinds of social and cultural practice grow up around it. (INT#367, COMM; academic; research)
I wouldn't limit it to just technology. The technology is changing. The information is changing. Today's information is not dead, it's not static...And people are also changed...collective intelligence is evolving and cultural characteristics do evolve and change. So it's all three of those things that are changing, not just the technology. (INT#327, LIS; academic; research & design)
Any time you have radical shift in the information infrastructure shaping a society, it's going to take a period of time of social and cultural experimentation before people fully get on top of it. (INT#367, COMM; academic; research)
I think the environment in which they're [users] seeking information changes. I mean it has certainly changed very quickly over the last ten years, since the Web became fairly ubiquitous.... (INT#304, HCI; governmental; research, service planning & implementation)
The environment continually changes because we use things, our preference change based on that use.... It's a constantly moving target right now and we're in a sort of closed feedback loop. (INT#375, HCI; consulting; research & design)
....the resulting generation gap
... people born since 1970 process visual information very differently. People born, you know, another decade or so later are really operating differently; they're multi-tasking, cognitively different. (INT#338, COMM; academic; research)
...I would say right now there's some pretty interesting generational changes happening and that kids seven to eleven or thirteen, fourteen year-olds are in very radical ways interacting with information in ways that are different than the people who came before them. (INT#372, HCI; academic & consulting; research & design)
[The] current generation of librarians are so print focused, [it is] hard to envision the digital changes. (LOC#423, academic library; public; medium enrollment; undergraduate, masters, doctorate)
The graying of the profession. Most librarians are print oriented, [they] don't understand very visually oriented users. [The] world of information changes so fast. (LOC#405, academic library; public; high enrollment; undergraduate, masters, doctorate)
....the lack of funding for studying users
The first one, again, is a very practical one, time and money. There is not enough funding. I know that with [organization's] funding, every researcher and library researcher has questions they wanted answered and wants to get out there to get the answer. (INT#328, LIS; academic; research)
The barricade is being able to create the test situation and test bedding and apparatus and things like that [that] will allow us to test in the first place. And again that comes from lack of funding.... So that's the kind of resources that are required to answer these questions that are trans-disciplinary and we just haven't had that kind of support. (INT#329, COMM; academic; research)
I think it's because it's hard, because it requires the upfront investment in time and we're all pushed to have short-term results. So long-term studies of things, especially this emerging phenomenon, it's hard to get funding for them and it's hard to get results quickly enough to satisfy our masters, whoever those masters might be. (INT#370, HCI; academic & corporate; research & design)
We simply don't have some of the basic tools that many fields have. (INT#363, LIS; academic; research)
...we only have the time and money to use one approach. And so whichever approach somehow gains the upper hand, we then lack the time and money to take advantage of other approaches. (INT#371, HCI; academic & corporate; research & design)
If you have research that is related to technology or computing, it is more likely to get funding. But if I say I want to go look at how the [ethnic group] are living and what kind of information resources they consult or are unable to consult, I am not sure such a proposal would be valued and funded. (INT#350, LIS; corporate; research)
Well, I think in my field, it may not be true in others, but in my field I think the big reason is money. I mean everything has to be done in this sort of penny-ante way. (INT#356, LIS; academic; research)
I think the reason people don't even aspire to do larger scale is that there just isn't a practical way to do it. They don't even think of it because they're used to thinking literally that a five thousand dollar grant is a big deal. (INT#356, LIS; academic; research)
....and the constraints imposed by the restrictions on human subjects research
One thing that's also structurally hindered a lot of research is the extent of [the Institutional Review Board's] influence over what can and cannot be done. I understand the need for IRB and the protection it provides the university, but I think the development of IRB has overextended its bounds of necessity. I feel they have moved from making sure we're not blatantly hurting people, to making sure that every bit of the microcosms of the participant's world is safe. (INT#317, COMM; academic, research)
The biggest hindrance I think would be IRB procedures and turnaround times. I'm all for protecting the users and informed consent, etc., but the actual burden and delay it places on getting research in process is significant. People steer away from research with children or special populations. (INT#376, LIS; academic; research)
THOSE OF US WHO FAVOURED USER STUDIES HAD AN UNENDING LIST OF DIFFERENT SUGGESTIONS ABOUT HOW TO IMPROVE THE USER STUDY ENTERPRISE, INCLUDING
....more theories and models
If you have a theoretical foundation...you at least have an idea that helps you to explain why people do what they do. If you don't have that, you are depending on inductive information gathering...you have no way of proving whether this is true or false, whether this is accurate or biased information that they gave you. You just collect.... Even if the theories are flawed, even if the theories don't cover everything, it's much better than relying exclusively on the self, on the questionnaires, on what people tell you. (INT#339, COMM; academic; research)
....better research designs
Just studying users may not come up with anything. You have to be very, very good at figuring out exactly how to do the research design. Otherwise, you could end up with very little. (INT#349, LIS; academic; research)
....better samples
I think many of the studies conceptually don't identify the population. When I look at our research literature, often we simply sample and not necessarily well-defined samples either. Indeed, those are users but the issue of representativeness is still conceptual. (INT#363, LIS; academic; research)
One thing that's hindering research is that we don't really have good samples in our populations.... We're using students too often. We're using people with really good cognitive skills too often. We're using people from high socioeconomic classes too often. We're fooling ourselves with a lot of research that we see that's being published. (INT#355, HCI; academic; research & design)
....more direct observing and inductive, qualitative work
I think we need to do much more qualitative research in order to understand what's going on there [between users and information and communication technologies]... (INT#374, COMM; academic; research)
There's a tendency in recent years to rely too much on cluster analysis and regression analysis. And the thing that concerns me is that the researchers tend to depend on just pulling all this stuff into a big pot and having those models come out with the answers without understanding what those answers really mean. I think that the researchers need to get in and try to wrestle with the data a little bit more, not just depending on some computer model to give them those answers. And I think there's kind of a laziness in pursuing that kind of modeling. (INT#341, LIS; consulting; research)
...the way to find out about how people really behave is to observe them...people who are trained in the social sciences, especially psychology, don't like these kinds of observations because there are no controls. You cannot control what is happening. (INT#371, HCI; academic & corporate; research & design)
Also, the role of theory is very important and although I'm not going to define theory, we still don't know enough to do deductive research. (INT#340, LIS; academic; research)
....less qualitative work
I see a lot of people opting for what I consider the easy way out and doing the kinds of simple descriptive and qualitative work and I don't have objections to qualitative work at all, because I think it helps fill in important understandings, but I think there are lots of times where people are doing things that could have been done in a far more informative way. And that's troubling to me, that they opt for the easy way out. (INT#346, COMM; academic; research)
I see a lack of understanding, both of the basis of ethnographic epistemological methods and that people are so fascinated by them that they just want to [take] and apply them. Then you get tons and tons of qualitative data and people just get involved with it and they don't analyze it. Even in dissertations, I see only minute analysis of all that qualitative data. (INT#382, LIS; academic; research)
....more segmentation of users into different sub-groups
I know some people also at the university and they try to formulate a general understanding of HCI. I don't believe that is possible. You have to look on a specific type of users, you have to understand this group of users and then you can formulate theories about how they will behave, what they will do. (INT#345, HCI; academic; research & design)
Those ideas built into some kind of two or three part experiment is what I would like to see. We can go a long way with that if it's done well, if we sample the right kinds of populations, if we sample elderly people, low socioeconomic status (SES), high SES, students, non-students, different kinds of information needs, different levels of experience and so forth. If we do it right, we can really get at some good answers. (INT#355, HCI; academic; research & design)
In my opinion, the idea that there is an information user or communication user or library user is too broad. You need to have more targeted groups of people and that would be a better way to get at what they are doing. (INT#311, HCI; academic; research & design)
....less emphasis on user segmentation and sub-groups
...allow for greater understanding of the multiplicity and hybridity of others. Rather than saying 'Oh, I'm different to you because I'm black and you're white, male, you're female.' But more fluidity of the hybrid position then would allow you to see actually all people are this way. (INT#358, COMM; academic; research)
The existing disciplines, marketing for example, thinks the way to find out the answers is to divide the world into small groups: demographic groups based upon country, age, social economic class. And that doesn't tell you anything about people's behavior. Second, they believe that you get answers by questionnaires and by focus groups. And those don't tell you about real people's behavior. They tell you what people think, or what people would like you to think they think. They don't really talk about real behavior. (INT#371, HCI; academic & corporate; research & design)
My one pet peeve is that traditionally [service research and industry] has such a strong epidemiological influence. Their tendency is that audiences are distinguished demographically. While demographic differences are of enormous importance, the fact of the matter is often the within-group differences are as great as between-group differences. (INT#334, COMM; academic; research)
....more studies of users interacting with specific systems and technologies
User studies need to begin to look not only at people in the context of their lives, but in the context of their lives using information systems and contending with this information retrieval problem. I think that some of the user studies have gone so broadly and taken such a broad definition of information, that they aren't applicable. (INT#335, LIS; academic; research)
Research that is more inductive and specific rather than general, trying to solve problems for all the world and everybody, could be an improvement. The more general the system is, the less effective it is. It cannot cater to different groups of people. (INT#340, LIS; academic; research)
....fewer studies that focus on users interacting with specific systems and technologies
Well, I think that if we think about this user/system division, then a focus on motivation would take us back to the user and away from the nitty gritty of what happens in relation to internet searching, which, interesting though it is, doesn't really add an awful lot to the user, per se, as persons, as motivated beings. (INT#319, LIS; academic & consulting, research)
I think some of the technical disciplines sometimes underestimate the importance of understanding users and the socio-technical systems within which they operate. They regard that as this sort of fuzzy, nonessential part of designing these systems. There's too much focus on the technical. (INT#321, HCI; academic; research)
The non-stop repetition of "one more study on new technology." There's a lack of real progress there. (INT#378, HCI; academic; research & design)
I think that because information technology is such a powerful phenomenon in the last 25 years or 30 years, it gets the majority of the attention. (INT#350, LIS; corporate; research)
When we do a study to determine the use of [specific media of information], for example, in the library of an academic institution or any other kind of an environment, I'd like to do the research broadly to determine what other sources people use to get information, how do they identify that information from a broad range of sources, not just the library. (INT#341, LIS; consulting; research)
....more emphasis on contexts and situations
A person does not exist in a vacuum. A person or a group of people exist in a contextual environment. When you study users, you can't just study users on whatever prime choice you set up, which would be what I consider as the surface, shallow-veil users. You have to study users just as if you study communication. You have to study the dialog, the context and then probably even bigger context. (INT#366, HCI; academic; research & design)
Really understanding the contextual factors involved and people's needs and how they go about things [would help]. There's some of that out there, particularly in specific cases, but I just think we don't know enough about that. (INT#360, HCI; academic & corporate; research & design)
....to study specific moments of information seeking and using
I think it would be valuable to look at information seeking and all the way down to the moments of access of information. That moment when somebody is actually searching. Because it's not enough just to look at searching on its own and not enough to look at the general information seeking on its own. But, again for practical reasons, they tend to get studied that way. (INT#356, LIS; academic; research)
Why don't we see a lot of research on the activities and the processes, per se, rather than the technology or the user? (INT#302, COMM; academic; research)
I still think we are short on much investigation into what people do with the information they discover. (INT#319, LIS; academic & consulting; research)
What I feel is dramatically missing is real information use and its influence over future information seeking. (INT#301, LIS; consulting; research & design)
....to get outside university labs
Well the reason I think it's important to look at everyday phenomena is they actually occur. Often I think when you set up a study, you can assume something's important or you can assume a great number of things about how it takes place and in gathering your results, lose sight of the fact that that isn't really how it happened. (INT#370, HCI; academic & corporate; research & design)
You can't just be studying what undergrads do.... If there is a particular problem, you need to roll up your sleeves and get out there. (INT#377, LIS; academic; research)
I think that people have got to get their hands dirty and not just try to find a problem that's comfortable to them. (INT#308, COMM; governmental; research, service planning & implementation)
....more longitudinal studies
How any given answer about users is not the answer. The answer is a process, which evolves over time. We need longitudinal understandings. (INT#327, LIS; academic; research & design)
And I think that the reason it's not addressed is that it's very hard to do five, ten, twenty, fifty-year studies. And so that's the practical reason why we haven't come up with these good distributional models or transactional models. (INT#327, LIS; academic; research & design)
I think more longitudinal studies would be really nice but those are really complicated and difficult to run for many legitimate reasons, but I do think those would help us. (INT#360, HCI; academic & corporate; research & design)
So I would say longitudinal studies and in-depth qualitative studies. Long-term ethnographic work is what I mean. (INT#374, COMM; academic; research)
MOST OF US SAID THAT INTERDISCIPLINARY COMMUNICATING ACROSS THE THREE FIELDS THAT DO USER STUDIES ISN'T GOING WELL...
....we just ignore each other
I don't think they struggle with each other. I just think they ignore each other. (INT#382, LIS; academic; research)
Well, I think that it's the typical silos of academia and the fact that we don't tend to read a lot in each other's literatures or appreciate each other's points of view as humans. (INT#329, COMM; academic; research)
Well, sometimes I find it frustrating because they're not talking to each other. I don't mind the multiplicity of perspectives, but the problem for me is that they're not communicating with each other, so you have to be the bridge maker in your own mind and I don't think I should have to. (INT#374, COMM; academic; research)
....we have no respect for each other
Those entrenched approaches and the lack of conversation between and about the two approaches is a major problem as is the lack of respect for each other's work. (INT#335, LIS; academic; research)
And another big piece of it is the lack of respect on both sides. The user studies people are considered too touchy-feely, they're way out there, they're fluffy and the technology people are too technology and system-oriented, they don't look at users. I've heard people call tech people propeller heads. These kinds of put-downs don't enable the collaboration and really separate us. (INT#335, LIS; academic; research)
....there are simply no rewards for interdisciplinary contact
I think part of the problem of the disconnect is the power structures in those three areas. There is nothing whatsoever to benefit the academics, no rewards. There are limited rewards for crossing disciplines. (INT#343, LIS; academic; research)
One of the basic problems with that is that in our current social science situation, there's no real benefits, or limited benefits, to people who bring together those disciplines. (INT#313, COMM; academic; research)
....there's no funding or mechanisms to support interdisciplinary work
There are actual institutional barriers to it, despite institutional rhetoric that they want more interdisciplinary, cross-perspective advances. When you look at funding decisions, they don't support cross-perspective interaction and activities; they support single activities. (INT#322, HCI; academic & consulting; research & design)
It does seem that the mechanisms for coordinating scholarly activity are insufficient in the multidisciplinary context here. There is really no scholarly journal, organization, or peer review process that can coordinate all of the scholarly activities that are going on. It's sad in the sense that you run across scholars that are doing work you might consider outmoded and say, "Weren't we doing that forty years ago? What more do we have to learn?" (INT#336, COMM; academic; research)
Not that there are many approaches, but there isn't an effective way of bringing them together, comparing them and benefiting from the multiplicity. Even if there is creativity or innovation in some area, it's missed in some other branch of the inquires. It's a waste of effort, to a certain extent. (INT#351, COMM; academic; research)
....and few publishing outlets
...other groups committed to working in an interdisciplinary way actually carried out the interdisciplinary research successfully. But then when they went to try to publish it the journals wouldn't accept it and so what they had to do is retreat back to a packaging of their research along the disciplinary boundaries. (INT#372, HCI; academic & corporate; research & design)
We now have the conditions to completely reshape scholarly publishing, at least as far as research papers and journals are concerned. But the system doesn't want to do that. (INT#301, LIS; consulting; research & design)
MOST OF US CONCURRED THAT INTERDISCIPLINARY CONTACT IS HARD, HARD, HARD....
....the isolated silos of academic disciplines make interdisciplinary contact very difficult
Now, of course, in the academy there are many, shall we say, structures that make it difficult to do [interdisciplinary work]. Well, for example, people that do design work and people that do social science work are often in different departments and have different sorts of incentive structures, incentives for where to publish and what kind of work is valued and so forth. (INT#364, HCI; academic & corporate; research & design)
The main thing that's hindering is the pressure within each discipline to stay within each discipline. (INT#382, LIS; academic; research)
But the pressure towards disciplinarity enters into things at every level. It pushes our faculty towards commitments to their home departments rather than our interdisciplinary programs for their publishing, for their writing, because that affects their promotions. It forces students eventually to sort out which discipline they belong to if they're going on to the PhD programs or if they're going on the job market they're eventually going to be funneled into a discipline. (INT#367, COMM; academic; research)
....this is compounded by fierce turf wars
And they really don't want to discuss all the disciplines because if you do then it will reduce their authority. (INT#345, HCI; academic; research & design)
One of the things that is really important is letting go of some of the boxes that we put around territories.... In some ways, we're protective of the territory rather than looking at what many other disciplines have done. (INT#318, LIS; academic; research)
People are very protective of their turf, they want funding to come to them and not someone else. So they might want to make [their research] different. (INT#357, LIS; governmental; service planning & implementation)
I think it's a fact that probably the computer scientists, information scientists might feel more ownership of this arena. They probably wish that the libraries would just keep working with dusty old books. (INT#322, HCI; academic & consulting; research & design)
There is also this tremendous sort of turf and terrain battle that we still have. The best minds in communication aren't talking to the best minds in psychology, the best minds in history, etc. We've all got our own little niche networks and that doesn't help us talk to each other. (INT#378, HCI; academic; research & design)
...and the explanation I found there for the lack of communication between these disciplines was that they are...weak in the political system of the university.... So it's seeking to demonstrate one's academic viability, which is at the back of everything, even although that may be unconscious. (INT#319, LIS; academic & consulting, research)
I think the difficulties arise more or less intra-institutionally, in the sense of political infighting over the territory.... Also, cases where there have been quite overt political battles over whose territory it was...I think that kind of turf war in institutions, if anything, is likely to proliferate. And because LIS departments are relatively weak in research terms in many institutions, they are going to find themselves deprived of the opportunity. (INT#319, LIS; academic & consulting)
....disciplines and fields are separated by different worldviews, assumptions and vocabularies
...the discipline[s have] different thought processes and mindsets and ways of grappling with information and interacting systems. (INT#383, LIS; academic; service planning & implementation)
And so it's good to bring a multiplicity of approaches to understanding and the general problems and I suppose the only problem, might be getting the different groups to communicate with one another in a language that they all understand. (INT#305, LIS; academic; corporate & governmental; research & design)
The major stumbling block is having many different words for the same thing, the lack of an interpreter in many cases and the difficulty and dearth of boundary spanners to do that. (INT#343, LIS; academic; research)
Each of these different disciplines...because you go through a gamut of smaller to larger disciplines, oftentimes they have their own vocabulary and their own representations. And if we talk to each other, maybe we could come up with some representation that was finally of some greater value than the representations we have now. (INT#342, HCI; academic; research & design)
I think another real problem with interdisciplinarity is ambiguous meaning. I'll give you an example. As a person [who] majored in the humanities, I have a meaning for the word ontology. It means something entirely different to computer sciences...we have to take the words that we have and put them in new meaning and that really causes a lot of confusion. (INT#344, LIS; academic; research)
I see that a particular topic actually appears across disciplines sometimes with the same name, but often with a different name because each discipline thinks that this term is something that they have developed and it's some kind of insight they have developed. (INT#306, COMM; academic; research)
Sometimes the same thing is called different things in different areas. That's annoying because you think, "I've done that, that's exactly what I'm talking about," but they use a different terminology for it. If you really think about much of academic work, there's jargon, a specialized language that academics use in their own field. That specialized language is used to address their peers and other academics in their circle or sphere of influence. Now, with the current state of technology, there is no agreed upon vocabulary. It's definitely in flux. Each field calls it something else when, in fact, they may all be referring to the same construct. (INT#316, COMM; academic, research)
....it's hard to know the rules on the other side of the fence
Different disciplines have different cultures, different norms. They have different incentives in different fields. How would you achieve your stature or achieve your reputation to be even tenured or promoted when the criteria are quite different in other fields? (INT#366, HCI; academic; research & design)
I don't know the rules for publishing there. There are always those unwritten rules about the structure of something or the way you have to spin something in particular. And I don't know those rules, so it's very difficult for me to actually get my foot in the door in that field, without knowing somebody who can help me. (INT#360, HCI; academic & corporate; research & design)
The other thing is that different professions, different groups have critical taboos that are not the same from one profession to another. So it's hard to keep track of what is permissible in one area and what's not permissible in another. (INT#313, COMM; academic; research)
...it seems like the different fields have different criteria for success, so what motivates one field does not necessarily motivate the other and it can be challenging to come together on a team where everyone's needs are being satisfied. (INT#360, HCI; academic & corporate; research & design)
....academic reward structures force us to be non-collaborative
We feel that we can't get published unless we come up with a new term to describe something. (INT#318, LIS; academic; research)
People have to find a different niche that nobody has ever looked at, give it a name and it doesn't connect to anything else around it. (INT#340, LIS; academic; research)
Doing something about the tenure system in universities that drives people to do lots and lots of studies. I mean not that there isn't value to them, but I think if people were only or mainly driven by intellectual curiosity we would have fewer studies and better studies. I think that there's kind of a frenzy of activity that needs to be somehow settled down and I know this is outside of the boundaries of our community, it's a societal issue, but the whole publish or perish regime drives a lot of marginal research not just in our field, but in other fields. (INT#314, COMM; academic; research)
Essentially I have to do what I can to get a product, a research article, together and out the door. I'm dealing with information management that has to do with twelve other disciplines. It would take me a lifetime to read through all their histories and incorporate them appropriately, not to mention how long the article is and [whether] the editors would accept it; it would take forever to read through that. (INT#317, COMM; academic, research)
....as a result, we seem to all live inside our disciplinary blinders
A side effect of pigeon-holing folks into a particular methodology is that education tends to be very narrow. I only know that perspective and even if I were going to practice that perspective, having a broader view lets me do it better. In some sense it's self-perpetuating. (INT#352, HCI; academic & consulting; research & design)
And really it's only the rare student that is privileged to see these things from a more holistic perspective. (INT#329, COMM; academic; research)
To a large extent either their political assumptions ahead of time about the answers that they want, or their theoretical assumptions about the nature of communication and reality, tell them the answers before they even begin. Then they don't discover very much that's very new or interesting although the world around them is interesting. (INT#331, COMM; academic; research)
MOST OF US AGREED THAT COMMUNICATION ACROSS THE RESEARCH-PRACTICE DIVIDE ISN'T GOING WELL EITHER
....researchers and practitioners too often ignore each other
...there's always been a sense of disappointment on both sides. [On the researcher's side, they say] "Oh, well, the practitioners don't talk about things the way we do." (INT#370, HCI; academic & corporate; research & design)
Partly because it allows people to pigeonhole interesting issues into the "not of interest to me" camp.... (INT#378, HCI; academic; research & design)
It's because the people who conduct information behavior research are not system designers and, the opposite, system designers do not conduct information behavior research. And unfortunately, the two communities don't come together. They don't publish in the same place. Even when they do publish, I don't think they are reading each other's work. You get them at an interdisciplinary conference and they don't go to each other's sessions.... (INT#377, LIS; academic; research)
They won't talk to each other...and they put each other down. (INT#354, LIS; academic; service planning & implementation)
There is a lack of communication going on that's happening at a high level that's permeating down. I am sure if you look at other professional fields, you'll find similar types of schisms going on out there. A bit of disconnect between a profession and [the] academy. (INT#377, LIS; academic; research)
If people think of bringing in HCI or some kind of usability expertise, it is usually at the end of the project, which is the worst place to ask for it, because you have made all your decisions by then. (INT#359, LIS; academic; research & design)
Basically, there's not a lot of communication between them. We have theorists who develop a theory, go out and test it, modify it, retest it, etc. They engage in a refining process which tends to stay in their community. Then you have the practitioners that are out there gathering more mission-driven kinds of information. Usually there's not a lot of overlap between the two. Even if some of the practitioners might refer to the theorists they don't necessarily feed back their information. (INT#376, LIS; academic; research)
....they have radically different priorities
The user really falls between the cracks [because researchers and practitioners have different priorities]. (INT#340, LIS; academic; research)
...I really do think that often times the practitioners are not evaluating the researcher's work with a researcher lens on it and vice versa. I think the researchers often don't look at the practitioners' work with a practitioner's lens on it and so they try to evaluate the work based on inappropriate criteria and that just makes the two sides not play well together. (INT#360, HCI; academic & corporate; research & design)
And again, these are often competing perceptions. [One side] looks at the research being almost idealistic in the sense that this is what we know about users from all the research that we've done. Then you've got the practitioners in the field who see it through the eyes of practice and are very pragmatic. (INT#318, LIS; academic; research)
Well, I think from a practitioner's standpoint we look at users to serve and then we provide a service to users. We want, whether it's customers or people who use our service, we want them to be happy with us and come back to us. And as researchers we need to be more objective, not necessarily feel that we must do something to or for someone. (INT#344, LIS; academic; research)
Quite often people who are in practice don't have much time to reflect upon their work and to see it from a broader perspective, whereas researchers, that's what they do, they're a little bit better at that than practitioners, possibly. (INT#364, HCI; academic & corporate; research & design)
...you want to say, as a researcher, "This is complicated." And the practitioner wants it to say "No, but what's the answer? I don't care how complicated this is, what should I do about this particular problem right here and now?" (INT#370, HCI; academic & corporate; research & design)
They are in different worlds and have different missions. (INT#328, LIS; academic; research)
From what I can see, practitioners are primarily focused on seeking to solve problems and researchers are primarily focused on seeking to understand problems. (INT#353, HCI; academic & consulting; research & design)
They are different communities, they talk different languages, they have different practical constraints, economic, professional and so forth. (INT#328, LIS; academic; research)
The practitioner wants to handle a specific problem, a specific issue, whereas a researcher wants to come up with grand statements that are generalizable to everything. (INT#313, COMM; academic; research)
....there's little reward or incentive for researcher-practitioner collaboration
In commercial organizations, there's no motivation to do that. There's no client that's going to say that you have to publish your work internally so that some other project might benefit. There's not an incentive to publish externally because you don't necessarily want to release [information] or anything like that. (INT#353, HCI; academic & consulting; research & design)
Or it takes a long time for the results of research to feed into practice, so practitioners don't really see the results of the time that they've spent with the researchers. So they don't immediately see the payback for spending time and working with researchers. (INT#321, HCI; academic; research)
If the theorist is somebody in academia, for example, building those bridges with the practitioner doesn't necessarily help their career or the tenure process. So they often don't do it, but they should. (INT#303, LIS; academic; research)
There do not appear to be any rewards for crossing the lines between theory and practice. (INT#343, LIS; academic; research)
Our structure as an academy has actually set up barriers which hinder the movement of ideas and placement of ideas into practice. (INT#318, LIS; academic; research)
I think it is possible to bridge the gap, but again, for them the reward structures are nonexistent, because you are doing a local study and it may or may not be publishable.... There are a lot of things in the academic reward structure that work against that. (INT#333, LIS; academic; service planning & implementation)
....there are few structures to support research-practice collaboration and translation
I think there will always be those problems and there are various ways to ameliorate them, but there has to be an investment of time and resources in order to do that...it has to get funded...it's just a lot of practical and institutional barriers to do something like that. (INT#356, LIS; academic; research)
The major issue in using anything from user studies or studying users is that someone has to do the translation from the data about the users to the design implications for the system. That is not particularly easy to do. It's even harder when a lot of corporations have people who do the user experience and people who do the usability testing and you have no structured hand-off between them. (INT#312, HCI; consulting; research & design)
...there ought to be an intermediate field of people who do advanced product development, which is taking the work from the research laboratories and getting it ready for product. So they are neither research nor practitioner, they are the transition. (INT#371, HCI; academic & corporate; research & design)
SOME OF US, BOTH PRACTITIONERS AND RESEARCHERS, SAW ACADEMIC RESEARCHERS AS THE PROBLEM
....academic researchers work on toy problems
There are two ways to determine when a subject is nearing the end of its lifespan in management terms. One is the government adopts it. The other is academics start to research it.... (INT#330, HCI; consulting; research)
And a lot of the work in the research areas is very artificial, very difficult to understand how you would actually use it. It's what we call toy problems. They make up toy situations that aren't very real, because those are what you can study in the research laboratory. So that makes it very, very difficult to apply any of the work. (INT#371, HCI; academic & corporate; research & design)
....they see things in non-human terms
Yes, the researcher looks at things differently, in terms of numbers and figures rather than a person in need of information or services. (LOC#413, academic library; public; low enrollment; undergraduate emphasis)
I don't know for certain, but my intuitive response is that researchers look at users in the abstract, out of their broader context, as research data, whereas practitioners see them as a daily problem to be dealt with in real time. That disconnect probably, at the very least, impedes dialogue and exchange of ideas. (LOC#432, public library)
....they are hyper-critical
I don't think industrial research would meet the standards of the academic community and because it doesn't meet their standards, it just doesn't fit into that little world of "this is trustworthy data." (INT#312, HCI; consulting; research & design)
But if you look at how ethnography is working in design firms, you'll see a really different approach and you'll see that that approach has impact and is extremely effective for the purposes of design. As a social science approach, this work would face many, many criticisms. (INT#364, HCI; academic & corporate; research & design)
....they live in ivory towers, disconnected from the everyday
The researchers get trapped in over-simplification fallacies about the nature of work, by seeing it as less dynamic and less adaptive than it really is. (INT#322, HCI; academic & consulting; research & design)
Researchers sometimes operate far enough 'above the ground' that they never dirty their hands, so to speak, with the details of the systems that are there. As a result, they don't understand the real problems, which they probably have the skills to try and solve, but which the people who are more immersed in the data they use in practice just don't have the time or skills to solve either. (INT#312, HCI; consulting; research & design)
In general the practitioners have a better understanding of how different users actually are because they meet them, maybe not everyday but at least every so often. Some HCI professionals at universities only meet the users who are their students and computer science students at that. (INT#345, HCI; academic; research & design)
And practitioners have a very different perspective and they generally know more about what's going on in the field, at least in their region. We scholars tend to always be behind because we know what has been published. And what has been published is describing stuff that was going on two years ago. In the world of [information and communication technologies] things move much faster, so what was going on two years ago is already outdated. (INT#374, COMM; academic; research)
....their research foci are driven too much by self-interest and money
...many times researchers have this freedom to just focus on their personal interests. Many faculty members do research that is highly self-relevant. They're interested in the research they're doing only because they have some personal experience that led them to that research. In a way that's helpful, because you're genuinely interested in what you are doing. But in another way, it's hindering because you're only doing what interests you personally. I think personal interest would not help the researchers with user research. (INT#302, COMM; academic; research)
I have a sense here that the research community doesn't really pay attention to what the real practice issues are.... I think most people in the universities go where the money is. And they structure their work depending upon who's funding what today. (INT#308, COMM; governmental; research, service planning & implementation)
Most of the studies are done to please academics or the funding agencies and foundations. The intelligentsia decides what problem is worth investigating. Then the poor practitioners show up struggling with the problem. They have no say in the definition of the research questions and much less in the product of the research. Very often they don't have access to the results of the research. (INT#301, LIS; consulting; research & design)
....their research is not useful to system design and practice
Because the user people don't really understand what the systems people need in order to develop systems.... They've never worked in computing environments and they don't know anything about it. And they are also not doing very good quality research because they are not [cognitive] scientists, they are not trained in psychology and they have very little impact. They don't do experimental research. So their research is generally not accepted in the [cognitive] psychology community, nor the computer science community. (INT#349, LIS; academic; research)
You can't do systems design without being a practitioner and being a practitioner is devalued in the academy. And yet, how can I say this? I'm not trying to say that what we're doing is the only way to do research on technology, but I am trying to say that if you don't do this, then all you're doing is critiquing other people. Or studying their effects and then saying "Oh, they're really good" or "Oh, they're really bad." (INT#365, COMM; academic; research)
We don't know how to apply and translate it into the world. [Someone told me that] I should tell them how to use it. I will try that, but my own sense is I am not practicing now and therefore I am not the know-it-all to say these are the things that should be done. (INT#350, LIS; corporate; research)
And on the practitioner's side, the practitioners tell me "This is all pie in the sky stuff. I need something to bring back and use right now and this is not helping me." (INT#370, HCI; academic & corporate; research & design)
Most of the research done in the user area has little impact...on system design. So I think what the biggest problem is that the people doing the research [are] not doing research that has much impact. (INT#349, LIS; academic; research)
... I'm putting it right back on the researchers, is how we articulate implications for system design and distance and workplaces and environments. In many ways, we tend to do it just very offhand and make these kinds of sweeping assertions about practice and so on. (INT#318, LIS; academic; research)
One of the things that I see typically in a lot of user studies is you get to the end of the study and you've got a nice, interesting portrayal of a range of something of this user group and then you get into the inevitable, quite bland discussion on the implications for practice and inevitably they'll talk about system design. One of the failings that I see is that they tend to be general, cursory kinds of things and I think we actually need to be far more reflective and far more specific and far more detailed in terms of what we are saying to the systems designers. We have to be much richer in the detail of what we say. (INT#318, LIS; academic; research)
SOME OF US, BOTH RESEARCHERS AND PRACTITIONERS, SAW PRACTITIONERS AS THE PROBLEM
....too many practitioners are anti-intellectual and hyper-critical
I do have some sympathy for those in the really theoretical camp who feel that the very mention of the T word is enough to send practitioners running and screaming. I see a lot of that in the librarian community. It's almost an anti-intellectual stance and that's extremely dangerous for any profession. (INT#378, HCI; academic; research & design)
And here's a thing that is a real frustrated gripe of mine: a lot of practitioners are expansive in their scathing criticism of us. You know, us woolly-headed academics, not knowing "what's up in the real world." (INT#356, LIS; academic; research)
But when it comes to doing [it], the last thing that apparently crosses their mind is that we actually learned something in the three to five years beyond the master's degree and know something about how to [do it], but maybe they don't.... And [then] they come to us too late in the game to clean up the mess and then they ask for advice and there's nothing else you can do at that point. (INT#356, LIS; academic; research)
... in some cases, the suspicion of research was just palpable. And it was very difficult to get people to understand why anything we were doing had any utility to them. (INT#346, COMM; academic; research)
I think researchers may see the kinds of issues that designers and developers focus on as being too limited in scope.... They don't have the kind of openness to the 'what if' questions. (INT#312, HCI; consulting; research & design)
I don't think practitioners for the most part look at users, which is probably the biggest hindrance... They think of themselves in terms of a terminal degree and the only thing they need to learn is more about some new software without looking at what lies around and beyond. (INT#343, LIS; academic; research)
....they are forced to focus obsessively on the bottom-line
It's hard to convince people things are important if they don't know how it relates to making a product out of it. Where you say, "But this is really important," but they say, "Well, show me how that changes the bottom line," and you say, "Well, I don't know how it changes the bottom line. I think it's important, though." (INT#370, HCI; academic & corporate; research & design)
I don't think money is going to be falling from heaven, but one of the reasons [practitioners] don't have time is money. That's not the only reason. The other reason is that they've got to get things out there and working. Part of why they don't have time is because they have too many other things to do and they're understaffed, so that comes back to funds again. (INT#303, LIS; academic; research)
Those conflicts still exist between designers and researchers where designers are motivated by what's going to catch the user's attention and satisfy them at some level, while the researcher is more concerned with meeting certain principles of user information processing. (INT#355, HCI; academic; research & design)
I think the issue is [practitioners] are not really interested in the question, they're interested in the money they are going to make and I understand that's their goal and they only take into account as much as they need to make enough money. (INT#339, COMM; academic; research)
....they are institution-centric
Sometimes I think I never leave the library. Sometimes I thought I should relocate my office to one of the departments I serve, I bet I'd learn a lot. But I'm like stuck in the library. I so want to get out of this place... (INT#380, LIS; academic; service planning & implementation)
I must say that practitioners really think that libraries are unique. It is almost a cultural thing that everybody always thinks they are unique. (INT#350, LIS; corporate; research)
I think that one of the major problems with librarians and information providers is that they're centered on their institution and the established goals of the institution...I see it as a hard shift because they're hired by an institution and they think about how this institution functions. (INT#335, LIS; academic; research)
We have mountainous amounts of information, but seldom convert it to practical uses because it would change how we do things: develop programs and services! While we do not lack innovative ideas, we seldom can let go of old ways of doing things, limiting our ability to change. (LOC#415, public library)
....they, too, have rules and standards they must meet
[Practitioners] have very specific requirements: if you want to publish in that journal, you have to write in that particular way. So you have to learn how they require you to write. And they don't want all this jargon, all this theoretical language. They just want to get to the point. Very short, but get to the point. If I write that way, I won't get it published in my academic journals, so I have to write it in an academic way to get it published. (INT#366, HCI; academic; research & design)
It's hindering because researchers have different values than practitioners. And practitioners don't necessarily value research; they don't want to be held to making decisions and changing things based on data. (INT#382, LIS; academic; research)
....they are too often research-illiterate
Well, mainly the people who do practice seldom know what the academics, the researchers, are doing. (INT#371, HCI; academic & corporate; research & design)
I also suspect that the system designers are not paying attention anyway. They may not even know about studies. Or they may not be finding the information that they need in them. (INT#332, LIS; governmental; research, service planning & implementation)
There's so much [relevant] literature...that has gone under the bridge and librarians act like they don't know any of it. (INT#380, LIS; academic; service planning & implementation)
I know a pretty large number of people who are technical and they see no value in theoretical or behavioral and they're just close-minded about it.... I think it produces real barriers [because] a lot of these people are ignorant about a lot of the important things that can be brought by theoretical approaches to their subject. (INT#326, COMM; academic; research)
The trick here is, how do you conduct a theoretically and analytically rigorous study in a real-life, organizational setting when the people involved are not trained in any of these things? And therefore they don't really know what they mean and they're often skeptical of the concept of research and theory because they have a stereotypical view [that] theory is abstract and not very useful. (INT#306, COMM; academic; research)
I also find that practitioners tend to focus on using one method and one method only because that's what their companies will pay them to do and nothing more, but I think that's a little bit of a shame because you can learn a lot from methodologies or methods. (INT#360, HCI; academic & corporate; research & design)
Depending on who you're dealing with, some people are very savvy and actually want all the technical details. Other people, you can't talk to them in a technical language. They don't understand what theory is about, so you've got to again translate theory into language they can understand. (INT#339, COMM; academic; research)
....they have to meet deadlines that preclude rigorous research
The only hindering thing is that practitioners often don't have the time to do research or wait for research. They have to do something right away. (INT#303, LIS; academic, research)
[The practitioner's] boss gave him/her three days and by the end of the three days [wanted] the report. How can you do rigorous theory-based [research] in three days? No way. (INT#366, HCI; academic; research & design)
The practitioner has to be very concerned about money, about time, about market acceptance, about existing standards. So sometimes there's a better way to do it, but you cannot do it because it might make the current product obsolete. And the researcher really should be freed of these constraints. So it's important that researchers not be bound by cost or time because they are trying to advance the state of knowledge. So their goals are very different. (INT#371, HCI; academic & corporate; research & design)
...they have very understandable time pressures and monetary constraints and they say, "How can I justify taking on or doing a complex or serious research project if I can't see how that's going to be useful to me right away?" Because that's the nature of their job, that's the nature of their work. (INT#306, COMM; academic; research)
The practitioner does not need to know the correct answer...we need rapid and approximate answers. See, even if you do it quickly and badly, an answer that's based on data is apt to be far better than one that isn't. (INT#371, HCI; academic & corporate; research & design)
MOST OF US AGREED WE WOULD BENEFIT FROM CONTACT ACROSS OUR DIVIDES
....between researchers in the different fields
You need the best minds on it and they need to be sharing it with each other, rather than as now I sense that they work in isolation. (INT#378, HCI; academic; research & design)
The one side looks at users within the context of their lives. The other side looks at what the system can do. And what would be useful would be for us to have a conversation, a real conversation, about these two perspectives and both sides move to a better understanding. (INT#335, LIS; academic; research)
Well, because everybody thinks that their specialized view is the correct view. And it isn't. Everybody's specialized view is important, but nobody's alone is enough. We need to work together. We need some way for people to understand that other disciplines have something important to contribute. And that no single discipline is the most important. Yet each of us probably thinks that whatever we do is the very most important thing to do, because otherwise, why would we do it? So, getting people together to understand the different disciplines and the different kind of training, is most essential and is very difficult. (INT#371, HCI; academic & corporate; research & design)
If we are all co-orienting around the same thing then we aren't looking over our shoulder and watching the other guy. We are all looking at the same thing together, trying to solve the same problem together and not worrying so much about, well, is he or she better, faster, bigger, smarter, richer, poorer, than I am? (INT#344, LIS; academic; research)
Studies of user information need to have teams of people who come from different perspectives at different levels of analysis and be more contextual. Different approaches might have different methodologies and different sources of data and so you can learn from that. (INT#306, COMM; academic; research)
I've always been convinced that universities, as beautiful as they are, ultimately are wrong. The buildings we've erected have confused us into thinking that because the building is labeled psychology or economics, then world epistemology exists that way. Because it's in bricks and mortar, it's legitimate and it's bounded. When really, the social contract of academics is to deal with thorny issues and try to solve them, even if no one else in the world thinks it's a thorny issue. When you say users, you may as well just say humans, who studies humans? When you think of it that way, no one discipline can handle it all. It's beyond any one discipline. Once we accept that, there's the good side. Let's be aware of bringing in to our focus any discipline we can be informed by and let's have our eyes opened into ways that we wouldn't necessarily think about them otherwise. (INT#378, HCI; academic; research & design)
I think the more diversity of approaches the better. We call that bootstrapping and you want to triangulate findings and you want to have a diversity of methods. (INT#311, HCI; academic; research & design)
...between researchers and practitioners
People who do information behavior research for example and people who design systems or implement systems or build systems, they need to come together and engage in a dialogue of mutual discovery about points of view and techniques... the answer is pretty straightforward that there needs to be a commitment by individuals and by communities to actually engage each other in mutual understanding. (INT#364, HCI; academic & corporate; research & design)
...as a practitioner, when you're in a project where you have a hundred screens to design and you're just designing screens, it's hard to remember the big picture. So [talking to practical theorists] is a chance to get your head out of the sand, to just sit back and think the big thoughts for a while and let them percolate down through your work. (INT#375, HCI; consulting; research & design)
If you want to study a static field, the researchers can study practice and you get a separation of practitioners and research. If you're studying a dynamic field, then you actually need to let research and practice co-evolve. (INT#330, HCI; consulting; research)
It's kind of similar to getting so close to the trees you can't see the forest. The practitioner research is what we need to operate day to day and the theoretical is what we need to look at vision and horizon with. So there's a real importance for both, though the dilemma is that typically they're not embodied within the same context. (INT#354, LIS; academic; service planning & implementation)
Theoretical development that is not tied to what the real world is like doesn't do us any good. It doesn't yield new knowledge. It just yields new opinion. Practical research that's not grounded in theory doesn't accumulate to anything. You've got to have both perspectives. (INT#376, LIS; academic; research)
Most, at least the theorists I talk to, would be very concerned at how theory played out in practice because without understanding that, one would have to question the value of the theory. Also, any information system in essence, is a theory. It's an absolute embodiment of what somebody thinks works for these people in that context. For me, there's no better way of thinking about an existing information system than to say, "It's a theory." It's an embodiment of a theory. Looking at it from that perspective, it will tell you what questions we should be asking to have better theories. (INT#378, HCI; academic; research & design)
One thing is that the research needs to be applicable. It needs to be applicable to the problem that you want to address. And the second thing is it needs to be specific to the problem that you want to address. So in order to meet those two criteria the research process, the information behavior research process, needs to be closely tied with the design process. So it needs to fit in and be directed and formed by the design process. And when you attempt to do research that doesn't do that, isn't embedded in the process, then there is a danger, a risk, that it fails to be applicable. (INT#364, HCI; academic & corporate; research & design)
I think one thing with researchers is they can be much more objective with people and services than practitioners can be. They have got a set of rules that they are following, established procedures, and they are trying to be free of biases. I think all of that is really good. The practitioners are not that good at that, even though we have our little rules and stuff, we are not going by the book every day. (INT#323, LIS; academic; service planning & implementation)
The biggest difference between the two is that the practitioners are a bit too practical. In a sense, they have to be because they're working in a very high pressure, high tempo, work industry world. They have to be practical. But that brings a limitation to their world view. The researchers, on the other hand, will be thinking forever without giving you an answer if you don't give them a deadline. Again, this is a trade-off; they have to be the way they are, but the way they are limits them. (INT#302, COMM; academic; research)
The practical work needs to be much more guided by research and academics need to have their work much more grounded in the real world. (INT#377, LIS; academic; research)
...communicating across our divides will help us do better work
I learned if you really sit around the table with different perspectives and talk about these things, ultimately what emerges is far richer than you ever can do if you're only working within your own silo and your own tent. (INT#329, COMM; academic; research)
Sometimes, you learn things from unlikely places because they've done something that is so different than how you would approach the problem. And so you learn things because they've used a different sort of method or taken a really different approach than you would have. (INT#370, HCI; academic & corporate; research & design)
It helps in terms of creativity because different people take different sorts of approaches to things and often it sparks ideas when two different approaches come together. Whether they clash or complement [doesn't matter because] they often spark ideas. (INT#326, COMM; academic; research)
...some among us would relish the clash of competing ideas
But I like the fight and I like the disagreement because that actually gets us a step ahead to understanding the complexity of the user. (INT#339, COMM; academic; research)
It's always good to have multiple perspectives, multiple questions, multiple methodologies, multiple theories. It's good to have people challenging the assumptions that other people make in their research. (INT#331, COMM; academic; research)
I suspect that the competition of paradigms, approaches and perspectives creates an open stage for the development of these systems that is actually quite beneficial to the "creative chaos." There is also a positive side to it, in that the diversity of approaches sometimes comes up with good solutions. If you have too much consensus, then you may miss creativity and originality. There's certainly an argument for the multiplicity of approaches. (INT#351, COMM; academic; research)
...but many expressed worries about "slash and burn" approaches that dominate communication across our divides
...[a] perspective tends to put a funnel on thinking that is overly restrictive and limits creativity. (INT#326, COMM; academic; research)
[People in my field] gain [a] point of view, but they don't realize they have it. And so when somebody comes along who doesn't have it, they know the person's "wrong" or something's amiss, but they can't articulate it and so they can't gain any new understanding [from] interacting with the other person. (INT#356, LIS; academic; research)
Especially in multidisciplinary communities, you end up with disciplinary dominance and people saying, "Well, that isn't science and this is science and this is what we're going to do and this is what matters and what counts." (INT#370, HCI; academic & corporate; research & design)
Sometimes it's really hard to talk to people who are convinced there's only one way to do user research.... That is a hindrance to communicating with them because you can't say, "Well, I did this study in which I found out X..." because, necessarily [to them] you didn't do it right. How could I believe in what you did since you didn't do it the way I think is appropriate? (INT#312, HCI; consulting; research & design)
I've seen researchers argue against each other, trying to defend their own view and persuade each other on how much more secure their own view is. That's not helpful. (INT#302, COMM; academic; research)
Many of the people in [user studies] are crotchety and into critique. They slash-and-burn other points of view, so it doesn't win friends equally. (INT#326, COMM; academic; research)
And that kind of research [2x2 classical experiments...] and that way of criticizing communication research is very bothersome, because these people do not understand that what I think that we bring to the table is a much more rich understanding of how complex communication is. (INT#346, COMM; academic; research)
That's also why I'm not optimistic about the chances for the dialogue in today's world, because listening implies the readiness to change. If I really want to listen to you, I have to take the risk that I may have to think differently and act differently. I may have to change. But, if I'm not willing to do that because I'm deep down a fundamentalist and conservative, as most people are, why would I listen? I've taken an enormous risk. (INT#325, COMM; academic; research)
I think there's a tendency to exclude some lines of thought as incorrect and then try to, on the other hand, diminish the differences that the different researchers find. So, for me, it would be helpful to have a better map of the field in terms of theories and agendas. I think sorting out some of these issues about why it is people, how it is that people come about to seek information, I think that would help. (INT#314, COMM; academic; research)
And, it seems to me, what happens is that the information systems and IR people are looking at the user people and saying "well, you move," and the user people are saying, "well, you move," and until we see that there's somewhere in the middle where we can have a conversation, I think the progress will be slow. (INT#335, LIS; academic; research)
...nevertheless, many of us expressed a readiness to pursue communicating in different ways
There's not one way to roam, there are many ways to roam and that certainly applies to user studies. One is not better than the other, but one might be more appropriate to one specific question, one specific problem. If somebody is capable of using or looking at it from different perspectives and using different methodologies, that's a plus... (INT#339, COMM; academic; research)
I think you can't assume that one discipline has all the answers to user studies. Different disciplines have different perspectives.... no one discipline has the monopoly perspective on that.... It is a matter of picking which is the best aspects of the different perspectives and trying to integrate them. (INT#349, LIS; academic; research)
It essentially comes down to mutual respect. Mutual respect in a meaningful sense starts with the healthy sense of humility about the limits of what you know. When people come to the table with the assumption that they don't have all the answers, a conversation has to go much better. (INT#334, COMM; academic; research)
But, the real serious dialogue begins with just questioning your own certainties and your own assumptions. And real dialogue basically means that you're courageous enough to say, "maybe you're right, maybe I'm wrong." That's the core of any democratic community. (INT#325, COMM; academic; research)
I think we need a little more tolerance and understanding. Not just tolerance like "oh, it's okay, do your thing," but a real understanding of how the disciplines fit together. Because none of them is the whole picture of the user experience. (INT#375, HCI; consulting; research & design)
Well the best magic wand I know of is to make people work together. (INT#371, HCI; academic & corporate; research & design)