vol. 19 no. 3, September, 2014


Development and assessment of the content validity of a scale to measure how well a digital environment facilitates serendipity


Lori McCay-Peet
Department of Sociology, University of Western Ontario, London, Ontario, Canada N6A 5C2
Elaine G. Toms
Information School, The University of Sheffield, Sheffield, United Kingdom, S1 4DP
E. Kevin Kelloway
Department of Psychology, Saint Mary’s University, Halifax, Nova Scotia, Canada, B3H 3C3


Abstract
Introduction. Digital information environments such as Websites and search engines have the potential to support serendipity: to spark and nurture positive, unexpected interactions with information and ideas with valuable outcomes. But we have few tools to measure how well they support serendipity and thus no data-driven way to guide their development.
Method. Grounded in prior research, facets of a serendipitous digital environment were identified together with a pool of items that could be used in the creation of a self-report scale. To assess the content validity of the preliminary scale (i.e., the extent to which a measurement reflects the content domain it is intended to capture), we conducted two successive studies: an expert review by eight experts and an analysis of variance approach involving a Web-based study of 107 university students.
Analysis. Qualitative and quantitative approaches were used to analyse the data.
Results. The items of the serendipitous digital environment scale were revised and the scale’s content validity confirmed.
Conclusion. Five facets of a serendipitous digital environment were identified and defined and a 37-item scale was developed in preparation for future studies that will reduce and refine the scale and test its construct validity.


Introduction

Serendipity has been defined in earlier research, in media, and in dictionaries with terms such as unexpected, surprising, accident, chance, and luck. While often contrasted with scientists’ and engineers’ carefully planned efforts, serendipity is also given credit for unforeseen and significant leaps in research and innovation, exemplified in the stories behind penicillin and X-rays. But serendipity is also a notable phenomenon in other areas of work and research, such as history (Martin and Quan-Haase, 2013) and interdisciplinary studies (Foster and Ford, 2003) in which serendipity is associated with resource discovery and new research directions.

Furthermore, the personal impact serendipity has on users of digital information environments such as scholarly portals is of interest to their designers and developers. Unexpected discoveries in digital libraries, for example, are a source of positive user experience (Blandford, Stelmaszewska and Bryan-Kinns, 2001) and have the potential to encourage users to access these specialized systems rather than generic search engines such as Google.

Some consider hyperlinked digital environments to be fertile ground for serendipity (e.g., Merton and Barber, 2004), providing a diversity of resources to which users may not have otherwise been exposed (Thurman and Schifferes, 2012). Others, however, have expressed concern about whether digital environments can facilitate serendipity as well as their physical counterparts, such as print newspapers (McKeen, 2006) and library stacks (Martin and Quan-Haase, 2013). The information we encounter through search engines, for example, is curated through mechanisms such as personalization and computer-generated relevance judgments to ensure that we are more likely to get the information we want based on the information with which we, or people in our social networks, have interacted. Some argue these mechanisms create a filter bubble (Pariser, 2011) or a level of homophily (Zuckerman, 2013) that threatens to limit the sort of information we encounter and consume, and thus hamper serendipity. To counter potential threats to serendipity, diverse mobile applications, recommender systems, and search engines have been designed to trigger or nurture serendipitous experiences. For example, BananaSlug allows users to actively engage with the search engine to produce random results designed to prompt accidental encounters with information. The Bohemian Bookshelf is a large-screen interactive display system designed to support the exploration of digital book collections and facilitate serendipity by, for example, enticing curiosity, offering multiple access points, and enabling a more playful approach to the collection (Thudt, Hinrichs and Carpendale, 2012). In contrast, a mobile semantic sketchbook both creates opportunities for serendipity and supports reflection to further nurture serendipity (Maxwell, Woods, Makri, Bental, Kefalidou and Sharples, 2012).

However, while we continue to speculate about which technologies support or hinder serendipity and to introduce potential solutions, relatively little research has been undertaken to assess how well existing and novel approaches to information interaction support serendipity. The focus in tool development is on novelty rather than checks of validity, which makes it difficult to make any significant advances in our understanding of what facilitates serendipity. While some researchers make great and commendable efforts to ground their tool development in prior research on serendipity, evaluation is set aside as future research (e.g., Thudt et al., 2012; Maxwell et al., 2012). In cases where evaluation has been reported, it falls short of confirming validity because of a lack of transparency in the methods used (e.g., Beale, 2007). In other cases, confirmation of validity relates to whether or not the tool has the capacity to bring previously unknown but interesting information to the user's attention (e.g., Campos and Figueiredo, 2002) rather than whether the user has experienced serendipity itself.

The problem with the latter approach is evident in the literature on recommender systems, where evaluation may take the form of asking users to indicate which recommendations are unexpected, and success is judged by whether the user followed the recommendations. Shani and Gunawardana (2011) argued that this approach to serendipity is flawed and has potentially negative implications. At first, the recommended documents or Websites may appear to be unexpected to users and users may click on them, suggesting success. But these recommendations may not be as useful as anticipated once users are given an adequate amount of time to follow up on them. Recommendations designed to be serendipitous may not actually trigger serendipitous experiences even when they pass the tests of unexpectedness and success. Unsuccessful recommendations have a negative effect not only on the user but also on the recommender system itself. For example, consider a recommender system in a digital library that suggests papers related to the paper a user is currently examining. After clicking to view one of the recommended papers, the user may decide it is of little or no use. If this scenario repeats itself several times, we can see how a user may over time stop using the recommender system (or worse, the digital library) to avoid distraction and the potential for wasting time. Digital environments designed to support serendipity, therefore, are in danger of providing disincentives for further use if not thoughtfully designed and implemented, making approaches and tools for evaluation an imperative rather than an afterthought. However, no valid tool or approach currently exists to evaluate how likely a digital or physical environment is to foster serendipity.

This paper reports on the development of a scale to measure users' perceptions of how well a digital environment (i.e., an environment reliant on computer technology) supports serendipity. We argue that the digital environment may facilitate serendipity by enabling, containing, pointing to, and providing what it is that helps a person to have a serendipitous experience. Serendipity is not anticipated to happen every time an individual interacts with a digital environment, even one that fully fits the criteria of a serendipitous one; however, through its use over time, we would expect users to form a perception of a digital environment in relation to how frequently they experience serendipity as a result of their interactions in it and how well the digital environment, for example, supports exploration. Prior research has demonstrated the viability of developing a scale of this nature (McCay-Peet and Toms, 2011). Grounded in Björneborn's (2008, Dimensions affecting serendipity) ten serendipity dimensions of a physical library, a series of items was developed as part of a self-report questionnaire. Seven research assistants and doctoral students were provided with the definitions of the ten dimensions together with an initial pool of items intended to reflect these dimensions. Through an iterative process in which the research assistants and students rated how well each statement captured the essence of the dimensions and suggested alternative items, twenty items were identified for inclusion in a questionnaire. Through the questionnaire, 123 participants, mainly university students, were asked to assess an experimental information system they had been instructed to browse as part of a larger study examining curiosity and the non-goal-based use of an information system. While both the pool of items and the sample size were small for this type of multivariate analysis and the items were based solely on Björneborn's serendipity dimensions, the results demonstrated the feasibility of developing a multi-faceted questionnaire to measure the serendipitous digital environment. The previous research that grounds the development of the scale discussed in this paper is reviewed more broadly in the sections that follow. Just as a single question is insufficient to measure user engagement with interactive systems (O'Brien, 2010) and system usability (Brooke, 1996), a serendipitous digital environment is a multidimensional, subjective phenomenon that requires a multi-faceted tool to capture its nuances. The research described in this paper, therefore, had three main objectives:

  1. to develop a preliminary serendipitous digital environment self-report scale through the identification of facets of serendipity and items based on earlier serendipity research;
  2. to refine the facets and items; and
  3. to establish the content validity of the serendipitous digital environment scale.

We are reliant on measures to understand phenomena such as serendipity and a key indicator of the quality of a measure is its content validity. Moreover, content validity 'is a necessary precondition for establishing evidence for construct validity' (Hinkin and Tracey, 1999, p. 175).

Background

Through a review of earlier research on serendipity and related constructs such as information encountering (Erdelez, 2004), we examine serendipity and the facets of an environment that may facilitate serendipity.

Unbundling serendipity

To support serendipity we must first understand it. We define serendipity as 'an unexpected experience prompted by an individual's valuable interaction with ideas, information, objects, or phenomena' (McCay-Peet and Toms, in press). A review of prior research (e.g., Makri and Blandford, 2012; McCay-Peet and Toms, 2010; Rubin, Burkell and Quan-Haase, 2011; Sun, Sharples and Makri, 2011) and semi-structured interviews with twelve professionals (e.g., information manager, journalist, creative writer) and academics (e.g., digital humanities scholar, computer scientist, molecular biologist) with specific work-related examples of serendipity found that serendipity, which may occur in any number of contexts, comprises the following five elements unfolding in a quasi-linear fashion (McCay-Peet and Toms, in press):

  1. Trigger: a verbal, textual, or visual cue that initiates or sparks an individual’s experience of serendipity.
  2. Connection (and possible delay in connection): the recognition of a relationship between the trigger and the individual’s knowledge and experience. A delay is the interval that may occur when an individual perceives a trigger but does not immediately recognize a connection between the trigger and the individual’s knowledge and experience.
  3. Follow-up: actions taken to make the most of a trigger or connection and obtain a valuable outcome.
  4. Valuable outcome: the positive effect of the serendipitous experience both realized and projected.
  5. Unexpected thread: the unexpected, chance, accidental, or surprising element that is evident in one or more of the trigger, connection, follow-up, or valuable outcome elements of the serendipitous experience.

An experience is perceived to be serendipitous in retrospect based in part on an individual's awareness of its trigger, connection, valuable outcome, and unexpected thread. We can see how the process of serendipity unfolds through an example provided by an occupational therapist (McCay-Peet and Toms, in press). Her serendipitous experience was triggered during a conversation with her boss over coffee. She wanted to attend an upcoming conference but lamented that none of her research was ready to submit. While talking about her student's as well as her mentee's research, however, she made a connection between their research findings and her own. She could have made this connection during previous research discussions but it was delayed; it had to percolate. Once she made the connection, she followed up by reviewing the literature to ensure the novelty of the concept and wrote and submitted an abstract for the conference. At the time of the interview she was excited that the conference abstract had been accepted and considered this serendipitous experience to be a highlight of her year. She also planned to work toward a further valuable outcome: she hoped the findings would open up theoretical and practical discussions among those in her profession. The unexpected thread wove through the experience and contributed to her perception of the experience as serendipitous. The research findings themselves (trigger) and the fact that they were related (connection) were unexpected, as was the collaboration with others (follow-up), which she noted was highly unusual in her field, where single-author publications are the norm.
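As an illustrative aside, the five-element process model can be encoded as a simple data structure. The following Python sketch is hypothetical and not part of the original studies; it merely restates the occupational therapist's account in structured form:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SerendipityEpisode:
    """One account of serendipity, structured by the five elements above."""
    trigger: str                 # cue that sparked the experience
    connection: str              # recognized link to prior knowledge/experience
    connection_delayed: bool     # whether the connection had to 'percolate'
    follow_up: List[str]         # actions taken to exploit the connection
    valuable_outcome: List[str]  # realized and projected positive effects
    unexpected_in: List[str] = field(default_factory=list)  # where the unexpected thread appears

# The occupational therapist's account, restated in this structure:
episode = SerendipityEpisode(
    trigger="research findings discussed with her boss over coffee",
    connection="her student's and mentee's findings related to her own",
    connection_delayed=True,
    follow_up=["reviewed the literature", "wrote and submitted a conference abstract"],
    valuable_outcome=["accepted conference abstract",
                      "hoped-for theoretical and practical discussions"],
    unexpected_in=["trigger", "connection", "follow-up"],
)
```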

Serendipity is often described, as it is above, as a process (e.g., Makri and Blandford, 2012; McBirnie, 2008): 'a sequence of individual and collective events, actions, and activities unfolding over time and in context' (Pettigrew, 1997, p. 338). American physicist Gell-Mann could have been talking about serendipity when he described the universe as a process, 'the full complexity of which emerges from simple rules plus chance' (Gell-Mann, 1994, as quoted in Burnes, 2004, p. 315). But how can we support a process that is only discernible in retrospect? And, even more challenging, how can we support the process of serendipity, in whose perception the unexpected plays such an important role? While serendipity has shared a close association with individuals' personality traits, aptitudes, and abilities, serendipity does not happen in a vacuum; environment matters (Merton and Barber, 2004). As Blandford and Attfield (2010) note, 'information interaction always takes place within some setting' (p. 13, emphasis added). Therefore, how can the environment support the information-intensive process of serendipity?

The serendipitous environment

Very little research has explicitly examined what characteristics of the environment, whether physical or digital, are related to serendipity. The research that does exist suggests that not all environments are created equal: some appear more likely to support serendipity than others. Patterns of environmental characteristics (e.g., being resource-rich) as well as specific types of environments (e.g., libraries) have been noted (Sun et al., 2011). We focus here on six characteristics of a serendipitous environment identified in previous research:

  1. trigger-rich;
  2. enables exploration;
  3. highlights triggers;
  4. enables connections;
  5. enables capturing; and
  6. unexpected.

A trigger-rich environment contains information or ideas related to an individual's experience or knowledge, which have the potential to spark serendipity (McCay-Peet and Toms, in press). Trigger-rich is conceptually associated with other previously identified characteristics of a serendipitous environment, including diversity and resource-richness. A library with a diversity of resources, topics, genres, and activities, for example, was found to support divergent behaviour (Björneborn, 2008), a type of behaviour associated with information encountering (Erdelez, 2004). Diversity was one of the ten serendipity dimensions of the physical environment of the library Björneborn identified in a longitudinal study of patrons of two Danish libraries that included observation, interviews with 113 participants, and, finally, think-aloud sessions with eleven of these participants in which they walked through the library and commented on what triggered both their attention and their information behaviour. But an environment that is simply diverse may not support serendipity; the information or ideas contained in the environment must also be of use to an individual. For example, a mobile diary study that included interviews with eleven PhD students found that the environments in which serendipity took place tended to be resource-rich: places such as conferences, offices, and libraries that contained a lot of information or people that were a good match to the individual's interest space or problem (Björneborn, 2008; Sun et al., 2011; Toms, 1997).

Trigger-rich is also tangentially related to features of physical and digital environments that enable exploration: explorability, multi-reachability, stopability, and accessibility (Björneborn, 2008). Exploration helps a user understand the depth and diversity of the content contained in a digital environment, the boundaries of the information space in which they find themselves, which by extension may influence perceptions of how well this environment may support serendipity. This notion is reflected in what Blandford, Stelmaszewska and Bryan-Kinns (2001) refer to as 'discriminability' or 'forming understandings of the content and possibilities in a collection' (p. 188). From an observational study in which five computer scientists were instructed to think aloud as they worked on a task in a digital library, Blandford et al. concluded that discriminability was a design issue related to serendipity.

Research that analysed fifty-six accounts of chance encounters in blog entries found that the environment must provide something of value to the finder, but that something must also be noticed (Rubin et al., 2011). An environment that highlights triggers helps ensure an individual's attention will be drawn to information or ideas (McCay-Peet and Toms, in press). Highlighting triggers is associated with the sounds, colours, or movements, for example, that may activate an individual's attentional resources, thus helping an individual notice a trigger. Highlighting triggers may be particularly important in environments filled with perceptual cues. In Toms's (1997) study, in which forty-seven adults were given goal-directed search tasks as well as non-goal-directed tasks to perform in a newspaper database, findings indicated that the experimental tool providing suggestions for further reading had the potential to facilitate serendipity, as it led participants to news stories they had not intended to find but found useful. Features of a physical library may grab attention and prompt divergent behaviour through striking contrasts, pointers, or display (Björneborn, 2008), just as emotion in face-to-face conversations or noises that alert users to new posts on social media sites (McCay-Peet and Toms, in press) may lead an individual to pay attention to ideas. Researchers have designed information systems that highlight triggers to support serendipity; for example, Max, a Web-based system, sends direct e-mails to users with links to Websites with unexpected and interesting information (Campos and Figueiredo, 2002), while Mitsikeru, an ambient intelligence system, calls attention to interesting and surprising content (Webpage links) through visual cues (Beale, 2007).

An environment that enables connections helps an individual to engage with information and ideas; it describes something or someone that encourages exploration, critical thinking, and the sharing of knowledge and ideas, making it possible to see relationships between information and ideas (McCay-Peet and Toms, in press). For example, interviews conducted with twenty-four elderly people indicated that informal social spaces that encourage spontaneous information sharing facilitate the opportunistic discovery of information useful in everyday life (Pálsdóttir, 2011). Enabling connections may also involve 'cross contacts' or the juxtaposition of different 'topics, genres, materials, people, and library spaces' (Björneborn, 2008, Dimensions affecting serendipity, para. 9) that may help individuals see connections between information or ideas and their own knowledge and experience. Enabling connections may be particularly important when these relationships are not immediately obvious on a conceptual level (McCay-Peet and Toms, in press).

Enables capturing (McCay-Peet and Toms, in press) describes something or someone that helps an individual record or copy a trigger for later use. Capturing is a key element of an information encountering episode (Erdelez, 2004), in which an encountered document or piece of information that relates to the user's background problem is saved for later use before the user returns to their original information search relating to a foreground problem. Of course, people may instead rely on memory. Or, in the case of a trigger that sparks an idea, capturing may not even be necessary. However, where capturing an item for later use is important, it may include e-mailing, recording, bookmarking, photocopying or otherwise ensuring access to a trigger at a later time (McCay-Peet and Toms, in press). Thus, enabling capturing through simple tools such as pen and paper or more complex mobile applications designed to support serendipity (e.g., Maxwell et al., 2012) helps individuals follow up and reach some of the hoped-for valuable outcomes relating to one's work (McCay-Peet and Toms, in press).

Finally, the perception of the unexpected, tied in part to the environment, is very important to the perception of serendipity (McCay-Peet and Toms, in press). Imperfections such as misshelved books (Björneborn, 2008, Dimensions affecting serendipity, para. 8; Delgadillo and Lynch, 1999) or incorrectly indexed manuscripts (McCay-Peet and Toms, in press) in libraries, for example, are unexpected in nature and have the potential to spark serendipity and contribute to a perceived lack of control (Rubin et al., 2011). Reflecting on the unexpected circumstances of the experience may lead an individual to consider it serendipitous (Makri and Blandford, 2012). Because serendipity so often occurs during interactions with other people (e.g., Pálsdóttir, 2011; Pettigrew, 1999), the environments in which these experiences are triggered may be perceived as serendipitous. Through interviews with fifteen post-graduate students, Dantonio, Makri and Blandford (2012) found that students were not only aware that social media sites can lead to their own serendipitous experiences but were cognizant of the notion of reciprocity in social media spaces; students may discover unexpected information, but they may also share information that others may find unexpected, helping to create serendipitous environments for others.

In summary, several characteristics of a potentially serendipitous environment have been identified; namely, environments that are trigger-rich, enable exploration, highlight triggers, enable connections, enable capturing, and lead to the unexpected. There is, however, a conceptual overlap between these six characteristics that is difficult to parse. For example, an environment that enables exploration may lead an individual to perceive it as trigger-rich because of exposure to more information and ideas. Therefore, can it be argued that there are fewer than six characteristics? Or, maybe there are more. Moreover, as these characteristics are drawn from studies in both physical and digital contexts, for the purposes of our research, what are the most salient characteristics of a serendipitous digital environment? As well, given our goal of developing a self-report scale, are all six tied to perceptions of serendipity? While the process of serendipity (trigger, connections, follow-up, valuable outcome, unexpected thread) has obvious links to the six characteristics of a serendipitous environment, do all of these characteristics feed perceptions of serendipity? For example, is an environment perceived to enable capturing as closely tied to serendipity as one associated with the unexpected? At face value, it does not appear to be the case. These questions are addressed in the following sections as we develop and test a measure of the serendipitous digital environment.

Methods

Scale development is a lengthy process with multiple phases. We apply a well-documented approach (see DeVellis, 2003; MacKenzie, Podsakoff, and Podsakoff, 2011) to the development of a scale designed to assess whether a digital environment has the critical elements to facilitate serendipity. The following sections describe three stages in the preliminary development of the serendipitous digital environment scale:

Stage 1 draws upon prior research. We first needed to conceptualize the serendipitous digital environment, to identify possible facets or characteristics of a digital environment that support serendipity, develop robust definitions for the facets, and then examine how each of the facets may be measured through items or statements contained in a scalar questionnaire. But how can we verify that the scale fully captures the conceptual space of the serendipitous digital environment? The latter two stages test the content validity of the facets and items through two complementary methods. Stage 2 enlisted experts in serendipity-related constructs to review the scale to ensure the facets and items would indicate a potentially serendipitous digital environment. Stage 3 adopted a quantitative approach and tested how well each of the items reflected the facet they were created to reflect and how much conceptual overlap there was between the facets. As well as establishing the content validity of the scale, both Stage 2 and Stage 3 were used to refine the scale. Each successive stage builds on the previous stage’s findings; therefore, the methods and results for each stage are described in turn in the sections that follow.

Stage 1: Generation of facets and items

To help ensure that the conceptual space of the serendipitous digital environment would be captured in a scalar instrument, five facets hypothesized to contribute to its perception were first delineated based on prior research before items (i.e., statements designed to reflect each of the facets) were generated. Modifications were later made to the preliminary facet definitions and items based on findings from Stage 2 and Stage 3 of this research.

Facets

Grounded in a review of previous research as well as a study investigating serendipity (McCay-Peet and Toms, in press), we identified and defined five potential facets of the serendipitous digital environment:

  1. Trigger-rich: The digital environment is filled with a variety of information, ideas, or resources interesting and useful to the user.
  2. Enables connections: The digital environment exposes users to combinations of information, ideas, or resources that make relationships between topics apparent.
  3. Highlights triggers: The digital environment actively points to or alerts users to interesting and useful information, ideas, or resources using visual, auditory, or tactile cues.
  4. Enables exploration: The digital environment supports the unimpeded examination of its information, ideas, or resources.
  5. Leads to the unexpected: The digital environment provides fertile ground for unanticipated or surprising interactions with information, ideas, or resources.

These five facets, like the characteristics described in the Background section from which they were derived, may overlap. However, their inclusion at this early stage of scale development will help ensure the concepts they reflect are captured within their own facets if not within the conceptual spaces of the other facets. The only characteristic identified in prior research that we did not develop into a facet was enables capturing, because of its utilitarian nature. While enabling capturing may support the follow-up element of the process of serendipity (McCay-Peet and Toms, in press), at face value, an environment that enables capturing is not likely to contribute to perceptions of an environment as serendipitous, which is what we aim to measure. For example, while digital environments that allow users to save items in personal folders and mark items as favourites may contribute to perceptions of system usability, it seems unlikely that a digital environment that does this better than the next will be perceived to be a more serendipitous environment.

Items

Reflecting on each of the five facets defined above, we developed seven to ten items for each facet for a total of forty-three items. Items are short statements designed to capture the essence of each of the facets (DeVellis, 2003). For example, what descriptive phrases illuminate the scope of enables exploration? We identified, for example, '[The digital environment] is easy to explore', 'I can navigate freely within [the digital environment]', and '[The digital environment] offers easy access to content'. Once the adequacy of a set of items to measure how well a digital environment supports serendipity is established and the scale is administered, '[The digital environment]' can be replaced with the specific Website, application, or information system being tested and participants would be asked to report their level of agreement with the items in relation to the environment being assessed. For example, '[The digital environment] is easy to explore' may become 'Wikipedia is easy to explore'.
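As an illustration of how the placeholder might be filled in when the scale is administered, the following minimal Python sketch substitutes a system name into the items. It is hypothetical and shows only the subset of enables exploration items quoted above:

```python
# Hypothetical sketch: instantiate placeholder scale items for a given system.
ITEMS = {
    "enables_exploration": [
        "[The digital environment] is easy to explore",
        "I can navigate freely within [the digital environment]",
        "[The digital environment] offers easy access to content",
    ],
}

def instantiate(items: dict, environment: str) -> dict:
    """Replace the '[The digital environment]' placeholder with a system name."""
    return {
        facet: [text.replace("[The digital environment]", environment)
                    .replace("[the digital environment]", environment)
                for text in facet_items]
        for facet, facet_items in items.items()
    }

print(instantiate(ITEMS, "Wikipedia")["enables_exploration"][0])
# -> Wikipedia is easy to explore
```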

Several considerations were made in the development of items relating to scale length, level of specificity, best practices for item wording, and measurement format. Ideally, the serendipitous digital environment scale will be relatively short to ensure its length does not overburden respondents. Non-trivial redundancy in the initial pool of items, however, was built in to ensure the nuances of the attributes of the serendipitous digital environment are captured (DeVellis, 2003). For example, the original pool of items for the enables exploration facet included items such as 'I can navigate freely within [the digital environment]' and '[The digital environment] offers multiple pathways to information'. While both these items essentially refer to exploration, they are phrased in two different ways, potentially eliciting different though similar responses. The level of specificity of the facets and corresponding items is intentionally low to accommodate the various content, features, and functions of diverse environments, from social media sites such as Twitter to digital libraries such as PubMed. As well, the features and functions of digital environments are constantly evolving; therefore, aiming for the right level of specificity is important if the scale is to be relevant for more than a few years. Item development was also guided by established item-wording rules (DeVellis, 2003), such as avoiding exceptionally lengthy items, double-barrelled statements, and double negatives.

Finally, the Likert scale format was selected because of its prior use in the measurement of perceptions of digital environments (Brooke, 1996; O'Brien and Toms, 2010) and because of its summative nature: the data are treated as interval data that can be summed to provide a score.
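As a brief illustration of this summative scoring, the following minimal Python sketch assumes a 5-point agreement scale, seven items per facet, and hypothetical responses; it is not part of the scale itself:

```python
# Hypothetical one-respondent example of summative (Likert) scoring:
# per-facet scores are sums of 1-5 item ratings, treated as interval data.
responses = {
    "trigger_rich":        [4, 5, 4, 4, 3, 4, 5],
    "enables_connections": [3, 4, 4, 3, 4, 4, 3],
    "highlights_triggers": [2, 3, 3, 2, 3, 3, 2],
    "enables_exploration": [5, 4, 5, 5, 4, 4, 5],
    "leads_to_unexpected": [3, 3, 4, 3, 2, 3, 3],
}

facet_scores = {facet: sum(ratings) for facet, ratings in responses.items()}
overall = sum(facet_scores.values())  # summed into a single scale score
print(facet_scores, overall)
```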

Summary

A total of forty-three items were generated for the serendipitous digital environment scale, seven to ten items for each of the five hypothesized facets of the serendipitous digital environment derived from prior research: trigger-rich, enables connections, highlights triggers, enables exploration, and leads to the unexpected. Once the facet definitions and their representative pool of items were developed, two studies, described in the following two stages, were conducted to assess the content validity of the scale.

Stage 2: Expert review

Method

An expert review involves the assessment of a measure by experts in the content domain (DeVellis, 2003). Therefore, eight international researchers with expertise in the area of serendipity and related constructs in the fields of information seeking and behaviour and human-computer interaction were asked to review the facet definitions and the pool of items.

Questionnaire

The experts were asked to assess the appropriateness and clarity of each facet definition, to offer suggestions for improvement, and to consider the facets as a whole and comment on whether there are more, or perhaps fewer, than five facets. They were also asked to assess the items that had been generated for each of the facets. Item wording best practices were provided and participants were asked how well the items align with the facets they are meant to capture, whether the items are clear, and to suggest items that would better capture the essence of the facets. The full questionnaire can be found in McCay-Peet (2013, Appendix 3).

Procedure

E-mail invitations were sent to researchers who had conducted research on serendipity and related constructs. Those who responded were e-mailed the questionnaire described above: a Microsoft Word document containing the consent form, an introduction to the expert review, the facets and scale items, and a final question asking whether or not participants' contributions could be acknowledged. Participants were asked to treat the document as a working document and return it via e-mail. One of the participants requested an in-person interview; this request was accommodated and the questionnaire was used as the interview protocol, with comments recorded by hand by the interviewer, the first author.

Data analysis

Responses were entered into and analysed in Microsoft Excel spreadsheets so that the comments relating to each of the facets and sets of items could be reviewed and common suggestions and critiques noted. Many of the suggestions for changes to the facet definitions were followed, though not all. Particular attention was paid to similar assessments of the facets made by two or more participants. For the items, all comments related to each of the items were reviewed and the decision was made to keep, remove, or revise each item. For new items that were suggested, each was reviewed and either rejected or added to the pool of items (sometimes with minor changes). While all critiques by participants were taken into consideration, the final decision for changes to the facets and items rested with the scale developers (DeVellis, 2003).

Results

A number of common critiques were raised and participants offered numerous and valuable suggestions for improvements. Many revisions were made to improve the clarity of the facet definitions and items and ultimately to strengthen content validity. The main critiques and our responses are described in relation to the adequacy of the number of facets, the facet definitions, and the adequacy of the items in the following sections. But we will start by addressing the more general critiques.

General critiques of the facets and items

Adequacy of the number of facets

Two participants suggested other possible facets, reflecting the playfulness of a digital environment, how it stimulates curiosity, and how it supports the incubation of ideas. A separate facet for playfulness, however, would likely overlap with the discovery and exploration elements of the enables connections and enables exploration facets. Conceptually, stimulating curiosity could be encompassed within the highlights triggers and trigger-rich facets as both have the potential to stimulate curiosity. Support for the incubation of ideas could be manifested in the features and functions that enable connections as well as those that enable capturing. However, as previously noted, an environment that enables capturing is unlikely to contribute to perceptions of an environment as serendipitous.

Two participants questioned the inclusion of the facet leads to the unexpected and whether it was an outcome rather than a facet, equating it with serendipity itself. However, we would argue that while the unexpected is very closely associated with serendipity, as evidenced by the main elements of serendipity, which include the unexpected thread (McCay-Peet and Toms, in press), it is not synonymous with serendipity. The same is true for the serendipitous digital environment. For example, consider a digital library with a recommender system that recommends papers somewhat related to the paper currently being viewed. The digital library may be perceived to highlight triggers, enable connections, and lead to the unexpected, contributing to the overall perception that the digital environment supports serendipity.

Facet definitions

While all five facets were retained, the facet definitions were refined based on input from the participants. These changes to the facets and what prompted them are described below.

The original facet definition of enables exploration, 'the digital environment supports the unimpeded examination of its information, ideas, or resources' was revised as follows.

Enables exploration: A user’s assessment of the degree to which a digital environment supports exploration and examination of its information, ideas, or resources.

Changes were made to the original facet definition to address concerns raised by four of the participants about the use of the word unimpeded: unimpeded is a negative rather than a positive descriptor, its meaning is unclear, and impediments at times prompt creativity and perhaps, by extension, serendipity. Therefore, unimpeded was simply removed. Also, exploration was added to the facet definition, as it was evident from the reviews that examination alone did not adequately capture this facet's intended meaning.

The trigger-rich facet definition, 'the digital environment is filled with a variety of information, ideas, or resources interesting and useful to the user' was revised to read:

Trigger-rich: a user’s assessment of the degree to which a digital environment contains a variety of information, ideas, or resources that are interesting and useful to the user.

Filled with was changed to contains to simplify the wording of the trigger-rich facet definition.

The enables connections facet definition was revised from 'the digital environment exposes users to combinations of information, ideas, or resources that make relationships between topics apparent' as follows:

Enables connections: A user’s assessment of the degree to which a digital environment makes relationships or connections between information, ideas, or resources apparent.

Participants' suggestions to simplify the enables connections facet helped in rephrasing the definition.

The highlights triggers facet definition was changed from 'the digital environment actively points to or alerts users to interesting and useful information, ideas, or resources using visual, auditory, or tactile cues' to read:

Highlights triggers: A user’s assessment of the degree to which a digital environment brings interesting and useful information, ideas, or resources to the user’s attention.

Two participants indicated that the described environment sounded overwhelming and that digital environments may provide more subtle cues to capture users’ attention. Therefore, to allow for latitude on what constitutes bringing something to one’s attention, the highlights triggers facet’s definition was simplified: rather than actively points to or alerts users to content using visual, auditory, or tactile cues, it was shortened to bringing content to the user’s attention.

The leads to the unexpected facet definition, 'The digital environment provides fertile ground for unanticipated or surprising interactions with information, ideas, or resources' was revised to read:

Leads to the unexpected: a user’s assessment of the degree to which a digital environment provides opportunities for unexpected interactions with information, ideas, or resources.

The participants offered helpful suggestions regarding wording and clarification of this leads to the unexpected facet definition.

Adequacy of the items

A number of changes were made to the items of each of the facets based on input from the participants. Of the forty-three original items, seventeen were retained, fourteen changed, four added, and twelve removed, leaving a total of thirty-five items with seven items per facet.

Summary

While all five facets of the preliminary serendipitous digital environment scale were retained and none added, the facet definitions were refined based on input from the participants. The forty-three original items were reduced to thirty-five items with seven items per facet after some items were added, removed, and refined. The item pools for each of the facets were adjusted to clarify specific items, reduce redundancy, and to help ensure that the items together represent the content domain of their respective facets.

Stage 3: Analysis of variance approach

To assess further the content validity of the scalar items, an analysis of variance (ANOVA) approach was used following the procedures of Hinkin and Tracey (1999) to reduce the subjectivity of the item inclusion decision-making process. The approach provides a means of testing the strength of the relationship between an item and the facet it is posited to reflect through mean ratings and tests of significance, and highlights items that may be conceptually confounded (reflecting more than one facet) or that simply suggest possible correlations between facets. This method has been used by Yao, Wu and Yang (2008) and is recommended by MacKenzie et al. (2011).

Method and study design

With the analysis of variance approach, participants are asked to rate the extent to which items match facet definitions on a scale of 1 (not at all) to 5 (completely). In other words, participants were asked not only how well items such as '[The digital environment] is easy to explore' capture the meaning or gist of the enables exploration facet they were developed to reflect, but also how well they capture the trigger-rich, highlights triggers, enables connections, and leads to the unexpected facets. As Hinkin and Tracey describe it,

A one-way ANOVA provides a direct method for assessing an item’s content validity by comparing the item’s mean rating on one conceptual dimension to the item’s ratings on another comparative dimension. Thus, it can be determined whether an item’s mean score is statistically significantly higher on the proposed theoretical construct. (Hinkin and Tracey, 1999, p. 181)

It is recommended that only those items meeting two criteria of content validity be retained for further testing (Hinkin and Tracey, 1999):
  1. Items have the highest mean rating on their posited facet; and
  2. Items have a significantly higher mean rating (p < 0.05) on their posited facet.
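
Expressed in code, the two retention criteria might be checked as follows. This is a minimal illustrative sketch: the mean ratings are item E1's from Table 1 below, but the helper function and the pairwise p-values are hypothetical (the study's comparisons were run in SPSS).

```python
# Hypothetical check of Hinkin and Tracey's two content-validity criteria
# for one item, given its mean rating on each facet and Bonferroni-adjusted
# p-values for comparisons of the posited facet against each other facet.

def meets_criteria(means: dict, posited: str, pairwise_p: dict,
                   alpha: float = 0.05) -> bool:
    others = [f for f in means if f != posited]
    highest = all(means[posited] > means[f] for f in others)    # criterion 1
    significant = all(pairwise_p[f] < alpha for f in others)    # criterion 2
    return highest and significant

means_E1 = {"E": 3.72, "T": 2.36, "C": 2.70, "H": 2.57, "U": 2.85}  # Table 1
p_E1 = {"T": 0.001, "C": 0.004, "H": 0.002, "U": 0.012}             # hypothetical
print(meets_criteria(means_E1, "E", p_E1))  # True -> retain E1
```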

Previous studies (Hinkin and Tracey, 1999; Yao et al., 2008) using the analysis of variance approach had fewer facets and items to assess than the serendipitous digital environment scale. Therefore, to reduce the potential of participant fatigue and attrition, the thirty-five items of the serendipitous digital environment scale were divided with seventeen items in survey group 1 and the remaining eighteen items in survey group 2. To minimize order effects, each survey group contained two surveys in which the facets were presented in a different order. While Hinkin and Tracey and Yao et al. used a paper-based questionnaire, this study delivered the questionnaire through the Web.
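The resulting four-survey design can be sketched as follows. This is illustrative only: the item identifiers are placeholders and the two facet presentation orders are assumed for the example, as the specific orders used are not reported here.

```python
import random

FACETS = ["E", "T", "C", "H", "U"]                 # facet abbreviations
items = [f"item_{i:02d}" for i in range(1, 36)]    # placeholders for 35 items

group1, group2 = items[:17], items[17:]            # 17- and 18-item halves
order_a, order_b = FACETS, list(reversed(FACETS))  # two assumed facet orders

# Two item groups x two facet orders = four survey versions
surveys = [(group1, order_a), (group1, order_b),
           (group2, order_a), (group2, order_b)]

def assign_survey():
    """Randomly redirect a participant to one of the four survey versions."""
    return random.choice(surveys)
```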

Participants and recruitment

University students were recruited through social media sites, university student e-mail lists, and notices on campus bulletin boards. Hinkin and Tracey (1999) suggest that fifty participants are sufficient for this type of analysis. A total of 107 participants completed the Web-based survey; fifty-three assessed one half of the scalar items (seventeen items) and fifty-four assessed the other half (eighteen items). The incentive to participate was a chance to win one of four $50 gift cards for an online store. Previous research has found that women tend to have higher survey response rates than men (Sax, Gilmartin and Bryant, 2003) and this was reflected in this study's demographics. Participants were predominantly female (N=87, 81.3%), which may introduce some sex bias in the findings; however, no differences in the study results by sex were found. Participants were mostly between the ages of eighteen and thirty (N=93, 86.9%), with 18 to 20 the most common age group (N=42, 39.3%). The majority were undergraduate students (N=61, 57%) or masters students (N=37, 34.6%).

Survey instrument

The facets and items developed in the preceding sections were adjusted for this study to suit the study task. The word 'it' replaced 'the digital environment'. For example, the question for the enables exploration facet read: 'How well do the following statements capture the gist or meaning of "it supports exploration and examination of its information, ideas, or resources"?' Ratings were on a scale of 1 (not at all) to 5 (completely). Items such as '[This digital environment] is easy to explore' were altered to read 'It is easy to explore'. This approach was designed to remind participants to think about how well the items matched the facet definitions rather than about their own level of agreement with the statements in relation to whatever digital environment might come to mind. Each facet definition was presented at the top of the screen and participants were asked to rate all seventeen or eighteen items against each of the five facets.

Procedure

The Web-based survey was hosted on a university server and used LimeSurvey, open-source survey software. The study ran from October 18 to November 8, 2012 and took an average of fourteen minutes to complete, with a median completion time of eleven minutes. A link to the Web-based survey was included in recruitment materials and participants were randomly redirected to one of the four surveys when they opened the survey link. Participants were presented with the following steps in a series of self-directed Web pages:

  1. consent form;
  2. demographics questionnaire;
  3. introduction to the task through an example;
  4. task: assessment of the degree to which each of the items matches each of the five facets;
  5. option to submit e-mail address for a chance to win a gift card or obtain copy of consent form;
  6. option to comment on survey; and
  7. thank-you.

Data analysis

Data were loaded into SPSS 17.0. Of the 107 participant data sets, seven were flagged as possible careless responders because of their consistent item ratings of '3' or '5', which suggested these participants had not thought through their responses. However, the analysis was performed with and without these seven data sets and the results were the same; therefore, all data sets were retained, as these participants had taken the time to respond and there was no statistical reason to remove them. Repeated one-way analyses of variance were conducted and Bonferroni pairwise comparison tests (p < 0.05) were performed to test whether each item's mean rating was significantly higher on the posited facet than on the other facets. Those items that did not meet the content validity criteria suggested by Hinkin and Tracey (1999), i.e., having the highest mean rating, and a significantly higher mean rating, on their posited facet, were revised together with their facets. Essentially, the results were used as a tool to re-examine and refine the facets and items rather than simply as a tool to reduce the original set of thirty-five items. In addition, the items and facets were examined as a whole to ensure that together they were capturing the conceptual space of the serendipitous digital environment.
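To make the per-item analysis concrete, the following sketch re-creates it in Python on simulated ratings. This is illustrative only: the reported analysis was conducted in SPSS, and the data and library choices below are assumptions rather than the study's own.

```python
# Simulated per-item analysis: one-way repeated-measures ANOVA across the five
# facet ratings, then Bonferroni-corrected paired comparisons of the posited
# facet ('E' here) against each other facet.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(42)
facets = ["E", "T", "C", "H", "U"]
n = 53  # participants rating this item

ratings = pd.DataFrame({
    "participant": np.tile(np.arange(n), len(facets)),
    "facet": np.repeat(facets, n),
    # 1-5 appropriateness ratings, simulated with facet-specific means
    "rating": np.clip(np.rint(rng.normal(
        loc=np.repeat([3.7, 2.4, 2.7, 2.6, 2.9], n), scale=0.8)), 1, 5),
})

# Omnibus repeated-measures ANOVA
print(AnovaRM(ratings, depvar="rating", subject="participant",
              within=["facet"]).fit())

# Paired, one-sided comparisons of the posited facet against the others
posited = ratings.loc[ratings.facet == "E", "rating"].to_numpy()
pvals = [stats.ttest_rel(posited,
                         ratings.loc[ratings.facet == f, "rating"].to_numpy(),
                         alternative="greater").pvalue
         for f in facets[1:]]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(dict(zip(facets[1:], p_adj)), "meets criterion 2:", bool(reject.all()))
```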

Results

Of the thirty-five items, thirty-one satisfied the first test of content validity, with their highest mean ratings on their posited facets. The second test of content validity, that items have a significantly higher rating on their posited facet, was met by fifteen of the thirty-five items. The enables exploration, enables connections, and leads to the unexpected facets fared best, with the majority of their items (four to five each) having their highest mean ratings solely and significantly on their respective facets. The trigger-rich and highlights triggers facets fared poorly on the second test of content validity, with only one satisfactory item each.

The results of this analysis are summarised in Tables 1 to 5, each followed by a description of the findings for that facet. The final column in each table, 'Highest facet(s)', indicates the facet(s) on which the items had their highest mean ratings. Where two or more facets appear in this column, the item was deemed an appropriate fit with more than one of the facets. For example, participants considered 'The content contained in it is diverse' (T1) a good reflection of T1's posited facet, trigger-rich, as well as of the leads to the unexpected facet. Conversely, where only one facet is indicated, the item was considered a good reflection of only that facet. For example, the item 'It is easy to explore' (E1) was a good reflection of the posited facet of enables exploration only.

Note: in each table, * p < 0.05, ** p < 0.01, *** p < 0.001; N = number of participants. Mean value columns are labelled by facet: E = enables exploration; T = trigger-rich; C = enables connections; H = highlights triggers; U = leads to the unexpected. In the 'Highest facet(s)' column, All = all facets (E, T, C, H, and U).


Table 1: Results of appropriateness ratings of each item on the facet enables exploration

| Original facet and items | N | E | T | C | H | U | F (df₁, df₂) | Highest facet(s) |
|---|---|---|---|---|---|---|---|---|
| E1 It is easy to explore | 53 | 3.72 | 2.36 | 2.70 | 2.57 | 2.85 | F(4, 208) = 14.23** | E |
| E2 It supports exploration | 53 | 4.28 | 2.64 | 2.89 | 2.62 | 3.34 | F(4, 208) = 26.86** | E |
| E3 It is easy to wander around in it | 53 | 3.60 | 2.36 | 2.58 | 2.51 | 2.92 | F(4, 208) = 14.10** | E |
| E4 It offers multiple pathways to information | 54 | 3.44 | 3.20 | 3.09 | 2.96 | 3.19 | F(4, 212) = 1.82 | All |
| E5 There are lots of ways to access information in it | 54 | 3.19 | 3.30 | 2.85 | 2.74 | 2.91 | F(4, 212) = 2.69* | All |
| E6 There are many ways to discover information in it | 54 | 3.67 | 3.41 | 2.83 | 3.04 | 3.39 | F(4, 212) = 6.71** | E, T, U |
| E7 It invites examination of its content | 54 | 4.30 | 3.09 | 3.06 | 3.33 | 3.02 | F(4, 212) = 14.70*** | E |

Enables exploration. Six of the seven items of the enables exploration facet had the highest mean rating on this facet. Items E1, E2, E3, and E7 exhibited adequate content validity, showing significantly higher ratings on their intended facet than the other four facets. However, there were no significant differences between the ratings on this facet and other facets for three of the items (E4, E5, E6). Items ‘It offers multiple pathways to information’ (E4) and ‘There are lots of ways to access information in it’ (E5), in particular, had no significant differences across all five facets. In other words, similarity in meaning was found between these two items and all of the facet definitions rather than enables exploration alone.


Table 2: Results of appropriateness ratings of each item on the facet trigger-rich

| Original facet and items | N | E | T | C | H | U | F (df₁, df₂) | Highest facet(s) |
|---|---|---|---|---|---|---|---|---|
| T1 The content contained in it is diverse | 53 | 2.81 | 3.79 | 2.81 | 2.94 | 3.34 | F(4, 208) = 9.14** | T, U |
| T2 It is rich with interesting ideas | 53 | 2.72 | 4.08 | 2.49 | 3.55 | 3.15 | F(4, 208) = 24.59** | T |
| T3 It offers exposure to a wide variety of information | 53 | 3.62 | 4.06 | 2.74 | 3.32 | 3.42 | F(4, 208) = 12.02** | T, E |
| T4 There is a depth of information in it | 53 | 3.25 | 3.58 | 2.64 | 2.94 | 3.11 | F(4, 208) = 6.61** | T, E, U |
| T5 It is full of information useful to me | 54 | 2.94 | 4.11 | 2.74 | 3.57 | 2.85 | F(4, 212) = 15.93** | T, H |
| T6 I often find information of value to me in it | 54 | 2.80 | 4.20 | 2.87 | 3.80 | 2.85 | F(4, 212) = 22.80** | T, H |
| T7 I would describe it as a treasure trove of information | 54 | 3.22 | 3.81 | 2.74 | 3.35 | 3.00 | F(4, 212) = 8.35** | T, H |

Trigger-rich. While all of the trigger-rich items had their highest mean ratings on this facet, only one item, 'It is rich with interesting ideas' (T2), had its highest mean rating solely and significantly on the posited facet. The remaining six items also rated highly on one or more of the enables exploration, highlights triggers, and leads to the unexpected facets.


Table 3: Results of appropriateness ratings of each item on the facet enables connections

| Original facet and items | N | E | T | C | H | U | F (df₁, df₂) | Highest facet(s) |
|---|---|---|---|---|---|---|---|---|
| C1 It enables me to make connections between ideas | 53 | 3.09 | 2.60 | 4.30 | 2.36 | 3.30 | F(4, 208) = 27.74** | C |
| C2 It makes associations between ideas obvious | 53 | 2.64 | 2.55 | 4.40 | 2.38 | 2.72 | F(4, 208) = 33.52** | C |
| C3 I can see connections between topics in it | 53 | 2.74 | 2.45 | 4.36 | 2.34 | 3.06 | F(4, 208) = 31.16** | C |
| C4 It is easy to see links between information in it | 54 | 3.28 | 2.89 | 4.28 | 2.89 | 2.81 | F(4, 212) = 16.60** | C |
| C5 In it I see relationships between topics I had not thought of before | 54 | 3.56 | 3.20 | 4.13 | 3.02 | 3.61 | F(4, 212) = 9.36** | C, U |
| C6 It helps me to make useful connections between resources | 54 | 3.52 |  | 4.20 | 3.30 | 3.07 | F(4, 212) = 10.21** | C |
| C7 It allows me to make insightful connections | 54 | 3.48 | 3.22 | 4.02 | 3.13 | 3.15 | F(4, 212) = 7.95** | C, E |

Enables connections. Again, all of the enables connections items had their highest mean ratings on this facet. Five of the seven items were rated significantly highest on the posited facet (C1, C2, C3, C4, C6); however, the remaining two items also rated highly on the leads to the unexpected facet (C5) and the enables exploration facet (C7).


Table 4: Results of appropriateness ratings of each item on the facet highlights triggers

| Original facet and items | N | E | T | C | H | U | F (df₁, df₂) | Highest facet(s) |
|---|---|---|---|---|---|---|---|---|
| H1 It often points to valuable information | 53 | 2.89 | 3.85 | 2.77 | 3.92 | 2.98 | F(4, 208) = 15.64** | H, T |
| H2 It draws my attention to useful information | 53 | 2.96 | 3.81 | 2.77 | 4.30 | 3.06 | F(4, 208) = 20.96** | H, T |
| H3 It highlights information that interests me | 53 | 2.58 | 4.00 | 2.40 | 3.85 | 2.79 | F(4, 208) = 29.72** | H, T |
| H4 The way that it presents content often captures my attention | 53 | 2.77 | 3.36 | 2.77 | 4.17 | 2.94 | F(4, 208) = 16.29** | H |
| H5 It alerts me to information that helps me | 54 | 3.02 | 3.63 | 3.02 | 4.02 | 2.98 | F(4, 212) = 10.19** | H, T |
| H6 Features of it catch my eye | 54 | 2.70 | 2.87 | 2.41 | 3.24 | 2.98 | F(4, 212) = 5.67** | H, E, T, U |
| H7 It exposes me to information that I would not normally pay attention to | 54 | 3.31 | 3.26 | 2.93 | 3.83 | 3.87 | F(4, 212) = 8.69** | H, E, U |

Highlights triggers. Six of the seven items had their highest mean rating on the highlights triggers facet. However, only one item, 'The way that it presents content often captures my attention' (H4), was significantly highest on its posited facet. There were no significant differences between ratings on this facet and one or more of the enables exploration, trigger-rich, and leads to the unexpected facets for the remaining six items.


Table 5: Results of appropriateness ratings of each item on the facet leads to the unexpected

| Original facet and items | N | E | T | C | H | U | F (df₁, df₂) | Highest facet(s) |
|---|---|---|---|---|---|---|---|---|
| U1 I bump into unexpected content in it | 53 | 2.74 | 2.66 | 2.38 | 2.98 | 4.23 | F(4, 208) = 28.87** | U |
| U2 I encounter the unexpected in it | 53 | 2.72 | 2.72 | 2.30 | 2.92 | 4.13 | F(4, 208) = 25.15** | U |
| U3 I am often surprised by what I find in it | 53 | 2.64 | 2.72 | 2.30 | 2.72 | 4.02 | F(4, 208) = 26.94** | U |
| U4 In it I come across topics by chance | 54 | 3.04 | 2.85 | 2.43 | 2.87 | 3.69 | F(4, 212) = 10.08** | U, E |
| U5 It exposes me to information I am not familiar with | 54 | 3.39 | 3.33 | 2.61 | 3.61 | 3.81 | F(4, 212) = 12.33** | U, E, T, H |
| U6 It leads me to information that is unexpectedly valuable | 54 | 3.37 | 3.69 | 2.85 | 3.96 | 3.83 | F(4, 212) = 11.25** | U, E, T, H |
| U7 I stumble upon information in it | 54 | 2.85 | 2.78 | 2.72 | 3.15 | 3.80 | F(4, 212) = 10.69** | U |

Leads to the unexpected. Finally, six of the seven leads to the unexpected items had their highest mean rating on this facet; the exception, 'It leads me to information that is unexpectedly valuable' (U6), had a higher mean rating on the highlights triggers facet. Four of the seven items were rated significantly highest on the posited facet (U1, U2, U3, U7); the remaining three items also rated highly on one or more of the other facets.

For three of the facets, the majority of items met the two criteria of content validity outlined by Hinkin and Tracey (1999): enables exploration (four items), enables connections (five items), and leads to the unexpected (four items). The other two facets, trigger-rich and highlights triggers, had only one item each that exhibited sufficient content validity to meet both criteria. Rather than simply eliminating the items that did not meet both criteria, as Hinkin and Tracey suggest, we used the results to help us modify the items to correct for their potentially confounding meanings; eliminating too many items would have jeopardized construct validity.
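To make the two criteria concrete, the following sketch flags an item as retainable only when its posited facet has the highest mean rating (criterion 1) and that rating is significantly higher than on every other facet (criterion 2). Hinkin and Tracey's significance test is framed within the ANOVA; the paired t-tests here are a simplified stand-in, and the data and names are illustrative, not the study's.

```python
import numpy as np
from scipy import stats

FACETS = ["E", "T", "C", "H", "U"]  # abbreviations as in the Highest facet column

def meets_criteria(ratings, posited, alpha=0.05):
    """ratings: (n_participants, 5) array, columns ordered as FACETS.

    True only if the posited facet has the highest mean rating and is rated
    significantly higher than each of the other four facets.
    """
    p_idx = FACETS.index(posited)
    means = ratings.mean(axis=0)
    if means.argmax() != p_idx:           # criterion 1: highest mean rating
        return False
    for j in range(len(FACETS)):          # criterion 2: significantly highest
        if j == p_idx:
            continue
        t, p = stats.ttest_rel(ratings[:, p_idx], ratings[:, j])
        if not (t > 0 and p < alpha):
            return False
    return True

# Example: synthetic ratings for an item posited on enables connections ('C')
rng = np.random.default_rng(1)
ratings = rng.normal(loc=[3.0, 2.6, 4.3, 2.4, 3.3], scale=0.8, size=(53, 5))
print(meets_criteria(ratings, "C"))
```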

Trigger-rich is defined as a digital environment that contains a variety of information, ideas, or resources interesting and useful to the user, while highlights triggers is defined as a digital environment that brings interesting and useful information, ideas, or resources to the user's attention. The former facet was developed to capture the importance of a diversity of interesting and useful content within a digital environment as a way to facilitate serendipity; the latter was developed to capture the ability of the digital environment to highlight or point out interesting and useful information. While all five facets' definitions make reference to information, ideas, or resources, only the trigger-rich and highlights triggers facets qualify this with interesting and useful, and both facets' items contain qualifiers such as valuable, useful, and helps me. This common wording may have led participants to rate trigger-rich and highlights triggers items highly on both facets. It is problematic because it suggests that, when responding to the serendipitous digital environment scale, users may have a difficult time distinguishing between environments that highlight interesting and useful content and environments that simply contain it.

The highlights triggers definition and its items could be revised to correct for this potentially confounding effect by removing the interesting and useful aspect from both. However, while it may be the case that the highlights triggers facet is attempting to do too much, i.e., capturing the interesting and useful content component as well as the attention-grabbing component of a serendipitous digital environment, it may also be that the two facets are simply related: correlated, but not to the degree that they measure the same thing, in which case they would still function as two distinct factors in future factor analyses of the scale. Rather than make a decision at this point in the scale development process, the highlights triggers items referencing interesting and useful were retained, and new items were generated that do not qualify the type of content highlighted.
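One empirical check on whether trigger-rich and highlights triggers share too much conceptual space is the correlation between participants' facet-level scores: a value approaching 1.0 would suggest a single underlying factor, while a moderate value is compatible with related but distinct facets. A minimal sketch with synthetic data follows; the item labels echo the tables above, but nothing here is the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
items = [f"T{i}" for i in range(1, 8)] + [f"H{i}" for i in range(1, 8)]
responses = pd.DataFrame(rng.integers(1, 6, size=(54, 14)), columns=items)

def facet_score(frame, prefix):
    """Mean of a participant's ratings across one facet's items."""
    cols = [c for c in frame.columns if c.startswith(prefix)]
    return frame[cols].mean(axis=1)

# Pearson r between facet scores: near 1.0 suggests one underlying factor;
# a moderate r is compatible with two related but distinct facets.
r = facet_score(responses, "T").corr(facet_score(responses, "H"))
print(f"trigger-rich vs highlights triggers: r = {r:.2f}")
```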

As decisions to add, remove, or revise items were made, the item pools for each of the five facets were reviewed to check that the collective items captured the essence of both their respective facets and the serendipitous digital environment as a whole. This was to help ensure that the items would do what the scale is intended to do, i.e., provide a means for assessing how well a digital environment facilitates serendipity. For example, had we removed all of the items that did not meet Hinkin and Tracey's (1999) criteria, we would have all but eliminated the trigger-rich facet, which had been identified in earlier research and confirmed through our expert review to be an important characteristic of the serendipitous digital environment. A careful examination of the results allowed us to identify why some items did not meet both criteria and to revise those items rather than discard them entirely. In the end, a set of thirty-seven items was selected for further testing.

Summary

Fifteen of the thirty-five items of the serendipitous digital environment scale exhibited sufficient content validity to be retained under the two criteria of highest mean rating and significantly highest mean rating suggested by Hinkin and Tracey (1999). Items and facets, however, were re-examined and revised based on the results and reviewed to ensure the scale could effectively capture the serendipitous digital environment. This holistic assessment is important at this stage of scale development; we need to be cautious not to sacrifice the construct validity of the serendipitous digital environment scale for content validity at the facet level. Rather than reducing the number of items in the scale, in the end we increased the item pool to thirty-seven items. What became clear through this study was that it was difficult to know whether the highlights triggers and trigger-rich facets would function as two separate factors when principal components analysis is applied to the items in a future study. Because of apparent correlations between several of the highlights triggers and trigger-rich items, more highlights triggers items were added to correct for the potentially confounding effect of items with qualifiers such as interesting and useful, which mirrored the trigger-rich items. Going forward into the next stage of scale development, involving principal components analysis, the highlights triggers facet will include both items such as 'I am pointed toward content in [the digital environment]' and items such as 'I am directed toward valuable information in [the digital environment]'; the latter includes the valuable qualifier while the former does not. Because we do not yet know why there was so much common ground between the highlights triggers and trigger-rich items, this seemed the most conservative approach.
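As a sketch of that next stage, the following runs a principal components analysis over hypothetical responses to the 37 items; whether the highlights triggers and trigger-rich items load on one component or two is precisely the open question. The sample size, data, and use of scikit-learn are assumptions for illustration, not the study's design.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
responses = rng.integers(1, 6, size=(107, 37)).astype(float)  # participants x items

X = StandardScaler().fit_transform(responses)  # standardise each item
pca = PCA(n_components=5).fit(X)               # one component per posited facet

# Loadings approximate each item's correlation with each component; if the
# highlights triggers and trigger-rich items load on the same component,
# the two facets may collapse into a single factor.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
print("loadings shape:", loadings.shape)  # (37 items, 5 components)
```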

Discussion

This research developed a preliminary serendipitous digital environment scale and tested its content validity while also refining it for future testing. Both approaches to testing content validity, the expert review and the analysis of variance approach, were useful in different ways. The expert review was helpful as a tool for conceptually refining the facets and for identifying and exploring potential gaps in the scale because of the expertise the participants could bring to the discussion. For example, participants in the expert review pointed out the difference between highlighting triggers in a physical versus a digital environment, which led to a softening of the language in the facet definition; this is something that, by its nature, the analysis of variance study could not do. The analysis of variance approach, on the other hand, was useful for revising the items to ensure the distinctiveness of each set of facet items: through it we were able to take note of, and take steps to reduce, potentially high correlations between the trigger-rich and highlights triggers facet items that were not identified in the expert review. However, we did not follow Hinkin and Tracey's (1999) guideline to select only items that meet the two criteria of highest mean rating and significantly highest mean rating on the posited facet. Strict adherence to the latter criterion would have reduced the pool of items considerably and would have had a significant impact on the construct validity of the scale moving forward. Moreover, while we would expect some correlation among the five facets, it was difficult to tell from either the expert review or the analysis of variance study whether the facets are confounded, sharing too much conceptual space, or whether they are related but distinct enough to warrant separate facets (e.g., trigger-rich and enables exploration). Instead, we used the analysis of variance criteria to explore and mitigate potential problems, a useful stage in scale development.

This research had its limitations. The expert review relied on the opinions of the participants, however expert, and final decisions regarding the inclusion and revision of items, though guided by both the expert review and the analysis of variance study, were the scale developers' own; different researchers may have developed a different set of facets and items. Given the early stage of the scale's development, more facets may yet be identified and items revised in future studies. Future research will also reduce the 37-item scale to a more parsimonious set of items through principal components analysis. Its construct validity also needs to be tested: is it measuring what it was intended to measure? Do digital environments that are perceived to be trigger-rich, enable exploration, enable connections, highlight triggers, and lead to the unexpected facilitate serendipity? The subjective nature of serendipity, whether we believe we experienced serendipity or not, appears best measured through self-report. Future research, therefore, will test relationships between responses to the scale and how frequently users perceive they experience serendipity in a specific digital environment. Other variables hypothesised to influence serendipity, such as openness to experience and the broader work environment, also need to be examined in relation to the serendipitous digital environment.
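The construct validity test proposed here could take a minimal form like the sketch below: correlate each participant's overall scale score with their self-reported frequency of serendipitous experiences in the environment. The data and variable names are illustrative assumptions, not a specification of the planned study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
scale_score = rng.normal(3.5, 0.6, size=107)   # mean of a participant's 37 item ratings
perceived_freq = rng.integers(1, 6, size=107)  # self-reported serendipity frequency, 1-5

# A positive, significant association would support the claim that the scale
# captures how well an environment facilitates serendipity.
rho, p = stats.spearmanr(scale_score, perceived_freq)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```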

Conclusion

Grounded in a review of prior research as well as in-depth interviews with scholars and professionals about their serendipitous experiences (McCay-Peet and Toms, in press), this paper described the initial development of a scale to measure a serendipitous digital environment: facets were defined, items generated, and the content validity of the scale assessed. An expert review led to the refinement of the five facets and the item pool. Content validity was further assessed using a quantitative approach meant to remove some of the subjectivity involved in item selection and development. In the end, a revised five-facet, 37-item serendipitous digital environment scale was developed; it will undergo evaluation, refinement, and validation in future studies. Methodologically, this research highlights the value of using different approaches to assess content validity. It also provides a strong foundation for the further development of an instrument that will be of value to researchers and practitioners interested in facilitating serendipity in digital environments.

Acknowledgements

This paper is part of the first author’s PhD research on serendipity. Thanks to the reviewers for their careful reading of this manuscript and very helpful comments and suggestions for improvement. Special thanks to the international set of researchers who participated in the expert review: Paul André, Lennart Björneborn, José Campos, Jannica Heinström, Nigel Ford, Stephann Makri, Anabel Quan-Haase, and Borchuluun Yadamsuren. Research was supported by grants to Toms from SSHRC, CFI, and the Canada Research Chairs Program while she was at Dalhousie University as well as by GRAND NCE. McCay-Peet was also awarded a SSHRC Doctoral Scholarship to support the research.

About the authors

Lori McCay-Peet graduated with her PhD in May 2014 from Dalhousie University, Halifax, Nova Scotia, Canada. She is currently a post-doctoral fellow in the Department of Sociology at the University of Western Ontario, London, Ontario, Canada. She can be reached at: Lmccaype@uwo.ca
Elaine G. Toms is Professor in Information Science, Information School, The University of Sheffield and former Canada Research Chair in Management Informatics, Dalhousie University, Halifax, Nova Scotia, Canada. She can be reached at: e.toms@sheffield.ac.uk
E. Kevin Kelloway is Professor in the Department of Psychology, Saint Mary’s University, Halifax, Nova Scotia, Canada and the Canada Research Chair in Occupational Health Psychology. He can be reached at: kevin.kelloway@smu.ca

References
  • Beale, R. (2007). Supporting serendipity: using ambient intelligence to augment user exploration for data mining and web browsing. International Journal of Human-Computer Studies, 65(5), 421–433.
  • Björneborn, L. (2008). Serendipity dimensions and users' information behaviour in the physical library interface. Information Research, 13(4), paper 370. Retrieved from http://InformationR.net/ir/13-4/paper370.html (Archived by WebCite® at http://www.webcitation.org/6REoRyS5E)
  • Blandford, A. & Attfield, S. (2010). Interacting with information. San Rafael, CA: Morgan & Claypool Publishers.
  • Blandford, A., Stelmaszewska, H. & Bryan-Kinns, N. (2001). Use of multiple digital libraries: a case study. In JCDL '01, Proceedings of the 1st ACM/IEEE-CS Joint Conference on Digital Libraries, (pp. 179–188). New York, NY: ACM Press.
  • Brooke, J. (1996). SUS: A quick and dirty usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester & A. L. McClelland (Eds.), Usability evaluation in industry (pp. 189-194). London: Taylor & Francis.
  • Burnes, B. (2004). Kurt Lewin and complexity theories: back to the future? Journal of Change Management, 4(4), 309–325.
  • Campos, J. A. & Figueiredo, A. D. de. (2002). Programming for serendipity. In Proceedings of the 2002 AAAI Fall Symposium on Chance Discovery - The Discovery and Management of Chance Events, (pp. 48-60). Palo Alto, CA: Association for the Advancement of Artificial Intelligence. (AAAI Technical Report FS-02-01)
  • Dantonio, L., Makri, S. & Blandford, A. (2012). Coming across academic social media content serendipitously. Proceedings of the American Society for Information Science and Technology, 49(1), 1-10.
  • Delgadillo, R. & Lynch, B. P. (1999). Future historians: their quest for information. College & Research Libraries, 60(3), 245-259.
  • DeVellis, R. F. (2003). Scale development: theory and applications. Thousand Oaks, CA: Sage Publications.
  • Erdelez, S. (2004). Investigation of information encountering in the controlled research environment. Information Processing & Management, 40(6), 1013-1025.
  • Foster, A. & Ford, N. (2003). Serendipity and information seeking: an empirical study. Journal of Documentation, 59(3), 321-340.
  • Hinkin, T. R. & Tracey, J. B. (1999). An analysis of variance approach to content validation. Organizational Research Methods, 2(2), 175-186.
  • Jordan, P. W., Thomas, B., Weerdmeester, B. A. & McClelland, A. L. (Eds.) (1996). Usability evaluation in industry. London: Taylor & Francis.
  • McBirnie, A. (2008). Seeking serendipity: the paradox of control. ASLIB Proceedings, 60(6), 600-618.
  • McCay-Peet, L. (2013). Investigating work-related serendipity, what influences it, and how it may be facilitated in digital environments. Unpublished doctoral dissertation, Dalhousie University, Halifax, Nova Scotia, Canada. Retrieved from http://dalspace.library.dal.ca/handle/10222/42727 (Archived by WebCite® at http://www.webcitation.org/6REq99Br9)
  • McCay-Peet, L. & Toms, E.G. (in press). Investigating serendipity: how it unfolds and what may influence it. Journal of the American Society for Information Science and Technology.
  • McCay-Peet, L. & Toms, E.G. (2010). The process of serendipity in knowledge work. In Proceedings of the Third Symposium on Information Interaction in Context (IIiX 2010) (pp. 377-382). New York, NY: ACM Press.
  • McCay-Peet, L. & Toms, E.G. (2011). Measuring the dimensions of serendipity in digital environments. Information Research, 16(3), paper 483. Retrieved from http://informationr.net/ir/16-3/paper483.html (Archived by WebCite® at http://www.webcitation.org/6RErtLYZA)
  • McKeen, W. (2006, March 26). The endangered joy of serendipity. St. Petersburg Times. Retrieved from http://www.sptimes.com/2006/03/26/news_pf/Perspective/The_endangered_joy_of.shtml (Archived by WebCite® at http://www.webcitation.org/6REsEZXus)
  • MacKenzie, S. B., Podsakoff, P. M. & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: integrating new and existing techniques. MIS Quarterly, 35(2), 293-334.
  • Makri, S. & Blandford, A. (2012). Coming across information serendipitously. Part 1: a process model. Journal of Documentation, 68(5), 684-705.
  • Martin, K. & Quan-Haase, A. (2013). Are e-books replacing print books? Tradition, serendipity, and opportunity in the adoption and use of e-books for historical research and teaching. Journal of the American Society for Information Science and Technology, 64(5), 1016–1028.
  • Maxwell, D., Woods, M., Makri, S., Bental, D., Kefalidou, G. & Sharples, S. (2012). Designing a semantic sketchbook to create opportunities for serendipity. In Proceedings of the 26th Annual BCS Interaction Specialist Group Conference on People and Computers. (pp. 357–362). Swinton, UK: British Computer Society.
  • Merton, R. K. & Barber, E. (2004). The travels and adventures of serendipity: a study in sociological semantics and the sociology of science. Princeton, N.J.: Princeton University Press.
  • O'Brien, H. L. & Toms, E. G. (2010). The development and evaluation of a survey to measure user engagement. Journal of the American Society for Information Science and Technology, 61(1), 50-69.
  • Pálsdóttir, Á. (2011). Opportunistic discovery of information by elderly Icelanders and their relatives. Information Research, 16(3), paper 485. Retrieved from http://InformationR.net/ir/16-3/paper485.html (Archived by WebCite® at http://www.webcitation.org/6REsrieju)
  • Pariser, E. (2011). The filter bubble: what the internet is hiding from you. New York, NY: Penguin Press.
  • Pettigrew, A. M. (1997). What is a processual analysis? Scandinavian Journal of Management, 13(4), 337-348.
  • Pettigrew, K. (1999). Waiting for chiropody: contextual results from an ethnographic study of the information behaviour among attendees at community clinics. Information Processing & Management, 35(6), 801-817.
  • Rubin, V. L., Burkell, J. & Quan-Haase, A. (2011). Facets of serendipity in everyday chance encounters: a grounded theory approach to blog analysis. Information Research, 16(3), paper 488. Retrieved from http://www.informationr.net/ir/16-3/paper488.html (Archived by WebCite® at http://www.webcitation.org/6REt0hOuE)
  • Sax, L. J., Gilmartin, S. K. & Bryant, A. N. (2003). Assessing response rates and nonresponse bias in web and paper surveys. Research in Higher Education, 44(4), 409–432.
  • Shani, G. & Gunawardana, A. (2011). Evaluating recommendation systems. In Ricci, F., Rokach, L., Shapira, B. & Kantor, P. B. (Eds.), Recommender systems handbook. (pp. 257-297). Boston, MA: Springer US.
  • Sun, X., Sharples, S. & Makri, S. (2011). A user-centred mobile diary study approach to understanding serendipity in information research. Information Research, 16(3), paper 492. Retrieved from http://www.informationr.net/ir/16-3/paper492.html (Archived by WebCite® at http://www.webcitation.org/6REtA7C2c)
  • Thudt, A., Hinrichs, U. & Carpendale, S. (2012). The bohemian bookshelf. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems - CHI ’12 (p. 1461). New York, NY: ACM Press.
  • Thurman, N. & Schifferes, S. (2012). The future of personalization at news websites. Journalism Studies, 13(5-6), 1-16. Retrieved from http://openaccess.city.ac.uk/1067/ (Archived by WebCite® at http://www.webcitation.org/6REtND0BD)
  • Toms, E. G. (1997). Browsing digital information: examining the ‘affordances’ in the interaction of user and text. Unpublished doctoral dissertation, University of Western Ontario, London, Ontario, Canada.
  • Yao, G., Wu, C-H. & Yang, C-T. (2008). Examining the content validity of the WHOQOL-BREF from respondents’ perspective by quantitative methods. Social Indicators Research, 85(3), 483-498.
  • Zuckerman, E. (2013). Rewire: digital cosmopolitans in the age of connection. New York, NY: W.W. Norton & Company.
How to cite this paper

McCay-Peet, L., Toms, E.G. & Kelloway, E.K. (2014). Development and assessment of the content validity of a scale to measure how well a digital environment facilitates serendipity. Information Research, 19(3), paper 630. Retrieved from http://InformationR.net/ir/19-3/paper630.html (Archived by WebCite® at http://www.webcitation.org/6Rm5wmRDM)
