Published quarterly by the University of Borås, Sweden

vol. 22 no. 1, March, 2017

Proceedings of the Ninth International Conference on Conceptions of Library and Information Science, Uppsala, Sweden, June 27-29, 2016

A rejoinder to Nicolaisen’s refutation of Hjørland’s relevance definition

Birger Hjørland

Introduction. Dr. Nicolaisen has claimed to refute the definition of relevance provided by the present author. This paper responds to Nicolaisen’s paper.
Method. Hjørland’s original definition of relevance is exemplified and each of Nicolaisen’s arguments is examined.
Results. All of Nicolaisen’s arguments fail to consider the difference between defining the concept of relevance and measuring instances of it. His arguments are, therefore, misdirected and irrelevant. Furthermore, his 'refutation' resembles the well-known mythical 'proof' that bumblebees cannot fly.
Conclusion. It is concluded that the relevance definition under discussion is still valid and that it is the most fruitful one suggested so far.


Scholarly debates and critiques are necessary for the advancement of knowledge and should always be welcomed. It is more important to contribute to progress and the elimination of errors than to protect researchers from criticism. It cannot be overstated that scientific and scholarly progress is based upon arguments in what Charles Sanders Peirce called a 'community of inquirers' (Atkin, 2010). It should be clear from my bibliography that criticism and debate are genres to which I give high priority in my writings. Therefore, I also consider it fruitful that Dr. Nicolaisen has chosen to examine and discuss the definition of relevance originally published in Hjørland and Sejer Christensen (2002). To receive serious critical discussion of one’s work should be considered a gift, and it is an important matter that appears much too seldom in our field. I will add that the epistemological problems addressed by Dr. Nicolaisen seem important (and few people have emphasised the importance of the theory of knowledge in relation to the problems of information science more than I have).

However, this paper will argue that Nicolaisen’s arguments against my relevance definition are invalid and that the definition is, therefore, not refuted. In addition, by not considering the aims of the definition (e.g., its aim of moving the understanding of relevance from the psychological, cognitive and individual understanding to the sphere of social epistemology), Nicolaisen’s refutation may well support the less fruitful understandings that have so far dominated the conception of relevance.

The definition explained and exemplified

The definition, which I have suggested, and which is the object of Nicolaisen’s criticism, is this:

Something (A) is relevant to a task (T) if it increases the likelihood of accomplishing the goal (G), which is implied by T. (Hjørland & Sejer Christensen, 2002, p. 964).

In Hjørland (2010, p. 227), I used scurvy as an example. I will return to that, but first I want to introduce two other examples with more adequate empirical data. The first example is a contemporary controversy in medicine concerning mammography. Is it relevant for women aged 50+ to have regular screening? The way that this is decided in evidence-based medicine (EBM) is usually by considering randomised controlled clinical trials (RCCTs). I shall return to some of the methodological and epistemological problems below. So far, I will just say that, according to medical science, the criterion for whether mammography is relevant or not is whether it reduces the likelihood or probability that women receiving a mammography will die of cancer (based upon survival statistics, e.g., within a five-year period after the diagnosis). This example fits very nicely with my definition: something (here, mammography) is relevant to a task (the early detection of breast cancer) if it increases the likelihood of accomplishing the goal (here, preventing death by cancer).
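The form of the definition can be made explicit in a small computational sketch (the probabilities below are invented placeholders, not empirical figures; only the structure of the comparison is the point):

```python
# Operational reading of the definition: A is relevant to task T if it
# increases the likelihood of accomplishing the goal G implied by T.
# All probabilities here are hypothetical placeholders for illustration.

def is_relevant(p_goal_given_a: float, p_goal_without_a: float) -> bool:
    """A is relevant iff it increases the likelihood of reaching G."""
    return p_goal_given_a > p_goal_without_a

# Mammography example in this form (invented numbers): probability of
# surviving breast cancer with vs. without regular screening.
print(is_relevant(0.90, 0.85))  # True: screening would be relevant
print(is_relevant(0.85, 0.85))  # False: no increase, hence not relevant
```

Note that the definition itself says nothing about how the two probabilities are obtained; that is precisely the point at issue in the following sections.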

The other example concerns the former minister and Chief Burgomaster of Copenhagen, Ritt Bjerregaard, who has been very open about her cancer illness. After an operation on her rectum, she was offered preventive chemotherapy, which she refused with the following argument:

'Does that mean that if I agree to this therapy, then I can be sure that the cancer will not return?', I asked the doctor. 'No, it does not', was the answer. 'There is a 20% risk of a new cancer and, if the preventive chemotherapy is followed correctly, that risk can be reduced by 5%.' I went home and thought about that. I would have chemotherapy for half a year and the stoma could not be returned to normal until a month after the therapy was finished. I could imagine that I would have to be a patient for the rest of 2015, and I would not agree to that. I would have some good times in the years that I have left.' (Madsen, 2016; translated by BH)

In this example, A is the chemotherapy, the task T is to cure the cancer, and G is the goal to prevent the spreading of the cancer. In this case, there are two different and conflicting views of the goal: G1: simply reducing the probability of the cancer recurring and G2: to obtain a reasonable quality of life.

We see that, according to Bjerregaard, the probability of getting a new cancer was assumed by the doctors to be reduced by 5% with the chemotherapy. If the goal were survival alone, this would have been a relevant cure. But because the cure reduced her quality of life, Bjerregaard did not consider it relevant in her case. We see that if we accept the information upon which Bjerregaard based her decision, then the relevance formula fits this example very well. If the goal can be clearly decided (e.g., if the goal is survival at all costs), then the relationship between the goal and the therapy is a logical implication (and the relevance is logically deduced). If, on the other hand, the goal is unclear (how much should quality of life count in relation to a reduced risk?), then the goal has to be clarified first (which is more of a psychological problem).

In these two examples, I do not consider how medical science has measured the reduced risk of dying from cancer (I will return to that, although I claim that it has no bearing on my definition). All I am saying is that such estimations or measurements are made all the time (in medicine and elsewhere) and that they are used in determining relevant actions. There are always methodological and epistemological problems associated with doing research, but although researchers may disagree about whether mammography or preventive chemotherapy reduces the risk of dying, they nevertheless agree that this is the right criterion. The methodological and epistemological questions involved in determining probabilities do not invalidate my definition of relevance.

Examining Dr. Nicolaisen’s arguments

Nicolaisen’s arguments against my definition all fail because he has confused my definition of what relevance means with the methodological and epistemological problems of measuring and interpreting probabilities. Already in his introduction, Nicolaisen makes the false statement that I have claimed to have solved the problems related to the measurement of relevance (without reference to where I am supposed to have said this). It is simply untrue that I have claimed to solve the problems concerning the measurement of relevance. I claim only to have provided a fruitful definition of relevance.

Nicolaisen introduces two options: the so-called logical theory and the subjective theory. He finds that my definition fails irrespective of which of these theories is chosen (although he does find that my relevance definition is supported by the subjective theory in cases of absolute consensus, which, however, is a very rare phenomenon). Nicolaisen has not considered more than these two options as possible. My first comment is that probability estimations are common in much of science and that our understanding of them increased greatly in the 20th century. It is unclear how the two options presented by Nicolaisen are related to actual scientific methodology.

If we look at the mammography case, according to Goodman (2002) there is no agreement or consensus on whether mammography reduces the risk of dying from breast cancer. Two systematic reviews at that time produced opposing conclusions (Olsen and Gøtzsche, 2001; Humphrey et al., 2002).

The first thing to observe is that, although the two studies disagree about whether mammography is effective, they nonetheless agree about what counts as a relevant action criterion, thus supporting my definition of relevance. Goodman discussed these opposing findings and wrote:

But a closer look at this controversy... shows that its focus has shifted in a way that poses a dilemma, not only for women and their doctors, but for evidence-based medicine itself. The debate in the 1990s was mainly about the advisability of screening for women younger than 50 years of age; for older women, the benefits of mammography were thought to be certain. (Goodman, 2002).

I shall not review further studies about mammography here (Olsen and Gøtzsche have since modified their view, but this does not change the principles being discussed). I will just consider Goodman’s point of view on the disagreement between the two systematic reviews about mammography. According to Laudan (1984), when scientists disagree on a factual level (here, on the effects of mammography), they try to reach agreement by considering the methodological level (here, how the reduced risk is measured). (Disagreements on a methodological level may in turn be caused by disagreements on an axiological level, but such disagreements about values cannot be solved by an appeal to a higher level.) What Goodman (2002) said was that Olsen and Gøtzsche (2001) brought forth a disagreement not only on a factual level, but also on a methodological level (problems related to the axiological level were considered in the Bjerregaard case above).

In medical science, there is agreement about the meaning of the sentence 'mammography is a relevant precaution', and therefore about the meaning of relevance, which corresponds to my definition of relevance. Both parties in the controversy agree that mammography is relevant if it reduces the probability of dying from breast cancer. What is important for information science is how we index and retrieve information so that relevant studies may be distinguished from non-relevant ones. Hjørland (2011) criticised EBM as being 'too narrow, too formalist, and too mechanical an approach, on which to base scientific and scholarly documentation'. This corresponds to Sadegh-Zadeh’s (2015, p. 386-389) criticism, as well as to Goodman’s (2002) view that studies cannot be selected by formal criteria alone. Different views (e.g., Olsen and Gøtzsche, 2001, and Humphrey et al., 2002) used different subjective criteria for which studies to include in their overall evaluation. Scientific methodology still has open methodological problems in this respect (see Parascandola, 2004, 2011), but if medical science is able to calculate the probability that a certain step will decrease an illness, then that step is relevant in order to combat that disease. Such calculations are made all the time, and they are based neither on the subjective theory nor on purely logical deductions (see the next section). There is, thus, a serious gap between Nicolaisen’s philosophical analyses and actual scientific methodology.

Nicolaisen’s (2016) reference to probabilistic logic is mostly about inductive logic. It is concluded that the probability of any universal statement [based on induction from a limited number of observations] is zero. It is not clear from that section of his paper how this is relevant to the discussion. The only possibility that I can see is that Nicolaisen is arguing about the way relevance is determined (for example, in clinical trials and in EBM). Indeed, in the next section, Nicolaisen wrote:

But without the full insight of Laplace’s Demon, how do we then warrant the assumption that ascorbic acid increases the probability of curing scurvy? Arguing that, we know from experience that many scurvy patients have been cured by this treatment, and thus, that this experience raises the probability of treating the next scurvy patient by the same treatment, commits the fallacy of making universal generalizations from a limited number of observations. Actually, if we reason like this, according to Hjørland’s relevance definition, ascorbic acid should be seen as a non-relevant treatment! Why? Because, as we have learned from Chalmers in the preceding section, that by dividing a finite number of observations with an infinite number, equals zero. According to Hjørland’s definition of relevance, something (A) is relevant to a task (T), if it increases the probability of accomplishing the goal (G), which is implied by T. It follows logically, that the probability of the treatment would be zero, the probability of treatment is not increased, and that the treatment is thus, not relevant.

This quotation from Nicolaisen reminds me of the old story of those scientists who are supposed to have claimed that bumblebees cannot fly.

Supposedly someone did a back of an envelope calculation, taking the weight of a bumblebee and its wings area into account, and worked out that if it only flies at a couple of metres per second, the wings wouldn’t produce enough lift to hold the bee up (Institute of Physics, 2009).

But of course science is able to explain why bumblebees can fly, and that Vitamin C has a high probability of curing scurvy. The first trial to indicate Vitamin C’s effect on scurvy was Lind (1753), but this study did not fulfil today’s methodological standards. I have not identified an updated systematic review of scurvy treatment, and I have not made much of an effort to do so, as I do not consider it important for the argument. In principle, this can be done in the same way as for any other medical intervention, and the probability of a cure can be assessed in the same way. I am not sure whether Nicolaisen intended to criticise the methodology of RCCTs as committing the fallacy of making universal generalisations from a limited number of observations; this would indeed be an attack on the queen of medical methodologies and should rather be addressed to that particular community.

Although RCCTs and EBM are not without problems (see Hjørland, 2011; Goodman, 1999a, 1999b, 2002; Sadegh-Zadeh, 2015, p. 386-389), the induction problem as described by Nicolaisen is not one of them. The best evidence is mostly considered to be information from RCCTs:

In order for the doctor and patient to determine the expected values of treatment alternatives and to make a therapeutic decision, they must know what the therapeutic efficacy of those treatments are. Probabilistic statements of the form 'in a patient with acute appendicitis, the probability of a cure on the condition that she receives an appendectomy, is 0.98' are simple examples of the knowledge required. Therapeutic efficacy is tested in so-called randomized, controlled clinical trials, or RCCTs for short. An RCCT is a genuine, scientific experiment in the proper sense of this term. It is a well-designed investigation consisting of specified intervention in, and manipulation of, some condition to determine the effect of the intervention and manipulation. More specifically, it constitutes a systematic, prospective study of the efficacy of an intervention in human affairs designed to prevent, cure, or ameliorate a malady. (Sadegh-Zadeh, 2015, p. 377; italics in original)

An example:

From Sadegh-Zadeh (2015, p. 380, Table 9) “This 2 × 2 contingency table demonstrates the results of an RCCT in 250 patients with peptic ulcer disease. Patients in the treatment group received antibiotics (metronidazole, amoxycillin, and clarithromycin), while patients in the control group didn’t receive any therapy”.

(Columns: cured, not cured, group total)
Treatment A               230     20    250
No treatment               30    220    250

From Sadegh-Zadeh (2015, p. 382, Table 10); when a placebo was the alternative to no treatment, the following results occurred:

Treatment B (placebo)      80    170    250

The relationship to the probability or likelihood was established in this way:

According to this table [9], the proportion of the cured in the treatment group is 230:250 = 0.92 and in the control group it is 30:250 = 0.12. If we consider these numbers as estimates of probabilities in the long run, we obtain the following conditional probabilities, where X is the population of patients with peptic ulcer disease; A is the application of the therapeuticum A; and C means ‘cured’:
P(C|X ∩ A) = 0.92
P(C|X) = 0.12
Obviously, it is more likely for a patient to be cured by treatment A than without it. (Sadegh-Zadeh, 2015, p. 380-381)

Thus, medical science does not estimate probabilities by making universal claims from a pool of observations, but by randomly and double-blindly dividing patients into experimental and control groups, treating only the experimental groups, and then, for example, comparing the statistical patterns of five-year survival in the two groups.
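Sadegh-Zadeh’s conditional probabilities can be reproduced directly from the figures of Table 9 quoted above (a trivial sketch, using only the numbers already given in the quotation):

```python
# Estimating P(C | X ∩ A) and P(C | X) from the RCCT contingency table
# (Sadegh-Zadeh, 2015, Table 9): 250 patients in each group.
cured_treated, total_treated = 230, 250   # treatment group
cured_control, total_control = 30, 250    # control group (no treatment)

p_cure_given_treatment = cured_treated / total_treated      # P(C | X ∩ A)
p_cure_given_no_treatment = cured_control / total_control   # P(C | X)

print(p_cure_given_treatment)     # 0.92
print(p_cure_given_no_treatment)  # 0.12
```

No universal generalisation over an infinite class is involved here: the probabilities are estimated as proportions in finite, randomly assigned groups.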

There are some specific problems with scurvy. In Hjørland (2010), scurvy was chosen as an example in order to convince those readers who belong to the dominant cognitive school of relevance research. I considered this a case of established knowledge, and thus it was easy to distinguish from relevance based on examining the user’s individual beliefs. Now, facing arguments based on probabilistic philosophy, I have introduced the mammography and Bjerregaard cases, because in these examples empirical findings and methodological discussions from medical research are available. Another problem with the scurvy disease is that the blood level of Vitamin C is sometimes used in the diagnosis. There may, therefore, be a circular relationship between the definition of the disease and its cure (in the case of full circularity, an increase in a patient’s level of Vitamin C in the blood will, by definition, cure the patient; the probability is then one, not zero). A third problem is that the mechanisms behind scurvy are probably well understood today. (Today, the illness is seldom seen in developed countries, except when people suffer from malnutrition. Perhaps, for these two reasons, clinical trials are considered relatively unnecessary?) There is a difference between basing knowledge on clinical trials and basing it on research into fundamental mechanisms. This last difference implies a new challenge: how do we evaluate the probabilities of our actions when they are based on a deep understanding of underlying mechanisms, built upon many different experiences accumulated over long periods of time? How do we evaluate established knowledge? Perhaps the question is not relevant, because we act on our knowledge until we have reasons to doubt it? (Medical science considered mammography relevant until it was questioned by Gøtzsche and others, i.e., a case related to Popperian falsificationism.)

Nicolaisen has addressed the differences in the meanings of the words likelihood and probability, and found that my use of the term likelihood was wrong. Perhaps he is right (this is then the exception to the rule that all of his arguments are wrong). I consider this issue open, because it may also form part of the question about which methodological approach is chosen. In Bayesian methodology, the term likelihood ratio is used (and there are good arguments in favour of this methodology; see, for example, Goodman, 1999b). Likelihood is also used differently from probability by those researchers following Charles Sanders Peirce’s abductive approach, as distinguished from statistical procedures based on objective randomisation (see Wikipedia, 2016, 'likelihood function'). Finally, likelihood is also used in the literature in the way that I have used it, for example in Sadegh-Zadeh (2015, p. 380-381). Because my definition of relevance is open with regard to how likelihood or probability is determined, I find it best to consider these two terms interchangeable until this issue has been settled.

Nicolaisen has made a strong distinction between probabilistic causation on the one hand and deterministic causation on the other, and claims that I have confused this distinction. The view of determinism and causation developed greatly in the 20th century, in almost all sciences. An important issue is what kinds of experiments and investigations are able to uncover causal relations (see Parascandola, 2004, 2011). The following definition is useful:

[A] causative factor may be thought of as any condition which, in a given situation, increases the probability that a specified event will occur (Hammond, 1955, p. 174).

Vitamin C, as such, is a causative factor in relation to scurvy. In most sciences, the old-fashioned belief in one sufficient and necessary cause, a one-to-one deterministic relationship (and thus in a deterministic system in which the cure is guaranteed with 100% certainty), has been given up. Even treating scurvy with Vitamin C is thus a question of probabilities (the probability may be very close to 1, but it must necessarily be smaller than 1 – that is, if we disregard the possibility of a circular definition of scurvy). Our knowledge about scurvy and Vitamin C is sufficient to warrant my example, without accepting an old-fashioned determinism.

The generalisability of the relevance definition: Nicolaisen’s conclusion is that 'contrary to Hjørland’s claim that his relevance equation applies to anything (including documents, ideas, meanings, texts, theories, and things), it is found at best to have very limited generalisability'. The main error in Nicolaisen’s conclusion is that it confuses two kinds of generalisability:

  1. The generalisability that concerns consensus or other measurements.
  2. The generalisability that concerns the referents of the concept of relevance (e.g., documents, ideas, meanings, texts, theories, and things).

All of Nicolaisen’s arguments are about the problems of measuring probabilities, due to issues such as the problem of induction and the degree of consensus. He has never considered whether, for example, the relevance of medical drugs and the relevance of medical articles need two different concepts of relevance, although this would have been his strongest argument against my definition of relevance. He wrote, for example:

In the abstract of the same paper, it is stated that “tool” should be “understood in the widest possible sense, including ideas, meanings, theories and documents” (Hjørland and Sejer Christensen, 2002, p. 960). Admittedly, this is a very wide ranging claim


Perhaps the most obvious problem with Hjørland’s relevance equation is his claim that it applies to anything (including documents, ideas, meanings, texts, theories and things).

Based on this rather heavy attack, it seems strange that Nicolaisen never considered my arguments in favour of this generalisation of the concept, and that he never provided any argument for why different kinds of things need different definitions of relevance. To use his own criteria: if there is full consensus that Vitamin C cures scurvy, then Vitamin C is relevant. If there is full consensus that some experiments are important because they increase the probability of solving a given problem, then those experiments are relevant.

Nicolaisen has attacked my use of the term logic in relation to the definition of relevance. In Hjørland (2010, p. 235, note 37), I expressed my agreement with Patrick Wilson’s statement:

As with logical and evidential relevance, the recognition that an item is relevant is not an automatic or a mechanical procedure and it may even be beyond my abilities. That an item is situationally relevant is a logical fact, not a psychological fact (Wilson, 1973, p. 464)

At the time, I wrote in the endnote that logic is not the only alternative to a psychological explanation, but I decided not to add further to the already very extensive set of notes. The alternative to psychological explanations is, of course, domain knowledge or theory (e.g., medical knowledge and theory). Whether an A is relevant is evaluated on the basis of knowledge in the domain. If we know that something works for our purpose, then it is relevant – and if it does not work, it is not relevant. Given a theory (and disregarding unclear or conflicting goals), the relationship between the theory and relevance is a matter of deductive inference.

Above, we have encountered two theories: a) mammography reduces the probability of dying from breast cancer; and b) mammography does not reduce that probability.

Given theory a), mammography is relevant; given theory b), mammography is not relevant. New theories may arise, or the established ones may be confirmed or refuted. Nevertheless, logic is about the relationship between our theories and our relevance assessments. Given that our knowledge is fallible and that our theories may change, anything may, strictly speaking, be considered potentially probable and relevant; but in using the relevance definition, the probability should, of course, be estimated on the best available knowledge. If we do not have good reasons to believe that A is a probable solution, then we do not have good reasons to believe that A is relevant. To repeat: we may use likelihood or probability in the definition without specifying how it should be measured, but each measurement or estimation should, of course, be subject to further investigation. Moreover, every relevance statement is fallible, as are other kinds of statements, according to the principle of fallibilism.

Let us now consider Nicolaisen’s lottery ticket/database search example. First, there is a logical error in this example: Nicolaisen has mixed up the relevance of documents in the database with the relevance of search strategies for finding the relevant one(s). Of course, if the goal is to draw out the one relevant document, then trying a random document search is certainly better than not trying, and it is thus relevant (but that does not make all documents in the database relevant, as Nicolaisen wrote). In addition, Nicolaisen seems to have forgotten that the goal of retrieving documents from databases is to obtain high recall with satisfactory precision, not just to increase recall. The drawings suggested by Nicolaisen also fail because they are only relevant if we disregard the issue of precision.
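The point about recall and precision can be illustrated with their standard definitions (a minimal sketch; the database size is a hypothetical figure): retrieving every document trivially maximises recall while driving precision toward zero.

```python
def recall(relevant_retrieved: int, relevant_total: int) -> float:
    """Share of all relevant documents that were retrieved."""
    return relevant_retrieved / relevant_total

def precision(relevant_retrieved: int, retrieved_total: int) -> float:
    """Share of retrieved documents that are relevant."""
    return relevant_retrieved / retrieved_total

# One relevant document in a database of 1,000,000 (hypothetical size).
# Retrieving the whole database gives perfect recall but useless precision:
print(recall(1, 1))             # 1.0
print(precision(1, 1_000_000))  # 1e-06
```

A search strategy that ignores precision can thus always 'win' on recall, which is why recall alone cannot ground relevance judgments about individual documents.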

What if a probability is extremely low? What if the probability of getting a new cancer in Bjerregaard’s case was reduced not by 5%, but by 0.5%, or even much less? Does the relevance formula not imply that everything is relevant, because anything has at least a hypothetical (but extremely small) possibility of being relevant? Could even contra-evidential actions, such as eating poison, turn out to be relevant, because such actions possess an unknown, although extremely low, probability of accomplishing G? Again, if we do not have good reasons to believe that A has a probable effect, then we do not have good reasons to believe that A is relevant. Another consideration is that in science and social science there are conventions used to cope with the problem of statistical significance. Significance does not say what the probability of a hypothesis is; it just says what the probability is that a specific result is due to randomness (i.e., it ensures a low chance of incorrectly claiming an effect, and thus the relevance of a given intervention). Nonetheless, if the probability of the effect of a given intervention is extremely low, it is unlikely to be reported in the scientific literature. The effect may still exist, of course, thus implying relevance according to my relevance formula. This does not make everything relevant; it just makes the relevance of certain interventions undetermined, and when we act, we should, of course, act upon the best knowledge.
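The significance convention mentioned here can be illustrated with a two-proportion test applied to the Table 9 figures quoted earlier (a sketch using the pooled normal approximation and only the standard library; a real analysis would of course use dedicated statistical software):

```python
import math

def two_proportion_p_value(c1: int, n1: int, c2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions,
    using the pooled normal approximation (adequate for large groups)."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Table 9 figures: 230/250 cured with treatment vs. 30/250 without.
# The difference lies far beyond any conventional significance threshold.
print(two_proportion_p_value(230, 250, 30, 250) < 0.001)  # True
```

The p-value bounds the chance of wrongly claiming an effect; it says nothing about how large or how practically important the effect is, which is exactly the distinction drawn in the paragraph above.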

A standard view in the Danish health system seems to be (according to the information provided by Ritt Bjerregaard, as quoted above) that a treatment which reduces the risk of a returning cancer by 5% is considered relevant. There are some issues here:

  1. Would the health authorities also consider a treatment with a lower risk reduction as relevant (if yes, how much lower?)? How are such decisions made? The dominant goal is to reduce the death rates displayed by official statistics and to try to reach that goal by applying a range of procedures (each of which may be questioned and further examined, as, for example, Gøtzsche, 2013 and 2015, has done). Sometimes large-scale attempts are made to reduce cancer deaths, for example by doubling the number of people being operated upon. If this does not work (as reflected in the death statistics), then this line of intervention is not increased further, even if there might be a hypothetical possibility of a reduction.
  2. On what scientific basis is the 5% risk reduction calculated? We have seen in the scholarly literature that there may be conflicting evaluations of the same evidence (the same statistics and the same published research have been interpreted differently). The disagreements have been so strong that evidence-based practice itself has been declared to be in a crisis (see Goodman, 2002). This confirms the importance of philosophical and critical positions in the evaluation of scientific claims. Nonetheless, we have to make our decisions on the basis of our current knowledge.
  3. Do the patients share the opinions of the health authorities? Some patients seem to go very far to get treatments which have an extremely low probability of a cure, according to official medical authorities.
  4. Can we trust the medical establishment, or should we, for example, go for alternative medicine?

These questions are important, but they do not invalidate my understanding and my definition of the concept of relevance. To repeat: to define relevance is very different from deciding whether something is relevant. To claim the implication that anything is relevant, or that nothing is relevant, is to disregard the given knowledge and to play with purely speculative ideas that do not take their point of departure in actual research, theories and practices. This is not a constructive approach.

The relevance definition in a wider theoretical perspective

The development of scholarly theories involves the development of basic concepts; in other words, theories and concepts co-evolve. Chalmers (2013, p. 98) wrote: 'Newton could not define mass or force in terms of previously available concepts. It was necessary for him to transcend the limits of the old conceptual framework by developing a new one'. Following Chalmers, I propose that the scientific definitions of terms like information and relevance depend upon the roles that we give them in our theories – in other words, upon the type of methodological work they have to do for us.

By not addressing this problem, Nicolaisen’s arguments tend to refute the domain-analytic view as a whole. I do not believe that this is what Nicolaisen intended. But if not, then it cannot be that all aspects of the relevance definition are refuted, only some of them. Alternatively, Nicolaisen has to show that my definition of relevance is in fundamental conflict with my own domain-analytic understanding. This way of isolating a single concept from its theoretical context is, therefore, problematic.


Probability statements are ubiquitous in science and in everyday life. Nicolaisen (2016) has rightly pointed out that there are different probability interpretations and that these are often used in unclear and problematical ways in the scholarly literature (see also Upshur, 2013). My claim is, however, that the definition of relevance (and the application of the concept of probability in the definition) does not depend on one or another way to determine or interpret probability.

Nicolaisen’s refutation of my definition of relevance is, therefore, wrong. It also represents an unconstructive approach, in the form of an all-or-nothing evaluation. A more useful criticism would be to analyse the attributes of the definition one by one: (a) its realism, (b) its connection between tasks, goals and relevance, and (c) its probabilistic assumption (and further assumptions). Nicolaisen’s criticism has concentrated on the probabilistic aspect, but he has not considered whether this part could be removed or replaced by something else. He has never confronted my definition of relevance with other definitions. Nicolaisen’s refutation may, therefore, well support the less fruitful cognitive understandings that have so far dominated this concept. A total refutation thus seems harmful. Nicolaisen may be within his rights in not committing himself to suggest alternatives to what he criticises. However, relevant criticism of conceptions mostly opens the way for alternatives. This is not the case in Nicolaisen’s criticism. That might be yet another indication that he is on the wrong track.

What I claim is that I have contributed to clarifying the definition of relevance. Such clarity is the purpose of all theoretical work. In addition, I have always considered my understanding of relevance as part of my general approach to information science, in which psychological issues are downplayed, while scholarly and epistemological issues are upgraded. In this endeavour, I have considered, and still consider, Nicolaisen one of the most important allies. Hopefully, the present controversy will contribute to increased interest in understanding relevance within an epistemological perspective.

About the author

Birger Hjørland holds an MA in psychology and a PhD in library and information science. He has been Professor of knowledge organization at the Royal School of Library and Information Science in Copenhagen since 2001, and was at the University College in Borås in 2000-2001. He was a research librarian at the Royal Library in Copenhagen 1978-1990 and taught information science at the Department of Mathematical and Applied Linguistics at the University of Copenhagen 1983-1986. He is chair of ISKO’s Scientific Advisory Council and a member of the editorial boards of Knowledge Organization, Journal of the Association for Information Science and Technology and Journal of Documentation. His h-index is 38 in Google Scholar and 21 in Web of Science. He can be contacted at: birger.hjorland@hum.ku.dk

