- Open Access
Marketing data: Has the rise of impact factor led to the fall of objective language in the scientific article?
© Fraser and Martin. 2009
- Received: 31 January 2009
- Accepted: 11 May 2009
- Published: 11 May 2009
The language of science should be objective and detached and should place data in the appropriate context. The aim of this commentary was to explore the notion that recent trends in the use of language have led to a loss of objectivity in the presentation of scientific data. The relationship between value-laden vocabulary and impact factor among fundamental biomedical research and clinical journals has been explored. It appears that fundamental research journals with high impact factors have experienced a rise in value-laden terms over the past 25 years.
Keywords: Impact Factor, Scientific Article, Optical Character Recognition, Impact Factor Journal, High Impact Factor
A recent editorial addressing the care which must be taken in the reporting of clinical results concluded: "The numbers and not their interpretation, must speak for themselves". This statement succinctly expresses that which is often taken for granted in scientific research articles: a commitment to the standard of objectivity. Insofar as the scientific article is the principal forum for the dissemination of new knowledge, it must reflect a detached and objective set of arguments supported by data and leading to reasonable conclusions. The role of the author is to record, evaluate and situate new evidence within the context of the existing scientific literature. It is generally agreed that subjective interpretation of results ought to be minimal and tempered with discretion. Yet we have noted adjectives imposing subjective value on an otherwise neutral knowledge claim appearing with increasing frequency in the scientific literature. Readers of scientific articles currently encounter frequent claims of "crucial", "critical" or "unique" events, as well as "important" or "original" discoveries. The hypothesis that the language of science has changed to include words which might bias the reader's interpretation of the research article prompted us to investigate what appeared to be a shift in the use of language in scientific articles.
We evaluated this hypothesis by examining twelve established fundamental biomedical and clinical research journals over a twenty-year period for adjectives which modified an otherwise neutral knowledge claim. Our findings indicate an increase in value-laden language in the scientific article from 1985 to 2005. Both high and low impact fundamental research journals exhibit an increase in biased word choice over time, the trend being most marked in high-impact biomedical journals devoted to fundamental research. Comparatively, clinical journals showed a low incidence of biased words, and this characteristic remained consistent over the period under investigation. We suggest that the increase in the incidence of biased language may provide a means through which to view broader changes occurring within the scientific community. Publication practice has evolved over the past twenty years as authors face increasing pressure to publish in high impact journals. While a definitive causal link between current publication pressure and biased word choice cannot be established by our data, we believe that an analysis such as ours raises some pertinent questions about publication practice as it exists today.
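As an illustration only (not the authors' actual protocol), counting value-laden adjectives in article text is straightforward to automate; the word list below and the normalization per 1,000 words are assumptions for the example:

```python
import re

# Illustrative subset of value-laden adjectives; the study's actual
# vocabulary is not reproduced here.
BIASED_WORDS = {"crucial", "critical", "unique", "important", "original", "novel"}

def biased_word_rate(text, per=1000):
    """Return occurrences of value-laden adjectives per `per` words of text."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BIASED_WORDS)
    return hits * per / len(words)

sample = "This novel finding is a crucial and unique contribution."
print(biased_word_rate(sample))  # 3 hits in 9 words, about 333 per 1,000
```

Rates per article (rather than raw counts) allow comparison across journals whose articles differ in length.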
[Journals examined included: New England Journal of Medicine; Journal of Clinical Investigation; Journal of Experimental Medicine; British Medical Journal; Canadian Medical Association Journal; Journal of Immunology; Journal of Pharmacology; American Journal of Physiology; European Journal of Clinical Investigation]
The increasing incidence of adjectives expressing subjective judgments undermines what has traditionally been accepted as the objective nature of the scientific paper. Our argument therefore assumes that objectivity is an integral and necessary component in the quest for scientific progress. Most would tacitly acknowledge that objectivity occupies a unique position within scientific disciplines. In his paper "The Scope and Limits of Scientific Objectivity", Joseph F. Hanna states: "It is generally agreed that one of the distinguishing virtues of science is its objectivity. The scope of science is the objective world and the limits of science are determined by the limits of the objective methods of formal and empirical research". Insofar as the scientific paper is the primary vehicle for new and private scientific findings to enter into the realm of public discourse, it should also demonstrate a commitment to the principles and standards of objectivity. We would argue that the paper may take a subjective stance insofar as it argues for the relevance of the observations it posits, as well as for the implications the observations will have on the established body of knowledge, but these contextual arguments should be minimal and tempered with discretion. The strength and import of observations and conclusions should be evident in and of themselves, with minimal positioning on the part of the authors.
The demonstrable increase in the use of adjectives with the potential to bias the reader may indicate that the interpretation of results has come to replace what has traditionally been a more objective stance. This shift from the more conservative representation of data towards its somewhat hyperbolic interpretation raises important questions about the evolution of the scientific article, and must be examined in conjunction with changing attitudes within the scientific community regarding the writing and submission of articles, the mounting impact of the impact factor, and the pressures currently facing authors seeking publication.
The Rising Impact of the Impact Factor
Changing attitudes towards scientific publication must be examined in tandem with the changing role of the impact factor in assessing the merits of a body of work and the "impact" this has had on the scientific community. Briefly, the impact factor of a journal reflects the number of citations appearing in indexed publications in a given year to articles published in that journal in the previous two years, divided by the number of citable papers published within those two years. However, the original purpose of the database developed by the Institute for Scientific Information and used for citation analysis has been somewhat forgotten, and the impact factor has taken on a life of its own. Several detailed critiques of the impact factor have been published, highlighting shortcomings such as its limitations in comparisons of journals across different research fields. In addition, even within a discipline the impact factor may not appropriately measure the quality of the journal: it is sensitive, for example, to whether an area of research is young and developing, and therefore likely to generate recent citations, or more mature.
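The two-year impact factor described above is simple arithmetic; a minimal sketch, with hypothetical citation and publication counts, might look like:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received in a given year
    to items the journal published in the previous two years, divided by
    the number of citable items published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: 3,500 citations in 2005 to articles published in
# 2003-2004, which together comprised 250 citable items.
print(impact_factor(3500, 250))  # 14.0
```

Note that the numerator counts citations to all content, while the denominator counts only "citable" items (typically research articles and reviews), one of the asymmetries critics have highlighted.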
Although the merit of the impact factor remains the subject of intense debate, its current influence on scientific papers and publication is not. The impact factor has extended its reach to be included in the evaluation of academic and medical institutions, as well as in the evaluation of researchers for tenure and promotion and the awarding of grants. The latter often hinges not only on the number of publications and the quality of the research but also on the impact factor of the journal. In 2002 a Nature News feature noted: "...the implicit use of journal impact factors by committees determining promotions and appointments is endemic". Similarly, a 1997 British Medical Journal article claimed: "The increasing awareness of journal impact factors and the possibility of their use in evaluation are already changing scientists' publication behaviors towards publishing in journals of maximum impact". Moreover, the pressure currently facing researchers to publish in high impact journals is in stark contrast to publication behavior as recently as 25 years ago. An investigation undertaken in 1984 into which factors influenced scientists' selection of journals for publication concluded: "... that journals were primarily selected on the basis of the audiences they reach, rather than the rewards they confer, and the reward seeking model of selection behavior found little or no support". It is interesting to note that the twenty years over which our data demonstrate an increase in biased language correspond to a period in which scientific authors began to change their behavior with regard to publication. We suggest that the emergence of a new trend, in which a reward-seeking model (high impact factor) begins to supersede target audience as the primary motivation in the selection of journals, should not pass unnoticed.
Scientists' response to the barriers to publication
The status of scientific journals is measured by the impact factor, and journal editors have adopted strategies to enhance the impact factor, e.g. by publishing review articles, which tend to be cited frequently. Editorial evaluation of articles, and their potential acceptance or rejection on the basis of priority, reflects interest to the readership and not necessarily the quality of the science. Rejection of an article for being low priority for the journal is often not reflected in the reviews provided to the authors. A judgment of low priority is a subjective opinion and as such is not open to debate. How "hot" a topic is can be of critical importance to its chances of publication. This trend, when examined in conjunction with the increased use of biased words, raises some fundamental questions. Does a reward-seeking model of publication, as reflected in the current desire to publish in high impact journals, influence the use of language in scientific manuscripts? For instance, is it possible that authors have discovered that an effective strategy to counter the failure of reviewers to be excited about an article is to create bias through language that exaggerates the importance of the findings? Or is it merely that language exists in a state of flux, and any changes in style or vocabulary simply reflect time-related alterations in writing? Finally, perhaps the biased words are not so much biased as emphatic, though necessary, descriptors of the work being presented?
At first glance it seems plausible to state that the words under investigation are not reflective of bias, but are rather necessary descriptive terms of what is, in fact, a new and important knowledge claim. A detailed discussion as to whether manuscripts in high impact factor journals are truly more "important" or "novel" than those in low impact journals is beyond the scope of this paper and may be a subject for future investigation. However, we would argue that it is remarkable that the use of biased words has shown an increase over time in both low and high impact journals. That is, it seems unlikely that the ideas posited in scientific articles in 2005 are markedly more valuable or significant than those put forward in 1985. A more plausible explanation is that it is the style, rather than the substance of the articles, which has altered.
It is a truism to state that language is constantly evolving, and it seems reasonable to consider the possibility that changes in style and vocabulary may simply reflect time-related alterations in writing. Still, it is interesting that the difference between the language used in fundamental and clinical journals is so marked, with biased words more frequently found in high impact fundamental journals. This prompts the question: why has language only "evolved" in fundamental journals? A hypothesis which suggests itself is that the language used in the interpretation of data in clinical journals has the potential to impact upon clinical practice and is therefore more likely to be tempered than language used in fundamental journals. Be that as it may, the question remains as to why the use of biased language is on the rise in fundamental journals and whether this trend should continue unchallenged. Furthermore, what conclusions may be drawn from grandiloquence and high impact factors? Perhaps high rejection rates by editors without the use of peer review increase the pressure for hyperbole, so as to clear the first hurdle.
The increased use of biased words provides an interesting locus for a discussion of changing trends in publication and the increasing pressure felt by authors today. While we hesitate to suggest that the latter is responsible for the former, we are confident in the assertion that the use of biased words in a scientific manuscript does not serve a useful purpose. The readership is unlikely to require orientation to ensure that pivotal and central observations do not inadvertently pass unrecognized. On the contrary, language that exaggerates the importance of findings may fuel skepticism and alienate the reader. Perhaps journals should encourage more modest claims on the part of authors and encourage a return to objectivity. To end at the beginning: "The numbers and not their interpretation, must speak for themselves."
V. Fraser was the beneficiary of a summer studentship from the Meakins-Christie Laboratories. The authors would like to acknowledge the critical review of the manuscript provided by Dr. Marie-Claire Michoud and the assistance with statistical considerations provided by Dr. Heberto Ghezzo.
- Editorial: Truth in numbers. Nature Medicine 2006, 12:1.
- Suppe F: The structure of a scientific paper. Philosophy of Science 1998, 65(3):381–405.
- Hanna JF: The scope and limits of scientific objectivity. Philosophy of Science 2004, 71(3):339–361.
- Adam D: The counting house. Nature 2002, 415(6873):726–729.
- Seglen PO: Why the impact factor of journals should not be used for evaluating research. British Medical Journal 1997, 314(7079):498–502.
- Gordon MD: How authors select journals – a test of the reward maximization model of submission behavior. Social Studies of Science 1984, 14(1):27–43.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.