The increasing incidence of adjectives expressing subjective judgments undermines what has traditionally been accepted as the objective nature of the scientific paper. Our argument therefore assumes that objectivity is an integral and necessary component in the quest for scientific progress. Most would tacitly acknowledge that objectivity occupies a unique position within scientific disciplines. In his paper "The Scope and Limits of Scientific Objectivity," Joseph F. Hanna states: "It is generally agreed that one of the distinguishing virtues of science is its objectivity. The scope of science is the objective world and the limits of science are determined by the limits of the objective methods of formal and empirical research" [3]. Insofar as the scientific paper is the primary vehicle by which new and private scientific findings enter the realm of public discourse, it should also demonstrate a commitment to the principles and standards of objectivity. We would argue that a paper may take a subjective stance insofar as it argues for the relevance of the observations it posits and for their implications for the established body of knowledge, but these contextual arguments should be minimal and tempered with discretion. The strength and import of observations and conclusions should be evident in and of themselves, with minimal positioning on the part of the authors.
The demonstrable increase in the use of adjectives with the potential to bias the reader may indicate that the interpretation of results has come to replace what has traditionally been a more objective stance. This shift from the more conservative representation of data towards its somewhat hyperbolic interpretation raises important questions about the evolution of the scientific article, and it must be examined in conjunction with changing attitudes within the scientific community regarding the writing and submission of articles, the mounting influence of the impact factor, and the pressures currently facing authors seeking publication.
The Rising Impact of the Impact Factor
Changing attitudes towards scientific publication must be examined in tandem with the changing role of the impact factor in assessing the merits of a body of work and the "impact" this has had on the scientific community. Briefly, the impact factor of a journal reflects the number of citations appearing in indexed publications in a given year to articles published in that journal in the previous two years, divided by the number of citable papers published within those two years. However, the original purpose of the database developed by the Institute for Scientific Information and used for citation analysis has been somewhat forgotten, and the impact factor has taken on a life of its own. Several detailed critiques of the impact factor have been published [2], highlighting shortcomings such as its limitations when comparing journals across different research fields. In addition, even within a discipline the impact factor may not appropriately measure the quality of a journal: it is sensitive, for example, to whether an area of research is young and developing, and therefore likely to generate many recent citations, or more mature.
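The calculation described above can be made concrete with a minimal sketch; the function and figures below are purely illustrative and do not describe any real journal.

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Impact factor for year Y: citations received in Y to articles
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: its 2004-2005 articles received 600 citations
# in indexed publications during 2006, and it published 200 citable
# items over 2004-2005.
print(impact_factor(600, 200))  # 3.0
```

Note that the denominator counts only "citable" items (typically research articles and reviews), while the numerator counts citations to anything the journal published, which is one of the commonly criticized asymmetries in the measure.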
Although the merit of the impact factor remains the subject of intense debate, its current influence on scientific papers and publication is not. The impact factor has extended its reach into the evaluation of academic and medical institutions, as well as into the evaluation of researchers for tenure, promotion, and the awarding of grants [1]. The latter often hinges not only on the number of publications and the quality of the research but also on the impact factor of the journal. In 2002 a Nature News feature noted: "...the implicit use of journal impact factors by committees determining promotions and appointments is endemic" [4]. Similarly, a 1997 British Medical Journal article claimed: "The increasing awareness of journal impact factors and the possibility of their use in evaluation are already changing scientists' publication behaviors towards publishing in journals of maximum impact" [5]. Moreover, the pressure currently facing researchers to publish in high impact journals is in stark contrast to publication behavior as recently as 25 years ago. An investigation undertaken in 1984 into which factors influenced scientists' selection of journals for publication concluded: "... that journals were primarily selected on the basis of the audiences they reach, rather than the rewards they confer, and the reward seeking model of selection behavior found little or no support" [6]. It is interesting to note that the twenty years in which our data demonstrate an increase in biased language correspond to a period during which scientific authors began to change their publication behavior. We suggest that the emergence of a new trend, in which a reward-seeking model (high impact factor) begins to supersede target audience as the primary motivation in the selection of journals, should not pass unnoticed.
Scientists' Response to the Barriers to Publication
The status of scientific journals is measured by the impact factor, and journal editors have adopted strategies to enhance it, e.g. by publishing review articles, which tend to be cited frequently. Editorial evaluation of articles, and their potential acceptance or rejection on grounds of priority, rests on interest to the readership and not necessarily on the quality of the science. Rejection of an article as low priority for the journal is often not reflected in the reviews provided to the authors; a judgment of low priority is a subjective opinion and as such is not open to debate. How "hot" a topic is thus becomes critically important to its chances of publication. This trend, when examined in conjunction with the increased use of biased words, raises some fundamental questions. Does a reward-seeking model of publication, as reflected in the current desire to publish in high impact journals, influence the use of language in scientific manuscripts? For instance, is it possible that authors have discovered that an effective strategy to counter reviewers' lack of enthusiasm for an article is to create bias through language that exaggerates the importance of the findings? Or is it merely that language exists in a state of flux, and any changes in style or vocabulary simply reflect time-related alterations in writing? Finally, perhaps the biased words are not so much biased as emphatic, though necessary, descriptors of the work being presented?
At first glance it seems plausible to state that the words under investigation are not reflective of bias, but are rather necessary descriptive terms for what is, in fact, a new and important knowledge claim. A detailed discussion as to whether manuscripts in high impact factor journals are truly more "important" or "novel" than those in low impact journals is beyond the scope of this paper and may be a subject for future investigation. However, we would argue that it is remarkable that the use of biased words has increased over time in both low and high impact journals. That is, it seems unlikely that the ideas posited in scientific articles in 2005 are markedly more valuable or significant than those put forward in 1985. A more plausible explanation is that it is the style, rather than the substance, of the articles which has altered.
It is a truism to state that language is constantly evolving, and it seems reasonable to consider the possibility that changes in style and vocabulary may simply reflect time-related alterations in writing. Still, it is interesting that the difference between the language used in fundamental and clinical journals is so marked, with biased words more frequently found in high impact fundamental journals. This prompts the question: why has language only "evolved" in fundamental journals? A hypothesis which suggests itself is that the language used in the interpretation of data in clinical journals has the potential to affect clinical practice and is therefore more likely to be tempered than language used in fundamental journals. Be that as it may, the question remains as to why the use of biased language is on the rise in fundamental journals and whether this trend should continue unchallenged. Furthermore, what conclusions may be drawn from grandiloquence and high impact factors? Perhaps high rates of rejection by editors without recourse to peer review increase the pressure for hyperbole so as to clear this first hurdle.