Hi everyone,

  I am beginning to collect information for an assignment I will be writing on trust, honesty, and objectivity in scientific research. I've come across many specific examples of fraud (like the 2005 incident with the Korean stem cell researcher), but I think I'd like to focus on the root of the problem, which in many cases seems to be the intense pressure from funding agencies (universities or governments) on scientists to publish or perish.
  An article in New Scientist discussed a survey carried out by the American Association for the Advancement of Science, which essentially concluded that most scientists surveyed had the "impression" that fraud was on the rise but had no real evidence of its "incidence".
  Any thoughts on the issue would be greatly appreciated. Have any of you ever experienced this type of pressure to publish?

Also, my instructor, who will mark the paper (a grad student... I'm only in my first year of university), suggested I also look into pressure from private pharmaceutical companies on their scientists to produce drugs, because there's a lot of money to be made. But don't these drugs go through rigorous testing before they're allowed to reach the shelves, so that there isn't much room for "cheating" there?

Thank you for your comments in advance,

Jack

Great question! If we are honest, I would guess that few if any of us who contribute to this site have been unaware of some sort of “fraud” at one time or another. Let's start off by defining what we mean by fraud – it has, of course, many degrees.

At one extreme there are many examples of people (just look at retractions in Nature and Science) who simply made up or falsified data. As you say, they are usually caught sooner rather than later, once others are unable to replicate their results. The more surprising the published finding, the greater the likelihood of being caught, since everyone wants to “try it”. In contrast, people who make things up that are entirely consistent with the existing literature can get away with it for longer, since no one immediately “jumps on the bandwagon”. That said, undetected fraud of that type usually occurs in small “unsexy” fields, where there are far fewer people available to try to confirm and replicate new findings.

As to the causes of the above: in today's competitive world academics are only as good as their last paper, and papers = grants (the converse is equally true). Further, for junior researchers on short-term grant funding who need to find a new job every 2-3 years, a glowing CV with high-profile papers helps, so again the pressure to succeed and stand out from the crowd is obvious. Similar pressures apply in small biotech – exciting findings and excellent early pre-clinical results on a lead compound = more money from venture capitalists (funders), while poor results can mean the rapid demise of a company. In that situation, as you say, manufactured data on a drug would, one hopes, be picked up before the compound was tested in man – but not always.

At the other end of the spectrum there is minor “massaging” of data – post-hoc, an outlier is removed from a group to make a change statistically significant. Most of us have seen that at some time when reviewing grants or papers, and it is justified as NOT being fraud, since it saves time, money and animals by not repeating the experiment with a larger N – and anyway, “the trend was there, it was merely helped on its way, and it was nearly significant”!
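To make that concrete, here is a minimal sketch (entirely invented numbers, in Python with scipy – my own illustration, not anything from the posts above) of how dropping one inconvenient point can move a comparison across the conventional p < 0.05 threshold:

```python
# Toy illustration of post-hoc "outlier removal": the numbers are
# invented purely to show how excluding one point can flip a result.
from scipy import stats

control   = [5.0, 4.9, 5.1, 5.2, 4.8, 5.0, 5.1, 4.9]
treatment = [5.3, 5.5, 5.2, 5.4, 5.6, 5.3, 5.4, 4.2]  # 4.2 is the "outlier"

# With all the data the trend is there, but p is well above 0.05.
t_full, p_full = stats.ttest_ind(treatment, control)

# Quietly drop the low treatment value and the same comparison is
# suddenly "significant" -- no new experiment, no larger N.
t_trim, p_trim = stats.ttest_ind(treatment[:-1], control)

print(f"all data:        p = {p_full:.3f}")
print(f"outlier dropped: p = {p_trim:.3f}")
```

The point is not the particular numbers (they are made up) but that the exclusion decision is taken after seeing the result – which is exactly what makes it massaging rather than legitimate outlier handling.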

So… you can see from the above that, as with almost anything in life, fraud is a matter of degree, and both its definition and what we are prepared to tolerate vary with the norms of the scientific community. Good luck with your assignment.

I would say there is huge pressure to publish. Sadly, and I would argue foolishly, your rating as a scientist is often based on publication number, citations and impact factors - in other words, how many papers you publish, how many other people refer to your work, and how 'good' the journal you published in was. Of course this means that if you produce something controversial that takes 3 pages in Nature and is then cited by 50 other people all pointing out that you are wrong, you will appear to be better than someone who publishes three 100-page monsters of superb work in a less prestigious journal, whose importance no-one immediately realises!

The pressure is therefore for quantity and not quality. In a field like taxonomy (naming things), where it can take months of work to produce a short description of a new species that is of little interest to anyone outside the field, you will be considered 'poor' compared to someone who dashes out lots of junk in an exciting field like DNA cloning that will pick up lots of citations. Any way you try to quantify 'how good' someone is or 'how important' their work is will always be flawed - with science these are very abstract terms - but it does not help when the measures in use are so obviously flawed.

I, like many people on AAB, am on a short-term contract. I only get employed for two or three years at a time and I am constantly applying for jobs or grant money to stay in employment. Since not everyone has time to sit and sift through all the dozens of applicants and the hundreds of papers they have written, they need some way to sort the good from the less good. Competition is fierce, and so those with the fewest papers are likely to be dropped first, no matter how good their work or how important it may come to be. It is not exciting enough *now*. Thus the pressure is to get out more and more papers. Sure, getting them into a good journal and doing good work is important - we have standards - but I want to stay in research, and to do so I need to be employed, and to ensure that, I need to publish. It is (worryingly) the first measure of my ability. I may not like it, but I don't have much choice.

Just to add a little and to put the contrary view: as a senior clinical academic and a neuroscientist, when I hire clinical or scientific staff I am NOT interested in the total number of publications on a CV. I only look at those where the individual is first or joint first author (i.e. did most of the work), and I very definitely favour high-impact publications in the top journals (they have already been through rigorous peer review). Thus I go for quality and not quantity. Having said that, other scientific fields outside biomedicine might treat this issue differently.

For sure, though, Dave is right re metrics - in the UK much of our core university funding (especially in the biomedical field) derives from the RAE (Research Assessment Exercise), which is very largely driven by the impact factor of the journals people publish in.

But, David W., "high impact" doesn't seem to correlate very well at all with quality (at least in my field, vertebrate palaeontology). It pains me that a two-pager in Science or Nature, which amounts to nothing more than an extended abstract, is in impact-factor terms worth more than a dozen high-quality descriptions in high-quality but low-impact journals such as Palaeontology or the Journal of Vertebrate Paleontology.

Thanks Mike. I tried to be careful in my post and make the point that my comments relate only to biomedicine, where on the whole the true impact and quality of a paper (as assessed by RAE panels) and the impact factor of the journal do correlate very well. It is for that reason that HEFCE, in son-of-RAE, is going to use metrics to assess quality in the biomedical sciences from 2010 onwards, which will in turn largely determine QR (central funding to universities).

In taxonomy (naming things) it is especially bad. Doing a formal description of a good specimen (a hundred or more pages of journal space - basically a monograph or small book) can take a year of full-time research. It is essential for our work, but it will never be (that) highly cited, and because of its size it will often have to go to a minor journal. But exactly as Mike says, a two-page job in Nature that just spins how cool the find is would be worth 50 times as much, because it gets into a high-impact journal and will be cited by others in the field wanting to say how cool it is.

One is the mark of a great researcher / anatomist; the other is the mark of someone who was lucky enough to find a great fossil - it says little about their research, just that they can spin it well, or at least write it up well. I wouldn't like to see things stop appearing in Nature and Science, but once a find is there, there is no pressure for anyone to then go and do the full description. They have generated their '50' points in a few weeks' work, so why work for another year to get another '5'? In fact there is pressure *not* to do that work, as it will take too long and not be worth much to the RAE, no matter the benefits to other researchers.

Agreed, but like it or not the RAE (and its son) is the only game in town in the UK - it determines a huge chunk of a university's income. If you look at the details of the newly proposed metric system, there will be a much greater emphasis on citation rates rather than on the impact factor of the journals in which one published one's 4 best papers. I suspect that will make little difference in biomedicine - how about in your field?

That is much the same. Our research often takes years to reach fruition (e.g. digging a fossil site over 10 years to get all the bones you can, then preparation time, descriptions and finally publication), so a key contribution to the field might not be credited as such for years or decades, until other fossil material or analyses catch up. Even the major analytical studies that I tend to be involved in can take months to complete and years to publish. So anyone doing similar work will not be able to cite my research as relevant to theirs for several more years: assuming they start a project based on mine, theirs will not be in press for a few years. Citations will eventually be high if the work is good, but perhaps not for years - over the first two or three years there might be little or nothing.

Still, assessing publications on how often they are cited is at least a lot better than assessing them on how often other papers in the same journal are cited!  It's not great -- not at all -- but it's a step in the right direction.

David H. makes an important point about the slow pace of palaeontology, which means the impact factor as currently defined is biased against palaeo journals. Since the typical in-press time of a palaeo journal is a year or so, half of the impact-factor citation window for my article will have passed before there is any realistic chance of anyone citing it: even if someone writes and submits a paper citing mine on the day mine appears, it will still spend a year in press before coming out.
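For anyone unfamiliar with how the metric is computed, here is a rough sketch of the standard two-year impact factor (the exact counting rules vary a little between databases):

```latex
% Two-year journal impact factor for year y:
\mathrm{IF}_y \;=\;
  \frac{\text{citations received in year } y \text{ to items published in years } y\!-\!1 \text{ and } y\!-\!2}
       {\text{citable items published in years } y\!-\!1 \text{ and } y\!-\!2}
```

So a paper published in year y only "counts" through citations arriving in years y+1 and y+2; if citing papers in the field themselves spend a year in press, roughly the first half of that two-year window is already gone - which is exactly the bias Mike describes.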

This just goes to show how different things are in different fields. In palaeo, the significance of a paper often doesn't become fully apparent for five years or so.