Uechi-Ryu.com

Discussion Area
PostPosted: Thu Jan 12, 2006 8:00 pm 

Joined: Tue May 22, 2001 6:01 am
Posts: 284
Location: Mansfield, MA USA
The following article is from the January 11, 2006 Boston Globe, and was written by Gareth Cook. The article addresses a problem in scientific research where laboratory results are manipulated by use of inexpensive digital technology. With pharmaceuticals and bio-med being such potentially lucrative businesses, have there been any significant changes to peer review procedures?

Sincerely,

Norm Abrahamson
______________________
Technology seen abetting manipulation of research
By Gareth Cook, Globe Staff | January 11, 2006
An explosion of new digital image technology has left many of the world's top biology journals vulnerable to fraud, scientists say.
The same advances that have given consumers inexpensive digital cameras -- and software to easily copy, crop, or alter an image with a few clicks -- have also proven a temptation for unscrupulous researchers. Federal science fraud investigations involving questionable images have grown from 2.5 percent of the cases in 1989-90 to 40.4 percent in 2003-04, according to the federal Office of Research Integrity, which investigates scientific misconduct. And in just the past few months, there have been two high-profile cases -- those of discredited South Korean scientist Hwang Woo Suk and the fired MIT biologist Luk Van Parijs -- that involve duplicated images.
The manipulation of images of cells can give a scientist a way to convince colleagues that experiments were successful when they were failures. By merely changing images on a laptop, scientists can earn acclaim, win lucrative research grants, and advance their academic career.
For decades, many scientists and journal editors have assumed that cases of scientific fraud are extremely rare because few come to light each year. The scientific community, they believed, had adequate checks against fabrication, such as the practice of other scientists repeating experiments after they're published.
But an innovative program at one leading biology journal challenges these assumptions. In September 2002, the Journal of Cell Biology began examining all the images in papers it had tentatively accepted but not yet published, using the software program Photoshop. The journal has had to reject 1 percent of these papers because authors manipulated images in a seriously misleading way, adding or subtracting elements that changed an experiment's results, according to Mike Rossner, the journal's managing editor. Photoshop makes it easy to manipulate an image, but it also allows someone to adjust an image and look for signs of manipulation.
''We can catch a lot of stuff before it comes out," said Rossner. ''For me, I consider this a matter of personal responsibility as an editor."
Rossner has been warning editors at other journals of the scope of the problem, and what can be done about it, and some are listening. The journal Science, which published both papers in which Hwang falsely claimed to have cloned human embryonic stem cells, announced yesterday that it would start using new safeguards to detect altered images this month. The journal will use the techniques developed by Rossner, according to Science editor in chief Donald Kennedy, but he said he didn't think these procedures would have caught the fraud in the Hwang case.
None of the other biology journals contacted by the Globe, including Nature, Cell, PLoS Biology, and Proceedings of the National Academy of Sciences, has such a system for checking images. But all said they were now reviewing their procedures for handling images, and may change their policies.
Images play an important role in documenting the results of biology experiments, and some of them can be especially prone to fraud, scientists say. For example, biologists commonly include in their papers images of a gel used in tests of characteristics of cells, with black bands indicating the presence of DNA, or particular proteins. These bands are easy to fake to change the results of an experiment, and this is one of the most common types of misconduct Rossner finds.
At Science, which publishes a full range of scientific papers, the new screening will focus on papers in biology, which tends to use a type of image vulnerable to manipulation, Kennedy said. He and other journal editors said that they were shocked to hear that 1 percent of the papers accepted in the Journal of Cell Biology had to be rejected because of apparently fraudulent images.
Rossner said that his involvement in detecting fraud came entirely by accident. Several years ago, the journal started requiring its authors to submit their papers, including all of the images, electronically. One author had submitted an image in the wrong format, so Rossner was working to fix it. As he did this, he noticed a box around part of an image, where the background didn't quite match -- a sign that someone had altered the image.
''I said, 'Oh, boy,' " Rossner remembered.
Eventually, he set up a list of checks that a production person at the journal does for every image to be published. For example, he said, they will take the image and enlarge it, then increase the contrast of the image to search for signs that the background is not consistent. They also look at magnified images to see whether any are duplicates. On average, the checking takes about 30 minutes per manuscript.
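The checks described here -- enlarging an image and pushing its contrast to expose an inconsistent background -- can be sketched in a few lines of NumPy. This is a hypothetical illustration of the underlying idea, not the journal's actual tooling: a pasted region often has a different noise floor than its surroundings, and aggressive contrast stretching or per-patch noise statistics make that stand out.

```python
import numpy as np

def stretch_contrast(img, low_pct=1, high_pct=99):
    """Aggressively stretch contrast so faint background seams become visible."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

def background_inconsistency(img, patch=8):
    """Split the image into patches and compare their noise levels.
    A pasted region often has a noise floor unlike its surroundings."""
    h, w = (img.shape[0] // patch) * patch, (img.shape[1] // patch) * patch
    tiles = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    stds = tiles.std(axis=(1, 3))       # per-patch noise estimate
    return stds.max() - stds.min()      # large spread -> suspicious

# Synthetic demo: noisy background with a pasted, noise-free rectangle
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.05, (64, 64))
img[20:40, 20:40] = 0.5                 # "pasted" flat region with no noise
enhanced = stretch_contrast(img)        # the visual inspection step
print(round(background_inconsistency(img), 3))  # noticeably large spread
```

The flat rectangle has zero noise while its surroundings do not, so the per-patch spread is large; on an untampered image it would be near zero.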
Since the system has been in place, about 1,300 papers have been reviewed. In about 25 percent of the cases, there is at least one image that has been changed so much that the editors think it is not an accurate representation of the original, and the authors are asked to resubmit the photo. For example, it is common for authors to increase the contrast on the image so that some of the fainter lines disappear, making the image look cleaner.
In these cases, the manipulations do not affect the conclusions of the paper, but violate the journal's strict policy on image manipulation.
In 13 cases, Rossner said, the screening has found what the journal considers to be fraudulent manipulations and has rejected the paper. These include duplicating cells, and adding bands on gels. He said that the journal rejects papers only if he and three other editors agree that the misdeed is that serious. Rossner has been consulting with Hany Farid, a computer scientist at Dartmouth College, who is developing computer software that will automatically check digital images for signs of fraud, including portions of images that are duplicated. He plans to make the software available for free.
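The duplicate-region detection attributed to Farid can be illustrated with a toy version -- not his actual algorithm, which would use overlapping blocks and features robust to small edits -- that simply hashes fixed-size blocks and flags any block appearing at more than one position.

```python
import numpy as np
from collections import defaultdict

def find_duplicate_blocks(img, block=8):
    """Hash every non-overlapping block; byte-identical blocks at more
    than one position are copy-paste candidates. (A real tool uses
    overlapping blocks and robust features; this is the bare idea.)"""
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            seen[img[y:y+block, x:x+block].tobytes()].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]

rng = np.random.default_rng(2)
img = rng.random((64, 64))
img[32:40, 32:40] = img[0:8, 0:8]       # paste one block elsewhere
dupes = find_duplicate_blocks(img)
print(dupes)                            # one duplicate group: [(0, 0), (32, 32)]
```

On random data only the deliberately pasted block collides, so the tampered pair is the sole group reported.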
Hwang published two papers claiming to have created cloned embryonic stem cells, and both had duplicated images -- one used the same image more than once, and another used images that had appeared in other journals, which would have been much more difficult to catch. Science said yesterday it would retract both papers, after the release of an investigation in South Korea showing that Hwang had invented his findings.
In a separate case, the Massachusetts Institute of Technology fired Van Parijs for research fraud last year. MIT did not identify the specific problem, but several papers he wrote before coming to MIT contain what appear to be duplicated images.
Part of the problem, scientists said, is the speed with which the technology has changed, meaning that journals, and the scientific community, have not had much time to think through the implications. Now, a single scientist can acquire data from an experiment, analyze it, and generate a final image for publication by computer, without anyone else being a part of the process, eliminating a check on fraud. There is also something of a generation gap -- the people who head labs and train the next generation in what is acceptable are often not as familiar with the full power -- and potential for abuse -- that the new tools provide, according to Emilie Marcus, editor of Cell.
Rossner said that while some editors have been enthusiastic to hear about what can be done to detect fraud, others are complacent. Twice, he said, he has seen papers rejected by his journal for fraudulent images appear later in another journal, with the same, problematic figure. Once, he said, he wrote to the editor of the journal to tell him about a band whose intensity had been manipulated. Rossner got a note back; the editor didn't see any problem.
Gareth Cook can be reached at cook@globe.com.
_____________________


PostPosted: Thu Jan 12, 2006 8:36 pm

Joined: Thu Mar 11, 1999 6:01 am
Posts: 17040
Location: Richmond, VA --- Louisville, KY
This is just a new look (literally) at an old problem.

Scientists are no different than any other group of individuals - except for their training and their minimum level of intelligence and imagination. They compete, they become jealous, they can panic under pressure, they can be petty, they can be prone to painting the rosiest possible picture of their point of view, etc.

Scientists also are vulnerable to the temptations of lying and/or withholding truth. There are two good checks for this.

The first is the peer-review process. As you can see from the article, the process of submitting a publication for examination by peer experts before publication allows for detection of fraud along the way. People who have "been there" in a field have a good sense of the way the world is. When something doesn't pass the "sniff test", then perhaps they look closer.

The second is the final step in science - having someone else reproduce your results. Without that final validation, your work means nothing. If you're going to fake something, you damn well better fake it good. Because when it comes time for your results to be checked by a colleague who may or may not be your fan, (s)he may find the real truth and report it.

There is a bit of a single sanction in science. If you are caught lying, you are dead in the water. Nobody will pay attention to you any more, and nobody will hire you. You will be shunned in a very small, tight community.

Better yet...

It is my personal experience that "the truth" is much more interesting than what we believe it to be. I've never been tempted to fake results, because I've found early on that the imperfections and the exceptions are the playground for new discoveries.

When it came time to do my dissertation research, my advisor recommended I take a paper published in Science, and bring the work to the next level. The author used a standard engineering method to quantify fluctuations in heart rate, and showed how those fluctuations changed with different states of the autonomic nervous system. The far-reaching implication was that this possibly could be a way to detect diseases like diabetic autonomic neuropathy. And that was my take on it.

But first... I had to reproduce the author's results. So I spent 2 years in the lab trying to do it. According to the author, the fluctuations were stable, and could be shown as stable peaks in the frequency domain. Well... I could never make my peaks stable. They came, they went, they came, they went... Only when my experimental dogs were under stress (exercise or blood loss) did they remain stable.

And then it hit me... The fellow was absolutely, positively, 100% wrong. The natural order of the body at rest is one of flexibility and adaptability. Only when the body is stressed does it exhibit a central control tendency where fluctuations in physiologic signals become constant and stable. At rest, we are in a beautiful, NATURAL state of chaos.
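The frequency-domain analysis described above can be sketched generically. This is a textbook windowed FFT power spectrum on a synthetic signal, not the actual dissertation method; the 4 Hz sampling rate and the test signal are illustrative assumptions. A sharp, persistent spectral peak corresponds to the stable oscillations seen under stress; at rest, the energy wanders across frequencies.

```python
import numpy as np

def hr_power_spectrum(hr_signal, fs=4.0):
    """Windowed FFT power spectrum of an evenly resampled heart-rate signal.
    A sharp, stable peak suggests tight central control; a broad, shifting
    one suggests the flexible, 'chaotic' resting state."""
    x = hr_signal - hr_signal.mean()                 # remove the DC component
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spec

# Synthetic "stressed" signal: a single constant 0.25 Hz oscillation
t = np.arange(0, 64, 1 / 4.0)                        # 64 s sampled at 4 Hz
stressed = 0.05 * np.sin(2 * np.pi * 0.25 * t)
freqs, spec = hr_power_spectrum(stressed)
print(freqs[np.argmax(spec)])                        # dominant peak at 0.25 Hz
```

A resting-state signal whose oscillation frequency drifts over the recording would smear that peak across neighboring bins instead of concentrating it.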

My refusal to doubt my own results - and a few years spent digesting the truth - led me to my dissertation. And that led me to my Ph.D.

It pays in the long run to be on the side of truth - no matter how ugly it first may appear. 8)

- Bill


PostPosted: Thu Jan 12, 2006 8:46 pm

Joined: Wed Oct 27, 2004 9:40 pm
Posts: 3700
Quote:
The first is the peer-review process. As you can see from the article, the process of submitting a publication for examination by peer experts before publication allows for detection of fraud along the way. People who have "been there" in a field have a good sense of the way the world is. When something doesn't pass the "sniff test", then perhaps they look closer.

There is a bit of a single sanction in science. If you are caught lieing, you are dead in the water. Nobody will pay attention to you any more, and nobody will hire you. You will be shunned in a very small, tight community.
And if you reviewed the work of such a person and didn't catch it, your reputation goes in the toilet as well. And publishing the work of such a person is a major black eye for the journal.



PostPosted: Fri Jan 13, 2006 2:48 pm

Joined: Thu Mar 11, 1999 6:01 am
Posts: 17040
Location: Richmond, VA --- Louisville, KY
As it should be...

Amen, brother Mike.

- Bill


PostPosted: Fri Jan 13, 2006 3:03 pm

Joined: Tue May 22, 2001 6:01 am
Posts: 284
Location: Mansfield, MA USA
A disturbing line from the article above is:

_______________________
Federal science fraud investigations involving questionable images have grown from 2.5 percent of the cases in 1989-90 to 40.4 percent in 2003-04, according to the federal Office of Research Integrity, which investigates scientific misconduct.
______________________

I had never heard of the "Office of Research Integrity" before, but the figure of 40% of cases involving "questionable images" seems huge. It's not clear to me whether that is 40% of all articles the agency examines, or only 40% of those articles that show possible fraud.

What do any of you folks know about the Office of Research Integrity?

Sincerely,

Norm Abrahamson


PostPosted: Fri Jan 13, 2006 3:41 pm

Joined: Thu Mar 11, 1999 6:01 am
Posts: 17040
Location: Richmond, VA --- Louisville, KY
Norm wrote:

What do any of you folks know about the Office of Research Integrity?

I don't know anything about it, Norm, because I've usually worked with good people.

There was one case of a research fellow (young MD) having "faked" data in my group. He was booted from the group. I'm sure he was able to operate a private practice somewhere and do just fine. But his academic career ended right there.

As for the 40% figure, well... I wouldn't get too worked up on what it means in terms of the integrity of results.

I've worked with imaging data both professionally (contrast echocardiography) and personally (digital photography). Basically, there are lots and lots of things you can do with images once you get them digitized. I've had training in digital signal processing and digital image processing. And my home computer, running the XP operating system, came with photo editing software. It's REALLY easy to use. I never have to write the programs like I did years back in grad school, when I was processing VCR streams of images of the heart.

Here's the deal...

You may need to do image processing to get your work done. When I was doing the contrast echocardiography, we would sample one heart cycle worth of images (at 30 frames per second) into the digital imaging system. Then we might use autocorrelation programs to help superimpose the discrete images on each other in time to examine the wash-in, and wash-out of microbubbles. Or we might use one of a number of image enhancement techniques to help see the heart wall so we could measure the thickening and thinning of the heart muscle through the cardiac cycle. That was part of us getting our job done.
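The autocorrelation step described above - superimposing discrete frames on each other before measuring wash-in and wash-out - can be sketched with a standard FFT cross-correlation. This is a generic illustration under the simplifying assumption of pure translation between frames, not the lab's actual pipeline.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (dy, dx) translation that maps `frame` back onto `ref`,
    via the peak of their circular cross-correlation (computed with FFTs)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                     # interpret wrap-around as negative shift
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
frame = np.roll(ref, (3, -2), axis=(0, 1))   # frame displaced by (3, -2)
dy, dx = estimate_shift(ref, frame)
print(dy, dx)                                # -3 2: roll frame by this to realign
```

Rolling `frame` by the estimated `(dy, dx)` recovers `ref` exactly here; real echo frames would also deform, which is why this is only the alignment idea, not a full registration method.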

The issue here is whether or not your image processing was a means to a legitimate end, or was used to make data something that it was not. That's it - period. My sense is that only a small fraction of that 40% was fraud.

What a researcher MUST do though is disclose EVERYTHING that they did. So if you enhanced the image for a publication, SAY you enhanced the image and how.

This Globe article is sensationalizing things a bit. There are many easier ways to commit fraud in research. The image thing makes the press because it's new, and people fear things that are new. But any old fool can put numbers down on a page (or change those numbers just a bit...) and call it data. A reputable researcher does not.

And the frauds eventually get caught by smart people. That's why we have the peer review process. The truth eventually rises to the surface.

Back when I was working for a BCBS health plan, they had a full-time fraud department that literally recovered millions every year from detected fraud. Much of that fraud was committed by MDs, and we weren't afraid to put their sorry butts in jail. A typical case was one where a patient was supposed to have a stress test before BCBS would approve a transplant. The MD was too bothered to do another, and had an old one available, but not within the required time window. So he used white-out to change the date, and faxed it to us. We did not catch it. But a secretary called us up and ratted on him. He lost his job over it - as he should have. And he was a prominent specialist in an academic institution.

Was that a niggling detail? Absolutely not. That stress test was designed to see if the operation could be done safely on the patient. This MD put his patient's life in danger because he was lazy. No excuse!

- Bill


PostPosted: Fri Jan 13, 2006 6:33 pm

Joined: Tue May 22, 2001 6:01 am
Posts: 284
Location: Mansfield, MA USA
Bill wrote:

_____________________
And the frauds eventually get caught by smart people. That's why we have the peer review process. The truth eventually rises to the surface.
______________________

But how long does it take? Is it after millions of dollars have been directed to develop a drug or product that will never work properly? Those costs are ultimately passed on to consumers and investors. It would be nice to catch the fraud before it causes damage. But is it practical? It sounds like researchers police themselves, but it takes another researcher to be interested enough to try to duplicate a process or experiment. Couldn't that take years?

Sincerely,

Norm


PostPosted: Fri Jan 13, 2006 7:19 pm

Joined: Thu Mar 11, 1999 6:01 am
Posts: 17040
Location: Richmond, VA --- Louisville, KY
My sense, Norm, is that fraud exists as assuredly as the sun rising in the east. Just to give you a number... The federal government has estimated that as much as 15% of the dollars floating through the healthcare system are fraudulently billed for. (There are shades of wrong here...)

Norm wrote:

how long does it take?

It depends. Sometimes fraud is caught in the lab of origin. Sometimes it is caught when an article submitted to a journal is making the rounds amongst peers. Sometimes - as you can see - the journal itself does what it can efficiently do. And sometimes it's years after a number of articles are published before someone's work begins to achieve an "outlier" status. Is it a paradigm shift or fraud? Only time can tell. (It's taken decades to work through many of the implications of Einstein's theories.)
Norm wrote:

It would be nice to catch the fraud before it causes damage. But is it practical?

Fraud of many kinds has been caught through the years - mostly through pedestrian methods.

Actually, I do a little bit of consulting work for a fraud unit in my own health information company - this being healthcare financing fraud. The methods for detecting it are getting more and more sophisticated. And given the amount of revenue involved, there's room for smart people and expensive tools to enter the arena.

What you are asking about, though, is fraud from the standpoint of drug development, testing, and marketing. There are processes in place to check that, as Mike and I have discussed. But there most definitely is room for improvement from the standpoint of F.D.A. standards. There's a tradeoff involved. Already, people with life-threatening illnesses complain because new treatments don't make it to the marketplace fast enough. So the F.D.A. plays this tricky balancing game between getting new treatments to the marketplace faster (to save lives) and preventing risky treatments from making it past the review process (to save lives). That's the overall goal - save the most lives.

Individual companies themselves IMO can help on two fronts. First... I'm a big believer in ethics training. I've been through several versions of it with different employers. This is a message that the company needs to be on board with from top to bottom. The second is Six Sigma training. Companies need to be in tune with the needs (voice) of the customer, and they need to deliver a product with the lowest defect rate. The message needs to be communicated over and over again that honest, defect-free business is good business. At the end of the day, you maximize profit. You make the most money when you do it right the first time. And that's a good thing for everyone involved.

And my health information company now does work for the federal government to detect drug problems that haven't been (or couldn't be) "detected" in the F.D.A. approval process. We deal with the medical claims of tens of millions of people a year. That's a gold mine for detecting new drug problems - if you know how to find the needles in the haystack. This is a relatively new business for us (about a year old).

So yes, Norm, I think process - on many levels - is helpful in this matter.

- Bill

