Sezen & Sensibility: Lab dynamics & the promotion of (ir)responsible conduct of research

The chemistry tubes have been abuzz lately with the ongoing saga of the Bengu Sezen misconduct investigation. For excellent coverage, go check out ChemBark’s posts. It is a disturbing tale of years of data fabrication, fraud, sabotage, evasion, and perjury on the part of a chemistry graduate student at Columbia University, resulting in the retraction of six papers from the laboratory of her adviser, Professor Dalibor Sames.

Early on, it became evident that Sezen’s chemistry was peculiarly ‘finicky’. Other students in the lab attempted to use the reactions she’d developed – without success. Soon other labs were contacting Sames with similar quandaries. What was the secret that everyone was missing? Eventually – after publication of six papers and Sezen’s dissertation defense – it became clear that there was no secret aside from an elaborate con.

It would perhaps be a relief to leave the blame solely at Sezen’s feet, to believe that Professor Sames was duped, to be able to say that Sames was but one victim of an incredible con game. Sadly it appears this is not the case. There is nothing to suggest that Sames was complicit in the data fabrication, but he created a situation that allowed it to continue and expand. In the face of overwhelming evidence, Sames continued to stand behind Sezen’s work – until a grad student set up a sting operation that left little doubt Sezen had sabotaged the experiment to cover her tracks. Among the documents recently released under Freedom of Information Act requests, the Office of Research Integrity investigation reveals that two graduate students were dismissed by Sames and a third switched labs. Although a redacted someone (I would guess Sames) claims other circumstances contributed to the dismissals, there is an obvious implication that they were due in large part to the students’ failures to reproduce Sezen’s results.

This is where some fault falls on Sames. Things rarely work exactly the same way every time, but there has to be some degree of reproducibility. From the description of what happened in the Sezen case, a dichotomy emerged: a professor placed absolute faith in one person and treated the failures of others as utter incompetence. This breeds an environment of stress, fear, and contempt. Sezen’s actions are inexcusable, but it is perhaps not so difficult to understand the motivation to provide the expected results.

It is unreasonable to expect a lab head to review every piece of data, to keep watch over every experiment. Not only is it impractical, but an environment where “Big Brother is watching” would be detrimental to morale and the creative process of science. Trust and integrity are central elements to the success of a laboratory. But trust between scientists, no matter the stage, must not be blind and should never come at the expense of skepticism and sensibility. We should welcome the attempts of others to repeat and build upon our work. And when they cannot, we should be willing to explore why, not with the expectation of fraud at every turn, but in the hope of improving our research and advancing the field.

Posted in advisor/trainee interactions, attitudes, conduct of research, ethics in science, lab management, retractions, troubles of science | Tagged , | 3 Comments

Scientiae Carnival: Inspiring Women in STEM

Inspiration.

It is an essential element to finding motivation, creative avenues, success, satisfaction…

But where do we find it? Is it in those who have come before us? The encouragement and confidence others give? The goals and dreams we have set for the future? In particular, how do we inspire girls and women to join and remain in STEM fields? This is the theme of the second quarter Scientiae Carnival.

Everyone has her own story, her own source of inspiration, and it varies at each stage of career and life. Elaine Westwick found her inspiration to become a scientist amongst science writing greats, but as she considers re-entry to a science career, she is looking for new role models. J.J. Brown recognizes the importance of successful, visible female scientists and shares a personal role model, Barbara McClintock. lin at PhD-baby is inspired by the women she grew up with, women who had to fight for their education and jobs. Karina over at Ruminations of an Aspiring Ecologist reminds us that mentors come in many forms – including friends and peers. Alyssa at Apple Pie & The Universe and Dr. O at The Tightrope have been learning to entangle the roles of scientist and mother. JaneB shares her difficulty in finding inspiration among role models, a sense of being weighed and found wanting – a feeling I think anyone could relate to.

From my own experience, you don’t have to be a female scientist to encourage girls’ interests in STEM. Growing up with strong and occasionally spunky working women in my life, it never occurred to me that anything of interest was out of bounds. Those women also influenced the men I grew up with – namely my dad and brother – who never told me what I could or couldn’t do because I was a girl. It was a middle-aged white guy teaching chemistry in a small town high school who planted the first suggestion that I, Belle, could personally make a career out of science. I didn’t think much of it at first, or even for a few years. But that first bit of encouragement laid the groundwork for other fantastic mentors – also male – in college and grad school. The further I go in my scientific career, the more I realize that there are few women at the top and that there are particular challenges and stereotypes to navigate as a woman in science. I now look more for strong, female scientists as role models and potential mentors.

But in this quest for inspiration, we must remember: The goal isn’t to become those who inspire us. It’s to find a spark of motivation, an element of respect, and – most of all – the knowledge that through all our exploits, we’re not alone.

Posted in Scientiae carnival, women in STEM | 4 Comments

Retractions: What are they good for?

Science's Scarlet Letter

Retraction.

It’s perhaps one of the juiciest words in scientific language. One might think of it as the Scarlet Letter of science, a public stamp that can trigger gossip, speculation, whispers… Something went wrong somewhere, and someone is paying for it by labeling their published work as invalid.

But what exactly has to go wrong to call for a retraction? This is the question posed at Retraction Watch. DrugMonkey is polling readers on their interpretation of the retraction and posts his stance:

… we need to be exceptionally clear in the business of science that a failure to replicate is not, in fact, evidence of fraud.

In my view, a correction suffices… in most cases where there is not fraud.

Retraction, to me, implies that there is reasonable evidence of some sort of shenanigans.

I’m certain we could all agree that shenanigans (e.g. data fraud or fabrication) should result in retraction.

I also strongly agree with DrugMonkey’s first point, that lack of reproducibility is not in and of itself a strong enough reason to retract a paper. I think anyone who’s been in the lab for even a year can understand the difficulty of repeating a protocol from someone else, step by step, in the same way and getting exactly the same answer, especially with biological systems. Sometimes there are small differences in how we do certain steps that we would never think to write down. Occasionally something as innocuous as cell passage, density, or reagent batches can affect our results. Environmental changes (e.g. building temperature or humidity) can wreak havoc on instruments or growth conditions.

However, with the current publishing structure, I think an inherent problem with the data does warrant retraction. I’m talking about things that directly influence outcomes and conclusions, such as contamination of a sample or reagent, realizing that your reagent isn’t what you thought or has an altered specificity, etc. This retraction in PNAS provides a good example.* Experiments were done, conclusions made, paper accepted. Then the authors realized they had “made an error in interpretation”. They go on to explain in detail how this occurred. The original paper is still accessible, now with a note to See Retraction in bold red letters at the top of the page and a link to the retraction in the side bar. This approach allows us to learn from others’ mistakes but makes it clear that the original results do not mean what we first thought.

One day the scientific community will trade the static print-type approach of publishing for a dynamic, adaptive model of communication. Imagine a manuscript as a living document, one perhaps where all raw data would be available, others could post their attempts to reproduce data, authors could integrate corrections or addenda. Maybe the future of scientific publishing will make retractions for technical issues obsolete.

But we’re not there yet, and there will be some time before we reach that future. In the meantime, we should be making it clear that retractions do not always mean that someone did a bad, bad thing, that sometimes it simply means someone made an honest mistake. After all, we are only human.

 

* Admittedly it’s a little odd that one of the authors did not sign, but according to the corresponding author, that’s because Xu was an undergrad student not involved in the erroneous portion of the project.

 

To prevent being investigated by the Office of Blogging Integrity, I should note that part of this post was originally written by me at DrugMonkey’s What does a retracted paper mean?, provided comment moderation did not eat it.

Posted in conduct of research, ethics in science, publishing, Uncategorized | 7 Comments

Please stand by…

We’ll return to regularly scheduled programming… sometime.

I am not going to apologize for my lack of blogging this year. I am still here. I have no intentions of giving up blogging. I miss it, not just writing but also reading and commenting on so many other blogs I love.

The intermittent posting and silence are the direct result of a crazy hectic quarter (which is why this isn’t an apology ;)). There have been grant and award applications (and I have to get to work on another soon), new collaborations, a grad school manuscript that I’m re-writing, setup for various parts of my projects, and typical getting-shiznit-in-order-in-the-lab business. There’s lots of stuff going on, but I’m having a blast.

Several things are kicking around in my head and, when I find the time, I will post them. So keep your RSS feeds tuned in. I’ll be back!

Posted in blogging | 1 Comment

Alphabet Soup: Abbreviations and acronyms in scientific communication

Acronyms and abbreviations abound in scientific communications. With the lengthy and at times complicated names scientists give to molecules and methods, it is no surprise that we need to shorten these into something that doesn’t take up half a line of a poster, slide, or manuscript. Some acronyms are so often used that they are accepted as part of our everyday scientific language, for instance DNA, RNA, and NMR. Indeed some abbreviations are adopted into everyday general language, things like radar, laser, and MRI; the explicit roots of those words are lost and forgotten by the majority, the shorter version having morphed into a word that now stands on its own.

As scientists, our use of particular terms and acronyms expedites communication in our day-to-day work. Imagine if you were a toxicologist and every time you spoke with a colleague about an experiment, you explicitly stated cytochrome P450 3A4. It is faster–and less tongue-tying–to call it CYP3A4. Likewise jotting down MAPK in your notebook instead of mitogen-activated protein kinase is going to save some hand cramping. Most chemists don’t pause to process terms like LC, MS, CID, TLC, NOESY, or Rb flask. For immunologists, it’s all about FACS, CD11, Th9, IL-4, CD68 and M2 macrophages. Those working in computational biology rattle off things like MM, MD, ODE, and MCCM without second thought. Similarly a cell biologist is in the loop when people start throwing around MAPK, JAK, ERK, PP2, IKK, PAMPs, and DAMPs.*

Acronyms have their utility and place, but sometimes we are overzealous in their use. We are simply so accustomed to using shorthand every day that, when we sit down to write about our work, the jargon just flows from our fingertips. It doesn’t occur to us that our second language may not be widely used. Sometimes these abbreviations actually interrupt the flow of a paragraph. These issues are particularly relevant when writing or speaking to an audience that may be hearing your jargon for the first time.

Scientists should reevaluate the use of abbreviations outside our microenvironments. We should carefully consider whether an acronym helps our presentation–or just turns it into alphabet soup. Here are a few guidelines I am trying to adopt:

  • Is the abbreviation unique? Some abbreviations mean very different things in different disciplines. Sometimes they mean different things in the same field. Your audience is smart enough to recognize the discrepancy between their definition and the context. Still, seeing an acronym that is familiar but seemingly out-of-context can distract their attention. My view: If reasonable, avoid its use; if not, make sure it is clearly defined with the first use.
  • Is the abbreviation widely recognized? In some cases, full-length terms are not particularly descriptive, and over the years, abbreviations have been adopted in place of names. I often see this in cell signaling with things like ERK and Jnk. Other terms are more descriptive but may be just as readily recognized by their acronym. If the short form is as recognizable as the long form, I use the conventional notation.
  • Does the abbreviation enhance the flow in subsequent usage? For long or complex terms, introducing and using abbreviations makes reading and speaking easier. I daresay most people would prefer reading MAPEG over membrane-associated proteins in eicosanoid and glutathione metabolism. On the other hand, shortening a relatively simple 12-letter word to a 3-letter abbreviation often does little to help the flow and can even hinder it with the introduction of a parenthetical.
  • How much space does the abbreviation actually save? This links in with the previous point and is particularly relevant for abstracts. You’re writing an abstract or short grant that’s going to be read by people outside your field. You include an abbreviation–because you always use that abbreviation–that appears six times. The space it saves is half a line. This is a case where I revisit the guidelines above to decide whether the abbreviation is necessary. If it’s just a space saver, then I can probably find other cuts to make while improving the overall flow of the paragraph.
  • Who is your audience? You can get away with more shorthand if you’re among peers in your field. However, you should still consider whether the jargon you’re using is widely recognized or more specific to your lab. If you’re writing or speaking to a broader audience, you should certainly define your acronyms on first use, more than once for an oral presentation. More importantly, minimize the use of acronyms where you can.

These ideas may simply be founded in personal likes and dislikes, but I think they can improve the clarity, cadence, and continuity in scientific communication. What are your views on acronym usage?

Thanks to Zoonotica, Josh Drew, Simon Lancaster, Wardatron, Steven Saltman, Geeka, and tevetorbes for discussion earlier in the week, which partially inspired this post.

*If you’re bored, see how many of these abbreviations you can decipher–without the help of Google. 🙂
Posted in communication, presentations, publishing, writing | 8 Comments