The terrible beauty of change

It just changes – that’s all

Over the weekend, I was enjoying a typical quiet Saturday morning, drinking a delectable cup of coffee and catching up on blog reading, when I came across this post by David Kroll that really struck a chord.

In the post, David writes about career goals and transitions and how sometimes, just when you get everything you’ve worked toward, you realize that it’s not what you want at that phase of your life. The title of his post says it: It just changes – that’s all. For a little more context, consider these opening lines from David’s post.

My friends: changing your career path is okay. It really is. What you wanted at 21 may no longer serve you at 41. It’s okay.

Some people always know exactly what they want. Most people don’t. It just changes – that’s all.

This piece should be required reading for every scientist. In fact, we should revisit it periodically, because the message is just that important. Go on. Go read it. I’ll be waiting when you get back…

Surprised by the positive responses to the post, David asked me why I thought the post had resonated so. I had an initial gut reaction that took time to transform to words, and of course, I can speak only for myself, from my experience. Much of what follows is from my initial response to David. Some has been said before, but the context is the only way I know to share my feelings on why David’s post is so important. Understand that this is solely my experience, and although it may not reflect that of most scientists, the feelings are probably more prevalent than any of us would care to admit.

Change is terrifying

Expectations for careers in science are changing slowly, but we haven’t reached critical mass yet. Despite claims of greater open-mindedness, there is still a pervasive sense that there is only one true shining path: the tenure track to full professor. It becomes deeply ingrained, often by the most subtle, subconscious, and unintentional means. Most scientists are… well, almost obsessive people. We become fixated on a goal, and even when we begin to have doubts, we often remain committed to that goal. We attempt to rationalize our doubts.

“I only feel this way now because experiments aren’t working/the project isn’t taking off.”

“Once things get rolling, it will be better.”

“What was the point of all this time and education if I walk away now?”

“(Adviser/parent/prof) will be so disappointed.”

“I can’t just give up. I don’t want to be a failure.”

I should know – I think I said every one of those things in the months leading up to my decision to leave my first postdoc.

Like many senior graduate students, I had a roadmap to success: a good postdoc (or two, if necessary) in a different field at a high-profile institution to create a kick-ass research niche for my tenure-track adventure. Then that vision came crashing down. I remember agonizing over the decision to tell my PhD adviser that things weren’t working out. I was lucky because he was extraordinarily sympathetic and supportive. Even so, it took a few more months for me to fully commit to deviating from the grand plan I had when finishing my PhD.

There was a particularly absurd rationalization that was my final holdout… one that I’m not quite ready to share with the whole world wide web (or the five readers here). The gist is that I transformed my own personal fears, doubts, and feelings of failure into a conviction that I would reap disapproval from someone important in my life–who wasn’t around to debunk the ridiculous notion. A few hours after reaching my breaking point and irrevocably initiating that walk away from my first postdoc, I realized that person would have really told me to stand up or move on.

In other words, I was carrying a lot of baggage, mostly of my own making and mostly born out of my deep, personal fear of failure.

Change can be liberating

That painful experience caused me to reevaluate just how badly I want a career in academia and how much I am willing to sacrifice for the “holy grail”. I’d rather not end up like Elsa in Indiana Jones and the Last Crusade, who falls into a bottomless hole in her last attempt to grab hold of the prize. I can deal with putting in hard work and long hours and not making a lot of money… but being miserable most of the time while doing it is something I can’t tolerate.

My experience opened my eyes and my mind to other possibilities. I did something that I wasn’t supposed to do – leaving a postdoc without a single publication or recommendation – and, in some ways, this makes it a bit easier to consider doing something else that, in the eyes of academia, I’m not meant to do.

I’m much happier in my new postdoc. I like the environment, the project, the approaches. In my earlier postdoc, my coping mechanisms were pretty horrendous, and the stress took its toll. Now I’m taking better care of myself, Paramed, and our relationship. I don’t feel like I need a drink most nights.

My passion for research has been reignited, and I haven’t given up on pursuing an independent academic career. I have reconsidered what that path might look like. I know that I don’t want to destroy my body or my relationships for it.

I also know that I could be happy doing other things. While I’m trying to establish a track record, connections, and mentoring relationships that could advance a research career, I’m also trying to pursue opportunities that could benefit an alternative career path.

Change is anathema – to some

The St. Kerns of the world remain. They impress upon us that those with interests outside of research are shirking social responsibility and are simply not committed enough to science, maybe even to humanity. I don’t buy it, but I think it’s an idea exploited by certain individuals to excuse pushing people beyond normal limits and to convince those people that they should take it, and take it gladly, because it will get them to the prize. It’s a stick they hit you with while dangling the carrot in front of you. Nonetheless, they possess rather strong voices – and sometimes large venues in which to voice their opinions. Thus the message and the expectation persist.

It just changes – and that’s okay

The culture of research science sometimes makes us feel like failures for pursuing, or even considering, any other path – or that it’s wrong to expect some level of personal satisfaction from our work. It may be cliché, but life is too short not to do what you love. Many people think they’ve got all the time in the world. Some of us carry reminders that there’s not nearly as much time as we thought.

Those who have attained that great expectation–or are certain that they want it–need to understand that it’s not for everyone or every stage of life, that uncertainty and alternatives are not a reflection of work ethic or commitment.

Those of us who are open to other paths and (probably more so) those who have chosen other careers know what David is talking about. That post expressed much of what we feel.

Those who are in the position of uncertainty need to hear that it’s OK to think about something else, to do something taboo.

It helps to know that we’re not crazy and we’re not alone. Thanks for your wonderfully articulated reminder, David.


Reining in the defensive line

You do great work. You have beautiful slides. You’re giving a brilliant, clear description. You exude confidence.

And then someone asks you a question. How you respond has a profound impact on how your audience will view you and your work.

The great orators respond with ease and a cool, confident manner. The inquirer is left satisfied, feeling as though s/he has raised an excellent point and contributed to the scientific discussion. These orators are like great quarterbacks, thinking on their feet, buying time when they need to, wooing the crowd with the graceful arc that lands the first down or touchdown.

Others don’t respond so well. They react as though the questioner has just threatened to kill their firstborn. They become defensive, locked in a struggle with the inquirer, losing momentum and the ability to effectively control the direction of the discourse. They are more like the blockers on the offensive line, aiming to protect their quarterback with brawn, stopping the blitz by brute force.

These are rather hyperbolic descriptions, but the two ends of the spectrum exist: confident to defensive. It may be an inaccurate generalization, but it seems that more than any other group, many young female scientists take the latter approach. It’s something that I have noticed myself do on occasion.

Responding defensively makes us look guilty, so to speak–as though we’re trying to pass off data that we know is shoddy or incomplete, or that we don’t know the topic as well as we should. Often this isn’t actually the case; we know the data, the literature, the models, but there’s something that keys us up for a fight. Maybe it’s simply nervousness, that little shot of adrenaline before we put our work out there for people to critique. Perhaps we think the data–and thus our work and its quality–should speak for itself, and in asking about the data or its interpretation, we feel that our personal scientific merit is being questioned. Or it could be a case of impostor syndrome, a sense that we really do not and never can know as much as we should.

Regardless of our reasons, we need to realize when we respond in this manner and adjust accordingly. We have to learn to take a breath, gather our thoughts, and reply with a clear, concise, and confident answer. We’ll have an easier time convincing people that we know what we’re talking about not when we beat them senseless with information, but when we sound like we believe the answer we’re giving.


On where I come from and why I (don’t) talk funny

The Blue Ridge Mountains - A beautiful place to be

I was born and raised in a southern U.S. state–not the deep South, but in the foothills of the Blue Ridge Mountains. It was, and still is, a very rural place. When people ask where I’m from, I never respond with a town or city; I respond with a region. The county where I grew up covered well over 450 square miles, but the population was well below 20,000. One high school for grades 8-12 served the entire county, and there were still only 1000 students. Drive a few hours north, and people don’t believe I’m from the same state because of my accent.

At least that’s the way it used to be.

Over the past few years, whether owing to the influence of science, academia, or living in progressively larger and less conservative cities, my accent began to neutralize. Without conscious intent, the drawl has diminished to the point that few people now recognize it’s there. On a rare occasion, someone will pick up a hint of something different, though they’re nearly as likely to ask if I spent time in England growing up as they are to guess I’m from the South.

I haven’t cast aside the accent completely. When I’m talking with family or back in the South – where three-letter words can take on four syllables – my Southern speech comes out in full force. But put me back in the lab, and the lilting drawl goes back into hiding. There are a few words that retain their unique Southern flair (such as my pronunciation of “naked”, which greatly amused a friend and former colleague). Or should you be so foolish as to incite the temper that those ginger strands hint at, that drawl is likely to find its way out.

For the most part, though, there are only small vestiges of my Southern accent in my everyday life. Before my accent became more neutral, it occasionally became a focus for attention from others–mostly good-natured teasing. But the attention made me uncomfortable, nonetheless. And sometimes it was accompanied by the implication that it was unusual for a Southern girl to become a scientist or simply to be smart, or the assumption that I was too timid or well-mannered to stand up for myself or anything at all.

In the United States at least, there is a distinct cultural stereotype associated with the South, especially the rural South. David Kroll recently caught Stephen Colbert in a slip of his Southern accent and posted this quote from a 60 Minutes interview with Colbert:

At a very young age, I decided I was not gonna have a southern accent. Because people, when I was a kid watching TV, if you wanted to use a shorthand that someone was stupid, you gave the character a southern accent.

Agent Pendergast, the protagonist of this and other novels, is one of the few smart Southern characters I've encountered in fiction. And it's OK, because he came from money.

On TV and even in many books, the portrayal of intelligence among Southerners is often reserved for the aristocracy. The rural Southerner is often painted as a bigoted village idiot with a deep Southern drawl, chewing tobacco and listening to country music with NASCAR on in the background.

The stereotype takes on even more nuances for the girl from the rural South. I use girl, because it really doesn’t change from adolescence until she becomes the family matriarch. The Southern woman is demure, waiting for her husband or father to make decisions or form opinions for her, doing what she is told. She isn’t terribly interested in education. She’s polite to the extreme, not one to disagree often and certainly not with a man. In other words, she is weak and basically empty save for the edicts of Southern hospitality and the occasional baby.

If you were to ask my father, grandfather, brother, or husband, they’d tell you a different story. Each has been the husband, father, and/or son of a fierce Southern fem. The women of my family have been (mostly) punctual and polite, good cooks, hosts, and caregivers. But should you ever mistake us for doormats because of our hospitality, you could quickly learn just what a fiery spirit we possess. They were nurses and secretaries and are some of the strongest women I’ve ever known. They taught me about resilience, doing what needs to be done, not letting others take advantage of me, going after my passion… In short, they taught me to do whatever I set my mind to. Among those ladies, there was never a question, never a doubt that I had what it would take to follow a career in science.

All our lives are governed by unwritten rules, many of which we’re not even consciously aware of. Still, it makes me a little sad to think that I drop my Southern accent when I go to work, that I basically hide a part of my heritage without even intending to. It feels like a bit of a betrayal of my family, and particularly of those women who showed me so much. It also makes me more acutely aware of the stereotypes ingrained in my mind – and pushes me to try harder to check those assumptions at the door.


My favorite things: A quick tour of There and (hopefully) back again

There and (hopefully) back again was initially an outlet, a product of the isolation I felt starting over in a new place and a new field. This blog has been around for over a year and a half now (including its start on Blogger and a short stint at LabSpaces). To be honest, when I made that first post, I don’t think I really expected it to last this long, and I certainly had no inkling of the community I would find.

This blog has developed into a place for me to hash out ideas, to learn more about issues postdocs or scientists in general face, to solicit advice, to share some of my philosophies of science, and – yes – to kvetch occasionally. Here are a few of my favorite posts that are worth revisiting or that can introduce you to my (mis)adventures in science.

The practice of research has evolved over time, and there are still questions about how science has been, is, and should be done, and about the motivations behind it. For two different perspectives, consider the humanization of biomedical research and ethics in research and innovation through the lens of Fritz Haber.

Like most in the field, I want to be recognized for my work as a scientist. Period. Bit by bit, though, I’ve become more aware of what it means to be a woman in science. Sometimes there is an undercurrent that women and minorities in science are second-rate, and the power of the unspoken can be debilitating. Being a woman in science does subconsciously affect how I am perceived–even by myself. I have become conscious of how these perceptions influence my (and other women’s) behavior, such as how we simply don’t ask.

After leaving my first postdoc, I recounted a few lessons learned from the experience on the Benchfly blog. (If you arrived here via a link from a careers feature in some science-y journal about switching postdocs, this post is the reason; it’s also the source of the quote in the opening paragraph.) The Postdoc’s Tale is a collection of links about the postdoc experience in general and from my personal point of view.

But it’s not all seriousness around here. Occasionally I have a little fun, such as my Martha Stewart-inspired list of six things to do in the lab every day and my coverage of the shock after the 2010 Nobel Prize announcements.

I hope you find something you like!


In the shadows of greatness

How do we define greatness in science? I started pondering this question after responses started coming in to Nature Chemistry‘s “unscientific & arbitrary Twitter poll“, asking “Who is the greatest chemist of all time?” The results are now posted on The Skeptical Chymist, the Nature Chemistry blog.

My opening question was sparked by a particular name on the list: Fritz Haber.

Fritz Haber synthesizes ammonia in the lab. Where's your PPE, Fritz?!? (Image from BASF)

Even if you’re not a chemist, you might have heard of Haber in a general chemistry course. A German chemist, Haber is most widely known for the reaction that bears his name, the Haber or Haber-Bosch process, the first method for synthesizing ammonia from nitrogen and hydrogen gases. It may seem trivial now, but at the time, chemists had been trying to do just that for over a century. Haber and collaborator Robert Le Rossignol found that under high pressure and high temperature, the gases would react to form ammonia. The formation of ammonia was still quite slow, but Haber and Le Rossignol subsequently discovered that the rare metal osmium accelerated the reaction.
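
As a quick aside for non-chemists, the overall reaction is the standard general-chemistry equilibrium; this is textbook background rather than anything specific to Haber’s own papers:

\[
\mathrm{N_2(g)} + 3\,\mathrm{H_2(g)} \;\rightleftharpoons\; 2\,\mathrm{NH_3(g)} \qquad \Delta H \approx -92\ \mathrm{kJ\,mol^{-1}}
\]

Because the forward reaction is exothermic and reduces the number of gas molecules, high pressure pushes the equilibrium toward ammonia while high temperature actually works against the yield; the heat and the catalyst are there to make the reaction proceed at a practical rate.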

Following a successful demonstration of the process, Carl Bosch and Alwin Mittasch, working for Germany’s largest chemical company, BASF, were tasked with industrializing the process. Being a rare metal, osmium was expensive and its supply limited. Mittasch tried 4000 catalysts and found that a mixture of iron and metal oxides, a much cheaper and more abundant alternative, could be used. The first industrial unit fired up in September 1913, generating up to 5 tons of ammonia every day.

At this scale, ammonia could be readily converted to nitrates, which were (and still are) used in fertilizers and explosives. Prior to industrial production of ammonia, Germany’s primary source of nitrates was saltpeter mines in Chile, a supply expected to be depleted within 30 years. Moreover, supply from the mines was blocked by the British Royal Navy after the start of World War I in 1914. An oft-quoted claim (for which I can find no citation) is that at the start of WWI, Germany’s stockpile of saltpeter could only have supplied munitions for just over a year, and that without the Haber process, Germany would have been forced to end the war in 1916. Jerome Alexander, a chemist of the day, even suggested that “without the Haber process it is doubtful Germany would have started the war, for which she carefully prepared.”

Haber’s impact on the war front did not stop there. Haber was a fierce patriot, devoting his time, intellect, and force to Germany’s war effort, and he was appointed to the War Department for Raw Materials as head of chemistry. By numerous accounts, he lived by his credo, “In peace for mankind, in war for the fatherland!” Realizing that tear gas would be ineffectual on the frontlines, Haber recommended using chlorine gas to flush enemy soldiers out of the trenches. He committed to turning this idea into reality, even though it was in direct contravention of the Hague Conventions. He oversaw every aspect of the effort: manufacturing, testing, and installation.

From Le Miroir, 1915 (source unknown)

The first test on the frontlines came at the Second Battle of Ypres, ushering in a new era of warfare. Over 150 tons of chlorine gas were released within 10 minutes in each of two deployments over two days. The result: 15,000 casualties with 5,000 dead, including many German soldiers, because gas masks did not reach the front prior to the attack.

Haber continued to develop chemical warfare agents and tactics. He was not dissuaded by the suicide of his wife, who considered chemical warfare barbaric (although it’s unclear how much Haber’s work contributed to her suicide). It seems Haber hoped that chemical warfare would bring a swift victory, and thereby an early end to the war. Instead, by the end of the war, both sides were using chemical weapons, contributing to more than 90,000 deaths and over 1 million casualties. Yet even after the war, Haber continued to oversee development and production of chemical agents.

Soldiers drill in their gas masks during WWI. Photographer, unknown. Image from State Library of Queensland.

Haber’s war contributions were at the center of the consternation surrounding the 1918 Nobel Prize in Chemistry (which was not announced until 1920). Here stood a man who months before had been labeled a war criminal (although the charges were dropped), now being lauded for his scientific prowess. The Prize was awarded for “the synthesis of ammonia from its elements“, and the committee highlighted the importance of this accomplishment to agriculture. There was no mention of the use of ammonia for making explosives or of Haber’s campaign for chemical warfare.

And this brings us back to the question: What makes scientists great? Do we consider primarily their greatest scientific contributions? The brilliance and creativity that brought them to the answer for a long-held question, a solution for a problem plaguing the field? Do we focus solely on how their science benefited society?

The boom of human population growth in the last half of the 20th century would have been unsustainable without the Haber process.

The staggering rise in world population over the second half of the twentieth century was supported in part by the Haber-Bosch process. By one estimate (illustrated in graph above), in 2008, almost half the world’s population was sustained by agriculture dependent on fertilizer made from Haber’s ammonia.

Man standing in field of grain, unidentified farm, Washington, ca. 1910. Photographer, Albert Henry. Part of Albert Henry Barnes Photograph Collection at Univ. of Washington Libraries

Scientific discoveries sometimes have alternative uses and unforeseen consequences. From my reading, the motivation behind Haber’s desire to make ammonia from air remains unclear. Some imply that he pursued this work with the knowledge – and maybe even the intention – that it would be used for war. Others are more generous, suggesting he did it to help mankind, and that the work was appropriated by others for more nefarious purposes. Max Perutz wrote:

By a terrible irony of fate, it was his apparently most beneficent invention, the synthesis of ammonia, which has also harmed the world immeasurably. Without it, Germany would have run out of explosives once its long-planned blitzkrieg against France failed. The war would have come to an early end and millions of young men would not have been slaughtered. In these circumstances, Lenin might never have got to Russia, Hitler might not have come to power, the Holocaust might not have happened, and European civilization from Gibraltar to the Urals might have been spared.

Some shell cases on the roadside in the front area, the contents of which have been despatched over into the German lines. Photographer, Tom Aitken. From the National Library of Scotland.

Nonetheless, we should not ignore the dark side of science and its practitioners. How should our perception of the morality of scientists’ actions affect how we laud their work or their standing among the greats of science? Setting aside Haber’s motivation for making ammonia, he had a very good idea of what he was promoting when he began developing toxic gases and delivery methods.

But what of the brilliant minds involved in the making of the atomic bomb? Do we more readily overlook the “sins” of scientists if, later in life, they expressed remorse over their contributions to death and destruction? Do we excuse culpability because we consider a cause just? Is it the level of involvement–whether a scientist was active, complicit, or simply stood by and did nothing–that affects our perspective?

Should we consider the inevitability of a discovery? Does it make a discovery less great – or the blame for its negative impact less dire – if others were nipping at the discoverer’s heels? Perutz posited:

Haber’s synthesis of ammonia for fertilizer was an extremely important discovery, but unlike relativity, it did not take a scientist of unique genius to conceive it; any number of talented chemists could, and no doubt would, have done the same work before very long.

I suspect the same would hold true for chemical warfare. If it hadn’t been Haber, it would have been someone else. And Allied forces responded with similar tactics in turn. Neither point dims Haber’s reputation as the father of chemical warfare, though.

Moving beyond the questionable and destructive work of the great scientists, what of some of the more eccentric ideas or less rigorous investigations of famous scientists? How do these alter our concept of greatness? Haber was one of many who spent years trying to extract gold from seawater.

Linus Pauling, one of the greats but not without his flaws. Photograph from the Smithsonian Institution Archives.

Or consider, as Neil Withers pointed out, Pauling’s promotion of vitamin C as a health remedy. As Perutz noted in an essay on the subject, “It seems tragic that this should have become one of Pauling’s major preoccupations for the last 25 years of his life and spoilt his great reputation as a chemist.” It is worth noting, though, that Perutz still considered Pauling the greatest chemist of the century, in spite of this failing. Others may feel that great scientists should not have such weaknesses, but I find it easier to pass over these more innocuous shortcomings. In my mind, such idiosyncrasies remind us that these legends were indeed human and, thus, flawed. They also provide cautionary tales of how even the brightest minds can fall into the trap of searching for the answer they want to find, rather than the answer the data yield.

These questions are hardly new. But as we consider who the greatest individuals in our field might be, we would do well to contemplate not only their magnificent achievements but also the things that are sometimes hidden in their long shadows.

Additional References & Further Reading

Erisman, J. W.; Sutton, M. A.; Galloway, J.; Klimont, Z.; Winiwarter, W. “How a Century of Ammonia Synthesis Changed the World”. Nat. Geosci. 2008, 1: 636-639. PDF

Friedrich, B. “Fritz Haber”. Published in part in Angewandte Chemie (International Edition) 44, 3957 (2005) and 45, 4053 (2006). PDF

Gibson, A. “Chemical Warfare as Developed During the World War and Probable Future Development” (An Address at the Annual Graduate Fortnight of The New York Academy of Medicine, October 22, 1936). Bull. N. Y. Acad. Med. 1937, 13: 397-421. PDF

Perutz, M. F. “Friend or Foe of Mankind?”, “The Battle Over Vitamin C”, and “What Holds Molecules Together?” in I Wish I’d Made You Angry Earlier: Essays on Science, Scientists, and Humanity. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press, 2003.

“The Nobel Prize in Chemistry 1918”. Nobelprize.org. 22 Jan 2011 http://nobelprize.org/nobel_prizes/chemistry/laureates/1918/

*Where possible, I have listed primary sources of images in their captions. Original digitized versions may be accessed by clicking on the desired image.
