Embracing the Molecular Jiggle

A molecule is intangible. It’s too small to see, too small to feel. Trillions could fit on the sharp end of a pin. These strange entities live in a world very different from our own, at the boundary between quantum uncertainty and statistical chaos.

Many processes in chemistry, biology, and medicine depend on our understanding of molecules in this alien world. However, it can be a challenge to accurately represent what molecules are really like. To simplify things, we often cheat and draw them as “blobology” – featureless coloured circles and squares. If we have structural data, we can do better and present them as ball-and-stick models, ribbon drawings, or molecular surfaces. While helpful, these more detailed representations are still cheating. Images of a molecular structure all share a major limitation: they’re static. They don’t move.

A molecule’s function depends not just on its structure, but on how that structure changes as it interacts with other molecules. This includes large, dramatic movements that translocate thousands of atoms, small movements of individual atoms, and everything in between. Macromolecules that carry out biological processes contain thousands to millions of atoms, each with some freedom of motion. They are intrinsically dynamic and flexible, and this motion is critical to our understanding of how they work.

I’ve mentioned before that I often think of molecules like LEGO, snapping together to build more complicated systems. But if we think about jiggly molecules, we should think less “brick” and more “jellyfish”, “slinky”, “JELL-O”, or “Flying Spaghetti Monster”. This is a case where a descriptive adjective can be really helpful, like greasy polypeptides, oily odorants, fuzzy electron density, and squishy polymers.

How can we best describe biological macromolecules? They’re jiggly.

Jiggle jiggle jiggle. T4 lysozyme, PDB ID 2LZM

Shake what mother nature gave you

A drop of water may look serene, but on the molecular scale, it is a violent mosh pit of collisions between molecules. Think soccer riot, demolition derby, or a playground full of kids on espresso. Particles move in all directions, flailing about wildly, constantly crashing into each other. Inside a biological cell, the chaos is even wilder, with thousands of different types of molecules bumping, wiggling, twisting, and squirming around. The Brownian motion of particles in this soup puts molecules in a state of constant fluctuation and vibration. They bend, twist, and bounce. They sample an almost infinite number of shapes, switching between states at breakneck speed.
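
To make this mosh pit a little more concrete, here is a minimal Python sketch (my own toy illustration, with arbitrary step sizes, not a real solvent model) of a three-dimensional random walk. The mean-squared displacement grows roughly linearly with the number of steps, which is the statistical signature of Brownian motion.

    import numpy as np

    # Toy random walk standing in for Brownian motion; steps and units are arbitrary.
    rng = np.random.default_rng(0)
    n_steps = 10_000
    kicks = rng.normal(0.0, 1.0, size=(n_steps, 3))   # random collisions in x, y, z
    path = np.cumsum(kicks, axis=0)                    # the particle's jittery trajectory

    # For diffusive motion, mean-squared displacement grows with step number,
    # so |r(t)|^2 / t should hover around 3 for unit-variance steps in 3D.
    msd_per_step = np.mean(np.sum(path**2, axis=1) / np.arange(1, n_steps + 1))
    print(f"mean-squared displacement per step: {msd_per_step:.2f} (expect ~3)")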

While molecular scientists understand the complexity of this world, we can skim over it when communicating our work. Worse, sometimes we outright forget. We talk about how “the structure” of a molecule was solved. We assume that the shape of a molecule determined from crystals represents its shape at all times. We pretend that “disordered” parts of the molecule don’t exist. In many cases, these approximations are good enough to answer the questions we want to ask. Other times, they hold us back.

We should always remember the importance of flexibility. But if we know that molecules are intrinsically flexible, why do we fall back on talking about static shapes? The technology we’ve used to study molecules and the history of the field have both played a role.

Structural biology: picking the low-hanging fruit

Structural biology offers an extremely powerful set of techniques for looking at the high-resolution structure of molecules. But limitations of these techniques have at times trapped our thinking into picturing molecules as static, blocky particles. X-ray crystallography and electron microscopy calculate an average structure, which represents a huge ensemble of possible conformations. We sometimes refer to parts of molecules we can’t resolve by these techniques as “disordered”, although what we really mean is that all of the molecules we are looking at have different shapes, and we can’t average them into a meaningful representative model. As a byproduct of the technique, we miss some of the forest for the trees. Other techniques, like nuclear magnetic resonance (NMR), more easily accommodate multiple models, but because of the precedent set by crystallography, we still frequently treat NMR structures as a single model.
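
As a toy illustration of what that averaging does, here is a short Python sketch (the coordinates are random numbers I made up, not output from any real refinement program). Averaging an ensemble of conformations produces one tidy model, while the per-atom spread – roughly the kind of motion crystallographers fold into B-factors – is what ends up being labelled “disorder”.

    import numpy as np

    # Invented ensemble: 100 copies of a 50-"atom" molecule, where the last 10 atoms
    # form a floppy tail that wobbles far more than the rest. Illustration only.
    rng = np.random.default_rng(1)
    n_models, n_atoms = 100, 50
    base = rng.uniform(0.0, 10.0, size=(n_atoms, 3))          # one reference conformation
    wobble = rng.normal(0.0, 0.5, size=(n_models, n_atoms, 3))
    wobble[:, -10:, :] *= 5.0                                  # the flexible tail
    ensemble = base + wobble                                   # 100 slightly different shapes

    average_model = ensemble.mean(axis=0)                      # the single "structure" we report
    rmsf = np.sqrt(((ensemble - average_model) ** 2).sum(axis=2).mean(axis=0))

    print("spread of well-ordered atoms:", round(rmsf[:-10].mean(), 2))
    print("spread of the floppy tail:   ", round(rmsf[-10:].mean(), 2))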

These techniques also bias us toward samples that are “well-behaved” – that is, they easily crystallize, purify, or otherwise make the life of the scientist easy. The problem here is that the molecules that purify or crystallize more easily are often those that show less flexibility. Lab lore dictates that flexible molecules cause problems in structural biology labs. As a result, scientists have picked a lot of the low-hanging fruit, leaving the most flexible (and some might argue, most interesting) molecules alone. As structural techniques mature, they are beginning to seriously tackle the idea of flexibility, but we still contend with a historical legacy of studying the easier, less flexible molecules.

Biochemistry: From floppy to blocky

The history of biochemistry has also shaped our thinking about molecular flexibility. The field’s history tracks our growing understanding of how large molecules work. With more data and more powerful techniques, we have developed increasingly nuanced ways of thinking about these complicated microscopic machines, but that history leaves a legacy.

Without knowing details of molecular structures, the first biochemists were left to assume that strings of atoms would exist as floppy, disorganized shapes in solution, waving around unpredictably. This view was upended by the father of biochemistry, Emil Fischer. In 1894 he proposed a model that changed how we viewed biological molecules. The “lock and key” model involves two molecules with rigid, complementary shapes. Features of the smaller molecule (the “key”) perfectly match features of the larger (the “lock”) so that they can specifically interact. A well-defined, rigid structure is necessary for this mechanism to work.

However, alongside Hofmeister, Fischer also determined that biological macromolecules are built as flexible chains of atoms. This raises a problem. How does a floppy, string-like molecule become a blocky shape that can form the “lock” to interact with its “key”?

This problem wasn’t conclusively resolved until 1961. Anfinsen showed that the sequence of atoms in one of these floppy chains can guide the molecule to spontaneously adopt a compact, blocky shape, by interacting with itself in reproducible ways encoded in the molecular sequence. The insight from this work came to be known as Anfinsen’s Dogma: one sequence makes one structure. This is the blocky model of macromolecules, where floppy chains of atoms fold into a reproducible, rigid, blocky shape. More than 50 years after Anfinsen, the idea persists that molecules fold upon themselves to this single, rigid state.

And yet, it moves

We know a lot more now than we did in 1961. We know that folded molecules keep some fundamental flexibility and still move and jiggle, despite their folded shape. Anfinsen’s Dogma isn’t incompatible with this understanding; it only needs one concession: folding a molecule into a three-dimensional shape restrains its flexibility, but doesn’t remove it.

Over the intervening years, more complicated models for molecular behaviour have emerged that take flexibility into account. These models can sometimes still treat flexibility as the exception rather than the rule, but they are a welcome improvement. Biochemists and biophysicists fight over the relative contributions of the competing induced-fit and conformational selection models. Despite this bickering, the two models are compatible and are starting to be reconciled in a new synthesis of molecular flexibility and action. Key to understanding this phenomenon: jiggliness. Having moved from floppy to blocky, we are now at the beginning of the jiggly-molecule paradigm.

Several grand challenges in biochemistry depend on a nuanced understanding of molecular flexibility. If we want to start to solve these problems, we need to get better at talking about jiggly molecules. We need to know not just what a molecule’s structure is, but also how that molecule moves. Some specific problems that require an understanding of flexibility include:

  • Predicting how two molecules interact. Fischer’s lock and key model is conceptually useful, but high-resolution models have shown that it is usually too simplistic. Upon interaction, molecules change shape as they come together. It’s a rubber key in a JELL-O lock. Because of this, it is still almost impossible to predict the productive interaction of two molecules without accounting for flexibility.
  • Determining the impact of amino acid changes on molecular function. Reductionism often fails when we try to pull apart the action of a single amino acid on a protein’s function. While we can make changes that disrupt interactions, prediction of changes that form new interactions requires understanding dynamic flexibility. We also know that mutations that have no effect on the protein structure can have dramatic effects on dynamics, and hence function.
  • Allosteric effects are still impossible to predict. The changes in a molecule’s properties caused by binding of a compound can almost never be explained by shape alone. Flexibility, dynamics, and interaction energies are critical to understanding how allosteric transitions take place.
  • The active state of a protein is not well populated in experiments. The state of a protein that carries out its function is almost always not the “rest state” – that is, the most stable state. We find low-energy states in crystallography and other techniques, but the poorly occupied states are frequently the most important ones. We usually have to infer the active state from the data we are able to measure. Understanding dynamics and flexibility is necessary to learn and model how molecules reach their active state.

Move past static structures – Embrace the molecular jiggle!

The paradigm of the jiggly molecule is starting to take hold. New technologies like free-electron lasers and improved cryo-electron microscopes are starting to allow us to look at single molecules. This will allow us to directly observe states of molecules and compare them. Single-molecule fluorescence and biophysical studies let us harvest data from single particles, to appreciate the subtleties of their action.

Molecular dynamics simulations get us closer to an ensemble-level understanding of molecular data, and, thanks to Moore’s law, they grow more powerful every year at modelling complicated, flexible systems of molecules. Well-designed experiments can use NMR techniques to their true potential, probing both the flexibility and the structure of biomolecules. Although still in their infancy, ensemble approaches are starting to be used in crystallography and scattering experiments. Hybrid methods go further, combining information from many sources into comprehensive, integrated models.
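
For a flavour of what a molecular dynamics engine does under the hood, here is a deliberately stripped-down Python sketch: a single coordinate tethered by a spring and kicked by thermal noise (overdamped Langevin dynamics), with all parameters invented and in reduced units. Real MD packages integrate the motions of thousands to millions of atoms with proper force fields, but the core loop – compute forces, nudge positions, add thermal noise, repeat – looks much like this.

    import math
    import random

    # One jiggling coordinate in a harmonic well, overdamped Langevin dynamics.
    # All values are arbitrary reduced units; this is not a real force field.
    k = 1.0        # spring constant pulling back toward the "folded" position
    kT = 0.5       # thermal energy
    gamma = 1.0    # friction coefficient
    dt = 0.01      # time step
    x = 0.0

    random.seed(0)
    samples = []
    for step in range(100_000):
        force = -k * x                                           # restoring force
        noise = math.sqrt(2.0 * kT * dt / gamma) * random.gauss(0.0, 1.0)
        x += force * dt / gamma + noise                          # Euler-Maruyama update
        samples.append(x)

    mean = sum(samples) / len(samples)
    variance = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(f"average position ~ {mean:.2f}, variance ~ {variance:.2f} (theory: kT/k = {kT / k:.2f})")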

The developments I’m most excited about, however, have come from outside of the scientific world. Developments in animation are bringing the molecular world to life, and animators are merging the science and art of displaying molecules. The jiggliness of molecules becomes completely clear once you observe them in video.

Watching a simulated molecule move grants an intuitive understanding of its world far better than a 1900-word blog post ever could. If a picture is worth a thousand words, an animation is worth a billion. Professional molecular animators are using experimental data to inform their illustrations of molecular behaviour. As we move from publication in printed paper journals to digital publication, these animations will play an ever-larger role in illustrating the behaviour of substances on the molecular level.

An intuitive understanding of jiggly molecules opens up a new level of problems we can approach in biochemistry. No matter what you know about molecules, appreciate the complexity these dynamic, flexible objects show. Appreciate and embrace the jiggle. If things are just right, the molecules might embrace you back.


A Tuberculosis Enzyme Decapitates Vital Energy Molecules to Kill Cells

If you can’t defeat your enemies by force, defeat them with subterfuge. Mycobacterium tuberculosis, the bacterium that causes tuberculosis, lives by this mantra. While other disease-causing bacteria mount an all-out assault on the body, the tuberculosis bacterium lies low, hides, and slowly kills us from the inside out. M. tuberculosis is a master of stealth and deception. Like the Greeks entering Troy in a wooden horse, it hides from the immune system within our own cells – often the very same cells that guard us from bacterial infections. M. tuberculosis is a treacherous enemy.

In order to infect us, many bacteria use protein toxins to kill or manipulate our cells. The cholera and diphtheria pathogens are famous for producing toxins that attack our tissues. These toxins are fired as cannon blasts that break into our cells, and chemically change our own molecules to drive a toxic effect. Until recently, we thought that M. tuberculosis didn’t have any of these toxins. We thought that it accomplished its stealthy invasion through other means. It turns out, we were wrong: M. tuberculosis has a toxin, but it’s not a cannon, it’s an assassin’s blade.

TNT, a deadly enzyme produced by M. tuberculosis

Last year, researchers at the University of Alabama at Birmingham identified a toxin from M. tuberculosis that kills immune cells. They named the enzyme tuberculosis necrotizing toxin, or TNT, because it induces necrosis, or cell death, in the target immune cells. In a recent follow-up, they have now demonstrated exactly why the toxin is so deadly.

The TNT toxin is particularly nefarious. Rather than the upfront assault of the cholera and diphtheria toxins, it kills its host cells from the inside out. TNT breaks down the cell’s stores of NAD+, or nicotinamide adenine dinucleotide. This molecule is an important energy carrier*, used by all life forms from the tiniest bacterium to the giant sequoia. Our cells use NAD+ to shuttle energy between biochemical processes. NAD+ harvests energy from the breakdown of glucose and other molecules, and passes that energy to other systems that drive the processes of life. If you remove a cell’s NAD+, the cell will die. This makes NAD+ a convenient target for M. tuberculosis. Destroy the NAD+, destroy the cell.

TNT Enzyme Decapitates NAD+. Artist’s Interpretation.

The tuberculosis bacterium uses the TNT toxin to do exactly this. It acts as the assassin’s blade, selectively destroying all of the NAD+ in the host cell. The enzyme “decapitates” NAD+ by breaking a critical bond, separating the head from the body of the molecule. Without their stores of NAD+, the immune cells that host the tuberculosis bacteria die, releasing them to spread to other cells. Triggering necrotic cell death also bypasses more orderly means of cell death that would allow the immune cell to sacrifice itself and quarantine the mycobacteria.

Stealth attack

In tuberculosis, some of the worst symptoms aren’t mediated by the bacteria themselves, but by the immune system’s inappropriate response to the bacterium. This is part of why it wasn’t always clear that M. tuberculosis would need a toxin at all. For the most part, M. tuberculosis lies low, waiting for its chance to strike. When it does strike, it appears to use TNT to do so in a selective and controlled way. Like the Greeks crossing the walls of Troy, the TNT enzyme has to be helped across a membrane into the host cell. Mycobacteria live within compartments in the cells they infect, but in order to disrupt the metabolism of those cells, the toxin needs to reach the cytoplasm. TNT doesn’t contain any machinery to get itself into the cytoplasm, so it has to be helped by an export complex called ESX-1.

This is different from the cannonball toxins of cholera and diphtheria. Those toxins have their own means of forcing their way into a target cell. TNT is a small enzyme, which means it doesn’t carry any parts to help it cross into the host cell on its own. The researchers identified that the ESX-1 system is needed for TNT to reach the cytoplasm, although a huge amount about this process remains unknown. This is a very interesting area for future study, because moving TNT into the cell probably involves an important switch in the bacterium’s strategy. M. tuberculosis switches from lying silently in wait to mounting its sneak attack under cover of darkness.

Protection from a double-edged sword

There is an interesting consideration for any bacterium that makes a toxin, especially one that targets a ubiquitous molecule like NAD+. How does M. tuberculosis avoid killing itself? The bacterium synthesizes the toxin inside its own cells, but NAD+ is important for all life, including M. tuberculosis itself. How does the bacterium keep the TNT enzyme from destroying its own NAD+? Well, if this toxin is the assassin’s sword, a second protein, IFT (immunity factor for TNT), is the scabbard.

When the TNT enzyme is made in the mycobacterium, it is secreted, but if it somehow remains in the cell, it is bound by a molecule of IFT. This IFT blocks the part of the TNT enzyme that interacts with NAD+, inhibiting the enzyme. The researchers determined the structure of TNT and IFT together, and showed convincingly how the IFT would completely block TNT activity by obstructing the interaction of TNT and NAD+.

Outside of the bacterium, TNT and IFT are separated, and the toxin is active. Inside the cell, IFT acts like the sheath that protects a swordsman from their own blade. It’s a cleverly evolved means for M. tuberculosis to protect itself from its own weapon.

Every enzyme is a unique snowflake

The TNT enzyme is a great example of a reductionism-breaker. In a lot of molecular biology, if you want to probe an enzyme’s activity, you create site-directed mutants. In these mutants, the functional amino acids that interact with the target of the enzyme are replaced with non-functional amino acids. In this way, you can test what role the individual amino acids play. Reduce the activity of an enzyme to its constituent amino acids.

If you replace a functionally critical amino acid, you expect to get a non-functional protein. In TNT, replacing the amino acid most important in related enzymes only reduced TNT’s activity by half. This isn’t much by molecular biology standards. It highlights that even though the same residue is present in this enzyme as in related toxins, TNT appears to play by different rules than the rest. This is actually a fairly common story when studying enzymes, especially ones that are involved in bacterial disease**. A lot more work is needed to map out the mechanism this enzyme uses, but this is a great reminder not to assume that similar enzymes work exactly the same.

Why do I like this paper so much?

I really, really like this paper. Although, full disclosure, I have a soft spot for bacterial toxins after doing an undergraduate placement in a toxin lab. There are a couple more points I’ll mention:

The researchers discovered the NAD+-breaking activity of the enzyme through some extremely clever detective work. They observed that when they produced the toxin in standard lab E. coli bacteria, the E. coli died. This happens occasionally, even with proteins that aren’t toxins. The next step they took was key. They sequenced the RNA of the expression bacteria, and found that the E. coli had up-regulated genes responsible for the synthesis of NAD+.

Some other bacterial toxins break down NAD+, notably another toxin produced by S. pyogenes. The researchers tested whether TNT also acted on NAD+, and found the enzyme carried out the same reaction. I think this is a great case of critically evaluating your lab materials, and of sharp thinking about the systems you work with. Here, troubleshooting a lab problem appears to have turned into a huge discovery.

This research also identifies the first known bacterial toxin from M. tuberculosis. This bacterium is notable within microbiology because it always seems to play by different rules, growing slowly and using distinct chemistry and metabolism for everything it does. It would be easy to believe it fights the immune system in different ways, as well. This is partly true, as TNT is quite different from any previously characterized toxin (it couldn’t be identified by comparison to known toxins). But it seems M. tuberculosis uses a familiar weapon, just in an unfamiliar way, fitting its stealthy mode of infection.

Lastly, TNT and IFT are an interesting case study of the identification of unknown gene functions. The M. tuberculosis genome was sequenced in 1998. The NAD+-destroying function of mycobacteria was seen in the 1960s, as was the inhibitor function of IFT. However, without the appropriate understanding, no one could connect these functions to the genes until now. While modern sequencing technologies help us compile long lists of genes, we still need smart, careful experiments like this study to work out just what these genes do. It’s a great example of careful, inquiry-driven research in the post-genomic era.

M. tuberculosis uses the TNT toxin to decapitate a molecule that our immune cells need to live. A stealth murder weapon wielded by a treacherous infiltrator. This paper is a great piece of work illustrating how a nasty pathogen manages to sneak past our immune defences and make us sick. I’m very interested to see what we learn about this system in the future. The UAB researchers should be proud.

Update 2015.09.17 11:21: Initially, I had assumed that all TNT will interact with an IFT before it is exported. I have learned this probably isn’t true and most TNT is exported without ever seeing an IFT.

Citation:

Sun, J., Siroy, A., Lokareddy, R., Speer, A., Doornbos, K., Cingolani, G., & Niederweis, M. (2015). The tuberculosis necrotizing toxin kills macrophages by hydrolyzing NAD. Nature Structural & Molecular Biology, 22(9), 672–678. DOI: 10.1038/nsmb.3064

* Technically, NAD+ is a redox shuttle, but that’s a discussion for another time.
** The virulence proteins of pathogens tend to change more rapidly due to the ongoing evolutionary arms race between the pathogen and the host.

A Blog About Biochemistry

Hi! Welcome!

If you’re reading this, you’ve found your way to my tiny corner of the web, where I will be writing on a regular basis about the things that excite and challenge me in science (with regular digressions, as mood strikes). I hope you might join me for the ride. You can also find me elsewhere @superhelical.

There’s one important thing to know about me at the start: I’m a huge nerd about molecules. While many students break into a cold sweat at the mention of the term “SN2 reaction”, I’ve always enjoyed organic chemistry. It can be like playing with LEGO. Except the LEGO is microscopic, you assemble the LEGO by shaking it vigorously in a flask, and some types of LEGO can kill you.

Making a LEGO benzene that is scientifically consistent with good design is surprisingly challenging

It is fascinating to see the world through a lens of molecules and their interactions. I love thinking about the molecules that drive our lives and the environment around us. Some proteins link together when you bake a loaf of bread. Corals deposit minerals to build their skeletons. Dye molecules are chemically linked to fabric to colour a cotton t-shirt. Even mundane things like the smell of the air after it rains or the way a hot iron straightens hair have molecular explanations. There’s a hidden richness of molecular phenomena around us. Molecules shape the world.

You can expect a heavy emphasis on protein biochemistry in this blog. The topics I find most interesting often involve biological molecules – in particular, the structure and dynamics of molecules like proteins and nucleic acids. It’s a shame that there’s a huge amount of published work on these biomolecules that doesn’t receive much attention. I’m going to dive in, highlight some of this work, and explain why I find it exciting. Hopefully I can help you to enjoy thinking about biochemistry, too.

What Exactly is Biochemistry?

So, to kick off, I’d like to start with a surprisingly difficult thing to do: define what “biochemistry” actually is.

This is a specific case of a more general question: What is any scientific discipline? Fields are labels that divide researchers and techniques into categories, even though the borders of these fields always remain fuzzy. More than techniques and objects of study, the defining features of a field are usually cultural or philosophical. I’ve learned that when speaking to physicists, one of the first questions to ask is “experimental or theoretical?”, because the most fundamental divide in physics lies between theory and practice. Similarly, field biologists are a completely different breed from those who do lab work. As you get more specific, the distinctions get smaller, but you’d be very surprised how differently an immunologist and a microbiologist view the world.

When we look at the fields that study molecules of life, it doesn’t help that the names of many related disciplines are so damn confusing. Chemical Biology, Biochemistry, Molecular Biology, and Biological Chemistry all mean roughly the same thing in the dictionary. However, those of us working in these fields know that they refer to different communities that have distinct organizational cultures, and that target different types of problems using different methods and technologies.

I’ve cast around a little to try to get an idea of how some working scientists define “biochemistry” specifically, and received some interesting opinions from people in various fields:

  • It’s the necessary wet-lab work required to validate a computational model (Bioinformaticians)
  • It’s ELISAs and Western Blots (Immunologists)
  • It’s the damn prerequisite that makes you memorize amino acids (Pre-Meds)
  • It’s chemistry without enough flammable solvents (Synthetic Chemists)
  • It stinks (Physicists)

Even people who work in closely aligned fields can’t agree on what biochemistry is:

  • It’s a broad umbrella, including sub-fields like chemical biology, structural biology, cell biology and systems biology
  • It’s an anachronism, and has been absorbed into “modern” fields like chemical biology, structural biology, cell biology and systems biology
  • It’s a defined set of techniques, used in fields like chemical biology, structural biology, cell biology and systems biology
  • It’s a historical administrative handle used to group chemical biologists, structural biologists, cell biologists and systems biologists into a single department

The definition I sympathised with the most is a little vague, but speaks to a simple truth:

  • It’s whatever you get when you look at biology with a chemical understanding

Biochemistry seeks to explain biology at the chemical scale. Start with simple building blocks like carbon, oxygen, hydrogen and nitrogen. Sprinkle in the occasional phosphorus or sulfur. How can something as complex as a tree, cat, or human emerge from these materials? Or even a single, replicating bacterial cell? There are biochemical questions to answer at every level of complexity, from shapes of sugars and amino acids, all the way to the muscle tissues of a sperm whale.

Biochemistry seeks to explain how the complexity of life can emerge from simple atoms. To do this, it takes chemical principles like thermodynamics, biological principles like natural selection, and some principles developed from scratch, like the models that explain the behaviour of enzymes. Biochemistry is an amalgam of principles from various fields, pulled together and applied toward the molecules of life.

Is the Field of Biochemistry Going Extinct?

Some might say biochemistry stopped being innovative decades ago and other fields have moved on and left the biochemists behind. That all the hard problems in biochemistry have been solved, that there’s nothing new to find.

I disagree, but it is time for a pivot.

There is one thing I might complain about in the state of biochemistry in 2015. Work in the field is often descriptive, reporting on the world but not gaining a lot of insight from it. This is a real shame, because a vast range of phenomena still remains for biochemists to look at, study, and clarify.

A descriptive focus means that while biochemistry has generated some extremely useful tools, it is not as active a field of research as it could be. I think this is a big part of why some scientists seem to view biochemistry as those tools, rather than as an independent area of study. To stay relevant going forward, biochemists need to keep in mind that there are still Grand Challenges in biochemistry that we have to continue to work on. These questions will drive the frontier of biochemical research. I’ll describe some of the most compelling challenges.

The Grand Challenges of Biochemistry

  1. The emergence of functional macromolecules
    Starting from simple atoms, how did the first functional molecules emerge? How have these functions changed over time? Can new functions emerge?
  2. The folding problem
    A long chain molecule needs to fold back upon itself to have a function. How a molecule does this efficiently and with high fidelity is still a mystery in many cases. How does the information in a molecule convert from chemical sequence to a three-dimensional structure?
  3. Modelling dynamics
    Many important functions of molecules depend on the fact that they are flexible machines that can bend into many different shapes. We currently do a very poor job of modelling, understanding, and predicting this flexibility. How do we understand it and predict where it has important functional roles?
  4. Exotic chemistries in water
    Living cells don’t have access to an organic chemistry lab to create complicated molecules of life. How are exotic chemical reactions carried out, all in a water-based environment?
  5. Selectivity versus sensitivity
    Some molecules need to interact with a broad range of other molecules, others a very narrow set. How is this accomplished? What features govern this switch between sensitive, broad interactions, and highly specific, tight interactions?
  6. Non-ideal activity in real environments
    The history of biochemistry has involved purifying and isolating proteins from the complex mix of things in a cell, in order to study that protein in a test tube. But their natural environment is much more complex, full of thousands of other molecules. These other molecules can greatly alter function. How can we understand the natural environment of a molecule?
  7. Ultrastructural chemistry
    Many of the most interesting molecules to study in living cells are absolutely gigantic. Normal rules of molecular behaviour break down when we get to those sizes. Modelling chemical properties at large scales is an ever-pressing challenge of biochemistry. What emergent properties occur in large molecular complexes?

What is the Future of Biochemistry?

So, with these Grand Challenges laid out, where does the future of biochemistry lie? Every new technology helps us tackle these problems in even greater detail. I’m optimistic that there are questions we can now start to ask and answer that we couldn’t even 10 years ago.

Technologies that get me excited include the high-resolution structural work now accessible at free-electron laser facilities, ever stronger supercomputers to calculate the dynamic nature of molecules, ingenious in-lab evolution experiments that probe the change of molecules over generations, single-molecule experiments that track the behaviour of individual molecules, and many others. The future of biochemistry isn’t extinction, it’s evolution. We can now probe deeper than ever before into the inner workings of our molecular selves. The future is bright indeed.

Some may tell you biochemistry isn’t worth your time. I’m going to fight to show otherwise.

I hope you’ll come along with me.

Letter to the McGill Daily – Neo-colonial medicine?

I haven’t written for a while, but a recent article in a McGill student paper got me motivated to write to the editors. The piece I’m responding to is an opinion piece claiming that scientific medicine is a paternalistic system that displaces traditional practices around the world. Most frustratingly, the difference between scientific and nonscientific medicine is framed as Western versus non-Western, and the author labels the scientific establishment “racist, ethnocentric, and neo-colonial”.

The article can be viewed here.

There are just too many fallacies to properly address in 300 words, but I did my best (view online here):

We don’t talk about how Middle-Eastern Mathematics, with its cold and sterile zeroes, spread throughout the world because “scientists” said it was more effective, tragically displacing the Traditional Mathematics of the rest of the world. Yet somehow the same argument gets made when we talk about modern medicine.

In Decolonizing Healthcare (104:3) the author makes a distinction between Western and Nonwestern medicine and criticizes the Western approach for ignoring traditional methods. The thing is, the Western-Nonwestern divide doesn’t exist. There is a division in medicine, but it’s not about geography. It’s between the medicine we know works, and the medicine we’re not so sure about.

Modern medicine is not hostile to traditional methods, it just needs to know that they actually make people better. Unfortunately, many traditional methods haven’t been shown to do so. Modern medicine has not avoided these practices because of lack of understanding, as the author erroneously claims, but because there is no evidence that they actually treat disease. Gambling on such unproven methods wastes limited resources and diverts patients from treatments that have a much greater chance of success.

In many cases where traditional methods have been found to be effective, they are carefully tested and eventually become standard in modern medicine. A favorite example is the antimalarial artemisinin, originally identified from a traditional Chinese herb. This life-saving compound is now produced and used more safely, effectively, and at greatly reduced cost, thanks to modern science and technology.

Modern medicine certainly has many problems. Economic factors have a corrupting influence on patient care. Paternalistic practices can prevent patients from receiving the best possible treatment. Medicine is largely reactive, treating illness rather than proactively promoting health. However, introducing a false concept of “Western medicine” and dismissing all proven medical practice as neocolonial does nothing constructive to address these issues.

Shane Caldwell

PhD Biochemistry

Antibiotic usage regulations – too little, too late?

This post is cross-posted from a blog post I wrote for the Science and Policy Exchange. For more insightful writing on science and how it relates to government, the media, and society at large, visit their site http://www.sp-exchange.ca

In Dr. Seuss’ The Lorax, the narrator reflects sadly upon his past. He recounts arriving at a pristine forest of Truffula trees, which he cuts down to make thneeds, a product that “everybody needs”. Ignoring warnings and later pleadings from the titular Lorax, he cuts down trees one by one, later four by four, growing his business enormously in the process. He continues to cut down trees until, all of a sudden, the last one is gone. Without any more Truffula trees, his business collapses, he falls into poverty, and he tells his story so future generations won’t make the same mistake. He realizes how he was blinded by the promise that his business would only grow bigger, and overlooked the fact that he was rapidly removing the key resource he and others depended upon. By the time he realized his wrongs, it was already too late – the forest was gone, along with its animals – the Brown Bar-ba-loots, Swomee-Swans, Humming-Fish, and the Lorax himself.


Sure, The Lorax is a children’s story, but it illustrates some profoundly adult concerns. Take too much from a public good, and it will collapse, leaving everyone worse off in the long term. This story obviously parallels real-world issues like deforestation, pollution, and overfishing, but also applies to abstract public resources like transportation networks, taxation, and stability of financial markets. To protect common resources for the good of the public as a whole, regulations are needed to prevent individuals from exploiting these resources for short-term personal gain.

A somewhat obscure public good, but one that medicine depends upon, is the efficacy of antibiotics. Antibiotics are only effective when the microbes they are used to treat are susceptible to the drug. If resistance to antibiotics spreads, antibiotics lose their utility, and this public resource is lost. We need these drugs not just to treat infections, but also to facilitate surgery and cancer therapy and to protect immunocompromised patients, so losing our effective antimicrobials would be an unmitigated disaster.

Unfortunately, to retain antimicrobial effectiveness, we fight against formidable evolutionary forces. Add a strong selective pressure (antibiotic), and evolution drives the selection of the most fit (resistant) microbes. These resistant microbes will thrive where their non-resistant counterparts die, and take over the competition-free environment left behind. For this reason, we instruct patients to take the full course of antibiotics to completely eradicate an infection. Otherwise it could come back, stronger and tougher, and spread to other patients as well.
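
As a back-of-the-envelope illustration of how fast that selection plays out, here is a toy Python model (two competing strains sharing one niche; all numbers are invented and there is no real pharmacokinetics). Under constant antibiotic pressure the susceptible strain collapses, and an initially vanishingly rare resistant strain inherits the competition-free environment.

    # Toy selection model, illustration only: invented growth and kill rates.
    susceptible, resistant = 1_000_000.0, 10.0    # resistant mutants start out rare
    carrying_capacity = 2_000_000.0
    growth = 0.5                                  # per-generation growth rate, both strains
    kill = 0.8                                    # extra per-generation death of susceptibles under the drug

    for generation in range(40):
        crowding = 1.0 - (susceptible + resistant) / carrying_capacity
        susceptible += susceptible * (growth * crowding - kill)   # drug present every generation
        resistant += resistant * (growth * crowding)              # untouched by the drug
        susceptible = max(susceptible, 0.0)

    print(f"after 40 generations: susceptible ~ {susceptible:,.0f}, resistant ~ {resistant:,.0f}")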

This example is usually framed around medical patients, but when we look into the emergence of antibiotic resistance, the more important subjects of antimicrobial use aren’t humans, but chickens, pigs, and cattle. Almost 80% of antibiotics are used in agriculture. These antibiotics are used in three ways: to treat acute infections, as a feed additive for “prevention and control”, and lastly as growth promoters. The last usage doesn’t make much sense – why would chemicals that kill microbes alter the growth of livestock? The short answer is that we don’t really know; this question is at the forefront of the burgeoning field of microbiomics. What matters for farmers and feedlot managers, however, is that it works, and their animals grow faster, bigger, or both, which means better profitability. It makes perfect economic sense to routinely add antibiotics to feed as growth promoters.

The problem is that this constant background of growth-promoting antibiotics in agriculture generates a selective pressure that drives the emergence and spread of antibiotic resistance. This was long suspected based upon our understanding of natural selection. Careful research has now shown that indeed, these growth-promoting antibiotics lead to increased emergence and spread of antimicrobial-resistant bacteria, and further, that these resistant bacteria cross over to human patients. Knowing this to be true, the widespread use of antimicrobials for growth promotion in agriculture is now viewed as a considerable public health hazard.

Recognizing this hazard, in 1977 the US Food and Drug Administration (FDA) committed to putting restrictions on antibiotic use in agriculture, a crucial step to limit the spread of resistance. The agency took the first step last December, 36 years later. Their Final Guidance for Industry 213 recommends that antimicrobial manufacturers discontinue indication of antibiotics as growth promoting agents. Some are celebrating this as an important first step. Many are rolling their eyes at what appears like a mostly empty gesture.

The FDA recommendation took far too long to happen. It also lacks teeth. First, it is voluntary, and it doesn’t seem likely that many manufacturers will embrace the opportunity to cut their revenues in half. Farmers operate on such narrow margins that discontinuing antibiotic use could force them out of business, and so change won’t happen there either. The recommendation also has a gigantic loophole: by simply rebranding antibiotic use from “growth promotion” to “prevention and control”, farmers and feedlot managers can continue business, fully compliant with the recommendation, without changing their de facto usage at all.

The FDA recommendation is a first step, but it’s a feeble one, and one that took far too long to happen. A knot of competing interests seems to be preventing much from happening. The medical community and food purity movements want antibiotics removed from agriculture. While the FDA seems to concur, its hands are often tied – the agency is caught between protecting public health and strong economic incentives to keep the status quo in agriculture. The US congress has meddled considerably in the matter, and no doubt lobbyists are pulling strings along the way. Comparatively low food costs have also contributed to the problem, as they push producers to squeeze every drop of efficiency they can from the system, growth promoters forming an important part of their plan. Given these opposing forces, it becomes less surprising that it took the FDA 36 years to publish their recommendation, but at this pace, by the time any substantial change happens, it may be far too late.

The Lorax in this story is the scientific and medical community, who have been pleading with industry for decades to adopt more responsible agricultural antibiotic practices. Individuals within agriculture, like Dr. Seuss’s tragic narrator, are mostly concerned with day-to-day operation of their business, and not esoteric concerns about the future of medicine. But as a collective, the industry needs to realize the loss of antibiotics will have long-term consequences for their business as well and take steps towards more sustainable use. In the absence of this kind of industry action, firm regulations on responsible antibiotic use are essential to protect antibiotics as a public resource. Otherwise, our antibiotics will drop one-by-one from pharmacists’ shelves like Truffula trees, until none remain, and like the animals in the Lorax’s forest, other life-saving innovations of medicine could be lost as well.

Full disclosure: I study antimicrobial resistance in a biomedical research lab, so I do have a stake in this matter. I’m also sympathetic towards farmers – I grew up on a beef farm, where my family still raises Hereford cattle.

Edit 2015.09.02: I’ve updated my account since I first wrote this post. If you’d like to follow me online, find me @superhelical

Who Watches the Watchmen? Blind Trust Isn’t Enough in Today’s Research Environment

This post is cross-posted from a blog post I wrote for the Science and Policy Exchange. For more insightful writing on science and how it relates to government, the media, and society at large, visit their site http://www.sp-exchange.ca

In the spring of 2012, an article appeared in the niche crystallography journal Acta Crystallographica, Section F (“Acta F” to those in the know). This article, “Detection and analysis of unusual features in the structural model and structure-factor data of a birch pollen allergen”, describes a protein structure published in a 2010 Journal of Immunology paper. Following a thorough analysis, author Bernhard Rupp (of textbook fame) concludes there is:

… no doubt that model and data of [structure] are incompatible and that the deposited [data] are not based on actual experiments, and their standard uncertainties are not based on experimental errors.

 Translated to everyday language, this reads

The data isn’t real.

 He is accusing the authors of data fabrication.


 Technical language and a deferential tone mask the severity of this accusation. The author response, published in the same issue, cuts to the chase.

The University of Salzburg immediately informed and commissioned the Austrian Agency for Research Integrity (OeAWI) to carry out an investigation into possible data fraud on the part of author Robert Schwarzenbacher, the co-author solely responsible for the Bet v 1d structure and the crystallographic section of the J. Immunol. paper. The OeAWI is presently preparing a report of this investigation.

This is a great example of researchers and institutions taking the appropriate steps to address concerns of research misconduct. Then:

Author Schwarzenbacher admits to the allegations of data fabrication and deeply apologizes to the co-authors and the scientific community for all the problems this has caused.

 Oh. Well then. No need for an investigation after all, right? Well….

Note added in proof: subsequent to the acceptance of this article for publication, author Schwarzenbacher withdrew his admission of the allegations.

 Presumably he talked to a lawyer.

 So, what went wrong? We can infer that researchers working in one field (immunology) brought in a collaborator from an outside field (structural biology) to add complementary experiments and drum up the impact of the project. The steep learning curve and specialized techniques of structural biology meant that the other authors had to trust Schwarzenbacher’s work was conducted rigorously and honestly. Obviously, this trust was misplaced.

 Almost two years after the Acta F paper was published, Schwarzenbacher has lost his job, but has sued for wrongful termination. The structure in question has been mothballed in the Protein Data Bank, and his contribution to the Journal of Immunology paper has been removed. The paper still stands on its other experiments, as the authors argue that the paper’s conclusions did not depend on the fraudulent structure work. Regardless, it’s a black mark on the record of the coauthors, the journal, funding agencies, the university, and the Austrian structural biology community. The reputations of many parties have suffered from the actions of one misguided researcher.

Most have been happy to hang the blame on Schwarzenbacher’s shoulders, and rightly so. But does he carry all of the responsibility? I’ll argue he doesn’t. Seven co-authors approved the work for publication. The Journal of Immunology, its reviewers, and its editor all gave it the stamp of approval. The department and university provided the environment where this misconduct could go unnoticed. The allergy and immunology research community also missed the fraud. Everyone appears to have been content to accept the credit and conclusions from presumably legitimate work, but once the fraud was brought to light, they immediately distanced themselves from the situation. This is not a sustainable practice.

 This isolated case could be one of many to come. We need not just consider fraud, either. Even the most scrupulous researchers are subject to the insidious influence of wishful thinking. Critical mistakes can slip through when a “ringer expert” operates on their own, without any scrutiny. As technology drives ever more complicated experiments, and granting agencies continue to reward multidisciplinary work, lone specialists will increasingly be required in collaborations, and the chances of fraud or major errors slipping through will increase.

Schwarzenbacher’s is a particularly good example of this problem because there is no ambiguity about his fraud – even a novice crystallographer would find obvious problems with the data. Passing the data by a single critical eye could have caught the fabrication before publication. This drives home how the current environment can allow ethical or methodological problems to slip through, leading to flawed or fraudulent conclusions. A mechanism is needed to improve oversight – the assumption of good faith is not sufficient. Peer review is supposed to provide this function, but here, as elsewhere, it failed.

Besides fixing peer review, how can future incidents like this be prevented? Movements like open data will play a role. Deposition of data is already a condition for publication of structural work, and Schwarzenbacher’s fraud was discovered through curation of the database. Post-publication peer review can also help identify problematic data and correct the scientific record, but neither of these mechanisms can prevent the initial publication of bad or fraudulent data.

 What is really needed is a culture change. The institutions, journals, and researchers involved in multidisciplinary collaborations can’t escape responsibility for scientific oversight. This could mean some sort of institutional review, or a requirement to pass the data by a friendly but impartial third party. Most importantly, the community needs to accept that scientific integrity is the responsibility of everyone involved, not just the person who processed the data. Researchers need to take steps to ensure that they can stand by the integrity of all of the work to which they attach their names. Ignorance is a poor excuse.

 It’s worth considering what drove Schwarzenbacher to cross the line. Rather than some master manipulation to trick his collaborators and journal reviewers, his actions appear more like simple indifference and laziness. As highlighted by the partial (not full) retraction, the structural work wasn’t central to the study, and the faked data is obvious enough that he clearly didn’t expend much effort to cover his tracks. Perhaps he didn’t think anyone would notice or care, and if so, he was somewhat correct. His collaborators and research environment allowed him to cut a corner that he shouldn’t have cut. He chose expedience over honesty.

 The editors of Acta F make the observation that

It seems clear that the pressures on scientists early in their careers are so severe that a few are compelled to risk their careers in order to further them.

 I think it’s time everyone else assumes a little responsibility for letting it happen.

Find out more about the Schwarzenbacher debacle on RetractionWatch:
http://retractionwatch.com/2012/04/02/protein-structure-retracted-after-investigation-into-highly-improbable-features-journal-calls-it-fraud/
http://retractionwatch.com/2012/04/12/salzburg-university-fires-crystallographer-robert-schwarzenbacher-for-faking-data-in-journal-of-immunology-paper/
http://retractionwatch.com/2013/05/27/a-partial-retraction-appears-for-former-salzburg-crystallographer-who-admitted-misconduct/
And in Nature:
http://www.nature.com/news/trial-tests-austrian-integrity-body-1.10564
As well as a relevant Nature editorial from around the same time:
http://www.nature.com/nature/journal/v483/n7391/full/483509a.html

Edit 2015.09.02: I’ve updated my account since I first wrote this post. If you’d like to follow me online, find me @superhelical

 

Poor Public Understanding is Killing Basic Research in Canada

This post is cross-posted from a blog post I wrote for the Science and Policy Exchange on December 1. For more insightful writing on science and how it relates to government, the media, and society at large, visit their site http://www.sp-exchange.ca

Years ago, I worked for the summer on my neighbours’ cash crop farm. We mostly worked with cabbages. It smelled terrible. My coworkers, tough guys in their forties, learned I was studying in the sciences, and had no end of recommendations of useful things I could do with my degree.

“Hey science-boy! You should make some cabbages that bugs won’t eat!”
“Hey science-boy! You should make cabbages that keep better in storage!”
“Hey science-boy! You should make stackable square cabbages!”

Of course these comments weren’t serious, and came from small everyday frustrations on the job, but the assumptions behind them are interesting. These men viewed science solely as a tool to produce innovations, and by extension, to make their lives easier. I don’t think their attitude is unique. For the most part, today’s lay public views science as something that makes new smartphones and performance fabrics, not much more.

This is a problem. Science doesn’t work well when it’s focused on specific applications. When you work in the realm of uncertainty, one of the only things you can have confidence in is a high rate of failure. Spreading initiatives broadly and pursuing the most interesting questions as they arise is a more efficient use of resources, and improves the chance of success. This is especially important when we remember that many (most?) great innovations come from serendipity, not planned investigation, so the wider we spread our inquiry, the more likely we are to hit upon something that will drive technology forward.

While fundamental science is poor at developing specific solutions, it excels in discovering new principles. Sometimes these principles can translate directly into commercial products, but more often they form part of an accumulated wisdom that moves our understanding incrementally forward. The useful applications emerge naturally, albeit slowly, from that understanding. But that’s boring. And certainly isn’t a compelling story.

What is a compelling story?

Innovative new technology will convert industrial greenhouse gas emissions into commercial products

Canadian companies developing natural health products will be able to get science-proven, competitive new products on shelves faster

“Canadian firms will now be able to transform agricultural and forestry by-products to create new materials and reduce the use of petroleum-based polymers (plastics).”

The amount of certainty in those statements should make any scientist bristle. It betrays a misunderstanding of the scientific process and a fundamental arrogance that the result is predetermined, not subject to any uncertainty. These projects are part of the National Research Council’s new mission to shift “the primary focus of … work at NRC from the traditional emphasis of basic research and discovery science in favour of a more targeted approach to research and development”.

At the NRC and other government research institutes across Canada, the research climate is moving towards projects with the potential to produce marketable products, away from pursuit of understanding of the world around us. Government-funded research agencies across the country are being retooled to stop looking outward and pursuing novel ideas, instead turning inward, transforming into glorified factories. This is an extremely short-sighted strategy, crippling our capacity for innovation.

If I’m generous, the politicians enacting these changes are influential laypeople, at least as far as science is concerned. They don’t recognize that the long-term benefits of fundamental research are being lost as they focus myopically on risky but sexy-sounding megaprojects. If I’m a little more cynical, they realize that the public thinks science exists solely to produce new products. They exploit this by claiming they’re working to improve the lives of the average Canadian, and will include that in their re-election platform. The number of times the term “strengthening our economy” gets dropped into government research press releases suggests the more cynical interpretation.

So what needs to change? To start, those in power need to recognize the importance of fundamental research to the long-term health of a society. We’re on a treadmill moving forward, and if we don’t keep pursuing challenging questions, we fall behind. More importantly, the public perception of science has to change. As long as Mike from Canmore is happy that his tax dollars are propping up companies instead of pursuing important fundamental discoveries, nothing will change. Scientists need to hold politicians accountable, but more importantly, they need to educate the public on the importance of pure research to long-term innovation. The layperson needs to know that pure research is a long-term investment that pays economic dividends many election cycles into the future.

The current direction of government-funded research in this country is troubling. Without public understanding of the importance of fundamental research, the trend will continue, and the foundations of our research apparatus will continue to erode. However, there may be one positive effect – my coworkers might finally get those square cabbages.

Edit 2015.09.02: I’ve updated my account since I first wrote this post. If you’d like to follow me online, find me @superhelical