A Motion-Sensor Switch for Antibiotic Resistance: My New Paper in the Journal Structure


I’ve been working on my thesis for the last few months, squirreled away in libraries and coffee shops, but now that I’ve submitted and am waiting to defend, I’m happy to share what’s happened in the meantime! A research paper I’ve been working on for a long time has been accepted and published in the journal Structure. You can find it online, here. This paper makes up the bulk of the work in my PhD thesis, containing 10 protein crystal structures, and I’m glad to have it finally available to the world!

In the paper I describe the structure of a protein that causes antibiotic resistance. This protein makes a bacterium resistant to a class of antibiotics called aminoglycosides. They are one of the oldest classes of antibiotics and include some well-known compounds such as streptomycin, kanamycin, tobramycin, and gentamicin. They are effective antibiotics used against a broad variety of bacteria, and resistance factors that make them ineffective are a serious problem in the treatment of infections.

The protein that I work with generates aminoglycoside resistance. It chemically alters the antibiotics, turning them into inactive byproducts, making any bacterium with the protein resistant to aminoglycoside antibiotics. The protein acts as a “resistance factor” – bacteria that carry the gene for this protein use the protein to deactivate the antibiotic. They can easily inactivate it and go about their bacterial business instead of being killed.

This protein is called APH(2”)-Ia (more on that name later). It inactivates several different aminoglycosides. To learn how it carries out this transformation, I looked at the structure of the molecule and how it changes when it interacts with the antibiotic. To understand why the structure is important, let’s talk about what this type of molecule really is.

Enzymes: biological molecules that make chemistry happen

Proteins that carry out chemical reactions are also called enzymes. They allow chemical changes to happen that would otherwise occur at extremely low or nonexistent rates. The enzymes that act on aminoglycoside antibiotics are collectively referred to as aminoglycoside-modifying enzymes. They deactivate antibiotics by transferring part of a common metabolic molecule to the antibiotic. This makes the antibiotic worthless, and lets the bacterium survive the toxic effects of the compound.

An enzyme drives a reaction by forcing these chemicals together. They do this by specifically binding to the molecules, in a mechanism sometimes referred to as a lock-and-key interaction. An enzyme is specific for the molecules that it acts on, like a lock only opens for a specific key. The keys of an aminoglycoside-modifying enzyme are the antibiotic and a cellular molecule that donates a chemical group to the antibiotic. In APH(2”)-Ia, that cellular molecule is guanosine triphosphate, or GTP. The enzyme binds the antibiotic and GTP tightly, and undergoes changes in structure to drive a chemical change between these molecules. This all happens in the most important part of the protein, the active site.

The active site is the most important part of an enzyme. This part of the protein is typically a deep pocket that the rest of the molecule wraps around, where the enzyme holds and manipulates the molecules, and where chemical bonds are broken and formed. The enzyme separates these molecules from water and other compounds, and in this isolated state, the enzyme drives the chemical change to occur.

The active site of any enzyme is typically extremely sensitive to the shape and properties of the molecules it acts upon. Evolution has driven enzymes to develop a high degree of specificity for these molecules, known as substrates. An enzyme typically has only a few different substrates that it acts upon. The combination of the specificity of chemicals that an enzyme acts on and the reaction it carries out gives it its name, in this case aminoglycoside phosphotransferase.

Aminoglycoside phosphotransferase

I try to avoid saying the name of the protein I work on unless I want to sound important.

APH(2”)-Ia stands for aminoglycoside (2”)-O-phosphotransferase type Ia. It’s usually preferable to just say “the enzyme”, and that’s what I’ll mostly do here. As the “type Ia” might suggest, the enzyme is part of a larger group of enzymes that all carry out a similar reaction. They use magnesium atoms, held tightly in the active site, to move a phosphate group, PO₄³⁻, from one molecule to another. This enzyme moves a phosphate from GTP to the aminoglycoside antibiotic. GTP is a nucleoside triphosphate molecule, the cellular phosphate source in this reaction. You might be familiar with a similar molecule, ATP, the “energy currency of the cell”.

APH(2”)-Ia is somewhat unusual because most similar enzymes use ATP, but this enzyme uses GTP. Researchers in my lab and in other groups have studied the relationship between these proteins and GTP, and there are still some interesting unsolved mysteries about the use of GTP in these proteins. However, for this work, I focused on the part of the molecule that is the same between ATP and GTP, the triphosphate.

Enzymes that use nucleoside triphosphates as phosphate donors have a special name: kinases. We know quite a lot about kinases. Their importance for cell biology was discovered from the 1970s through the 1990s, as researchers learned that they are critically important regulators of cellular metabolism, cell division, and many other processes. They add phosphate groups to proteins, generating molecular communication networks in the cell. In mammalian cells, kinases are typically involved in important cellular decisions, and because many of these decisions impact how a cell grows and divides, many of these kinases are involved in cancer. When it was found that aminoglycoside phosphotransferases were kinase enzymes, there was already a large amount of research on similar enzymes to compare to. However, comparisons to other enzymes only get you part of the way. To really learn how any molecule works, you have to look at it directly.

How do you look at something a few nanometers in size?

A mantra in the molecular sciences is that structure dictates function. I argue it needs a little updating, that structure and dynamics dictate function (more here and here), but you need to have a structure before you can study how it moves. Determining the structure is where we start.

Using the techniques of structural biology, we can get a direct look at the molecules that carry out biological functions. Techniques like nuclear magnetic resonance and electron microscopy can provide excellent structural information about biological macromolecules, but the technique I used for these experiments was X-ray crystallography. Matt Kimber, the professor who taught my undergraduate structural biology course, referred to crystallography as a “one-trick pony, but it’s a damn good trick”. Well, I’ve ridden that pony right to the end of my degree.

To determine a structure by crystallography, you purify the protein that you’re interested in, and run an array of experiments in parallel to try to find conditions under which it will form crystals. Not every protein can crystallize, and not every crystal gives you good data, which makes protein crystallography an intimidating technique. The perceived risk of protein crystallography experiments drives many in the molecular sciences to treat protein crystallographers with a sort of reverence. I don’t know that that reverence is particularly well-placed, but I’ll take it all the same.

Crystals of APH(2”)-Ia + GMPPNP. This is a ~3μL drop with protein and various chemicals, suspended upside-down on a glass cover slide. The bright colours are because I used a polarizing filter – crystals do cool things when you shine polarized light through them!

If you are able to make protein crystals of sufficient size and quality, then you can collect X-ray diffraction data with the crystals. Several companies sell instruments to collect this diffraction data, and there are dedicated facilities that conduct these experiments using high-intensity X-rays from accelerated electrons. The instrumentation for these experiments continues to dramatically improve, allowing us to get more and better data from our protein crystals all the time. The job of the X-ray crystallographer is much easier these days than it used to be.

Using a home-based or synchrotron source, a beam of X-rays is directed at the protein crystal. X-rays interact with the electrons of a molecule. Because of the physics of diffraction from a crystal, the result is a pattern of diffracted X-rays that can be recorded. By measuring these diffraction spots, we can apply the physical rules of diffraction to interpret the distribution of electrons within the crystal of proteins. From the intensities of the diffracted X-ray spots, we can reconstruct the shape of the electron density in our molecule of interest.
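At its heart, the relationship between spot intensities and electron density is a Fourier transform. As a minimal one-dimensional sketch (with an invented “density” and ignoring all the realities of a real 3D crystal), the idea looks like this, including why losing the phases is such a problem:

```python
import numpy as np

# Toy 1D "electron density": two Gaussian "atoms" in a unit cell (invented numbers)
x = np.linspace(0, 1, 512, endpoint=False)
density = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.02) ** 2)

# Diffraction records spot intensities: the squared amplitudes of the Fourier coefficients
F = np.fft.fft(density)
intensities = np.abs(F) ** 2   # what the detector measures
phases = np.angle(F)           # NOT measured: this is the "phase problem"

# Given both amplitudes and phases, the density comes straight back by inverse transform
recovered = np.fft.ifft(np.sqrt(intensities) * np.exp(1j * phases)).real
print(np.allclose(recovered, density))  # True
```

The detector only gives you the intensities; the phases have to be recovered some other way, which is why the phase problem mentioned below can be such a headache.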

X-ray diffraction from a crystal of APH(2”)-Ia. The spots are X-rays diffracted from the crystal, and the intensity of those spots is related to the shape of the molecule in the crystal. The further the spots extend from the centre, the higher the resolution of the data – and the better the quality of structure you can build. This crystal diffracted to ~2.15 Å, or 0.215 nanometers.

At this point, the job still isn’t done. In some cases, you have to solve what we call the phase problem, although in this case it wasn’t too much trouble so I’ll jump past it. However, there is still a considerable amount of analysis required to interpret what the electron density means, and what the shape of molecules that fill this electron density truly are. It’s the crystallographer’s job to build a molecular model that recapitulates the observed electron density as closely as possible. The point at which you consider the model “done” is an ongoing struggle, similar to that experienced by artists and writers, where there is always another brushstroke, sentence, or water molecule that could improve the final product, but then at some point you stop and call your model “finished” and interpret what it says about the chemistry of the molecule you’re studying.

Part of the model-building process for APH(2”)-Ia. The blue/purple mesh represents the electron density of the molecule, through which I build the model of the protein molecule, with yellow (carbon), red (oxygen) and blue (nitrogen) atoms linked together to form the structure of the protein. The mesh is a transformation of the experimental data, while the sticks built into it are the model of connected atoms that we interpret from this data. The cross in the bottom left represents a water molecule.

In the case of APH(2”)-Ia, a lot of the challenge for me was making the models as good as possible, and after a long struggle, they were of sufficient quality that I could use them to gain some interesting insights into how the protein works and to suggest a new feature of an antibiotic resistance enzyme.

Determining the structure of APH(2”)-Ia

When I started working with APH(2”)-Ia, others had already determined the structures of three related molecules: APH(2”)-IIa, -IIIa, and -IVa. These structures give us the shape of the enzymes and some of their interactions with their substrates, but a key feature always missing was a well-defined triphosphate ligand in the active site.

Without the triphosphate in the enzyme, we can’t get any sense of how the enzyme interacts with that molecule. And if we can’t see it, we can’t predict how it works, or understand how to affect it.

Building the first versions of the APH(2”)-Ia structure wasn’t as hard as I’d expected. I had a few nights up late, excited to carry out my next rounds of model-building and refinement, and was proud to build models with some pretty excellent statistics for the quality of X-ray diffraction data I was working with. The challenge came when I had to start interpreting what was in the active site of the enzyme.

Overview of the structure of APH(2”)-Ia. There are four copies (A-D) within the crystal structure, and I blow up one here for illustration. The three regions of the protein – the N-terminal lobe, and the core and helical subdomains of the C-terminal lobe – are indicated. The nucleoside triphosphate binds between the N-lobe and core subdomain, while aminoglycosides bind between the core and helical subdomains.

Probing the active site of an antibiotic resistance enzyme

Remember how I talked about how the enzyme takes a phosphate group from GTP and moves it to the antibiotic? Well, in the first structures I looked at, that really didn’t seem possible. I used a GTP-like analogue molecule called GMPPNP to make the protein crystals, and in the structures, the phosphate groups of the GTP analogue were stuck in a position where they couldn’t react with the antibiotic. I named this the stabilized conformation, because it seems to be sitting in a position where it can’t react with anything.

There were several similar enzymes I could look to for comparison, and none of them show this stabilized triphosphate form. They show a different, activated conformation, which directs the phosphate toward the other substrate. I was able to make the molecule adopt the activated conformation, but I had to break the protein by removing an important amino acid to make it let go of the stabilized phosphate.

Stabilized and activated forms of the GMPPNP molecule in the APH(2”)-Ia active site. In both cases the magnesium atoms in the active site stay the same, but the phosphate groups (yellow) of the molecule switch location, lying either far from or close to the aminoglycoside, which contacts D374. Removing the S214 contact from the enzyme gave us the activated conformation, which suggests that the normal enzyme holds the compound in the stabilized form for some reason. Why?

So why didn’t the protein in my structures normally activate the triphosphate?

The answer came when I added antibiotic molecules to the crystals. Introducing the antibiotic after the crystals were grown let us track the changes driven by its arrival. From other aminoglycoside kinases, I had a good idea of where the antibiotic would bind, and initially I was just trying to confirm that it bound in the same position. Fortuitously, the addition of aminoglycoside substrate drove changes in the shape of the protein, as well as of the GMPPNP molecule. The antibiotic activated the triphosphate by binding to the enzyme.

The flip between these states gives us a clear way to understand how the enzyme could turn itself “on” when it encounters an antibiotic.

A motion sensor switch for antibiotics

Gentamicin binding to the APH(2”)-Ia active site pulls the equilibrium of conformations in the active site from the stabilized to the activated form of the triphosphate group. A hydroxyl group (dark red) of gentamicin lies closest to the activated triphosphate, where it could then be phosphorylated.

So, we see a new shape of the triphosphate group that can’t modify the antibiotic. It is held in an awkward, non-reactive form at the back of the enzyme active site. When an antibiotic comes in, the enzyme shifts its shape in response, and in the process the GTP triphosphate flips and becomes active. This catalytic switch keeps the GTP inactive until the antibiotic is bound, then activates the triphosphate into a shape that lets the enzyme carry out the reaction.

But why is this necessary?

This is where we have to speculate a bit. In the stabilized position, the enzyme can’t carry out its normal reaction, but it also can’t carry out a second reaction, called hydrolysis. Hydrolysis is the breakdown of a molecule caused by water. As all biological molecules are found in water, it is always available to react with an activated molecule. Normally, the kinase enzyme should transfer the phosphate to the antibiotic substrate. However, when there is no antibiotic around, it’s possible that a water molecule can sneak into the active site and react with the GTP instead. The result is that the GTP is broken down, and no antibiotic is inactivated. This wastes precious GTP, so any enzyme that cuts back on rates of hydrolysis will have an advantage for the survival of the cell. This mechanism may have evolved to reduce this energy wastage by the APH(2”)-Ia enzyme. In environments where there is a lot of competition and resources are scarce, enzymes that conserve energy are an enormous benefit for a bacterium.

This switch between stable and activated forms of the GTP molecule turns parts of the enzyme into a molecular motion sensor for the presence of antibiotics. When they aren’t around, the enzyme hangs out and holds on to the GTP, inactive. It’s only when the antibiotic shows up and sticks to a different part of the protein, that the enzyme undergoes changes that activate the GTP. Like a motion sensor-based system to turn your lights off when there’s no one around, flipping between these states might be an interesting way for the enzyme to turn its activity off and conserve energy when it doesn’t have an antibiotic to modify.
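One way to picture this kind of switch is as a simple two-state equilibrium, where antibiotic binding preferentially stabilizes the activated conformation and pulls the population over. The free-energy numbers below are invented for illustration only, not values from the paper:

```python
import math

def fraction_activated(dG_kcal, coupling_kcal=0.0, T=298.15):
    """Two-state Boltzmann populations: stabilized <-> activated.
    dG_kcal: free-energy cost of the activated state (positive = disfavoured).
    coupling_kcal: extra stabilization of the activated state from a bound ligand.
    All numbers here are illustrative assumptions, not measurements."""
    R = 1.987e-3  # gas constant, kcal/(mol*K)
    K = math.exp(-(dG_kcal - coupling_kcal) / (R * T))  # [activated]/[stabilized]
    return K / (1 + K)

# No antibiotic: the activated state is uphill, so the enzyme sits "off"
apo = fraction_activated(dG_kcal=2.0)                        # ~0.03
# Antibiotic bound: coupling tips the balance, switching the enzyme "on"
bound = fraction_activated(dG_kcal=2.0, coupling_kcal=4.0)   # ~0.97
print(f"activated fraction, apo:   {apo:.2f}")
print(f"activated fraction, bound: {bound:.2f}")
```

A modest, binding-driven change in relative free energies is enough to flip the population almost completely, which is all a molecular motion sensor needs.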

So what’s the big picture?

There are a lot of places this work leads. I’ve skipped over another interesting finding: two different classes of aminoglycoside interact with the protein, despite the fact that only one of those classes can actually be modified. I’ll save that for another blog post. There’s also a lot more detail on the specifics of how APH(2”)-Ia works that I’ve glossed over, which we could explore some time later.

This mechanism isn’t too different from other mechanisms in proteins that carry out various functions in biology. This switch can be considered a type of induced fit or conformational selection of the enzyme, a well-established model for protein behaviour. The thing that makes it different is that this enzyme is an antibiotic resistance enzyme. Usually antibiotic resistance enzymes aren’t thought to be very complicated. They are thought to be inefficient but highly active machines to turn off antibiotics as fast as possible. This work shows us that an antibiotic resistance factor can be modulated and subject to regulation in a way that reduces its energetic waste.

How about the bigger picture?

Taking the inference a step further, this induced activation of the enzyme indicates that there is a greater complexity in the action of this enzyme than we previously might have expected. However, it isn’t as surprising as we might think when we remember that many antibiotic resistance factors have been around for millions of years, with a very long time to optimize their catalytic activity. We treat antibiotic resistance as something that pops up fresh when we use antibiotics, but the truth is, as we’ve discussed before, some forms of ancient antibiotic resistance are highly fine-tuned and regulated to respond to challenges in their natural environment.

APH(2”)-Ia is one resistance factor among many. This paper shows us that resistance factors need not just be catalytically optimized machines, that they can contain a degree of fine-tuning that regulates their activity. This nuanced activity is supported by long periods of evolutionary selection to produce highly effective resistance enzymes, ones that lead to terrifyingly effective antibiotic resistant microbes. We now understand one factor a little bit better, and hopefully that helps us just a little bit more in our efforts to beat back the surge of antibiotic resistance.

Citation:

Caldwell SJ, Huang Y, & Berghuis AM (2016). Antibiotic Binding Drives Catalytic Activation of Aminoglycoside Kinase APH(2″)-Ia. Structure, 24 (6), 935-45 PMID: 27161980

How to build a protein – lessons from the Protein Engineering Canada 2016 meeting

How do you make a new protein, or a new function in an existing one?

This is the goal of the field of protein engineering. Researchers working in this field use a number of strategies to try to make proteins with new characteristics. The development of proteins with new functions has applications in industry, medicine, and biotechnology.

Want a more stable or more efficient enzyme? Talk to a protein engineer.

Want to convert a protein with one function to a completely new activity? Talk to a protein engineer.

Want to make a molecule with a function never before seen in nature? Talk to a protein engineer.

I was able to hear from many working in the field this weekend at the second iteration of the Protein Engineering Canada meeting, held at the University of Ottawa. The meeting was brief but full of excellent talks. There were some common principles that kept coming up that I’ll try and summarize here.


Reductionism fails in engineering proteins (for now)

Elan Eisenmesser gave an excellent talk about protein dynamics and mentioned how “the worst thing to happen to biochemistry was biochemists”. The argument is that biochemists like simple, reductionist models, but as we’ve studied proteins and how they work, we know the truth remains much more complicated than that. Many times to change the function of a protein, changes far from the site of interest are necessary, and we remain pretty terrible at predicting what those might be.

In many talks at this meeting it came up that the most effective modified enzymes are typically achieved through non-intuitive mutations. So, if we limit ourselves to deterministic changes where we predict the results, we may miss most possible opportunities to develop new properties in a molecule. Rational approaches can lead us the wrong way – screening of many different sequences is necessary to find proteins with desired properties. We may someday be able to predict function from sequences alone, but those days remain far in the future.

The challenges in screening and sorting for function

The number of possible sequences even in a relatively small protein is astronomically large: 20 possible amino acids at each position, raised to the power of the number of residues in the protein. As a result, it’s never possible to screen every possible sequence for function. It is always necessary to reduce the number of sequences and structures tested to a manageable level, and this needs to be done in a smart way.
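To get a feel for just how astronomical, a few lines of Python will do the arithmetic (the billion-sequences-per-second screening rate is an invented, wildly generous assumption):

```python
import math

# Size of protein sequence space: 20 possible amino acids at each of N positions
N = 100                 # a small protein
library = 20 ** N       # every possible sequence
print(f"20^{N} has {len(str(library))} digits, i.e. ~10^130 sequences")

# Even screening a billion sequences per second barely dents it
seconds_per_year = 3.156e7
years = library / (1e9 * seconds_per_year)
print(f"at 10^9 sequences/s, the full screen takes ~10^{int(math.log10(years))} years")
```

For comparison, the heat death of the universe is usually put somewhere around 10^100 years, so "until the death of the universe" is, if anything, an understatement.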

Two strategies for screening a restricted library were presented. Tim Whitehead’s group systematically replaces every amino acid in a protein with every possibility and competes the bacteria against each other to try to alter function. Justin Siegel’s approach is to look to nature and the diversity of sequences in the environment to find better proteins that have developed in the wild. These strategies guide us to finding new functions without having to individually screen 20^100 individual proteins, something that would keep us busy until the death of the universe.
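By contrast, a systematic single-substitution library of the kind just described stays tiny: 19 alternatives at each position. A hypothetical sketch of the enumeration (just the counting, not any group’s actual pipeline):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def single_mutants(seq):
    """Enumerate every single-position substitution of a protein sequence,
    yielding (mutation name, mutant sequence) pairs."""
    for i, wt in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != wt:
                yield f"{wt}{i + 1}{aa}", seq[:i] + aa + seq[i + 1:]

seq = "MKVLAT"  # a made-up 6-residue example sequence
muts = list(single_mutants(seq))
print(len(muts))  # 19 substitutions per position: 6 * 19 = 114
```

Even for a 300-residue protein that is only 5,700 variants – a manageable library, unlike the 20^300 of full sequence space.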

A third strategy discussed was the generation of a collection of completely new proteins, never seen before, to screen for brand new functions in proteins:

Making something from nothing

It’s much easier to break something than to build it new from scratch. This is a fact of life, dictated by the rules of thermodynamics.

But, at the same time, it’s surprisingly easy to build something. Michael Hecht gave an excellent talk indicating how his group has made a library of brand new proteins, and screens them for function. He described that for some functions, many new proteins could carry them out, without any evolution to tune the protein’s activities. So, if you have the right type of function, it might be possible to find that function in randomly-generated (although constrained) protein sequences. This has some pretty profound implications for understanding the origin of life as we know it.

The finding from this work, and from the analysis of a large family of enzymes presented by Janine Copp, was that you can relatively easily get weak, promiscuous activity from primitive enzymes, which are then refined into more specialized proteins with higher specificity and activity. This is the case in nature, and also now in the lab, where researchers develop new and better functions in proteins by driving them to specialization.

Getting comfortable with disorder and dynamism

Similar to the problems with reductionism, X-ray crystal structures have convinced many that proteins are mostly rigid and don’t move much as they carry out their function. This isn’t really true; I’ve ranted before on this site about why we need to understand that molecules are jiggly. Proteins are nearly chaotic, with interconnected networks of interactions that drive their function.

A recurring theme in this meeting was that an understanding of dynamics is necessary to develop a good grasp of function, and of how to change it in a molecule. Many talks, including Sophie Gobeil’s on an antibiotic resistance enzyme and Adam Damry’s award-winning talk on engineering a dynamic function into a protein, touched on these points. It is necessary to understand dynamics to predict function, and while this remains challenging, it is possible to develop some predictive insights through carefully constructed experiments.

The emerging art of protein design is starting to mature, guided by a more comprehensive understanding of protein function, and smart strategies of how to get there. I’m excited to see where the field is headed!

A meeting well spent

In addition to all this work on engineering of proteins, my jaw dropped to see some of the amazing new T3SS structures coming out of Natalie Strynadka’s group, and Martin Schmeing’s presentation of his group’s megaenzyme studies, appropriately set to Miley Cyrus and Taylor Swift.

Overall, an amazing meeting – I hope to go again in 2018. If you’re interested in protein design and engineering, I can’t recommend the meeting enough. Hope to see you there.

 

I love thial-S-oxides so much I cry

Lots of things can make us cry. Pain. Joy. Boredom. Sadness. Toy Story 3.

There’s another common source of tears in any home, and it lives in the kitchen. Onions and other plants in the Allium genus produce compounds that trigger an involuntary reaction. Chopping these bulbs turns even the most stoic into a weeping mess in minutes.

What makes this happen? Does the onion want us to be sad? Does it want us to share in its misery? Does it want to inflict the same pain upon us that we inflict on it? How does a simple vegetable have that much power over our tear ducts?

The answer lies in biochemistry. Despite their unassuming appearance, onions pack powerful chemical weapons, ready to launch upon us at a moment’s notice. The main compound produced by onions is called the lachrymatory factor. This chemical has an unusual chemistry for a biological molecule, because of one atom. Sulfur.

It's just **sniff** so beautiful!
Lachrymatory Factor

If there’s ever weird chemistry happening in the cell, you can bet that sulfur is probably involved. Let’s explore why.

Smelly yet invaluable: the versatile chemistry of sulfur

Sulfur is interesting in biological chemistry because it can do much more than most other abundant biological elements. In its simplest form, sulfur can link up with hydrogen to form H2S, the main ingredient of sewer gas and other unpleasant biological emissions. If you ever run into a smell like rotten eggs, there’s a good chance sulfur is involved.

Sulfur’s superpower is its large number of stable oxidation states. A common and stable form of sulfur is sulfate, when the sulfur is linked to four oxygen atoms. Three oxygens is a sulfite, two a sulfinic acid, and one gives you a sulfenic acid. This versatility lets molecules with sulfur do a lot of things that other biological molecules can’t.

Sulfur can also form more complicated compounds when you add in carbon and nitrogen, creating compounds like vitamins B1 (thiamine) and B7 (biotin), the antibiotics penicillin and sulfanilamide, and the critical biological molecule coenzyme A. Sulfur also forms an integral part of protein molecules, linking to carbon atoms in the amino acids cysteine and methionine. The sulfur atoms of cysteines can link with each other to form disulfides, an important part of their role in protein molecules. Sulfur can also link to oxygen atoms in a variety of ways, which lets it respond to the oxidative environment and build unique compounds, including many with biological roles.

Because sulfur is happy to ignore the common rules of molecular bonding that carbon, nitrogen, and oxygen tend to obey, it is frequently used when non-typical chemistry is needed. Cells use a sulfur-containing cofactor to shuttle methyl groups around the cell, and use the sulfur-containing vitamin B1 to carry out many complex biochemical transformations.

Lachrymatory factor is another great example of this chemical versatility. The compound is a thial S-oxide. This means it carries a positively charged sulfur and a negatively charged oxygen atom, directly bonded to each other. This breaks a rule that we teach in organic chemistry: that you can’t form stable covalent molecules with positive and negative charges adjacent to each other. Unfortunately, this is one of those rules that is more of a guideline than an actual rule, and sulfur is happy to ignore it.


That said, this arrangement of atoms in the lachrymatory factor is not stable over the long term, so it can’t be made in advance and stored for long amounts of time. It’s also a gas, so hard to contain within a plant’s tissues. So how does the onion release it so quickly when you cut into a bulb?

It’s got a hell of a defense mechanism. You don’t dare kill it!

In the Alien series, the blood of the xenomorph creatures is toxic and corrosive. Attack and injure the alien, and it spews this toxic acid, corroding whatever it comes into contact with, including any humans foolish enough to attack in the first place. This serves as a pre-emptive defence for the alien. Hurt me, you’ll get hurt, too.


The same strategy plays out with many plants. Plants can’t move in response to predators, so they protect themselves in a different way, with chemistry. When their cells are damaged, they produce noxious chemicals to turn away their predators. These chemicals can take many forms and actions, sometimes poisoning us, often having no effect, but sometimes having beneficial pharmaceutical properties that help us treat disease. In fact, many potent pharmaceuticals come from plants.

Lachrymatory factor plays this role for onions, in the same manner as allicin does in garlic, and a different class of sulfur-containing compounds does in cruciferous vegetables like broccoli. Herbivores or insects who try to eat the plant will be overwhelmed by noxious chemicals, and look elsewhere for their dinner. But how does the onion actually make the compound when it’s injured?

Red Light, Green Light

Onions generate the lachrymatory factor on the spot, immediately after tissue damage. They do this using a smart solution: keep the two components that generate the noxious chemical separate, and only allow them to come into contact when tissues are damaged. Onions generate the lachrymatory factor by breaking down precursor molecules using enzymes called alliinases. This releases the lachrymatory factor into the air, to wreak havoc on our tear ducts.


Onions keep these precursor molecules (alkyl cysteine S-oxides) and the enzymes that act on them in different compartments, so they do not normally meet. When you slice into an onion with a knife, the cells are broken, and the enzymes and precursor molecules mix. As the enzymes break down the precursors, lachrymatory factor is released and diffuses into the air. This reaction happens fast, but not instantaneously: as soon as the plant is damaged, it starts producing lachrymatory factor, but it takes a minute or two for the vapours to accumulate enough to cause problems.

Protecting yourself from lachrymatory factor

If you feel you’re a tough individual, it can be pretty embarrassing to break down in tears in the kitchen. That should be reserved for when you finish reading Where the Red Fern Grows. So how can you keep the onion tears away?

Chemistry to the rescue

We know two things: First, lachrymatory factor is produced by onions in response to injury when enzyme and precursors come together. Second, lachrymatory factor is a gas that is released from the onion’s tissues into the air around us. Based on these two facts and the basic principles of chemistry, what can you do?

  1. Work fast. It takes time for the alliinase reaction to take place, so the shorter the time freshly cut onion is in the open, the better.
  2. Use ventilation. A fan (preferably one that vents outside) can whisk the vapours away.
  3. Cool everything down. Enzymes work much faster at room temperature than at refrigerator temperature, so cold onions will produce the compound more slowly. The compound is also less volatile at colder temperatures, which further slows its migration into the air.
  4. Once it’s cut, heat it all up. Once an enzyme is denatured, it’s dead and doesn’t work any more. Enzymes can be denatured by cooking, and sometimes by acidic or basic treatment. Another reason to work efficiently – get those onions cooking, and they’ll stop making the chemical pretty quickly.
  5. Onion Goggles? No. Don’t do this. Just don’t.
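For tip 3, the size of the effect can be ballparked with the Q10 rule of thumb for reaction rates (rates roughly double for every 10 °C rise). A minimal sketch in Python, assuming a typical Q10 of 2 – the exact value for alliinase is an assumption here, not a measurement:

```python
def rate_factor(t_from_c, t_to_c, q10=2.0):
    """Relative change in reaction rate for a temperature change,
    using the Q10 rule: the rate multiplies by q10 per 10 C rise."""
    return q10 ** ((t_to_c - t_from_c) / 10.0)

# Cooling an onion from room temperature (22 C) to fridge temperature (4 C)
slowdown = 1 / rate_factor(22, 4)
print(f"roughly {slowdown:.1f}x slower")  # roughly 3.5x slower
```

So a fridge-cold onion should produce lachrymatory factor at only about a third of the room-temperature rate, on top of the reduced volatility.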

A tear-free onion?

So, if we know just how this noxious chemical is produced, is there any way we could eliminate it altogether? A research group did exactly that in 2008. By reducing the amount of one of the enzymes involved in making lachrymatory factor, the researchers could greatly reduce the production of the chemical from these onions. In the process, the onions even seemed to make more of other flavourful, sulfur-containing compounds, so this work certainly looks promising. One interesting consideration with a lachrymatory factor-free onion is that it might be a lot harder to grow – the chemical is a form of defence against herbivores and insects, after all!

There hasn’t been much word on this work since then, but if it isn’t already underway, new technologies should make it even more feasible in the near future. Every time I make dinner with onions, I’m reminded of it – a tear-free onion can’t come soon enough for me.

*Technically, it releases an intermediate that is then converted by another enzyme to the active lachrymatory factor, but for the sake of simplicity we can pretend the compound is produced immediately.

Eady CC, Kamoi T, Kato M, Porter NG, Davis S, Shaw M, Kamoi A, & Imai S (2008). Silencing onion lachrymatory factor synthase causes a significant change in the sulfur secondary metabolite profile. Plant physiology, 147 (4), 2096-106 PMID: 18583530

Music of the Macromolecules

To fully understand a molecule, you first need to learn what it looks like, and then, how it moves. This isn’t easy. I’ve talked before about how unusual biological molecules can be if you’re accustomed to thinking of real-world objects. They are fundamentally flexible and dynamic in a way that everyday objects aren’t. They move chaotically, at lightning speed, crashing through a molecular mosh pit on the sub-microscopic scale.

Protein and nucleic acid macromolecules are like Rube Goldberg machines of interconnected parts. These parts move independently, but in turn influence the other parts of the system as they move. There are different levels of complexity in this motion. Slow conformational transitions that move large sections – domains – relative to each other can take milliseconds to occur. Ultra-fast bond vibrations take only picoseconds. That’s a difference of nine orders of magnitude. In the time of a single slow domain movement, a billion bond vibrations can occur. In monetary terms, this is the difference between one cent and ten million dollars.
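The arithmetic behind that comparison is easy to check. A quick sketch in Python, using representative order-of-magnitude timescales rather than measured values:

```python
# Representative timescales of molecular motion
domain_motion_s = 1e-3    # slow domain movement: ~1 millisecond
bond_vibration_s = 1e-12  # bond vibration: ~1 picosecond

ratio = domain_motion_s / bond_vibration_s
print(f"{ratio:.0e}")  # 1e+09: nine orders of magnitude

# The monetary analogy: one cent scaled by the same factor
print(f"${0.01 * ratio:,.0f}")  # $10,000,000
```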

This is what biophysicists refer to when they talk about “timescales of molecular behaviour”. Different types of molecular motions take dramatically different lengths of time to occur. We can only measure a subset of these motions with any one experimental technique. When we try to fully understand a molecule, we need to be aware of all of its motion across all timescales. Unfortunately, we are terrible at understanding things that span such a broad range.

Our brains are trained to think about everyday objects we can see, touch, and manipulate. Microscopic molecules act in ways that make absolutely no sense on the scale of our experience. To help make sense of this strange behaviour, we need a good metaphor.

Molecular Motions are Like Musical Harmonics

What does a molecule have in common with a musical note? You might not be able to think of any way these two things are related (you might also be wondering what I’ve been smoking). A molecule is a collection of atoms, connected by shared electrons. A note is a small part of a Bach sonata, a jazz solo, or Call Me Maybe.

Well, we’ve discussed before how the context a molecule is in is critical for understanding downstream effects. A musical note, as well, gains more meaning by the context it is placed within. The same note means different things if it’s played within a different song, or if it comes from a pan-pipe versus an electric guitar.

But even isolated molecules and isolated musical tones share something fundamental in common. They both display a complexity of vibration, with finer, more detailed vibration superimposed on top of slower, lower-frequency behaviour.

Macromolecules show motion on many scales, superimposed on each other like resonant overtones of a musical note. Structures generated from PDB 5RAS

To a first approximation, a note is just a frequency of sound. Children of the ’90s will remember that before Napster, we could download MIDI files from the internet to play as music. Many computer sound cards rendered MIDI notes as pure tones, reflecting the way notes are stored in the file. The end result is completely devoid of soul, a heartless distillation. It lacks any of the complexity of actual recorded music. The notes are there, but without the details and imperfections that come from real instruments, it seems hollow. Real musical instruments produce so much more than just pure tones.

We know that the character of an instrument changes the nature of the sound it produces. A B♭ from a trumpet and a B♭ from a clarinet sound different to us, despite both having the same fundamental frequency. What makes them sound different from each other, and from a MIDI file? In one word: overtones. Every instrument layers higher-order resonances – vibrations – on top of the fundamental tone, and those resonances depend on the shape, material, and other properties of the instrument. Vibrational overtones add complexity and texture to an instrument’s sound. While the main pitch of the note is the same, the structure and character of the instrument produce different superimposed frequencies that make it unique.
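The difference between a pure tone and a tone with overtones can be sketched numerically. A toy synthesis in plain Python; the overtone amplitudes here are invented for illustration, not measured from any real instrument:

```python
import math

def sample(t, fundamental_hz, overtone_amps):
    """Instantaneous amplitude of a tone: the fundamental plus
    overtones at integer multiples of the fundamental frequency."""
    return sum(
        amp * math.sin(2 * math.pi * (n + 1) * fundamental_hz * t)
        for n, amp in enumerate(overtone_amps)
    )

# A MIDI-like pure tone vs. a richer tone, both built on A440
pure = [1.0]                    # fundamental only
rich = [1.0, 0.5, 0.25, 0.125]  # fundamental plus three overtones

t = 0.00057  # an arbitrary instant in the waveform
print(sample(t, 440, pure), sample(t, 440, rich))
```

Both tones have the same pitch, but the superimposed overtones change the shape of the waveform, which is what we hear as the instrument’s character.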

Like the air perturbed by a musical instrument, molecules also vibrate. These vibrations and movements are central to their function. Individual atoms undergo high-speed vibration. Chains of multiple atoms turn and bounce in unison. Loose loops and “floppy bits” of dozens to hundreds of atoms contort, twist, and wiggle. Whole domains can migrate back and forth between different states. Like overtones on a musical note, these motions are superimposed on each other. While large domain movements occur, loops are wiggling; within those wiggles, amino acid side chains are bouncing; and during those bounces, individual atoms vibrate across every bond in the molecule.

Harmonic Potentials

Vibration of atomic bonds follows an approximately harmonic potential – a parabola near the equilibrium bond length – while rotation about bonds follows a smooth, periodic potential close to a sine wave. Combining the motion of those vibrations and rotations across multiple atoms produces an emergent complexity, where the arrangement of atoms across one bond can influence that of the nearby atoms, and by extension, the rest of the molecule. In theory, we might be able to work out how these energy potentials govern the behaviour of a single molecule.
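To make that concrete, the standard textbook forms used in molecular mechanics are a parabola for bond stretching and a cosine term for rotation about a bond; sketched in LaTeX (the force constants k and V_n and the phase δ are generic symbols, not values for any particular molecule):

```latex
% Bond vibration: approximately harmonic (parabolic) near equilibrium
V_{\mathrm{stretch}}(r) = \tfrac{1}{2} k \, (r - r_0)^2

% Bond rotation (torsion): periodic in the dihedral angle \phi
V_{\mathrm{torsion}}(\phi) = \tfrac{1}{2} V_n \bigl[ 1 + \cos(n\phi - \delta) \bigr]
```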

Alas. Were it only that simple.

In the change of structure of a molecule, small transitions of single atoms can be layered on top of larger motions. The motion of an atom depends on its own vibrations, as well as those of the rest of the molecule around it, pushing and pulling it along with larger changes. This feeds both ways. While a large transition occurs, vibrations and rotations of progressively smaller components can also exert their collective effects on the entire molecule. This chicken-and-egg problem is a big part of why the behaviour of a molecule is so hard to predict, even when we know its structure. It leads to a computational problem that rapidly gets too complicated for even the most powerful supercomputers to handle easily.

So just because you know the shape of a molecule doesn’t mean you can capture the full essence of its character. Like a musical note, a molecule’s shape is just the starting point to understanding the complex way it acts on itself and its environment. Static molecular structures are like notes printed on a page. Dynamics* are those notes, played aloud, containing much more richness than the printed note alone contains.

When we combine multiple musical notes, the complexity grows even greater. Multiple notes from a single instrument like a guitar or piano interact with each other to form chords. Different instruments in a band or orchestra combine together to further increase the complexity. All of these interactions combine together to make a symphony much greater than the sum of its parts.

Likewise, combination of motions within molecules also adds to a complex whole, where the collective motion of thousands or millions of atoms can lead to much more nuanced patterns of behaviour than we might otherwise expect. Two macromolecules, playing their own melodies, can come into contact (bind) with each other, and if so, they join together in harmony. These molecules become a single, resonating entity, sometimes for a brief exchange, other times for much longer.

Just like in a symphony, the complexity grows even more as we scale up interactions of molecules to complexes, signalling pathways, cells, and even whole organisms. This intricate opera underlies all biological processes.

Fine Tuning

So, if molecules are so complex, how can we make any sense of their messy behaviour? In science, we don’t aim simply to appreciate nature, but to understand it, make predictions about the future, and devise changes that let us innovate on existing phenomena. Our metaphor of a molecule as a musical note becomes useful as we move from thinking about how a molecule is to how it might change.

Ask any manufacturer of musical instruments: changes to small details of an instrument can dramatically influence the quality of sound you get. It is the same with molecules – changes that alter the dynamics change the character of the molecule. For example, in a protein, biological activity frequently requires large movements between domains, as well as finer motions of hinge regions, short loops, and amino acid side chains. Changes to a molecule – by post-translational modification, mutation, binding to another protein, or allosteric regulation – can distort or modulate the dynamics of a protein. They change the tune of the molecule by altering its resonances.

This resonance-tuning feature of proteins has led to many mysteries in the literature about macromolecules. With surprising frequency, mutations are found that disrupt the activity of a protein, despite being far away from the business end (the “active site”) of the protein molecule. These reductionism-breaking proteins have caused many a biochemist to throw up their hands in dismay at the apparent lack of connection between a mutant protein they identify and their observed change in molecular function. Happily, though, we’re starting to track down the culprit: dynamics.

Changes to a molecule that cause very little structural change can still alter the molecule’s vibrational frequencies. A protein with a mutation in an amino acid important for dynamics is like a band whose bass player is hung over and can’t keep time.

A paper from earlier this year demonstrates this effect very well. It came from Dorothee Kern‘s group at Brandeis. By examining two well-known protein kinases and reconstituting the evolutionary and biochemical pathway between them, the group found that a small set of amino acids drives the change in behaviour between the enzymes. Almost none of these amino acids are directly involved in the chemical activity of the protein. Like making alterations to an instrument, these mutations tune and refine the dynamic properties of the enzyme, and direct it toward different behaviour.

Molecular and structural biologists are just starting to get a good understanding of how mutations and chemical changes alter the dynamics, and thereby the function, of proteins. I’ll be watching this field closely for future developments.

From Chaos, Order

The analogy of molecules as musical notes with harmonics isn’t perfect. Music depends on perfectly repeatable, precise tones (that’s not to say innovation and improvisation aren’t important, but they use the same, standard notes). Molecules have an intrinsically chaotic nature that is not really predictable at all. But while a molecule is unpredictable on the microscopic level when you look closely, step back farther and farther, and from that stochastic, random process a kind of predictable order emerges.

There’s also a difference in scale. The first overtone of a note is merely twice the frequency, while protein motions span at least nine orders of magnitude. A better comparison is the range of loudness our ears can perceive: the difference between a bond vibration and a large macromolecular rearrangement is about the same difference in magnitude as between a pin dropping and a loud rock concert. The musical analogy isn’t perfect, but it helps us understand a hugely complex system with thousands or millions of moving parts in a more intuitive way.
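That loudness comparison can be checked with the decibel scale, where each factor of ten in power adds 10 dB. A quick sketch in Python; the pin-drop and concert figures are the usual textbook ballparks, not measurements:

```python
import math

def power_ratio_db(ratio):
    """Convert a power ratio to decibels: 10 * log10(ratio)."""
    return 10 * math.log10(ratio)

# Nine orders of magnitude between bond vibrations and domain motions
print(power_ratio_db(1e9))  # 90.0

# Roughly the gap between a pin drop (~10 dB) and a loud concert (~110 dB)
print(110 - 10)  # 100
```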

Symphonies in the Molecular World

Molecules are alien entities, very different from anything we interact with in our everyday lives. Their actions are determined first by their structures, then by their dynamics – how those structures move and vibrate. The structure and movement of these molecules results in a complex molecular symphony at the microscopic level. And the single complex note that one molecule makes can be tuned by others, harmonize with partners, and join in with the grand symphony of the molecular opera of life.

Dynamics are a frontier of structural biochemistry research (a Grand Challenge, if you will). Moving forward, we continue to chip away at the mysteries of how molecules work and learn how to better predict molecular behaviour. Every time we do so, we get a little bit better at listening to the complex arias and beautiful harmonies these molecules play. Our ear gets a little bit more refined, our appreciation of this molecular orchestra more acute. The symphony goes on all around us, can you hear it?

* I’m using the definition of dynamics as it relates to molecules here, as in the field of molecular dynamics modelling. The term dynamics as it relates to music is a slightly different concept than anything we’re discussing here, so I’ll skip over it.

Citation:
C. Wilson, R. V. Agafonov, M. Hoemberger, S. Kutter, A. Zorba, J. Halpin, V. Buosi, R. Otten, D. Waterman, D. L. Theobald, D. Kern (2015). Using ancient protein kinases to unravel a modern cancer drug’s mechanism. Science, 347, 882-886. doi: 10.1126/science.aaa1823

How Spicy Would You Like That Chemotherapy?

Molecules are abstract objects, so it’s easy to talk about one using the single property we know about it. Penicillin cures infections. Chlorophyll harvests sunlight. Cocaine gets you high. Referring to molecules by a single characteristic keeps everything simple and makes it easy to tell a story about them.

Unfortunately, nature hates simplicity.

A molecule doesn’t know the role we’ve given it. It wiggles blindly through solution, crashing into others. It only knows the other molecules and particles it directly interacts with. It perceives nothing about the upstream or downstream effects it has on a cell, an organism, or the environment.

If a molecule finds a site where it can stably rest, it will stay there a while, often triggering nearby molecules to change as a result. These changes can in turn drive other changes of nearby molecules, and cascade along to generate the local effect of that molecule. In this interaction, both the molecule and the environment are important. Change the environment, and a molecule can have dramatically different effects. Compounds that plants use as insecticides give us an energetic buzz, or work as effective painkillers. Likewise, small changes in a molecule can dramatically change the activity of that molecule in the same environment.

So, both the molecule itself and the surrounding ones can influence the ultimate effect. Changing either can bring about completely new interactions and behaviours, with different consequences. While loyalty of molecules to a single function would make them easier to talk and think about, most of them are philanderers.

Molecules are Promiscuous

We want molecules to behave in a simple way that makes sense. We want them to be monogamous and true to a single role, effective at the desired location without interacting anywhere else. Finding non-promiscuous drugs is one of the big challenges of pharmaceutical development. Dirty drugs are rarely good ones.

We get side effects when drug molecules interact with proteins, cells, or tissues other than those they were developed for. An effective nervous system drug isn’t very useful if it kills kidney cells. Unfortunately, off-target binding is the norm, rather than the exception. Compounds that exert the desired effect in one place can drive very negative effects elsewhere.

On the flip side, there are many molecules that have safely entered clinical trials to treat disease, but aren’t very good for their intended use. These relatively safe drugs can sometimes be directed toward different functions, to treat other conditions. This is known as drug repurposing. A number of effective medicines have emerged this way. Exploiting the promiscuity of compounds can help us find new uses for old drugs.

Of Chili Spice and Cancer

It’s often interesting when a molecule breaks the predefined role we’ve given to it and shows us a completely different function. For instance, a compound once evaluated for its ability to reduce high blood pressure instead inhibits antibiotic resistance enzymes. Sildenafil, a drug originally tested for angina and hypertension, had an unexpected and lucrative side effect. A puzzling and exciting compound, rapamycin, has both antibiotic and immune-suppressive effects, while also appearing to extend the lifespan of healthy mice.*

I thought of this kind of molecular versatility when I came across a paper in ACS Chemical Biology: Phosphorylation of Capsaicinoid Derivatives Provides Highly Potent and Selective Inhibitors of the Transcription Factor STAT5b. This headline means that a molecule from chili peppers can be modified to block a protein involved in cancer progression. Out of context, this seems bananas. How could a molecule similar to hot pepper spice be used to fight cancer?

The protein targeted in this study is required for the progression of certain forms of cancer. Inhibiting its action using a small-molecule drug could halt the growth of the cancer in its tracks. A previously discovered inhibitor contained a group with two phosphates attached, and the researchers decided to apply the same modification to another molecule with the same base structure – dihydrocapsaicin. This molecule is one of the main compounds responsible for the spice of hot peppers. As far as I can tell, it was chosen because it is a commonly available natural chemical that could undergo the same modifications as the previously discovered inhibitor, and possibly act the same way on the protein.

Upon testing, the modified chili spice molecule did exactly that – it blocked the protein. But why?

Burn, baby, burn!
A molecular mimic from an unconventional source

A Wolf in Phosphotyrosine Clothing

A look at the structure of the inhibitor compound provides a clue. It looks a lot like a familiar modified amino acid: phosphotyrosine. Tyrosine is one of the 20 amino acids that make up proteins. Addition of a phosphate group turns it into phosphotyrosine. Our cells use enzymes that switch it between these two forms to regulate the activity of proteins.

The STAT (Signal Transducer and Activator of Transcription) transcription factors are regulated by tyrosine phosphorylation. These proteins contain tyrosines that can be phosphorylated by kinase enzymes. They also contain protein modules that tightly bind to phosphotyrosine. As a result, a pair of phosphorylated STAT molecules form a mutual handshake, gripping their phosphorylated twin tightly. Once this pair forms, the protein is able to carry out its function, turning on genes involved in cell growth and division.

Adding a molecule that binds in the place of phosphotyrosine keeps the STAT protein from being efficiently phosphorylated itself, and also blocks it from binding to a phosphorylated partner. This stops the activity of the protein, which is needed for the continued growth of cancer cells. By blocking the activation of the STAT molecule, the progress of a cancer cell can be stopped. Small molecules that mimic phosphotyrosine could in turn be effective anti-cancer drugs.

Exploiting Molecular Promiscuity

The site that capsaicin normally binds, the ion channel TRPV1, is nothing like the STAT proteins. Completely unrelated. So the interaction of modified capsaicin with STAT is entirely separate from its role in food. Capsaicin and its derivatives may share some characteristics that help them perform both roles, but the roles themselves are completely independent.

Add some chili flakes to your curry and you trigger a hot and/or pain response. Make a couple of chemical changes and inject it into a tumour, and you could now use a similar compound for chemotherapy. And that’s just two characteristics a molecule could have. With thousands of genes in the human genome, there are countless potential targets a molecule could bind, for better and for worse. It takes smart planning and study to figure out what’s possible.

This is an interesting case of what the researchers call “semi-rational design” of a chemical compound. By taking chemicals that already exist in nature and modifying them to look more like drugs for a specific target, they identified a new, specific inhibitor of a protein. The goal is to take complicated natural molecules and, through a simple transformation, convert them into more effective chemicals for the desired function. In this way, it’s possible to leverage nature’s huge diversity of chemical compounds and tailor them to get the function we want from them.

Breaking the Mold of Molecular Function

This paper shows us that a molecule is not destined for a single role. Small changes can dramatically alter its effect. In addition, directing a molecule to a different target will result in a completely new function. There is an enormous diversity of compounds we can pull from in the lab, and in nature. If we’re smart, there are probably ready solutions out there for us to go and find.

Will this compound revolutionize the treatment of cancer? Unlikely. Will eating a spicy diet help fight the disease? Certainly not, at least not by this mechanism.

The lesson I take from this work is that we shouldn’t be too quick to brand a chemical based on a single characteristic and dismiss any of the other functions it could have. Context, environment, and chemical properties are always relevant when we discuss the action of a chemical.

*Rapamycin is a compound surrounded by extraordinary claims (and hype). There’s not enough space to go into it today, but hopefully I can talk about it in the future.

Citation:

Elumalai N, Berg A, Rubner S, & Berg T (2015). Phosphorylation of Capsaicinoid Derivatives Provides Highly Potent and Selective Inhibitors of the Transcription Factor STAT5b. ACS chemical biology PMID: 26469307

 

Untranslated Elements: CRISPR

My last couple posts have become longer than I expected. I’m going to break the pattern this week. I’m starting a recurring series of posts containing brief thoughts, centred on a single topic in the molecular biosciences. These posts will be unorganized, full of sarcasm, conjecture, and the occasional opinion. I’m calling it: Untranslated Elements.

Today, CRISPR technologies.


For a brief background, CRISPR/Cas9 is a gene-editing technology developed from a bacterial system for viral immunity. The CRISPR/Cas9 system is taking molecular biology by storm because of its low cost, ease of use, and the unprecedented ease with which it allows us to play God with the genetic code, even our own.

Untranslated Elements:

  1. CRISPR = Clustered Regularly Interspaced Short Palindromic Repeats. This is why you don’t let academics name things. I’ll forgive it under one condition: name an associated protein KRUNCH.
  2. Despite what they’ve told you, CRISPR can not write a top-40 pop song, repair a broken marriage, or bring peace to the Middle East.
  3. Remember how siRNA used to be the solution to every problem in medicine? That’s why I restrain my enthusiasm for CRISPR, despite watching closely.
  4. CRISPR is a powerful technology, but like any technology it’s no substitute for good experimental design. Just because you have the best technology available doesn’t excuse you from using the appropriate controls.
  5. The serendipity of scientific discovery rears its head once again. This enormously promising technology came from an unexpected place – yogourt.
  6. The CRISPR discovery repeats a theme I come back to all the time: The microbial world is more complex than we give it credit for. If a problem exists, many times bacteria have already solved it. The challenge is noticing when we come across it and using it to our advantage.
  7. All the bullshit over CRISPR patent priority is totally not cool.
  8. We’ve been able to edit genes for decades, CRISPR just makes it fast and easy. How have we waited this long to have a public conversation about ethics?

Whether you’re on the bandwagon, or sniping at it from the sidelines like me, this technology will change the way we do molecular biology. Maybe even how we think about ourselves. I sure hope we’re ready.

Antibiotic Resistance as a Force of Nature

My research focuses on antibiotics – specifically antibiotic resistance. Last week I gave a seminar on my work, which was followed by some excellent questions about the origins and evolution of resistance. While I don’t personally get my hands dirty studying molecular evolution or microbial ecology, I think about these topics often, for a couple of reasons. First, the origins and evolution of resistance factors have interesting implications that contextualize the structure and function of resistance factors – they help me make sense of the molecules I study. Second and more importantly, the evolution of antibiotic resistance gets at a fundamental understanding of the environment and the world around us. We tend to focus on the problems that resistance creates in medicine, but in nature, the relationship between microbes is much more complicated than we might assume. Antibiotics and resistance give us a window into the fascinating world of microorganisms and their strange and complicated existence.

In the discussion after my talk, I brought up a recent paper in Nature. In this report, a group at Harvard Medical School modelled the community dynamics of antibiotic-producing and -resistant microbes. The headline finding was that antibiotic production and resistance can stabilize microbial environments. The production and degradation of antibiotics are an intrinsic feature of mature microbial communities.

This seems counterintuitive – how would antibiotics, substances that kill bacteria, bring stability to an ecosystem?

It isn’t as ridiculous as it may sound. While a majority of antibiotics research focuses on the medical applications and repercussions, a less celebrated contingent of microbiologists look at the environmental role of antibiotics. These researchers find that antimicrobials play a more nuanced role than we have been led to believe. Rather than the chaotic battle royale we picture when we think of the microscopic struggle for survival, they find that antibiotic resistance often plays subtler roles in the microbial world. Let’s look into that world.

Microbes live in complex communities, passing toxins and signalling molecules back and forth in a complicated web of interactions.
Image adapted from the Lewis Lab at Northeastern University. Image created by Anthony D’Onofrio, William H. Fowle, Eric J. Stewart and Kim Lewis.

What is Antibiotic Resistance?

We should get some definitions out of the way. An antibacterial is a chemical compound that kills or stops the growth of a bacterium. Antimicrobials are less well defined, and include antibacterials as well as antifungal and antiparasitic compounds, and sometimes antivirals. Antibiotic was coined to refer specifically to a chemical that kills or stops bacteria but doesn’t affect animal cells – a nontoxic antibacterial. It has since expanded to include some compounds that also target fungi and protists, but not antiviral compounds. In common speech, “antimicrobial” and “antibiotic” frequently mean “antibacterial”, so I’ll use them interchangeably here.

Antibiotic resistance is a catch-all term we give to many different mechanisms a bacterium can use to survive and/or grow in the presence of an antibiotic. Every antibiotic has a specific target it interacts with, and anything that keeps the antibiotic and target from binding will result in resistance. Common mechanisms of antibiotic resistance include:

  • Chemical breakdown of the antibiotic
  • Molecular pumps that kick it out of the cell
  • Changes to the microbial target that block the antibiotic
  • Bypass pathways that let the microbe grow even when the target is blocked

Resistance factors are the molecules that give the bacterium resistance, by any of the above mechanisms. These resistance factors have diverse origins, and in some cases those origins are still controversial. But broadly speaking, there are two types of antibiotic resistance: new resistance and ancient resistance.

Two Origins of Antibiotic Resistance

New resistance makes sense. It is what we think of when we talk about antibiotic resistance as an example of Darwinian evolution. A spontaneous mutation emerges that confers resistance, and selective pressures drive it to succeed and take over the population. This is a common phenomenon that we have seen in the clinic and can induce in the lab, but it is far from the only means by which antibiotic resistance arises.

Today I’ll focus on a second type of antibiotic resistance – the transfer of ancient antibiotic resistance factors from environmental bacteria into the strains that cause human disease. It was found in the 1970s that some antibiotic resistance factors appear to come from the microbes that produce the antibiotic. The thinking was that they act as a means of self-protection, shielding the bacteria from their own toxin. This is one origin for resistance factors, although the original source of many of these environmental resistance factors remains unknown.

Transfer of an environmental resistance factor to disease-causing bacteria results in resistance within the pathogen. In cases where this has happened multiple times, we get superbugs with resistance to multiple antibiotics. The collective environmental pool of antibiotic resistance factors has been dubbed the “antibiotic resistome”. These resistance factors form a latent environmental reservoir, ready to jump into the strains that make us sick.

Antibiotic Resistance as a Healthcare Menace

Most of our concerns about antibiotic resistance come from the impact it has on medicine. Antibiotics are critical to our treatment of infectious disease, and are also necessary in prophylactic use for surgeries, cancer treatment, neonatal care, and many other intensive medical procedures. The spread of antibiotic resistance in pathogens could remove our ability to treat these infections, or care for many of our society’s most vulnerable members. This could lead to a transition to a “post-antibiotic era” where once again these miracle drugs are not available to us. A minor infection from a scraped knee, sore throat, or scratch off a rosebush could be fatal.

While spontaneous emergence of antibiotic resistance occurs, resistance frequently comes from environmental cross-over. A benign soil microbe meets a pathogen, shares some genetic material, and that pathogen becomes resistant. A notorious recent example of this is the emergence and worldwide spread of the New Delhi metallo-beta-lactamase (NDM-1), a resistance factor that knocks out some of our last-resort antibiotics.

In the face of this ongoing menace, what do we do? For decades, the answer has always been “find more antibiotics”. This is important to do, but not enough. We search the world for more obscure microbes that might produce a new antibiotic, and we’re beginning to find a few. But searching for new antibiotics is a game of whack-a-mole, finding new drugs as nature sends more sources of resistance to knock them down. We kept up for a while; now we’re falling behind.

Some of the search for new ways of killing microbes involves looking for new molecular targets for antibiotics – reading the antibiotics literature, everyone and their grandmother wants to sell you a new potential antibiotic target. Other strategies include directly inhibiting antibiotic resistance factors, blocking bacterial toxins to “disarm” the bacterium, or more obscure methods like bacteriophages. But none of these strategies have yet led to sustainable long-term solutions for treating resistance. We may be doomed to fail – we’re going up against a fundamental feature of nature.

Antibiotic Resistance as a Force of Nature

Almost as long as we’ve had antibiotics, we’ve had antibiotic resistance. In his 1945 Nobel Prize Lecture for the 1928 discovery of penicillin, Alexander Fleming said:

It is not difficult to make microbes resistant to penicillin in the laboratory by exposing them to concentrations not sufficient to kill them

Since Fleming’s time, any newly-developed antibiotic has had only a few years of use before a case of antibiotic resistance was identified. This observation alone suggested that antibiotic resistance existed in the environment before we began to use these compounds to cure ourselves. Recent studies of permafrost cores and isolated cave systems have also produced compelling evidence that antibiotic resistance factors have existed in the environment long before humans came around.

The idea that bacteria could transfer resistance factors took some time to catch on. When it was discovered in Japan in the 1950s, the greater scientific establishment received the finding with disbelief and scorn. It took many additional reports for researchers to believe that resistance could move from one bacterial species to another. By the time the greater scientific community clued in, antibiotic resistance was already running rampant in some clinical strains. Since that point we have been in a constant search for new antibiotics, often falling behind the spread of resistance.

A Microbial Cold War

Starting with the first discoveries of antibiotics, we’ve considered them to be weapons in an ongoing environmental war between microbes. Selman Waksman, Nobel laureate for the discovery of streptomycin*, developed his entire research program on this assumption – that there are microbes in the environment that produce antibiotics in order to kill those around them and gain selective advantage. This program was replicated in many labs, resulting in the 1940s–1960s golden age of antibiotic discovery.

Waksman envisioned the environment as an ongoing microbial war. An antibiotic is the bacterium’s sword, resistance is its opponent’s shield. But what if that’s the wrong way to think about it?

What if this microscopic battle between bacteria was more of a Cold War? An ongoing stand-off that only occasionally breaks out into active conflict? That’s what microbial researchers seem to keep finding. Microbes play games of rock, paper, scissors using antibiotics. They punish freeloaders, engage in brinksmanship and even cooperate in difficult times. These bacteria are in competition, but this competition generates a kind of rolling stability as they hold each other in check.

Things get even more interesting when we decrease the concentration of an antibiotic below the range at which it kills – to “sub-therapeutic”, “sub-lethal”, or “sub-inhibitory” concentrations. In toxicology, we talk about how “the dose makes the poison” – the same thing applies for bacteria. At low concentrations of antibiotic, bacteria can trigger adaptive stress responses, go into dormant states, adjust their metabolism, trigger complex growth modes (biofilms) or change their behaviour in even more subtle ways. At the extreme of this concentration range, it’s even been suggested that antibiotics might instead be thought of as signalling molecules rather than toxins.

With an environment full of these compounds at various concentrations, microbes are in constant cross-talk with each other. Studies like the most recent paper on community stability find that even in mixtures of a few strains of bacteria, antibiotics and resistance can keep competing strains in check, stabilizing a community and keeping any particular one from growing out of control. With 10,000 to 50,000 different species in a single gram of soil, the interrelationships are almost limitless.

As metagenomics studies and other work teach us more about the diversity of resistance in the environment, we find that microbial communities are complex, with different producers and resistant strains in constant rolling flux. Add to this an understanding of things like quorum sensing, and a picture of a complex, dynamic ecosystem emerges. A network of chemical cross-talk forms, and we are only starting to scratch the surface of this environmental complexity.

The use of chemical compounds to influence each other is not the exception, but the rule. The diversity of microbial species drives a diversity of chemical compounds used to fight, defend, and communicate with each other. These countless microbial compounds have a name: the parvome. We harvest chemicals from this source for use in medicine, but must remain aware that any environmental molecule will also have corresponding mechanisms of resistance.

Rather than the all-out war that Waksman envisioned, the microbial world works like international diplomacy or a financial market. Every interaction trickles through a network and affects everything else. Booms and busts happen, but on average, the system selects for a kind of greater stability, so the entire community gains as a whole. Outright conflict is a zero-sum game. Most microbes prefer a tense collaboration, quietly manipulating their neighbours but avoiding actual battle. A microscopic Cold War. That war only rarely comes to active conflict, when we strip away diversity and release the bacteria from their self-imposed order.

Fear in a Handful of Dust

When we realize that the microbial world works this way, it means that antibiotic resistance is everywhere. Screen any environmental sample for resistance and you’ll find it. If an antibiotic exists, so do its resistance factors. Even antibiotics we haven’t yet discovered have resistance in the environment. This is a fascinating and terrifying thought at the same time.

It’s terrifying because antibiotic resistance is an enormously urgent public health concern. We’re running out of time. And as we understand that resistance is all around us, we realize that we’ll never eliminate it. We can only beat it back, and we can only play the game of antibiotic-resistance whack-a-mole for so long. Resistance seems to be an intrinsic property of the microbial world that we’ll never escape. As a great mathematician once said: Life finds a way.

I’m cynical whenever I see headlines about new breakthrough antibiotics or antimicrobial game-changers. All these do is kick the eventual resistance down the road a little bit farther. Even though I study mechanisms of resistance in hopes of blocking them, I think the most important solutions to antibiotic resistance will come from systems approaches: policy, sanitation, rapid diagnostics/response, and surveillance programs. We can’t control what resistance is out there, but we can take steps to limit the transfer of that environmental resistance to pathogens. Agricultural antibiotic use requires urgent action. Improved sanitation and means of reducing the spread of pathogenic microbes are critical.

As we struggle to deal with our impending antibiotic crisis, we are starting to realize how inevitable it probably was. It emerges from a complicated network of microbial cross-talk. Countless microbes silently jostle against their neighbours, subtly nudging with chemical signals, and being poked back with molecular weapons. This microscopic opera happens around us at all times, silently shaping our world and occasionally making our worst diseases even harder to fight.

The complexity is beautiful, and it is terrible.

* It should be noted that Waksman’s student, Albert Schatz, was heavily involved in the discovery of the compound, and by many accounts was snubbed by the Nobel committee when they presented the award to Waksman alone.

Embracing the Molecular Jiggle

A molecule is intangible. It’s too small to see, too small to feel. Trillions could fit on the sharp end of a pin. These strange entities live in a world very different from our own, at the boundary between quantum uncertainty and statistical chaos.

Many processes in chemistry, biology, and medicine depend on our understanding of molecules in this alien world. However, it can be a challenge to accurately represent what molecules are really like. To simplify things, we often cheat and draw them as “blobology” – featureless coloured circles and squares. If we have structural data, we can do better and present them as ball-and-stick models, ribbon drawings, or molecular surfaces. While helpful, these more detailed representations are still cheating. Images of a molecular structure all share a major limitation: they’re static. They don’t move.

A molecule’s function depends not just on its structure, but on how that structure changes as it interacts with other molecules. This includes large, dramatic movements that translocate thousands of atoms, small movements of individual atoms, and everything in between. Macromolecules that carry out biological processes contain thousands to millions of atoms, each with some freedom of motion. They are intrinsically dynamic and flexible, and this motion is critical to our understanding of how they work.

I’ve mentioned before that I often think of molecules like LEGO, snapping together to build more complicated systems. But if we think about jiggly molecules, we should think less “brick” and more “jellyfish”, “slinky”, “JELL-O”, or “Flying Spaghetti Monster“. This is a case where a descriptive adjective can be really helpful, like greasy polypeptides, oily odorants, fuzzy electron density, and squishy polymers.

How can we best describe biological macromolecules? They’re jiggly.

Jiggle jiggle jiggle. T4 lysozyme, PDB ID 2LZM

Shake what mother nature gave you

A drop of water may look serene, but on the molecular scale, it is a violent mosh pit of collisions between molecules. Think soccer riot, demolition derby, or a playground full of kids on espresso. Particles move in all directions, flailing about wildly, constantly crashing into each other. Inside a biological cell, the chaos is even wilder, with thousands of different types of molecule bumping, wiggling, twisting, and squirming around. The Brownian motion of particles in this soup puts molecules in a state of constant fluctuation and vibration. They bend, twist, and bounce. They sample an almost infinite number of shapes, switching between states at breakneck speed.
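The restlessness of this mosh pit is easy to underestimate. Here’s a toy random-walk sketch (a deliberately minimal model, not real molecular physics) showing the hallmark of Brownian motion: a particle buffeted by random kicks never settles down, and its mean squared displacement keeps growing with the number of kicks:

```python
import random

def random_walk_msd(n_particles=2000, n_steps=100, seed=42):
    """Toy 1D Brownian motion: each particle takes n_steps random
    +1/-1 jumps. Returns the mean squared displacement (MSD)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        x = 0
        for _ in range(n_steps):
            x += rng.choice((-1, 1))  # one random kick from a collision
        total += x * x
    return total / n_particles

# For an unbiased walk, theory says MSD grows linearly with the number
# of steps: the particle drifts ever farther despite having no direction.
print(random_walk_msd())
```

Real molecules get kicked in three dimensions, trillions of times per second, but the principle is the same: constant collisions mean constant motion.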

While molecular scientists understand the complexity of this world, we can skim over it when communicating our work. Worse, sometimes we outright forget. We talk about how “the structure” of a molecule was solved. We assume that the shape of a molecule determined from crystals represents its shape at all times. We pretend that “disordered” parts of the molecule don’t exist. In many cases, these approximations are good enough to answer the questions we want to ask. Other times, they hold us back.

We should always remember the importance of flexibility. But if we know that molecules are intrinsically flexible, why do we fall back to talking about static shapes? The technology we’ve used to study molecules, and the history of the field have both played a role.

Structural biology: picking the low-hanging fruit

Structural biology has been an extremely powerful set of techniques to look at the high-resolution structure of molecules. But limitations of these techniques have at times trapped our thinking into picturing molecules as static, blocky particles. X-ray crystallography and electron microscopy calculate an average structure, which represents a huge ensemble of possible conformations. We sometimes refer to parts of molecules we can’t resolve by these techniques as “disordered”, although what we really mean is that all of the molecules we are looking at have different shapes, and we can’t average them into a meaningful representative model. As a byproduct of the technique, we miss some of the forest for the trees. Other techniques, like nuclear magnetic resonance (NMR), more easily accommodate multiple models, but because of the precedent set by crystallography, we still frequently treat NMR structures as a single model.
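To see how averaging hides flexibility, consider this toy numerical sketch (hypothetical numbers, a single 1D coordinate standing in for a full structure): a rigid atom and a very flexible atom can produce the same average position, and only the spread of the ensemble reveals the difference:

```python
import random

def average_position(positions):
    """Collapse an ensemble of 1D coordinates into one 'model' value."""
    return sum(positions) / len(positions)

def spread(positions):
    """Standard deviation: how widely the copies actually scatter."""
    mean = average_position(positions)
    return (sum((p - mean) ** 2 for p in positions) / len(positions)) ** 0.5

rng = random.Random(0)

# Rigid atom: every copy in the crystal sits in nearly the same place.
rigid = [5.0 + rng.gauss(0, 0.05) for _ in range(1000)]
# Flexible atom: copies are scattered over a wide range of positions.
flexible = [5.0 + rng.gauss(0, 2.0) for _ in range(1000)]

# Both ensembles average to roughly the same coordinate...
print(average_position(rigid), average_position(flexible))
# ...but the spread shows that one atom moves far more than the other.
print(spread(rigid), spread(flexible))
```

An averaged model keeps only the first pair of numbers; the second pair, the jiggle, is what gets compressed into a B-factor or written off as “disorder”.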

These techniques also bias us toward samples that are “well-behaved” – that is, they easily crystallize, purify, or otherwise make the life of the scientist easy. The problem here is that the molecules that purify or crystallize more easily are often those that show less flexibility. Lab lore dictates that flexible molecules cause problems in structural biology labs. As a result, scientists have picked a lot of the low-hanging fruit, leaving the most flexible (and some might argue, most interesting) molecules alone. As structural techniques mature, they are beginning to seriously tackle the idea of flexibility, but we still contend with a historical legacy of studying the easier, less flexible molecules.

Biochemistry: From floppy to blocky

The history of biochemistry has also affected our thinking about molecular flexibility. That history tracks our growing understanding of how large molecules work. With more data and more powerful techniques, we have developed increasingly nuanced ways of thinking about these complicated microscopic machines, but the field’s past leaves a legacy.

Without knowing details of molecular structures, the first biochemists were left to assume that strings of atoms would exist as floppy, disorganized shapes in solution, waving around unpredictably. This was changed by the father of biochemistry, Emil Fischer. In 1894 he proposed a model that changed how we viewed biological molecules. The “lock and key” model involves two molecules with rigid, complementary shapes. Features of the smaller molecule (the “key”) perfectly match features of the larger (the “lock”) so that they can specifically interact. A well-defined, rigid structure is necessary for this mechanism to work.

However, alongside Hofmeister, Fischer also determined that biological macromolecules are made as flexible chains of atoms. This raises a problem. How does a floppy, string-like molecule become a blocky shape that can form the “lock” to interact with its “key”?

This problem wasn’t conclusively resolved until 1961. Anfinsen showed that the sequence of atoms in one of these floppy chains can guide the molecule to spontaneously adopt a compact, blocky shape, by interacting with itself in reproducible ways encoded in the molecular sequence. This insight came to be known as Anfinsen’s Dogma: one sequence makes one structure. This is the blocky model of macromolecules, where floppy chains of atoms fold into a reproducible, rigid, blocky shape. More than 50 years after Anfinsen, the idea persists that molecules fold upon themselves to this single, rigid state.

And yet, it moves

We know a lot more now than we did in 1961. We know that folded molecules keep some fundamental flexibility and still move and jiggle, despite their folded shape. Anfinsen’s Dogma isn’t incompatible with this understanding, it only needs one concession: folding into a three-dimensional shape restrains a molecule’s flexibility, but doesn’t remove it.

Over the intervening years, more complicated models for molecular behaviour have emerged that take flexibility into account. These models can sometimes still treat flexibility as the exception rather than the rule, but are a welcome improvement. Biochemists and biophysicists fight over the relative contributions of competing induced-fit and conformational selection models. Despite this bickering, these models are compatible and are starting to be reconciled in a new synthesis of molecular flexibility and action. Key to understanding this phenomenon: jiggliness. From floppy to blocky to jiggly: we are at the beginning of the jiggly-molecule paradigm.

Several grand challenges in biochemistry depend on a nuanced understanding of molecular flexibility. If we want to start to solve these problems, we need to get better about talking about jiggly molecules. We need to know not just what a molecule’s structure is, but also how that molecule moves. Some specific problems that require an understanding of flexibility include:

  • Predicting how two molecules interact. Fischer’s lock and key model is conceptually useful, but high-resolution models have shown that it is usually too simplistic. Upon interaction, molecules will change shape as they come together. It’s a rubber key in a JELL-O lock. Because of this, it is still almost impossible to predict the productive interaction of two molecules without accounting for flexibility.
  • Determining the impact of amino acid changes on molecular function. Reductionism often fails when we try to pull apart the action of a single amino acid on a protein’s function. While we can make changes that disrupt interactions, prediction of changes that form new interactions requires understanding dynamic flexibility. We also know that mutations that have no effect on the protein structure can have dramatic effects on dynamics, and hence function.
  • Allosteric effects are still impossible to predict. Changes caused by binding of a compound that alter a molecule’s properties are almost never easily determined by their shape alone. Flexibility, dynamics, and interaction energies are critical to understanding how allosteric transitions take place.
  • The active state of a protein is not well populated in experiments. The state of a protein that carries out its function is almost always not the “rest state” – that is, the most stable state. We find low-energy states in crystallography and other techniques, but the states of proteins that are poorly occupied are frequently the most important states. We usually have to infer the active state from the data we are able to measure. Understanding dynamics and flexibility are necessary to learn and model how molecules reach their active state.

Move past static structures – Embrace the molecular jiggle!

The paradigm of the jiggly molecule is starting to take hold. New technologies like free-electron lasers and improved cryo-electron microscopes are starting to allow us to look at single molecules. This will allow us to directly observe states of molecules and compare them. Single-molecule fluorescence and biophysical studies let us harvest data from single particles, to appreciate the subtleties of their action.

Molecular dynamics simulations get us closer to an ensemble-level understanding of molecular data, and grow more powerful every year with advances in computing, letting us model complicated and flexible systems of molecules. Well-designed experiments can use NMR techniques to their true potential, probing the flexibility and structure of biomolecules. Although in their infancy, ensemble methods are starting to be used in crystallography and scattering methods. Hybrid methodologies combine information from many sources to integrate into comprehensive models.

The developments I’m most excited about, however, have come from outside of the scientific world. Developments in animation are bringing the molecular world to life, and animators are merging the science and art of displaying molecules. The jiggliness of molecules becomes completely clear once you observe them in video.

Viewing the movement of a simulated molecule grants an intuitive understanding of the world of a molecule much better than a 1900-word blog post ever could. If a picture is worth a thousand words, an animation is worth a billion. Professional molecular animators are using experimental data to inform their illustrations of molecular behaviour. As we move from printed journals to digital publication, these animations will play an ever-larger role in illustrating the behaviour of substances on the molecular level.

An intuitive understanding of jiggly molecules opens up a new level of problems we can approach in biochemistry. No matter what you know about molecules, appreciate the complexity these dynamic, flexible objects show. Appreciate and embrace the jiggle. If things are just right, the molecules might embrace you back.

A Tuberculosis Enzyme Decapitates Vital Energy Molecules to Kill Cells

If you can’t defeat your enemies by force, defeat them with subterfuge. Mycobacterium tuberculosis, the bacterium that causes tuberculosis, lives by this mantra. While other disease-causing bacteria mount an all-out assault on the body, the tuberculosis bacterium lies low, hides, and slowly kills us from the inside out. M. tuberculosis is a master of stealth and deception. Like the Greeks entering Troy in a wooden horse, it hides from the immune system within our own cells – often the very same cells that guard us from bacterial infections. M. tuberculosis is a treacherous enemy.

In order to infect us, many bacteria use protein toxins to kill or manipulate our cells. The cholera and diphtheria pathogens are famous for producing toxins that attack our tissues. These toxins are fired as cannon blasts that break into our cells, and chemically change our own molecules to drive a toxic effect. Until recently, we thought that M. tuberculosis didn’t have any of these toxins. We thought that it accomplished its stealthy invasion through other means. It turns out, we were wrong: M. tuberculosis has a toxin, but it’s not a cannon, it’s an assassin’s blade.

TNT, a deadly enzyme produced by M. tuberculosis

Last year, researchers at the University of Alabama at Birmingham identified a toxin from M. tuberculosis that kills immune cells. They named the enzyme tuberculosis necrotizing toxin, or TNT, because it induces necrosis, or cell death, in the target immune cells. In a recent follow-up, they have now demonstrated exactly why the toxin is so deadly.

The TNT toxin is particularly nefarious. Rather than the upfront assault of the cholera and diphtheria toxins, it kills its host cells from the inside out. TNT breaks down the cell’s stores of NAD+, or nicotinamide adenine dinucleotide. NAD+ is an important energy carrier*, used by all life forms from the tiniest bacterium to the giant sequoia. Our cells use NAD+ to shuttle energy between biochemical processes. NAD+ harvests energy from the breakdown of glucose and other molecules, and passes that energy to other systems that drive the processes of life. If you remove a cell’s NAD+, the cell will die. This makes NAD+ a convenient target for M. tuberculosis. Destroy the NAD+, destroy the cell.

TNT Enzyme Decapitates NAD+. Artist's Interpretation.

The tuberculosis bacterium uses the TNT toxin to do exactly this. It acts as the assassin’s blade, selectively destroying all of the NAD+ in the host cell. The enzyme “decapitates” NAD+ by breaking a critical bond, separating the head from the body of the molecule. Without their stores of NAD+, the immune cells that host the tuberculosis bacteria die, releasing them to spread to other cells. Triggering necrotic cell death also bypasses more orderly means of cell death that would allow the immune cell to sacrifice itself and quarantine the mycobacteria.

Stealth attack

In tuberculosis, some of the worst symptoms aren’t mediated by the bacteria themselves, but by the immune system’s inappropriate response to the bacterium. This is part of why it wasn’t always clear that M. tuberculosis would need a toxin at all. For the most part, M. tuberculosis lies low, waiting for its chance to strike. When it does strike, it appears to use TNT to do so in a selective and controlled way. Like the Greeks crossing the walls of Troy, the TNT enzyme has to be helped across a membrane into the host cell. Mycobacteria live within compartments in the cells they infect, but in order to disrupt the metabolism of those cells, the toxin needs to reach the cytoplasm. TNT doesn’t contain any machinery to get itself into the cytoplasm, so it has to be helped by an export complex called ESX-1.

This is different from the cannonball toxins of cholera and diphtheria. Those toxins have their own means of forcing their way into a target cell. TNT is a small enzyme, which means it doesn’t carry any parts to help it cross into the host cell by itself. The researchers identified that the ESX-1 system is needed for TNT to get into the cytoplasm, although a huge amount about this process remains unknown. This is a very interesting area for future study, because moving TNT into the cell probably involves an important switch in the bacterium’s strategy. M. tuberculosis switches from lying silently in wait to mounting its sneak attack under cover of darkness.

Protection from a double-edged sword

There is an interesting consideration for any bacterium that makes a toxin, especially one that targets a ubiquitous molecule like NAD+. How does M. tuberculosis avoid killing itself? The bacterium synthesizes the toxin inside its own cells, but NAD+ is important for all life, including M. tuberculosis itself. How does the bacterium keep the TNT enzyme from destroying its own NAD+? Well, if this toxin is the assassin’s sword, a second protein, IFT (immunity factor for TNT), is the scabbard.

When the TNT enzyme is made in the mycobacterium, it is secreted, but if it somehow remains in the cell, it is bound by a molecule of IFT. This IFT blocks the part of the TNT enzyme that interacts with NAD+, inhibiting the enzyme. The researchers determined the structure of TNT and IFT together, and showed convincingly how the IFT would completely block TNT activity by obstructing the interaction of TNT and NAD+.

Outside of the bacterium, TNT and IFT are separated, and the toxin is active. Inside the cell, IFT acts like the sheath that protects a swordsman from their own blade. It’s a cleverly-evolved means for M. tuberculosis to protect itself from its own weapon.

Every enzyme is a unique snowflake

The TNT enzyme is a great example of a reductionism-breaker. In a lot of molecular biology, if you want to probe an enzyme’s activity, you create site-directed mutants. In these mutants, the functional amino acids that interact with the target of the enzyme are replaced with non-functional amino acids. In this way you can test, in a straightforward way, what role the individual amino acids play. Reduce the activity of an enzyme to its constituent amino acids.

If you replace a functionally critical amino acid, you expect to get a non-functional protein. In TNT, removing the amino acid most important in related enzymes only reduced TNT’s activity by half. This isn’t much by molecular biology standards. This highlights that even though the same residue is present in this enzyme as related toxins, TNT appears to play by different rules than the rest. This actually is a fairly common story when studying enzymes, especially ones that are involved in bacterial disease**. A lot more work is needed to map out the mechanism that this enzyme uses, but this is a great reminder not to assume similar enzymes work exactly the same.

Why do I like this paper so much?

I really, really like this paper. Although full disclosure, I have a soft spot for bacterial toxins after doing an undergraduate placement in a toxin lab. There are a couple more points I’ll mention:

The researchers discovered the NAD+-breaking activity of the enzyme through some extremely clever detective work. They observed that when they produced the toxin in standard lab E. coli bacteria, the E. coli died. This happens occasionally, even with proteins that aren’t toxins. The next step they took was key. They sequenced the RNA of the expression bacteria, and found that the E. coli had up-regulated genes responsible for the synthesis of NAD+.

Some other bacterial toxins break down NAD+, notably another toxin produced by S. pyogenes. The researchers tested if TNT also acted on NAD+, and found the enzyme carried out the same reaction. I think this is a great case of critically evaluating your lab materials, and sharp thinking about the systems you work with. In this case troubleshooting lab problems appears to have turned into a huge discovery.

This research also identifies the first known bacterial toxin from M. tuberculosis. This bacterium is notable within microbiology because it tends to always play by different rules, growing slowly, using distinct chemistry and metabolism for everything it does. It would be easy to believe it fights the immune system in different ways, as well. This is partly true, as the TNT toxin is quite different from any other known toxin (it couldn’t be identified by comparisons to known toxins). But it seems M. tuberculosis uses a familiar weapon, just in an unfamiliar way, fitting its stealthy mode of infection.

Lastly, TNT and IFT are an interesting case study of the identification of unknown gene functions. The M. tuberculosis genome was sequenced in 1998. The NAD+-destroying function of mycobacteria was seen in the 1960s, as was the inhibitor function of IFT. However, without the appropriate understanding, no one could connect these functions to the genes until now. While modern sequencing technologies help us compile long lists of genes, we still need smart, careful experiments like this study to work out just what these genes do. It’s a great example of careful, inquiry-driven research in the post-genomic era.

M. tuberculosis uses the TNT toxin to decapitate a molecule that our immune cells need to live. A stealth murder weapon wielded by a treacherous infiltrator. This paper is a great piece of work illustrating how a nasty pathogen manages to sneak past our immune defences and make us sick. I’m very interested to see what we learn about this system in the future. I think this is an excellent piece of work, the UAB researchers should be proud.

Update 2015.09.17 11:21: Initially, I had assumed that all TNT will interact with an IFT before it is exported. I have learned this probably isn’t true and most TNT is exported without ever seeing an IFT.

Citation:

Sun, J., Siroy, A., Lokareddy, R., Speer, A., Doornbos, K., Cingolani, G., & Niederweis, M. (2015). The tuberculosis necrotizing toxin kills macrophages by hydrolyzing NAD. Nature Structural & Molecular Biology, 22(9), 672–678. DOI: 10.1038/nsmb.3064

* Technically, NAD+ is a redox shuttle, but that’s a discussion for another time.
** The virulence proteins of pathogens tend to change more rapidly due to the ongoing evolutionary arms race between the pathogen and the host.

A Blog About Biochemistry

Hi! Welcome!

If you’re reading this, you’ve found your way to my tiny corner of the web, where I will be writing on a regular basis about the things that excite and challenge me in science (with regular digressions, as mood strikes). I hope you might join me for the ride. You can also find me elsewhere @superhelical.

There’s one important thing to know about me at the start: I’m a huge nerd about molecules. While many students break into a cold sweat at the mention of the term “SN2 reaction”, I’ve always enjoyed organic chemistry. It can be like playing with LEGO. Except the LEGO is microscopic, you assemble the LEGO by shaking it vigorously in a flask, and some types of LEGO can kill you.

Making a LEGO benzene that is scientifically consistent with good design is surprisingly challenging

It is fascinating to see the world through a lens of molecules and their interactions. I love thinking about the molecules that drive our lives and the environment around us. Proteins link together when you bake a loaf of bread. Corals deposit minerals to build their skeletons. Dye molecules bond to fibres to colour a cotton t-shirt. Even mundane things like the smell of the air after it rains or the way a hot iron straightens hair have molecular explanations. There’s a hidden richness of molecular phenomena around us. Molecules shape the world.

You can expect a heavy emphasis on protein biochemistry in this blog. The topics I find most interesting often involve biological molecules, in particular the structure and dynamics of molecules like proteins and nucleic acids. It’s a shame that so much published work on these biomolecules doesn’t receive much attention. I’m going to dive in, highlight some of this work, and explain why I find it exciting. Hopefully I can help you enjoy thinking about biochemistry, too.

What Exactly is Biochemistry?

So, to kick off, I’d like to start with a surprisingly difficult thing to do: define what “biochemistry” actually is.

This is a specific case of a more general question: what defines any scientific discipline? Fields are labels that divide researchers and techniques into categories, even though the borders of these fields always remain fuzzy. More than techniques and objects of study, the defining features of a field are usually cultural or philosophical. I’ve learned that when speaking to physicists, one of the first questions to ask is “experimental or theoretical?”, because the most fundamental divide in physics lies between theory and practice. Similarly, field biologists are a completely different breed from those who do lab work. As you get more specific, the distinctions get smaller, but you’d be surprised how differently an immunologist and a microbiologist view the world.

When we look at the fields that study molecules of life, it doesn’t help that the names of many related disciplines are so damn confusing. Chemical Biology, Biochemistry, Molecular Biology, and Biological Chemistry all mean roughly the same thing in the dictionary. However, those of us working in these fields know that they refer to different communities that have distinct organizational cultures, and that target different types of problems using different methods and technologies.

I’ve cast around a little to get an idea of how some working scientists define “biochemistry” specifically, and received some interesting opinions from people in various fields:

  • It’s the necessary wet-lab work required to validate a computational model (Bioinformaticians)
  • It’s ELISAs and Western Blots (Immunologists)
  • It’s the damn prerequisite that makes you memorize amino acids (Pre-Meds)
  • It’s chemistry without enough flammable solvents (Synthetic Chemists)
  • It stinks (Physicists)

Even people who work in closely aligned fields can’t agree on what biochemistry is:

  • It’s a broad umbrella, including sub-fields like chemical biology, structural biology, cell biology and systems biology
  • It’s an anachronism, and has been absorbed into “modern” fields like chemical biology, structural biology, cell biology and systems biology
  • It’s a defined set of techniques, used in fields like chemical biology, structural biology, cell biology and systems biology
  • It’s a historical administrative handle used to group chemical biologists, structural biologists, cell biologists and systems biologists into a single department

The definition I sympathised with the most is a little vague, but speaks to a simple truth:

  • It’s whatever you get when you look at biology with a chemical understanding

Biochemistry seeks to explain biology at the chemical scale. Start with simple building blocks like carbon, oxygen, hydrogen and nitrogen. Sprinkle in the occasional phosphorus or sulfur. How can something as complex as a tree, cat, or human emerge from these materials? Or even a single, replicating bacterial cell? There are biochemical questions to answer at every level of complexity, from shapes of sugars and amino acids, all the way to the muscle tissues of a sperm whale.

Biochemistry seeks to explain how the complexity of life can emerge from simple atoms. To do this, it takes chemical principles like thermodynamics, biological principles like natural selection, and some principles developed from scratch, like models that explain the behaviour of enzymes. Biochemistry is an amalgam of principles from various fields, pulled together and applied toward the molecules of life.

Is the Field of Biochemistry Going Extinct?

Some might say biochemistry stopped being innovative decades ago and other fields have moved on and left the biochemists behind. That all the hard problems in biochemistry have been solved, that there’s nothing new to find.

I disagree, but it is time for a pivot.

There is one thing I might complain about in the state of biochemistry in 2015. Work in the field is often descriptive, reporting on the world but not gaining much insight from it. This is a real shame, because a vast range of phenomena still remain for biochemists to observe, study, and clarify.

A descriptive focus means that while biochemistry has generated some extremely useful tools, it is not as active a field of research as it could be. I think this is a big part of why some scientists seem to view biochemistry as those tools, rather than as an independent area of study. To stay relevant, biochemists need to keep in mind that there are still Grand Challenges in biochemistry that we have to continue working on. These questions will drive the frontier of biochemical research. I’ll describe some of the most compelling challenges.

The Grand Challenges of Biochemistry

  1. The emergence of functional macromolecules
    Starting from simple atoms, how did the first functional molecules emerge? How have these functions changed over time? Can new functions emerge?
  2. The folding problem
    A long chain molecule needs to fold back upon itself to have a function. How a molecule does this efficiently and with high fidelity is still a mystery in many cases. How does the information in a molecule convert from chemical sequence to a three-dimensional structure?
  3. Modelling dynamics
    Many important functions of molecules depend on the fact that they are flexible machines that can bend into many different shapes as part of their function. We currently do a very poor job of modelling, understanding and predicting this flexibility. How do we understand this flexibility and predict where it can have important functional roles?
  4. Exotic chemistries in water
    Living cells don’t have access to an organic chemistry lab to create complicated molecules of life. How are exotic chemical reactions carried out, all in a water-based environment?
  5. Selectivity versus sensitivity
    Some molecules need to interact with a broad range of other molecules, others a very narrow set. How is this accomplished? What features govern this switch between sensitive, broad interactions, and highly specific, tight interactions?
  6. Non-ideal activity in real environments
    The history of biochemistry has involved purifying and isolating proteins from the complex mix of things in a cell, in order to study that protein in a test tube. But their natural environment is much more complex, full of thousands of other molecules. These other molecules will greatly affect and alter function. How can we understand the natural environment of a molecule?
  7. Ultrastructural chemistry
    Many of the most interesting molecules to study in living cells are absolutely gigantic. Normal rules of molecular behaviour break down when we get to those sizes. Modelling chemical properties at large scales is an ever-pressing challenge of biochemistry. What emergent properties occur in large molecular complexes?

What is the Future of Biochemistry?

With these Grand Challenges laid out, where does the future of biochemistry lie? Every new technology helps us tackle these problems in even greater detail. I’m optimistic that there are questions we can now start to ask and answer that we couldn’t even 10 years ago.

Technologies that get me excited include the high-resolution structural work now accessible at free-electron laser facilities, ever stronger supercomputers to calculate the dynamic nature of molecules, ingenious in-lab evolution experiments that probe how molecules change over generations, single-molecule experiments that track the behaviour of individual molecules, and many others. The future of biochemistry isn’t extinction; it’s evolution. We can now probe deeper than ever before into the inner workings of our molecular selves. The future is bright indeed.

Some may tell you biochemistry isn’t worth your time. I’m going to fight to show otherwise.

I hope you’ll come along with me.