A Motion-Sensor Switch for Antibiotic Resistance: My New Paper in the Journal Structure


I’ve been working on my thesis for the last few months, squirreled away in libraries and coffee shops, but now that I’ve submitted and am waiting to defend, I’m happy to share what’s happened in the meantime! A research paper I’ve been working on for a long time has been accepted and published in the journal Structure. You can find it online, here. This paper makes up the bulk of the work in my PhD thesis, containing 10 protein crystal structures, and I’m glad to have it finally available to the world!

In the paper I describe the structure of a protein that causes antibiotic resistance. This protein makes a bacterium resistant to a class of antibiotics called aminoglycosides. They are one of the oldest classes of antibiotics and include some well-known compounds such as streptomycin, kanamycin, tobramycin, and gentamicin. They are effective antibiotics used against a broad variety of bacteria, and resistance factors that make them ineffective are a serious problem in the treatment of infections.

The protein that I work with generates aminoglycoside resistance. It chemically alters the antibiotics, turning them into inactive byproducts, making any bacterium with the protein resistant to aminoglycoside antibiotics. The protein acts as a “resistance factor” – bacteria that carry the gene for this protein use the protein to deactivate the antibiotic. They can easily break it down and go about their bacteria business instead of being killed.

This protein is called APH(2”)-Ia (more on that name later). It inactivates several different aminoglycosides. To learn how it carries out this transformation, I looked at the structure of the molecule and how it changes when it interacts with the antibiotic. To understand why the structure is important, let’s talk about what this type of molecule really is.

Enzymes: biological molecules that make chemistry happen

Proteins that carry out chemical reactions are also called enzymes. They allow chemical changes to happen that would otherwise occur at extremely low or nonexistent rates. The enzymes that act on aminoglycoside antibiotics are collectively referred to as aminoglycoside-modifying enzymes. They deactivate antibiotics by transferring part of a common metabolic molecule to the antibiotic. This makes the antibiotic worthless, and lets the bacterium survive the toxic effects of the compound.

An enzyme drives a reaction by forcing these chemicals together. It does this by specifically binding to the molecules, in a mechanism sometimes referred to as a lock-and-key interaction. An enzyme is specific for the molecules that it acts on, just as a lock only opens for a specific key. The keys of an aminoglycoside-modifying enzyme are the antibiotic and a cellular molecule that donates a chemical group to the antibiotic. In APH(2”)-Ia, that cellular molecule is guanosine triphosphate, or GTP. The enzyme binds the antibiotic and GTP tightly, and undergoes changes in structure to drive a chemical change between these molecules. This all happens in the most important part of the protein, the active site.

The active site is the most important part of an enzyme. This part of the protein is typically a deep pocket that the rest of the molecule wraps around, where the enzyme holds and manipulates the molecules, and where chemical bonds are broken and formed. The enzyme separates these molecules from water and other compounds, and in this isolated state, the enzyme drives the chemical change to occur.

The active site of any enzyme is typically extremely sensitive to the shape and properties of the molecules it acts upon. Evolution has driven enzymes to develop a high degree of specificity for these molecules, known as substrates. An enzyme typically has only a few different substrates that it acts upon. The combination of the specificity of chemicals that an enzyme acts on and the reaction it carries out gives it its name, in this case aminoglycoside phosphotransferase.

Aminoglycoside phosphotransferase

I try to avoid saying the name of the protein I work on unless I want to sound important.

APH(2”)-Ia stands for aminoglycoside (2”)-O-phosphotransferase type Ia. It’s usually preferable to just say “the enzyme”, and that’s what I’ll mostly do here. As the “type Ia” might suggest, the enzyme is part of a larger group of enzymes that all carry out a similar reaction. They use magnesium ions, held tightly in the active site, to move a phosphate group, PO₄³⁻, from one molecule to another. This particular enzyme moves a phosphate from GTP to the aminoglycoside antibiotic. GTP is a nucleoside triphosphate molecule, the cellular phosphate source in this reaction. You might be familiar with a similar molecule, ATP, the “energy currency of the cell”.

APH(2”)-Ia is somewhat unusual: most similar enzymes use ATP, but this enzyme uses GTP. Researchers in my lab and in other groups have studied the relationship between these proteins and GTP, and there are still some interesting unsolved mysteries about the use of GTP in these proteins. However, for this work, I focused on the part of the molecule that is the same between ATP and GTP: the triphosphate.

Enzymes that use nucleoside triphosphates as a phosphate donor have a special name: kinases. We know quite a lot about kinases. Their importance for cell biology was discovered in the 1970s-1990s, when researchers learned that they are critically important regulators of cellular metabolism, cell division, and many other processes. They add phosphate groups to proteins, generating molecular communication networks in the cell. In mammalian cells kinases are typically involved in important cellular decisions, and because many of these decisions impact how a cell grows and divides, many of these kinases are involved in cancer. When it was found that aminoglycoside phosphotransferases were kinase enzymes, there was already a large amount of research on similar enzymes to compare to. However, comparisons to other enzymes only get you part of the way. To really learn how any molecule works, you have to look at it directly.

How do you look at something a few nanometers in size?

A mantra in the molecular sciences is that structure dictates function. I argue it needs a little updating, that structure and dynamics dictate function (more here and here), but you need to have a structure before you can study how it moves. Determining the structure is where we start.

Using the techniques of structural biology, we can get a direct look at the molecules that carry out biological functions. Techniques like nuclear magnetic resonance and electron microscopy can provide excellent structural information about biological macromolecules, but the technique I used for these experiments was X-ray crystallography. Matt Kimber, the professor who taught my undergraduate structural biology course, referred to crystallography as a “one-trick pony, but it’s a damn good trick”. Well, I’ve ridden that pony right to the end of my degree.

To determine a structure by crystallography, you purify the protein that you’re interested in, and run an array of experiments in parallel to try to find conditions under which it will form crystals. Not every protein can crystallize, and not every crystal gives you good data, which makes protein crystallography an intimidating technique. The perceived risk of protein crystallography experiments drives many in the molecular sciences to treat protein crystallographers with a sort of reverence. I don’t know that that reverence is particularly well-placed, but I’ll take it all the same.

Crystals of APH(2”)-Ia + GMPPNP. This is a ~3 μL drop with protein and various chemicals, suspended upside-down on a glass cover slide. The bright colours are because I used a polarizing filter; crystals do cool things when you shine polarized light through them!

If you are able to make protein crystals of sufficient size and quality, then you can collect X-ray diffraction data with the crystals. Several companies sell instruments to collect this diffraction data, and there are dedicated facilities that conduct these experiments using high-intensity X-rays from accelerated electrons. The instrumentation for these experiments continues to dramatically improve, allowing us to get more and better data from our protein crystals all the time. The job of the X-ray crystallographer is much easier these days than it used to be.

Using a home-based or synchrotron source, a beam of X-rays is directed at the protein crystal. X-rays interact with the electrons of a molecule. Because of the physics of diffraction from a crystal, the result is a pattern of diffracted X-rays that can be recorded. By measuring these diffraction spots, we can apply physical rules about how diffraction works to interpret the distribution of electrons within the crystal of proteins. From the intensities of the diffracted X-ray spots, we can reconstruct the shape of the electron density of our molecule of interest.
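
To make that last step a little more concrete, here is a minimal sketch in Python (a toy one-dimensional “crystal” of my own invention, nothing like the real crystallographic software) of how a density can be rebuilt from structure factors by Fourier synthesis:

```python
import numpy as np

# Toy 1-D illustration: fake a "unit cell" containing two atoms, compute the
# structure factors F(h), then rebuild the density from them. All numbers here
# are invented for illustration.
n_points = 200
x = np.linspace(0, 1, n_points, endpoint=False)                  # fractional coordinate
atom_positions = [0.25, 0.62]                                    # hypothetical atoms
density = sum(np.exp(-((x - a) ** 2) / 0.001) for a in atom_positions)

# Structure factors: F(h) = sum of density * exp(2*pi*i*h*x) over the cell
h_range = range(-30, 31)
F = np.array([np.sum(density * np.exp(2j * np.pi * h * x)) for h in h_range])

# A real experiment only measures the spot intensities, i.e. |F|^2; here we keep
# the phases too, which is exactly the "phase problem" mentioned below.
rebuilt = np.real(
    sum(F[i] * np.exp(-2j * np.pi * h * x) for i, h in enumerate(h_range))
) / n_points

print(np.allclose(rebuilt, density, atol=0.01))   # True: the density comes back
```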

X-ray diffraction from a crystal of APH(2”)-Ia. The spots are X-rays diffracted from the crystal, and the intensity of those spots is related to the shape of the molecule in the crystal. The further out from the centre a spot lies, the higher the resolution of the data (the lower the number in Å), and the better the quality of structure you can build. This crystal diffracted to ~2.15 Å, or 0.215 nanometers.

At this point, the job still isn’t done. In some cases, you have to solve what we call the phase problem, although in this case it wasn’t too much trouble, so I’ll jump past it. However, there is still a considerable amount of analysis required to interpret what the electron density means, and what the shapes of the molecules that fill this electron density truly are. It’s the crystallographer’s job to build a molecular model that recapitulates the observed electron density as closely as possible. Deciding when the model is “done” is an ongoing struggle, similar to that experienced by artists and writers: there is always another brushstroke, sentence, or water molecule that could improve the final product, but at some point you stop, call your model “finished”, and interpret what it says about the chemistry of the molecule you’re studying.
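
As an aside, one standard yardstick for “how well does the model fit the data” is the crystallographic R-factor, the normalized disagreement between observed and model-calculated structure-factor amplitudes. A minimal sketch, with made-up numbers:

```python
import numpy as np

def r_factor(f_obs: np.ndarray, f_calc: np.ndarray) -> float:
    """R = sum(|F_obs - F_calc|) / sum(|F_obs|); lower means better agreement."""
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(np.abs(f_obs))

# Invented amplitudes, just to show the calculation:
f_obs = np.array([120.0, 85.0, 230.0, 15.0, 60.0])    # from the measured spot intensities
f_calc = np.array([112.0, 90.0, 215.0, 18.0, 55.0])   # calculated from the current model

print(f"R = {r_factor(f_obs, f_calc):.3f}")            # ~0.07 for these made-up values
```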

Part of the model-building process for APH(2”)-Ia. The blue/purple mesh represents the electron density of the molecule, through which I build the model of the protein, with yellow (carbon), red (oxygen) and blue (nitrogen) atoms linked together to form the structure of the protein. The mesh is a transformation of the experimental data, while the sticks built into it are the model of connected atoms that we interpret from this data. The cross in the bottom left represents a water molecule.

In the case of APH(2”)-Ia, a lot of the challenge for me was making the models as good as possible, and after a long struggle, they were of sufficient quality that I could use them to gain some interesting insights about how the protein works and to propose a new feature of an antibiotic resistance enzyme.

Determining the structure of APH(2”)-Ia

When I started working with APH(2”)-Ia, others had already determined the structures of three related molecules: APH(2”)-IIa, -IIIa, and -IVa. These structures give us the shape of the enzymes and some of their interactions with their substrates, but one key feature was always missing: a well-defined triphosphate ligand in the active site.

Without the triphosphate in the enzyme, we can’t get any sense of how the enzyme interacts with that molecule. And if we can’t see it, we can’t predict how it works, or understand how to affect it.

Building the first versions of the APH(2”)-Ia structure wasn’t as hard as I’d expected. I had a few nights up late, excited to carry out my next rounds of model-building and improving the fit to the data, and was proud to build models with some pretty excellent statistics for the quality of X-ray diffraction data I was working with. The challenge came when I had to start interpreting what was in the active site of the enzyme.

Overview of the structure of APH(2”)-Ia. There are four copies (A-D) within the crystal structure, and I blow up one here for illustration. The three regions of the protein, the N-terminal lobe and core and helical subdomains of the C-terminal lobe are indicated. The nucleoside triphosphate binds between the N-lobe and core subdomain, while aminoglycosides bind between the core and helical subdomains.

Probing the active site of an antibiotic resistance enzyme

Remember how I talked about how the enzyme takes a phosphate group from GTP and moves it to the antibiotic? Well, in the first structures I looked at, that really didn’t seem possible. I used a GTP-like analogue molecule called GMPPNP to make the protein crystals, and in the structures, the phosphate groups of the GTP analogue were stuck in a position where they couldn’t react with the antibiotic. I named this the stabilized conformation, because it seems to be sitting in a position where it can’t react with anything.

There were several similar enzymes I could look to for comparison, and none of them show this stabilized triphosphate form. They have a different, activated conformation, which directs the phosphate toward the other substrate. I was able to make the molecule adopt the activated conformation, but I had to break the protein by removing an important amino acid to make it let go of the stabilized phosphate.

Stabilized and activated forms of the GMPPNP molecule in the APH(2”)-Ia active site. In both cases the magnesium ions in the active site stay the same, but the phosphate groups (yellow) of the molecule switch to a different location, far from or close to the aminoglycoside, which contacts D374. Removing the S214 contact from the enzyme gave us the activated conformation, which suggests that the normal enzyme holds the compound in the stabilized form for some reason. Why?

So why didn’t the protein in my structures normally activate the triphosphate?

The answer came when I added antibiotic molecules to the crystals. Introducing the antibiotic after the crystals were grown let us track the changes it drove. I had a good idea from other aminoglycoside kinases where the antibiotic would bind, and initially I was just trying to confirm that it bound in the same position here. Fortuitously, the addition of the aminoglycoside substrate also drove changes in the shape of the protein, and of the GMPPNP molecule. The antibiotic activated the triphosphate by binding to the enzyme.

The flip between these states gives us a clear way to understand how the enzyme could turn itself “on” when it encounters an antibiotic.

A motion sensor switch for antibiotics

Gentamicin binding to the APH(2”)-Ia active site pulls the equilibrium of conformations in the active site from the stabilized to the activated form of the triphosphate group. A hydroxyl group (dark red) of gentamicin lies closest to the activated triphosphate, where it could then be phosphorylated.

So, we see a new shape of the triphosphate group that can’t modify the antibiotic. It is held in an awkward, non-reactive form in the back of the enzyme active site. When an antibiotic comes in, the enzyme shifts its shape in response. In the process of making this change, the GTP triphosphate moves into its active position. This catalytic switch keeps the GTP inactive until the antibiotic is bound, and then activates the triphosphate into a shape that lets it carry out the reaction.

But why is this necessary?

This is where we have to speculate a bit. In the stabilized position, the enzyme can’t carry out its normal reaction, but it also can’t carry out a second reaction, called hydrolysis. Hydrolysis is the breakdown of a molecule by water. As all biological molecules are found in water, water is always available to react with an activated molecule. Normally, the kinase enzyme should transfer the phosphate to the antibiotic substrate. However, when there is no antibiotic around, it’s possible that a water molecule can sneak into the active site and react with the GTP instead. The result is that the GTP is broken down and no antibiotic is inactivated. This wastes precious GTP for the bacterium, so any enzyme that cuts back on rates of hydrolysis will be preferable for the survival of the cell. This mechanism may have evolved to reduce this energy wastage by the APH(2”)-Ia enzyme. In environments where there is a lot of competition and resources are scarce, enzymes that conserve energy are an enormous benefit for a bacterium.
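
To put a rough number on that wastage argument, here is a toy partitioning calculation (the rate constants are invented purely for illustration; they are not measurements from the paper):

```python
def fraction_gtp_wasted(k_transfer: float, k_hydrolysis: float) -> float:
    """If transfer to the antibiotic and hydrolysis by water compete for the same
    activated GTP, the wasted fraction is k_hydrolysis / (k_transfer + k_hydrolysis)."""
    return k_hydrolysis / (k_transfer + k_hydrolysis)

# With no antibiotic bound, productive transfer can't happen, so every activated
# GTP that reacts is lost to water:
print(fraction_gtp_wasted(k_transfer=0.0, k_hydrolysis=0.05))   # 1.0 -> all wasted

# With antibiotic bound and the triphosphate activated, transfer dominates:
print(fraction_gtp_wasted(k_transfer=5.0, k_hydrolysis=0.05))   # ~0.01 -> mostly productive
```

Keeping the triphosphate in the stabilized, unreactive conformation until an antibiotic arrives is a way of avoiding the first scenario altogether.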

This switch between stable and activated forms of the GTP molecule turns parts of the enzyme into a molecular motion sensor for the presence of antibiotics. When they aren’t around, the enzyme hangs out and holds on to the GTP, inactive. It’s only when the antibiotic shows up and sticks to a different part of the protein, that the enzyme undergoes changes that activate the GTP. Like a motion sensor-based system to turn your lights off when there’s no one around, flipping between these states might be an interesting way for the enzyme to turn its activity off and conserve energy when it doesn’t have an antibiotic to modify.

So what’s the big picture?

There are a lot of places this work leads. I’ve skipped over another interesting finding: two different classes of aminoglycoside interact with the protein, even though only one of those classes can actually be modified. I’ll save that for another blog post. There’s also a lot more detail on the specifics of how APH(2”)-Ia works that I’ve glossed over, which we could also explore some time later.

This mechanism isn’t too different from other mechanisms in proteins that carry out various functions in biology. This switch can be considered a type of induced fit or conformational selection in the enzyme, both well-established models for protein behaviour. The thing that makes it different is that this enzyme is an antibiotic resistance enzyme. Usually antibiotic resistance enzymes aren’t thought to be very complicated. They are thought to be inefficient but highly active machines that turn off antibiotics as fast as possible. This work shows us that an antibiotic resistance factor can be modulated and subject to regulation in a way that reduces its energetic waste.

How about the bigger picture?

Taking the inference a step further, this induced activation of the enzyme indicates that there is greater complexity in the action of this enzyme than we previously might have expected. However, it isn’t as surprising as we might think when we remember that many antibiotic resistance factors have been around for millions of years, with a very long time to optimize their catalytic activity. We treat antibiotic resistance as something that pops up fresh when we use antibiotics, but the truth is, as we’ve discussed before, some forms of ancient antibiotic resistance are highly fine-tuned and regulated to respond to challenges in their natural environment.

APH(2”)-Ia is one resistance factor among many. This paper shows us that resistance factors need not just be catalytically optimized machines, that they can contain a degree of fine-tuning that regulates their activity. This nuanced activity is supported by long periods of evolutionary selection to produce highly effective resistance enzymes, ones that lead to terrifyingly effective antibiotic resistant microbes. We now understand one factor a little bit better, and hopefully that helps us just a little bit more in our efforts to beat back the surge of antibiotic resistance.

Citation:

Caldwell SJ, Huang Y, & Berghuis AM (2016). Antibiotic Binding Drives Catalytic Activation of Aminoglycoside Kinase APH(2″)-Ia. Structure, 24(6), 935–945. PMID: 27161980

How to build a protein – lessons from the Protein Engineering Canada 2016 meeting

How do you make a new protein, or a new function in an existing one?

This is the goal of the field of protein engineering. Researchers working in this field use a number of strategies to try to make proteins with new characteristics. The development of proteins with new functions has applications in industry, medicine, and biotechnology.

Want a more stable or more efficient enzyme? Talk to a protein engineer.

Want to convert a protein with one function to a completely new activity? Talk to a protein engineer.

Want to make a molecule with a function never before seen in nature? Talk to a protein engineer.

I was able to hear from many working in the field this weekend at the second iteration of the Protein Engineering Canada meeting, held at the University of Ottawa. The meeting was brief but full of excellent talks. There were some common principles that kept coming up that I’ll try and summarize here.


Reductionism fails in engineering proteins (for now)

Elan Eisenmesser gave an excellent talk about protein dynamics and mentioned how “the worst thing to happen to biochemistry was biochemists”. The argument is that biochemists like simple, reductionist models, but as we’ve studied proteins and how they work, we’ve learned that the truth is much more complicated than that. Many times, to change the function of a protein, changes far from the site of interest are necessary, and we remain pretty terrible at predicting what those might be.

In many talks at this meeting it came up that the most effective modified enzymes are typically achieved through non-intuitive mutations. So, if we limit ourselves to deterministic changes where we can predict the results, we may miss most of the possible opportunities to develop new properties in a molecule. Rational approaches can lead us the wrong way – screening of many different sequences is necessary to find proteins with desired properties. We may someday be able to predict function from sequences alone, but those days remain far in the future.

The challenges in screening and sorting for function

The number of possible sequences even in a relatively small protein is astronomically large: 20 (the number of amino acid options at each position) raised to the power of the number of residues in the protein. As a result, it’s never possible to screen every possible sequence for function. It is always necessary to reduce the number of sequences and structures to test to a manageable level, and this needs to be done in a smart way.
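
For a sense of scale, here is that arithmetic for a modest, 100-residue protein:

```python
import math

n_residues = 100          # a modest-sized protein
n_amino_acids = 20        # options at each position

total_sequences = n_amino_acids ** n_residues              # 20^100
print(len(str(total_sequences)))                           # 131 digits, i.e. roughly 1.3 x 10^130
print(round(n_residues * math.log10(n_amino_acids), 1))    # log10(20^100) is about 130.1
```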

Two strategies for screening a restricted library were presented. Tim Whitehead’s group systematically replaces every amino acid in a protein with every possibility and competes the resulting bacteria against each other to try to alter function. Justin Siegel’s approach is to look to nature and the diversity of sequences in the environment to find better proteins that have already developed in the wild. These strategies guide us to new functions without having to individually screen 20^100 individual proteins, something that we would still be doing until the death of the universe.

A third strategy discussed was the generation of a collection of completely new proteins, never seen before, to screen for brand new functions in proteins:

Making something from nothing

It’s much easier to break something than to build it new from scratch. This is a fact of life, dictated by the rules of thermodynamics.

But, at the same time, it’s surprisingly easy to build something. Michael Hecht gave an excellent talk describing how his group has made a library of brand new proteins and screens them for function. He described how, for some functions, many of these new proteins could carry them out without any evolution to tune their activities. So, if you have the right type of function, it might be possible to find that function in randomly-generated (although constrained) protein sequences. This has some pretty profound implications for understanding the origin of life as we know it.

The finding from this work, and from the analysis of a large family of enzymes presented by Janine Copp, was that you can relatively easily get weak, promiscuous activity from primitive enzymes, which can then be refined into more specialized proteins with higher specificity and activity. This is the case in nature, and also now in the lab, where researchers develop new and better functions in proteins by driving them toward specialization.

Getting comfortable with disorder and dynamism

Similar to the problems with reductionism, X-ray crystal structures have convinced many that proteins are mostly rigid and don’t move much as they carry out their functions. This isn’t really true; I’ve ranted before on this site about why we need to understand that molecules are jiggly. Proteins are nearly chaotic, with interconnected networks of interactions that drive their function.

A recurring theme in this meeting was that an understanding of dynamics is necessary to develop a good grasp of function and how to change it in a molecule. Many talks including Sophie Gobeil’s talk on an antibiotic resistance enzyme and Adam Damry’s award-winning talk on engineering of a dynamic function in a protein touched on these points. It is necessary to understand dynamics to predict function, and while this remains challenging, it is possible to develop some predictive insights through carefully constructed experiments.

The emerging art of protein design is starting to mature, guided by a more comprehensive understanding of protein function, and smart strategies of how to get there. I’m excited to see where the field is headed!

A meeting well spent

In addition to all this work on engineering of proteins, my jaw dropped to see some of the amazing new T3SS structures coming out of Natalie Strynadka’s group, and Martin Schmeing’s presentation of his group’s megaenzyme studies, appropriately set to Miley Cyrus and Taylor Swift.

Overall, an amazing meeting; I hope to go again in 2018. If you’re interested in protein design and engineering, I can’t recommend the meeting enough. Hope to see you there.

 

Music of the Macromolecules

To fully understand a molecule, you first need to learn what it looks like, and then, how it moves. This isn’t easy. I’ve talked before about how unusual biological molecules can be if you’re accustomed to thinking of real-world objects. They are fundamentally flexible and dynamic in a way that everyday objects aren’t. They move chaotically, at lightning speed, crashing through a molecular mosh pit on the sub-microscopic scale.

Protein and nucleic acid macromolecules are like Rube Goldberg machines of interconnected parts. These parts move independently, but in turn influence the other parts of the system as they move. There’s different levels of complexity in this motion. Slow conformational transitions that move large sections – domains – relative to each other can take milliseconds to occur. Ultra-fast bond vibrations take only picoseconds. That’s a difference of nine orders of magnitude. In the time of a single slow domain movement, a billion bond vibrations can occur. In monetary terms, this is the difference between one cent and ten million dollars.

This is what biophysicists refer to when they talk about “timescales of molecular behaviour”. Different types of molecular motions take dramatically different lengths of time to occur. We can only measure a subset of these motions with any one experimental technique. When we try to fully understand a molecule, we need to be aware of all of its motion across all timescales. Unfortunately, we are terrible at understanding things that span such a broad range.

Our brains are trained to think about everyday objects we can see, touch, and manipulate. Microscopic molecules act in ways that make absolutely no sense on the scale of our experience. To help make sense of this strange behaviour, we need a good metaphor.

Molecular Motions are Like Musical Harmonics

What does a molecule have in common with a musical note? You might not be able to think of any way these two things are related (you might also be wondering what I’ve been smoking). A molecule is a collection of atoms, connected by shared electrons. A note is a small part of a Bach sonata, a jazz solo, or Call Me Maybe.

Well, we’ve discussed before how the context a molecule is in is critical for understanding downstream effects. A musical note, as well, gains more meaning by the context it is placed within. The same note means different things if it’s played within a different song, or if it comes from a pan-pipe versus an electric guitar.

But even isolated molecules and isolated musical tones share something fundamental in common. They both display a complexity of vibration, with finer, more detailed vibration superimposed on top of slower, lower-frequency behaviour.

Macromolecules show motion on many scales, superimposed on each other like resonant overtones of a musical note. Structures generated from PDB 5RAS

To a first approximation, a note is just a frequency of sound. Children of the ’90s will remember that before Napster, we could download MIDI files from the internet to play as music. Many computer sound cards rendered MIDI notes as pure tones, reflecting the bare-bones way the notes are stored in the file. The end result is completely devoid of soul, a heartless distillation. It lacks any of the complexity of actual recorded music. The notes are there, but without the details and imperfections that come from real instruments, it seems hollow. Real musical instruments produce so much more than just pure tones.

We know that the character of an instrument changes the nature of the sound it produces. A B♭ from a trumpet and a B♭ from a clarinet sound different to us, despite both having the same fundamental frequency. What makes them sound different from each other, and from a MIDI file? In one word: overtones. Every instrument layers higher-order resonances – vibrations – on top of the fundamental tone, and those resonances depend on the shape, material, and other properties of the instrument. Vibrational overtones add complexity and texture to an instrument’s sound. While the main pitch of the note is the same, the structure and character of the instrument produce different superimposed frequencies that make it unique.
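
If you want to see the difference for yourself, a few lines of Python will synthesize it (the overtone amplitudes below are invented; a real trumpet or clarinet has its own characteristic series):

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)   # one second of signal
fundamental = 466.16                                    # B-flat above middle C, in Hz

# "MIDI-card" style: a single, pure sine wave at the fundamental frequency.
pure_tone = np.sin(2 * np.pi * fundamental * t)

# A crude "instrument": the same fundamental with made-up overtones layered on top.
overtone_amplitudes = [1.0, 0.6, 0.4, 0.25, 0.1]        # relative strength of harmonics 1-5
rich_tone = sum(
    amp * np.sin(2 * np.pi * fundamental * (n + 1) * t)
    for n, amp in enumerate(overtone_amplitudes)
)
rich_tone /= np.abs(rich_tone).max()                    # normalize to avoid clipping

# Both signals have the same pitch; the second has texture layered on top, which
# is what distinguishes one instrument from another.
```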

Like the air perturbed by a musical instrument, molecules also vibrate. These vibrations and movements are central to their function. Individual atoms undergo high speed vibration. Chains of multiple atoms turn and bounce in unison. Loose loops and “floppy bits” of dozens to hundreds of atoms contort, twist, and wiggle. Whole domains can migrate back and forth between different states. Like overtones on a musical note, these motions are superimposed on each other. While large domain movements occur, loops are wiggling, within those wiggles, amino acid side chains are bouncing, and during those bounces, individual atoms vibrate across every bond in the molecule.

Harmonic Potentials

Rotation around atomic bonds follows an energy potential pretty close to a sine wave, while bond vibrations sit in a well that is nearly parabolic. Combining those atomic vibrations and rotations across multiple atoms produces an emergent complexity, where the arrangement of atoms across one bond can influence that of the nearby atoms, and by extension, the rest of the molecule. In theory, we might be able to work out how these energy potentials govern the behaviour of a single molecule.
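
For the curious, these are the kinds of textbook potential-energy terms a molecular mechanics description sums over every bond and rotatable angle (the constants below are placeholders, not parameters from any real force field):

```python
import numpy as np

def bond_energy(r, r0=1.5, k=300.0):
    """Bond vibration: a nearly parabolic (harmonic) well around the ideal length r0."""
    return 0.5 * k * (r - r0) ** 2

def torsion_energy(phi, v=2.0, n=3, gamma=0.0):
    """Rotation about a bond: a periodic, roughly sinusoidal potential."""
    return 0.5 * v * (1.0 + np.cos(n * phi - gamma))

# A molecule's total energy sums thousands of terms like these, and every atom
# participates in many of them at once, which is why the combined motion is so
# hard to untangle.
bond_lengths = np.linspace(1.2, 1.8, 7)
torsion_angles = np.linspace(-np.pi, np.pi, 7)
print(bond_energy(bond_lengths).round(1))
print(torsion_energy(torsion_angles).round(2))
```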

Alas. Were it only that simple.

In a change of structure of a molecule, small transitions of single atoms are layered on top of larger motions. The motion of an atom depends on its own vibrations, as well as those of the rest of the molecule around it, pushing and pulling it along with larger changes. This feeds both ways: while a large transition occurs, vibrations and rotations of progressively smaller components also exert their collective effects on the entire molecule. This chicken-and-egg problem is a big part of why the behaviour of a molecule is so hard to predict, even when we know its structure. It leads to a computational problem that rapidly gets too complicated for even the most powerful supercomputers to handle easily.
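
A back-of-the-envelope estimate (the numbers are my own rough assumptions, not figures from any particular study) shows why brute-force simulation of the slow motions is so painful:

```python
n_atoms = 25_000            # a smallish protein plus its surrounding water
timestep_fs = 2             # a typical molecular dynamics integration step, in femtoseconds
target_ms = 1               # we want to watch one slow, millisecond-scale transition

steps_needed = target_ms * 1e-3 / (timestep_fs * 1e-15)    # ~5e11 integration steps
pairs_per_step = n_atoms ** 2                              # naive all-vs-all force evaluation

print(f"{steps_needed:.0e} steps")                                        # 5e+11
print(f"{steps_needed * pairs_per_step:.1e} pair evaluations in total")   # ~3.1e+20
```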

So just because you know the shape of a molecule doesn’t mean you can capture the full essence of its character. Like a musical note, a molecule’s shape is just the starting point to understanding the complex way it acts on itself and its environment. Static molecular structures are like notes printed on a page. Dynamics* are those notes played aloud, with much more richness than the printed notes alone contain.

When we combine multiple musical notes, the complexity grows even greater. Multiple notes from a single instrument like a guitar or piano interact with each other to form chords. Different instruments in a band or orchestra combine together to further increase the complexity. All of these interactions combine together to make a symphony much greater than the sum of its parts.

Likewise, combination of motions within molecules also adds to a complex whole, where the collective motion of thousands or millions of atoms can lead to much more nuanced patterns of behaviour than we might otherwise expect. Two macromolecules, playing their own melodies, can come into contact (bind) with each other, and if so, they join together in harmony. These molecules become a single, resonating entity, sometimes for a brief exchange, other times for much longer.

Just like in a symphony, the complexity grows even more as we scale up interactions of molecules to complexes, signalling pathways, cells, and even whole organisms. This intricate opera underlies all biological processes.

Fine Tuning

So, if molecules are so complex, how can we make any sense of their messy behaviour? In science, we don’t aim to simply appreciate nature, but to understand it and make predictions about the future, and to generate changes that help us innovate on existing phenomena. Our metaphor of a molecule as a musical note becomes useful to help us move from thinking about how a molecule is to how it might change.

Ask any manufacturer of a musical instrument: changes to small details can dramatically influence the quality of sound you get. This is the same with molecules – changes that alter the dynamics change the character of the molecule. For example, in a protein, biological activity frequently requires large movements between domains of a protein, as well as finer motions of hinge regions, short loops, and amino acid side chains. Changes to a molecule, by post-translational modification, mutation, binding to another protein, or allosteric regulation can distort or modulate the dynamics of a protein. They change the tune of the molecule, by altering its resonances.

This resonance-tuning feature of proteins has led to many mysteries in the literature about macromolecules. With surprising frequency, mutations are found that disrupt the activity of a protein despite being far away from the business end (the “active site”) of the protein molecule. These reductionism-breaking proteins have caused many a biochemist to throw up their hands in dismay at the apparent lack of connection between the mutation they identify and the observed change in molecular function. Happily, though, we’re starting to track down the culprit: dynamics.

Changes to a molecule that cause very little structural change can still alter the molecule’s vibrational frequencies. A protein in which an amino acid important for dynamics has been changed is like a band whose bass player is hung over and can’t keep time.

A paper from earlier this year demonstrates this effect very well. It came from Dorothee Kern‘s group at Brandeis. By looking at two well-known protein kinases (PMC) and reconstituting the evolutionary and biochemical pathway between the enzymes, the group found that a small set of amino acids drives the change in behaviour between the enzymes. Almost none of these amino acids are directly involved in the chemical behaviour of the protein. Like making alterations to an instrument, these mutations tune and refine the dynamic properties of the enzyme and direct it toward different behaviour.

Molecular and structural biologists are just starting to get a good understanding of how mutation and chemical change alter the dynamics and function of proteins. I’ll be watching this field closely for future developments.

From Chaos, Order

The analogy of molecules as musical notes with harmonics isn’t perfect. Music depends on perfectly repeatable, precise tones (that’s not to say innovation and improvisation aren’t important, but they use the same standard notes). Molecules have an intrinsically chaotic nature that is not really predictable at all. But while a molecule is unpredictable when you look closely at the microscopic level, take a step back and its behaviour starts to average out into predictable, regular rules. From a stochastic and random process on the microscopic level, step back farther and farther, and a kind of predictable order emerges.

There’s also a difference in scale. The first overtone of a note is merely twice the frequency of the fundamental, while proteins have motions spanning at least nine orders of magnitude. A better comparison might be the range of loudness our ears can perceive: the difference between a bond vibration and a large macromolecular rearrangement is about the same difference in magnitude as between a pin dropping and a loud rock concert. The musical analogy isn’t perfect, but it helps us understand a hugely complex system with thousands or millions of moving parts in a more intuitive way.

Symphonies in the Molecular World

Molecules are alien entities, very different from anything we interact with in our everyday lives. Their actions are determined first by their structures, then by their dynamics – how those structures move and vibrate. The structure and movement of these molecules result in a complex molecular symphony playing out at the microscopic level. And the single complex note that one molecule makes can be tuned by others, harmonize with partners, and join the grand symphony that goes on in the complex molecular opera of life.

Dynamics are a frontier of structural biochemistry research (a Grand Challenge, if you will). Moving forward, we continue to chip away at the mysteries of how molecules work and learn how to better predict molecular behaviour. Every time we do so, we get a little bit better at listening to the complex arias and beautiful harmonies these molecules play. Our ear gets a little bit more refined, our appreciation of this molecular orchestra more acute. The symphony goes on all around us, can you hear it?

* I’m using the definition of dynamics as it relates to molecules here, as in the field of molecular dynamics modelling. The term dynamics as it relates to music is a slightly different concept than anything we’re discussing here, so I’ll skip over it.

Citation:
Wilson C, Agafonov RV, Hoemberger M, Kutter S, Zorba A, Halpin J, Buosi V, Otten R, Waterman D, Theobald DL, & Kern D (2015). Using ancient protein kinases to unravel a modern cancer drug’s mechanism. Science, 347, 882–886. doi: 10.1126/science.aaa1823