Michael Clarkson

84 posts · 81,039 views

  • January 22, 2009
  • 07:45 PM
  • 1,093 views

Man-made biological clocks

by Michael Clarkson in Conformational Flux

Numerous and diverse biological processes depend on the functioning of an internal clock. Biological timers determine your heart rate, the frequency of cell division, and the way you feel at 3 AM, among other things. Similarly, mechanical and electronic clocks serve essential functions in many kinds of man-made devices. As we begin to develop synthetic organisms for medical and industrial purposes, it will be useful to be able to construct timers within these micro-organisms to control their activity. In two recent papers, scientists have created molecular systems in mammalian and bacterial cells with tunable oscillation periods.

Although the methods used to construct these oscillators and the kinds of cells they were made in differ significantly, the two systems share one key feature: each employs both a positive and a negative feedback loop. In principle, it should be possible to construct an oscillating system using only a negative feedback loop. For instance, you could have a system in which a transcriptional activator enhances the expression of a functional protein as well as that of a transcriptional repressor. As the concentration of the repressor increases, that of the activator falls, causing levels of the functional protein and the repressor to fall in turn, which allows the concentration of the activator to rise again. By tuning the lag in this system one could in theory produce an oscillator with a range of possible frequencies. Yet many systems seem to have evolved with a positive feedback loop as well (in which the activator enhances its own expression).

This curious feature was the subject of a series of simulations reported by Tsai et al. (1) last July in Science. Their studies indicated that a system using only a negative feedback loop would produce a periodic oscillation just as expected.
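The thought experiment above, in which oscillation arises purely from lagged negative feedback, can be sketched numerically. This is an illustrative toy model, not the circuit from either paper; every parameter (production rate, decay rate, Hill coefficient, delay) is an arbitrary choice that happens to put the system in its oscillatory regime:

```python
from collections import deque

def delayed_repressor(steps=40000, dt=0.01, tau=10.0, n=4):
    """Toy negative-feedback oscillator: production of x is repressed
    by the level of x a delay tau earlier (the 'lag' described above).
    All parameter values are illustrative, not taken from the papers."""
    delay_steps = int(round(tau / dt))
    history = deque([0.0] * delay_steps, maxlen=delay_steps)
    x = 0.0
    trace = []
    for _ in range(steps):
        x_delayed = history[0]                      # x(t - tau)
        production = 1.0 / (1.0 + x_delayed ** n)   # Hill-type repression
        decay = 0.2 * x
        history.append(x)                           # record x(t) for later
        x += (production - decay) * dt              # simple Euler step
        trace.append(x)
    return trace

trace = delayed_repressor()
late = trace[len(trace) // 2:]   # discard the initial transient
print(f"late-time swing: {min(late):.2f} to {max(late):.2f}")
```

With a sufficiently long delay, x rises and falls indefinitely instead of settling at its steady state; shrinking tau below a critical value damps the oscillation out, which is one way to see how the lag sets the period.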
However, they also found that systems relying only on negative feedback were limited: it was difficult to adjust the frequency of the oscillation without also altering its amplitude. Introducing a positive feedback loop stabilized the system so that the oscillator could be tuned to a wider range of frequencies without altering peak amplitude.

The benefits of this approach were demonstrated in data reported by Stricker et al. last November in Nature (2). They constructed a circuit that expressed green fluorescent protein (GFP) in an oscillatory manner in response to stimulation by arabinose and isopropyl β-D-thiogalactopyranoside (IPTG). In their circuit (see their figure, right), every component ran off a hybrid promoter that could be activated by AraC (which binds arabinose) and inhibited by LacI (which binds IPTG). Arabinose binds to and activates AraC, while IPTG binds to and inactivates the LacI repressor, with the result that all three genes are transcribed and the cells fluoresce due to the presence of GFP. As the concentration of LacI increases, the activating power of the IPTG decreases, leading to an eventual repression of transcription and the end of fluorescence. As the LacI proteins are degraded, the IPTG concentration again becomes sufficient to activate transcription, leading to a new fluorescent phase. Stricker et al. found that they could alter the frequency and amplitude of this oscillation by altering the growth conditions of the bacteria (temperature and nutrient availability) as well as by adjusting the concentrations of the activating reagents arabinose and IPTG. By matching computer models of their oscillator to the data they collected, they found that the time needed for translation, folding, and multimerization played a critical role in establishing the existence and period of the oscillation. Stricker et al. also constructed a circuit using only negative feedback from LacI, proving that this was possible, but they found that in this case the period was not very sensitive to IPTG concentration and the oscillations were not as regular.

A similar system was constructed in Chinese hamster ovary cells by Tigges et al., who described their results recently in Nature (3). The circuit they constructed used tetracycline (TC) and pristinamycin I (PI) as activating molecules. The tetracycline-dependent transactivator (tTA) served as the positive feedback loop, activating transcription of itself, GFP, and the pristinamycin-dependent transactivator (PIT). In this system, increased levels of PIT cause the production of antisense RNA to tTA, causing its mRNA to be destroyed prior to protein production. This, in turn, diminishes production of all proteins until the reduced levels of PIT allow tTA to again activate transcription. They found that they could control the period of oscillation by altering the gene dosage (i.e. the quantity of DNA used to transfect the cells).

The oscillating systems constructed in these papers serve more as test cases and examinations of principles than as functional pieces of synthetic systems. You will not be using an E. coli alarm clock any time soon. However, it has always been true that you learn more from trying to build something than from trying to tear it apart. These attempts to construct artificial periodic oscillators have provided interesting insights into those that have evolved naturally. The knowledge gained from these experiments will help us to understand oscillatory systems like the circadian rhythm and cardiac pacemaker, in addition to illuminating design principles for synthetic biology.

(1) T. Y.-C. Tsai, Y. S. Choi, W. Ma, J. R. Pomerening, C. Tang, & J. E. Ferrell (2008). Robust, Tunable Biological Oscillations from Interlinked Positive and Negative Feedback Loops. Science, 321(5885), 126-129. DOI: 10.1126/science.1156951

(2) Jesse Stricker, Scott Cookson, Matthew R. Bennett, William H. Mather, Lev S. Tsimring, & Jeff Hasty (2008). A fast, robust and tunable synthetic gene oscillator. Nature, 456(7221), 516-519. DOI: 10.1038/nature07389

(3) Marcel Tigges, Tatiana T. Marquez-Lago, Jörg Stelling, & Martin Fussenegger (2009). A tunable synthetic mammalian oscillator. Nature, 457(7227), 309-312. DOI: 10.1038/nature07616


  • January 13, 2009
  • 06:30 PM
  • 1,442 views

How we taste umami

by Michael Clarkson in Conformational Flux

Although we still do not know the full breadth of our flavor-sensing capabilities, human beings are known to possess receptors for at least five basic tastes. You have probably known about the sweet, sour, salty, and bitter flavors since grade school, but the fifth, umami, was less widely accepted in the West until recently. Umami is a savory flavor element found in many foods, including tomatoes, parmesan cheese, truffles, and many kinds of meat and seafood. The umami taste primarily detects the amino acid glutamate (hence the popularity of the food additive monosodium glutamate, or MSG), but the effect is also intensified by the presence of the nucleotide inosine monophosphate (IMP). In a recent (open access) paper in PNAS, researchers from two corporations examined the umami taste receptor to understand how this happens.

The umami flavor is detected by a pair of G-protein coupled receptors (GPCRs) that have an external Venus flytrap (VFT) domain in addition to their classic 7-helix transmembrane domain (TMD). This complex is closely related to the sensor for the sweet flavor: in fact, one of the receptors (called T1R3) is the same in both sensors. It is the second receptor (T1R1 for umami, T1R2 for sweet) that determines which taste is recognized. What we don't know for sure is whether it is the TMD or the VFT of these receptors that identifies the flavor component.

In order to answer this question, the researchers performed an experiment known as a "domain swap". Using recombinant DNA technology they assembled two chimeric proteins, one with the VFT of the umami receptor and the TMD of the sweet receptor, and one with the VFT of sweet and the TMD of umami. They then inserted these proteins into cultured cells that would fluoresce when the receptors were activated. The authors suspected that the VFT is primarily responsible for binding the ligand. As you can see from figures 1 & 2 (this is an open access paper, so go ahead and take a look), the experiment bears this out.
The chimera with the VFT of the sweet receptor caused a fluorescent response in the presence of compounds such as sucrose and aspartame, while the umami-VFT chimera reacted to glutamate and aspartate. You can also see in figure 2C that the presence of IMP dramatically enhanced the activity of glutamate in this chimera. This indicates that the VFT is also responsible for IMP synergy in the umami receptor.

The hurdle in going further than this is that no structure of the umami VFT is available, which makes it difficult to figure out exactly how everything fits together. However, T1R1 has a close evolutionary relationship to the metabotropic glutamate receptors (mGluR), and a crystal structure of that VFT is available. Using conserved and homologous residues as a guide, the authors made a model of the T1R1 fold from the mGluR data. Based on this model they predicted certain amino acids that would be essential for glutamate binding in T1R1 and then mutated them in order to measure the effect. Residues that were predicted by the model to interact with the zwitterionic amino-acid backbone proved to be essential for ligand recognition. Interestingly, the amino acids that contact the side-chain carboxylic acid of glutamate in mGluR are not conserved in T1R1, and mutations at the matching sites do not alter glutamate binding. However, these mutations eliminate the effect of IMP.

In order to understand this behavior, the authors modeled the binding cleft in the closed state, with IMP and glutamate in place. Glutamate binds at the bottom of the cleft, with its side chain pointed outwards. This conformation puts several positively-charged residues from the two lobes of the VFT close together higher up in the cleft. The authors propose, in keeping with previous models of VFT behavior, that the binding of glutamate lowers the energy barrier between the open and closed states of the domain, but that glutamate alone is not sufficient to hold the domain closed.
Their model places IMP higher up in the cleft, where its negatively-charged phosphate interacts with the positive residues. Thus, IMP stabilizes the closed conformation of the VFT domain.

Some more work here would be welcome, particularly in the form of experimental crystal structures of the T1R1 VFT that could confirm the homology model. The VFT is rather large, but using a perdeuterated sample in a high-field magnet it might be possible to confirm the population-shift mechanism with NMR experiments. Lower-resolution techniques such as FRET may also be able to catch this stabilization behavior. If the model proves to be accurate, it would serve as an interesting example of positive allostery arising from a population shift.

Although these experiments only concerned the umami taste receptor, this allosteric mechanism may be a more general feature of certain GPCRs. The authors indicate that they have unpublished data showing similar behavior in the sweet receptor, and it may be possible to design an allosteric stabilizer for any GPCR with a VFT domain. Because the related mGluR receptors are involved in many neurological and psychological diseases, successful design of such activators may have some therapeutic value.

F. Zhang, B. Klebansky, R. M. Fine, H. Xu, A. Pronin, H. Liu, C. Tachdjian, & X. Li (2008). Molecular mechanism for the umami taste synergism. Proceedings of the National Academy of Sciences, 105(52), 20930-20934. DOI: 10.1073/pnas.0810174106 OPEN ACCESS


  • January 6, 2009
  • 11:00 PM
  • 795 views

Long-range effects in the ribosome

by Michael Clarkson in Conformational Flux

Antibiotics such as chloramphenicol suppress infections by preventing bacteria from making proteins. They achieve this by binding to and blocking the peptidyl transferase center (PTC) of the ribosome, the large complex of RNA and protein that performs nearly all polypeptide synthesis in living cells. Although PTC-binding antibiotics comprise several different families of compounds, mutations in the ribosome that confer resistance to one family often produce cross-resistance to other families. This is difficult to understand because the PTC itself is highly conserved and not very tolerant of mutations. In an upcoming paper (open access, read along) in the Proceedings of the National Academy of Sciences, a team of researchers from the Weizmann Institute of Science analyzes several crystal structures of the ribosome to understand how this cross-resistance arises.

Davidovich et al. mapped nucleotide mutations known to confer resistance to PTC antibiotics onto x-ray crystal structures of the large ribosomal subunit from D. radiodurans in complex with antibiotics. One interesting facet of the resistance mutations became immediately apparent: they were almost all clustered on one side of the antibiotic binding site.

You can see this pretty clearly in Figure 2, panels B & D. Although the antibiotics (large pink surface) are surrounded by nucleotides, most of those on the left side (thin tan sticks) do not confer resistance if mutated. Resistance-conferring mutations instead cluster around the "rear wall" of the PTC (to the right). The authors explain that in this region ribosomal functions rely primarily on the sugar-phosphate backbone of the rRNA.
Because the backbone elements are the same for all ribonucleotide bases, mutations in this region are more likely to be tolerated without significant loss of function.

Another striking feature of the resistance mutations is visible in Figure 2 and quantified in Figure 3A: many of the mutated bases do not contact the antibiotics directly. In particular, mutation of G2032 appears to play a role in conferring resistance to several different antibiotics. Overall, it appears that numerous long-range interactions can interfere with antibiotic binding.

The linchpin of these interactions seems to be U2504, a base that directly contacts the bound antibiotic in most cases. Mutations to U2504 itself do not appear to be well tolerated, but many of the long-range mutations occur in the layer of bases surrounding it. The authors describe in detail several mechanisms by which the observed mutations might increase the flexibility of U2504, allowing it to adopt positions that permit continued protein synthesis while reducing the binding of antibiotics. The commonality of interactions with U2504, and the importance of the structural context of the surrounding nucleotides, explain why many different mutations can give rise to cross-resistance.

The practical upshot of these findings is that they may serve as a guide for the design of future antibiotics. Since the majority of the drug-resistance mutations lie on the rear wall of the PTC, the effectiveness of these antibiotics may be enhanced by improving their binding to other parts of the site. With further modeling it may also be possible to design antibiotics that compensate for flexibility at U2504. These findings also remind us that dynamics and long-range interactions can be important to the function of any biomolecule with a folded three-dimensional structure, not just proteins.

C. Davidovich, A. Bashan, & A. Yonath (2008). Structural basis for cross-resistance to ribosomal PTC antibiotics. Proceedings of the National Academy of Sciences, 105(52), 20665-20670. DOI: 10.1073/pnas.0810826105 OPEN ACCESS


  • November 4, 2008
  • 09:00 PM
  • 1,207 views

Video games and violence: grasping at causality

by Michael Clarkson in Conformational Flux

The latest evidence in the debate over the effects of video game violence has arrived in the November edition of the journal Pediatrics. Japanese and American psychologists, including well-known media violence researchers Craig Anderson and Douglas Gentile, report that violent video games constitute a causal risk factor for physical aggression. Perhaps unsurprisingly, the gaming internets have already expressed their disagreement with these results via angry blog postings based on secondary reporting (calmer coverage can be found at Gamasutra). A more professional critique has also been offered, in the form of a post-publication peer review by Texas A&M International University professor Christopher Ferguson. The paper tries to sell itself as a significant piece of new proof, which it is not. Anderson et al. have found an interesting, if weak, correlation that they cannot prove to be causal, due to the limitations of the methods employed.

The study has two key advantages that, in principle, make it a unique addition to our knowledge about the effects of video game violence. First, it attempts to correlate physical aggression (PA) in teens and kids with habitual exposure to video game violence (HVGV) 3-6 months earlier. While the use of a timecourse alone cannot prove causation, long-term correlations are thought to suggest a causal relationship more strongly than instantaneous correlations. Second, the study involves several age groups from two countries, the United States and Japan. Although more children play video games in Japan than in the US, the rate of violent crime in Japanese society is much lower than in the US. This has occasionally been held out as disproof of an HVGV-PA link, but all it really establishes is that other factors play a significant role.
Therefore it would be interesting to determine whether cultural differences between the US and Japan alter the effect of HVGV on PA.

Three sets of children (two in Japan, one in the US) filled out questionnaires querying their gaming habits and physical aggression levels. Some months later, these same children were surveyed again to see whether their physical aggression levels had changed. The authors found that HVGV levels at the first time point had a weak correlation (r = 0.28) with PA at the second time point. This effect varied significantly over the individual datasets and was strongest in the youngest age group. However, the r value did not exceed 0.5 for any of these datasets.

In layman's terms, one could see these results as evidence that HVGV predicts between 8% and 16% of the level of physical aggression, depending on the age group and nationality. I caution my readers that this interpretation is an oversimplification that depends on certain assumptions about the data to have any validity. Because no statistics of the underlying matrices are provided I cannot substantiate those assumptions, so this should not be taken as a definitive description of the study's findings. Statistics (even averages) imply a model, and should not be trusted if it cannot be shown that the model is appropriate.

These results are interesting and indicate that, although the magnitude of the effect may differ between societies, there is nonetheless a universal positive correlation between HVGV and subsequent physical aggression. Despite the elaborate discussion of youth violence in the paper, this does not directly indicate a linkage to criminal behavior. Moreover, this correlation is difficult to interpret due to the study's numerous flaws.

There are good reasons to wonder whether the interpretation of the questionnaires produced a valid measure of HVGV at all, the assignment of violence level by genre being particularly suspect.
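The "8% to 16%" figures quoted above are simply the square of the correlation coefficient, the fraction of variance in PA that HVGV "explains" under a linear model. A quick sketch (r = 0.28 is the pooled value reported above; r = 0.40 is a hypothetical upper value chosen only to show where 16% comes from):

```python
# Under a linear model, explained variance is the square of r.
# r = 0.28 is the pooled correlation quoted in the post; r = 0.40 is
# a hypothetical upper bound used only to illustrate the 8%-16% range.
for r in (0.28, 0.40):
    print(f"r = {r:.2f} -> r^2 = {r * r:.1%} of variance explained")
```

Note that this conversion carries the same caveats as the post's own estimate: it assumes a linear relationship and well-behaved underlying data.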
A more significant problem may be that HVGV and PA were assessed by different means in every single group. Each group used a different delay between surveys, and each involved differently-aged children. This doesn't necessarily mean that conclusions drawn by aggregating the three are wrong, and the authors contend that agreement across the varying methodologies indicates robustness. However, the differences in method and subjects multiply the potential sources of error considerably. Since the derived correlations are so weak, this is a significant concern. In addition, because the populations differ substantially in respects other than nationality, it is impossible to accurately assess the effect of culture on the relationship between HVGV and PA. Doubtless future longitudinal studies will apply more uniform methods.

This brings me to another weakness of the study. Scientists will occasionally joke that the best possible set of correlational data is the one that contains only two points, the reason being that you are assured of being able to draw a perfect, straight line through all your data. In practice, however, we know that having a limited number of data points makes our interpretations much more likely to be wrong. A "longitudinal" study involving two questionnaires given a couple of months apart hardly provides firm footing for a long-term correlation or a causal relationship. The authors acknowledge that the study is limited in this regard, but argue that the short wait would most likely depress correlations from their true value. Still, a longer timecourse with more measurement points would be highly desirable.

The authors make no attempt to account for any confounding factors other than gender. They do not seem to have taken data on family situation, peer influence, parental involvement, or school performance, although all of these factors are known to correlate to greater or lesser degrees with both PA and video game habits.
If we only wish to establish that there is a correlation between HVGV and PA, that's not a huge problem. However, Anderson et al. clearly mean to establish video games as a causal factor for aggression. In the absence of controls for confounding factors, that is impossible.

Curiously, the authors also do not seem to have measured HVGV at the later time point. One objection to existing research linking HVGV to real-world violence has been that the observed correlations exist because people predisposed to violence choose to enjoy violent media. Testing the hypothesis that PA at the initial timepoint predicts HVGV at the second timepoint seems like an obvious thing to do, if only to squelch this objection. It seems particularly worthwhile because the predictive power of HVGV for later aggression appears to be less than the instantaneous correlation between HVGV and aggression, significantly so for the older group. In light of these facts, the choice not to assess HVGV at the later time seems extremely odd.

Despite these flaws, this research is a step in the right direction. We need longitudinal studies, carefully controlled for confounding factors, over a range of ages and nationalities to parse out the true effects of video games on aggressive behavior in teens and adults. I do not find the present study terribly convincing, and I particularly dislike the more sensationalistic high points of its discussion section. Nonetheless, I hope that the authors will take criticisms like those of Dr. Ferguson into account as they design studies that will more rigorously investigate the causal relationship between HVGV and PA. Only a particularly obstinate person would deny that there is a correlation between the intake of violent media, including video games, and aggressive behavior. Video games may inspire aggressive behavior, or serve as an outlet for existing aggression; either way, the correlation ought not be ignored.
However, video games are just one, and doubtless not the most important, of a constellation of potential factors affecting child behavior. Without a genuine analysis of the complicated causal relationships among these it is impossible to provide good advice to parents, doctors, and psychologists. The present study does not fill that gap in our understanding; it is doomed by its single-minded focus on video games and its failure to account for confounding factors. While it is of value to know that the correlation between violent video games and aggression transcends national and cultural boundaries, it would be of greater value to know whether excessive playing of violent video games is a cause of aggressive behavior, a result of pre-existing aggression, or both. That is a question this research does not adequately, much less conclusively, address.

C. A. Anderson, A. Sakamoto, D. A. Gentile, N. Ihori, A. Shibuya, S. Yukawa, M. Naito, & K. Kobayashi (2008). Longitudinal Effects of Violent Video Games on Aggression in Japan and the United States. PEDIATRICS, 122(5). DOI: 10.1542/peds.2008-1425


  • October 22, 2008
  • 10:00 PM
  • 806 views

Making a molecular switch

by Michael Clarkson in Conformational Flux

The practical aim of the investigation of allostery is the manipulation of this property as a means to aid human health and industry. We already have in hand the sequences of numerous enzymes that carry out unique and useful chemical reactions, and recent advances suggest that we will in the near future be able to design man-made enzymes that efficiently carry out completely novel reactions. Making the fullest use of these abilities demands that we be able to regulate the enzymatic activity of interest. Fortunately, just as nature possesses a rich array of enzymatic activities, it also holds a number of binding proteins, so we don't have to start from scratch. The trick is that there must be some way to communicate the binding event from one domain into the active site of the other domain. In last week's edition of Science, researchers from the University of Pennsylvania and the University of Texas Southwestern Medical Center claim to have achieved just that.

The core idea of their approach is disarmingly simple. Identify a protein that has a long-range conformational response to a binding event, and locate the distal region on its surface where this response gets read out. Then, on an enzyme of interest, find a region of the surface that has an energetic connection to the active site. Join the proteins at these surfaces and voila! Now you have a regulatory switch for your enzyme.

The reality, of course, is likely to be trickier. In order for efficient communication between sites to occur via these pathways, the structural dynamics of allostery at the points of attachment must be compatible. For nearly all proteins, the precise nature of the conformational reactions that drive communication is essentially unknown. Thermodynamic mutant cycle analysis cannot provide detailed mechanistic information, and structural dynamics experiments from NMR and other techniques can provide only general information about what occurs along these pathways.
Only molecular dynamics simulations are likely to give us the information we need to tune allosteric control precisely. Absent that, all you can do is stick things together and hope for the best, which is essentially what Lee et al. did.

Of course, they didn't go in totally blind. Lee et al. used the statistical coupling analysis (SCA) technique pioneered by Dr. Ranganathan to identify distal surface sites linked to the light sensitivity of a PAS domain and the enzymatic activity of a bacterial dihydrofolate reductase (DHFR). I've mentioned this technique before in connection with Ranganathan's research on the PDZ domain. The SCA results indicated that a surface loop of DHFR was energetically linked to its active site. The analysis also indicated that a region encompassing the N- and C-termini of the LOV2 PAS domain was likely to be a readout for its detection of light. These results accorded with existing knowledge about these proteins. The result with the PAS domain was particularly convenient: because the N- and C-termini are adjacent, the PAS domain could simply be inserted at a loop site. Also conveniently, the surface identified for DHFR was a loop.

Thus, Lee et al. inserted the PAS domain at two sites in DHFR. One was the loop identified by SCA, and the other was a control site equally distant from the active site but not predicted to be linked. Figure 3 of the paper shows the key result: insertion of the PAS domain at the SCA-identified site (A site), but not at the control location (B site), resulted in a modest light-dependence of the hydride transfer rate of DHFR. All of the A-site chimeras had substantially reduced DHFR activity, similar to the effect of a G121V mutation.
Interestingly, shifting the insertion site by even a single residue completely abolished the light-dependence of the activity.

Granted, the light dependence is less than twofold at room temperature; this approach did not generate a genuine light-dependent on/off switch for DHFR. However, for the reasons I mentioned above, a perfect switch is hardly something that could have been expected. What this experiment does do is prove that the approach is workable. Conceivably, with further tuning the hybrid PAS-DHFR could be made to carry out its catalytic function exclusively in the presence (or absence) of light. Since PAS domains bind a wide array of ligands, the approach can probably be adapted for various chemical triggers.

On a more fundamental level, the authors claim that this result supports the view that specific surface locations in many domains may be evolutionarily conserved loci for allosteric control. This does not mean that every PAS domain (or PDZ domain, or DHFR) actually possesses allosteric properties, but it does imply that all of them have the potential to exert or receive allosteric influences. If this is true, then it may be possible to adapt a wide array of binding modules as allosteric regulators for natural and designed enzymes. As our understanding of intradomain signaling improves, our ability to make use of these approaches will only increase.

J. Lee, M. Natarajan, V. C. Nashine, M. Socolich, T. Vo, W. P. Russ, S. J. Benkovic, & R. Ranganathan (2008). Surface Sites for Engineering Allosteric Control in Proteins. Science, 322(5900), 438-442. DOI: 10.1126/science.1159052


  • October 16, 2008
  • 09:00 PM
  • 1,058 views

Guanidinium alters the water landscape

by Michael Clarkson in Conformational Flux

Protein stability studies that rely on cosolutes to effect chemical denaturation typically use either urea or the guanidinium ion (Gdm+). Both of these chemicals unfold proteins through a process involving multiple low-affinity binding events, but guanidinium has the larger effect. While this could be attributed simply to the strength of the interactions, it could also be a result of other chemical properties. In an upcoming paper in the Journal of Physical Chemistry, a team of researchers from the University of Pennsylvania suggests that a reorganization of hydrogen bonds in water may be partially responsible for guanidinium's greater denaturing power.

The reason water is a liquid rather than a gas is the existence of extensive networks of transient hydrogen bonds between water molecules. Typically, water forms these bonds proficiently at a wide range of angles, which means an energetic benefit without a high entropic cost. The range of bond angles in use can be probed by measuring vibrational frequencies with infrared (IR) spectroscopy. The core of this paper is an experiment in which Scott et al. measure the dependence of these vibrational frequencies on the concentration of Gdm+ and the temperature of the solution.

They find that the presence of Gdm+ significantly alters the IR spectrum of water at high temperatures, shifting the overall peak to a lower wavenumber and causing the appearance of a shoulder around 3300 cm-1, near the main peak of the ice spectrum (Figure 2). The effect of the Gdm+ ion appears to be smaller at lower temperatures; as a result, the appearance of the water spectra changes less with temperature at high Gdm+ concentrations than in the absence of the solute. These spectral characteristics are not observed when another positive ion (potassium) is used in place of Gdm+.

The authors interpret these changes in the IR spectrum as an increase in the number of short, linear hydrogen bonds, and back this up with quantum chemical simulations.
If this is correct, then it implies that Gdm+ rearranges the hydrogen bonding network of water, changing the structure to emphasize strong hydrogen bonds of a particular geometry. Moreover, this change in structure appears to be relatively resistant to changes in temperature. This is significant because the creation of protein tertiary structure is thought to be a mechanism by which systems avoid forming highly-structured networks of water hydrogen bonds. If Gdm+ induces significant water structuring, might that entropically favor the unfolding of proteins?

Because urea does not appear to induce this kind of change in water structure, it should be possible to test this experimentally. The effect of Gdm+ is clearly most pronounced at higher temperatures. So one could perform a comparison of the temperature dependence of Gdm+ and urea denaturation of a protein. Proteins tend to be more stable at lower temperatures, so we would expect the denaturant concentration at which half of the protein population is unfolded (D1/2) to increase as the temperature decreases. If the structuring of water is significant, then the change in D1/2 should be larger for Gdm+ than for urea, because Gdm+ is decreasing in effectiveness as the temperature goes down. The direct interaction may also be temperature-dependent, but it may be possible to control for this by measuring the energy released when guanidine interacts with an intrinsically unfolded protein.

Unfolding experiments typically are not interpreted in a way that depends on the precise mechanism by which Gdm+ breaks down protein structure. Nonetheless, the specifics of this process are important if we want to use chemically denatured states as models of in vivo unfolded states. As we gain a better understanding of in vivo water structure, the characteristics of denaturant solvation may be an important consideration in experimental design.

J. Nathan Scott, Nathaniel V. Nucci, Jane M. Vanderkooi (2008).
Changes in Water Structure Induced by the Guanidinium Cation and Implications for Protein Denaturation. Journal of Physical Chemistry A. DOI: 10.1021/jp8058239
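The temperature-comparison experiment proposed in the post above can be sketched with the standard two-state linear-extrapolation model of chemical denaturation. The stabilities and m-values below are invented illustrative numbers, not data from the paper; the only assumption carried over is that Gdm+ has the larger m-value:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def fraction_unfolded(conc, dG_water, m, T):
    """Two-state linear extrapolation: dG_unf(D) = dG_water - m*[D]."""
    dG = dG_water - m * conc
    return 1.0 / (1.0 + math.exp(dG / (R * T)))

def d_half(dG_water, m):
    """Denaturant concentration at which half the population is unfolded."""
    return dG_water / m

# Illustrative numbers: a protein more stable at 283 K than at 310 K,
# with Gdm+ assigned a larger m-value than urea, as is typical.
m_gdm, m_urea = 2.5, 1.2        # kcal/(mol*M), assumed
dG_cold, dG_warm = 7.0, 5.0     # kcal/mol at 283 K and 310 K, assumed

for name, m in [("GdmCl", m_gdm), ("urea", m_urea)]:
    shift = d_half(dG_cold, m) - d_half(dG_warm, m)
    print(f"{name}: D1/2 rises by {shift:.2f} M on cooling")
```

Note that with temperature-independent m-values this model predicts a *smaller* D1/2 shift for Gdm+ (its larger m divides the same stability gain). The blog's proposed signature, then, would be a Gdm+ shift larger than this baseline, indicating that the denaturant's effectiveness itself drops with temperature.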

  • September 18, 2008
  • 09:00 PM
  • 1,504 views

Where do new enzymes come from?

by Michael Clarkson in Conformational Flux

Biochemists often rave about the great wonders of enzymes, lavishing praise on the prodigious rate enhancements they produce and their exquisite positioning of functional groups. One can quite reasonably ask how such magnificently useful proteins came into being. One accurate answer, of course, is that after a couple hundred million years evolution can get almost anything right. Another answer is that most enzymes come from other proteins, via a process called gene duplication. The genetic changes that follow one of these duplications turn two copies of one protein into two completely different proteins with diverse activities.

Gene duplication events are infrequent errors of DNA replication or repair. Diploid eukaryotes such as ourselves carry two copies (or near-copies) of most genes as a matter of course, but gene duplications produce extra copies beyond that. In theory, the presence of these extra copies of a gene means that one of them can mutate freely, without the pressure of carrying out its normal job. When it drifts into a useful function, selective pressure is again applied, causing a refinement of the active site to maximize the efficiency of the new activity. The overall scheme looks something like this:

Duplication → Divergence → Refinement

It may seem incredible that a vast diversity of protein structures and activities can arise simply by making copies, even imperfect copies. However, certain quirks of the translation machinery mean that small changes in DNA can amount to enormous changes in a protein's topology. For instance, an insertion or deletion of a single base can cause a frameshift mutation, producing a protein that bears no resemblance to its progenitor despite differing by only a single base. Many DNA triplets that normally encode amino acids are only a single base-pair mutation away from becoming a stop codon, truncating a protein and likely changing its structure significantly.
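A toy translator makes these two mutation classes concrete. The "gene" below is an invented example sequence; the codon table is the standard genetic code, packed in the conventional TCAG order:

```python
from itertools import product

# Standard genetic code in TCAG order: TTT=F, TTC=F, TTA=L, ... GGG=G.
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AMINO)}

def translate(dna):
    """Translate successive codons until a stop (*) or the end of the sequence."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

gene = "ATGGCTGAATTTGGACTTCGTAAATGA"      # invented example gene
shifted = gene[:3] + "C" + gene[3:]       # insert one base: frameshift
nonsense = gene[:6] + "TAA" + gene[9:]    # one codon mutated to a stop

print(translate(gene))      # MAEFGLRK
print(translate(shifted))   # MR -- one extra base, unrecognizable product
print(translate(nonsense))  # MA -- premature stop truncates the protein
```

One inserted base scrambles every downstream codon; one substituted codon chops the chain off early. Either way, a minimal DNA change yields a radically different polypeptide.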
Similarly, stop codons can be easily eliminated, producing much larger proteins. In eukaryotes, point mutations near the borders between introns and exons can cause new regions of DNA to be translated into protein. Of course, drastic changes like these mostly just produce useless junk, but occasionally a novel fold or function arises.

More conservative alterations of a gene sequence can still produce significant changes. As I've mentioned before on this blog, some members of the Cro family of proteins have very high sequence identity and yet possess different structures. I also have not yet tired of reminding you that the chemokine lymphotactin has two different structures with a single sequence, either of which can be stabilized into an exclusive fold by a point mutation.

Additionally, research from the lab of John Orban shows that a mere 7 mutations are required to convert the engineered protein GA88 (PDB) into a completely different structure, GB88 (PDB) (1). These proteins were previously shown to have different folds and functions, but the contrast between the high resolution structures (shamelessly stolen figure on the right) is striking. Moreover, the Orban lab has refined this system so that the structural conversion can be effected with only three mutations, rather than seven. What all this research indicates is that the transitions that convert a sequence from one fold into another may be sharper than previously realized; even a relatively small number of fairly conservative mutations may be able to completely transform a protein's structure.

For all that, most new enzymes arising via gene duplication resemble their ancestors in identifiable ways. Often the two proteins perform the same chemical steps, and the novel function amounts to a different substrate specificity. This suggests the possibility of an alternate mechanism of gene duplication, in which a protein could evolve a novel specificity while retaining its original function.
Diversifying its activities in this way would probably limit an enzyme's catalytic effect in both reactions, but a subsequent gene duplication event would allow each copy to refine its particular reaction. The scheme would look like this:

Diversification → Duplication → Refinement

The advantage of this model, from an adaptationist's perspective, is that it brings selective pressure to bear at every step. Once a new function has evolved in response to environmental conditions, duplicating the gene may provide an organism a concrete advantage. After duplication, the advantage of separately refining the two activities is obvious.

The two models are not as different as they might seem at first glance, because nearly every enzyme catalyzes two reactions anyway, that is, the forward and reverse reactions of an equilibrium. A "new" activity for a given enzyme can therefore result from something as simple as being targeted to a different cellular compartment or a change in specificity that involves an oppositely-oriented equilibrium.

The most obvious objection to the latter model is that during the period of gene sharing prior to duplication, neither protein function will be very efficient. As a matter of fact, the appearance of a new activity does not always impair an enzyme's ability to do its original job (and indeed can even enhance that activity). Still, because of the exquisite tuning of enzyme active sites we can expect that many modifications to this region will reduce catalytic power. That being the case, how might an organism survive or thrive during the gene-sharing period? The answer, which always seems obvious in retrospect, is to make more of the less efficient enzyme, as was demonstrated in a recent paper by Sean Yu McLoughlin and Shelley Copley (2).

McLoughlin and Copley took a strain of E. coli that lacked an enzyme, ArgC, that is critical for glucose metabolism.
They treated these bacteria with a strong mutagen and then picked a colony that grew well on uncomplemented glucose. After showing that these bacteria had developed a novel activity equivalent to ArgC, they isolated the "new" enzyme and found that it was actually an existing enzyme, ProA, which performs similar chemistry. This enzyme had gained the ability to take over the tasks of the missing ArgC, enhancing the rate of that reaction 12-fold. The actual chemistry of these reactions was quite similar, but in gaining the ability to operate on ArgC's substrate, the activity of ProA towards its own substrate was reduced 2800-fold. The bacteria compensated for this by upregulating the production of the enzyme. A second mutation in the promoter region of the gene was helpful, but not necessary, in this respect.

Because enzymes are catalysts, a small increase in protein concentration can result in a significant increase in the availability of the reaction products. Biochemists often say, seeing a 3000-fold reduction in activity, that an enzyme is dead. The reality is that it's just slower, and a living thing can compensate for that in ways not available to an isolated reaction in a test tube. Organisms have shown that they have ways to survive what an enzymologist might see as fatal.

Of course, modern bacteria benefit from a number of well-tuned regulatory and feedback mechanisms that allow them to sense when particular metabolites are running low and to increase the production of proteins that can replenish them. Earlier, more primitive organisms might not have had these expedients available. Could they have survived gene sharing?

Too little is known about early life forms to answer such a question definitively. However, it is interesting to note that one method of making more protein is to make more of the gene. That is, the concentration of a deficient enzyme can be increased via gene duplication.
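The compensation arithmetic is simple Michaelis-Menten bookkeeping. All numbers below are invented for illustration (only the ~3000-fold scale is borrowed from the discussion above); the point is that velocity is linear in enzyme concentration, so expression can buy back what mutation took away:

```python
def rate(kcat, e_conc, s_conc, km):
    """Michaelis-Menten velocity: v = kcat * [E] * [S] / (Km + [S])."""
    return kcat * e_conc * s_conc / (km + s_conc)

# Illustrative parameters (not from the paper):
kcat_old, kcat_new = 300.0, 0.1   # per second: a 3000-fold slower enzyme
km, s = 50.0, 10.0                # arbitrary units

v_wild = rate(kcat_old, 1.0, s, km)     # efficient enzyme, baseline expression
v_dead = rate(kcat_new, 1.0, s, km)     # "dead" enzyme, same expression
v_comp = rate(kcat_new, 3000.0, s, km)  # "dead" enzyme, heavily overexpressed

print(v_dead / v_wild)   # tiny: hopeless at equal expression
print(v_comp / v_wild)   # 1.0: flux fully restored by making more enzyme
```

In a real cell the ceiling is on expression (ribosomes, nutrients, crowding), not on the chemistry, which is why upregulation works as a stopgap until duplication and refinement catch up.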
By a fortuitous coincidence, a single mechanism could both enable an organism to tolerate reduced enzymatic efficiency and allow the evolutionary process to independently refine its activities.

It is also worth bearing in mind that just as ancient organisms did not necessarily resemble modern ones, ancient proteins might not have resembled the modern item. The exquisite positioning of functional groups that characterizes modern enzymes requires a rigid fold and contributes significantly to the rate accelerations they produce. However, substantial rate enhancements can still be achieved in the absence of a stiff native state.

One occasional result of mutations is the formation of a molten globule, a protein that lacks a stable fold but still exists in a collapsed state with something resembling a hydrophobic core. Although that doesn't sound particularly useful, many molten globules have enzymatic or other functional activities. Recent computational studies on a molten-globule mutant of Methanococcus jannaschii chorismate mutase suggest that realistically low energy barriers can be achieved by a broader array of structural states in these proteins (3).

Researchers from the lab of Arieh Warshel used a simplified model to sample the conformational space available to the molten globule enzyme (mMjCM) and a stably folded form of the enzyme (EcCM). As you might expect, the lowest-energy conformations are much more diverse for mMjCM than for EcCM. Roca et al. then computed the energy barrier for catalysis for conformations that closely resembled the ideal structure (region I), conformations which had most of the groups in the right general position but were significantly removed from the ideal (region II), and conformations that did not resemble the ideal at all (region III). For EcCM, only structures in region I had energy barriers low enough to plausibly allow catalysis. The molten globule, however, had energy barriers that would allow catalysis in region I and region II.
You can see this in the figure below, which I shamelessly stole from their paper: the dotted orange line corresponds to a 16 kcal/mol energy barrier, which they felt to be the largest barrier reasonable for a catalyst. The results for mMjCM are on the left, EcCM on the right.

The upshot of this is that molten globules may be able to maintain catalytic power in the face of structural diversity that causes folded proteins to fail. While the stable fold produces greater rate enhancements (note that EcCM has lower energy barriers), the molten globule tolerates a wider array of structural conditions. Consequently, proteins of this kind may be much more amenable to the addition of new functions. So long as an appropriate orientation of functional groups is reasonably likely, a protein without a rigid conformation can still achieve impressive rate enhancements.

Conceivably, an early molten globule enzyme could have the ability to catalyze several different reactions, switching between the required conformations as needed, without a significant loss of catalytic power to any of them. Duplication of a multi-functional molten globule like this would allow each chemical function to be refined independently, with additional duplications and refinements giving rise to substrate specificity.

The different models of gene duplication each have their own explanatory advantages, and the available evidence suggests that new proteins and enzymatic activities have evolved (even within the last century) using both routes. As this is one of nature's favored methods of generating novel activities, so it is becoming ours. The artificial enzymes recently produced by David Baker's lab were designed onto an existing protein scaffold in what could be taken as a computational mimicry of the gene duplication process.

1. Y. He, Y. Chen, P. Alexander, P. N. Bryan, J. Orban (2008).
NMR structures of two designed proteins with high sequence identity but different fold and function. Proceedings of the National Academy of Sciences, 105 (38), 14412-14417. DOI: 10.1073/pnas.0805857105
2. S. Y. McLoughlin, S. D. Copley (2008). A compromise required by gene sharing enables survival: Implications for evolution of new enzyme activities. Proceedings of the National Academy of Sciences, 105 (36), 13497-13502. DOI: 10.1073/pnas.0804804105
3. M. Roca, B. Messer, D. Hilvert, A. Warshel (2008). On the relationship between folding and chemical landscapes in enzyme catalysis. Proceedings of the National Academy of Sciences, 105 (37), 13877-13882. DOI: 10.1073/pnas.0803405105
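For a sense of what the 16 kcal/mol cutoff in the chorismate mutase discussion above means in rate terms, transition-state theory converts barrier heights into turnover rates. The Eyring expression below assumes a transmission coefficient of 1, which is the usual simplification:

```python
import math

def eyring_rate(dG_barrier, T=300.0):
    """Eyring/transition-state rate (per second) for a barrier in kcal/mol."""
    kB_over_h = 2.084e10  # Boltzmann constant over Planck constant, 1/(s*K)
    R = 1.987e-3          # gas constant, kcal/(mol*K)
    return kB_over_h * T * math.exp(-dG_barrier / (R * T))

print(f"{eyring_rate(16.0):.1f} /s")   # ~14 /s: a respectable catalyst
print(f"{eyring_rate(20.0):.4f} /s")   # 4 kcal/mol higher: ~800-fold slower
```

The exponential is the whole story here: a few kcal/mol either way swings the rate by orders of magnitude, which is why a hard cutoff like the dotted 16 kcal/mol line is a reasonable way to separate "catalyst" from "not a catalyst" conformations.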

  • August 30, 2008
  • 03:30 PM
  • 1,174 views

An enzyme with a monkey's tail

by Michael Clarkson in Conformational Flux

It is rare, but not unheard of, for a human baby to be born with a tail. Atavism of this kind is generally understood to be the result of mutations in regulatory genes that cause an ancestral pattern of development to re-emerge. A physiological step backwards through the path of descent is often easy to recognize, because many of the evolutionary relationships are known. It should also be possible to identify atavistic events in particular molecules. For instance, one can imagine that a mutation to ClC-0 might result in a reversion to the ancestral transporter function. In a recent article in PLoS Biology, researchers from Florida State University and Brandeis University identify just such a relationship in the bi-functional enzyme inosine monophosphate dehydrogenase (IMPDH). PLoS Biology is an open-access journal, so open it up and follow along.

IMPDH plays a critical role in the synthesis of guanine nucleotides, an essential component of DNA. Two reactions take place in the active site — first, the inosine ring is oxidized to xanthosine, forming a covalent linkage with the enzyme, and then this bond is broken by a hydrolysis. The enzyme active site changes shape to carry out the reaction, bringing a catalytic arginine (R418) into position to activate the water for nucleophilic attack. Any time you see a complicated mechanism like this, it's natural to wonder how such a system could have evolved. Min et al. performed simulations and experiments to find out.

Using a crystal structure of IMPDH as a starting point, Min et al. performed hybrid QM/MM simulations in which the atoms taking direct part in the reaction were treated with quantum mechanics, and the rest of the protein was simulated using molecular mechanics. As one would expect given the enormous reduction in catalytic rate that occurs when R418 is mutated, the reaction proceeded through the arginine when the simulation had a neutral R418 side chain.
The water is stabilized by two additional side chains from T321 and Y419, and reacts almost instantaneously, without the formation of a stable hydroxide intermediate. Although this is unusual, this prediction of the simulation is consistent with isotope effect experiments.

When the arginine was replaced by a glutamine in the simulation, the mechanism changed, naturally. Under these conditions, it was Y419 that activated the water for the hydrolysis, although the energy barrier was much higher (leading to a slower reaction). Again, the characteristics of the reaction indicated by the simulation line up pretty well with the results of biochemical experiments. Of course, Y419 enters the active site the same way R418 does, so the question of how the hydrolase activity could have evolved remains open.

Something very interesting, however, happens when the simulation is performed with R418 in a charged state. A fully protonated arginine will have a very hard time activating water for a nucleophilic attack. The simulation indicated that under these conditions, T321 performed this role, after being activated by a nearby glutamate (E431). T321 is adjacent to cysteine 319, which is essential for the oxidation reaction, and is not located on the mobile flap. If T321 really can catalyze hydrolysis, this would mean that it is possible that IMPDH possessed an (inefficient) hydrolysis activity before it evolved the mobile flap.

Because T321 only plays a significant role in catalysis when R418 is protonated, blocking this pathway should result in decreased IMPDH activity at low pH. This is precisely what Min et al. observe in enzymatic assays (Figure 5) on a mutant in which E431 is mutated to glutamine.
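The low-pH prediction can be sketched as two parallel water-activation pathways whose weights track the protonation state of R418 via a Henderson-Hasselbalch curve. The pKa and the relative pathway rates below are invented for illustration, not values from the paper:

```python
def frac_protonated(pH, pKa):
    """Henderson-Hasselbalch fraction of R418 carrying a full positive charge."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def activity(pH, k_arg, k_thr, pKa=8.0):
    """Neutral R418 uses the fast arginine pathway; protonated R418
    falls back on the slower T321/E431 pathway."""
    f = frac_protonated(pH, pKa)
    return (1.0 - f) * k_arg + f * k_thr

# Illustrative rates: T321 route 20x slower; the E431Q mutant kills it entirely.
for pH in (6.0, 7.0, 8.0, 9.0):
    wt = activity(pH, k_arg=1.0, k_thr=0.05)
    e431q = activity(pH, k_arg=1.0, k_thr=0.0)
    print(f"pH {pH}: wild-type {wt:.3f}  E431Q {e431q:.3f}")
```

In this toy model the two enzymes converge at high pH but the E431Q mutant's activity collapses at low pH, where the wild type retains the T321 "floor" — the qualitative signature the assays detect.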
There is other experimental support as well: IMPDH enzymes that have been mutated at R418 usually have large isotope effects, which makes sense in light of the fact that the alternative T321 pathway involves the simultaneous transfer of two protons (rather than just one).

Things get even more interesting when IMPDH is compared to one of its cousins, GMP reductase. Although GMPR catalyzes a very different reaction, the C319/T321/E431 triad is also present there. This, along with other data from sequence alignment, suggests that these three residues were also present in a similar configuration in the ancestor of these modern proteins. Over time, progressive optimization of the two proteins resulted in the T321 pathway being supplanted by the more effective R418 in IMPDH, while remaining essential in GMPR.

If T321 really is a remnant of an earlier water-activating pathway, why is it conserved now that IMPDH has a much more efficient catalytic residue available? T321 is probably preserved because it stabilizes the water while it is being activated by R418. However, the other essential residue of that activating pathway (E431) is usually an inactive glutamine in eukaryotic forms of IMPDH (and some prokaryotes, as well). In these species the T321 activation pathway has been completely supplanted by the arginine pathway. Yet in the other forms of IMPDH this alternative mechanism still lingers, perhaps because of the additional activity it affords at low pH, or because it confers resistance to a particular inhibitor of the enzyme. In that sense, IMPDH's "tail" might provide an adaptive advantage quite different from that which gave rise to hydrolytic activity in the first place.

Donghong Min, Helen R. Josephine, Hongzhi Li, Clemens Lakner, Iain S. MacPherson, Gavin J. P. Naylor, David Swofford, Lizbeth Hedstrom, Wei Yang, Daniel Herschlag (2008).
An Enzymatic Atavist Revealed in Dual Pathways for Water Activation. PLoS Biology, 6 (8). DOI: 10.1371/journal.pbio.0060206 OPEN ACCESS

Disclaimer: Although I have little contact with Dr. Hedstrom's group, I am also working at Brandeis.


  • August 22, 2008
  • 08:00 PM
  • 1,358 views

Guided by the (blue) light

by Michael Clarkson in Conformational Flux

The ability to sense and respond to magnetic fields is a fundamental aspect of behavior in many animals. While migratory birds famously use the earth's magnetic field to navigate during migration, magnetic field responses occur in all manner of animals, from eels to invertebrates. Even the lowly fruit fly, best known as a reminder that you really should have taken the garbage out two days ago, can react to magnetism. While various explanations have been put forward in different species, magnetosensitivity remains fairly mysterious. In this week's Nature, researchers from the University of Massachusetts Medical School show that the blue-light photoreceptor cryptochrome plays an essential role in allowing fruit flies to detect magnetic fields.

Cryptochrome (or Cry) inherited the ability to absorb blue light along with its photolyase domain, which is homologous to a prokaryotic, light-dependent DNA repair protein. Cry proteins, which are present in all animals, do not perform any DNA repair work, but instead play a role in regulating the circadian rhythm. While it is not clear in all cases whether Cry's ability to absorb blue light is biologically significant in clock regulation, it is known that fruit flies (Drosophila melanogaster) use Cry to synchronize their circadian clocks. Previous experiments had suggested that the ability of fruit flies to detect magnetic fields was somehow related to photoreception, and that short wavelengths (like those sensed by Cry) had different effects from longer ones.

Gegear et al. devised a relatively simple experiment to test the importance of Cry in Drosophila magnetosensing. They placed a T-junction in a box, with a magnetic coil on one side and a non-magnetic coil on the other. They released flies into the junction, with (trained) or without (naive) performing an earlier run where the magnetic field was associated with a sucrose reward.
They shined a light into the box and used filters to investigate the role of specific wavelengths.

They discovered that several strains of Drosophila could be trained to go to the magnetic field, although the degree of preference and the nature of the naive response differed substantially between strains. Gegear et al. chose the strain that showed the greatest response in full-spectrum light (and displayed a tendency to avoid the magnetic field in the naive state) to perform the filter experiment. Cutting off all wavelengths of light shorter than 500 nm abolished both the naive and trained responses to the magnetic field in these flies, as did filtering out all wavelengths shorter than 420 nm. If only wavelengths shorter than 400 nm were cut off, some of the trained and naive response returned. Simply dimming the light was not enough to replicate the effect of filtering. These experiments indicate that magnetic sensitivity in these flies requires light in the blue to ultraviolet range.

In order to prove that cryptochrome specifically is necessary for this magnetic sensitivity, Gegear et al. took advantage of our tremendous knowledge of fly genetic manipulation to create mutant flies that did not have a functional Cry gene. No matter what wavelengths of light were used in the T-junction experiment, these flies did not respond to the magnetic field. Crossing these Cry-null mutants with normal flies restored magnetosensitivity. The authors also performed experiments to show that the circadian rhythm was not itself essential to magnetic response in the flies.

Because this is a genetic experiment, it cannot address the question of whether Cry is both the blue-light photoreceptor and the magnetosensor. Going just on what we have in this paper, it is also possible that Cry acts upstream of another magnetosensor protein or is part of its downstream signaling pathway.
However, in light of research showing that flavin photoreception in other cryptochromes induces the formation of magnetically-sensitive radicals, some of which I discussed last year, it certainly seems possible that Drosophila cryptochrome does the whole job itself. As I mentioned in the case of the previous article, though, there is not yet any understanding of a mechanism by which information about the magnetic field could be transduced from Cry radicals into the nervous system.

Drosophila Cry differs from other plant and animal Cry proteins in significant ways, so it's unclear whether these results have any relevance for other organisms. However, the finding that Cry is essential to Drosophila magnetosensitivity suggests at least the possibility of parallel systems in migratory birds and other species that use magnetic fields.

Robert J. Gegear, Amy Casselman, Scott Waddell, Steven M. Reppert (2008). Cryptochrome mediates light-dependent magnetosensitivity in Drosophila. Nature, 454 (7207), 1014-1018. DOI: 10.1038/nature07183
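Two-choice T-maze results like these are usually summarized as a preference index. The formula below is the generic two-arm version and the counts are invented to illustrate the qualitative pattern described above; the paper's exact statistic may differ:

```python
def preference_index(n_magnet, n_other):
    """Two-choice preference index: +1 = all flies chose the magnet arm,
    -1 = all avoided it, 0 = no preference."""
    total = n_magnet + n_other
    return (n_magnet - n_other) / total if total else 0.0

# Invented counts mirroring the qualitative pattern in the post:
print(preference_index(30, 70))   # naive flies avoiding the field: -0.4
print(preference_index(65, 35))   # trained flies seeking it out:   +0.3
print(preference_index(50, 50))   # Cry-null mutants, no response:   0.0
```

The index is handy because it puts avoidance and attraction on one signed scale, so "training flips the sign" is a single number rather than two separate counts.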


  • August 19, 2008
  • 09:30 PM
  • 1,237 views

How to help an enzyme crack cocaine

by Michael Clarkson in Conformational Flux

In addition to the adverse consequences of addiction and the inconvenience of serving several years of jail time for possessing it, cocaine can cause a fatal overdose. Although this condition can be treated, no therapy presently exists that attacks the overdose by removing cocaine from the bloodstream. One possible approach to eliminating cocaine from a patient would be to accelerate the process by which it is degraded. Unfortunately, the enzymes that perform this activity in the human body are not very efficient. In an upcoming article in the Journal of the American Chemical Society, however, a group from the University of Kentucky (assisted by researchers at the University of Michigan) has remodeled the active site of butyrylcholinesterase (BChE) to achieve a 2000-fold increase in rate. This raises the possibility of producing therapeutic enzymes as a treatment for cocaine overdose.

A cocaine overdose typically results in an elevated pulse rate, seizures, and hyperthermia, among other possibilities. The usual course of treatment involves addressing the symptoms — diazepam to reduce the heart rate, cooling protocols to address hyperthermia. These steps are proven to work, but they don't address the core problem: there's still a lot of cocaine floating around in the bloodstream. Treating with sedatives amounts to using one giant truck to stop another giant truck... both trucks will probably stop, but there might be a lot of collateral damage. Instead, it would be advantageous to either block the receptors that cocaine binds, or clear cocaine from the bloodstream somehow.

Plasma butyrylcholinesterase does most of the work in metabolizing cocaine, by cleaving it into two products that no longer exert the same pharmacological effects.
If BChE were a highly efficient enzyme it's unlikely that people would experience cocaine overdoses at all, but it breaks down the main form of cocaine quite slowly, with a catalytic rate (kcat) of 4.1/min, resulting in a very long half-life for this substrate. The chemical mechanism of BChE (Figure 1) will be familiar to anyone who has taken biochemistry, being basically the same as a serine protease. Instead of a peptide bond, however, it is the ester linkage of cocaine that undergoes nucleophilic attack from an activated serine, while hydrogen bonds stabilize the evolving negative charge in an oxyanion hole.

Previous efforts to optimize the activity of BChE by mutation focused on eliminating steric clashes, but Zheng et al. noted that the hydrogen bond lengths in the oxyanion hole were not optimal for stabilizing the putative transition state. They therefore decided to focus their efforts on improving the energetics of this region. To do so, they used combined quantum mechanics/molecular mechanics (QM/MM) simulations to determine the energy barriers in simulated reaction coordinates for a number of different mutants. This has the advantage of screening potential mutants for a specific effect, which may be quicker than wet lab work, but it requires the researcher to know the catalytic mechanism and to define a region of interest in advance.

By working through a series of mutations, Zheng et al. arrived at one multiple mutant of BChE that had favorable interaction energy for every residue in the oxyanion hole. When they generated this mutant in the lab, they found that it had a vastly increased catalytic rate towards cocaine, with kcat now about 5700/min. Based on these in vitro results they decided to test the mutant BChE in vivo using mice. They found that injecting mice with 30 µg of BChE protected them from seizure and death due to cocaine overdose.
While the n for this experiment is small, and the BChE was injected prior to cocaine exposure rather than after, these results suggest that the mutant BChE has potential as a therapy for cocaine overdose in humans.

Obviously, further improvement would be needed before these protective effects could be realized in humans. To match the dose used in this experiment, a 180-pound man would need to be injected with 82 mg of the protein, which is a rather large amount. However, if used in conjunction with existing treatments, the required dose of BChE may be lower. If not, then translating these results into a useful therapy will require either further catalytic optimization or an enormous production effort. A significant amount of additional clinical research is required before this or any other mutant of BChE is introduced as a therapy for overdose or addiction. Nonetheless, these results illustrate the promise of enzyme optimization and design as a tool for medicine in the future.

Fang Zheng, Wenchao Yang, Mei-Chuan Ko, Junjun Liu, Hoon Cho, Daquan Gao, Min Tong, Hsin-Hsiung Tai, James H. Woods, Chang-Guo Zhan (2008). Most Efficient Cocaine Hydrolase Designed by Virtual Screening of Transition States. Journal of the American Chemical Society. DOI: 10.1021/ja803646t
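The 82 mg figure above follows from straight per-body-weight scaling of the mouse dose. The ~30 g mouse mass is my assumption (it is not stated in the post), and real allometric dosing would use body-surface-area scaling instead, so this is only the back-of-envelope version:

```python
def scaled_dose_mg(mouse_dose_ug, mouse_mass_g, human_mass_kg):
    """Linear body-weight scaling; note that 1 ug/g equals 1 mg/kg,
    so ug/g times kg comes out directly in mg."""
    mg_per_kg = mouse_dose_ug / mouse_mass_g  # dose density
    return mg_per_kg * human_mass_kg

human_kg = 180 * 0.4536                       # 180 lb in kg
dose = scaled_dose_mg(30.0, 30.0, human_kg)   # assumes a ~30 g mouse
print(f"{dose:.0f} mg")                       # ~82 mg, matching the estimate above
```

The assumed mouse mass dominates this estimate: a 25 g mouse would push the scaled human dose toward 100 mg, which is part of why the post calls 82 mg "a rather large amount" rather than a firm number.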


  • August 16, 2008
  • 03:00 PM
  • 957 views

Two great mechanisms that go great together

by Michael Clarkson in Conformational Flux

The watery interior of a cell is separated from the watery exterior by a thin double layer of lipids called the plasma membrane. The oily interior of this membrane prevents water and charged molecules from escaping the cell, while allowing hydrophobic (oil-like) molecules through. This system has many significant advantages, but cells frequently need to move charged atoms (ions) across the membrane. This job is primarily performed by two kinds of proteins: channels that create specialized tunnels through the membrane, and transporters that mechanically transfer ions across it. These are distinct activities, usually carried out by different families of proteins. Recent results in the ClC family of membrane proteins, however, have demonstrated that these activities are not as distant from each other as it might seem.

Ion channels basically work like tiny pipes that stick through a cell membrane. They have interior pores that are full of water, and usually possess some form of selection mechanism that lets only a particular kind of ion through. Although they can be gated — opened or closed by particular voltage states or molecular signals — they can only move ions with an electrochemical gradient. That is to say, they can only allow their particular ions to move across the membrane in a way favored by both concentration and voltage. By contrast, transporters physically translocate ions without using a watery pore. This allows them to move ions against a concentration gradient as long as they have an energy source, such as another ion's concentration gradient.

The cave drawing at right shows a simplified situation. A membrane divides two compartments, one of which has a high concentration of negative ions (red), while the other has a high concentration of positive ions (blue).
The selective channel (C) can only allow negative ions to move from the high to low concentration in this situation, because the concentration and electrical gradients oppose a movement in the opposite direction. The transporter (T), on the other hand, can push negative ions out into the area where they have a high concentration. It does this by simultaneously transporting a positive ion from high to low concentration; it uses the favorable energetics of this transport event to power the unfavorable one. Of course, there are many different kinds of transporters -- the example here is a symporter, but there are also antiporters or exchangers.

Aside from dealing in ions, these two kinds of activity might seem to have little in common, and in fact most ion channels are not very closely related to ion transporters. However, the ClC family of chloride channels is unusual in that it also includes several members that are transporters. Because all these proteins are presumed to have similar structures in the membrane, there is considerable interest in understanding the key differences that separate ClC channels from ClC transporters. In the case of these proteins, it seems that the line between transporter and channel is easily blurred.

You got transporter in my channel!

The ClC-0 protein from the electric ray Torpedo marmorata has always been classified as a gated channel, but an odd one. Unlike many channels, it has two pores in its active configuration. These pores are closed off or gated by two processes. The fast gating occurs on the millisecond timescale and opens and closes the two pores independently of one another. The slow gating occurs on the timescale of seconds and opens and closes both pores.

The odd thing is that there is a thermodynamic imbalance in this system.
All things being equal, we expect that if we track the number of pores that are open over time, we should see the pattern 1 → 0 → 2 → 1 (called J+) with the same frequency that we see the pattern 1 → 2 → 0 → 1 (J-). Instead, researchers have found that J+ is observed more frequently than J-. Originally it was thought that the electrochemical gradient of chloride controlled this asymmetry, but this model didn't quantitatively match the observations. Now Lísal and Maduke have shown that the asymmetry arises from the proton electrochemical gradient, and that this is a molecular vestige of ClC antiporter function (1).

The evidence for this comes from an experiment in which the chloride electrochemical gradient was held constant while the proton electrochemical gradient was changed. If the chloride gradient controls the asymmetry, we should see no differences in the J+/J- ratio during the experiment. Instead, it was observed that switching the direction of the proton gradient changed the behavior of the channel gating from almost exclusively J+ to an even mixture of J+ and J-. Further experiments demonstrated that the J+/J- ratio was proportional to the proton electrochemical gradient, and that it leveled off at 1 when the direction of the gradient was changed to favor movement of protons from inside the cell to outside the cell. This suggests that the gating mechanism is unidirectional.

Because the proton gradient provides energy to produce gating asymmetry, ClC-0 must be a proton transporter as well as a chloride channel. The authors suggest that this reflects a vestigial antiporter activity (similar to several existing members of the family) that has been repurposed as a regulatory mechanism for channel gating. Although channels have very different thermodynamics from transporters, it appears that a single protein can have characteristics of both.

You got channel in my transporter!

But how does an antiporter transform into a channel?
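The asymmetry measurement itself is easy to sketch. Given a record of how many pores are open at each instant, tallying J+ cycles (1 → 0 → 2 → 1) against J- cycles (1 → 2 → 0 → 1) is just a pattern scan. A minimal illustration, using an invented trace rather than real gating data:

```python
def count_cycles(trace):
    """Count J+ (1->0->2->1) and J- (1->2->0->1) gating cycles in a
    sequence of open-pore counts drawn from {0, 1, 2}."""
    jp = jm = 0
    for i in range(len(trace) - 3):
        window = tuple(trace[i:i + 4])
        if window == (1, 0, 2, 1):
            jp += 1
        elif window == (1, 2, 0, 1):
            jm += 1
    return jp, jm

# An invented trace biased toward J+ cycles, as seen with an inward
# proton gradient:
jp, jm = count_cycles([1, 0, 2, 1, 0, 2, 1, 2, 0, 1, 0, 2, 1])
print(jp, jm)  # J+ outnumbers J- in this toy trace
```

In a real single-channel record the elements would be dwell states rather than raw samples, but the bookkeeping is the same. Back to how an antiporter turns into a channel: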
Because the structures of these different types of proteins tend to be quite dissimilar, it seems unlikely that the transition between these mechanisms could be accomplished with just a few point mutations. However, sequence homology in a family of proteins usually implies structural homology (with some exceptions), so the ClC channels probably evolved from antiporters in this fashion. This could happen by the destruction of gating mechanisms in the transporter.

In the Cl-/H+ antiporter CLC-ec1, a central binding site for chloride is blocked by two gates. One side is blocked off by Glu148, and the other is putatively formed by the interaction of Tyr445 and Ser107. In principle, removing these gates could give rise to a watery passage through the membrane, and high transport rates similar to those of ion channels. Jayaram et al. therefore performed a series of mutations at these sites and measured the effect on transport rates (2).

From previous work it was known that altering these residues uncoupled chloride transport from proton transport, which is one step towards becoming a channel. Here, Jayaram et al. find that mutating Tyr445 to a smaller side chain does not have a significant effect on chloride transport rates. Mutating Glu148 to smaller residues actually decreases the chloride transport rates. Mutating both residues, however, leads to a 100-fold increase in chloride transport over the wild-type protein. The fastest mutant, moving chloride ions at a rate of more than 35,000 /s, is not quite as fast as a channel. Still, the acceleration is significant.

Jayaram et al. found that increasing the size of the substituted side chain at either site decreased the rate. This is what you would expect for a simple case of larger side chains leading to more constriction of a channel. In order to confirm that there was such a channel, they crystallized one of the mutants and analyzed it to determine whether there was a continuous pathway that water could permeate.
Below, you can see a figure I shamelessly stole from their paper, with the WT protein on the left and the E148A/Y445A mutant on the right. The red dots represent a surface made with a 1.4 Å probe designed to mimic water. As you can see, the probe can go all the way through the A/A mutant, but is stopped by the intracellular gate in WT. The double mutant is a tight, but genuine, channel.

Because these residues are conserved in both channel and antiporter members of the ClC family, mutations like these are not likely to be the means by which one kind of protein evolves from the other. Nonetheless, they establish that the crystal structure of CLC-ec1 is likely to be a good model for occluded states of gated ClC channels. Moreover, the ease with which a specialized ClC transporter is made into a channel suggests that a progenitor protein could have switched from antiporter to channel in just a few mutations.

The thermodynamics of ion translocation by channels and transporters are quite different, so it was a surprise to discover that the ClC family contained both kinds of activities. These recent papers show that the entanglement is even closer than previously thought. The ClC channel ClC-0 still retains vestigial proton transport activity, and the ClC antiporter CLC-ec1 is only a few mutations away from becoming a channel itself. These findings suggest that substantial changes in operating thermodynamics may result from small evolutionary steps, and point to a shared antiporter past for members of the ClC family.

1. Jiří Lísal, Merritt Maduke (2008). The ClC-0 chloride channel is a 'broken' Cl−/H+ antiporter. Nature Structural & Molecular Biology, 15 (8), 805-810. DOI: 10.1038/nsmb.1466

2. H. Jayaram, A. Accardi, F. Wu, C. Williams, C. Miller (2008). Ion permeation through a Cl--selective channel designed from a CLC Cl-/H+ exchanger. Proceedings of the National Academy of Sciences, 105 (32), 11194-11199. DOI: 10.1073/pnas.0804503105

Disclaimer: I work on the same floor as all these guys.

Jiří Lísal, & Merritt Maduke. (2008) The ClC-0 chloride channel is a 'broken' Cl−/H+ antiporter. Nature Structural & Molecular Biology, 15(8), 805-810. http://www.nature.com/doifinder/10.1038/nsmb.1466

H. Jayaram, A. Accardi, F. Wu, C. Williams, & C. Miller. (2008) Ion permeation through a Cl--selective channel designed from a CLC Cl-/H+ exchanger. Proceedings of the National Academy of Sciences, 105(32), 11194-11199. http://www.pnas.org/cgi/doi/10.1073/pnas.0804503105

  • August 14, 2008
  • 07:00 AM
  • 1,084 views

How media resemble real life in your head

by Michael Clarkson in Conformational Flux

How does the human brain react to the communication of emotion? Does the observation or imagination of emotions have anything in common with the personal experience of them? It is possible that the brain uses a setup in which seeing a person experience an emotion, imagining that emotion, and feeling that same emotion all use completely independent circuitry. Yet since all of these experiences make references to the same emotional state, it is also reasonable to think that some of the pathways are shared. In a recent article from PLoS ONE, a team of researchers uses functional Magnetic Resonance Imaging (fMRI) to determine similarities and differences in the patterns of brain activation following various means of communicating disgust. PLoS ONE is open access, so go ahead and open the article up in another window.

First, a word about fMRI, for those unfamiliar with it. As the name would suggest, fMRI is an elaboration of the standard MRI techniques used to image the interior of your body without the use of potentially harmful radioactivity. Neuronal activity in the brain causes a local depletion of oxygen from the blood, followed by a localized increase in blood flow. Because the magnetic properties of iron in the blood change with its oxygenation state, it is possible to detect these hemodynamics using magnetic resonance imaging. Thus, fMRI is able to indirectly detect neural activity, although the fMRI signal lags behind activity by a few seconds. A given fMRI signal also encompasses a large number of individual neurons and therefore can only serve as a rough map to where things are happening in the brain. These temporal and spatial limitations limit the conclusions that can be drawn reliably from fMRI, but the observed correlations can provide valuable insights.

Jabbi et al. used fMRI to map the neural response of subjects to various encounters with disgust.
Previous research had shown that a particular region of the brain (the IFO) showed increased activity when subjects either tasted something disgusting, or viewed a short clip of someone else tasting something disgusting. For this study, Jabbi et al. had participants read short scripts (samples can be found in the supplementary materials) intended to make the reader imagine being disgusted, pleased, or not feeling anything. They found that reading disgusting passages induced a neural response in this region of interest, just as it had for the cases of tasting or observing disgust.

While this may seem completely unsurprising, it bears some consideration. The experience of personal disgust differs significantly from the experience of observing disgust in others. Similarly, imagining or reading about disgust creates a very different subjective experience than, say, drinking quinine. Given that these are all quite different feelings, it is somewhat surprising that a single area is activated by all three.

Of course, there is a fine line to consider here -- the passages meant to make the subjects imagine disgust may have actually disgusted them. The paragraphs that the authors make available in the supplementary materials are written in second person and involve things like accidentally ingesting animal waste. Because the subjects are reading passages that ask them to imagine themselves being disgusted, and the passages are themselves disgusting, the act of imagination may be contaminated by an immediate personal experience of disgust. In a more elaborate experiment it might be of value to use passages written in the third person.
Additionally, it might be useful to employ passages in which the characters, because of particular phobias or personal experiences, are disgusted by items or actions the reader is likely to find innocuous.

Whether the readers were themselves disgusted or not, the overall response in the brain differed for each of the stimuli, as shown by a map of correlated activity (Figure 2). While the area outside the IFO activated by observation was relatively small, both the disgusting taste and the disgusting scripts produced widespread activity relative to a neutral taste or script. In general there was not much overlap between the networks, except for a small region shared by the imagination and experience groups. The authors propose that the similarities of imagining, observing, and experiencing emotion are due to the common activation of the IFO, while the differences between these are due to the largely distinct networks of correlated activity. Different modes of exposure to disgust may therefore act in complementary, rather than independent, ways.

Additionally, this result appears to be consistent with the view that our recognition of observed disgust and our imagination of disgust rely on an internal simulation of our own feelings of disgust. However, these experiments cannot establish exactly what a particular region of the brain is doing, so this remains an open question.

While this research does not indicate whether these results can be generalized to other emotional states, this finding may interest developers of media that make use of multiple modes of communication, specifically video games. Games often rely on video cutscenes to convey story and emotion, but this approach may be wasting a significant amount of potential. The participatory nature of games makes it possible to approach emotional communication not only through the observational route, but also the experiential route.

Consider the case of Agro's fall in Shadow of the Colossus.
Observing the cutscene, and hearing the voice of Wander, the player can understand that Wander feels grief at this event, in much the same way that anyone watching a movie could understand it. Additionally, the emptiness of the game's landscape and the forced collaboration between the player and the Agro AI have helped to create a relationship between the player and the horse. Thus, in observing Agro's fall, the player may feel his own sense of grief at the event, increasing the emotional resonance of the moment.

This suggests a possible, if lengthy, experiment. It would be interesting to compare the fMRI profiles of subjects observing Agro's fall under two conditions: one in which they have actually played the game up to that point, and another in which they have watched the game as a movie, with exploration and battles recorded previously from an expert player's run. Would the first group have activity in both the observational and experiential networks, or would each group activate a different network? What implications might these outcomes have for the development of emotionally fulfilling games?

Of course fMRI studies are not some holy grail that makes everything clear. The work of Jabbi et al. has given us a rough map to where things are happening, but understanding exactly what is happening and how it is happening will require additional experiments and possibly new investigative techniques. Nonetheless, this is an interesting piece of the puzzle, and perhaps some food for thought.

Mbemba Jabbi, Jojanneke Bastiaansen, Christian Keysers (2008). A Common Anterior Insula Representation of Disgust Observation, Experience and Imagination Shows Divergent Functional Connectivity Pathways. PLoS ONE, 3 (8). DOI: 10.1371/journal.pone.0002939

  • August 8, 2008
  • 08:00 AM
  • 767 views

Do conformational changes precede or follow binding?

by Michael Clarkson in Conformational Flux

The binding of a ligand to a protein rarely occurs with the simplicity of a block sliding into an appropriately-shaped hole. Protein and ligand often engage in complementary conformational changes to adapt their shapes to each other. As a result, the structure of a protein bound to its target may differ substantially from the structure of the free protein. Unfortunately, it is virtually impossible to view the binding process in fine structural detail; as a result, most of our knowledge comes from the relatively stable bound and free states. Improving biophysical techniques, however, have brought a change in the way we view some binding events.

Most alterations of conformation during a binding event have historically been interpreted using the induced fit model. In this view, the protein stably maintains the free or "open" structure until it comes into contact with a ligand molecule. This encounter stimulates a conformational change so that the protein adopts the "closed" conformation that tightly holds onto the ligand. Thus, the ligand induces the conformational change necessary to form the bound, closed (BC) structure from the unbound, open (UO) structure, and the intermediate on this path is some kind of bound, open (BO) structure. This model is physically reasonable and has been very successful in interpreting many systems.

However, for the past few decades an increasing amount of evidence has suggested that this is not the whole story. NMR investigations indicated that instead of remaining in a single, well-defined backbone conformation most of the time, many proteins experienced significant changes in their structure while floating free in solution. These results suggested an alternative mechanism of population shift. In this view, the protein actually samples the "closed" conformation (or something very similar) while unbound, and it is this conformation that binds to the ligand.
We still go from UO to BC, but now the intermediate is an unbound, closed (UC) structure.

This sounds very arcane, but it is not without functional relevance. Consider, for instance, a protein that is activated by a particular ligand. If we wish to make a drug that binds exclusively to the BC form, then we may experience unforeseen side-effects if our target protein occasionally samples a UC state. It would be useful to have a general idea of what kinds of circumstances are likely to favor a population shift model vs. an induced fit model. That is precisely what Kei-Ichi Okazaki and Shoji Takada aim to provide in an upcoming paper in Proceedings of the National Academy of Sciences (1).

Okazaki and Takada performed a coarse-grained molecular dynamics simulation of glutamine binding protein. In the bound and unbound states they employed a double-well Gō model, a simplified representation of molecular forces, to represent "opening" and "closing". To switch between these states (i.e. to represent binding) they used a Monte Carlo algorithm. This approach has the advantage of being quick and relatively inexpensive from a computational standpoint, but the results must be interpreted cautiously because the physics of the model are greatly simplified. They observe UO ↔ UC and UC ↔ BC events in this system, but they also observe UO ↔ BO and BO ↔ BC events. This suggests that the simulation will be able to make predictions about both population-shift and induced-fit mechanisms.

In order to try to make some predictions about the circumstances in which a particular mechanism is favored, Okazaki and Takada varied the strength and range of the binding interaction. By monitoring whether the simulated system entered the BC state from BO or UC, they could tell whether the system obeyed the induced-fit or population-shift mechanism, respectively. They find that as either the strength or the range increases, the induced-fit mechanism is increasingly favored (Figure 4).
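To give a flavor of how trajectories get classified by route, here is a drastically reduced sketch in the same spirit as, but much simpler than, the authors' model: the four states, the invented energies (in units of kT), and the Metropolis acceptance rule below are all illustrative assumptions, not values from the paper. Each walk starts at UO and runs until it reaches BC; the state visited just before BC labels the route as population shift (via UC) or induced fit (via BO).

```python
import math
import random

random.seed(0)  # reproducible toy run

# Invented free energies (kT = 1) for the four states.
ENERGY = {"UO": 0.0, "UC": 2.0, "BO": 1.0, "BC": -3.0}

# Allowed moves: conformational (O <-> C) and binding (U <-> B) steps.
NEIGHBORS = {
    "UO": ["UC", "BO"],
    "UC": ["UO", "BC"],
    "BO": ["UO", "BC"],
    "BC": ["UC", "BO"],
}

def run_until_bound(start="UO"):
    """Metropolis walk from UO until BC is reached; return the state
    visited immediately before BC ("UC" = population shift,
    "BO" = induced fit)."""
    state, last = start, None
    while state != "BC":
        proposal = random.choice(NEIGHBORS[state])
        dE = ENERGY[proposal] - ENERGY[state]
        if dE <= 0 or random.random() < math.exp(-dE):
            last, state = state, proposal
    return last

routes = [run_until_bound() for _ in range(1000)]
print(routes.count("UC"), "population-shift vs", routes.count("BO"), "induced-fit")
```

Lowering the invented UC energy (mimicking a protein that readily samples the closed form while unbound) tilts the counts toward the population-shift route, which is the qualitative kind of experiment Okazaki and Takada ran by varying binding strength and range.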
These results make sense. If the protein regularly samples the closed state while unbound, then the amount of energy needed to reach that state is probably small, so it makes sense to see a population-shift mechanism associated with low-energy binding. Similarly, if a ligand is to associate productively with a non-optimal protein conformation, it makes sense that key interactions will be effective at long range.

From these results Okazaki and Takada suggest that the binding of small hydrophobic ligands is generally likely to proceed via population shift, while the binding of large, charged ligands (such as DNA) will likely proceed via induced fit. They acknowledge, however, that the simulation is limited, particularly in its view of conformational change. Unitary transitions in which the whole protein changes its structure simultaneously are probably not the norm, particularly in the case of very large conformational changes. These changes may instead be stepwise or hierarchical. For instance, a protein or complex recognizing multiple features of a DNA strand may proceed by an apparently induced-fit mechanism, even though each individual binding event more closely resembles population-shift behavior.

An additional limitation of this study is that it considers only one protein, but mechanisms of binding and conformational change may be idiosyncratic properties of particular folds. One could consider the behavior of lymphotactin, which displays clear hallmarks of the population-shift mechanism despite binding to macromolecules (heparin and a GPCR) much larger than itself, as a counterpoint to the predictions developed here. Similarly, the population shift of NtrC involves a charged phosphate group likely to have long-range interactions, although this is a post-translational modification and not a strict ligand-binding event. While the authors point to some examples that match their expectations, overall the data are not unanimously in support of their predictions.
Still, the general rules laid out here provide a starting point for experimental work.

Despite the limitations of the simulation, it provides a relatively efficient tool for assessing these processes in other proteins. While no simulation can yet replace experimental data, coarse-grained models like this can serve as a means to formulate testable hypotheses about the energetics of protein-ligand systems.

1. Okazaki, K., Takada, S. (2008). Dynamic energy landscape view of coupled binding and protein conformational change: Induced-fit versus population-shift mechanisms. Proceedings of the National Academy of Sciences, 105 (32), 11182-11187. DOI: 10.1073/pnas.0802524105

  • August 1, 2008
  • 11:00 PM
  • 721 views

NSAIDs vs. Alzheimer's: Multiple modes of action?

by Michael Clarkson in Conformational Flux

Loads of interesting stuff is going on in Alzheimer's research right now. While the hot news is about a trial showing significant benefits from going after tau tangles, a recent paper in PLoS ONE continues to investigate the pathology of the amyloid-β peptide. As I've mentioned in previous posts, cleavage of the amyloid precursor protein to a 42-residue peptide (called Aβ1-42 in this paper) initiates the formation of peptide oligomers and eventually plaques. Recent research has indicated that these oligomers are sufficient to cause the development of Alzheimer's disease, but the mechanism by which they do so remains uncertain. Sara Sanz-Blasco and colleagues show that Aβ oligomers disrupt calcium homeostasis in neurons, damaging the mitochondria and promoting apoptosis, and that certain NSAIDs can suppress these adverse mitochondrial effects (1). PLoS ONE is open access, so go ahead, open the article up in another window, and follow along.

Although the appearance of plaques and neuronal death are classic hallmarks of Alzheimer's pathology, the relationship between these features is not well understood. For instance, it is possible that the plaques themselves kill neurons or impair neural function. However, it seems equally likely that the appearance of plaques and the death of neurons are two distinct effects with a single cause. This view is supported by the oligomer toxicity study, but that study fails to resolve the question of exactly how Aβ oligomers kill neurons. Previous work has associated Aβ with derangement of cellular calcium (Ca2+) management — a 2005 paper by Demuro et al. (2) showed that soluble Aβ induced an increase in intracellular Ca2+ in a neuroblastoma cell line. Sanz-Blasco et al. therefore decided to directly test whether Aβ oligomers were increasing Ca2+ levels in neurons, and specifically in mitochondria. In order to do this last bit they used a low-affinity aequorin targeted specifically to mitochondria.

Allow me to digress...
to many of my readers that probably sounds like a terrible idea. If you're trying to detect a particular chemical in the cell, it seems like the best thing to do would be to get a high-affinity binding partner. And if figuring out whether there is any calcium in the mitochondria is what you want to do, then a high-affinity detector makes sense. However, when you're using a small amount of a sensor to detect changes in the concentration of a large amount of ligand, a low-affinity sensor is what you want.To see why, take a look at the graph on the right. This is just a rough calculation based on a situation where the detector is at a concentration of 100 µM and the concentration of its ligand (that you're trying to detect) changes from 10 mM to 100 mM. Note that the concentration of the detector is at most 1% that of the ligand. If the dissociation constant KD of this complex is 1 mM (blue) (a lower KD means higher affinity), then the detector is almost saturated when you start, and the percentage occupied doesn't change very much over the course of the experiment. This means that it will be very difficult to tell the difference between, say, 50 mM ligand and 100 mM ligand, because that amounts to a signal difference of 1% of the maximum response. The situation gets a little better if the KD is 10 mM (green). The lowest affinity detector here (KD = 50 mM, red) actually does the best job of distinguishing between 50 mM and 100 mM ligand, because the difference in response amounts to 17% of the total dynamic range. Ideally, you want to tune the KD of your detector in such a way that its response to changes in ligand concentration is large and linear over the range you are likely to be observing. For the last detector, this range lies between 10 and 40 mM of ligand, so that would likely be the ideal range to investigate with it.The precise numbers are different in the present paper, but the principle is the same. 
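That back-of-the-envelope calculation is easy to reproduce. For a simple one-site detector, the fraction occupied is [L] / ([L] + KD), valid while the detector is scarce relative to its ligand; the concentrations below match the illustrative ones above:

```python
def occupancy(ligand_mM, kd_mM):
    """Fraction of a one-site detector occupied at a given free-ligand
    concentration (detector assumed scarce relative to ligand)."""
    return ligand_mM / (ligand_mM + kd_mM)

for kd in (1.0, 10.0, 50.0):  # mM; a lower KD means higher affinity
    change = occupancy(100.0, kd) - occupancy(50.0, kd)
    # Signal change between 50 mM and 100 mM ligand, as a fraction of
    # the detector's full dynamic range:
    print(f"KD = {kd:4.0f} mM: response change = {change:.3f}")
# -> roughly 0.010, 0.076, and 0.167: the lowest-affinity detector
#    distinguishes the two concentrations best
```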
The affinity you want in your detector will depend on what you are trying to detect and the circumstances under which you are trying to detect it. In this case, the researchers are trying to measure changes in calcium ions, which reach pretty high concentrations in mitochondria, over a fairly wide range, and they're doing it using a luminescent protein, which isn't very concentrated. As a result, a relatively low-affinity detection system is best.

So, what did they find? The results in Figure 1 show that Aβ oligomers and fragments cause an influx of calcium into the cytoplasm of cultured neurons, but preparations of Aβ fibrils did not cause this effect. Moreover, exposure of the cells to Aβ oligomers caused a clear influx of calcium into the mitochondria (Figure 3). This is a problem for a cell because Ca2+ overload in mitochondria can cause programmed cell death, or apoptosis. Using the classic TUNEL assay, the authors of this study showed that the Aβ oligomers caused apoptosis. In addition, they showed that treatment with the oligomers caused the release of mitochondrial cytochrome c (a step in the apoptotic pathway) and that the addition of cyclosporin A, which inhibits the release of proteins from the mitochondrion, blocked cell death (Figure 4). Together, these pieces of evidence support the idea that Aβ-induced Ca2+ influx into the mitochondria activates the apoptotic cascade, leading to neuronal death. These results are consistent with a very cool study published this week in Neuron (3) showing that amyloid plaques correlated with high neuronal Ca2+ levels in vivo (in live mice).

On its own this is pretty interesting, but Sanz-Blasco et al. push it a bit further. Because they had shown previously that some NSAIDs prevent mitochondrial Ca2+ uptake in a cancer cell line, they decided to find out if they would work in this instance, too.
As you can see from Figure 6, the three NSAIDs tested kept the mitochondria calcium-free, even if the cells were treated with Aβ oligomers. NSAIDs also prevented cytochrome c release and cell death (Figure 8).

Some readers may recall that Kukar et al. showed that certain NSAIDs prevent oligomerization of Aβ1-42, hinting at a possible explanation of these results. However, the controls performed by Sanz-Blasco et al. show that under the conditions of these experiments the NSAIDs they used have no effect on cytosolic Ca2+ concentrations (Figure 7). If it is amyloid oligomers that let Ca2+ through plasma membranes, then this would appear to rule out structural disruption as a mechanism. Instead, Sanz-Blasco et al. propose that these NSAIDs specifically alter the polarity of the mitochondrial membrane in such a way as to prevent Ca2+ uptake.

If this is true, then NSAIDs may be able to perform a double-whammy on Alzheimer's disease. On the one hand, they appear to be capable of altering Aβ cleavage patterns to reduce the formation of toxic oligomeric precursors. In addition, they appear to have an ability to block mitochondrial breakdown and subsequent apoptosis directly. While this is encouraging, and speaks to the value of pursuing refinements of existing NSAIDs as possible Alzheimer's treatments, this experiment doesn't necessarily prove any therapeutic value. Even if the neurons are saved from death, the calcium flood may impair their function to such a degree that their continued survival doesn't matter. Only clinical trials and further research can firmly establish whether current or optimized NSAIDs can provide significant protection against Alzheimer's disease.

1. Sara Sanz-Blasco, Ruth A. Valero, Ignacio Rodríguez-Crespo, Carlos Villalobos, Lucía Núñez (2008). Mitochondrial Ca2+ Overload Underlies Aβ Oligomers Neurotoxicity Providing an Unexpected Mechanism of Neuroprotection by NSAIDs. PLoS ONE, 3 (7). DOI: 10.1371/journal.pone.0002718 OPEN ACCESS

2. A. Demuro, E. Mina, R. Kayed, S.C. Milton, I. Parker, C.G. Glabe (2005). Calcium Dysregulation and Membrane Disruption as a Ubiquitous Neurotoxic Mechanism of Soluble Amyloid Oligomers. Journal of Biological Chemistry, 280 (17), 17294-17300. DOI: 10.1074/jbc.M500997200 OPEN ACCESS

3. K. Kuchibhotla, S. Goldman, C. Lattarulo, H. Wu, B. Hyman, B. Bacskai (2008). Aβ Plaques Lead to Aberrant Regulation of Calcium Homeostasis In Vivo Resulting in Structural and Functional Disruption of Neuronal Networks. Neuron, 59 (2), 214-225. DOI: 10.1016/j.neuron.2008.06.008

Sara Sanz-Blasco, Ruth A. Valero, Ignacio Rodríguez-Crespo, Carlos Villalobos, & Lucía Núñez. (2008) Mitochondrial Ca2+ Overload Underlies Aβ Oligomers Neurotoxicity Providing an Unexpected Mechanism of Neuroprotection by NSAIDs. PLoS ONE, 3(7). http://dx.doi.org/10.1371/journal.pone.0002718

  • July 14, 2008
  • 09:30 PM
  • 1,034 views

Microwave Pfu CelB 5 minutes for highest activity

by Michael Clarkson in Conformational Flux

Like many bachelors, I regularly eat meals heated up in a microwave oven. I'd like to think I eat a somewhat lower percentage of frozen dinners than others in my situation, but even when it comes to food I cooked myself I usually don't have the time or patience to cook a recipe for one every night. That means I'm often eating reheated leftovers from the trusty Radar Range. The microwave oven works using dielectric heating, a process in which the movements of dipolar bonds are coupled to an oscillating external field. Water, for instance, is a molecule with a large dipole moment, and when you cook something in a microwave a lot of the heating occurs by the accelerated movements of water molecules. In principle this kind of excitation should also occur for other kinds of polar molecules, and thus we come to an interesting study from the lab of Alexander Deiters, in which microwave radiation was used to activate a cellulase from a hyperthermophile.

Protein backbones consist of a series of peptide bonds which include a carbonyl group, a classic example of a polar bond. Naturally, one might expect that the motion of these groups would be excited by microwave radiation. However, it does not directly follow that additional motion of the peptide backbone will actually accelerate chemical reactions, because this motion may be chaotic or unproductive. Moreover, from the fact that microwaves cook things (like eggs), we know that microwave radiation does a good job of denaturing proteins, sometimes at lower temperatures than we expect. Both of these problems can conceivably be avoided by studying a hyperthermophilic protein.

Proteins from hyperthermophiles such as Pyrococcus furiosus tend to be stable and optimally active at very high temperatures, at or even exceeding the boiling point of water. At lower temperatures, they retain their stability, but tend to become inactive.
In many cases this reduction in activity appears to result from squelching internal motions that may be necessary to bind or properly orient substrates. Young et al. decided to study the β-glucosidase CelB from P. furiosus as a way of understanding whether microwaves might enhance enzymatic catalysis. Because CelB has optimal activity at 110° C, it should be possible to see a significant difference in activity if microwave activation works. The stability of this protein at high temperatures also suggests that you will not accidentally cook it.

Sharp readers will have noticed an obvious problem with this idea—because heat activates this protein, and microwaves heat aqueous solutions, we must incorporate some kind of control in order to determine the pure effect of the radiation as opposed to the temperature. Young et al. resolve this problem by monitoring the heating of the sample during microwave irradiation, and then using a normal thermal apparatus to match this temperature profile (Figure 2). When a reaction reached 40° C using either heating method, it was quenched by the addition of a basic solution and the concentration of products was measured. Simply heating the Pfu CelB reaction to 40° C produced negligible activity, but microwaving it increased the activity by 4 orders of magnitude (i.e. a factor of 10,000). Less dramatic, but still significant, effects were observed for two other hyperthermophilic enzymes, but an enzyme from a mesophilic organism (the almond) was not activated by microwave irradiation.

That the microwaves caused increased backbone motion was supported by the finding that irradiating Pfu CelB at 75° C caused it to denature; this temperature is well below the normal melting temperature of this enzyme (115° C).
The authors attribute the differences in activation between CelB and the other hyperthermophiles to their lower optimal activity temperatures, but it is also possible that the particular motions enhanced by microwaves are simply not as productive in those molecules. Although all the dipoles should be affected in similar ways by microwaves, they are all oriented differently with respect to each other in the protein molecule. As a result, the induced motion may be chaotic, perhaps in a protein-specific way, and therefore the activation of a particular thermophile may depend on the nature of the motions needed for its catalytic cycle. Enzymes that require large ensemble motions of subdomains, such as adenylate kinase, might not be activated as much as a protein that simply needs to be melted a little. Examining the differences in structural dynamics of enzymes differentially activated by microwaves may be an interesting area of future study.

While microwave activation is unlikely to revolutionize some of the more common uses of hyperthermophilic proteins (e.g. PCR), it does have promise. In ligations, for instance, hyperthermophilic enzymes cannot be used at present because many DNA inserts denature at the optimal active temperature. With microwave activation, it may be possible to employ extremophilic ligases in these reactions, gaining the benefits of their speed and durability without having to worry about accidentally melting your DNA. Depending on the enzymes available, this technique may also prove valuable in improving mobile medical laboratories and developing novel diagnostic tools for field work.

1. Young, D.D., Nichols, J., Kelly, R.M., Deiters, A. (2008). Microwave Activation of Enzymatic Catalysis. Journal of the American Chemical Society. DOI: 10.1021/ja802404g
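The dielectric-heating mechanism described above can be put in rough quantitative terms with the single-relaxation Debye model, in which the loss (heat-generating) part of the permittivity is ε''(f) = (εs − ε∞)·2πfτ / (1 + (2πfτ)²). A minimal sketch, using illustrative textbook values for room-temperature water; the parameters (and the choice of model) are my assumptions, not numbers from the paper:

```python
import math

def debye_loss(f_hz, eps_s, eps_inf, tau_s):
    """Imaginary (loss) part of the Debye permittivity at frequency f_hz.

    This is the component that couples dipole rotation to the oscillating
    field, and so governs how strongly a polar species is heated.
    """
    wt = 2.0 * math.pi * f_hz * tau_s
    return (eps_s - eps_inf) * wt / (1.0 + wt * wt)

# Illustrative (approximate) values for liquid water near 25 C:
EPS_S, EPS_INF, TAU = 78.4, 5.2, 8.3e-12  # static/high-f permittivity, relaxation time (s)

loss_oven = debye_loss(2.45e9, EPS_S, EPS_INF, TAU)  # microwave-oven band
print(round(loss_oven, 1))  # substantial loss: water heats efficiently at 2.45 GHz
```

By this picture any group with a permanent dipole, including the backbone carbonyls, absorbs to some degree at these frequencies; the open question the paper addresses is whether that absorbed energy does catalytically productive work.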

Douglas Young, Jason Nichols, Robert M Kelly, & Alexander Deiters. (2008) Microwave Activation of Enzymatic Catalysis. Journal of the American Chemical Society. DOI: 10.1021/ja802404g  

  • June 26, 2008
  • 11:15 PM
  • 1,106 views

EGCG disrupts amyloid oligomers

by Michael Clarkson in Conformational Flux

As I mentioned in my post on the recent paper by Kukar et al., the disruption of amyloid plaques has been an ongoing focus in Alzheimer's disease research. However, plaques and inclusions are of concern in many diseases, and as a result there is a great deal of interest in finding molecules that can either dissociate, or prevent the formation of, amyloids of many different kinds of proteins. In the most recent issue of Nature Structural and Molecular Biology there is a paper suggesting that (-)-epigallocatechin gallate (EGCG) may be able to interfere with multiple proteins that form β-rich aggregates.

EGCG is a chemical found in green tea (although it is doubtful you could realistically drink enough green tea to absorb the concentrations used in this study). Previous studies had suggested that it altered the aggregation behavior of α-synuclein (αS) and huntingtin. So, Ehrnhoefer et al. use highly purified EGCG in a number of experiments to determine what effect it has on αS and the amyloid-β peptide (Aβ) (1). What these proteins have in common, besides the fact that their aggregation is associated with disease, is that the single proteins take on a β-strand structure that assembles into fibrils.

Ehrnhoefer et al. find that EGCG interferes with some aspect of this process, reducing the formation of the fibrils while inducing the formation of some alternate oligomeric structure. In the case of αS, the result is a spherical oligomer (Figure 1), although the gel filtration results indicate that a large spectrum of oligomeric states is formed at lower concentrations (the trace suggests that these oligomeric forms are interconverting during elution). NMR and other data show that EGCG associates directly, but non-specifically, with the protein backbone, and that the compound has its strongest affinity for the C-terminus of αS, which may play a role in preventing aggregation. The EGCG-treated oligomers had reduced β-strand content (as assayed by CD).
Treatment with EGCG appeared to reduce αS toxicity in cultured cells, although this was measured strictly in terms of cell death.

Similar results were seen with Aβ—addition of EGCG reduced the formation of fibrils and the toxicity of amyloids towards cultured cells. Again, the oligomers formed in the presence of EGCG could be quite large, and took a spherical shape.

Based on these data, the authors propose that EGCG binds preferentially to unfolded proteins and interferes with the formation of regular β-strand structure. The EGCG-bound proteins are unable to form fibrils, and therefore EGCG oligomers compete with fibrils for monomers, slowing the formation of the latter. The net effect is to divert these unfolded proteins out of amyloidogenic pathways and into alternate oligomeric structures, which appear to be nontoxic, or at least less toxic.

Can EGCG or a derivative be turned into a drug to treat Alzheimer's disease, or a general treatment for amyloidoses? This is an uncertain proposition. As the authors of a commentary (2) in the same issue of NSMB note, EGCG's nonspecific assault on amyloids may damage some normal structures built on this architecture. Moreover, because EGCG seems to bind unfolded regions nonspecifically, it has the potential to interfere with any of the numerous signaling proteins that possess such regions. The potential for side effects is very high, and the continued viability of cultured cells in the presence of EGCG, while reassuring, is not a particular reason to believe the compound is safe at high concentrations in the human nervous system.

The promiscuity of EGCG's interactions with unfolded regions poses another problem, in that all these proteins will act to interfere with EGCG's action on its intended target. Fairly high ratios of EGCG to protein were necessary in these assays, and they mostly involved purified proteins.
In vivo, all unfolded proteins will act to titrate EGCG out of plasma, meaning that significant quantities of this (or any other non-specific molecule like it) would need to be used in order to achieve the desired effect. This again raises the likelihood of side effects.

Of course, the most severe complication arises from the nature of amyloidoses themselves. Although the obvious presence of plaques and inclusions naturally leads us to suspect that these aggregates are the agents causing the disease, increasing evidence suggests that amyloids are merely the endpoints of some other process that is the actual culprit. In the case of Alzheimer's disease, for instance, a recent article in Nature Medicine has provided very strong evidence that soluble Aβ dimers are the dominant contributors to Alzheimer's pathophysiology (go check out Ashutosh's excellent discussion of this article over at The Curious Wavefunction for more information). Now, given that these dimers do eventually form amyloid, it seems likely that they have β structure in their pathogenic form, which EGCG will probably disrupt, but this is not guaranteed. Small molecules like EGCG that prevent deposition into amyloid may actually exacerbate the problems they are meant to solve. Further research is needed to establish that the EGCG oligomers of αS and Aβ are not toxic in vivo.

Diseases associated with protein aggregation continue to pose a challenge precisely because we have such a poor handle on their pathogenesis. Ehrnhoefer et al. clearly demonstrate that EGCG possesses the ability to alter the behavior of amyloidogenic unfolded proteins. While that may imply that it has promise as a broad-spectrum drug to attack these diseases, the promiscuity of its action is a cause for concern from the perspective of dosage and side effects.
And, because soluble oligomers may well be the pathogenic species in many (if not all) of these diseases, our optimism about this approach must be tempered with an awareness that the actual effect of EGCG may be to enhance, rather than diminish, the toxicity of the relevant protein targets.

1. Ehrnhoefer, D.E., Bieschke, J., Boeddrich, A., Herbst, M., Masino, L., Lurz, R., Engemann, S., Pastore, A., Wanker, E.E. (2008). EGCG redirects amyloidogenic polypeptides into unstructured, off-pathway oligomers. Nature Structural & Molecular Biology, 15(6), 558-566. DOI: 10.1038/nsmb.1437
2. Roberts, B.E., Shorter, J. (2008). Escaping amyloid fate. Nature Structural & Molecular Biology, 15(6), 544-546. DOI: 10.1038/nsmb0608-544
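The competition model the authors propose can be caricatured as two pathways draining a common monomer pool: elongation into fibrils versus EGCG-mediated sequestration into off-pathway oligomers. Here is a deliberately minimal kinetic sketch; the mass-action scheme and all rate constants are my own invented illustrations, not the authors' model:

```python
def simulate(egcg0, k_fib=1.0, k_bind=5.0, m0=1.0, dt=1e-3, steps=20000):
    """Euler integration of a toy two-pathway scheme.

    Free monomer M is consumed either by fibril growth (rate k_fib * M)
    or by EGCG binding (rate k_bind * M * E); EGCG-bound monomer is
    counted as off-pathway oligomer mass and never re-enters the pool.
    """
    m, e, fib, olig = m0, egcg0, 0.0, 0.0
    for _ in range(steps):
        to_fib = k_fib * m * dt
        to_olig = k_bind * m * e * dt
        m -= to_fib + to_olig
        e -= to_olig          # EGCG is consumed along with the monomer
        fib += to_fib
        olig += to_olig
    return fib, olig

fib_untreated, _ = simulate(egcg0=0.0)
fib_treated, olig_treated = simulate(egcg0=2.0)
print(fib_untreated > fib_treated)  # EGCG diverts mass away from fibrils
```

Even this crude scheme reproduces the qualitative observation: with enough EGCG relative to monomer, most of the protein ends up in the off-pathway state and fibril yield drops.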

Dagmar Ehrnhoefer, Jan Bieschke, Annett Boeddrich, Martin Herbst, Laura Masino, Rudi Lurz, Sabine Engemann, Annalisa Pastore, & Erich E Wanker. (2008) EGCG redirects amyloidogenic polypeptides into unstructured, off-pathway oligomers. Nature Structural & Molecular Biology, 15(6), 558-566. DOI: 10.1038/nsmb.1437

  • June 23, 2008
  • 09:00 PM
  • 660 views

A conformational equilibrium controls the Vav DH domain

by Michael Clarkson in Conformational Flux

One emerging view of allostery, and protein behavior generally, describes function in terms of pre-existing equilibria. In this view, proteins are not like switches that get turned on and off, but rather are like dials that are turned "more on" or "more off" depending on the conditions. Accordingly, regulatory modifications such as phosphorylation do not enforce an active conformation so much as promote it. Because some relaxation measurements are sensitive to conformational exchange, NMR is well-suited to examine systems with this behavior. In the most recent edition of Nature Structural and Molecular Biology, researchers from Michael Rosen's lab discover that this kind of equilibrium governs the behavior of the DH domain from the Vav protein.

Vav activates Rho GTPases by inducing the exchange of GDP for GTP, making it a guanine nucleotide exchange factor (GEF). This activity is performed by the DH domain, and is inhibited by a small neighboring element called the acidic region (Ac). As you can see from the figure of the combined Ac and DH domains (AD) at right, Ac (red) inhibits the DH domain (blue) by forming a helix that binds to the active site (explore this structure at the PDB, noting that the numbering is off by 167). Phosphorylation of Y174 (red side chain) unfolds the helix and exposes the active site, which would be a nice model except for two things. First, as you can see, Y174 is pretty well buried in this structure, which would make it difficult to phosphorylate. Second, mutation of Y174 to phenylalanine, a residue that cannot be phosphorylated, activates the DH domain. How can Y174 get phosphorylated? What is phosphorylation actually doing?

One possible answer is that the existing structure doesn't tell the whole story. A protein, after all, doesn't have just a single structure, but rather an ensemble of structures across a population or time.
While AD may spend most of its time in this inhibited state, it's possible that sometimes it adopts an alternate conformation that allows Y174 phosphorylation. Li et al. set out to assess this possibility using measurements of R2 relaxation dispersion. Residues that have a large field-dependence of transverse relaxation (ΔR2) are undergoing some sort of conformational exchange process that changes their chemical shift between two or more states.

Using CPMG experiments on methyl-bearing side chains, Li et al. identified two groups of residues engaged in conformational exchange processes, shown at left. The first group (orange side chains) has a large ΔR2 that vanishes once Y174 is phosphorylated. The second group (green side chains) has high ΔR2 in both the phosphorylated and unphosphorylated states, but the rates are slightly higher in the former. Residues that were observed to have low ΔR2 are shown with gray side chains. All of this indicates that there are two dynamic processes occurring on the microsecond-millisecond timescale. The first encompasses some change in the chemical shift of the acidic helix, while the second involves some unknown process. However, because the Group 2 residues react to the phosphorylation state, these processes are likely linked in some way. I notice that the Group 2 residues are clustered around loops and joints in the upper half of the domain (in this view), while residues not adjacent to loops or joints do not appear to have significant ΔR2. It is possible that the observed dispersion represents some flexing of the domain around these loops, and that the rate of this motion increases slightly when the binding site is unoccupied.

That's all very interesting, but it's also bad news for the analysis, because it means the observed relaxation dispersion would have to be fit to a four-state model in order to obtain populations and kinetic parameters.
Previous analysis by several groups has shown this to be a dubious proposition, so Li et al. take an alternative approach. Rather than try to fit out populations from the dispersion data, they make a series of mutations to AD to push the populations of the two states in one direction or the other. They find that for a number of mutants the methyl peaks lie on a line between the open (phosphorylated) and bound (unmodified) states. The Y174F mutation lies very close to the phosphorylated state, interestingly enough, implying that the phosphate group itself is not a significant determinant of chemical shifts in the open state. Using a combination of HSQC peak positions and ΔR2 measurements, Li et al. determine for each mutant or modification what population of the ensemble is in the open state. They find that this NMR-assigned population correlates with the rate constant (kcat/KM) for phosphorylation.

This implies a model in which regulation of DH by Ac involves an equilibrium between the bound and open states. In the bound state, Ac forms a helix in the binding site, an effect strongly dependent on a hydrogen bond to the OH of Y174 (R332 may be the partner here). However, Ac still samples the open state about 10% of the time. While in the open state, Ac can be phosphorylated, a modification that prevents helix formation or binding, probably by steric interference in the binding site. This stabilization of the open state dramatically increases the chances that the Vav DH domain will be in an active state when it encounters a target. Thus, DH regulation depends on a population shift of an underlying equilibrium, not a singular on/off switch. This model has the advantage of accounting for how Y174 gets phosphorylated and why a mutation that prevents phosphorylation nonetheless leads to a constitutively activated state.

In vivo, Vav consists of many other domains in addition to the AD construct used here.
These domains are known to contribute to the inhibition of DH; given these results, it is probable that they do so by stabilizing the bound state. Because Ac binding causes the formation of a negatively-charged surface on one side of the helix, charge stabilization is a likely mechanism. Further research will hopefully identify these mechanisms, as well as the origin of the second conformational exchange process revealed by the experiments in this paper. This study is a good example of scientists making the best use of limited data to describe an instance of this important, but difficult to characterize, regulatory mechanism.

1. Li, P., Martins, I.R., Amarasinghe, G.K., Rosen, M.K. (2008). Internal dynamics control activation and activity of the autoinhibited Vav DH domain. Nature Structural & Molecular Biology, 15(6), 613-618. DOI: 10.1038/nsmb.1428
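For readers who have not met relaxation dispersion before: in the fast-exchange limit, the dependence of the effective transverse relaxation rate on CPMG pulsing frequency is often described by the Luz-Meiboom expression, R2,eff(ν) = R2,0 + (pA·pB·Δω²/kex)·[1 − (4ν/kex)·tanh(kex/4ν)]. A sketch with invented two-state parameters (the populations, Δω, and kex below are illustrative, not values from the Vav study):

```python
import math

def r2_eff(nu_cpmg, r2_0, p_a, p_b, dw, kex):
    """Luz-Meiboom fast-exchange CPMG dispersion.

    nu_cpmg: CPMG pulsing frequency (Hz); dw: chemical-shift difference
    between the two states (rad/s); kex: exchange rate (s^-1).
    """
    x = kex / (4.0 * nu_cpmg)
    return r2_0 + (p_a * p_b * dw ** 2 / kex) * (1.0 - math.tanh(x) / x)

# Illustrative two-state system: 90% bound / 10% open, kex = 2000 /s.
params = dict(r2_0=8.0, p_a=0.9, p_b=0.1, dw=600.0, kex=2000.0)

slow_pulsing = r2_eff(25.0, **params)    # exchange broadening fully felt
fast_pulsing = r2_eff(1000.0, **params)  # exchange largely refocused
print(slow_pulsing > fast_pulsing > params["r2_0"])
```

The drop in R2,eff between slow and fast pulsing is the ΔR2 the post refers to, and fitting its shape (ideally at two fields) is what yields populations and kex; this is exactly where a four-state model becomes hopelessly underdetermined.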

  • June 21, 2008
  • 09:00 AM
  • 1,210 views

Structural dynamics of PDZ allostery

by Michael Clarkson in Conformational Flux

If Michele Vendruscolo were trying to get me to blog about one of his papers, he could hardly have assembled a more perfect lure than his upcoming paper in JACS. It brings together all sorts of things I've been talking about on this webpage: NMR dynamics, MD simulations, and dynamics-driven allostery (in the PDZ domain, no less). Previous investigations of this PDZ domain indicated the existence of a network of residues that had a dynamic response to ligand binding. Dhulesia et al. extend this work using molecular dynamics simulations constrained by the existing dynamics results. This leads them to discover not one, but two networks in the PDZ domain, with different properties.

NMR experiments have enormous power to sensitively detect changes in dynamics resulting from a perturbation, but they are also quite limited. Because of the models we use, the parameters we can fit out of relaxation data only give us information about the magnitude and timescale of fluctuations. Chemical shift overlap and interference caused by nearby dipoles limit the number of probes. Moreover, because NMR can only measure an ensemble, it is practically impossible to extract anything other than the most general information about correlated motions. MD has answers to all of these problems, but as a general rule has done poorly at reproducing NMR data about side-chain motions, calling the validity of the conclusions into question. Vendruscolo has taken some interesting strides in this regard by employing the limited experimental dynamics data as a component of the energy function. By constraining the simulation to mimic the known dynamics, we can hopefully learn more about the sites to which we are blind, as well as what kinds of motions the experiment is sensing and how they are linked.

In this instance, the authors make use of the PDZ domain previously studied by Ernesto Fuentes in Drew Lee's lab (there was also some hack working there at the time).
Ernie's research followed on previous evolutionary studies indicating a network of communication in PDZ domains (local summary here), and Ernie found, by comparing the dynamics of the free and ligand-bound states, that changes in motions propagated away from the binding site to two distal surfaces. The pathways of communication compared pretty well with the evolutionary results. Dhulesia et al. aim to extend these results by determining which motions are correlated and identifying the mechanisms by which energy is transmitted. They accomplish this by running multiple parallel simulations of the free and ligand-bound states of the PDZ domain constrained by Ernie's dynamics results, as well as NOE and 3J data.

They find that two regions of the protein have correlated motions internally and move in an anticorrelated fashion relative to each other (Figure 3A). One of these regions consists of part of the binding site and all of distal surface 2 (DS2), while the other includes the other half of the binding site and all of distal surface 1 (DS1). When the ligand binds, something interesting happens. The motions of DS2 become more tightly correlated to the motion of an area around V30. The tight correlation between the motions of DS1 and α2 (an element of the binding cleft) switches to a slight anticorrelation. When a ligand binds to a protein we expect a broad increase in rigidity of the complex so that the proper orientations of bonding pairs are maintained. For the most part, the simulations affirm this expectation, but not for all regions. For the binding site and DS2, the backbone mobility decreases, as expected, but the backbone mobility of DS1 increases (I am going off the text and Table 3 here, rather than Figure 3). The side chains have a similar response. This agrees with other studies indicating that the change in conformational entropy upon binding a ligand need not be homogeneous.
What is more interesting is that these results imply that opposite coherent responses can be induced in a small domain by a single stimulus.

Although (as far as I know) this PDZ domain has no allosteric behavior in vivo, one can imagine that the binding of a ligand at the cleft could alter the binding of other modules to this domain. The entropic penalty for binding to DS2 would be lower in this case, while the penalty for binding to DS1 would be higher. The opposed nature of the dynamic responses may be related to the broad regional anticorrelation of free-state motions; disruption of this mode (by linking the motion of β2 and α2) may shunt that energy into DS1.

The authors also find, using a series of structural parameters, that a set of residues have clear structural changes. Some of them appear to be associated with coupled changes in rotameric states; the authors map out one pathway in Figure 5. Because it is a rotameric pathway, it should be possible to test experimentally whether it is essential to communication—mutation of the intermediary residues should abolish the linkage. The authors also carry out a network analysis to identify the most connected residues, a prediction that may also be testable by mutagenesis. These "structural network" residues overlap only slightly with the dynamic network, and indeed do not generally intersect with the evolutionary network either. In the absence of identified allosteric behaviors or clear energetic connectivities it's difficult to say what this disjunction means. However, the residues undergoing structural changes surround most of the residues undergoing dynamic changes.
It is possible that these changes in structure provide the context that allows the changes in dynamics (or vice-versa); the two properties are inextricably linked.

Although communication between the binding site and distal surfaces has been demonstrated in this PDZ domain, and appears to be a general feature of the fold, the absence of a known function for the propagation in this instance makes it tough to assess the quality of these results. However, the findings of Dhulesia et al. make it clear that this approach can produce testable predictions and explanations. Hopefully this approach will be employed in the near future to study PDZ domains known to possess allosteric properties.

1. Dhulesia, A., Gsponer, J., Vendruscolo, M. (2008). Mapping of Two Networks of Residues That Exhibit Structural and Dynamical Changes upon Binding in a PDZ Domain Protein. Journal of the American Chemical Society. DOI: 10.1021/ja0752080
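The "correlated motions" language above has a concrete computational meaning: from an MD trajectory one builds the normalized covariance (dynamic cross-correlation) matrix of atomic fluctuations, Cij = ⟨Δri·Δrj⟩ / √(⟨Δri²⟩⟨Δrj²⟩), where +1 means two atoms fluctuate in lockstep and −1 means they move in opposition. A generic sketch of that calculation (not the authors' restrained-MD code):

```python
import numpy as np

def cross_correlation(traj):
    """Normalized cross-correlation of atomic fluctuations.

    traj: array of shape (n_frames, n_atoms, 3).
    Returns an (n_atoms, n_atoms) matrix with entries in [-1, 1].
    """
    dx = traj - traj.mean(axis=0)                      # fluctuations about the mean
    cov = np.einsum('tik,tjk->ij', dx, dx) / len(traj) # <dr_i . dr_j>
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

# Tiny synthetic "trajectory": atoms 0 and 1 move together, atom 2 opposes.
t = np.sin(np.linspace(0.0, 10.0, 200))
traj = np.zeros((200, 3, 3))
traj[:, 0, 0] = t
traj[:, 1, 0] = t + 1.0      # a rigid offset does not affect correlation
traj[:, 2, 0] = -t
c = cross_correlation(traj)
print(round(c[0, 1], 3), round(c[0, 2], 3))
```

The anticorrelated blocks in Figure 3A correspond to large off-diagonal negative patches in exactly this kind of matrix.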

  • June 17, 2008
  • 11:00 PM
  • 738 views

Determinants and evolutionary mechanisms of homosexuality

by Michael Clarkson in Conformational Flux

Debates over the rights of homosexuals in the United States, particularly the right to marry, often get hung up on a thoroughly inane point: whether homosexuality is "chosen" or "innate". While this may seem to be a question of moral import, it is not, and moreover it presents a false dichotomy. Like nearly all human behaviors, sexuality is too complex to be reduced to a choice or a destiny; it is neither, or both, depending on your view. However, the degree to which different factors contribute to sexuality, and the mechanisms by which they do this, are fit subjects for scientific inquiry. Two articles this week present interesting findings, sure to be distorted by all sides of the argument, that may prove enlightening in this regard. I will endeavor, along with others, to be a resource providing an unbiased view.

First, however, a plea for sanity. If science were somehow to find that sexual orientation is entirely chosen, or entirely innate, it would not matter to any debate over the rights of homosexuals. Men may have an innate tendency to try to spread their genes as widely as possible, but we would not forgive adultery on this basis. Toddlers have an innate tendency to become frustrated and throw tantrums, but we still make them sit in the corner. That a behavior is innate is not a basis for withholding moral judgment. And if sexual orientation is a choice? Well, we frequently forbid discrimination on the basis of chosen behaviors—religion, for instance, or political affiliation. What is truly at issue is not choice, but whether it is just to deny rights and protections to one group of citizens for no reason beyond the moral opprobrium of another group.

Thus, the question of rights for homosexuals does not depend, one way or the other, on whether people choose to be gay. I firmly believe one side of this question to be in the right, but this opinion is not informed by my scientific knowledge, because it cannot be.
I would urge my readers (all three of you) to view these results strictly as what they are: interesting scientific findings related to a political question that do not support one side of the argument or the other. I would ask advocates for both sides to refrain (for once) from distorting the conclusions of these reports, not only because of the raw immorality of lying, but because by misrepresenting these findings they will have sacrificed their integrity for no gain in the debate.

There, I feel better now. On to the science!

In the first study, a team analyzed the results of a survey of Swedish twins in hopes of parsing out the relative contributions of heredity, shared environment, and unshared environment in shaping sexuality (1). Although the survey was answered by a fairly large number of twins, the authors draw their conclusions using two questions that do not directly ask for sexual orientation. The survey only requested information about actual sexual partners, and did not address homosexual feelings that might not have been acted upon. After excluding twin pairs that were opposite-sex or unclear with respect to zygosity, they had 3826 pairs to work with, of which 5% of men and 8% of women reported at least one same-sex sexual encounter. Because it is suspected that the factors influencing homosexuality may differ between the sexes, males and females were treated separately. By comparing the concordance and discordance of sexual behaviors between monozygotic and dizygotic twins it should be possible to parse out the degree to which genetics and the environment contribute to sexuality.

Despite the limited materials, the authors were able to reach some conclusions, with the caveat that the 95% confidence intervals were quite wide. For instance, for males they found that genetic factors explained 39% of the observations with respect to whether a twin had any same-sex partner in his life. However, the 95% CI on this prediction was 0% to 59%.
For men, shared environmental factors appeared to contribute nothing, while unique environmental factors explained 61%. For women, it was determined that genetic factors contributed 19%, shared environment 17%, and unique environment 64%. Similar distributions were seen for comparisons of total numbers of same-sex partners. While the confidence intervals for all factors are quite large, the numbers largely agree with a previous study on Australian twins (less so with a study on American twins).

Obviously, the small sample size and broad confidence intervals on these results suggest that they should be interpreted cautiously. It should also be noted that "unique environmental factors" may run the gamut from hormone exposure in utero to childhood illness to personal experiences. Many unique environmental factors, even for twins, are just as involuntary as genetics, but some result from conscious choices of the individual (which is different from choosing to be gay). Despite their limitations, these results generally support the idea that sexual orientation results from a confluence of genetic and environmental factors.

That genetics play a role in homosexuality may seem curious, because in terms of the classic expression, "survival of the fittest", homosexuality would appear to be a non-starter. After all, a reluctance or outright inability to mate with the opposite sex would seem to result in a substantial reduction in reproductive fitness. However, contrary to what a certain ignoramus would have you believe, the Theory of Evolution has advanced substantially since the days of Darwin, and we are aware of numerous additional evolutionary mechanisms that operate alongside the law of natural selection. In the case of male homosexuality, a new paper by Camperio Ciani et al. argues that sexually antagonistic selection may be at work (2). PLoS ONE is open access, so feel free to open up the article in another window and skim it yourself.

Camperio Ciani et al.
begin with the observations that male homosexuality has a matrilineal association, and that the mothers (and maternal aunts) of homosexuals are somewhat more fecund than the population at large. From these pieces of data, and from the fact that homosexuality appears to have been present at low levels in every society that has left written records, the researchers created a set of requirements for some evolutionary simulations, based on different supposed properties of the genetic factors influencing male homosexuality (GFMH). Most of the simulations failed to satisfy the parameters. In many cases (especially with single-locus traits) the GFMH either became extinct or gained too high a frequency; in others the matrilineal association was not preserved.

Ultimately, the researchers found that the model that best fit the parameters featured two alleles (one of them X-linked), and was sexually antagonistic. What this means is that the trait increases the reproductive fitness of one sex while decreasing that of the other. For instance, a heightened sexual response to men could make women more likely to pass on their genes, while making men possessing the trait less likely to do so. Provided that the effects of this trait are balanced with respect to the population proportion of each gender, it should be possible for it to survive in a population at a relatively constant level.

This result is interesting, and provides some hypotheses that can be tested with genetics. However, it does not prove that homosexuality is genetic, or even that it has a genetic component. Like all simulations, these results merely inform us that a particular possibility is consistent with what we already know. In this case, we now know that the observed aspects of homosexuality are consistent with a 2-locus trait that is sexually antagonistic.
However, this model was arrived at simply through process of elimination, and there may be some superior model or more-accurate mechanism that simply hasn't yet been tested. There is always a model we haven't thought of; sometimes that model is the right one. Moreover, as Långström et al. note, some of the data used to determine criteria for successful simulations remain controversial. Camperio Ciani et al. convincingly show how the preservation of homosexuality through evolution could happen, but that is not the same as demonstrating how it did happen. That will require a positive identification of the actual GFMH.

The results of Långström et al. indicate that any GFMH eventually identified, whether or not they materially resemble the predictions of Camperio Ciani et al., will only give rise to a heightened propensity for homosexuality. Environmental factors play a significant, perhaps even dominant, role in determining sexual orientation. Whether genetic or environmental, most factors contributing to homosexuality are involuntary, but some are chosen. If that answer doesn't satisfy you, perhaps you were asking the wrong question.

1. Långström, N., Rahman, Q., Carlström, E., Lichtenstein, P. (2008). Genetic and Environmental Effects on Same-sex Sexual Behavior: A Population Study of Twins in Sweden. Archives of Sexual Behavior. DOI: 10.1007/s10508-008-9386-1
2. Camperio Ciani, A., Cermelli, P., Zanzotto, G., Brooks, R. (2008). Sexually Antagonistic Selection in Human Male Homosexuality. PLoS ONE, 3(6), e2282. DOI: 10.1371/journal.pone.0002282
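The variance decomposition quoted above comes from full structural-equation modeling of the twin data, but the underlying logic can be illustrated with the classical Falconer estimates, which solve the ACE model directly from twin correlations: a² = 2(rMZ − rDZ), c² = 2rDZ − rMZ, e² = 1 − rMZ. A sketch; the twin correlations below are back-calculated from the men's and women's point estimates purely for illustration, and are not numbers reported by Långström et al.:

```python
def ace_from_twin_correlations(r_mz, r_dz):
    """Falconer-style ACE decomposition from MZ and DZ twin correlations.

    Returns (a2, c2, e2): the additive-genetic, shared-environment, and
    unique-environment variance fractions. Assumes the plain ACE model
    (no dominance, no gene-environment interaction).
    """
    a2 = 2.0 * (r_mz - r_dz)   # MZ twins share all genes, DZ twins half
    c2 = 2.0 * r_dz - r_mz     # shared environment is common to both types
    e2 = 1.0 - r_mz            # whatever even MZ twins do not share
    return a2, c2, e2

# Correlations implied by the men's point estimates (A=0.39, C=0, E=0.61):
print(ace_from_twin_correlations(r_mz=0.39, r_dz=0.195))
```

Note how the wide confidence intervals follow naturally from this structure: small errors in the two correlations are doubled in the genetic estimate, which is why the men's 95% CI can stretch from 0% to 59%.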

  • June 14, 2008
  • 04:30 PM
  • 742 views

NSAIDs bind to amyloid-β

by Michael Clarkson in Conformational Flux

One of the best-known features of Alzheimer's disease pathology is the formation of proteinaceous amyloid plaques in the brain. In Alzheimer's disease these plaques are primarily formed by the amyloid-β peptide (Aβ), derived from the amyloid precursor protein (APP) by the action of β- and γ-secretase. The length of the Aβ peptide varies, but the 42-residue form (Aβ42) is more likely to form plaques and fibrils. Although it remains uncertain whether plaques are a cause of Alzheimer's disease symptoms, or merely an effect of some underlying derangement, finding some way to prevent or reduce plaque formation is a major goal in the field. This week in Nature, a team of researchers from institutions all over the US and Europe show that non-steroidal anti-inflammatory drugs (NSAIDs) may be able to accomplish these goals by binding to APP and Aβ directly.

Previous research from the Koo lab indicated that some NSAIDs specifically reduced the production of the amyloidogenic Aβ42 fragment (1), both in cultured cells and in a mouse model of the disease. APP was still processed into peptides, but these were shorter and less likely to form amyloid plaques than Aβ42. Significantly, the cleavage of other γ-secretase targets was not affected, meaning that side effects of NSAID treatment might be minimal. Although NSAIDs were expected to ameliorate Alzheimer's symptoms by reducing inflammation, Weggen et al. found that the beneficial effects were not the result of cyclooxygenase inhibition. In a follow-up paper (2), Weggen et al. used experiments on cultured cells to show that the drugs were directly modulating γ-secretase activity. These experiments also showed that mutations to presenilin-1, a core component of the γ-secretase complex, could either increase or decrease the effect of NSAIDs, suggesting that it was the protein directly affected by these drugs.

Kukar et al. set out to test this hypothesis using photoaffinity labeling.
They took a few compounds known to alter Aβ42 levels and added a functional group that would react with a protein in the presence of UV light. These covalently-labeled proteins could then be detected, and this would serve as a relatively easy way to determine which component of the γ-secretase complex was actually binding NSAIDs. Like many cleverly-designed experiments, this failed in an interesting way: no known components of the γ-secretase complex were labeled. Fortunately, the researchers realized that there was another component of the complex they hadn't tested yet: the substrate.

It turned out that the NSAIDs could label a 99-residue fragment of APP. Moreover, this labeling was reduced by other γ-secretase modulators (GSMs) and unaffected by non-GSM NSAIDs. Using a series of progressively shorter constructs, Kukar et al. localized the binding activity of GSMs to residues 28-36 of amyloid-β.

This on its own is a very useful finding, because it provides a target for refinement of these compounds. Knowing where and to what protein a possible drug binds makes it easier to develop assays to test new potential drugs, as well as enabling structure-based design. However, the authors took the next step and asked whether these drugs, because they bind to a region of APP known to be involved in the formation of amyloid plaques, might inhibit plaque formation directly. In cultured cells, they found that treatment with certain substrate-targeting GSMs decreased the formation of Aβ dimers and trimers even under conditions where the overall concentration of Aβ42 was not altered.

This suggests that these GSMs may be able to fight the buildup of amyloid plaques in two ways. By altering where γ-secretase cleaves APP, they reduce the concentration of Aβ42. Moreover, by interfering with Aβ oligomerization they fight the formation of plaques directly. With luck, further work in medicinal chemistry will arrive at compounds that enhance both these activities.
The development of compounds that significantly reduce or prevent the formation of amyloid plaques will be a great step forward for Alzheimer's research. Even if such drugs do not prove to be a cure, a clear indication that plaques don't cause Alzheimer's would be a critical insight.

I want to emphasize that although these results are quite promising, they do not prove the efficacy of NSAIDs in ameliorating actual Alzheimer's symptoms. Transforming these findings into a cure, or even an effective treatment, will require a great deal of additional research, if it is even possible. You should not attempt to treat Alzheimer's with NSAIDs, or begin a regimen of NSAIDs or any other kind of drug or supplement, unless you have first discussed the possible risks and benefits with your doctor. And no, Minnesota, I do not mean a naturopath.

1. Weggen, S., Eriksen, J.L., Das, P., Sagi, S.A., Wang, R., Pietrzik, C.U., Findlay, K.A., Smith, T.E., Murphy, M.P., Bulter, T., Kang, D.E., Marquez-Sterling, N., Golde, T.E., Koo, E.H. (2001). A subset of NSAIDs lower amyloidogenic Aβ42 independently of cyclooxygenase activity. Nature, 414(6860), 212-216. DOI: 10.1038/35102591

2. Weggen, S. (2003). Evidence That Nonsteroidal Anti-inflammatory Drugs Decrease Amyloid β42 Production by Direct Modulation of γ-Secretase Activity. Journal of Biological Chemistry, 278(34), 31831-31837. DOI: 10.1074/jbc.M303592200 OPEN ACCESS

3. Kukar, T.L., Ladd, T.B., Bann, M.A., Fraering, P.C., Narlawar, R., Maharvi, G.M., Healy, B., Chapman, R., Welzel, A.T., Price, R.W., Moore, B., Rangachari, V., Cusack, B., Eriksen, J., Jansen-West, K., Verbeeck, C., Yager, D., Eckman, C., Ye, W., Sagi, S., Cottrell, B.A., Torpey, J., Rosenberry, T.L., Fauq, A., Wolfe, M.S., Schmidt, B., Walsh, D.M., Koo, E.H., Golde, T.E. (2008). Substrate-targeting γ-secretase modulators. Nature, 453(7197), 925-929. DOI: 10.1038/nature07055

