illuminated-shrooms

IMAGINE THAT: BIOLUMINESCENCE IN MEDICAL IMAGING

Modern medicine wouldn’t have gotten as far as it has without the development of medical imaging. This technology is used every day in the diagnosis of illnesses, cancers, and other diseases for millions of patients around the world, and it gives us the amazing ability to detect and study diseases as they progress, right down to the molecular level. It even helps to save lives.

So how can we make this technology better, cheaper, and more accessible? Bioluminescent imaging might just be the answer we’re looking for.

The birth of the GFP:

Using molecules that have a natural “glow” has been a staple of biological studies for decades. It all began in the 1960s, when a marine biologist discovered a natural fluorescent protein in the crystal jellyfish Aequorea victoria. With its fluorescent properties, this protein was able to absorb ultraviolet light and, in turn, release its own light as a green glow. It wasn’t long before researchers discovered that this protein could be attached to cellular components, such as proteins and DNA, and used to observe cellular processes at a molecular level.

This marked the discovery of the famous Green Fluorescent Protein (GFP), a protein that has become a key asset in the study of biological systems, even to this day.

Figure 1: Structure and activation of the Green Fluorescent Protein.

This was only the beginning of “harnessing nature’s glow” to help further biological and medicinal research.

The power of medical imaging:

In order to begin looking at the use of bioluminescence in medical imaging, we first need to understand what medical imaging is. Medical imaging is the use of non-invasive technology to produce images of anatomical structures that are usually hidden from our sight. Imaging can be used to observe internal systems and tissues in real time and to detect abnormalities within these systems, allowing for fast and easy diagnosis of disease.

Over the years there have been many different methods for observing internal processes. Some of the more well-known imaging methods include X-ray radiography, computed tomography (CT) scans, magnetic resonance imaging (MRI), and ultrasound, to name a few. Although these imaging methods are non-invasive procedures, many of them are too expensive for routine use and some can be potentially harmful to patients (for example, through X-ray radiation).

Fluorescent vs bioluminescent imaging: what’s the difference?

Another widely used imaging method is fluorescent imaging, an examination technique that relies on compounds called fluorophores. These compounds are capable of emitting light after being excited by an external light source. When a fluorophore absorbs this light it enters an unstable excited state; as it settles into the lowest level of that excited state, some of the absorbed energy is released as heat. When the molecule then relaxes back down to the ground state, the remaining energy is re-released as a photon of light.

Figure 2: Fluorescence at its finest. This figure shows the energy levels involved in light release of a fluorophore.  A is the energy from the external light source, B is the energy lost through heat, and C is the light energy released by the fluorophore. The E’s represent the different excitation states, with E0 being the ground state.
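The energy bookkeeping in the figure can be made concrete with a small sketch. Because some of the absorbed energy A is lost as heat B, the emitted light C always carries less energy, and therefore has a longer wavelength, than the excitation light. The excitation wavelength and heat-loss fraction below are illustrative assumptions (loosely inspired by GFP), not measured values.

```python
# Illustrative Stokes-shift calculation: the emitted photon (C) carries
# less energy than the absorbed one (A) because a fraction is lost as
# heat (B), so the emission wavelength is longer than the excitation one.
H = 6.626e-34   # Planck's constant (J*s)
C = 2.998e8     # speed of light (m/s)

def emission_wavelength_nm(excitation_nm, heat_loss_fraction):
    """Wavelength of the emitted photon after a fraction of the
    absorbed energy is dissipated as heat."""
    e_absorbed = H * C / (excitation_nm * 1e-9)          # energy A
    e_emitted = e_absorbed * (1.0 - heat_loss_fraction)  # A minus B
    return H * C / e_emitted * 1e9

# Assumed GFP-like numbers: excite at ~395 nm, lose ~22% of the energy as heat
print(round(emission_wavelength_nm(395, 0.22)))  # 506 -> a green glow
```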

Different fluorescent stains and markers can be used to tag molecules within cells for observational studies. One of the more commonly used fluorescent markers is the famous GFP that I mentioned earlier.

Unfortunately, fluorescent imaging has many limitations. The first is that fluorophores lose their ability to give off light over time. This is due to an event called photobleaching: damage that accumulates within the fluorophore from the constant excitation of its electrons, which causes the fluorophore to slowly break down and reduces the time available for observation.

Although fluorescent imaging allows us to observe living cells in their natural habitat, it also leaves our cells open to phototoxicity. This is a toxic response that occurs when light from the external source (needed to excite the fluorophore) comes into contact with our cells. Phototoxicity can be compared to UV damage from the sun’s rays. (And we definitely know that UV damage is not good for our skin!) Fluorophores are also known to generate reactive chemical species, which can enhance this phototoxic effect.

To bypass the downfalls of fluorescent imaging, researchers have been looking into the possibilities of using bioluminescent imaging instead.

The Bioluminescent difference:

Bioluminescent imaging (BLI) is a sensitive and relatively new tool used to detect light given off by bioluminescently tagged cells and tissues in order to study in vivo processes. Unlike fluorescent imaging, BLI does not need an external light source, since the tagged cells produce their own light.

BLI works by adding bioluminescent reporter proteins to the tissues, cells, or molecules being studied. These natural reporter molecules are derived from bioluminescent organisms such as fireflies (firefly luciferase, which uses the substrate D-luciferin), cnidarians (coelenterazine and Renilla luciferase), and bacteria (the lux operon). These bioluminescent markers can be added to cells in different ways, for example by using modified DNA (as in transgenic mice) or by using antibodies against antigens specific to the target cells to attach the reporter molecules.

The light produced through BLI is too faint to be seen with the naked eye, so imaging technology such as charge-coupled device (CCD) cameras is used to detect it through the tissue. A CCD camera works by converting incoming photons into electrical charges, which can then be read out and converted into an image.
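The photon-to-image idea behind a CCD can be sketched in a few lines. Each pixel accumulates charge in proportion to the photons that strike it, and the charge map is then scaled into image intensities. The photon counts and the quantum-efficiency figure here are randomly generated stand-ins, not data from a real detector.

```python
import random

# Minimal sketch of the CCD idea: photons hitting each pixel are
# converted to electrical charge, and the charge map is read out
# and scaled into 8-bit image intensities.
random.seed(0)
WIDTH, HEIGHT = 4, 4
QUANTUM_EFFICIENCY = 0.9   # fraction of photons converted to charge (assumed)

# Fake exposure: random photon counts per pixel
photons = [[random.randint(1, 200) for _ in range(WIDTH)] for _ in range(HEIGHT)]

# Each pixel's charge is proportional to the photons it collected
charge = [[p * QUANTUM_EFFICIENCY for p in row] for row in photons]

# Read-out: normalize the charge map into 8-bit pixel values
peak = max(max(row) for row in charge)
image = [[round(255 * c / peak) for c in row] for row in charge]

for row in image:
    print(row)
```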

Figure 3: An example of how BLI works. 1. Bioluminescent reporter tags are added to the cell to be studied, in this case cancer cells. 2. Luciferin is then added via injection to activate the luminescent enzymes and produce light. 3. The light from the reporter molecules is then visualized using CCD cameras allowing for observation of the cancer cells.

Although BLI has so far only been used in small-animal in vivo studies, it shows a lot of promise of someday becoming a mainstream imaging technique.

The main problem with BLI is that it depends on the presence of oxygen, as well as the addition of luciferin, in order to work. It is also not yet powerful enough for use in larger animals, mostly because BLI signals do not travel very far through tissue, so visualization is restricted to a depth of only a few centimeters.

But besides these few issues, there are plenty of reasons to use bioluminescent imaging!

The benefits of BLI start with the fact that it does not need an external light source, so the cells involved in the screening are not exposed to any phototoxicity and remain damage free. BLI is also much cheaper than other imaging methods (for example, X-ray machinery), it is non-invasive, and it is an easy way to visualize a variety of in vivo cellular events as they happen in real time. BLI can be used to study and diagnose a very wide range of molecular functions and diseases, including gene function, cell trafficking, tumor development in cancers, disease progression, protein-protein interactions, and the progression of bacterial and viral infections. It can also be used to test the effectiveness of newly developed drugs and antibiotics.

Future Prospects for this technology:

So what can we do with this technology?

The medical potential of bioluminescent imaging is enormous. With this technology we might someday be able to map brain activity at a new level of detail, or track electrical impulses as they translate into muscular actions. It could serve as an alternative process for diagnosing diseases, replacing radioactive or otherwise harmful techniques, or it could be merged with other imaging techniques to look into deeper tissues. BLI could also aid drug discovery in the pharmaceutical industry by acting as a visual readout of a drug’s effectiveness.

And with millions of blood tests, scans, and operations being performed every day, bioluminescent imaging could revolutionize the way we see things, both in medicine and in science.

Since this technology is still developing, there is still a way to go before it’s ready for human use, but who knows where it could take us next? Bioluminescent imaging might someday even become as famous as the GFP.

-Lauren

Additional readings and references:

http://www.the-scientist.com/?articles.view/articleNo/41699/title/Picturing-Infection/

http://www.biolume.net/tumorlight.htm

http://www.theguardian.com/science/2014/apr/01/neurobiology-atlantic-ocean-bioluminescent-medical-imaging

Papers on bioluminescent imaging:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2713342/pdf/PROCATS26537.pdf

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3274065/pdf/sensors-11-00180.pdf

chrisfetter-blog

Investigating the Fundamental Storage Unit of Memory: Connecting to Epigenetics, Microtubules and Consciousness

So all through the term I have been busy describing how memory is thought to work via synaptic plasticity. We’ve had some fun with neurons and sea slugs! However, this has been frustrating for me since when I started this blog I really wanted to discuss new theories about how memory is stored biologically. Finally, I’ve covered enough background to do it. So here we go!!!

So first off: although synaptic alteration is definitely involved in expressing memory, there is increasing evidence that more than synaptic alteration is required to encode memory. Neurons are dynamic; synaptic plasticity, the formation and movement of synapses, occurs throughout your lifetime. Learning is a well-documented form of synaptic plasticity, and synapses form and strengthen with learning. Until recently, these synapses were thought to be static, or else memory would be lost. However, recent research suggests that once formed, these memory-derived synapses remain dynamic (Glanzman, et al., 2014). So if the synapses change, why not the memories? In a paper published in November 2014, Glanzman et al. showed that the overall connection, and therefore its effect, remains constant between memory-encoding Aplysia neurons in vitro, while the specific synapses forming that connection vary with time (Fig. 1). If synapses can change without any memory loss occurring, something else must be recording their locations and relative strengths.

Also, unicellular organisms can learn to move through mazes and to escape faster from traps they have encountered before (Fernando, et al., 2009) … they seem to remember them. Bear in mind, unicellular organisms don’t have any synapses. So what else could be involved?

Glanzman et al. suggested that epigenetics is involved in at least one component of memory encoding (Glanzman, et al., 2014). They found that inhibiting histone deacetylases increases the ability of Aplysia neurons to form memory-related synaptic connections (long-term potentiation, LTP). With the addition of histone deacetylase inhibitors, the sea slugs learned faster. Therefore, acetylation may be involved in memory. The authors concede, though, that for epigenetics to replace the static-synapse theory of memory, one needs to solve a “Vector Problem.”

Figure 1. Different individual synapses connect a pre- and postsynaptic neuron at different times, but overall connectivity is maintained.

Remember that a vector has two characteristics: magnitude and direction. The “Vector Problem” here is that current epigenetic models can only account for one of these. There are epigenetic models that could record how many connections a neuron has (magnitude), but none that tell you where they are (direction). If we can’t record where the connections are, then when the initially formed memory-related synapses disappear, miswiring of neurons is inevitable. Memories might only last a couple of days at most, at least if human synapses change the same way Aplysia synapses do (Glanzman, et al., 2014).

It is vital that the precise number, location, and relative efficiency (or strength) of learning-induced synapses are encoded somehow within neurons for memory to work. If current epigenetic models don’t have a way to record the precise location of synapses, how could this missing half of the “Vector Problem” be solved?

One approach might be to look at an example where memory is not working well: for instance, in Alzheimer’s disease. One hint may be related to what seems to be an “organizing process” within neurons, the way the protein tau helps structure a neuron’s cytoskeleton. It does this by binding microtubules, both “directing traffic” on a microtubule and making connections with adjacent ones. When tau is acting properly, memory seems to work. But in Alzheimer’s disease, neurons often form “neurofibrillary tangles” made of misplaced clumps of tau (Hameroff & Penrose, 2014). So, if malfunctioning tau correlates with memory problems … could properly functioning tau be a crucial part of the molecular basis of memory? It could be a piece of the solution to the Vector Problem; it could also be, however, that the key is the microtubules themselves, with information perhaps stored on or through them. While more research into the tau aspect would be fruitful and is being explored (Brunden, Trojanowski, & Lee, 2009), for now let’s shift and look more closely at the microtubules themselves, for another perspective.

Currently, memory in computational systems uses “bits”: binary states of 0s and 1s that store information. Craddock et al. have proposed that different phosphorylation states along microtubules might encode memory in a somewhat similar way (Craddock, Tuszynski, & Hameroff, 2012). Encoding memory as strings of 0s and 1s based on phosphorylation states along microtubules could capture many characteristics of a system, including the relative number and strength of synapses. Also, microtubules are already present in the cytosol and in dendritic spines. This gives two possible ways the Vector Problem might be solved: the information could be encoded in “0s and 1s”-type phosphorylation states, or, since microtubules are located in axons and dendritic spines (where memory synapses form), encoding the connections directly on them might solve the spatial parameter at the outset.
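The bits analogy can be made concrete with a toy sketch: treat each tubulin along a microtubule as one bit, phosphorylated = 1 and bare = 0, and read the pattern back as stored values. The encoding scheme below (8 tubulins per value, and the idea of storing synapse strengths as small integers) is invented for illustration; it is not Craddock et al.’s actual model.

```python
# Toy "phosphorylation lattice": each entry is one tubulin,
# 1 = phosphorylated, 0 = unphosphorylated.

def encode(values, bits_per_value=8):
    """Write small integers (e.g. relative synapse strengths) as a
    phosphorylation pattern along a microtubule lattice."""
    lattice = []
    for v in values:
        for i in reversed(range(bits_per_value)):
            lattice.append((v >> i) & 1)   # most significant bit first
    return lattice

def decode(lattice, bits_per_value=8):
    """Read the phosphorylation pattern back into integers."""
    values = []
    for start in range(0, len(lattice), bits_per_value):
        v = 0
        for bit in lattice[start:start + bits_per_value]:
            v = (v << 1) | bit
        values.append(v)
    return values

strengths = [3, 17, 42]                    # pretend synapse strengths
assert decode(encode(strengths)) == strengths
print(encode([3]))  # [0, 0, 0, 0, 0, 0, 1, 1]
```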

The enzyme proposed by Craddock et al. to act as the kinase that punches these phosphate 0s and 1s onto microtubules is calmodulin-dependent kinase II (CaMKII). This enzyme has twelve kinase domains arranged in two hexagons on opposing faces (Fig. 2). Molecular modelling shows significant binding affinity between the active form of CaMKII and microtubules. CaMKII is also activated by synaptic signaling and is known to bind NMDA receptors, which are involved in memory (for a brief review of NMDA receptors and their role in memory, see the α-7-receptor post) (Lisman & Raghavachari, 2014). Activation at the right time, by the right stimulus, while already connected to the right pathway may lead CaMKII to act on these new substrates (the microtubules) and thereby store memory.

Figure 2. CaMKII kinase. Top: schematics of inactive and active conformers; blue = scaffold, light and dark grey = kinase domains, brown = linker regions. Bottom: axial and side views in more detail, showing the calmodulin-regulated domains (orange). Modified from Craddock et al.

Interestingly, research into the biological basis of consciousness suggests that microtubules may play a role there too (Hameroff & Penrose, 2014). This links to the theory described above, because the efficiency with which we acquire memory of events and facts (explicit memory) is intimately tied to conscious perception (Kandel, Dudai, & Mayford, 2014), so a link to consciousness seems to support the theory of neural microtubule involvement in memory. What Hameroff et al. argue is that consciousness is a product of quantum events.

A basic tenet of quantum theory is that reality can be represented both as a “particle” and as a “wave.” When reality is manifesting as a wave, different probabilities are assigned to characteristics of the system; the wave has many “peaks,” each of which corresponds to one of those probabilities. This means that as a wave, an object’s location is represented by multiple probabilities rather than a single fact, and the object is in many places at the same time. This is called quantum superposition. However, when you attempt to measure a system displaying quantum superposition, the wavefunction collapses and the object is in one place. This is called collapse of the wavefunction, the “Measurement Problem,” or reduction in quantum mechanics.

Hameroff’s theory of consciousness builds on a recently proposed extension of quantum theory describing the collapse of quantum superposition as a natural process named Objective Reduction. On the one hand, it argues that the persistence of quantum superposition is inversely proportional to the mass of the system, so everything big enough to see is only ever in one place at one time. On the other hand, very tiny things (e.g. things more than small enough to fit inside microtubules) experience superposition frequently.

Hameroff et al. propose that the collapse of some of these quantum wavefunctions occurring within microtubules constitutes proto-conscious moments. When enough proto-conscious moments occur and are organized correctly, they combine to generate full conscious awareness. The writers argue that neural microtubules provide a unique framework that could sustain quantum superposition within the tube cavity and organize proto-conscious moments into consciousness. In particular, coupled magnetic or London dipole states of aromatic amino acid residues within microtubule cavities are cited as the units able to be in quantum superposition (field up and down at the same time); when the superposition is reduced to one state by Objective Reduction, a proto-conscious moment is achieved. These proto-conscious moments interact with other such reductions along the microtubule, which then interact with microtubules in other neurons through gap junctions to provide consciousness. For more, read the paper (Hameroff & Penrose, 2014) or just search Hameroff on YouTube; it’s pretty much everywhere…

As for the connection to memory: phosphorylation of tubulins in microtubules would affect how the dipoles of that tubulin’s aromatic amino acids reduce from quantum superposition, providing a molecular bridge between consciousness and memory. This would parallel the behavioral bridge already known to exist (Kandel, Dudai, & Mayford, 2014).

Despite all these exciting theories, some aspects of memory encoding on microtubules still need experimental evidence. Genetic information persists because numerous mechanisms exist for DNA repair and maintenance; I would need to see evidence of similar maintenance pathways if covalent modification of microtubules is to represent memories that last years. Also, as previously stated, the whole theory of memory encoding on microtubules requires ordered phosphorylation of microtubules by CaMKII, a kinase that has not yet been experimentally shown to phosphorylate microtubules. As to the quantum stuff… I don’t know…

But to bring it all together: memory-related synapses aren’t static, and epigenetics only covers half the ground needed for memory. Microtubules could cover it all, but more research is needed. That’s what science is for, right?

Thanks for Reading!

Chris

References  

Brunden, K., Trojanowski, J., & Lee, V. (2009). Advances in tau-focused drug discovery for Alzheimer’s disease and related tauopathies. Nature Reviews Drug Discovery, 783-793.

Craddock, T. J., Tuszynski, J. A., & Hameroff, S. (2012). Cytoskeletal signaling: Is memory encoded in microtubule lattices by CaMKII phosphorylation? PLoS Computational Biology, 8(3), e1002421. doi:10.1371/journal.pcbi.1002421

Fernando, C. T., Liekens, A. M., Bingle, L. E., Beck, C., Lenser, T., & Stekel, D. J. (2009). Molecular circuits for associative learning in single-celled organisms. Interface.

Glanzman, D. L., Roberts, A. C., Sun, P. Y.-W., Pearce, K., Cai, D., & Shanping, C. (2014). Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia. eLife. doi:10.7554/eLife.03896.001

Hameroff, S., & Penrose, R. (2014). Consciousness in the universe: A review of the ‘Orch OR’ theory. Physics of Life Reviews, 39-78.

Kandel, E. R., Dudai, Y., & Mayford, M. R. (2014). The molecular and systems biology of memory. Cell, 163-186.

Lisman, J., & Raghavachari, S. (2014). Biochemical principles underlying the stable maintenance of LTP by the CaMKII/NMDAR complex. Brain Research.

jennaetsell

Moving Towards Enlightenment

Right about now I imagine you’re sitting at a desk scrolling through this post on your laptop. You absentmindedly grab your coffee and take a swig, then quickly pick up your phone to check your messages. No big deal, right? Well, imagine trying to do these things when your hand moves and twitches when you don’t want it to. This is the constant struggle that people with movement disorders face, even for the simplest tasks. On top of the fact that motor control diseases are so debilitating, they are also very difficult to study, since they involve very complex combinations of neural pathways.

Optogenetics is an emerging field in the study of neurological disorders that involves genetically expressing light-activated ion channels on specific populations of neurons and then using light to modulate their activity. This method of studying the nervous system has expanded our ability to study the functional connectivity of neurons in different regions of the brain, and the interaction between groups of neurons during movement, cognition, object perception, and audio-visual interactions. There are a number of debilitating diseases affecting motor function that researchers are using optogenetic models to study, including the four covered in the study reviewed here: Parkinson’s disease, essential tremor, dystonia, and obsessive-compulsive disorder.

It has proven very challenging to understand the neural pathways involved in motor control diseases because they are so complex and involve more than one type of neuron or pathway. Damage to the basal ganglia (Figure 1) has been associated with two types of movement deficits: dyskinesias and akinesias. Dyskinesias are abnormal, involuntary movements, such as those seen in Parkinson’s disease and essential tremor. Patients with Parkinson’s disease experience resting tremors, which occur in muscles at rest; engaging the affected muscles can stop this type of tremor. Akinesias are abnormal, involuntary postures and include dystonia. Patients with dystonia are unable to maintain a normal posture because agonist and antagonist muscles contract involuntarily, producing rigidity.


Figure 1. Diagram of the brain structures thought to be involved in movement disorders. Impaired activity of the basal ganglia has previously been identified as a factor in several motor control diseases. The basal ganglia are composed of five subcortical nuclei, including the subthalamic nucleus.

To elucidate the populations of neurons involved in these movement deficits, an optogenetic model was created which takes advantage of retinal, a photo-isomerizable compound naturally found in the bodies of most mammals. A Chlamydomonas gene for channelrhodopsin is delivered to the neuron, often by viral transduction. Channelrhodopsin is a light-gated ion channel naturally expressed in Chlamydomonas, and its function is what the man-made photoswitches (previously discussed here) were created to mimic. To deliver the gene to specific neurons, it is packaged into a viral capsid together with a promoter that targets the specific neuronal population. Successful delivery results in expression of a protein that forms an ion channel able to bind the light-sensing molecule retinal; an opsin with retinal incorporated is termed a rhodopsin. Channelrhodopsin-2 is an excitatory rhodopsin: in the presence of 470 nm light, a conformational change occurs from trans-retinal to cis-retinal (Figure 2). This conformational change is conferred to the rhodopsin, and cations are able to cross the channel and enter the cell (Figure 3). This depolarizes the cell membrane and creates an action potential. There are also inhibitory rhodopsins (e.g. halorhodopsin) that cause an influx of anions and hyperpolarization, which blocks action potentials.


Figure 2. The molecular structure of rhodopsin. Retinal is converted from the trans conformation to the cis conformation when excited by 470 nm light. Retinal bound to an opsin is termed a rhodopsin.

Optogenetics has been successfully used to study neural circuitry in C. elegans, Drosophila, birds, rodents, and primates. Light is most commonly delivered by fibre optics, but there is speculation that light-emitting diodes would be better: fibre-optic lasers pass light from an external source into the brain, whereas light-emitting diodes generate light locally. Another benefit of light-emitting diodes is that they produce little heat, reducing the problem of overheating in the brain, and they consume less energy than lasers.


Figure 3. Mechanism of depolarization by Channelrhodopsin-2. Conformational changes induced by 470 nm light cause the channelrhodopsin-gated channel to open, allowing cations to cross the channel and enter the neuron, resulting in depolarization.
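The light-to-spike mechanism can be caricatured with a tiny integrate-and-fire-style model: while the (hypothetical) blue light is on, cation influx through the open channel pushes the membrane potential up, and crossing threshold counts as an action potential. Every number here (resting potential aside) is an invented illustration, not a measured channelrhodopsin parameter.

```python
# Cartoon of light-gated depolarization. Units are millivolts and
# milliseconds; influx and leak rates are made-up illustrative values.
REST_MV, THRESHOLD_MV = -70.0, -55.0
INFLUX_MV_PER_MS = 3.0   # depolarization from cation influx while lit (assumed)
LEAK_PER_MS = 0.1        # fractional drift back toward rest per ms (assumed)

def simulate(light_on_ms, total_ms=20):
    """Count action potentials during a run with the light on
    for the first `light_on_ms` milliseconds."""
    v, spikes = REST_MV, 0
    for t in range(total_ms):
        if t < light_on_ms:
            v += INFLUX_MV_PER_MS          # channel open: cations enter
        v -= LEAK_PER_MS * (v - REST_MV)   # passive leak toward rest
        if v >= THRESHOLD_MV:
            spikes += 1                    # action potential fired
            v = REST_MV                    # reset after the spike
    return spikes

print(simulate(light_on_ms=0))   # 0: no light, membrane stays at rest
print(simulate(light_on_ms=15))  # 1: illumination drives the cell past threshold
```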

The basal ganglia are known to be involved in movement disorders (Figure 1). Many neural fibres from other parts of the brain pass through this region, so the result is a densely packed, complex network of neurons that is hard to study. The basal ganglia are thought to play a role in the initiation and termination of movements, which is often disrupted in movement disorders. As well, two pathways known as the striatonigral (direct) and striatopallidal (indirect) pathways are responsible for regulating motor control, so they have been of interest when looking at motor control disorders. However, their projections are intermingled in the network of neurons in the basal ganglia.

This is where optogenetics comes into play.

The long-held view was that the direct pathway potentiated movement and the indirect pathway inhibited it. However, using optogenetic control of neural activity, researchers showed that activation of either pathway both excited and inhibited basal ganglia output. This proved that the system is much more complex than the existing theory suggested, and that movement involves a combination of the direct and indirect pathways.

Deep brain stimulation (DBS) is currently used to effectively treat the symptoms of Parkinson’s disease and essential tremor. However, most neuroscientists have no idea how it alleviates symptoms. The subthalamic nucleus (STN) is one of the five subcortical nuclei that make up the basal ganglia, and it was hypothesized that DBS might be altering activity in the STN, resulting in alleviation of tremor symptoms. Researchers used optogenetics to alter activity in the STN, but found that neither inactivating nor stimulating the STN relieved symptoms. Optogenetic stimulation of afferent projections into the STN, however, did reduce symptoms. This evidence suggests that DBS may be altering the activity of afferent projections, and not the STN itself as previously thought.

Although DBS is used to treat dyskinesias, it is ineffective for treating dystonia and OCD. Hopefully, an increased understanding of the direct and indirect pathways through the basal ganglia, and of the effects of altering the activity of different neurons, will allow us to develop treatments for a wider range of movement disorders and improve current therapies for Parkinson’s disease and essential tremor.

Jenna 


References

Rossi MA, Calakos N, Yin HH. 2015. Spotlight on Movement Disorders: What Optogenetics has to Offer. Movement Disorders 00(00):1-8. DOI: 10.1002/mds.26184

http://neuroscience.uth.tmc.edu/s3/chapter06.html

kristindoiron-blog

Slow and Steady Wins the Race

You’ve all heard that children’s story, right? About the tortoise and the hare? For those of you that don’t know, the moral of the story is that slow and steady wins the race, and according to studies done on zebrafish, this may also be the case for sperm.

So far, my blog has centered on the epigenetic effects that a mother has on her fetus during the nine months in the womb, but what about the paternal side of things? It turns out that gametes can have more than just a genetic effect; in fact, methylation patterns in sperm remain intact from fertilization through to early embryogenesis. The embryo even develops using the sperm methylome as a building block (Figure 1).


Figure 1. Zebrafish oocyte being fertilized, followed by a 16 cell embryo carrying DNA with both sperm and oocyte methylome, where the oocyte methylome adapts to the sperm methylome by the mid-blastula stage. Adapted from Jiang et al., 2013.

Prior to this research, it was thought that both the sperm and oocyte methylomes underwent demethylation following fertilization. Instead, it turns out that only the oocyte methylome undergoes demethylation: the fertilized egg takes on the methylome of the sperm by the mid-blastula stage, and this is maintained through early embryogenesis.

This study was based on the methylation of CpG sites, regions of DNA where a cytosine and an adjacent guanine are separated by one phosphate (Figure 2). It was found that approximately 14% of CpG sites are differentially methylated between the sperm and the oocyte, and that CpG island (CGI) methylation was reprogrammed to the sperm pattern by the mid-blastula stage. After the mid-blastula stage the methylome begins to diverge from the sperm methylome, but it is still built off of it. The somatic cell methylome is likewise built off of the sperm methylome, making it highly congruent with the paternal gamete.

image

Figure 2. Methylated CpG site, where a guanine nucleotide and a cytosine nucleotide are separated by one phosphate and the cytosine carries the methyl group.
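The “differentially methylated” comparison above is easy to picture as two bit vectors, one per gamete, with a 1 at each methylated CpG site. The patterns below are made up for illustration; the study’s ~14% figure comes from genome-wide sequencing, not ten sites.

```python
# Toy methylome comparison: each CpG site is 1 (methylated) or
# 0 (unmethylated); the fraction of sites that differ between the
# sperm and oocyte patterns is the differentially methylated fraction.
def differential_fraction(methylome_a, methylome_b):
    assert len(methylome_a) == len(methylome_b)
    diffs = sum(a != b for a, b in zip(methylome_a, methylome_b))
    return diffs / len(methylome_a)

sperm  = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # invented pattern
oocyte = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0]   # invented pattern

print(differential_fraction(sperm, oocyte))  # 0.2 -> 20% of sites differ
```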

Unlike in mammals, there is no whole-genome DNA demethylation following fertilization in zebrafish; when the sperm and mid-blastula-stage methylomes were compared, there was only a 0.38% difference in the methylation pattern. The maternal DNA is passively demethylated before reprogramming to the sperm methylome, and the inheritance of the sperm methylome then facilitates the epigenetic regulation of embryogenesis.

In another study the inheritance of paternal epigenetics was studied through observational studies. Fish were either in a high sperm competition or a low sperm competition environment, where there was a 2:1 male to female ratio or a 2:1 female to male ratio, respectively. The fish then went through two spermatogenic cycles and sperm was taken from the fish at each one. It turns out that males in a high-competition environment produced faster, more motile sperm. Researchers also found that embryos from eggs fertilized by the high-competition sperm actually hatch faster than embryos from low-competition sperm. This may be due to increased metabolism caused by the inherited epigenetics of the sperm; however, on the other hand, these embryos also had a lower survival rate than the ones from the low sperm competition environment.

In other species it has been shown that offspring from high sperm competition environments are more successful in terms of fitness. This supports the good sperm hypothesis, which states that females will mate with particular males whose traits indicate good sperm, leading to the fostering of fit offspring. Two ways that paternal epigenetics can be passed on are through mRNAs transferred from the sperm to the zygote and through the marking of histones, and these epigenetic changes can be driven by hormone levels. Another hypothesis to explain the lower survival rate of embryos from high-competition environments is that such environments may cause a high level of stress in the males, which may alter their epigenetics and, in turn, be passed on to the offspring.

Now, if you’ll remember back to my first post, The inheritance of stress through microRNA expression: are we perpetuating a vicious cycle?, I discussed stress being passed down in the form of epigenetics from the maternal side. This information about stress and sperm is interesting because, if it holds, embryos may receive stress hormones or be predisposed to certain conditions by both their mother and their father. If stress can contribute to diseases such as obesity, heart disease, and diabetes, then this could have a compounding effect, with very negative consequences for the embryo.

In evolutionary terms this is interesting because we usually think of females wanting to mate with the fittest of males; however, these males may not actually be the ones to pass on the most genes.  Females may be avoiding mating with males that are in a stressful or higher competition situation.

In conclusion, these studies have shown that epigenetic marks during early embryogenesis don’t come from only one side. In zebrafish, both parental gametes are responsible for inducing epigenetic changes, and according to this research, the paternal methylome may be more important than originally thought.

So, while the good sperm hypothesis states that the fastest sperm is the best sperm, it looks like it may just be that slow and steady wins the race.

Kristin

_______________________________________________________________________

References:

Zajitschek S, Hotzy C, Zajitschek F, Immler S. Short-term variation in sperm competition causes sperm-mediated epigenetic effects on early offspring performance in the zebrafish. Proc R Soc B. 2014. DOI: 10.1098/rspb.2014.0422

http://www.sciencedirect.com/science/article/pii/S0092867413005175

B4272
junevickieee

Trans Day of Visibility

Although this blog is dedicated to a university course that I’m taking, after seeing various posts I’ve decided to wish everyone a happy Trans Day of Visibility! I can testify that being able to be yourself is the most rewarding feeling, & I’m very thankful for the loving supportive people in my life. You owe it to yourself to be yourself. Stay strong & love you all! (she/her)

-June

transdayofvisibility tdov iamawomen
kierstenamos-blog

I’ve Got Chills, They’re Multiplyin’…

Have you ever listened to a song, and felt goosebumps when it got to a really good part? Goosebumps are a quirky phenomenon, and can happen for different reasons, like fear, cold temperatures, or admiration. They can also happen when you are listening to pleasurable music! Goosebumps are evolutionarily useful, since they cause the hair on the skin to rise, providing a thicker layer of insulation for the body. But why do we get goosebumps in response to music? It turns out that the answer may be in the release of dopamine!

 It has been known for a while that dopamine is released in response to pleasurable stimuli, including music. Dopamine is a neurotransmitter that is majorly involved in reward behavior systems in the brain. The release and localization of dopamine in the brain is complicated, but researchers have discovered that a spot in the brain called the striatum is a main center of dopamine signaling. The role of dopamine in these pleasurable reactions to music, however, has not been a focus of research in the past. In a series of experiments at McGill University in 2011, researchers took a closer look at dopamine release in response to pleasurable music, and how it is involved in the anticipation and reward aspects of listening to pleasurable music.

 It turns out that music-induced pleasure is quite difficult to measure, since both music and pleasure are subjective experiences (they can’t be quantified easily). Would you and I both report the same emotional responses while listening to the same song? It’s possible, if we have very similar musical tastes, but it is unlikely. To get around this issue, the researchers at McGill used the “chills” response (also known as goosebumps!) to objectively measure their subjects’ emotional responses to pleasurable music. Goosebumps are not unique to music, but rather indicate a state of intense arousal, which does occur when listening to pleasurable music. But why do we get these goosebumps in response to music? The researchers did not delve into this question specifically, but my hypothesis is illustrated in the figure below: dopamine is ultimately derived from L-phenylalanine (an amino acid) and is converted downstream into adrenaline, which is known to produce goosebumps!

For the study’s musical selections, participants were asked to choose music that was very pleasurable to them. This is an interesting approach, since most scientific experimenters choose the stimuli for their experiment. Obviously, this would not work for this study, since the participants probably wouldn’t have the same taste or emotional responses to music as the experimenters! The “chills” response allowed the researchers to objectively measure the subjects’ emotional responses to music, but what about measuring dopamine?

To directly test the activity of dopamine in the striatum while the participants listened to music, the researchers used a technique called ligand-based PET (positron emission tomography) scanning. This type of scanning allowed them to see dopamine release in real time, which is valuable and uniquely suited to experiments with music! Since the subject must lie still inside of the scanner to get an accurate scan, this type of testing would be difficult to apply to studies looking at food consumption and neural responses, for example, because the subject would have to chew the food inside of the scanner to get a real-time picture of what was going on inside their brain. When the stimulus is music, however, the subject can simply lie still within the scanner and listen! The participants were asked to rate their degree of pleasure in response to the music, and the researchers found that their ratings agreed with the intensity of the “chills” responses. The PET scans showed high autonomic nervous system activity while listening to pleasurable music, and increased dopamine transmission in the dorsal and ventral striatum on both sides of the brain.

 In order to figure out what was different between the anticipation of a good part of the song, and the actual reward that is the pleasurable part of the song, the researchers also performed fMRI scans. What they found was fascinating: in times of anticipation and times of intense pleasure, different areas of the brain are active (see figure below)! When the participants were anticipating a “good part” in the song, hemodynamic and neurochemical activity was highest in the right caudate. When the participants were experiencing great pleasure from the music (when they were getting chills), activity was highest in the right nucleus accumbens. This shows that the release of dopamine in times of intense emotional arousal in response to music happens at a location that is unique from other areas of dopamine release.

This research is fascinating because it defines the emotional satisfaction and reward that we receive from listening to music. Our neural response to music can now be compared with other tangible stimuli like food, drugs or sex, which also cause dopamine release! People have wondered for a long time about why music has been around for so many years, and why we respond to it in the way that we do. The biological purpose of music really is difficult to determine, since it is such an abstract concept when compared to something like food. The researchers proposed that this response to music might help people in their emotional development. My biggest question about this research, however, is what causes the intense emotional response in the first place? Is it the harmonies of choir members joining together in a special way, or the sounds of different musical instruments blended together? Is it the sound of intervals played in a certain sequence? Maybe that’ll be the next focus of research!

So next time you’re at a concert and you get the chills, remember that something incredible is going on inside of your brain… But until then, plug in your headphones and get that dopamine flowing!


References:

Salimpoor, V. et al. (2011). Anatomically Distinct Dopamine Release During Anticipation and Experience of Peak Emotion to Music. Nature Neuroscience, 14(2), 257-262.

Broadley, K. (2010). The Vascular Effects of Trace Amines and Amphetamines. Pharmacology & Therapeutics, 125(3), 363-375.

mariaheartly

Little Bundles of Joy: Exosomes

For many people, childhood can be summed up in 5 words: peanut butter and jam sandwiches. Unfortunately, many children will never be able to eat one because of an allergy, or they may have to avoid eating one due to allergies their peers have. Peanut allergies have doubled in the last 10 years and affect up to 3% of the population in Western countries. Our bodies have an intricate response to allergens, and it turns out that certain components in breast milk can help with long-term regulation of a child’s reaction and sensitivities to these allergens. Exosomes – small vesicles found in biological fluids like breast milk – provide an infant with key micro RNAs (miRNAs) that can regulate immune responses to allergens, and they express allergen-detecting and -presenting proteins. In this post, I’m going to focus on one immune-related effect, regulatory T cell (Treg) differentiation and function, and how exosomes regulate it by delivering certain miRNAs.

Exosomes are small vesicles that bud from many different types of cells, including those involved with allergic responses and intestinal epithelial cells. They contain numerous proteins associated with cell signaling and immune response, and they are packed with all sorts of compounds. Exosomes from breast milk tend to consistently carry the same compounds, including, but certainly not limited to, 59 immune-related miRNAs and the proteins MHC II, Hsc70, and CD81 (Figure 1). The miRNAs in these exosomes are short pieces of genetic material, usually 20-30 nucleotides long, that are never translated into protein. Instead, they exist as various structural forms and interact with proteins, DNA, or other RNAs to modify cellular activity.

Certain miRNAs, like miR-155 and miR-148a, regulate the differentiation and function of Tregs. T cells are associated with the immunological function of various immune cells, but Tregs are a subset of T cells specifically involved in immunological tolerance and suppression, meaning they can prevent allergies! When Tregs respond to a specific allergen, they become activated and prevent T helper 2 (Th2) cell activity. Th2 activity is very important for cells to elicit an atopic allergic response; that is, a response which is very sensitive to the allergen. The idea is that by preventing Th2 activity, it will be possible to prevent allergic responses to specific allergens, like peanuts.

Back to miR-155 and miR-148a – how do they regulate the differentiation and function of these Tregs? All Treg cells express FoxP3 proteins. FoxP3, the sly fox that it is, is known as the master regulator of Treg cells because it can regulate Treg differentiation and function, so miR-155 takes advantage of FoxP3’s important role. When miR-155 enters the cell, it targets another protein, SOCS1 (suppressor of cytokine signaling 1). Normally, SOCS1 inhibits phosphorylated STAT5 (STAT5-P) and acts as negative feedback in other pathways. By preventing SOCS1 activity, miR-155 consequently enables STAT5-P to do its job. Just to get an idea of where this pathway is headed, STAT5-P can induce the expression of specific genes. So, if STAT5-P is active, it will, along with three other proteins, increase FoxP3 expression. Higher levels of FoxP3 have two functions: inducing Treg differentiation, and increasing miR-155 expression in the cell by binding to the miR-155-encoding gene called bic. The latter point is the most relevant to allergies, because increasing miR-155 levels ultimately prevents hypersensitive Th2 allergic immune responses by blocking two really important transcription factors associated with Th2-atopic responses. See Figure 2 for the pathway showing how miR-155 does this.

On the other hand, miR-148a regulates Treg differentiation and function via DNA demethylation to ensure stable FoxP3 expression. The FoxP3 gene has a specific sequence called the Treg-specific demethylated region (TSDR) that enables stable FoxP3 expression only when it is demethylated. Of course, there has to be a protein to make life difficult, and it is called DNA methyltransferase (DNMT). This protein methylates TSDR and prevents stable FoxP3 expression. But miR-148a targets and stops DNMT. The end result is demethylation of TSDR. Basically, miR-148a + DNMT = demethylation (yay!), while DNMT alone = methylation (boo!) (Figure 3). Research has found that people who have atopic diseases have fewer Tregs with a demethylated TSDR. When you think about it, this makes sense, because it means that in the end they won’t be able to reduce Th2 activity. Melnik et al. speculate that the presence of miRNAs could induce epigenetic modifications that will ultimately lead to Treg stability and prevention of atopic diseases.

The fact that some exosome contents can regulate Tregs is good news, but it’s not really applicable unless the exosomes from milk actually influence the infant’s immune system. In an experiment performed by Admyre et al., they found that peripheral blood mononuclear cells (PBMCs – nucleated blood cells) from adult human donors had higher levels of FoxP3 expression after being incubated with breast milk exosome solutions. After observing this, the researchers suggested that human milk does have the potential to influence the infant’s immune system. The fact that milk exosomes can result in higher FoxP3 levels is a definite sign that breast milk has the potential to help reduce allergic responses through lower Th2 activity. Nonetheless, this research is still controversial, and it is unclear exactly what effects breast milk has on allergy development or avoidance.

Assuming that milk exosomes do influence the infant’s immune system, how can they pass through the digestive tract? After all, the digestive tract is a harsh, acidic environment. Gu et al. assessed miRNA stability in an attempt to answer a similar question. They found that breast milk miRNAs remained stable after being subjected to harsh environments such as varying temperatures, RNases, and low pH. This resistance, especially to low pH, is promising because it supports the idea that infants take up the miRNAs through the digestive tract. Another recent study by Lässer et al. demonstrated that macrophages are capable of exosome uptake. This makes sense considering macrophages are an important part of the immune system. Still, other studies have shown that numerous cell types can take up exosomes, so it’s not entirely clear which cells play a role in breast milk exosome uptake.

The expression of FoxP3 in breast milk-derived exosomes may be influenced by the mother’s atopic status. Melnik et al. suggest that low levels of Foxp3-expressing Tregs in a mother (as a result of allergies) may yield lower miR-155 levels in exosomes. They also suggest that this may have something to do with why atopic mothers pass on atopic allergies to infants more than atopic fathers do. 

There’s still so much to learn about this process that I can’t help but wonder what else these breast milk exosome-derived miRNAs have in store! While there are still a lot of unknowns regarding these miRNAs, it does provide lots of research questions that can be addressed like how these miRNAs interact with proteins, or what other miRNAs can reduce allergic responses. Understanding what factors contribute to allergy development and prevention may, in the future, help reduce the occurrence of some common allergies. Maybe it will someday allow everyone to enjoy the deliciousness that is a PB&J sandwich.

Until next time,

Maria

References:

Melnik BC, John SW, Schmitz G. Milk: an exosomal microRNA transmitter promoting thymic regulatory T cell maturation preventing the development of atopy? Journal of Translational Medicine. 2014; 12(43).

Gu Y, Li M, Wang T, Liang Y, Zhong Z, Wang X, Zhou Q, Chen L, Lang Q, He Z, Chen X, Gong J, Gao X, Li X, Lv X. Lactation-Related MicroRNA Expression Profiles of Porcine Breast Milk Exosomes. International Journal of Biological Sciences. 2012; 8(1): 118-123.

Lässer C, Alikhani VS, Ekström K, Eldh M, Paredes PT, Bossios A, Sjöstrand M, Gabrielsson S, Lötvall J, Valadi H. Human saliva, plasma and breast milk exosomes contain RNA: uptake by macrophages. Journal of translational Medicine. 2011; 9(9).

Admyre C, Johansson SM, Qazi KR, Filén J-J, Lahesmaa R, Norman M, Neve EPA, Scheynius A, Gabrielsson S. Exosomes with Immune Modulatory Features Are Present in Human Breast Milk. The Journal of Immunology. 2007; 179: 1969-1978.

Zhou Q, Li M, Wang X, Li Q, Wang T, Zhu Q, Zhou X, Wang X, Gao X, Li X. Immune-related MicroRNAs are Abundant in Breast Milk Exosomes. International Journal of Biological Sciences. 2012; 8: 118-123.

b4272 breastmilk exosome allergies
jacqrich-blog

The Ever Changing Oral Microbiome

Think about your last trip to the dentist. The vinyl chair, the bright lights, the tray of metal tools… And most likely the shame you feel when your dentist asks if you’ve been flossing. Now although this bi-annual trip to the dentist seems normal to most of us (I hope), dentist trips haven’t always been a part of the “routine” as far as human health goes. Why not? The answer lies in the composition of our ancestors’ oral microbiomes, which has been revealed through recent genome sequencing advancements.

The human oral microbiome can be preserved for an extensive period of time after death, unlike other microbial communities (such as those of the skin or gut). This is a result of the formation of dental calculus, which thankfully has nothing to do with derivatives. Dental calculus forms in a sequential process, beginning with dense multi-species biofilms known as dental plaque. This plaque can then become calcified by calcium phosphates precipitated from the saliva or tooth surface, causing the plaque to harden into dental calculus (tartar). Bacteria become “locked” into the calculus, whose hard surface resembles bone. This is the perfect site for the formation of more biofilms, more plaque, and the accumulation of more dental calculus (Figure 1). Disgusting, right? Although we do our best to prevent this process through regular oral hygiene practices, once tartar forms, it’s too hard to be removed with a toothbrush or floss. That’s where your dentist comes in with the scary tray of metal tools. If not removed, it can lead to gingivitis (inflammation of the gums) or periodontitis (inflammation of the tissues and bone that support the teeth).

image

Figure 1: The formation of dental calculus from dense oral biofilm (plaque) interacting with calcium phosphates from saliva and/ or the tooth surface facilitates the formation of more oral biofilm.

As you can probably imagine, our hunter/gatherer ancestors weren’t rinsing with Listerine or making regular dentist trips, so naturally they would have had a buildup of dental calculus. Despite this, the structure of the bone supporting their teeth suggests that they had less periodontal disease than current human populations! By sequencing DNA preserved in ancient dental plaque, researchers have established that the oral microbiome of hunter/gatherers was significantly more diverse than that of current human populations. Who knew plaque could actually be useful for something?! Major shifts in diet, such as those associated with the beginning of farming (10 000 years ago) and the industrial revolution (200 years ago), are thought to be the main contributors to this loss of diversity over the past 10 000 years.

By analyzing ancient human skeletons recovered from the last hunter/gatherers in Eastern Europe, as well as those of the first farmers of the agricultural era, a clear shift could be seen between the oral microbiomes of the two populations. The beginning of farming was marked by increased consumption of fermentable carbohydrates, which changed the oral environment and led to increased colonization by carbohydrate-fermenting species, such as Porphyromonas gingivalis and Tannerella forsythia. These species are specifically adapted to a carbohydrate-rich environment, which allowed them to outcompete other species and decrease the overall oral microbiome diversity.

Another significant decrease in diversity followed the industrial revolution about 200 years ago, which saw the large-scale production of foods containing both mono- and disaccharides. DNA sequencing of samples from before and after the revolution showed that the abundance of Gram-positive anaerobic bacteria, such as Streptococcus mutans, increased greatly post-revolution. The shift also correlated directly with an increased prevalence of dental calculus and periodontal disease, as well as a decrease in the overall diversity of the oral microbiome. Coincidence? I think not.

S. mutans is a Gram-positive anaerobic bacterium that acts as the poster child for modern oral disease. It is known for its enhanced ability to alter its metabolic processes based on what nutrient sources are available in its environment. This makes it extremely competitive under conditions with varying nutrient sources, such as the oral biofilm. Using a process known as carbohydrate catabolite repression (CCR), S. mutans preferentially metabolizes a preferred carbohydrate source in the presence of non-preferred sources. It can also use this process to repress genes involved in the metabolism of non-preferred carbohydrates in order to optimize growth on the preferred carbon source until it runs out.

As with many other bacterial species, the preferred carbon source of S. mutans is glucose, whose uptake is facilitated by a phosphotransferase system (PTS). What’s special about S. mutans is that it has at least 15 identified PTSs, specific to different mono-, di- and oligosaccharides, that can be expressed based on the carbohydrates available in the environment. This allows S. mutans cells to shift almost seamlessly between different carbon sources, and to outcompete other species that are not able to do so. PTSs generally work as a phosphorelay system, consisting of a non-specific enzyme I (EI), a heat-stable phosphocarrier protein (HPr) and a sugar-specific enzyme II (EII). EII is a transmembrane permease that recognizes specific carbohydrates, transports them into the cell and phosphorylates them to keep them from being transported back out. If glucose is present, its phosphorylated form is able to bind and activate transcription factors that repress genes for other PTSs (Figure 2). This mechanism demonstrates one of the ways S. mutans is able to outcompete other species in the oral biofilm, which contributes to its pathogenicity.

image

Figure 2: The phosphotransferase system (PTS) of S. mutans used in carbohydrate catabolite repression (CCR), where the EII enzyme can be any of at least 15 mono-, di- or oligosaccharide-specific transmembrane permeases. A phosphate group is transferred via a phosphorelay to an incoming carbohydrate (glucose), which can then repress genes involved in alternative metabolism.

The ability of S. mutans to adjust its metabolic processes so rapidly is only partially responsible for the drop in diversity that followed the adoption of a carbohydrate-rich diet. The lactic acid this species produces through anaerobic fermentation led to an increasingly acidic environment in the oral biofilm, which allowed it to further outcompete other members of the biofilm. This acidity also damages the tooth surface, one of the direct ways this species causes oral disease.

Although maintaining good oral health today is somewhat demanding, this was not always the case. Analysis of dental plaque from ancient human remains has allowed researchers to determine that major shifts in human diet changed the oral environment enough to alter the composition of the oral microbiome. This was primarily driven by the introduction of large amounts of fermentable carbohydrates, which led to the less diverse and disease-associated oral microbiome of today’s human population. So even though our ancestors didn’t have to floss, you still do!

Thanks for reading!

Jac

Here are some additional readings for you to check out:

Adler CJ et al. (2013) Sequencing ancient calcified dental plaque shows changes in oral microbiota with dietary shifts of the Neolithic and Industrial revolutions. Nature Genetics, 45: 450-455.

Moye ZD, Zeng L, Burne RA (2014) Fueling the caries process: carbohydrate metabolism and gene regulation by Streptococcus mutans. Journal of Oral Microbiology, 6.

Pihlstrom BL, Michalowicz BS, Johnson NW. (2005) Periodontal diseases. Lancet, 366(9499): 1809-1820.