Dewdrops on a spiderweb reveal the physics behind cell structures

As any cook knows, some liquids mix well with each other, but others do not.

For example, when a tablespoon of vinegar is poured into water, a brief stir suffices to thoroughly combine the two liquids. However, a tablespoon of oil poured into water will coalesce into droplets that no amount of stirring can dissolve. The physics that governs the mixing of liquids is not limited to mixing bowls; it also affects the behavior of things inside cells. It’s been known for several years that some proteins behave like liquids, and that some liquid-like proteins don’t mix together. However, very little is known about how these liquid-like proteins behave on cellular surfaces.

“The separation between two liquids that won’t mix, like oil and water, is known as ‘liquid-liquid phase separation,’ and it’s central to the function of many proteins,” said Sagar Setru, a 2021 Ph.D. graduate who worked with both Sabine Petry, a professor of molecular biology, and Joshua Shaevitz, a professor of physics and the Lewis-Sigler Institute for Integrative Genomics.

Such proteins do not dissolve inside the cell. Instead, they condense with themselves or with a limited number of other proteins, allowing cells to compartmentalize certain biochemical activities without having to wrap them inside membrane-bound spaces.

“In molecular biology, the study of proteins that form condensed phases with liquid-like properties is a rapidly growing field,” said Bernardo Gouveia, a graduate student in chemical and biological engineering working with Howard Stone, the Donald R. Dixon ’69 and Elizabeth W. Dixon Professor of Mechanical and Aerospace Engineering and chair of the department.

Setru and Gouveia collaborated as co-first authors on an effort to better understand one such protein.

“We were curious about the behavior of the liquid-like protein TPX2. What makes this protein special is that it does not form liquid droplets in the cytoplasm as had been observed before, but instead seems to undergo phase separation on biological polymers called microtubules,” said Setru. “TPX2 is necessary for making branched networks of microtubules, which is crucial for cell division. TPX2 is also overexpressed in some cancers, so understanding its behavior may have medical relevance.”

Individual microtubules are linear filaments that are rod-like in shape. During cell division, new microtubules form on the sides of existing ones to create a branched network. The sites where new microtubules will grow are marked by globules of condensed TPX2. These TPX2 globules recruit other proteins that are necessary to generate microtubule growth.

The researchers were curious about how TPX2 globules form on a microtubule. To find out, they decided to try observing the process in action. First, they modified the microtubules and TPX2 so that each would glow with a different fluorescent color. Next, they placed the microtubules on a microscope slide, added TPX2, and then watched to see what would happen. They also made observations at very high spatial resolution using a powerful imaging approach called atomic force microscopy.

“We found that TPX2 first coats the entire microtubule and then breaks up into droplets that are evenly spaced apart, similar to how morning dew coats a spider web and breaks up into droplets,” said Gouveia.

Setru, Gouveia and colleagues found that this occurs because of something physicists call the Rayleigh-Plateau instability. Though non-physicists may not recognize the name, they will already be familiar with the phenomenon, which explains why a stream of water falling from a faucet breaks up into droplets, and why a uniform coating of water on a strand of spider web coalesces into separate beads.
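In its simplest textbook form — an inviscid liquid cylinder of radius R, a rough stand-in for a uniform protein coating on a filament — the instability can be summarized in two lines (the film-on-a-fiber geometry of the actual experiments shifts the numbers but not the scaling):

```latex
% A cylindrical column of liquid is unstable to undulations whose
% wavelength exceeds its circumference:
\lambda > 2\pi R
% The fastest-growing perturbation, which sets the droplet spacing, is
\lambda_{\mathrm{max}} \approx 9.02\, R
```

Because the fastest-growing wavelength scales with the radius of the liquid layer, a thicker initial coating produces larger, more widely spaced droplets, consistent with the researchers’ observation that the amount of TPX2 sets the globule size and spacing.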

“It is surprising to find such everyday physics in the nanoscale world of molecular biology,” said Gouveia.

Extending their study, the researchers found that the spacing and size of TPX2 globules on a microtubule are determined by the thickness of the initial TPX2 coating — that is, how much TPX2 is present. This may explain why microtubule branching is altered in cancer cells that overexpress TPX2.

“We used simulations to show that these droplets are a more efficient way to make branches than just having a uniform coating or binding of the protein all along the microtubule,” said Setru.

“That the physics of droplet formation, so vividly visible to the naked eye, has a role to play down at the micrometer scales, helps establish the growing interface (no pun intended) between soft matter physics and biology,” said Rohit Pappu, the Edwin H. Murty Professor of Engineering at Washington University in St. Louis, who was not involved in the study.

“The underlying theory is likely to be applicable to an assortment of interfaces between liquid-like condensates and cellular surfaces,” added Pappu. “I suspect we will be coming back to this work over and over again.”

“A hydrodynamic instability drives protein droplet formation on microtubules to nucleate branches,” by Sagar U. Setru, Bernardo Gouveia, Raymundo Alfaro-Aco, Joshua W. Shaevitz, Howard A. Stone and Sabine Petry, appeared in the Jan. 28 issue of Nature Physics (DOI: 10.1038/s41567-020-01141-8). This work was supported by the Paul and Daisy Soros Fellowships for New Americans; the National Science Foundation (Graduate Research Fellowship Program and via the Center for the Physics of Biological Function, PHY-1734030); the National Institutes of Health (National Cancer Institute National Research Service Award 1F31CA236160, National Human Genome Research Institute training grant 5T32HG003284, National Institute on Aging 1DP2GM123493); Pew Scholars Program (00027340); and the Packard Foundation (2014–40376).



from ScienceBlog.com https://ift.tt/36qWYxM

Clear as Mud: How Tiny Plants Changed the Planet, 488 Million Years Ago

Nearly 500 million years ago, Earth’s lowland landscapes were dominated by vast sandy, gritty plains. These landscapes then underwent a major, irreversible change, after which they became dominated by thick layers of mud.

Now, new research from Caltech explains that this drastic landscape change was instigated by the evolution of early tiny plants, like mosses and liverworts. The study was conducted as part of a collaboration between the laboratories of Woodward Fischer—professor of geobiology and associate director of the Center for Autonomous Systems and Technologies—and Michael Lamb, professor of geology. The work is described in a paper published in Science on January 29, 2021.

Prior research posited that large plants with deep roots (like trees in forests) helped hold mud, contributing to a muddy landscape. But several years ago, Earth scientists realized there was a problem with this idea: the rise of mud on Earth began before large, complex plants had evolved. In fact, the only plants that existed at the time of the great increase in mud were groups of small, centimeter-scale bryophytes — an informal collection of non-vascular plants (those that do not have a vascular system, and reproduce via spores, not seeds) including mosses and liverworts. How could the appearance of such little plants lead to massive amounts of mud settling onto river floodplains? Graduate student Sarah Zeichner (MS ’20) set out to answer this question in the new study.

Lowland landscapes are largely shaped by the movement, mostly by rivers, of mud, silt, and sand. Mud is dominated by micrometer-size clay mineral particles, as opposed to sand, the particles of which are hundreds of micrometers in size and predominantly made of small rock fragments and/or the mineral quartz. The different properties of clay (the main component of mud) and sand dictate the way these materials move within rivers, which affects how they get deposited onto the landscape and preserved in the rock record.

If you mix sand and clay in water, sand particles quickly settle to the bottom, whereas the clay stays suspended; these particles do not settle to the bottom to produce a muddy bed unless the particles begin to clump together into larger aggregates that can rapidly settle, a process called flocculation.
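The size dependence here is what Stokes’ law predicts: a small sphere’s settling speed in still water grows with the square of its diameter. A minimal sketch of the contrast (the particle sizes and the floc density below are illustrative round numbers, not values from the study):

```python
# Stokes settling speed v = 2 (rho_p - rho_f) g r^2 / (9 mu), valid for
# small particle Reynolds numbers. Illustrative values, not from the paper.

G = 9.81        # gravity, m/s^2
RHO_F = 1000.0  # water density, kg/m^3
MU = 1.0e-3     # water viscosity, Pa*s

def stokes_velocity(diameter_m, rho_particle):
    """Terminal settling speed (m/s) of a sphere in still water."""
    r = diameter_m / 2.0
    return 2.0 * (rho_particle - RHO_F) * G * r**2 / (9.0 * MU)

clay = stokes_velocity(1e-6, 2650.0)    # 1 um clay particle
sand = stokes_velocity(200e-6, 2650.0)  # 200 um sand grain (Stokes is
                                        # only approximate at this size)
floc = stokes_velocity(30e-6, 1200.0)   # 30 um porous clay floc

for name, v in [("clay", clay), ("sand", sand), ("floc", floc)]:
    hours = 0.1 / v / 3600.0  # time to fall 10 cm
    print(f"{name}: {v:.2e} m/s, ~{hours:.2g} h to settle 10 cm")
```

Even though a porous floc is far less dense than a quartz grain, it settles hundreds of times faster than an individual clay particle — which is the crux of the flocculation argument.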

Zeichner hypothesized that the rise of small plants like mosses propelled processes at a molecular level that initiated flocculation of clay in rivers, thus allowing mud to deposit on the landscape.

In a series of laboratory experiments, she and her team simulated how clay particles in mud interact in moving water, to help reveal how these particles could be deposited as mud, rather than staying suspended in water. In the experiments, different clay particles were mixed with organic molecules that have chemical properties similar to those produced by these early plants. The researchers found that only a small amount of organic material was needed to induce clay particles to bind together—to flocculate—and to rapidly settle within the simulated river.

“If you leave a muddy liquid on its own, it stays suspended and cloudy for days,” Zeichner says. “The clay is too fine and simply does not settle on a meaningful timescale. But we found that adding even a small amount of organic material acts as a sticky substance to glue the clay particles together, enabling them to form bigger and bigger clumps and thereby more rapidly settle to the bottom of the container.”

These experiments can enhance understanding of the rock record. Landscapes will be dominated by the suspended sediment in rivers that settles fastest onto riverbanks. “If you imagine these are then the particles floating in ancient rivers, and that river floods onto the landscape after a large rain or snowmelt, flocculated mud is going to settle out really quickly and create muddier river banks and floodplains. Without flocculation, only the sand would be left behind,” says Zeichner.

“For me, this is a big deal, because it’s a huge change in the rock record that was made by these tiny, scrappy founding plants, producing molecules that change the way sediment behaves on a global scale,” says Fischer. “Before this, the world was like a huge beach.”

The research also has ramifications for the modern carbon cycle—the global processes through which carbon is released into the atmosphere or ocean, or trapped underground. Understanding the carbon cycle is critical for making policy decisions to combat climate change.

“With the discovery of this organic-driven mud burial mechanism comes recognition of new profitable connections between the burial of mud and sequestration of carbon in lowland landscapes,” says Fischer. “There are many places around the world where the land is subsiding and has been starved of sediment due to the construction of river levees that strictly dictate the paths that water and sediment are allowed to take to control flooding. Knowing that organic carbon promotes the deposition of mud, and vice versa, raises the exciting possibility that land-use decisions for landscape restoration that allow access to fresh river sediment can also promote the sequestration of carbon.” Carbon sequestration is the process of capturing and containing carbon to reduce the amount of carbon in the atmosphere, which would ultimately help reduce climate change.

The paper is titled “Early plant organics increased global terrestrial mud deposition through enhanced flocculation.” Zeichner is the study’s lead author. In addition to Zeichner, Fischer, and Lamb, Caltech co-authors are graduate student Justin Nghiem, former research intern Nina Takashima, and former postdoctoral scholar Jan de Leeuw. Former Caltech postdoctoral scholar Vamsi Ganti, now of UC Santa Barbara, is also a co-author. Funding was provided by the Caltech Discovery Fund, the David and Lucile Packard Foundation, the American Chemical Society Petroleum Research Fund, the National Science Foundation, and the Troy Tech High School Program.




Scientists identify locations of early prion protein deposition in retina

The earliest eye damage from prion disease takes place in the cone photoreceptor cells, specifically in the cilia and the ribbon synapses, according to a new study of prion protein accumulation in the eye by National Institutes of Health scientists. Prion diseases originate when normally harmless prion protein molecules become abnormal and gather in clusters and filaments in the human body and brain.

Understanding how prion diseases develop, particularly in the eye because of its diagnostic accessibility to clinicians, can help scientists identify ways to slow the spread of prion diseases. The scientists say their findings, published in the journal Acta Neuropathologica Communications, may help inform research on human retinitis pigmentosa, an inherited disease with similar photoreceptor degeneration leading to blindness.

Prion diseases are slow, degenerative and usually fatal diseases of the central nervous system that occur in people and some other mammals. Prion diseases primarily involve the brain, but also can affect the eyes and other organs. Within the eye, the main cells infected by prions are the light-detecting photoreceptors known as cones and rods, both located in the retina.

In their study, the scientists, from NIH’s National Institute of Allergy and Infectious Diseases at Rocky Mountain Laboratories in Hamilton, Montana, used laboratory mice infected with scrapie, a prion disease common to sheep and goats. Scrapie is closely related to human prion diseases, such as variant, familial and sporadic Creutzfeldt-Jakob disease (CJD). The most common form, sporadic CJD, affects an estimated one in one million people annually worldwide. Other prion diseases include chronic wasting disease in deer, elk and moose, and bovine spongiform encephalopathy in cattle.

Using confocal microscopy that can identify prion protein and various retinal proteins at the same time, the scientists found the earliest deposits of aggregated prion protein in cone photoreceptors next to the cilia, tube-like structures required for transporting molecules between cellular compartments. Their work suggests that by interfering with transport through cilia, these aggregates may provide an important early mechanism by which prion infection selectively destroys photoreceptors. At a later study timepoint, they observed similar findings in rods.

Prion protein also was deposited in cones and rods adjacent to ribbon synapses just before the destruction of these structures and death of photoreceptors. Ribbon synapses are specialized neuron connections found in ocular and auditory neural pathways, and their health is critical to the function of retinal photoreceptors in the eye, as well as hair cells in the ear.

The researchers say such detailed identification of disease-associated prion protein, and the correlation with retinal damage, has not been seen previously and is likely to occur in all prion-susceptible species, including people.

Next the researchers are hoping to study whether similar findings occur in retinas of people with other degenerative diseases characterized by misfolded host proteins, such as Alzheimer’s and Parkinson’s diseases.

Article

J. Striebel et al. Prion-induced photoreceptor degeneration begins with misfolded prion protein accumulation in cones at two distinct sites: cilia and ribbon synapses. Acta Neuropathologica Communications DOI: 10.1186/s40478-021-01120-x (2021).

Related

J. Striebel et al. Microglia are not required for prion-induced retinal photoreceptor degeneration. Acta Neuropathologica Communications DOI: 10.1186/s40478-019-0702-x (2019).

J. Carroll et al. Microglia are critical in host defense against prion disease. Journal of Virology DOI: 10.1128/JVI.00549-18 (2018).

Who

Bruce Chesebro, M.D., chief of NIAID’s Laboratory of Persistent Viral Diseases, is available to comment on this study.




X-Ray Tomography Lets Researchers Watch Solid-State Batteries Charge, Discharge

Using X-ray tomography, a research team has observed the internal evolution of the materials inside solid-state lithium batteries as they were charged and discharged. Detailed three-dimensional information from the research could help improve the reliability and performance of the batteries, which use solid materials to replace the flammable liquid electrolytes in existing lithium-ion batteries.

The operando synchrotron X-ray computed microtomography imaging revealed how the dynamic changes of electrode materials at lithium/solid-electrolyte interfaces determine the behavior of solid-state batteries. The researchers found that battery operation caused voids to form at the interface, which created a loss of contact that was the primary cause of failure in the cells.

“This work provides fundamental understanding of what is happening inside the battery, and that information should be important for guiding engineering efforts that will push these batteries closer to commercial reality in the next several years,” said Matthew McDowell, an assistant professor in the George W. Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering at the Georgia Institute of Technology. “We were able to understand exactly how and where voids form at the interface, and then relate that to battery performance.”

The research, supported by the National Science Foundation, a Sloan Research Fellowship, and the Air Force Office of Scientific Research, was reported Jan. 28 in the journal Nature Materials.

The lithium-ion batteries now in widespread use for everything from mobile electronics to electric vehicles rely on a liquid electrolyte to carry ions back and forth between electrodes within the battery during charge and discharge cycles. The liquid uniformly coats the electrodes, allowing free movement of the ions.

Rapidly evolving solid-state battery technology instead uses a solid electrolyte, which should help boost energy density and improve the safety of future batteries. But removal of lithium from electrodes can create voids at interfaces that cause reliability issues, limiting how long the batteries can operate.

“To counter this, you could imagine creating structured interfaces through different deposition processes to try to maintain contact through the cycling process,” McDowell said. “Careful control and engineering of these interface structures will be very important for future solid-state battery development, and what we learned here could help us design interfaces.”

The Georgia Tech research team, led by first author and graduate student Jack Lewis, built special test cells about two millimeters wide. They were designed to be studied at the Advanced Photon Source, a synchrotron facility at Argonne National Laboratory, a U.S. Department of Energy Office of Science facility located near Chicago. Four members of the team studied the changes in battery structure during a five-day period of intensive experiments.

“The instrument takes images from different directions, and you reconstruct them using computer algorithms to provide 3D images of the batteries over time,” McDowell said. “We did this imaging while we were charging and discharging the batteries to visualize how things were changing inside the batteries as they operated.”
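The reconstruction step McDowell describes is, at heart, back-projection. A toy sketch of the idea on a tiny grid (real synchrotron microtomography uses hundreds of projection angles and filtered reconstruction algorithms; the grid and feature here are invented for illustration):

```python
# Toy tomography: project a tiny 2-D "sample" along two directions, then
# smear the projections back (unfiltered back-projection). Where the
# smears overlap most, the feature sits.

N = 8
sample = [[0.0] * N for _ in range(N)]
sample[2][5] = 1.0  # a single dense feature

# Projections: column sums (beam travelling down) and row sums (beam
# travelling sideways) -- the "images from different directions".
col_proj = [sum(sample[r][c] for r in range(N)) for c in range(N)]
row_proj = [sum(sample[r][c] for c in range(N)) for r in range(N)]

# Back-projection: smear each projection uniformly along its beam path
# and accumulate.
recon = [[col_proj[c] + row_proj[r] for c in range(N)] for r in range(N)]

peak = max((recon[r][c], r, c) for r in range(N) for c in range(N))
print("feature recovered at row, col:", peak[1], peak[2])  # -> 2 5
```

Each projection alone only says which column (or row) the feature lies in; combining views from different directions localizes it, and with many angles the full 3D density field can be recovered.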

Because lithium is so light, imaging it with X-rays can be challenging and required a special design of the test battery cells. The technology used at Argonne is similar to what is used for medical computed tomography (CT) scans. “Instead of imaging people, we were imaging batteries,” he said.

Because of limitations in the testing, the researchers were only able to observe the structure of the batteries through a single cycle. In future work, McDowell would like to see what happens over additional cycles, and whether the structure somehow adapts to the creation and filling of voids. The researchers believe the results would likely apply to other electrolyte formulations, and that the characterization technique could be used to obtain information about other battery processes.

Battery packs for electric vehicles must withstand at least a thousand cycles during a projected 150,000-mile lifetime. While solid-state batteries with lithium metal electrodes can offer more energy for a given battery size, that advantage won’t displace existing technology unless the batteries can deliver comparable lifetimes.

“We are very excited about the technological prospects for solid-state batteries,” McDowell said. “There is substantial commercial and scientific interest in this area, and information from this study should help advance this technology toward broad commercial applications.”

In addition to those already mentioned, co-authors included Francisco Javier Quintero Cortes, Yuhgene Liu, John C. Miers, Jared Tippens, Dhruv Prakash, Thomas S. Marchese, Sang Yun Han, Chanhee Lee, Pralav P. Shetty, and Christopher Saldana from Georgia Tech; Ankit Verma, Bairav S. Vishnugopi, and Partha P. Mukherjee from Purdue University; Hyun-Wook Lee from Ulsan National Institute of Science and Technology; and Pavel Shevchenko and Francesco De Carlo from Argonne National Laboratory.

This work is partially supported by the National Science Foundation under Award No. DMR-1652471, a Sloan Research Fellowship in Chemistry, a NASA Space Technology grant, the Colciencias-Fulbright scholarship program cohort 2016, the Ministry of Trade, Industry & Energy/Korea Institute of Energy Technology Evaluation and Planning (MOTIE/KETEP)(20194010000100), the Air Force Office of Scientific Research (AFOSR) under Grant FA9550-17-1-0130, and the Scialog program sponsored jointly by Research Corporation for Science Advancement and the Alfred P. Sloan Foundation that includes a grant to Purdue University by the Alfred P. Sloan Foundation. This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsoring agencies.

CITATION: John A. Lewis, et al., “Linking Void and Interphase Evolution to Electrochemistry in Solid-State Batteries Using Operando X-Ray Tomography.” (Nature Materials, 2021) https://doi.org/10.1038/s41563-020-00903-2.

Research News
Georgia Institute of Technology
177 North Avenue
Atlanta, Georgia  30332-0181  USA

Media Relations Contact: John Toon (404-894-6986) (jtoon@gatech.edu)

Writer: John Toon




Supercomputers Advance Longer-Lasting, Faster-Charging Batteries

In an effort to curb vehicle carbon emissions, the state of California recently announced a plan to ban new sales of gasoline-powered vehicles in less than 15 years, provided the current governor’s order holds.

Now, thanks to supercomputers funded by the National Science Foundation such as Comet at the San Diego Supercomputer Center (SDSC) at UC San Diego and Stampede2 at the Texas Advanced Computing Center (TACC), the research community has been making progress on developing more reliable and efficient electric cars and light trucks as well as other products by focusing on the batteries that power them.

Three university teams recently given allocations on these supercomputers include researchers from UC San Diego, Washington University in St. Louis, and Washington State University.

“We have been working on making lithium-ion batteries longer lasting and faster charging for many years,” said Shyue Ping Ong, associate professor of nanoengineering at UC San Diego. “Comet was crucial for performing the calculations to elucidate the unique lithium insertion and diffusion mechanisms responsible for the high-rate capability in a new anode material we are developing.”

This new Li3V2O5 anode, which is a safer alternative to the typical graphite anode found in today’s lithium-ion batteries, was the focus of Ong’s recent publication in the journal Nature. The study was co-authored by UC San Diego’s Sustainable Power and Energy Director Ping Liu, who explained that the new anode can be cycled more than 6,000 times with negligible capacity decay and can charge and discharge rapidly, delivering over 40 percent of its capacity in 20 seconds.

This means that future lithium-ion batteries could provide more than 70 percent more energy than current ones. Specifically, the anode proposed and studied by Ong and Liu is known as a disordered rocksalt anode.
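To put the reported rate capability in perspective, 40 percent of capacity delivered in 20 seconds corresponds to roughly a 72C rate, i.e. about 72 times the current that would drain the cell in one hour. A back-of-envelope conversion, assuming constant current:

```python
# C-rate = (fraction of capacity moved) / (time in hours).
fraction = 0.40
time_h = 20.0 / 3600.0
c_rate = fraction / time_h
print(f"effective rate: {c_rate:.0f}C")  # ~72 times the one-hour rate
```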

“This Comet-enabled computational discovery points the way to other anode materials operating under the same mechanism,” said Ong. “Access to high-performance computing resources obviates the need for my group to buy and maintain our own supercomputers so that we can focus on the science.”

Details about this project can be found in this UCSD News article.

Could Lithium-Air Batteries Be Another Solution for Future Vehicle Power?

Another possibility for more efficient electric car power is the lithium-air (Li-air) battery, an attractive alternative because it converts oxygen in the air to electricity. While not yet on the market, it offers some of the highest specific energies (energy per kilogram of battery) and could therefore increase a vehicle’s range per charge.

Researchers such as Rohan Mishra at Washington University in St. Louis, along with collaborators at the University of Illinois Chicago, recently made great strides in their work by creating new two-dimensional alloys that may lead to highly energy-efficient Li-air batteries with excellent stability over numerous charge-discharge cycles.

“Lithium-air batteries can provide high energy density that makes them excellent for electric cars,” said Mishra, a computational materials scientist and assistant professor of mechanical engineering and materials science. “We have just synthesized several alloys that would allow for great progression in the development of these batteries to be feasible for manufacturing. These alloys were synthesized with guidance from intensive calculations completed on Stampede2 at TACC and Comet at SDSC.”

Now that Mishra and his colleagues have a better grasp on how these alloys react with one another, they will focus on further improving the energy efficiency of these batteries.

Mishra and his colleagues published their latest findings in the Advanced Materials journal. Additional information about their alloy development can also be found in this Washington University article.

More recently, Mishra’s team identified a family of rare compounds called electrides that can enable stable and rechargeable fluoride ion batteries. Fluoride ion batteries can help overcome issues related to the limited supply of lithium and cobalt used in current Li-ion batteries. These findings were published in the Journal of Materials Chemistry A and more information can be found in the article from Washington University.

Improved Lithium-Sulfur Batteries Offer A Third Option

Although lithium-sulfur batteries have been available for many years, they are slow to charge and must be frequently replaced. The high theoretical capacity, energy density, and low cost of sulfur have drawn more attention to this battery type as a potential answer for the future of electric-powered vehicles.

“Lithium-sulfur batteries hold great promise to meet the ever-increasing demands for high energy density power supplies,” said Jin Liu, associate professor of mechanical and materials engineering at Washington State University. “Our recent study demonstrated how slight molecular changes to current designs can greatly improve battery performance.”

Liu’s study, published in Advanced Energy Materials, included experiments that were first simulated on Comet and Stampede2. The detailed molecular-scale simulations and follow-up experiments showed that short-branched proteins (proteins composed of short amino acid chains) were much more effective at absorbing polysulfides than long-branched proteins, suppressing polysulfide shuttling and improving battery performance.

“The proteins, in general, contain a large number of atoms and the protein structures are extremely complex,” said Liu. “The molecular simulations of such systems are computationally extensive and only possible using supercomputers such as Comet and Stampede2.”




Our gut-brain connection

“Organs-on-a-chip” system sheds light on how bacteria in the human digestive tract may influence neurological diseases.

In many ways, our brain and our digestive tract are deeply connected. Feeling nervous may lead to physical pain in the stomach, while hunger signals from the gut make us feel irritable. Recent studies have even suggested that the bacteria living in our gut can influence some neurological diseases.

Modeling these complex interactions in animals such as mice is difficult to do, because their physiology is very different from humans’. To help researchers better understand the gut-brain axis, MIT researchers have developed an “organs-on-a-chip” system that replicates interactions between the brain, liver, and colon.

Using that system, the researchers were able to model the influence that microbes living in the gut have on both healthy brain tissue and tissue samples derived from patients with Parkinson’s disease. They found that short-chain fatty acids, which are produced by microbes in the gut and are transported to the brain, can have very different effects on healthy and diseased brain cells.

“While short-chain fatty acids are largely beneficial to human health, we observed that under certain conditions they can further exacerbate certain brain pathologies, such as protein misfolding and neuronal death, related to Parkinson’s disease,” says Martin Trapecar, an MIT postdoc and the lead author of the study.

Linda Griffith, the School of Engineering Professor of Teaching Innovation and a professor of biological engineering and mechanical engineering, and Rudolf Jaenisch, an MIT professor of biology and a member of MIT’s Whitehead Institute for Biomedical Research, are the senior authors of the paper, which appears today in Science Advances.

The gut-brain connection

For several years, Griffith’s lab has been developing microphysiological systems — small devices that can be used to grow engineered tissue models of different organs, connected by microfluidic channels. In some cases, these models can offer more accurate information on human disease than animal models can, Griffith says.

In a paper published last year, Griffith and Trapecar used a microphysiological system to model interactions between the liver and the colon. In that study, they found that short-chain fatty acids (SCFAs), molecules produced by microbes in the gut, can worsen autoimmune inflammation associated with ulcerative colitis under certain conditions. SCFAs, which include butyrate, propionate, and acetate, can also have beneficial effects on tissues, including increased immune tolerance, and they account for about 10 percent of the energy that we get from food.

In the new study, the MIT team decided to add the brain and circulating immune cells to their multiorgan system. The brain has many interactions with the digestive tract, which can occur via the enteric nervous system or through the circulation of immune cells, nutrients, and hormones between organs.

Several years ago, Sarkis Mazmanian, a professor of microbiology at Caltech, discovered a connection between SCFAs and Parkinson’s disease in mice. He showed that SCFAs, which are produced by bacteria as they consume undigested fiber in the gut, sped up progression of the disease, while mice raised in a germ-free environment developed it more slowly.

Griffith and Trapecar decided to further explore Mazmanian’s findings, using their microphysiological model. To do that, they teamed up with Jaenisch’s lab at the Whitehead Institute. Jaenisch had previously developed a way to transform fibroblast cells from Parkinson’s patients into pluripotent stem cells, which can then be induced to differentiate into different types of brain cells — neurons, astrocytes, and microglia.

More than 80 percent of Parkinson’s cases cannot be linked to a specific gene mutation, but the rest have a genetic cause. The cells that the MIT researchers used for their Parkinson’s model carry a mutation that causes accumulation of a protein called alpha-synuclein, which damages neurons and causes inflammation in brain cells. Jaenisch’s lab has also generated brain cells in which this mutation is corrected but that are otherwise genetically identical and come from the same patient as the diseased cells.

Griffith and Trapecar first studied these two sets of brain cells in microphysiological systems that were not connected to any other tissues, and found that the Parkinson’s cells showed more inflammation than the healthy, corrected cells. The Parkinson’s cells also had impairments in their ability to metabolize lipids and cholesterol.

Opposite effects

The researchers then connected the brain cells to tissue models of the colon and liver, using channels that allow immune cells and nutrients, including SCFAs, to flow between them. They found that for healthy brain cells, being exposed to SCFAs is beneficial, and helps them to mature. However, when brain cells derived from Parkinson’s patients were exposed to SCFAs, the beneficial effects disappeared. Instead, the cells experienced higher levels of protein misfolding and cell death.

These effects were seen even when immune cells were removed from the system, leading the researchers to hypothesize that the effects are mediated by changes to lipid metabolism.

“It seems that short-chain fatty acids can be linked to neurodegenerative diseases by affecting lipid metabolism rather than directly affecting a certain immune cell population,” Trapecar says. “Now the goal for us is to try to understand this.”

The researchers also plan to model other types of neurological diseases that may be influenced by the gut microbiome. The findings offer support for the idea that human tissue models could yield information that animal models cannot, Griffith says. She is now working on a new version of the model that will include micro blood vessels connecting different tissue types, allowing researchers to study how blood flow between tissues influences them.

“We should be really pushing development of these, because it is important to start bringing more human features into our models,” Griffith says. “We have been able to start getting insights into the human condition that are hard to get from mice.”

The research was funded by DARPA, the National Institutes of Health, the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Environmental Health Sciences, the Koch Institute Support (core) Grant from the National Cancer Institute, and the Army Research Office Institute for Collaborative Biotechnologies.



from ScienceBlog.com https://ift.tt/36v7jZB

Study links neighborhood conditions to adolescent sleep loss

Conditions such as loud noise and few trees in neighborhoods seem to affect how much sleep adolescents get, according to a study in the journal Sleep. In a second study, researchers measured young people’s brainwaves to observe the troublesome effects of sleep loss on memory and cognitive function.

The findings were reported by two scientific teams funded by the National Heart, Lung, and Blood Institute (NHLBI), part of the National Institutes of Health.

According to the Centers for Disease Control and Prevention, about six out of 10 (57.8%) middle school students and seven out of 10 (72.7%) high school students in the United States do not get the recommended amount of sleep on school nights, increasing their risk for future chronic disease. Studies have shown a link between insufficient sleep and a higher risk of obesity, diabetes, depression, anxiety, and increased risk-taking behaviors in adolescents.

In the new residential environment study, which involved 110 adolescents, the researchers found that even small increases in neighborhood noise had a negative effect on sleep. In scientific terms, each standard deviation above average noise levels was linked to a 16-minute delay in falling asleep and 25% lower odds of sleeping at least eight hours per night. When the researchers looked at the effects of green space, they found that teens living in neighborhoods with just one standard deviation above the average number of trees fell asleep 18 minutes earlier and had more favorable sleep times overall.
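As a back-of-the-envelope illustration, the reported effect sizes can be read as standardized regression coefficients. The sketch below combines them in a simple additive model; the model form, the function name, and the idea of adding the two effects are assumptions for illustration only — just the per-standard-deviation numbers come from the article.

```python
# Illustrative sketch, NOT the study's actual statistical model.
# Only the 16-minute and 18-minute per-SD effect sizes come from the article.

NOISE_DELAY_PER_SD = 16.0    # minutes later asleep per SD of neighborhood noise
TREES_ADVANCE_PER_SD = 18.0  # minutes earlier asleep per SD of tree canopy

def predicted_onset_shift_minutes(noise_sd: float, trees_sd: float) -> float:
    """Predicted shift in sleep onset (minutes; positive = later) for a
    neighborhood `noise_sd` / `trees_sd` standard deviations above average."""
    return NOISE_DELAY_PER_SD * noise_sd - TREES_ADVANCE_PER_SD * trees_sd

print(predicted_onset_shift_minutes(1.0, 0.0))  # 16.0 (noisier, later asleep)
print(predicted_onset_shift_minutes(0.0, 1.0))  # -18.0 (greener, earlier asleep)
```

Under these assumptions, a neighborhood one standard deviation above average on both noise and tree cover would net out to a slightly earlier sleep onset.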

“For adolescents, the harms of insufficient sleep are wide-ranging and include impaired cognition and engagement in antisocial behavior,” said study author Stephanie L. Mayne, Ph.D., assistant professor of pediatrics at Children’s Hospital of Philadelphia and the Perelman School of Medicine of the University of Pennsylvania. “This makes identifying strategies to prevent and treat the problem critical. Our findings suggest that neighborhood noise and green space may be important targets for interventions.”

To record their sleep times and duration, the students in Mayne’s study wore wrist actigraphy watches for 14 days in both eighth and ninth grades. Their home addresses were mapped to show sound levels, tree canopy cover, and housing and population density. The researchers then considered sex, race, parental education, household income and size, and neighborhood poverty to reach their conclusions.

In the second study, a team of researchers at the University of California, Davis, Sleep Lab showed how sleep loss associated with reduced time in bed affected the brain waves of 77 adolescents aged 10 to 16. This study was part of the lab’s longitudinal investigation of brain changes in adolescence, as measured by electroencephalogram (EEG).

The researchers predefined three sleep schedules – four consecutive nights of 7, 8.5, or 10 hours of time in bed – and participants adhered to one of them each year. They slept with electrodes attached to their scalps to capture brain activity.

“Modest sleep restriction produced strong changes in the brain waves, which may explain how sleep loss impairs adolescents’ cognitive function,” said study author Ian G. Campbell, Ph.D., a professor in UC Davis’s psychiatry and behavioral sciences department. “We were surprised by the magnitude of the effect.”

Reducing time in bed from 10 to seven hours for four consecutive nights decreased sleep duration on the fourth night by 23%, while all-night sigma activity, which is important in memory consolidation and cognitive performance, fell by 40%.
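For scale, a quick arithmetic check (using only the numbers reported above) shows that sigma activity fell far more than proportionally to either the time-in-bed cut or the loss of sleep itself:

```python
# Back-of-the-envelope comparison of the reported magnitudes.
# All three figures come from the article; the ratios are just arithmetic.
time_in_bed_cut = (10 - 7) / 10   # 30% less time in bed
sleep_cut = 0.23                  # sleep duration fell 23%
sigma_cut = 0.40                  # all-night sigma activity fell 40%

print(round(time_in_bed_cut, 2))        # 0.3
# Sigma activity dropped ~1.7x as much as sleep duration did:
print(round(sigma_cut / sleep_cut, 2))  # 1.74
```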

During sleep, the brain replays, analyzes information, learns, and restores itself, all processes that sustain physical and mental health and overall performance. According to the researchers, the EEG changes in response to even modest reductions of the adolescents’ time in bed indicate insufficient sleep recovery.

“These two studies provide a snapshot of the research we fund to understand the harms of sleep deficiency and its causes,” said Marishka Brown, director of the National Center on Sleep Disorders Research, NHLBI. “They contribute objectively measured evidence of how modifiable environmental factors, such as housing and neighborhood conditions, impair sleep in adolescents and how that lack of sleep can literally be seen on the brain.”

Study

Campbell, I., et al. Effects of Sleep Restriction on the Sleep Electroencephalogram of Adolescents. Sleep. January 2020. https://doi.org/10.1093/sleep/zsaa280

Mayne, S., et al. Associations of the Residential Built Environment with Adolescent Sleep Outcomes. Sleep. January 2020. https://doi.org/10.1093/sleep/zsaa276



from ScienceBlog.com https://ift.tt/3pubDQg

Your toothbrush reflects you, not your toilet

Good news: The bacteria living on your toothbrush reflect your mouth – not your toilet.

After studying microbial communities living on bristles from used toothbrushes, Northwestern University researchers found those communities matched microbes commonly found inside the mouth and on skin. This was true no matter where the toothbrushes had been stored, including shielded behind a closed medicine cabinet door or out in the open on the edge of a sink.

The study’s senior author, Erica Hartmann, was inspired to conduct the research after hearing concerns that flushing a toilet might generate a cloud of aerosol particles. She and her team affectionately called their study “Operation Pottymouth.”

“I’m not saying that you can’t get toilet aerosols on your toothbrush when you flush the toilet,” Hartmann said. “But, based on what we saw in our study, the overwhelming majority of microbes on your toothbrush probably came from your mouth.”

The study will be published Feb. 1 in the journal Microbiome.

Hartmann is an assistant professor of environmental engineering at Northwestern’s McCormick School of Engineering. Ryan Blaustein, a former postdoctoral fellow in Hartmann’s lab, was the paper’s first author. Blaustein is now a postdoctoral fellow at the National Institutes of Health (NIH).

Collecting samples

To obtain toothbrushes for the study, Hartmann’s team launched the Toothbrush Microbiome Project, which asked people to mail in their used toothbrushes along with corresponding metadata. Hartmann’s team then extracted DNA from the bristles to examine the microbial communities found there. They compared these communities to those outlined by the Human Microbiome Project, an NIH initiative that identified and catalogued microbial flora from different areas of the human body.

“Many people contributed samples to the Human Microbiome Project, so we have a general idea of what the human microbiome looks like,” Blaustein said. “We found that the microbes on toothbrushes have a lot in common with the mouth and skin and very little in common with the human gut.”

“Your mouth and your gut are not separate islands,” Hartmann added. “There are some microbes that we find both in the human gut and mouth, and those microbes are found on toothbrushes. But, again, those are probably coming from your mouth.”

Clean mouth, clean toothbrush

During the research, Hartmann’s team examined how many different types of microbes lived on the toothbrushes. They found people with better oral hygiene, who regularly flossed and used mouthwash, had toothbrushes with less diverse microbial communities.

“If you practice good oral hygiene, then your toothbrush also will be relatively clean,” Hartmann said. “But it’s a small difference. It’s not like people who regularly floss, brush and use mouthwash have no microbes and those who don’t have tons. There’s just a bit less diversity on toothbrushes from people who do all those things.”

 

The researchers also found that microbes from the toothbrushes of people with better oral hygiene had slightly more antimicrobial-resistance genes. Hartmann said microbes with these genes did not match those found on the human body and likely came from air or dust in the bathroom.

Hartmann stresses that there’s no need to be alarmed by microbes living on your toothbrush. Unless a dentist recommends otherwise, people should not reach for antimicrobial toothpastes and toothbrushes.

“By using antimicrobials, you aren’t just getting rid of microbes,” Hartmann said. “You are pushing the surviving microbes toward antimicrobial resistance. In general, for most people, regular toothpaste is sufficient.”

The study, “Toothbrush microbiomes feature a meeting ground for human oral and environmental microbiota,” was supported by the Searle Leadership Fund, the NUSeq Core Facility Illumina Pilot Program and the National Institutes of Health (award number TL1R001423).



from ScienceBlog.com https://ift.tt/2YEh03P

Alcohol Causes Immediate Effects Linked to Heart Malady

A daily alcoholic drink for women or two for men might be good for heart health, compared to drinking more or not drinking at all. But while there is some evidence that drinking in moderation might prevent heart attacks, a randomized, double-blinded clinical study of 100 heart patients has now added a new wrinkle to the debate over alcohol and heart disease.

UC San Francisco researchers found that alcohol has an immediate effect on the heart in patients with atrial fibrillation (AFib), the most common life-threatening heart-rhythm disorder.

In the study, published online Jan. 27, 2021, in the Journal of the American College of Cardiology: Clinical Electrophysiology, the electrical properties that drive the heart muscle to contract changed immediately in patients randomly assigned to an infusion of alcohol maintained at the lower limit of legal intoxication, compared to an equal number of control subjects who received a placebo infusion.

According to senior study author Gregory Marcus, MD, professor of medicine in the Division of Cardiology at UCSF, “The acute impact of exposure to alcohol is a reduction in the time needed for certain heart muscle cells in the left atrium to recover after being electrically activated and to be ready to be activated again, particularly in the pulmonary veins that empty into the left atrium.”

“Although epidemiological studies have found an association between self-reported alcohol consumption and the development of an atrial fibrillation diagnosis, ours is the first study to point to a mechanism through which a lifestyle factor can acutely change the electrical properties of the heart to increase the chance of an arrhythmia,” Marcus said. The same changes caused by alcohol infusion in the study have earlier been associated with episodes of AFib in previous computer models and animal studies, he said.

In AFib, the orderly pumping of blood through the atria, the heart’s upper chambers, is disrupted. Pumping is normally driven by regular waves of electrical signals conducted along well-traveled circuits that form between cells in the heart muscle. In AFib, however, the electrical properties of the atria change, and electrical signals travel chaotically through the chambers’ muscles, which can themselves conduct and perpetuate waves of electrical activation. As a result, the atria pump blood inefficiently. Those stricken with AFib may feel the heart flutter, pound, or skip beats.

Gregory Marcus, MD. Photo by Noah Berger

The number of people in the U.S. with AFib is approaching 12 million, and the condition leads to 454,000 hospitalizations yearly, according to the Centers for Disease Control and Prevention. AFib contributes to about 158,000 U.S. deaths each year and is a leading cause of stroke, as blood clots can form inside fibrillation-prone atria. More commonly, AFib causes fatigue, weakness, lightheadedness, difficulty breathing and chest pain.

The study patients were all undergoing a scheduled, standard “catheter ablation” procedure, the most effective method to suppress atrial fibrillation episodes. This procedure targets elimination of the electrical connection between the pulmonary veins and the left atrium, the same area noted to be affected by exposure to alcohol in the current study.

Preparation for ablation surgery already required placement of catheters and electrodes in the heart chambers to monitor and pace the heart and destroy targeted tissue. For the study, investigators measured the refractory period needed by cells to recover before they could transmit electrical signals again, as well as the speed of signal conduction from one point to another within the heart. They also applied a stimulus to greatly increase the likelihood of inducing a transient AFib episode.

The speed of electrical conduction through the upper chambers did not change significantly in the study, but compared to placebo, alcohol infusion resulted in an average 12-millisecond reduction in the refractory period for tissue in the pulmonary vein, and also reduced the refractory period at significantly more sites throughout the atria. During the procedure, the number of induced AFib episodes did not differ significantly between the alcohol and placebo infusion groups.

“We were able to induce AFib in large numbers of patients in both groups, but our artificial methods of inducing AFib may have overwhelmed any observable differences between the groups,” Marcus said. “Alternatively, it may be that there is a delay between the change in electrical properties caused by alcohol and the increased likelihood of triggering AFib.”

“Patients should be aware that alcohol can have immediate effects that are expected to increase risk for arrhythmias,” Marcus concludes.

Additional UCSF study authors include Eric Vittinghoff, Gregory Nah, Joshua Moss, Randall Lee, Byron Lee, Zian Tseng, Tomos Walters, Vasanth Vedantham, Rachel Gladstone, Shannon Fan, Emily Lee, Christina Fang, Kelsey Ogomori, Trisha Hue, Jeffrey Olgin, Melvin Scheinman, Henry Hsia and Edward Gerstenfeld.

Funding: the National Institutes of Health funded the study.

Disclosures: Marcus has been supported in his research by the Patient-Centered Outcomes Research Institute, Medtronic, Eight, Jawbone, and Baylis, and is a consultant and holds equity interest in InCarda.



from ScienceBlog.com https://ift.tt/3t5t3Vz

Getting to Net Zero – and Even Net Negative – is Surprisingly Feasible, and Affordable

Reaching zero net emissions of carbon dioxide from energy and industry by 2050 can be accomplished by rebuilding U.S. energy infrastructure to run primarily on renewable energy, at a net cost of about $1 per person per day, according to new research published by the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), the University of San Francisco (USF), and the consulting firm Evolved Energy Research.

The researchers created a detailed model of the entire U.S. energy and industrial system to produce the first detailed, peer-reviewed study of how to achieve carbon-neutrality by 2050. According to the Intergovernmental Panel on Climate Change (IPCC), the world must reach zero net CO2 emissions by mid-century in order to limit global warming to 1.5 degrees Celsius and avoid the most dangerous impacts of climate change.

The researchers developed multiple feasible technology pathways that differ widely in remaining fossil fuel use, land use, consumer adoption, nuclear energy, and bio-based fuels use but share a key set of strategies. “By methodically increasing energy efficiency, switching to electric technologies, utilizing clean electricity (especially wind and solar power), and deploying a small amount of carbon capture technology, the United States can reach zero emissions,” the authors write in “Carbon Neutral Pathways for the United States,” published recently in the scientific journal AGU Advances.

Transforming the infrastructure

“The decarbonization of the U.S. energy system is fundamentally an infrastructure transformation,” said Berkeley Lab senior scientist Margaret Torn, one of the study’s lead authors. “It means that by 2050 we need to build many gigawatts of wind and solar power plants, new transmission lines, a fleet of electric cars and light trucks, millions of heat pumps to replace conventional furnaces and water heaters, and more energy-efficient buildings – while continuing to research and innovate new technologies.”

In this transition, very little infrastructure would need “early retirement,” or replacement before the end of its economic life. “No one is asking consumers to switch out their brand-new car for an electric vehicle,” Torn said. “The point is that efficient, low-carbon technologies need to be used when it comes time to replace the current equipment.”

The pathways studied have net costs ranging from 0.2% to 1.2% of GDP, with higher costs resulting from certain tradeoffs, such as limiting the amount of land given to solar and wind farms. In the lowest-cost pathways, about 90% of electricity generation comes from wind and solar. One scenario showed that the U.S. can meet all its energy needs with 100% renewable energy (solar, wind, and bioenergy), but it would cost more and require greater land use.
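A rough sanity check shows how the “$1 per person per day” figure lands inside the 0.2%–1.2% of GDP range quoted above. The round-number values for U.S. population (~330 million) and GDP (~$21 trillion) below are illustrative assumptions, not figures from the study:

```python
# Consistency check between "$1 per person per day" and "% of GDP" costs.
# Population and GDP are round-number assumptions, not from the study.
US_POPULATION = 330e6     # people (assumed)
US_GDP = 21e12            # dollars per year (assumed)

annual_cost = US_POPULATION * 1.0 * 365       # $1 per person per day
print(round(annual_cost / 1e9))               # ~120 billion dollars/year
print(round(100 * annual_cost / US_GDP, 2))   # ~0.57% of GDP
```

Under these assumptions the per-person figure works out to roughly 0.6% of GDP, comfortably within the study’s 0.2%–1.2% range of pathway costs.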

In the least-cost scenario to achieve net zero emissions of CO2 by 2050, wind, solar, and battery storage capacity will have to increase several-fold (left chart). Vehicles will need to be mostly electric, powered either by batteries or fuel cells (middle charts). Residential space and water heaters will also need to be electrified, powered either by heat pumps or electric heaters (right charts). (Credit: Williams, et al., 2021)

“We were pleasantly surprised that the cost of the transformation is lower now than in similar studies we did five years ago, even though this achieves much more ambitious carbon reduction,” said Torn. “The main reason is that the costs of wind and solar power and of batteries for electric vehicles have declined faster than expected.”

The scenarios were generated using new energy models complete with details of both energy consumption and production – such as the entire U.S. building stock, vehicle fleet, power plants, and more – for 16 geographic regions in the U.S. Costs were calculated using projections for fossil fuel and renewable energy prices from the DOE Annual Energy Outlook and the NREL Annual Technology Baseline report.

The cost figures would be lower still if they included the economic and climate benefits of decarbonizing our energy systems. For example, less reliance on oil will mean less money spent on oil and less economic uncertainty due to oil price fluctuations. Climate benefits include the avoided impacts of climate change, such as extreme droughts and hurricanes, avoided air and water pollution from fossil fuel combustion, and improved public health.

The economic costs of the scenarios are almost exclusively capital costs from building new infrastructure. But Torn points out there is an economic upside to that spending: “All that infrastructure build equates to jobs, and potentially jobs in the U.S., as opposed to sending money overseas to buy oil from other countries. There’s no question that there will need to be a well-thought-out economic transition strategy for fossil fuel-based industries and communities, but there’s also no question that there are a lot of jobs in building a low-carbon economy.”

The next 10 years

An important finding of this study is that the actions required in the next 10 years are similar regardless of long-term differences between pathways. In the near term, we need to increase generation and transmission of renewable energy, make sure all new infrastructure, such as cars and buildings, is low carbon, and maintain current natural gas capacity for now for reliability.

“This is a very important finding. We don’t need to have a big battle now over questions like the near-term construction of nuclear power plants, because new nuclear is not required in the next ten years to be on a net-zero emissions path. Instead we should make policy to drive the steps that we know are required now, while accelerating R&D and further developing our options for the choices we must make starting in the 2030s,” said study lead author Jim Williams, associate professor of Energy Systems Management at USF and a Berkeley Lab affiliate scientist.

The net negative case

Another important achievement of this study is that it’s the first published work to give a detailed roadmap of how the U.S. energy and industrial system can become a source of negative CO2 emissions by mid-century, meaning more carbon dioxide is taken out of the atmosphere than added.

According to the study, with higher levels of carbon capture, biofuels, and electric fuels, the U.S. energy and industrial system could be “net negative” to the tune of 500 million metric tons of CO2 removed from the atmosphere each year. (This would require more electricity generation, land use, and interstate transmission to achieve.) The authors calculated the cost of this net negative pathway to be 0.6% of GDP – only slightly higher than the main carbon-neutral pathway cost of 0.4% of GDP. “This is affordable to society just on energy grounds alone,” Williams said.

When combined with increasing CO2 uptake by the land, mainly by changing agricultural and forest management practices, the researchers calculated that the net negative emissions scenario would put the U.S. on track with a global trajectory to reduce atmospheric CO2 concentrations to 350 parts per million (ppm) at some point in the future. The 350 ppm endpoint of this global trajectory has been described by many scientists as what would be needed to stabilize the climate at levels similar to pre-industrial times.

The study was supported in part by the Sustainable Development Solutions Network, an initiative of the United Nations.



from ScienceBlog.com https://ift.tt/3cnHf6E

Light-activated genes illuminate the role of gut microbes in longevity

Getting old is a complex matter. Research has shown that gut microbes are one of the factors that can influence several aspects of human life, including aging. Elucidating how a specific microbial species contributes to longevity is quite challenging given the complexity and heterogeneity of the human gut environment.

To explore the influence of bacterial products on the aging process, researchers at Baylor College of Medicine and Rice University developed a method that uses light to directly control specific gene expression and metabolite production from bacteria residing in the gut of the laboratory worm Caenorhabditis elegans.

“We used optogenetics, a method that combines light and genetically engineered light-sensitive proteins to regulate molecular events in a targeted manner in living cells or organisms,” said co-corresponding author Dr. Meng Wang, Robert C. Fyfe Endowed Chair on Aging and professor of molecular and human genetics and the Huffington Center on Aging at Baylor.

In the current work, the team engineered E. coli bacteria to produce the pro-longevity compound colanic acid in response to green light and switch off its production in red light. They discovered that shining green light on the transparent worms carrying the modified E. coli induced the bacteria to produce colanic acid, which protected the worm’s gut cells against stress-induced mitochondrial fragmentation. Mitochondria have been increasingly recognized as important players in the aging process.

“When exposed to green light, worms carrying this E. coli strain also lived longer. The stronger the light, the longer the lifespan,” said Wang, an investigator at Howard Hughes Medical Institute and member of Baylor’s Dan L Duncan Comprehensive Cancer Center. “Optogenetics offers a direct way to manipulate gut bacterial metabolism in a temporally, quantitatively and spatially controlled manner and enhance host fitness.”

Light-responsive bacteria fed to worms are visible in images of the worms’ gastrointestinal tracts. Engineers programmed the bacteria to produce a red fluorescent protein called mCherry so they would be easy to see under a microscope. When exposed to green light, the bacteria also produce a green fluorescent protein called sfGFP, which causes them to glow green. When exposed to red light, they do not produce the green protein. Worms in the left column were treated with red light. Worms in the right column were treated with green light. (Image courtesy of Jeff Tabor/Rice University; eLife, 2020.)

“For instance, this work suggests that we could engineer gut bacteria to secrete more colanic acid to combat age-related health issues,” said co-corresponding author Dr. Jeffrey Tabor, associate professor of bioengineering and biosciences at Rice University. “Researchers also can use this optogenetic method to unravel other mechanisms by which microbial metabolism drives host physiological changes and influences health and disease.”

Read the complete report in the journal eLife.

Other contributors to this work include first author Lucas A. Hartsough, Mooncheol Park, Matthew V. Kotlajich, John Tyler Lazar, Bing Han, Chih-Chun J. Lin, Elena Musteata and Lauren Gambill. The authors are affiliated with one or more of the following institutions: Baylor College of Medicine, Rice University and Howard Hughes Medical Institute.

Funding for this project was provided by the Department of Health and Human Services and National Institutes of Health grants (1R21NS099870-01, DP1DK113644 and R01AT009050), the National Aeronautics and Space Administration (grant NSTRF NNX11AN39H), the John S. Dunn Foundation and the Welch Foundation.

By Ana María Rodríguez, Ph.D.



from ScienceBlog.com https://ift.tt/3ct5Sil

Engineering meets biology to design innovative multifunctional surgical Biomesh

Hernias form when intra-abdominal content, such as a loop of the intestine, squeezes through weak, defective or injured areas of the abdominal wall.

The condition, one of the most common soft tissue injuries, can develop serious complications, so hernia repair may be recommended. Repair consists of surgically implanting a prosthetic mesh to support and reinforce the damaged abdominal wall and facilitate the healing process. However, currently used mesh implants are associated with potentially adverse postsurgical complications.

“Although hernia mesh implants are mechanically strong and support abdominal tissue, making the patient feel comfortable initially, it is a common problem that about three days after surgery the implant can drive inflammation that in two to three weeks will affect organs nearby,” said Dr. Crystal Shin, assistant professor of surgery at Baylor College of Medicine and lead author of this study looking to find a solution to postsurgical hernia complications.

Mesh implants mostly fail because they promote the adhesion of the intestine, liver or other visceral organs to the mesh. As the adhesions grow, the mesh shrinks and hardens, potentially leading to chronic pain, bowel obstruction, bleeding and poor quality of life. Some patients may require a second surgery to repair the unsuccessful first.

“Inflammation is also a serious concern,” said Dr. Ghanashyam Acharya, associate professor of surgery at Baylor. “Currently, inflammation is controlled with medication or anti-inflammatory drugs, but these drugs also disturb the healing process because they block the migration of immune cells to the injury site.”

“To address these complications, we developed a non-pharmacological approach by designing a novel mesh that, in addition to providing mechanical support to the injury site, also acts as an inflammation modulating system,” Shin said.

Opposites attract

“A major innovation to our design is the development of a Biomesh that can reduce inflammation and, as a result, minimize tissue adhesion to the mesh that leads to pain and failure of the surgery,” Shin said.

Inflammatory mediators called cytokines appear at the implant site within a few days of surgery. Some of the main cytokines found there, IL-1β, IL-6 and TNF-α, carry a positive surface charge due to the presence of the amino acids lysine and arginine.

“We hypothesized that Biomesh with a negative surface charge would capture the positively charged cytokines, as opposite electrical charges are attracted to each other,” Acharya said. “We expected that trapping the cytokines in the mesh would reduce their inflammatory effect and improve hernia repair and the healing process.”
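The electrostatic reasoning behind the hypothesis can be sketched with a toy calculation: at physiological pH, lysine and arginine side chains are protonated (+1) while aspartate and glutamate are deprotonated (−1), so a protein rich in lysine and arginine carries a net positive charge and is attracted to a negatively charged surface. The residue counts below are illustrative placeholders, not the actual composition of any cytokine.

```python
# Toy estimate of a protein's net side-chain charge at physiological pH,
# using standard side-chain pKa values and the Henderson-Hasselbalch relation.
def net_charge(counts, ph=7.4):
    pka_basic = {"K": 10.5, "R": 12.5, "H": 6.0}   # protonated form carries +1
    pka_acidic = {"D": 3.9, "E": 4.1}              # deprotonated form carries -1
    charge = 0.0
    for aa, n in counts.items():
        if aa in pka_basic:
            charge += n / (1 + 10 ** (ph - pka_basic[aa]))
        elif aa in pka_acidic:
            charge -= n / (1 + 10 ** (pka_acidic[aa] - ph))
    return charge

# Hypothetical lysine/arginine-rich composition (placeholder numbers):
example = {"K": 15, "R": 10, "H": 2, "D": 8, "E": 7}
print(round(net_charge(example), 2))  # net positive -> drawn to a negative mesh
```

A net-positive result under this simple model is consistent with the attraction the team expected between the cytokines and the negatively charged Biomesh surface.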

To test their idea, the researchers used a 3-D bioprinter to fabricate a Biomesh from phosphate-crosslinked poly(vinyl alcohol) (X-PVA). Through extensive experimentation, they optimized its mechanical properties so the mesh could withstand maximal abdominal pressure repeatedly, without any deterioration of its mechanical strength, for several months. They also showed that the Biomesh did not degrade or lose its elastic properties over time and was not toxic to human cells.

Shin, Acharya and their colleagues have confirmed in the lab that this Biomesh can capture positively charged cytokines.

Implantation of a negatively charged Biomesh captures positively charged proinflammatory cytokines from the damaged peritoneum after surgical trauma. Image credit: Scott Holmes, CMI.


Encouraged by these results, the researchers tested their Biomesh in a rat model of hernia repair, comparing it with a type of mesh extensively used clinically for surgical hernia repair.

Newly designed 3-D printed Biomesh minimizes postsurgical complications of hernia repair in an animal model

The newly designed Biomesh effectively minimized postsurgical complications of hernia repair in an animal model. The researchers examined the Biomesh for four weeks after it was implanted. They found that it had captured about three times the amount of cytokines captured by the commonly used mesh. Cytokines are short-lived in the body; as captured cytokines degrade, they free the mesh to capture more.

Importantly, no visceral tissues had adhered to the newly designed Biomesh, while the level of tissue adhesion was extreme in the case of the commonly used mesh. These results confirmed that the new Biomesh is effective at reducing the effects of the inflammatory response and in preventing visceral adhesions. In addition, the new mesh did not hinder abdominal wall healing after surgical hernia repair in animal models.

“This Biomesh is unique and designed to improve outcomes and reduce acute and long-term complications and symptoms associated with hernia repair. With more than 400,000 hernia repair surgeries conducted every year in the U.S., the new Biomesh would fulfill a major unmet need,” Shin said.

“There is no such multifunctional composite surgical mesh available, and development of a broadly applicable Biomesh would be a major advancement in the surgical repair of hernia and other soft tissue defects. We are conducting further preclinical studies before our approach can be translated to the clinic. Fabricating the Biomesh is highly reproducible, scalable and modifiable.”

“This concept of controlling inflammation through the physicochemical properties of the materials is new. The mesh was originally designed for mechanical strength. We asked ourselves, can we create a new kind of mesh by making use of the physical and chemical properties of materials?” said Acharya. “In the 1950s, Dr. Francis C. Usher at Baylor’s Department of Surgery developed the first polypropylene mesh for hernia repair. We have developed a next-generation mesh that not only provides mechanical support but also plays a physiological role of reducing the inflammatory response that causes significant clinical problems.”

Read the complete study in the journal Advanced Materials.

Other contributors to this work include Fernando J. Cabrera, Richard Lee, John Kim, Remya Ammassam Veettil, Mahira Zaheer, Kirti Mhatre and Bradford G. Scott who are affiliated with Baylor College of Medicine. Aparna Adumbumkulath and Pulickel M. Ajayan are at Rice University and Steven A. Curley is at Christus Health Institute.

This work was supported by Baylor College of Medicine seed funding.

Learn more about the Shin lab and the Acharya lab:

The Shin lab focuses on developing broadly applicable drug delivery systems for surgical applications with enhanced therapeutic efficacy by integrating nanotechnology and 3-D bioprinting technology. Shin is currently working on controlled-release nanowafer therapeutics (a hydrogel-based drug delivery system), nanodrug delivery systems for wound healing and pain management, and theranostics, a combination of therapeutics and diagnostics, for image-guided drug delivery.

Acharya’s research program focuses on the development of advanced materials for regenerative engineering by integrating nanofabrication, 3-D-nanolithography and controlled drug delivery strategies. He works at the interface of medicine, bioengineering, chemistry and pharmaceutics.

By Ana María Rodríguez, Ph.D.
