Validating the physics behind the new MIT-designed fusion experiment

Two and a half years ago, MIT entered into a research agreement with startup company Commonwealth Fusion Systems to develop a next-generation fusion research experiment, called SPARC, as a precursor to a practical, emissions-free power plant.

Now, after many months of intensive research and engineering work, the researchers charged with defining and refining the physics behind the ambitious tokamak design have published a series of papers summarizing the progress they have made and outlining the key research questions SPARC will enable.

Overall, says Martin Greenwald, deputy director of MIT’s Plasma Science and Fusion Center and one of the project’s lead scientists, the work is progressing smoothly and on track. This series of papers provides a high level of confidence in the plasma physics and the performance predictions for SPARC, he says. No unexpected impediments or surprises have shown up, and the remaining challenges appear to be manageable. This sets a solid basis for the device’s operation once constructed, according to Greenwald.

Greenwald wrote the introduction for a set of seven research papers authored by 47 researchers from 12 institutions and published today in a special issue of the Journal of Plasma Physics. Together, the papers outline the theoretical and empirical physics basis for the new fusion system, which the consortium expects to start building next year.

SPARC is planned to be the first experimental device ever to achieve a “burning plasma” — that is, a self-sustaining fusion reaction in which different isotopes of the element hydrogen fuse together to form helium, without the need for any further input of energy. Studying the behavior of this burning plasma — something never before seen on Earth in a controlled fashion — is seen as crucial for developing the next step, a working prototype of a practical, power-generating fusion plant.

Such fusion power plants might significantly reduce greenhouse gas emissions from the power-generation sector, one of the major sources of these emissions globally. The MIT and CFS project is one of the largest privately funded research and development projects ever undertaken in the fusion field.

“The MIT group is pursuing a very compelling approach to fusion energy,” says Chris Hegna, a professor of engineering physics at the University of Wisconsin at Madison, who was not connected to this work. “They realized the emergence of high-temperature superconducting technology enables a high magnetic field approach to producing net energy gain from a magnetic confinement system. This work is a potential game-changer for the international fusion program.”

The SPARC design, though about twice the size of MIT’s now-retired Alcator C-Mod experiment and similar in size to several other research fusion machines currently in operation, would be far more powerful, achieving fusion performance comparable to that expected in the much larger ITER tokamak being built in France by an international consortium. The high power in a small size is made possible by advances in superconducting magnets that allow for a much stronger magnetic field to confine the hot plasma.

The SPARC project was launched in early 2018, and work on its first stage, the development of the superconducting magnets that would allow smaller fusion systems to be built, has been proceeding apace. The new set of papers represents the first time that the underlying physics basis for the SPARC machine has been outlined in detail in peer-reviewed publications. The seven papers explore the specific areas of the physics that had to be further refined, and that still require ongoing research to pin down the final elements of the machine design and the operating procedures and tests that will be involved as work progresses toward the power plant.

The papers also describe the use of calculations and simulation tools for the design of SPARC, which have been tested against many experiments around the world. The authors used cutting-edge simulations, run on powerful supercomputers, that have been developed to aid the design of ITER. The large multi-institutional team of researchers represented in the new set of papers aimed to bring the best consensus tools to the SPARC machine design to increase confidence it will achieve its mission.

The analysis done so far shows that the SPARC tokamak should meet its planned fusion energy output with a comfortable margin to spare. It is designed to achieve a Q factor — a key parameter denoting the efficiency of a fusion plasma — of at least 2, meaning that twice as much fusion energy is produced as the amount of energy pumped in to generate the reaction. That would be the first time a fusion plasma of any kind has produced more energy than it consumed.
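The Q-factor definition above is simple enough to sketch directly; the numbers below are illustrative only, not SPARC design values.

```python
# Fusion gain Q: ratio of fusion power produced by the plasma to the
# external heating power pumped in. (Illustrative numbers only.)

def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
    return p_fusion_mw / p_heating_mw

# Q >= 1 is scientific breakeven; Q = 2 (SPARC's design floor) means
# twice as much fusion energy out as heating energy in.
print(fusion_gain(20.0, 10.0))   # → 2.0
print(fusion_gain(140.0, 10.0))  # → 14.0
```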

The calculations at this point show that SPARC could actually achieve a Q ratio of 10 or more, according to the new papers. While Greenwald cautions that the team wants to be careful not to overpromise, and much work remains, the results so far indicate that the project will at least achieve its goals, and specifically will meet its key objective of producing a burning plasma, wherein the self-heating dominates the energy balance.

Limitations imposed by the Covid-19 pandemic slowed progress a bit, but not much, he says, and the researchers are back in the labs under new operating guidelines.

Overall, “we’re still aiming for a start of construction in roughly June of ’21,” Greenwald says. “The physics effort is well-integrated with the engineering design. What we’re trying to do is put the project on the firmest possible physics basis, so that we’re confident about how it’s going to perform, and then to provide guidance and answer questions for the engineering design as it proceeds.”

Many of the fine details are still being worked out on the machine design, covering the best ways of getting energy and fuel into the device, getting the power out, dealing with any sudden thermal or power transients, and how and where to measure key parameters in order to monitor the machine’s operation.

So far, there have been only minor changes to the overall design. The diameter of the tokamak has been increased by about 12 percent, but little else has changed, Greenwald says. “There’s always the question of a little more of this, a little less of that, and there’s lots of things that weigh into that, engineering issues, mechanical stresses, thermal stresses, and there’s also the physics — how do you affect the performance of the machine?”

The publication of this special issue of the journal, he says, “represents a summary, a snapshot of the physics basis as it stands today.” Though members of the team have discussed many aspects of it at physics meetings, “this is our first opportunity to tell our story, get it reviewed, get the stamp of approval, and put it out into the community.”

Greenwald says there is still much to be learned about the physics of burning plasmas, and once this machine is up and running, key information can be gained that will help pave the way to commercial, power-producing fusion devices, whose fuel — the hydrogen isotopes deuterium and tritium — can be made available in virtually limitless supplies.

The details of the burning plasma “are really novel and important,” he says. “The big mountain we have to get over is to understand this self-heated state of a plasma.”

“The analysis presented in these papers will provide the world-wide fusion community with an opportunity to better understand the physics basis of the SPARC device and gauge for itself the remaining challenges that need to be resolved,” says George Tynan, professor of mechanical and aerospace engineering at the University of California at San Diego, who was not connected to this work. “Their publication marks an important milestone on the road to the study of burning plasmas and the first demonstration of net energy production from controlled fusion, and I applaud the authors for putting this work out for all to see.”

Overall, Greenwald says, the work that has gone into the analysis presented in this package of papers “helps to validate our confidence that we will achieve the mission. We haven’t run into anything where we say, ‘oh, this is predicting that we won’t get to where we want.’” In short, he says, “one of the conclusions is that things are still looking on-track. We believe it’s going to work.”



from ScienceBlog.com https://ift.tt/34b3Jlr

Birthing in better hospitals could save lives of Black, Native mothers

A new study from researchers at the UC Berkeley School of Public Health and Stanford University School of Medicine has determined that the higher severe maternal morbidity rates experienced by Black, American Indian/Alaska Native, and mixed-race women might have been reduced had they delivered in the same hospitals as non-Hispanic White women.

Researchers specifically targeted severe maternal morbidity (SMM), an umbrella term for a set of 21 adverse health complications including eclampsia and heart failure that can occur during childbirth. SMM has emerged as a growing public health crisis. The CDC reports that “the overall rate of SMM increased almost 200%” between 1993 and 2014, with Black women experiencing these outcomes at 2-3 times the rate of White women. The factors explaining the sharp increase in SMM nationally, as well as the persistent disparities by race and ethnicity, are inadequately understood, leaving few options to prevent short- and long-term health consequences for women and their newborns.

The researchers reviewed more than 3 million California birth records from 2007-2012 to see if hospital-level factors (such as teaching affiliation and proportion of SMM deliveries) could explain racial disparities in maternal outcomes related to giving birth.

“We found that the prevalence of SMM in California was highest in Black women and double that of White women (2.1% vs. 1.1%), a disparity that we know is increasing over time based on prior research by our team. We hypothesized that birth hospital might be an important underlying contributor to these disparities, given that national data suggests that Black women tend to deliver at hospitals that have worse outcomes,” said Mahasin Mujahid, lead author of a paper published in August in the American Journal of Obstetrics and Gynecology and associate professor of epidemiology and Chancellor’s Professor of Public Health at UC Berkeley’s School of Public Health.

Mujahid’s research found 33% of White women delivered in hospitals with the highest tertile of SMM rates compared to 53% of Black women. “Our model found that if Black women gave birth at the same distribution of hospitals as White women, this would have resulted in 156 fewer cases of SMM in Black women, representing a 7.8% reduction in the Black-White disparity,” Mujahid said.
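The counterfactual exercise behind that estimate can be sketched with toy numbers (the rates, shares, and birth counts below are hypothetical illustrations, not the study's data): reweight hospital-tertile-specific SMM rates by another group's distribution across those tertiles and compare expected case counts.

```python
# Hypothetical illustration of the counterfactual estimate: expected SMM
# cases if one group had the same distribution across hospital-quality
# tertiles as another. All numbers below are toy values, not study data.

def expected_cases(n_births, tertile_shares, smm_rates):
    """Expected SMM cases given shares delivering in each tertile."""
    return sum(n_births * share * rate
               for share, rate in zip(tertile_shares, smm_rates))

smm_rates  = [0.010, 0.018, 0.030]  # per-birth SMM rate by hospital tertile (toy)
dist_black = [0.17, 0.30, 0.53]     # share delivering in each tertile (toy)
dist_white = [0.33, 0.34, 0.33]
n = 100_000

actual  = expected_cases(n, dist_black, smm_rates)
counter = expected_cases(n, dist_white, smm_rates)
print(round(actual - counter))  # cases averted under the counterfactual
```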

The findings highlight the critical need for more research on the potential role of structural racism in shaping differential access to high quality hospitals based on race and ethnicity and in determining the within-hospital experiences of minoritized women such as experiences of discrimination that may disproportionately affect their birth experiences.

“We are in the midst of a national reckoning on the impacts of structural racism on Black Americans,” said Mujahid. “It is imperative that we include the health disparities experienced by historically marginalized women. SMM is one of a number of health problems that disproportionately affect racially and ethnically minoritized women, with particularly devastating consequences for Black and Native American women. More work is urgently needed to uncover the systemic factors that produce these disparities and to develop targeted interventions that promote health equity.”

https://www.ajog.org/article/S0002-9378(20)30845-0/pdf



from ScienceBlog.com https://ift.tt/3if95RF

Project applies human-centered design to in-person voting

As the United States prepares for November’s general election, almost every step of the voting process is being revamped and reevaluated to ensure that COVID-19 will not spread in local communities when millions of Americans cast their ballot in the fall.

While some states are expanding their vote-by-mail programs, many precincts are still expecting a high turnout for in-person voting.

Helping election administrators and poll workers prepare for safe in-person voting is a team at the Stanford Hasso Plattner Institute of Design, also known as the d.school. In May 2020, they partnered with the Healthy Elections Project, a joint effort between Stanford and Massachusetts Institute of Technology (MIT), to develop and promote best practices for a safe and secure election this November.

“The United States is making the most fundamental transformation to its election infrastructure in the shortest period of time in recent memory,” said Nathaniel Persily, the James B. McClatchy Professor of Law and former Senior Research Director of the Presidential Commission on Election Administration.

“When it became clear that we needed to redesign our polling places, going to the d.school – world experts in design – was the natural place to look,” added Persily, who co-founded the Healthy Elections Project with Charles Stewart III from MIT.

The d.school’s task was to figure out how to apply human-centered design – an approach to finding and solving problems that puts people’s mindsets and behaviors at the center of the process – to designing safe polling places during a pandemic. “Elections are a series of experiences,” said project collaborator Nadia Roumani, a senior designer with the d.school’s Designing for Social Systems Program. “One of the things that human-centered design brings to the voting process is the ability to understand and acknowledge the complexity of that experience and, when appropriate, make it more accessible.”

Toward this end, the group created the 2020 Healthy Polling Places Guidebook, a 51-page document that offers practical examples for how to prepare a safe environment for in-person voting.

The guidebook draws its inspiration from some of the several dozen statewide primary and run-off elections that have been held across the U.S. since COVID-19 was declared a pandemic by the World Health Organization on March 11, 2020. As local election administrators rethink their own voting procedures to incorporate public health recommendations like social distancing to reduce the spread of COVID-19, they are turning to earlier elections to learn what worked, Roumani said.

Early on, Roumani and her colleagues at the d.school partnered with several organizations that have extensive experience working with elections officials. As Roumani learned, state and county regulations for running elections are both highly technical and incredibly decentralized. Every state, county and town administers its own elections differently.

“Part of our work has been to serve as a design coach for some of these organizations and help them take what they already have, which is robust, thorough and very detail-oriented, and make it more visual, digestible, action-oriented and experience-centered,” Roumani said.

As election officials prepare for safe and clean environments for both workers and voters, the guidebook highlights dozens of examples that show what every step of the voting process has looked like so far in the pandemic. Included are photos of the signage voters saw when they entered their local polling place; the clear, plexiglass barriers they encountered when checking in; and the floor markings they followed when exiting.

Accompanying each of these images are brief but thorough descriptions of what election administrators might consider if they were to pursue one of these options, including step-by-step guidance and checklists.

The 2020 Healthy Polling Places Guidebook also features examples of what outdoor voting could look like. For example, included is an image of a tent outside the town hall from a primary election held in April in Dunn, Wisconsin, that offered people an alternative to indoor voting.

The guidebook even shows alternative examples for collecting ballots – such as curbside voting and drive-through voting – which allowed people to vote without leaving their vehicle. It also offers suggested language, links to resources administrators can use to lay out their worksites, and reminders for how to promote and maintain safety throughout the day. Included as well are practical tips and a training module for how to manage stressful situations that may arise, such as how to deal with a voter who forgets their face covering or refuses to wear one at all.

“The other part is understanding that there are potentially some emotional moments and anxiety-provoking moments that poll workers may face that we need to design for,” Roumani said.

Preparing election officials for challenges ahead

In addition to partnering with people like Nadia Roumani and her team at the d.school, the Healthy Elections Project has collaborated with dozens of academics, civic organizations, election administrators and election administration experts to address other challenges the pandemic poses to officials and local jurisdictions, including how to expand mail-in and absentee voting programs.

While some states have spent years rolling out their mail-in voting programs, other states are having to do it in a matter of months. Some jurisdictions do not have the expertise to make these changes so quickly, which is where the Healthy Elections Project steps in.

“We really need the best available research to try to educate election officials, voters and NGOs on how to pull off this election in a safe and secure way,” Persily said. “The goal of the Healthy Elections Project is to really turn that research into action.”

Since the Healthy Elections Project launched in April, students from Stanford and MIT have been researching and drafting relevant memos that include specific recommendations and resources to election officials making critical changes to their infrastructure. One report, for example, goes into granular detail of what supplies jurisdictions might consider purchasing to make their polling places pandemic-proof, what they might need to expand vote-by-mail programs, as well as timelines to avoid bottlenecks in the supply chain.

As the election draws closer, the Healthy Elections Project will continue to provide election administrators with additional tools and resources to manage issues that may arise, such as managing mail ballots, analyzing election data and communicating with voters. There is also a growing amount of litigation regarding election rules during the pandemic, and the Healthy Elections Project is tracking these cases as well to keep election officials and voters up to date on issues in their jurisdictions.

“It’s incredibly difficult during the pandemic to try to effectuate changes in election administration across the country, but we are trying to do our best,” said Persily. “We hope that we’re making at least a small contribution to make it a smoother election.”

Poll worker recruitment

Another key issue to emerge from research conducted by Stanford and MIT students involved in the Healthy Elections Project was the need to recruit poll workers. In a detailed memo analyzing some of the recent primary and run-off elections held during the pandemic, students reported how some states had to rapidly respond to staffing shortages because of the pandemic. Typically, more than half of poll workers have been over the age of 60 – the demographic most at risk of experiencing health complications due to COVID-19.

“When we have a poll worker recruitment shortage, election officials have no choice but to consolidate or combine polling places, which in some cases can make it more difficult for voters to get to the polls. It can also lead to longer lines, more crowding and more processing delays at single polling locations,” said Stanford law student Chelsey Davidson, who has been working full time on the Healthy Elections Project.

To have a successful and healthy election in November, new poll workers are needed. The Healthy Elections Project has rolled out a robust poll worker recruitment effort, partnering with Power the Polls to recruit new poll workers to staff in-person voting locations on Election Day. Stanford’s d.school has also been addressing poll worker recruitment: it created the Pollworker Screening Tool, an easily adaptable application form that local election officials can use to evaluate volunteers.



from ScienceBlog.com https://ift.tt/2S9yqlo

New Brain Cell-Like Nanodevices Work Together To Identify Mutations In Viruses

In the September issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. They have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

[Image: An electron micrograph of the artificial neuron. The niobium dioxide layer (in yellow) endows the device with neuron-like behavior. Credit: R. Stanley Williams]

Over the past decades, digital technologies have become smaller and faster largely because of advancements in transistor technology. However, these critical circuit components are fast approaching the limit of how small they can be built, prompting a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
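The combinatorial blow-up Kumar describes is plain factorial growth in the number of visiting orders; a quick sketch (the exact figure quoted depends on counting conventions, such as whether a route and its reverse are counted separately):

```python
from math import factorial

# With n intermediate stops and fixed start and end points, there are n!
# possible visiting orders -- the brute-force search space of the
# route-optimization problem described above.
def visiting_orders(n_stops: int) -> int:
    return factorial(n_stops)

print(visiting_orders(1))   # → 1 (a single stop: one order)
print(visiting_orders(15))  # → 1307674368000 (already intractable to enumerate)
```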

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. By mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build a synthetic version of the brain’s fundamental building block, the neuron, the researchers assembled a nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in a thin layer of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. When the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric current, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
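The back-and-forth transitions described above behave like a relaxation oscillator, which can be caricatured in a few lines. This is an illustrative sketch with arbitrary parameters, not the authors' actual device equations: voltage builds slowly while the device insulates, then collapses in a fast spike once the insulator-to-metal transition fires.

```python
# Toy model of a NbO2-like threshold switch: slow charging while the
# device is insulating, fast discharge once it turns metallic, yielding a
# repeating spike train. All parameters are arbitrary illustrative values.

def simulate(v_in=1.0, v_on=0.8, v_off=0.2,
             tau_charge=10.0, tau_fire=1.0, dt=0.1, steps=2000):
    v, conducting, trace = 0.0, False, []
    for _ in range(steps):
        if conducting:
            v -= (v / tau_fire) * dt             # metallic: rapid discharge
            if v < v_off:
                conducting = False               # cools, reverts to insulator
        else:
            v += ((v_in - v) / tau_charge) * dt  # insulating: slow charge-up
            if v > v_on:
                conducting = True                # insulator-to-metal transition
        trace.append(v)
    return trace

trace = simulate()
# Count upward crossings of the firing threshold -- each one is a "spike".
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.8 <= b)
print(spikes)
```

Varying the drive voltage and time constants in a model like this changes the firing pattern, loosely mirroring how the researchers tuned their devices to reproduce different neuronal behaviors.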

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” Kumar said. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”

To evaluate if their synthetic neurons can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

[Image: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Rachel Barton/Texas A&M Engineering]

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

Ziwen Wang from Stanford University also contributed to this research.

This research was funded by the National Science Foundation, the Department of Energy and the Texas A&M X-Grants program.



from ScienceBlog.com https://ift.tt/33gxodF

Spinal Cord Stimulation Reduces Pain and Motor Symptoms in Parkinson’s Disease Patients

A team of researchers in the United States and Japan reports that spinal cord stimulation (SCS) measurably decreased pain and reduced motor symptoms of Parkinson’s disease, both as a singular therapy and as a “salvage therapy” after deep brain stimulation (DBS) therapies were ineffective.

Writing in the September 28, 2020 issue of Bioelectronic Medicine, first author Krishnan Chakravarthy, MD, PhD, assistant professor of anesthesiology at University of California San Diego School of Medicine, and colleagues recruited 15 patients with Parkinson’s disease, a neurodegenerative disorder commonly characterized by motor symptoms, such as tremors and progressive difficulty walking and talking, and non-motor symptoms, such as pain and mental or behavioral changes.

The mean age of the patients was 74, with an average disease duration of 17 years. All of the patients were experiencing pain not alleviated by previous treatments. Eight had undergone earlier DBS, an invasive therapy in which implanted electrodes deliver electrical currents to stimulate specific parts of the brain. Seven patients had received only drug treatments previously.

Researchers implanted percutaneous (through the skin) electrodes near the spines of the patients, who then chose one of three types of electrical stimulation: continuous, on-off bursts or continuous bursts of varying intensity.

Following continuous programmed treatment post-implantation, the researchers said all patients reported significant improvement, based on the Visual Analogue Scale, a measurement of pain intensity, with a mean reduction of 59 percent across all patients and stimulation modes.

Seventy-three percent of patients showed improvement in the 10-meter walk, a test that measures walking speed to assess functional mobility and gait, with an average improvement of 12 percent.

And 64 percent of patients experienced improvements in the Timed Up and Go (TUG) test, which measures how long it takes a person to rise from a chair, walk three meters, turn around, walk back to the chair and sit down. TUG assesses physical balance and stability, both standing and in motion. Average TUG improvement was 21 percent.

The authors said the findings suggest SCS may have therapeutic benefit for patients with Parkinson’s in terms of treatment for pain and motor symptoms, though they noted further studies are needed to determine whether improved motor function is due to neurological changes caused by SCS or simply decreased pain.

“We are seeing growing data on novel uses of spinal cord stimulation and specific waveforms on applications outside of chronic pain management, specifically Parkinson’s disease,” said Chakravarthy, pain management specialist at UC San Diego Health. “The potential ease of access and implantation of stimulators in the spinal cord compared to the brain suggests that this is a very exciting area for future exploration.”

Co-authors include: Rahul Chaturvedi and Rajiv Reddy, UC San Diego; Takashi Agari, Tokyo Metropolitan Neurological Hospital; Hirokazu Iwamuro, Juntendo University, Tokyo; and Ayano Matsui, National Center Hospital of Neurology and Psychiatry, Tokyo.



from ScienceBlog.com https://ift.tt/36iBC6r

Throwing a warm sheet over our understanding of ice and climate

Temperatures at Earth’s highest latitudes were nearly as warm after Antarctica’s polar ice sheets developed as they were prior to glaciation, according to a new study led by Yale University. The finding upends most scientists’ basic understanding of how ice and climate develop over long stretches of time.

The study, based on a reconstruction of global surface temperatures, gives researchers a better understanding of a key moment in Earth’s climate history — when it transitioned from a “greenhouse” state to an “icehouse” state. The study appears in the journal Proceedings of the National Academy of Sciences the week of Sept. 28.

“This work fills in an important, largely unwritten chapter in Earth’s surface temperature history,” said Pincelli Hull, assistant professor of earth and planetary studies at Yale, and senior author of the study.

Charlotte O’Brien, a former Yale Institute for Biospheric Studies (YIBS) Donnelley Postdoctoral Fellow who is now a postdoctoral research associate at University College London, is the study’s lead author.

During the Eocene epoch (from 56 to 34 million years ago), temperatures at Earth’s higher latitudes were much higher than they are today. The formation of polar ice sheets began near the end of the Eocene — and has been linked by many scientists to the onset of global cooling during the Oligocene epoch (33.9 to 23 million years ago).

Although there has been much scientific focus on the development of Antarctic glaciation, there have been relatively few sea surface temperature records for the Oligocene.

The researchers generated new sea surface temperature models for the Oligocene at two ocean sites in the western tropical Atlantic and the southwestern Atlantic. They combined the new data with other existing sea surface temperature estimates for the Oligocene and Eocene epochs, plus data from climate modeling.

The result was a reconstruction of how surface temperatures evolved at a key moment in Earth’s climate history, as it transitioned from a greenhouse state to an icehouse state with Antarctic glaciation.

“Our analysis revealed that Oligocene ‘icehouse’ surface temperatures were almost as warm as those of the late Eocene ‘greenhouse’ climate,” O’Brien said.

The study estimated that global mean surface temperatures (GMSTs) during the Oligocene were roughly 71 to 75 degrees Fahrenheit, similar to late Eocene GMSTs of about 73 degrees Fahrenheit. For context, in 2019 the GMST average was 58.7 degrees Fahrenheit, according to the National Oceanic and Atmospheric Administration.

“This challenges our basic understanding of how the climate works, as well as the relationship between climate and ice volume through time,” O’Brien said.

The late Yale professor Mark Pagani was a co-author of the study. Additional co-authors were Yale senior research scientist Ellen Thomas, former Yale researchers James Super and Leanne Elder, and Purdue University professor Matthew Huber.

The National Science Foundation and the YIBS Donnelley Environmental Fellowship program funded the research.



from ScienceBlog.com https://ift.tt/2EMejGR

Window for Slowing COVID’s Spread was Smaller than Projected


A new Duke University-led analysis shows that during the early months of the COVID pandemic, the average number of new infections caused by an infected individual (i.e. the basic reproduction number, R0) was 4.5, or more than twice as many as the initial 2.2 rate estimated by the World Health Organization at the time.

At that higher rate of infectious spread, governments had just 20 days from the first reported cases to implement non-pharmaceutical interventions stringent enough to reduce the transmission rate to below 1.1 and prevent widespread infections and deaths, the analysis shows.

If delays in implementing these interventions allowed the reproduction rate to remain above 2.7 for at least 44 days – as was the case in many of the 57 countries studied – any subsequent interventions were unlikely to be effective.

“These numbers confirm that we only had a small window of time to act, and unfortunately that’s not what happened in most countries,” said Gabriel Katul, the Theodore S. Coile Distinguished Professor of Hydrology and Micrometeorology at Duke, who led the study.

“We can’t undo the consequences of that inaction, but we can use the insights from the new study to prepare for a second wave of COVID or future pandemics,” he said. Katul and his colleagues published their peer-reviewed study Sept. 24 in the open-access journal PLOS ONE.

“Being able to estimate transmission rates at different phases of a disease’s spread and under different conditions helps identify the timing and type of interventions that may work best, the hospital capacity we’ll need, and other critical considerations,” Katul said.

For instance, the new analysis estimates that achieving herd immunity from COVID requires 78% of a population to no longer be susceptible to it. That can help inform decisions about how many vaccines are needed.
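The 78% figure follows directly from the standard herd-immunity relationship and the study’s estimated R0 of 4.5; a one-line check (the function name below is ours, not from the paper):

```python
# An epidemic stops growing once the susceptible fraction falls below 1/R0,
# so herd immunity requires a fraction 1 - 1/R0 of the population to be
# no longer susceptible.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

print(round(100 * herd_immunity_threshold(4.5)))  # 78 (percent)
```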

To arrive at their estimates, the researchers used a conventional “susceptible-infectious-removed” (SIR) mathematical model to analyze confirmed new COVID cases reported daily from January to March 2020 in 57 countries. They also used the model to analyze mortalities based on the so-called Infection Fatality Rate that accommodates both symptomatic and asymptomatic cases. The SIR model is widely used by epidemiologists to track and project changes in disease status among populations who are susceptible to a disease, infected with it, or recovered from it (and thus “removed” from the general pool).

Using the model allowed Katul and his team to chart the disease’s early-phase transmission rate under different conditions and intervention scenarios; identify changes in those rates over time; and project how many cases and deaths ultimately might occur under different intervention scenarios until herd immunity is achieved. It also allowed them to determine, in hindsight, how soon intervention strategies should have been put into place to slow or stop the virus’ spread.
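The SIR bookkeeping described above can be sketched in a few lines. The rates below are illustrative stand-ins chosen so that beta/gamma equals the study’s estimated R0 of 4.5; they are not the parameters the Duke team fitted:

```python
# Minimal SIR model with simple Euler time steps: susceptible (s), infectious
# (i), and removed (r) fractions of the population. With R0 = beta/gamma = 4.5
# and no intervention, nearly the whole population is eventually infected.
def simulate_sir(beta=0.45, gamma=0.1, i0=1e-4, days=365, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # susceptibles becoming infectious
        new_rec = gamma * i * dt      # infectious recovering ("removed")
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

s, i, r = simulate_sir()
print(s, i, r)  # final s is small: almost everyone ends up in the removed pool
```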

To explore whether transmission rates differed at regional versus national scales, the scientists also used the SIR model to analyze data on new cases and deaths in individual provinces, countries or cities in Italy and the United Kingdom. Initial rates of transmission differed in some of the locations, but over time the differences evened out.

The impact of super-spreaders — infected people who infect a large number of others — was also found to even out over time.

Despite some short-term spikes caused by super-spreaders, or other factors such as ramp-ups in testing, inferred local rates of transmission all converged over time to a global average of about 4.5 new cases per infected individual wherever early-phase intervention was insufficient or nonexistent, Katul noted.

“In the end, it all comes down to timely, effective intervention,” he said. “The best defense against uncontrolled future outbreaks is to put stringent safety protocols in place at the first sign of an outbreak and make use of the tools science has provided us.”

The case and mortality data used in the study came from the European Center on Disease Prevention and Control.

Katul conducted the analysis with Assaad Mrad, a doctoral student at Duke’s Nicholas School; Sara Bonetti of ETH Zurich and University College London; Gabriele Manoli of University College London; and Anthony Parolari of Marquette University. Bonetti, Manoli and Parolari all are doctoral graduates or former post-doctoral researchers at Duke University.

CITATION: “Global Convergence of COVID-19 Basic Reproduction Number and Estimation from Early-Time SIR Dynamics,” Gabriel G. Katul, Assaad Mrad, Sara Bonetti, Gabriele Manoli and Anthony J. Parolari; Sept. 24, 2020, PLOS ONE. DOI: 10.1371/journal.pone.0239800

Note: Gabriel Katul is available for additional comment at gaby@duke.edu.



from ScienceBlog.com https://ift.tt/3cIfKTc

Disastrous duo: heatwaves and droughts


Simultaneous heatwaves and droughts are becoming increasingly common in western parts of the United States, according to a new study led by researchers from McGill University. Periods of dry and hot weather, which can make wildfires more likely, are becoming larger, more intense, and more frequent because of climate change.

In a study published by Science Advances, the researchers analyzed heat and drought events across the contiguous United States over the past 122 years. They found that combined dry and hot events have not only increased in frequency, but also in size geographically. Where such events were once confined to small parts of the United States, now they cover whole regions, such as the entire west coast and parts of the Northeast and Southeast.

“Dry-hot events can cause large fires. Add wind and a source of ignition, and this results in ‘megafires’ like the 2020 fires across the west coast of the United States. Drought and record-breaking heatwaves, coupled with a storm that brought strong winds and 12,000 lightning events in a span of 72 hours, caused more than 500 wildfires,” says lead author Mohammad Reza Alizadeh, a PhD student under the supervision of Professor Jan Adamowski in the Department of Bioresource Engineering at McGill University.

The researchers also found that dry and hot weather events are intensifying, with longer periods of drought and higher temperatures. These dual “dry-hot extremes” are not only self-intensifying – more heat causes more drought and vice versa – but are also self-propagating, meaning they are able to move from region to region. “As increased temperatures are driving and expanding aridity, droughts and heatwaves move from one region to downwind regions,” says Alizadeh. These extremes can be particularly damaging for agricultural production and ecosystems, they warn.

According to the researchers, the trigger for these hot-dry events is shifting. Looking back at the catastrophic Dust Bowl of the 1930s, they explain that the dust storms were driven by a lack of rainfall coupled with poor land management practices. In recent decades, however, dry-hot disasters are driven more often by excess heat than a lack of rainfall.

The future will bring us more of these disasters, if the current warming trends continue, the researchers caution. They suggest their findings could be used to inform climate mitigation and adaptation efforts. “We need to understand how things are changing in order to adapt,” says Professor Jan Adamowski.

About the study

“A century of observations reveals increasing likelihood of continental-scale compound dry-hot extremes” by Mohammad Reza Alizadeh, Jan Adamowski, Mohammad Reza Nikoo, Amir AghaKouchak, Philip Dennison, and Mojtaba Sadegh is published in Science Advances.

DOI: https://doi.org/10.1126/sciadv.aaz4571



from ScienceBlog.com https://ift.tt/3jioUII

New model examines how societal influences affect U.S. political opinions


Northwestern University researchers have developed the first quantitative model that captures how politicized environments affect U.S. political opinion formation and evolution.

Using the model, the researchers seek to understand how populations change their opinions when exposed to political content, such as news media, campaign ads and ordinary personal exchanges. The math-based framework is flexible, allowing future data to be incorporated as it becomes available.

“It’s really powerful to understand how people are influenced by the content that they see,” said David Sabin-Miller, a Northwestern graduate student who led the study. “It could help us understand how populations become polarized, which would be hugely beneficial.”

“Quantitative models like this allow us to run computational experiments,” added Northwestern’s Daniel Abrams, the study’s senior author. “We could simulate how various interventions might help fix extreme polarization to promote consensus.”

The paper will be published on Thursday (Oct. 1) in the journal Physical Review Research.

Abrams is an associate professor of engineering sciences and applied mathematics in Northwestern’s McCormick School of Engineering. Sabin-Miller is a graduate student in Abrams’ laboratory.

Researchers have been modeling social behavior for hundreds of years. But most modern quantitative models rely on network science, which simulates person-to-person human interactions.

The Northwestern team takes a different, but complementary, approach. They break down all interactions into perceptions and reactions. A perception takes into account how people perceive a politicized experience based on their current ideology. A far-right Republican, for example, likely will perceive the same experience differently than a far-left Democrat.


After perceiving new ideas or information, people might change their opinions based on three established psychological effects: attraction/repulsion, tribalism and perceptual filtering. Northwestern’s quantitative model incorporates all three of these and examines their impact.

“Typically, ideas that are similar to your beliefs can be convincing or attractive,” Sabin-Miller said. “But once ideas go past a discomfort point, people start rejecting what they see or hear. We call this the ‘repulsion distance,’ and we are trying to define that limit through modeling.”
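A hypothetical illustration of that attraction/repulsion idea, not the authors’ fitted reaction function: an idea at position y pulls an opinion at x toward it while it lies within the “repulsion distance” d, and pushes the opinion away once it falls outside that comfort zone. The function name and parameter values are our own.

```python
# Toy attraction/repulsion reaction curve on a one-dimensional opinion axis.
def opinion_shift(x: float, y: float, d: float = 1.0, rate: float = 0.1) -> float:
    gap = y - x
    if abs(gap) <= d:
        return rate * gap                  # attraction: move toward the idea
    sign = 1.0 if gap > 0 else -1.0
    return -rate * sign * (abs(gap) - d)   # repulsion: move away from it

print(opinion_shift(0.0, 0.5))  # positive: a nearby idea pulls the opinion toward it
print(opinion_shift(0.0, 2.5))  # negative: a distant idea pushes the opinion away
```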

People also react differently depending on whether or not the new idea or information comes from a trusted source. Known as tribalism, people tend to give the benefit of the doubt to a perceived ally. In perceptual filtering, people — either knowingly through direct decisions or unknowingly through algorithms that curate content — determine what content they see.

“Perceptual filtering is the ‘media bubble’ that people talk about,” Abrams explained. “You’re more likely to see things that are consistent with your existing beliefs.”

Abrams and Sabin-Miller liken their new model to thermodynamics in physics — treating individual people like gas molecules that distribute around a room.

“Thermodynamics does not focus on individual particles but the average of a whole system, which includes many, many particles,” Abrams said. “We hope to do the same thing with political opinions. Even though we can’t say how or when one individual’s opinion might change, we can look at how the whole population changes, on average.”



from ScienceBlog.com https://ift.tt/36jLxIV

About 14% of cerebral palsy cases may be tied to brain wiring genes


In an article published in Nature Genetics, researchers confirm that about 14% of all cases of cerebral palsy, a disabling brain disorder for which there are no cures, may be linked to a patient’s genes and suggest that many of those genes control how brain circuits become wired during early development. This conclusion is based on the largest genetic study of cerebral palsy ever conducted. The results led to recommended changes in the treatment of at least three patients, highlighting the importance of understanding the role genes play in the disorder. The work was largely funded by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health.

“Our results provide the strongest evidence to date that a significant portion of cerebral palsy cases can be linked to rare genetic mutations, and in doing so identified several key genetic pathways  involved,” said Michael Kruer, M.D., a neurogeneticist at Phoenix Children’s Hospital and the University of Arizona College of Medicine – Phoenix and a senior author of the article. “We hope this will give patients living with cerebral palsy and their loved ones a better understanding of the disorder and doctors a clearer roadmap for diagnosing and treating them.”

Cerebral palsy affects approximately one in 323 children in the United States. Signs of the disorder appear early in childhood resulting in a wide range of permanently disabling problems with movement and posture, including spasticity, muscle weakness, and abnormal gait. Nearly 40% of patients need some assistance with walking. In addition, many patients may also suffer epileptic seizures, blindness, hearing and speech problems, scoliosis, and intellectual disabilities.

Ever since the disorder was first officially described in 1862, scientists have hotly debated whether cerebral palsy is caused by problems at birth. For instance, it is known that babies born prematurely, or who experience a lack of blood flow or oxygen during birth, have a greater chance of developing the disorder. Later, though, researchers concluded that a majority (85-90%) of all cases are congenital, meaning patients are born with the disease, and some studies suggested that cerebral palsy could be inherited. Despite this, the causes of many children’s cases remained elusive.

Then in 2004, scientists discovered the first genetic mutation known to cause cerebral palsy. Since then several more mutations have been identified and depending on how an experiment was performed, scientists have estimated that anywhere from 2 to 30% of all cases may be linked to a misspelling in a patient’s DNA. In this study, the researchers provided support for a previous estimate and highlighted which genes may play a critical role in the disorder.

“Cerebral palsy is one of neurology’s oldest unresolved mysteries. The results from this study show how advances in genomic research provide scientists with the hard evidence they need to unravel the causes behind this and other debilitating neurological disorders,” said Jim Koenig, Ph.D., program director at NINDS.

The study was led by Sheng Chih (Peter) Jin, Ph.D., assistant professor of genetics at Washington University School of Medicine in St. Louis, and Sara A. Lewis, Ph.D., a postdoctoral researcher in Dr. Kruer’s laboratory.

The researchers searched for what are known as “de novo,” or spontaneous, mutations in the genes of 250 families from the United States, China, and Australia through a collaboration made possible by the International Cerebral Palsy Genomics Consortium. These rare mutations are thought to happen when cells accidentally make mistakes copying their DNA as they multiply and divide. An advanced technique, called whole exome sequencing, was used to read out and compare the exact codes of each gene inscribed in the chromosomes of the patients with that of their parents. Any new differences represented de novo mutations that either happened while a parent’s sperm or egg cell multiplied or after conception.

Initially the researchers found that the cerebral palsy patients had higher levels of potentially harmful de novo mutations than their parents. Many of these mutations appeared to be concentrated in genes that are highly sensitive to the slightest changes in the DNA letter code. In fact, they estimated that about 11.9% of the cases could be explained by damaging de novo mutations. This was especially true for the idiopathic cases which had no known cause and represented the majority (62.8%) of cases in the study.

Approximately another 2% of the cases appeared to be linked to recessive, or weaker, versions of genes. This raised the estimate of cases that could be linked to genetic problems from 11.9% to 14%, as has been previously reported.

Moreover, the results led to recommendations for more tailored treatments of three patients.

“The hope of human genome research is that it will help doctors find the best, most personalized, matches between treatments and diseases. These results suggest that this may be possible for some patients with cerebral palsy,” said Chris Wellington, program director in the Division of Genome Sciences at the NIH’s National Institute of Human Genome Research, which also provided support for the study.

When the researchers looked more closely at the results, they found that eight genes had two or more damaging de novo mutations. Four of these genes, RHOB, FBXO31, DHX32, and ALK, were newly implicated in cerebral palsy, while the other four had been identified in previous studies.

The researchers were especially surprised by the RHOB and FBXO31 results. Two cases in the study had the same spontaneous mutation in RHOB. Likewise, two other cases had the same de novo mutation in FBXO31.

“The odds of this randomly happening are incredibly low. This suggests that these genes are highly linked to cerebral palsy,” said Dr. Jin.

The researchers also looked at the genes behind other brain development disorders and found that about 28% of the cerebral palsy genes identified in this study have been linked to intellectual disability, 11% to epilepsy and 6.3% to autism spectrum disorders. In contrast, the researchers found no significant overlap between cerebral palsy genes and those involved with the neurodegenerative disorder Alzheimer’s disease which attacks the brain later in life.

“Our results support the idea that cerebral palsy is not one narrow disease but a spectrum of overlapping neurodevelopmental problems,” said Dr. Lewis.

Further analysis of the results suggested that many of the genes they found in this study, including six of the eight genes that had two or more de novo mutations, control the wiring of neural circuits during early development. Specifically, these genes are known to be involved in either the construction of protein scaffolds that line the perimeters of neural circuits or in the growth and extension of neurons as they wire up.

Experiments on fruit flies, formally known as Drosophila melanogaster, supported this idea. To do this, the researchers mutated fly versions of the wiring genes they identified in the cerebral palsy patients. They found that mutations in 71% of these genes caused flies to have problems with movement, including walking, turning, and balancing. The results suggested that these genes play a critical role in movement. They estimated that there was only a 3% chance these problems would happen if they had blindly mutated any gene in the fly genome.

“Treatments for cerebral palsy patients have not changed for decades,” said Dr. Kruer. “In the future, we plan to explore how these results can be used to change that.”

These studies were supported by the NIH (NS106298, NS091299, HG006504, HD050846, HL143036), the Cerebral Palsy Alliance Research Foundation, the Doris Duke Charitable Foundation (CSDA 2014112), the Scott Family Foundation, Cure CP, the National Health and Medical Research Council (Australia; grant 1099163), The Tenix Foundation, the National Natural Science Foundation of China (U1604165), Henan Key Research Program of China (171100310200), VINNOVA (Sweden’s Innovation Agency; 2015-04780), the James Hudson Brown-Alexander Brown Coxe Postdoctoral Fellowship at the Yale University School of Medicine, and the American Heart Association (18POST34060008).

NINDS (https://www.ninds.nih.gov) is the nation’s leading funder of research on the brain and nervous system. The mission of NINDS is to seek fundamental knowledge about the brain and nervous system and to use that knowledge to reduce the burden of neurological disease.



from ScienceBlog.com https://ift.tt/3igIELA

Sleep test may help diagnose and predict dementia in older adults


Dementia is a growing problem for people as they age, but it often goes undiagnosed. Now investigators at Harvard-affiliated Massachusetts General Hospital (MGH) and Beth Israel Deaconess Medical Center have discovered and validated a marker of dementia that may help clinicians identify patients who have the condition or are at risk of developing it. The findings are published in JAMA Network Open.

The team recently created the Brain Age Index (BAI), a model that relies on artificial intelligence and a large set of sleep data to estimate the difference between a person’s chronological age and the biological age of their brain when computed through electrical measurements (with an electroencephalogram, or EEG) during sleep. A higher BAI signifies deviation from normal brain aging, which could reflect the presence and severity of dementia.
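The arithmetic behind the index is simple, even though the age prediction itself comes from a trained model. A hedged sketch of the idea (not MGH’s actual implementation): some regressor estimates “brain age” from sleep-EEG features, and BAI is the gap between that estimate and the person’s chronological age.

```python
# Brain Age Index (BAI) idea in one line: how much older (or younger) the
# brain "looks" from sleep EEG than the person's actual age. The predicted
# age here is a stand-in value, not the output of the real model.
def brain_age_index(predicted_brain_age: float, chronological_age: float) -> float:
    return predicted_brain_age - chronological_age

print(brain_age_index(72.0, 68.0))  # 4.0: brain activity looks 4 years older
```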

“The model computes the difference between a person’s chronological age and how old their brain activity during sleep ‘looks,’ to provide an indication of whether a person’s brain is aging faster than is normal,” said senior author M. Brandon Westover, investigator in the Department of Neurology at MGH and director of Data Science at the MGH McCance Center for Brain Health. “This is an important advance, because before now it has only been possible to measure brain age using brain imaging with magnetic resonance imaging, which is much more expensive, not easy to repeat, and impossible to measure at home,” added Elissa Ye, the first author of the study and a member of Westover’s laboratory. She noted that sleep EEG tests are increasingly accessible in non-sleep laboratory environments, using inexpensive technologies such as headbands and dry EEG electrodes.

To test whether high BAI values obtained through EEG measurements may be indicative of dementia, the researchers computed values for 5,144 sleep tests in 88 individuals with dementia, 44 with mild cognitive impairment, 1,075 with cognitive symptoms but no diagnosis of impairment, and 2,336 without dementia. BAI values rose across the groups as cognitive impairment increased, and patients with dementia had an average BAI about four years higher than those without dementia. BAI values also correlated with neuropsychiatric scores from standard cognitive assessments conducted by clinicians before or after the sleep study.

“Because it is quite feasible to obtain multiple nights of EEG, even at home, we expect that measuring BAI will one day become a routine part of primary care, as important as measuring blood pressure,” said co-senior author Alice D. Lam, an investigator in the Department of Neurology at MGH. “BAI has potential as a screening tool for the presence of underlying neurodegenerative disease and for monitoring disease progression.”



from ScienceBlog.com https://ift.tt/2HCL8Hi

Increasing stability decreases ocean productivity, reduces carbon burial


As the globe warms, the atmosphere is becoming more unstable while the oceans are becoming more stable, according to an international team of climate scientists. The increase in ocean stability is greater than models have predicted, they say, and a more stable ocean will absorb less carbon and be less productive.

Stable conditions in the atmosphere favor fair weather. However, when the ocean is stable, the layers of the ocean do not mix. Cooler, oxygenated water from beneath does not rise up and deliver oxygen and nutrients to waters near the surface, and warm surface water does not absorb carbon dioxide and bury it at depth.

“The same process, global warming, is both making the atmosphere less stable and the oceans more stable,” said Michael Mann, distinguished professor of atmospheric sciences and director of the Earth System Science Center at Penn State. “Water near the ocean’s surface is warming faster than the water below. That makes the oceans become more stable.”

Just as hot air rises, as is seen in the formation of towering clouds, hot water rises as well because it is less dense than cold water. If the hottest water is on top, vertical mixing in the oceans slows. Also, melting ice from various glaciers introduces fresh water into the upper layers of the oceans. Fresh water is less dense than salt water, so it too tends to remain at the surface. Both elevated surface temperatures and reduced surface salinity increase ocean stratification and suppress mixing.
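The density mechanism can be illustrated with a toy linear equation of state for seawater (an illustration with rough coefficients, not the study’s model): density falls with temperature and rises with salinity, so a warm, freshened surface layer floats stably on the cooler, saltier water below.

```python
# Toy linear equation of state: density relative to a reference water mass
# at 10 deg C and 35 g/kg salinity. Coefficients are rough textbook-scale
# values chosen for illustration only.
RHO0, ALPHA, BETA = 1027.0, 0.2, 0.8  # kg/m^3; kg/m^3 per deg C; kg/m^3 per g/kg

def density(temp_c: float, salinity: float) -> float:
    return RHO0 - ALPHA * (temp_c - 10.0) + BETA * (salinity - 35.0)

surface = density(temp_c=18.0, salinity=34.0)  # warm and freshened by meltwater
deep = density(temp_c=4.0, salinity=35.0)      # cold and salty
print(surface < deep)  # True: lighter water on top means stratification, less mixing
```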

“The ability of the oceans to bury heat from the atmosphere and mitigate global warming is made more difficult when the ocean becomes more stratified and there is less mixing,” said Mann. “Less downward mixing of warming waters means the ocean surface warms even faster, leading, for example, to more powerful hurricanes. Global climate models underestimate these trends.”

Mann and his team are not the first to investigate the impact of a warming climate on ocean stratification, but they are looking at the problem in a different way. The team has gone deeper into the ocean than previous research and they have a more sophisticated method of dealing with gaps in the data. They report their results today (Sept. 29) in Nature Climate Change.

“Other researchers filled in gaps in the data with long-term averages,” said Mann. “That tends to suppress any trends that are present. We used an ocean model to fill in the gaps, allowing the physics of the model to determine the most likely values of the missing data points.”

According to Mann, this is a more dynamic approach.

“Using the more sophisticated physics-based method, we find that ocean stability is increasing faster than we thought before and faster than models predict, with worrying potential consequences,” he said.

Other researchers on this project were Guancheng Li, Lijing Cheng and Jiang Zhu, International Center for Climate and Environment Sciences, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, Center for Ocean Mega-Science, Qingdao and University of Chinese Academy of Sciences, Beijing; Kevin E. Trenberth, National Center for Atmospheric Research; and John P. Abraham, School of Engineering, University of St. Thomas, St. Paul, Minnesota.

The Chinese Academy of Sciences, National Key R&D Program of China, National Center for Atmospheric Research and the U.S. National Science Foundation supported this research.



from ScienceBlog.com https://ift.tt/36noP2G

Last-resort life support option helped majority of critically ill COVID-19 patients survive


It saved lives in past epidemics of lung-damaging viruses. Now, the life-support option known as ECMO appears to be doing the same for many of the critically ill COVID-19 patients who receive it, according to an international study led by a University of Michigan researcher.

The 1,035 patients in the study faced a staggeringly high risk of death, as ventilators and other care failed to support their lungs. But after they were placed on ECMO—extracorporeal membrane oxygenation—their actual death rate was less than 40%. That’s similar to the rate for patients treated with ECMO in past outbreaks of lung-damaging viruses and other severe forms of viral pneumonia.

The new study published in The Lancet provides strong support for the use of ECMO in appropriate patients as the pandemic rages worldwide. It may help more hospitals that have ECMO capability understand which of their COVID-19 patients might benefit from the technique, which channels blood out of the body and into a circuit of equipment that adds oxygen directly to the blood before pumping it back into regular circulation.

Still, the international team of authors cautions that patients who show signs of needing advanced life support should receive it at hospitals with experienced ECMO teams, and that hospitals shouldn’t try to add ECMO capability mid-pandemic.

Global cooperation to achieve results

The study was made possible by a rapidly created international registry that has given critical care experts near real-time data on the use of ECMO in COVID-19 patients since early in the year.

Hosted by the Extracorporeal Life Support Organization (ELSO), the registry includes data submitted by the 213 hospitals on four continents whose patients were included in the new analysis. The study includes data on patients age 16 or older who were started on ECMO between January 16 and May 1, and follows them until death, discharge from the hospital, or August 5, whichever occurred first.

“These results from hospitals experienced in providing ECMO are similar to past reports of ECMO-supported patients with other forms of acute respiratory distress syndrome or viral pneumonia,” says co-lead author Ryan Barbaro of Michigan Medicine, U-M’s academic medical center. “These results support recommendations to consider ECMO in COVID-19 if the ventilator is failing. We hope these findings help hospitals make decisions about this resource-intensive option.”

Co-lead author Graeme MacLaren of the National University Health System in Singapore said most centers in the study did not need to use ECMO for COVID-19 very often.

“By bringing data from over 200 international centers together into the same study, ELSO has deepened our knowledge about the use of ECMO for COVID-19 in a way that would be impossible for individual centers to learn on their own,” he said.

Insights into patient outcomes

Seventy percent of the patients in the study were transferred to the hospital where they received ECMO. Half of these were actually started on ECMO—likely by the receiving hospital’s team—before they were transferred. This reinforces the importance of communication between ECMO-capable hospitals and non-ECMO hospitals that might have COVID-19 patients who could benefit from ECMO.

The new study could also help identify which patients will benefit most if they are placed on ECMO.

“Our findings also show that mortality risk rises significantly with patient age, and that those who are immunocompromised, have acute kidney injuries, worse ventilator outcomes or COVID-19-related cardiac arrests are less likely to survive,” said Barbaro, who chairs ELSO’s COVID-19 registry committee and provides ECMO care as a pediatric intensive care physician at U-M’s C.S. Mott Children’s Hospital.

“Those who need ECMO to replace cardiac function as well as lung function also did worse. All of this knowledge can help centers and families understand what patients might face if they are placed on ECMO.”

Co-senior author Daniel Brodie of New York Presbyterian Hospital said the lack of reliable information early in the pandemic hampered the research team’s ability to understand the role of ECMO for COVID-19.

“The results of this large-scale international registry study, while hardly definitive evidence, provide a real-world understanding of the potential for ECMO to save lives in a highly selected population of COVID-19 patients,” said Brodie, who shares senior authorship with Roberto Lorusso of the Maastricht University Medical Center in the Netherlands and Alain Combes of Sorbonne University in Paris.

A robust statistical approach

Because the ELSO database does not track what happens to patients once they are discharged to home, other hospitals and long-term acute care or rehabilitation facilities, the study used a statistical approach based on in-hospital mortality up to 90 days after the patient was put on ECMO. This also allowed the team to account for the 67 patients who were still in the hospital as of August 5, whether they were still on ECMO, in the ICU or in step-down units.

The study tracked the outcomes for more than 1,000 patients for 90 days after they were placed on ECMO life support.

Philip Boonstra of the U-M School of Public Health helped design the study using a “competing risk” approach, drawing on his experience with the statistical design and analysis of long-term data from cancer clinical trials.

“We used 90-day in-hospital mortality because this is the highest-risk period and because it allows us to use the information we have to the fullest, even if we don’t know the final outcome for every patient,” he said.

Having data through August, when only a small number of the patients in the study remained in the hospital, was important—though data are missing on a small number of patients. And even though patients who were discharged to their homes or a rehabilitation facility will likely have a long recovery ahead after the intensive level of care involved in ECMO, they are likely to survive based on past data. However, the fate of those who went to LTAC facilities, which provide long-term care at a near-ICU level, is less certain.
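The competing-risk approach the team describes can be illustrated with a standard nonparametric cumulative-incidence estimate, in which death in hospital and discharge alive are competing events and patients still hospitalized at last follow-up are censored. The sketch below is a minimal pure-Python illustration with made-up numbers, not the study’s actual analysis:

```python
# Minimal sketch of a competing-risks (Aalen-Johansen style) estimate:
# each patient either dies in hospital (event 1), is discharged alive
# (event 2), or is still hospitalized at last follow-up (censored, 0).
# All data below are invented for illustration.

def cumulative_incidence(times, events, horizon, cause):
    """Cumulative incidence of `cause` by `horizon` days,
    treating other event types as competing events."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0       # probability of no event of any kind so far
    cuminc = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        if t > horizon:
            break
        # count outcomes tied at this time among those still at risk
        d_cause = sum(1 for tt, ee in data[i:] if tt == t and ee == cause)
        d_any = sum(1 for tt, ee in data[i:] if tt == t and ee != 0)
        n_tied = sum(1 for tt, ee in data[i:] if tt == t)
        cuminc += surv * d_cause / n_at_risk
        surv *= 1 - d_any / n_at_risk
        n_at_risk -= n_tied
        i += n_tied
    return cuminc

# days from start of ECMO until outcome; event codes as above
times  = [5, 12, 20, 33, 40, 55, 61, 70, 85, 90]
events = [1,  2,  1,  2,  2,  0,  1,  2,  0,  0]

death_90d = cumulative_incidence(times, events, horizon=90, cause=1)
print(f"estimated 90-day in-hospital mortality: {death_90d:.1%}")
```

The key property, as Boonstra describes, is that censored patients still contribute information for as long as they were observed, so the estimate uses every patient even when the final outcome is unknown.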

More about the study and next steps

More than half of the patients in the study were treated in hospitals in the United States and Canada, including Michigan Medicine’s own hospitals. U-M’s Robert Bartlett, emeritus professor of surgery and a co-author of the new paper, is considered a key figure in the development of ECMO, including the first use in adults in the 1980s. He led the development of the initial guidance for the use of ECMO in COVID-19.

“ECMO is the final step in the algorithm for managing life-threatening lung failure in advanced ICUs,” Bartlett said. “Now we know it is effective in COVID-19.”

As of Aug. 5, 380 of the patients in the study had died in the hospital, more than 80% of them within 24 hours of a proactive decision to discontinue ECMO care because of a poor prognosis. In all, 57% of the patients had gone home or to a rehabilitation center (311 patients) or had been discharged to another hospital or a long-term acute care center (277 patients). The remaining 67 were still in the hospital but had reached 90 days after the start of ECMO.

The new study adds to the information used to create the ECMO COVID-19 guidelines published by ELSO, which are based in part on past randomized controlled trials of ECMO’s use in ARDS.

Barbaro and others are studying the longer-term effects of ECMO care for any patient; he leads a team that has recently received a National Institutes of Health grant for a long-term study of children who have survived after treatment with ECMO.

Meanwhile, the ELSO registry continues to track the care of patients placed on ECMO because of COVID-19. Christine Stead, chief executive officer of ELSO, credits the rapid pivot and intense teamwork among ECMO centers and their staff for the strength of the new paper.

“We started with a WeChat dialogue with teams in China, who were able to share knowledge and help their counterparts in Japan be ready for the spread to their country,” she said. “We asked all the centers that take part in ELSO to change their practice, and begin entering data about patients as soon as they were placed on ECMO, rather than waiting until they were discharged from the hospital. This has allowed us to achieve something that will help hospitals make more informed decisions, based on meaningful data, as the pandemic continues.”



from ScienceBlog.com https://ift.tt/36gsmjn

Machine Learning Takes on Synthetic Biology: Algorithms Can Bioengineer Cells for You

If you’ve eaten vegan burgers that taste like meat or used synthetic collagen in your beauty routine – both products that are “grown” in the lab – then you’ve benefited from synthetic biology. It’s a field rife with potential, as it allows scientists to design biological systems to specification, such as engineering a microbe to produce a cancer-fighting agent. Yet conventional methods of bioengineering are slow and laborious, with trial and error being the main approach.

Now scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new tool that adapts machine learning algorithms to the needs of synthetic biology to guide development systematically. The innovation means scientists will not have to spend years developing a meticulous understanding of each part of a cell and what it does in order to manipulate it; instead, with a limited set of training data, the algorithms are able to predict how changes in a cell’s DNA or biochemistry will affect its behavior, then make recommendations for the next engineering cycle along with probabilistic predictions for attaining the desired goal.

“The possibilities are revolutionary,” said Hector Garcia Martin, a researcher in Berkeley Lab’s Biological Systems and Engineering (BSE) Division who led the research. “Right now, bioengineering is a very slow process. It took 150 person-years to create the antimalarial drug artemisinin. If you’re able to create new cells to specification in a couple of weeks or months instead of years, you could really revolutionize what you can do with bioengineering.”

Working with BSE data scientist Tijana Radivojevic and an international group of researchers, the team developed and demonstrated a patent-pending algorithm called the Automated Recommendation Tool (ART), described in a pair of papers recently published in the journal Nature Communications. Machine learning allows computers to make predictions after “learning” from substantial amounts of available “training” data.

In “ART: A machine learning Automated Recommendation Tool for synthetic biology,” led by Radivojevic, the researchers presented the algorithm, which is tailored to the particularities of the synthetic biology field: small training data sets, the need to quantify uncertainty, and recursive cycles. The tool’s capabilities were demonstrated with simulated and historical data from previous metabolic engineering projects, such as improving the production of renewable biofuels.

In “Combining mechanistic and machine learning models for predictive engineering and optimization of tryptophan metabolism,” the team used ART to guide the metabolic engineering process to increase the production of tryptophan, an amino acid with various uses, by a species of yeast called Saccharomyces cerevisiae, or baker’s yeast. The project was led by Jie Zhang and Soren Petersen of the Novo Nordisk Foundation Center for Biosustainability at the Technical University of Denmark, in collaboration with scientists at Berkeley Lab and Teselagen, a San Francisco-based startup company.

To conduct the experiment, they selected five genes, each controlled by different gene promoters and other mechanisms within the cell and representing, in total, nearly 8,000 potential combinations of biological pathways. The researchers in Denmark then obtained experimental data on 250 of those pathways, representing just 3% of all possible combinations, and those data were used to train the algorithm. In other words, ART learned what output (amino acid production) is associated with what input (gene expression).

Then, using statistical inference, the tool was able to extrapolate how each of the remaining 7,000-plus combinations would affect tryptophan production. The design it ultimately recommended increased tryptophan production by 106% over the state-of-the-art reference strain and by 17% over the best designs used for training the model.
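The train-on-few, predict-the-rest, recommend loop described above can be sketched in miniature. The following is an illustrative toy, not the actual ART code: it fits a bootstrap ensemble of simple nearest-neighbor predictors on a small sample of measured designs, predicts every untested design with an uncertainty estimate, and recommends the most promising one. All data and the scoring of designs are synthetic.

```python
# Toy sketch of a small-data recommend loop (not the real ART algorithm).
import itertools, random, statistics

random.seed(0)

def knn_predict(train, x, k=3):
    """Mean production of the k training designs closest to x."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return statistics.mean(y for _, y in nearest[:k])

def ensemble_predict(train, x, n_boot=30):
    """Bootstrap the training set to get a mean and spread for x."""
    preds = [knn_predict([random.choice(train) for _ in train], x)
             for _ in range(n_boot)]
    return statistics.mean(preds), statistics.stdev(preds)

# toy design space: 3 promoter strengths for each of 5 genes -> 243 designs
designs = list(itertools.product([0, 1, 2], repeat=5))

def true_production(d):          # hidden ground truth, unknown to the model
    return sum(w * x for w, x in zip([3, -1, 2, 0.5, 1], d))

measured = random.sample(designs, 25)            # ~10% of the space
train = [(d, true_production(d) + random.gauss(0, 0.5)) for d in measured]

untested = [d for d in designs if d not in measured]
scored = [(d, *ensemble_predict(train, d)) for d in untested]
# upper-confidence-bound recommendation: favor high predicted production,
# but give uncertain predictions a chance (exploration vs. exploitation)
best = max(scored, key=lambda s: s[1] + s[2])
print("recommended design:", best[0])
```

The ensemble spread stands in for the uncertainty quantification the papers emphasize: with only a handful of measurements, a recommender must report not just a best guess but how confident it is in each candidate.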

“This is a clear demonstration that bioengineering led by machine learning is feasible, and disruptive if scalable. We did it for five genes, but we believe it could be done for the full genome,” said Garcia Martin, who is a member of the Agile BioFoundry and also the Director of the Quantitative Metabolic Modeling team at the Joint BioEnergy Institute (JBEI), a DOE Bioenergy Research Center; both supported a portion of this work. “This is just the beginning. With this, we’ve shown that there’s an alternative way of doing metabolic engineering. Algorithms can automatically perform the routine parts of research while you devote your time to the more creative parts of the scientific endeavor: deciding on the important questions, designing the experiments, and consolidating the obtained knowledge.”

More data needed

The researchers say they were surprised by how little data was needed to obtain results. Yet to truly realize synthetic biology’s potential, they say the algorithms will need to be trained with much more data. Garcia Martin describes synthetic biology as being only in its infancy – the equivalent of where the Industrial Revolution was in the 1790s. “It’s only by investing in automation and high-throughput technologies that you’ll be able to leverage the data needed to really revolutionize bioengineering,” he said.

Radivojevic added: “We provided the methodology and a demonstration on a small dataset; potential applications might be revolutionary given access to large amounts of data.”

The unique capabilities of national labs

Besides the dearth of experimental data, Garcia Martin says the other limitation is human capital – or machine learning experts. Given the explosion of data in our world today, many fields and companies are competing for a limited number of experts in machine learning and artificial intelligence.

Garcia Martin notes that knowledge of biology is not an absolute prerequisite, if surrounded by the team environment provided by the national labs. Radivojevic, for example, has a doctorate in applied mathematics and no background in biology. “In two years here, she was able to productively collaborate with our multidisciplinary team of biologists, engineers, and computer scientists and make a difference in the synthetic biology field,” he said. “In the traditional ways of doing metabolic engineering, she would have had to spend five or six years just learning the needed biological knowledge before even starting her own independent experiments.”

“The national labs provide the environment where specialization and standardization can prosper and combine in the large multidisciplinary teams that are their hallmark,” Garcia Martin said.

Synthetic biology has the potential to make significant impacts in almost every sector: food, medicine, agriculture, climate, energy, and materials. The global synthetic biology market is currently estimated at around $4 billion and has been forecast to grow to more than $20 billion by 2025, according to various market reports.

“If we could automate metabolic engineering, we could strive for more audacious goals. We could engineer microbiomes for therapeutic or bioremediation purposes. We could engineer microbiomes in our gut to produce drugs to treat autism, for example, or microbiomes in the environment that convert waste to biofuels,” Garcia Martin said. “The combination of machine learning and CRISPR-based gene editing enables much more efficient convergence to desired specifications.”

This research is part of the Agile BioFoundry and JBEI, supported by the Department of Energy, and also received support from the Novo Nordisk Foundation and the European Commission. ART is available for licensing on GitHub.



from ScienceBlog.com https://ift.tt/30hH7hR
