Making the case for hydrogen in a zero-carbon economy

As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

“As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

Low- and zero-carbon alternatives to greenhouse-gas emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

“Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

Adding up the costs

California serves as a prime example of a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

“We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
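
To make the LCOE comparison concrete, below is a minimal sketch of the standard levelized-cost calculation (discounted lifetime costs divided by discounted lifetime generation). The capital cost, operating cost, discount rate, and output figures are illustrative placeholders, not values from the study.

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_yr, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs / discounted lifetime MWh."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, lifetime_yr + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t for t in range(1, lifetime_yr + 1))
    return costs / energy  # $/MWh

# Illustrative peaker-style asset: a 100 MW plant running roughly 15 percent of the year.
capacity_mw = 100
annual_mwh = capacity_mw * 8760 * 0.15
print(f"LCOE: ${lcoe(150e6, 4e6, annual_mwh, 20, 0.07):.0f}/MWh")
```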

Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

A tool for energy investors

When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

“As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study.



from ScienceBlog.com https://ift.tt/2WDy7VE

The flower clock: How a small protein helps flowers to develop right and on time

How flowers form properly within a limited time frame has been a mystery, at least until now. Researchers from Japan and China have discovered how a multi-tasking protein helps flowers to develop as expected.

In a study published in Proceedings of the National Academy of Sciences U.S.A., researchers from Nanjing University and Nara Institute of Science and Technology have revealed that a small protein plays multiple roles to ensure that floral reproductive organs are formed properly within a short space of time.

Flowers develop from floral meristems, which differentiate to produce the sepals, petals, stamens, and carpels. The proper development of these floral organs depends on meristem development being completed within a certain time period. In the early stages of flower development, stem cells provide the cell source for floral organ formation. In floral meristems, stem cell activities are maintained via a feedback loop between WUSCHEL (WUS), a gene that identifies floral stem cells, and CLAVATA3 (CLV3), a stem cell marker gene that is activated and sustained by WUS.

“A small protein called KNUCKLES (KNU) represses WUS directly, which leads to the completion of floral stem cell activity at the right time,” says Erlei Shang, lead author of the study. “What isn’t fully understood is how the robust floral stem cell activity finishes within a limited time period to ensure carpel development.”

“The team’s research revealed that in Arabidopsis thaliana, KNU can completely deactivate the robust floral meristems at a particular floral stage, thanks to the multiple functions that KNU carries out via its position-specific roles,” says senior author Toshiro Ito.

KNU both represses and silences WUS, and directly represses CLV3 and CLV1 (a gene that encodes a receptor for the CLV3 peptide). Consequently, KNU eliminates the CLV3-WUS feedback loop via transcriptional and epigenetic mechanisms (i.e., those that do not involve changes in the underlying DNA sequences). Additionally, KNU interacts physically with the WUS protein, which inhibits WUS from sustaining CLV3, disrupting interactions that are required for the maintenance of floral meristems.
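
To illustrate the regulatory logic described above, here is a toy simulation of the WUS–CLV3 feedback loop being shut down once KNU switches on; the update rules and rate constants are illustrative assumptions, not parameters from the paper.

```python
def simulate(steps=60, knu_on_at=30):
    """Toy dynamics: WUS sustains CLV3, CLV3 feeds back to maintain WUS, and KNU
    represses both, collapsing the loop. All rates are illustrative only."""
    wus, clv3 = 1.0, 1.0
    history = []
    for t in range(steps):
        knu = 1.0 if t >= knu_on_at else 0.0      # KNU turns on at a set floral stage
        wus_input = clv3 * (1 - knu)              # KNU represses and silences WUS
        clv3_input = wus * (1 - knu)              # KNU also represses CLV3 directly
        wus = 0.8 * wus + 0.2 * wus_input         # relax toward the regulatory input
        clv3 = 0.8 * clv3 + 0.2 * clv3_input
        history.append((t, wus, clv3))
    return history

for t, w, c in simulate()[::10]:
    print(f"stage t={t:2d}  WUS={w:.2f}  CLV3={c:.2f}")
```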

“Our results reveal a regulatory pathway where KNU plays a key role in supporting the completion of floral meristem development within a short time window, and ensures that flower reproductive organs are properly formed,” says corresponding author Bo Sun.

The results of this research will be useful for genetic studies of food crop species such as rice, tomatoes, and maize. An understanding of the floral meristem termination mechanism discovered in this study will benefit crop yields for food production globally.



from ScienceBlog.com https://ift.tt/3zB3Cye

Seaweed farms in river estuaries can prevent environmental pollution

A new study by Tel Aviv University and University of California, Berkeley proposes a model according to which the establishment of seaweed farms in river estuaries significantly reduces nitrogen concentrations in the estuary and prevents pollution in estuarine and marine environments. The study was headed by doctoral student Meiron Zollmann, under the joint supervision of Prof. Alexander Golberg of the Porter School of Environmental and Earth Sciences and Prof. Alexander Liberzon of the School of Mechanical Engineering at the Iby and Aladar Fleischman Faculty of Engineering, Tel Aviv University. The study was conducted in collaboration with Prof. Boris Rubinsky of the Faculty of Mechanical Engineering at UC Berkeley. The study was published in the prestigious journal Communications Biology.

As part of the study, the researchers built a large seaweed farm model for growing the ulva sp. green macroalgae in the Alexander River estuary, hundreds of meters from the open sea. The Alexander River was chosen because the river discharges polluting nitrogen from nearby upstream fields and towns into the Mediterranean Sea. Data for the model were collected over two years from controlled cultivation studies.

Researchers explain that nitrogen is a necessary fertilizer for agriculture, but it comes with an environmental price tag. Once nitrogen reaches the ocean, it disperses randomly, damaging various ecosystems. As a result, state and local authorities spend a great deal of money on reducing nitrogen concentrations in water, following national and international conventions that limit nitrogen loading in the oceans, including in the Mediterranean Sea.

“My laboratory researches basic processes and develops technologies for aquaculture,” explains Prof. Golberg. “We are developing technologies for growing seaweed in the ocean in order to offset carbon and extract various substances, such as proteins and starches, to offer a marine alternative to terrestrial agricultural production. In this study, we showed that if seaweed is grown according to the model we developed, in rivers’ estuaries, it can absorb the nitrogen to conform to environmental standards and prevent its dispersal in water, and thus neutralize environmental pollution. In this way, we actually produce a kind of ‘natural decontamination facility’ with significant ecological and economic value, since seaweed can be sold as biomass for human use.”

The researchers add that the mathematical model predicts farm yields and links seaweed yield and chemical composition to nitrogen concentration in the estuary. “Our model allows marine farmers, as well as government and environmental bodies, to know, in advance, what the impact will be and what the products of a large seaweed farm will be – before setting up the actual farm,” adds Meiron Zollmann. “Thanks to mathematics, we know how to make the necessary adjustments for large-scale farms as well, and how to maximize environmental benefits, including producing the agriculturally desired protein quantities.”
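
As a rough illustration of the kind of relationship such a model captures, here is a minimal sketch linking dissolved nitrogen in the estuary to seaweed growth and nitrogen removal through a Michaelis-Menten-type uptake term; the functional form and every constant below are generic assumptions for illustration, not the parameters of the published model.

```python
def daily_growth(biomass_kg, nitrogen_umol_per_l, mu_max=0.2, k_n=5.0):
    """Toy specific growth rate limited by dissolved nitrogen (Michaelis-Menten form).
    mu_max and k_n are illustrative constants, not values from the study."""
    mu = mu_max * nitrogen_umol_per_l / (k_n + nitrogen_umol_per_l)
    return biomass_kg * mu  # kg of new biomass per day

def nitrogen_removed(new_biomass_kg, n_content=0.04):
    """Nitrogen captured in new tissue, assuming ~4% N by dry weight (illustrative)."""
    return new_biomass_kg * n_content  # kg N per day

biomass = 1000.0  # kg of ulva in the farm
for n_conc in (2.0, 10.0, 50.0):
    growth = daily_growth(biomass, n_conc)
    print(f"N={n_conc:>5.1f} umol/L  growth={growth:6.1f} kg/day  "
          f"N removed={nitrogen_removed(growth):5.2f} kg/day")
```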

“It is important to understand that the whole world is moving towards green energy, and seaweed can be a significant source,” adds Prof. Liberzon, “and yet today, there is no single farm with the proven technological and scientific capability. The barriers here are also scientific: we do not really know what the impact of a huge farm will be on the marine environment. It is like transitioning from a vegetable garden outside the house to endless fields of industrial farming. Our model provides some of the answers, hoping to convince decision-makers that such farms will be profitable and environmentally friendly.” He points to even more far-reaching scenarios, such as green energy: “If we knew how to utilize the growth rates for energy more efficiently, it would be possible to embark on a one-year cruise with a kilogram of seaweed, with no additional fuel beyond the production of biomass in a marine environment.”

“The interesting connection we offer here is growing seaweed at the expense of nitrogen treatment,” concludes Prof. Golberg. “In fact, we have developed a planning tool for setting up seaweed farms in estuaries to address both environmental problems while producing economic benefit. We offer the design of seaweed farms in river estuaries containing large quantities of agriculturally related nitrogen residues to rehabilitate the estuary and prevent nitrogen from reaching the ocean while growing the seaweed itself for food. In this way, aquaculture complements terrestrial agriculture.”



from ScienceBlog.com https://ift.tt/3gNuZxy

Racing heart may alter decision-making brain circuits

Anxiety, addiction, and other psychiatric disorders are often characterized by intense states of what scientists call arousal: The heart races, blood pressure readings rise, breaths shorten, and “bad” decisions are made. In an effort to understand how these states influence the brain’s decision-making processes, scientists at the Icahn School of Medicine at Mount Sinai analyzed the data from a previous study of non-human primates. They found that two of the brain’s decision-making centers contain neurons that may exclusively monitor the body’s internal dynamics. Furthermore, a heightened state of arousal appeared to rewire one of the centers by turning some decision-making neurons into internal state monitors.

“Our results suggest that the brain’s decision-making circuits may be wired to constantly monitor and integrate what is happening inside the body. Because of that, changes in our level of arousal can alter the way that these circuits work,” said Peter Rudebeck, PhD, Associate Professor in the Nash Family Department of Neuroscience and Friedman Brain Institute at Mount Sinai and the senior author of the study published in PNAS (Proceedings of the National Academy of Sciences). “We hope that these results will help researchers gain a better understanding of the brain areas and fundamental cellular processes that underlie several psychiatric disorders.”

The study was led by Atsushi Fujimoto, MD, PhD, an Instructor in Dr. Rudebeck’s lab who previously studied how the brain controls risk-taking.

For years scientists have described the relationship between arousal and decision-making performance as a “U-shaped curve.” Basically, a little bit of arousal—such as that experienced after a cup of coffee—might produce peak performance. But too much or too little arousal increases the chances that the brain will make slow or incorrect decisions.

Initial results from this study supported this idea. The researchers analyzed data from a previous set of experiments that tested the ability of three rhesus monkeys to decide between receiving two rewards: either a lot of tasty juice or a little. Dr. Rudebeck performed these experiments while working as a post-doctoral fellow at the National Institute of Mental Health. As expected, the monkeys consistently chose to have more juice, and on average they made this decision faster when their hearts were beating faster, supporting the idea that an aroused state fosters better performance.

Next, the researchers analyzed the electrical activity recorded from neurons in two of the brain’s decision centers called the orbitofrontal cortex and dorsal anterior cingulate cortex.

They found that the activity of about a sixth of the neurons in either area correlated with fluctuations in heart rate. In other words, if an animal’s heart rate changed, then the activity of these cells would also change by either speeding up or slowing down. This activity appeared to be unaffected by the decisions made about the different rewards that the monkeys were receiving. Meanwhile, the activity of the remaining cells in each area appeared to be primarily involved in the decision-making process.
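
As a rough illustration of how such a relationship might be detected, here is a minimal sketch that correlates each neuron's firing rate with heart rate across trials and counts the cells whose correlation passes a threshold; the simulated data, the threshold, and the selection rule are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 60

# Simulated data: heart rate per trial, plus firing rates in which a subset of
# neurons (here the first 10) track heart rate on top of noise. Purely illustrative.
heart_rate = rng.normal(90, 10, n_trials)
firing = rng.normal(5, 1, (n_neurons, n_trials))
firing[:10] += 0.05 * (heart_rate - heart_rate.mean())

# Correlate each neuron's rate with heart rate and flag putative "interoceptive" cells.
corrs = np.array([np.corrcoef(firing[i], heart_rate)[0, 1] for i in range(n_neurons)])
tracking = np.abs(corrs) > 0.3  # illustrative threshold
print(f"{tracking.sum()}/{n_neurons} neurons track heart rate "
      f"({tracking.mean():.0%} of recorded cells)")
```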

“Brain scanning studies have suggested that bodily arousal alters the activity of these decision-making centers. Our results both support this idea on a cellular level and suggest that the sole job of some of these neurons is to track the body’s internal, or interoceptive, states,” Dr. Fujimoto said. “The next question we had was: ‘What might happen during the type of heightened arousal states seen in patients who suffer from anxiety, addiction, and other psychiatric disorders?’”

To answer the question, the researchers analyzed the data obtained after the amygdala, the brain’s emotional center, was surgically turned off in each animal. This raised heart rates by up to 15 beats per minute. Now, in this higher arousal state, the faster the animals’ hearts beat, the slower they were to choose a reward. This suggests that when the animals’ arousal state was heightened, it actually hampered the decision-making process.

When the team looked at the neural activity, they found something even more interesting. The heightened arousal state appeared to alter the roles that the neurons played during decision-making. In both brain centers, the researchers saw evidence of a decrease in the number of neurons involved in the decision-making process. Moreover, in the dorsal anterior cingulate cortex, the number of neurons that appeared to track internal states rose slightly. This altered the balance of information represented in this area, as if the neural signals for decision making were “hijacked” by arousal.

“Although not definitive, our results suggest that a heightened arousal state degrades and takes control of the decision-making circuits in the brain,” Dr. Rudebeck said. “We plan to continue studying how arousal can influence higher brain functions and how this contributes to psychiatric disorders.”

This study was supported by the National Institutes of Health  (MH110822), the NIH’s BRAIN Initiative (MH117040), the NIH Intramural Research Program at the National Institute of Mental Health (MH002886), the Takeda Science Foundation, and a Brain & Behavior Research Foundation Young Investigator Grant (#28979).



from ScienceBlog.com https://ift.tt/3jvaoji

Genetic background can increase Hispanics’ risk for omega-3 deficiency

Hispanic people with a high percentage of American Indigenous ancestry are at increased risk of an omega-3 nutritional deficiency that could affect their heart health and contribute to harmful inflammation, new research suggests.

Researchers at the University of Virginia School of Medicine and their collaborators have linked American Indigenous ancestry with increased risk of omega-3 fatty acid deficiency among Hispanic Americans. Found in foods such as fatty fish and certain nuts, omega-3s are thought to be important in preventing heart disease and play an important role in the immune system.

Doctors can use the new findings, the researchers say, to identify Hispanic patients at risk of omega-3 deficiency and to help them correct the problem with nutritional guidance or supplements. This could help the patients avoid heart problems and other health issues down the road.

“Our research provides a path toward precision nutrition in which dietary recommendations can be tailored to an individual’s genetic background,” said researcher Ani Manichaikul, PhD, of UVA’s Center for Public Health Genomics and Department of Public Health Sciences.

Omega-3 Fatty Acids

The new findings underscore the need for doctors to look beyond simple racial and ethnic classifications, the researchers say. The results are an important reminder that there is tremendous genetic diversity within patient populations, and particularly among Hispanic Americans, the scientists note.

In a new scientific paper outlining their findings, the researchers describe how most Hispanic Americans have ancestry that is either predominantly European and American Indigenous or predominantly European and African. Members of the former group often trace their family roots to Mexico, Central America or South America, while members of the latter primarily have their roots in Cuba, the Dominican Republic or Puerto Rico.

To better understand the potential effect of American Indigenous ancestry on the body’s ability to process omega-3s, the researchers looked at naturally occurring variations in a particular cluster of genes in 1,102 Hispanic-American study participants. This gene cluster, known as the fatty acid desaturase cluster, or FADS, helps determine how the body uses both omega-6 and omega-3 fatty acids.

The scientists concluded that the gene variations most associated with low fatty acid levels occurred much more frequently in Hispanic people with greater American Indigenous ancestry. These variations were also associated with increased levels of triglycerides, a type of fat found in the blood, and with several other metabolic and inflammatory traits as well.

“Each person carries two copies of the FADS gene – one from their mom and one from their dad. Individuals who carry two copies of the version of FADS that is much more common with American Indigenous ancestry will be at the greatest risk of omega-3 deficiencies,” Manichaikul said. “Omega-3 fatty acids have broad-ranging roles in brain development, prevention of cardiovascular disease, diabetes and cancer. Understanding each individual’s risk of omega-3 fatty acid deficiency will be one step in a broader journey toward disease prevention and better health overall.”

The findings suggest that American Indigenous ancestry could offer a simple and effective way for doctors to identify Hispanic patients at risk for fatty acid deficiencies. Instead of a pill, a care provider might prescribe a healthy eating plan to compensate for that deficiency.

In the future, when genetic testing becomes more routine, doctors might examine a particular patient’s FADS cluster to further refine their nutritional recommendations, the scientists say.

“We anticipate that many individuals may have a reasonable idea about their own proportions of American Indigenous ancestry, which they could consider together with physicians, dieticians or genetic counselors as a tool to assess their own risks of omega-3 deficiency,” Manichaikul said. “Our genes show that humans are diverse creatures, reflecting adaptation to a variety of environments and diets over time. Modern dietary recommendations should consider that diversity by tailoring recommendations to each individual’s genes.”

The researchers have published their findings in the scientific journal Communications Biology.



from ScienceBlog.com https://ift.tt/2Wx7vWF

Paper: Use patent law to curb unethical human-genome editing

Editor’s notes: To contact Jacob S. Sherkow, call 217-300-3936; email jsherkow@illinois.edu.

The paper “Governing human germline editing through patent law” is available online.

DOI: 10.1001/jama.2021.13824



from ScienceBlog.com https://ift.tt/3gQhM75

New Report Shows Technology Advancement and Value of Wind Energy

Wind energy continues to see strong growth, solid performance, and low prices in the U.S., according to a report released by the U.S. Department of Energy (DOE) and prepared by Lawrence Berkeley National Laboratory (Berkeley Lab). With levelized costs of just over $30 per megawatt-hour (MWh) for newly built projects, the cost of wind is well below its grid-system, health, and climate benefits.

“Wind energy prices – particularly in the central United States, and supported by federal tax incentives – remain low, with utilities and corporate buyers selecting wind as a low-cost option,” said Berkeley Lab Senior Scientist Ryan Wiser. “Considering the health and climate benefits of wind energy makes the economics even better.”

Key findings from the DOE’s annual “Land-Based Wind Market Report” include:

  • Wind comprises a growing share of electricity supply. U.S. wind power capacity grew at a record pace in 2020, with nearly $25 billion invested in 16.8 gigawatts (GW) of capacity. Wind energy output rose to account for more than 8% of the entire nation’s electricity supply, and is more than 20% in 10 states. At least 209 GW of wind are seeking access to the transmission system; 61 GW of this capacity are offshore wind and 13 GW are hybrid plants that pair wind with storage or solar. 

  • Wind project performance has increased over time. The average capacity factor (a measure of project performance; see the sketch after this list) among projects built over the last five years was above 40%, considerably higher than projects built earlier. The highest capacity factors are seen in the interior of the country.
  • Turbines continue to get larger. Improved plant performance has been driven by larger turbines mounted on taller towers and featuring longer blades. In 2010, no turbines employed blades that were 115 meters in diameter or larger, but in 2020, 91% of newly installed turbines featured such rotors. Proposed projects indicate that total turbine height will continue to rise.
  • Low wind turbine pricing has pushed down installed project costs over the last decade. Wind turbine prices are averaging $775 to $850/kilowatt (kW). The average installed cost of wind projects in 2020 was $1,460/kW, down more than 40% since the peak in 2010, though stable for the last three years. The lowest costs were found in Texas. 

  • Wind energy prices remain low, around $20/MWh in the interior “wind belt” of the country. After topping out at $70/MWh for power purchase agreements executed in 2009, the national average price of wind has dropped. In the interior “wind belt” of the country, recent pricing is around $20/MWh. In the West and East, prices tend to average $30/MWh or more. These prices, which are possible in part due to federal tax support, fall below the projected future fuel costs of gas-fired generation.
  • Wind prices are often attractive compared to wind’s grid-system market value. The value of wind energy sold in wholesale power markets is affected by the location of wind plants, their hourly output profiles, and how those characteristics correlate with real-time electricity prices and capacity markets. The market value of wind declined in 2020 given the drop in natural gas prices, averaging under $15/MWh in much of the interior of the country; higher values were seen in the Northeast and in California.
  • The average levelized cost of wind energy is down to $33/MWh. Levelized costs vary across time and geography, but the national average stood at $33/MWh in 2020—down substantially from historical levels, though consistent with the previous two years. (Cost estimates do not count the effect of federal tax incentives for wind.)

  • The health and climate benefits of wind in 2020 were larger than its grid-system value, and the combination of all three far exceeds the current levelized cost of wind. Wind generation reduces power-sector emissions of carbon dioxide, nitrogen oxides, and sulfur dioxide. These reductions, in turn, provide public health and climate benefits that vary regionally, but together are economically valued at an average of $76/MWh-wind nationwide in 2020. 

  • The domestic supply chain for wind equipment is diverse. For wind projects recently installed in the U.S., domestically manufactured content is highest for nacelle assembly (over 85%), towers (60% to 75%), and blades and hubs (30% to 50%), but is much lower for most components internal to the nacelle.
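
For readers unfamiliar with the capacity-factor metric cited in the list above, here is a minimal sketch of the standard calculation: actual annual generation divided by the output a plant would deliver running at nameplate capacity all year. The plant size and generation figures are illustrative, not values from the report.

```python
def capacity_factor(annual_mwh, nameplate_mw, hours_per_year=8760):
    """Actual annual generation as a fraction of the maximum possible output."""
    return annual_mwh / (nameplate_mw * hours_per_year)

# Illustrative wind project: 200 MW nameplate producing 730,000 MWh in a year.
cf = capacity_factor(730_000, 200)
print(f"Capacity factor: {cf:.0%}")  # ~42%, in line with recent projects described above
```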

Berkeley Lab’s contributions to this report were funded by the U.S. Department of Energy’s Wind Energy Technologies Office.

# # #

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 14 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.



from ScienceBlog.com https://ift.tt/2WDNJIs

Alcohol Can Cause Immediate Risk of Atrial Fibrillation

A single glass of wine can quickly and significantly raise the drinker’s risk for atrial fibrillation, according to new research by UC San Francisco.

The study provides the first evidence that alcohol consumption substantially increases the chance of the heart rhythm condition occurring within a few hours. The findings might run counter to a prevailing perception that alcohol can be “cardioprotective,” say the authors, suggesting that reducing or avoiding alcohol might help mitigate harmful effects.

The paper is published Aug. 30 in Annals of Internal Medicine.

“Contrary to a common belief that atrial fibrillation is associated with heavy alcohol consumption, it appears that even one alcohol drink may be enough to increase the risk,” said Gregory Marcus, MD, MAS, professor of medicine in the Division of Cardiology at UCSF.

“Our results show that the occurrence of atrial fibrillation might be neither random nor unpredictable,” he said. “Instead, there may be identifiable and modifiable ways of preventing an acute heart arrhythmia episode.”

Atrial fibrillation (AF) is the most common heart arrhythmia seen clinically, but until now research has largely focused on risk factors for developing the disease and therapies to treat it, rather than factors that determine when and where an episode might occur. AF can lead to loss of quality of life, significant health care costs, stroke and death.

Large studies have shown that chronic alcohol consumption can be a predictor of the condition, and Marcus and other scientists have demonstrated that it is linked to heightened risks of a first diagnosis of atrial arrhythmias.

The research centered on 100 patients with documented AF who consumed at least one alcoholic drink a month. The patients were recruited from the general cardiology and cardiac electrophysiology outpatient clinics at UCSF. People with a history of alcohol or substance use disorder were excluded, as were those with certain allergies, or who were changing treatment for their heart condition.

Each wore an electrocardiogram (ECG) monitor for approximately four weeks, pressing a button whenever they had a standard-size alcoholic drink. They were also all fitted with a continuously recording alcohol sensor. Blood tests reflecting alcohol consumption over the previous weeks were periodically administered. Participants consumed a median of one drink per day throughout the study period.

Researchers found that an AF episode was associated with two-fold higher odds with one alcoholic drink, and three-fold higher odds with two or more drinks within the preceding four hours. AF episodes were also associated with an increased blood alcohol concentration.
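
To make the reported odds ratios concrete, here is a minimal sketch of how an odds ratio can be computed from counts of exposed and unexposed periods that did or did not precede an AF episode; the counts are invented for illustration, and the study's actual case-crossover statistics were more sophisticated.

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds of an AF episode following exposure vs. odds without exposure."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Invented counts: 4-hour windows with at least one drink vs. windows with none.
print(f"OR with one or more drinks in the prior 4 hours: "
      f"{odds_ratio(40, 100, 60, 300):.1f}")  # -> 2.0, i.e., two-fold higher odds
```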

The authors note study limitations, including that patients might have forgotten to press their monitor buttons or that they minimized the number of button presses due to embarrassment, although these considerations would not have affected alcohol sensor readings. Additionally, the study was limited to those with established AF, not to the general population.

“The effects seem to be fairly linear: the more alcohol consumed, the higher the risk of an acute AF event,” said Marcus. “These observations mirror what has been reported by patients for decades, but this is the first objective, measurable evidence that a modifiable exposure may acutely influence the chance that an AF episode will occur.”

Additional UCSF study authors are Eric Vittinghoff, PhD; Sean Joyce, ScB; Vivian Yang, BA; Gregory Nah, MA; Edward P. Gerstenfeld, MD; Joshua D. Moss, MD; Randall J. Lee, MD, PhD; Byron K. Lee, MD; Zian H. Tseng, MD, MAS; Vasanth Vedanthan, MD, PhD; Jeffrey E. Olgin, MD; Melvin M. Scheinman, MD; Henry Hsia, MD; Rachel Gladtone, BA; Shannon Fan, BA; Emily Le, BS; Christina Fang, BA, Kelsey Ogomori, BA; Robin Fatch, MPH; Judith A. Hahn, PhD, MA.

Disclosures: See paper.

Funding: The study was funded by R01AAO22222 and K24 AA022586 from the National Institute on Alcohol Abuse and Alcoholism.



from ScienceBlog.com https://ift.tt/3zycxQU

Patients with Covid Delta more likely to be hospitalized than patients with Alpha

In a new study published in The Lancet Infectious Diseases, researchers at Public Health England and the MRC Biostatistics Unit, University of Cambridge, found that the estimated risk of hospital admission was two times higher for individuals diagnosed with the Delta variant of the SARS-CoV-2 virus, compared to those with the Alpha variant, after adjusting for differences in age, sex, ethnicity, deprivation, region of residence, date of positive test and vaccination status. When broadening the scope to look at the risk of either hospital admission or emergency care attendance, the risk was 1.45 times higher for Delta than Alpha.

This is the largest study to date to report on the risk of hospitalisation outcomes for cases with the Delta compared to the Alpha variant, using 43,338 Alpha and Delta cases confirmed through whole-genome sequencing who tested positive for COVID-19 between 29th March and 23rd May 2021. It is crucial to note that most of the Alpha and Delta cases in the study were unvaccinated or only partially vaccinated: 74% were unvaccinated, 24% were partially vaccinated, and only 2% were fully vaccinated. The results from this study therefore primarily tell us about the risk of hospital admission for those who are unvaccinated or partially vaccinated. Given the small number of hospitalised vaccinated cases, it has not been possible to estimate reliably if the hospitalisation risk differed between Delta and Alpha cases who had been fully vaccinated.

The Delta variant is now the most common SARS-CoV-2 lineage in several higher-income and lower-income countries on all continents, currently accounting for more than 99% of new cases in England [2]. The evidence provided in this study therefore has implications for healthcare practice, planning and response in countries with ongoing or future Delta variant outbreaks, particularly in unvaccinated or partially vaccinated populations. As previous studies have shown Delta and Alpha spread more rapidly than previous variants [2–4], the combination of faster transmission and the current study’s finding of higher risk of severe disease requiring hospital admission in unvaccinated populations implies a more severe burden on healthcare of Delta outbreaks than of Alpha epidemics.

Previous studies have shown the available COVID-19 vaccines are highly effective against symptomatic infections with the Alpha variant [5], and are effective against symptomatic infections with the Delta variant, particularly after a full vaccination cycle with two doses [6,7]. For those who despite vaccination become infected, the vaccination protects against admission to hospital [8].

Dr Anne Presanis, Senior Statistician at the MRC Biostatistics Unit said:

“Our analysis highlights that in the absence of vaccination, any Delta outbreaks will impose a greater burden on healthcare than an Alpha epidemic. Getting fully vaccinated is crucial for reducing an individual’s risk of symptomatic infection with Delta in the first place, and, importantly, of reducing a Delta patient’s risk of severe illness and hospital admission.”

Dr Gavin Dabrera, Consultant Epidemiologist at Public Health England, said:

“This study confirms previous findings that people infected with Delta are significantly more likely to require hospitalisation than those with Alpha, although most cases included in the analysis were unvaccinated.

We already know that vaccination offers excellent protection against Delta and as this variant accounts for over 99% of COVID-19 cases in the UK, it is vital that those who have not received two doses of vaccine do so as soon as possible.

It is still important that if you have COVID-19 symptoms, stay home and get a PCR test as soon as possible.”

Reference:
Katherine A Twohig et al. ‘Hospital admission and emergency care attendance risk for SARS-CoV-2 delta (B.1.617.2) compared with alpha (B.1.1.7) variants of concern: a cohort study.’ The Lancet Infectious Diseases (2021). DOI: 10.1016/S1473-3099(21)00475-8



from ScienceBlog.com https://ift.tt/3DvbF1L

Study ties air pollution to disparities in Alzheimer’s risk

For decades, research has shown the risk for developing Alzheimer’s disease in the United States is dramatically higher among African American populations than in non-Hispanic white populations. Scientists have suspected a variety of contributing factors, but the underlying reasons have remained unclear.

Now, a new study in The Journals of Gerontology, conducted in collaboration with researchers across the country, points to environmental neurotoxins – specifically, ambient fine particles in the air known as PM2.5 – as possible culprits in the disproportionate number of African Americans, particularly Black women, affected by dementia.

Zeroing in on fine particle pollution as a factor in racial/ethnic disparities

“Data increasingly show that older people are more likely to develop dementia if they live in locations with high PM2.5, and African American populations are more likely to live in neighborhoods near polluting facilities — like power-generating and petrochemical plants – that emit the particulate matter,” said corresponding author Jiu-Chiuan Chen, MD, ScD, associate professor of population and public health sciences at the Keck School of Medicine of USC. “Our study demonstrates that older Black women live in locations with higher levels of PM2.5, and we ask whether their elevated exposure could account for the higher numbers of Alzheimer’s cases. The evidence does reveal a positive association.”

After adjusting for risk factors such as smoking, education, income, diabetes, hypertension and BMI, Black women still had roughly twice the risk of developing Alzheimer’s disease compared with white women, the researchers found.

Chen and his colleagues report two major novel findings in their study, both with important implications. The first is that the increased risk of Alzheimer’s disease in older Black women may be partly explained by their elevated exposure to PM2.5; the second is that older Black women are more susceptible to its adverse effects.

“Our work offers the scientific community an important perspective on the study of dementia; namely, that we must have a greater awareness of environmental racism that can impact brain aging and disproportionately affect people of color,” Chen says. “There is also a key regulatory takeaway, which is that we have to continue enforcing the Clean Air Act, with its mandate to provide a safe margin for air quality that will protect the health of susceptible populations.”

Air pollution and brain health

The study grew out of a long-standing partnership between USC faculty and researchers with the Women’s Health Initiative Memory Study based at Wake Forest University. This research focused on studying Black women compared to non-Hispanic white women, because Black women carry the greatest burden of Alzheimer’s disease and related dementias in the US. As it begins to unravel the complexities of racial disparities in Alzheimer’s disease, the study also elucidates the need for future research.

“Our study suggests that PM2.5 may contribute to the difference in Alzheimer’s disease risk based on race, and we also demonstrated that older African American women may be more susceptible to the particulate matter, but we still don’t know why,” Chen says. “Why are these particles more neurotoxic to Black women than to non-Hispanic whites? Going forward, we plan to look for answers by studying the effects of things like nutrition and brain structure.”

When he was first recruited to USC in 2009, Chen set out to develop a new research program to study the health effects of air pollution exposure beyond its well-known impact on airways and lungs. “At the time, I was one of the first faculty members studying environmental influences on brain health,” Chen recalls. “Today, the university has several leading programs, funded by the National Institutes of Health, examining how air pollution exposures affect neurodevelopment and brain aging. An increasing number of USC faculty are trying to better understand whether and why air pollution can cause more damage to the brains of people in minority populations or communities with social disadvantages. Our study is just the beginning of vital scientific work that needs to be done.”



from ScienceBlog.com https://ift.tt/3zycxAo

Human Mini-Lungs Grown in Lab Dishes are Closest Yet to Real Thing

Since the COVID-19 pandemic reached the United States in early 2020, scientists have struggled to find laboratory models of SARS-CoV-2 infection, the respiratory virus that causes COVID-19. Animal models fell short; attempts to grow adult human lungs have historically failed because not all of the cell types survived.

Undaunted, stem cell scientists, cell biologists, infectious disease experts and cardiothoracic surgeons at University of California San Diego School of Medicine teamed up to see if they could overcome multiple hurdles.

Writing in a paper publishing August 31, 2021 in eLife, the team describes the first adult human “lung-in-a-dish” models, also known as lung organoids that represent all cell types. They also report that SARS-CoV-2 infection of the lung organoids replicates real-world patient lung infections, and reveals the specialized roles various cell types play in infected lungs.

“This human disease model will now allow us to test drug efficacy and toxicity, and reject ineffective compounds early in the process, at ‘Phase 0,’ before human clinical trials begin,” said Pradipta Ghosh, MD, professor, director of the Institute for Network Medicine and executive director of the HUMANOID Center of Research Excellence (CoRE) at UC San Diego School of Medicine. Ghosh co-led the study with Soumita Das, PhD, associate professor of pathology at UC San Diego School of Medicine and founding co-director and chief scientific officer of HUMANOID CoRE.

Stem cell scientists at the HUMANOID CoRE, led by Das, reproducibly developed three lung organoid lines from adult stem cells derived from human lungs that had been surgically removed due to lung cancer. With a special cocktail of growth factors, they were able to maintain cells that make up both the upper and lower airways of human lungs, including specialized alveolar cells known as AT2.

By infecting the lung organoids with SARS-CoV-2, the team discovered that the upper airway cells are critical for the virus to establish infection, while the lower airway cells are important for the immune response. Both cell types contribute to the overzealous immune response, sometimes called a cytokine storm, that has been observed in severe cases of COVID-19.

A computational team led by Debashis Sahoo, PhD, assistant professor of pediatrics at UC San Diego School of Medicine and of computer science and engineering at Jacobs School of Engineering, validated the new lung organoids by comparing their gene expression patterns — which genes are “on” or “off” —  to patterns reported in the lungs of patients who succumbed to the disease, and to those that they previously uncovered from databases of viral pandemic patient data.

Whether infected or not with SARS-CoV-2, the lung organoids behaved similarly to real-world lungs. In head-to-head comparisons using the same yardstick (gene expression patterns), the researchers showed that their adult lung organoids replicated COVID-19 better than any other current lab model. Other models, for example fetal lung-derived organoids and models that rely only on upper airway cells, allowed robust viral infection but failed to mount an immune response.
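
As a rough illustration of the kind of head-to-head comparison described above, here is a minimal sketch that scores how closely a model's infection-induced gene expression changes track a patient-derived signature; the gene names and fold-change values are invented placeholders, not data from the study, and the team's actual validation used more elaborate methods.

```python
import numpy as np

# Invented log2 fold-changes (infected vs. uninfected) for a small signature gene set.
patient_signature = {"IFIT1": 3.1, "CXCL10": 2.8, "IL6": 2.2, "SFTPC": -1.5, "ACE2": 0.6}
full_organoid     = {"IFIT1": 2.7, "CXCL10": 2.5, "IL6": 1.9, "SFTPC": -1.2, "ACE2": 0.4}
airway_only_model = {"IFIT1": 2.9, "CXCL10": 0.3, "IL6": 0.2, "SFTPC": 0.0,  "ACE2": 0.5}

def signature_correlation(reference, model):
    """Pearson correlation between patient and model fold-changes over shared genes."""
    genes = sorted(set(reference) & set(model))
    ref = np.array([reference[g] for g in genes])
    mod = np.array([model[g] for g in genes])
    return np.corrcoef(ref, mod)[0, 1]

for name, model in [("full lung organoid", full_organoid),
                    ("upper-airway-only model", airway_only_model)]:
    print(f"{name}: r = {signature_correlation(patient_signature, model):.2f}")
```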

“Our lung organoids are now ready to use to explore the uncharted territory of COVID-19, including post-COVID complications, such as lung fibrosis,” Das said. “We have already begun to test drugs for their ability to control viral infection — from entry to replication to spread — the runaway immune response that is so often fatal, and lung fibrosis.”

Since their findings in human organoids are more likely to be relevant to human disease than findings in animal models or cell lines, the team hopes that successful drug candidates can be rapidly progressed to clinical trials.

“Because our HUMANOID CoRE lung organoids are scalable, personalized, propagatable and cost-effective, they are quite unlike any other existing model,” said Ghosh. “This is a significant advance that can enable the modeling of lung diseases and pandemics beyond COVID-19. In fact, other academic and industry partners are already beginning to use these organoids in disease modeling and drug discovery. This is when I feel that translational research is immediately transformative.”

Co-authors include: Courtney Tindle, MacKenzie Fuller, Ayden Fonseca, Sahar Taheri, Stella-Rita Ibeawuchi, Gajanan Dattatray Katkar, Amanraj Claire, Vanessa Castillo, Moises Hernandez, Hana Russo, Jason Duran, Ann Tipps, Grace Lin, Patricia A. Thistlethwaite, Thomas F. Rogers, UC San Diego; Nathan Beutler, Scripps Research; Laura E. Crotty Alexander, UC San Diego and VA San Diego Healthcare System; Ranajoy Chattopadhyay, UC San Diego and Cell Applications, Inc.

Funding for this research came, in part, from the National Institutes of Health (grants 1R01DK107585-01A1, 3R01DK107585-05S1, R01-AI141630, CA100768, CA160911, R01-AI 155696, R00-CA151673, R01-GM138385, R01-HL32225, UCOP-R00RG2642, UCOP-R01RG3780), Sanford Stem Cell Clinical Center at UC San Diego Health and VA San Diego Healthcare System.



from ScienceBlog.com https://ift.tt/3DuY2zE

How future trains could be less noisy

by Sarah Wild

Rail transportation is core to Europe’s plans to become carbon neutral by 2050, but noisy trains are an obstacle that will first need to be overcome.

‘We have a lot of resistance from people (living) beside the tracks who are against all construction and upgrades of the lines,’ said Rudiger Garburg, senior consultant for noise and vibrations technology at German railway company Deutsche Bahn AG. ‘It really is a bottleneck, (when) we speak about transforming transport and transferring traffic from road to rail.’

Greenhouse gas emissions from transport in Europe increased in 2018 and 2019, according to the European Environment Agency, and road transport was responsible for almost three-quarters of those emissions. In its ‘Sustainable and Smart Mobility Strategy’, the European Commission aims to shift traffic from road to rail and double its high-speed passenger rail traffic across Europe by 2030 and double rail freight by 2050.

To get community buy-in, however, governments and rail companies need to reduce rail noise. ‘Noise is always a problem of the system, not just the train,’ said Garburg, who is a member of Shift2Rail’s FINE 1 and FINE 2 projects to reduce noise, vibrations and energy use. A railway system includes the trains, their wheels, the rails, and the tracks that support them.

For passenger and freight trains, which move at between 60km and 200km per hour, the noise is mainly generated between the wheels and the rail. However, it is very difficult to determine which part of the system is making the noise.

FINE 1, which involved partners in rail, mobility and automation, was a broad project to curb excess noise and energy from trains. It looked to model and predict noise sources, among other objectives. This information is vital for both regulators and for train manufacturers.

The project, for example, was able to simulate the noise both inside and outside the train made by cast iron wheels compared to composite-material wheels. ‘In the past, trains used a cast-iron braking system for the wheels,’ Garburg explained. While good for braking, the iron sheared over time, making the wheels very rough – and noisy. ‘In past years, we’ve worked very hard to find more braking blocks (made out of) composite materials, not cast iron.’

Limit

In 2019, Europe’s revised train noise standards, part of a larger suite of rail specifications, came into force. Unlike cars, where manufacturers produce thousands of vehicles, train manufacturers only produce a limited number. ‘You cannot build a prototype, test it, and work on it,’ explained Garburg. ‘If you build a new train, you have to guarantee that your train adheres to this limit of noise, similar to air pollution and so on.’

As part of FINE 1, project members developed verifiable, realistic requirements to characterise noise sources. These specifications are important to create standards for manufacturers to follow, and ultimately make trains quieter. Its successor, FINE 2, plans to take this research even further and fine-tune its noise source prediction models.

‘In FINE 2, we have special measurement procedures,’ Garburg said. The team uses an ‘acoustic camera’, an array of 30-40 microphones, to capture the sound of the rail system. He likens it to a thermal camera, in which ‘you see yellow, red and green parts of a building’ to create a heat map. ‘We will use such procedures for the noise to get pictures that show you clearly where the main noise sources are,’ he said.
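
As a rough illustration of how a microphone array can localize sound, here is a minimal delay-and-sum beamforming sketch: it shifts each microphone's signal by the travel-time delay expected from a candidate source position and sums them, so the candidate with the most coherent sum is the likely source. The array geometry, signal, and scan grid are invented for illustration and are not the FINE 2 measurement procedure.

```python
import numpy as np

C = 343.0          # speed of sound, m/s
FS = 48_000        # sample rate, Hz
rng = np.random.default_rng(1)

# Linear array of 32 microphones spaced 10 cm apart along the x-axis.
mics = np.stack([np.linspace(-1.55, 1.55, 32), np.zeros(32)], axis=1)
true_source = np.array([0.8, 3.0])   # actual source position (m), to be recovered

# Simulate: a short noise burst arriving at each microphone with the appropriate delay.
t = np.arange(2048) / FS
burst = rng.normal(size=t.size) * np.exp(-((t - 0.01) / 0.002) ** 2)
signals = np.empty((32, t.size))
for i, m in enumerate(mics):
    delay = np.linalg.norm(true_source - m) / C
    signals[i] = np.interp(t - delay, t, burst, left=0.0, right=0.0)

def beam_power(candidate):
    """Delay-and-sum power for one candidate source position."""
    out = np.zeros(t.size)
    for i, m in enumerate(mics):
        delay = np.linalg.norm(candidate - m) / C
        out += np.interp(t + delay, t, signals[i], left=0.0, right=0.0)
    return np.sum(out ** 2)

# Scan candidate x-positions along a line 3 m from the array (one slice of an "acoustic image").
xs = np.linspace(-1.5, 1.5, 61)
powers = [beam_power(np.array([x, 3.0])) for x in xs]
print(f"Estimated source x = {xs[int(np.argmax(powers))]:+.2f} m (true {true_source[0]:+.2f} m)")
```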

Better models could also enable train manufacturers to possibly obtain virtual certification for their trains to show that they adhere to the EU standards. All trains need to be certified by regulatory authorities before they are allowed on the track, but this process can be expensive and time-consuming.

In Shift2Rail’s partner project TRANSIT, virtual certification is the ‘dream on the horizon’, says Ines Lopez Arteaga, a professor of mechanical engineering at Eindhoven University of Technology in the Netherlands. ‘It would save a lot of money and time and resources to be able to do it based on calculations.’

But there are many research milestones to reach first, said Prof. Lopez Arteaga, who is TRANSIT project leader. We have tools to predict where train noise originates, but they could be improved, she says.

‘Noise is always a problem of the system, not just the train.’

Rudiger Garburg, Deutsche Bahn AG

Certified

At the moment, she says it is possible to measure the overall noise a train makes on the tracks, but the estimation of the separate components needs to be more accurate. With this information, it would be possible to not only make trains quieter, but make it easier to get new trains certified.

But trains also need to be tested on a specific type of track – one whose smoothness would not handicap or give overly positive noise measurements. She likens trains on the track to a child playing with marbles. ‘If you roll a marble on a table, it makes noise. With trains, it’s the wheels on the track. You get a different noise depending on the roughness of the table, if it’s stainless steel or wooden, for example,’ she said.

‘It is not easy to find the right track,’ explained Prof. Lopez Arteaga. ‘You would expect that with so many thousands of kilometres of rail in Europe that it shouldn’t be such a problem, but it is.’

One of the project’s goals is to translate the noise measurements from one track to another. ‘That would be a really big advantage,’ she said. ‘That would reduce the constraints on the type of track you can test on.’

Noise

There are also other components, aside from the wheels and rails, that add to the noise a train makes when it passes.

For example, older trains had their air conditioning units underneath the carriage, but modern trains have been lowered to allow people with less mobility to enter and exit the carriage more easily. As a result, air conditioning units are now on top of the carriage, where they add to the train noise.

‘The models we are developing with help from manufacturers aim to establish better requirements for their equipment,’ said Prof. Lopez Arteaga.

The aspect of the project that she is particularly excited about is the identification of noise sources on high-speed trains. ‘They want us to identify the noise, but also the direction it’s going. That’s really, really a challenge.’

Once the TRANSIT team has its results, it will share them with FINE 2 to evaluate in order to verify their findings, Prof. Lopez Arteaga says.

All of these are ‘small steps’ on the way to characterising the behaviour of the whole rail system, she says. ‘I love trains; they are a really interesting system. The whole railway – the network, the system, the trains – is so complex, there’s much more than meets the eye.’

The research in this article was funded by the Shift2Rail initiative.

This article was originally published in Horizon, the EU Research and Innovation Magazine.


from ScienceBlog.com https://ift.tt/3Bq4l5I

Who can bend light for cheaper internet?


Wide Area Networks (WANs), the global backbones and workhorses of today’s internet that connect billions of computers over continents and oceans, are the foundation of modern online services. As Covid-19 has made online services more essential than ever, today’s networks are struggling to deliver the high bandwidth and availability demanded by emerging workloads related to machine learning, video calls, and health care.

To connect WANs over hundreds of miles, fiber optic cables, made of incredibly thin strands of glass or plastic known as optical fibers, are threaded throughout our neighborhoods and transmit data using light. While they’re extremely fast, they’re not always reliable: they can easily break due to weather, thunderstorms, accidents, and even animals. These breaks can cause severe and expensive damage, resulting in 911 service outages, lost internet connectivity, and the inability to use smartphone apps.

Scientists from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and from Facebook recently came up with a way to preserve the network when the fiber is down, and to reduce cost. Their system, called “ARROW,” reconfigures the optical light from a damaged fiber to healthy ones, while using an online algorithm to proactively plan for potential fiber cuts ahead of time, based on real-time internet traffic demands. 

ARROW is built on the crossroads of two different approaches: “failure-aware traffic engineering,” a technique that steers traffic to where the bandwidth resources are during fiber cuts, and “wavelength reconfiguration,” which restores failed bandwidth resources by reconfiguring the light. 

Though this combination is powerful, the underlying problem is mathematically difficult to solve: in computational complexity terms, it is NP-hard.

The team created a novel algorithm that can essentially create “LotteryTickets” as an abstraction for the “wavelength reconfiguration problem” on optical fibers and only feed essential information into the “traffic engineering problem.” This works alongside their “optical restoration method,” which moves the light from the cut fiber to “surrogate” healthy fibers to restore the network connectivity. The system also takes real-time traffic into account to optimize for maximum network throughput.
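As a toy illustration of the failure-aware idea (not ARROW's actual algorithm or its LotteryTickets abstraction), the sketch below models a tiny WAN as a graph, recomputes how much traffic can still be delivered after a fiber cut using a max-flow solver, and then adds restored capacity onto surviving "surrogate" links to mimic wavelength reconfiguration. The topology, capacities, and helper names are invented for the example.

```python
# Toy failure-aware rerouting sketch (illustrative, not ARROW itself): compare
# deliverable traffic before a fiber cut, after the cut, and after restoring
# capacity onto surviving links, using a max-flow computation.
import networkx as nx

def build_wan():
    g = nx.DiGraph()
    for u, v, cap in [("A", "B", 100), ("B", "C", 100), ("A", "D", 60), ("D", "C", 60)]:
        g.add_edge(u, v, capacity=cap)
    return g

def deliverable(g, src="A", dst="C"):
    value, _ = nx.maximum_flow(g, src, dst)
    return value

g = build_wan()
print("healthy network:", deliverable(g))      # 160 units of A->C traffic

g.remove_edge("B", "C")                        # fiber cut on the B->C link
print("after the cut:", deliverable(g))        # only 60 units survive via A->D->C

# Mimic wavelength reconfiguration: shift stranded capacity onto the
# surviving "surrogate" path instead of laying new fiber.
g["A"]["D"]["capacity"] += 40
g["D"]["C"]["capacity"] += 40
print("after restoration:", deliverable(g))    # 100 units recovered
```

The gap between the "after the cut" and "after restoration" figures is the bandwidth that reconfiguration wins back without deploying any new fiber.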

In large-scale simulations and on a testbed, ARROW carried 2 to 2.4 times more traffic without the need to deploy new fibers, while keeping the network highly reliable.

“ARROW can be used to improve service availability, and enhance the resiliency of the internet infrastructure against fiber cuts. It renovates the way we think about the relationship between failures and network management — previously failures were deterministic events, where failure meant failure, and there was no way around it except over-provisioning the network,” says MIT postdoc Zhizhen Zhong, the lead author on a new paper about ARROW. “With ARROW, some failures can be eliminated or partially restored, and this changes the way we think about network management and traffic engineering, opening up opportunities for rethinking traffic engineering systems, risk assessment systems, and emerging applications too.”

The design of today’s network infrastructures, both in data centers and in wide-area networks, still follows the “telephony model,” where network engineers treat the physical layer of networks as a static black box with no reconfigurability.

As a result, the network infrastructure is provisioned to carry the worst-case traffic demand under all possible failure scenarios, making it inefficient and costly. Yet modern networks run elastic applications that could benefit from a dynamically reconfigurable physical layer offering high throughput, low latency, and seamless recovery from failures, which is exactly what ARROW helps enable.

In traditional systems, network engineers decide in advance how much capacity to provide in the physical layer of the network. It might seem impossible to change the topology of a network without physically changing the cables, but since optical waves can be redirected using tiny mirrors, they’re capable of quick changes: no rewiring required. This is a realm where the network is no longer a static entity but a dynamic structure of interconnections that may change depending on the workload. 

Imagine a hypothetical subway system where some trains fail once in a while. The subway control unit wants to plan how to distribute passengers to alternative routes while considering all possible trains and the traffic on them. With ARROW, when a train fails, the control unit simply announces the best alternative routes to passengers, minimizing their travel time and avoiding congestion.

“My long-term goal is to make large-scale computer networks more efficient, and ultimately develop smart networks that adapt to the data and application,” says MIT Assistant Professor Manya Ghobadi, who supervised the work. “Having a reconfigurable optical topology revolutionizes the way we think of a network, as performing this research requires breaking orthodoxies established for many years in WAN deployments.”

To deploy ARROW in real-world wide-area networks, the team has been collaborating with Facebook and hopes to work with other large-scale service providers. “The research provides the initial insight into the benefits of reconfiguration. The substantial potential in reliability improvement is attractive to network management in production backbone,” says Ying Zhang, a software engineer manager at Facebook who collaborated on this research. 

“We are excited that there would be many practical challenges ahead to bring ARROW from research lab ideas to real-world systems that serve billions of people, and possibly reduce the number of service interruptions that we experience today, such as less news reports on how fiber cuts affect internet connectivity,” says Zhong. “We hope that ARROW could make our internet more resilient to failures with less cost.” 

Zhong wrote the paper alongside Ghobadi; MIT graduate student Alaa Khaddaj; and Facebook engineers Jonathan Leach, Ying Zhang, and Yiting Xia. They presented the research at ACM’s SIGCOMM conference.

This work was led by MIT in collaboration with Facebook. The technique is being evaluated for deployment at Facebook. Facebook provided resources for performing the research. The MIT affiliated authors were supported by Advanced Research Projects Agency–Energy, the Defense Advanced Research Projects Agency, and the U.S. National Science Foundation.



from ScienceBlog.com https://ift.tt/3gK936m

Drug delivery capsule could replace injections for protein drugs


In recent years, scientists have developed monoclonal antibodies — proteins that mimic the body’s own immune defenses — that can combat a variety of diseases, including some cancers and autoimmune disorders such as Crohn’s disease. While these drugs work well, one drawback to them is that they have to be injected.

A team of MIT engineers, in collaboration with scientists from Brigham and Women’s Hospital and Novo Nordisk, is working on an alternative delivery strategy that could make it much easier for patients to benefit from monoclonal antibodies and other drugs that usually have to be injected. They envision that patients could simply swallow a capsule that carries the drug and then injects it directly into the lining of the stomach.

“If we can make it easier for patients to take their medication, then it is more likely that they will take it, and healthcare providers will be more likely to adopt therapies that are known to be effective,” says Giovanni Traverso, the Karl van Tassel Career Development Assistant Professor of Mechanical Engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital.

In a study appearing today in Nature Biotechnology, the researchers demonstrated that their capsules could be used to deliver not only monoclonal antibodies but also other large protein drugs such as insulin, in pigs.

Traverso and Ulrik Rahbek, vice president at Novo Nordisk, are the senior authors of the paper. Former MIT graduate student Alex Abramson and Novo Nordisk scientists Morten Revsgaard Frederiksen and Andreas Vegge are the lead authors.

Targeting the stomach

Most large protein drugs can’t be given orally because enzymes in the digestive tract break them down before they can be absorbed. Traverso and his colleagues have been working on many strategies to deliver such drugs orally, and in 2019, they developed a capsule that could be used to inject up to 300 micrograms of insulin.

That pill, about the size of a blueberry, has a high, steep dome inspired by the leopard tortoise. Just as the tortoise is able to right itself if it rolls onto its back, the capsule is able to orient itself so that its needle can be injected into the lining of the stomach. In the original version, the tip of the needle was made of compressed insulin, which dissolved in the tissue after being injected into the stomach wall.

The new pill described in the Nature Biotechnology study maintains the same shape, allowing the capsule to orient itself correctly once it arrives in the stomach. However, the researchers redesigned the capsule interior so that it could be used to deliver liquid drugs, in larger quantities — up to 4 milligrams.

Delivering drugs in liquid form can help them reach the bloodstream more rapidly, which is necessary for drugs like insulin and epinephrine, the latter of which is used to treat allergic responses.

The researchers designed their device to target the stomach, rather than later parts of the digestive tract, because the amount of time it takes for something to reach the stomach after being swallowed is fairly uniform from person to person, Traverso says. Also, the lining of the stomach is thick and muscular, making it possible to inject drugs while mitigating harmful side effects.

The new delivery capsule is filled with fluid and also contains an injection needle and a plunger that helps to push the fluid out of the capsule. Both the needle and plunger are held in place by a pellet made of solid sugar. When the capsule enters the stomach, the humid environment causes the pellet to dissolve, pushing the needle into the stomach lining, while the plunger pushes the liquid through the needle. When the capsule is empty, a second plunger pulls the needle back into the capsule so that it can be safely excreted through the digestive tract.

Significant levels

In tests in pigs, the researchers showed that they could deliver a monoclonal antibody called adalimumab (Humira) at levels similar to those achieved by injection. This drug is used to treat autoimmune disorders such as inflammatory bowel disease and rheumatoid arthritis. They also delivered a type of protein drug known as a GLP-1 receptor agonist, which is used to treat type 2 diabetes.

“Delivery of monoclonal antibodies orally is one of the biggest challenges we face in the field of drug delivery science,” Traverso says. “From an engineering perspective, the ability to deliver monoclonal antibodies at significant levels really transforms how we start to think about the management of these conditions.”

Additionally, the researchers gave the animals capsules over several days and found that the drugs were delivered consistently each time. They also found no signs of damage to the stomach lining following the injections, which penetrate about 4.5 millimeters into the tissue.

David Brayden, a professor of advanced drug delivery at University College Dublin, who was not involved in the research, described the new approach as “a very exciting advance for the potential oral delivery of macromolecules. That similar blood levels to those arising from injections of these types of drugs can be achieved by stomach administration to large animals is a technical landmark for the field.”

The MIT team is now working with Novo Nordisk to further develop the system.

“Although it is still early days, we believe this device has the potential to transform treatment regimens across a range of therapeutic areas,” Rahbek says. “The ongoing research and development of this approach mean that several drugs that can currently only be administered via parenteral injections (non-oral routes) might be administered orally in the future. Our aim is to get the device into clinical trials as soon as possible.”

Other authors of the paper include MIT’s David H. Koch Institute Professor Robert Langer, Brian Jensen, Mette Poulsen, Brian Mouridsen, Mikkel Oliver Jespersen, Rikke Kaae Kirk, Jesper Windum, Frantisek Hubalek, Jorrit Water, Johannes Fels, Stefan Gunnarsson, Adam Bohr, Ellen Marie Straarup, Mikkel Wennemoes Hvitfeld Ley, Xiaoya Lu, Jacob Wainer, Joy Collins, Siddartha Tamang, Keiko Ishida, Alison Hayward, Peter Herskind, Stephen Buckley, and Niclas Roxhed.

The research was funded by Novo Nordisk, the National Institutes of Health, the National Science Foundation, MIT’s Department of Mechanical Engineering, Brigham and Women’s Hospital’s Division of Gastroenterology, and the Viking Olof Bjork scholarship trust.



from ScienceBlog.com https://ift.tt/2V06sxD

Turning cameras off during virtual meetings can reduce fatigue


More than a year after the pandemic resulted in many employees shifting to remote work, virtual meetings have become a familiar part of daily life. Along with that may come “Zoom fatigue” – a feeling of being drained and lacking energy following a day of virtual meetings.

New research conducted by Allison Gabriel, McClelland Professor of Management and Organizations and University Distinguished Scholar in the University of Arizona Eller College of Management, suggests that the camera may be partially to blame.

Gabriel’s research, published in the Journal of Applied Psychology, looks at the role of cameras in employee fatigue and explores whether these feelings are worse for certain employees.

“There’s always this assumption that if you have your camera on during meetings, you are going to be more engaged,” Gabriel said. “But there’s also a lot of self-presentation pressure associated with being on camera. Having a professional background and looking ready, or keeping children out of the room are among some of the pressures.”

After a four-week experiment involving 103 participants and more than 1,400 observations, Gabriel and her colleagues found that it is indeed more tiring to have your camera on during a virtual meeting.

“When people had cameras on or were told to keep cameras on, they reported more fatigue than their non-camera using counterparts,” Gabriel said. “And that fatigue correlated to less voice and less engagement during meetings. So, in reality, those who had cameras on were potentially participating less than those not using cameras. This counters the conventional wisdom that cameras are required to be engaged in virtual meetings.”
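For readers curious how findings like these can be analysed, the sketch below fits a random-intercept mixed model relating daily camera use to daily fatigue, with repeated observations nested within employees. The data are simulated, and the effect sizes, variable names, and model specification are illustrative assumptions only, not the analysis actually reported in the Journal of Applied Psychology.

```python
# Hedged sketch of a multilevel analysis on simulated data (not the authors'
# actual models): daily fatigue regressed on camera use, with a random
# intercept for each employee to account for repeated measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people, n_days = 103, 14                      # roughly matches 1,400+ observations
df = pd.DataFrame({
    "person": np.repeat(np.arange(n_people), n_days),
    "camera_on": rng.integers(0, 2, n_people * n_days),
})
person_baseline = rng.normal(0.0, 0.5, n_people)[df["person"]]  # stable individual differences
df["fatigue"] = 3.0 + 0.4 * df["camera_on"] + person_baseline + rng.normal(0.0, 1.0, len(df))

model = smf.mixedlm("fatigue ~ camera_on", df, groups=df["person"]).fit()
print(model.summary())   # the camera_on coefficient is the simulated fatigue difference
```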

Gabriel also found that these effects were stronger for women and for employees newer to the organization, likely due to added self-presentation pressures.

“Employees who tend to be more vulnerable in terms of their social position in the workplace, such as women and newer, less tenured employees, have a heightened feeling of fatigue when they must keep cameras on during meetings,” Gabriel said. “Women often feel the pressure to be effortlessly perfect or have a greater likelihood of child care interruptions, and newer employees feel like they must be on camera and participate in order to show productiveness.”

Gabriel suggests that expecting employees to turn cameras on during Zoom meetings is not the best way to go. Rather, she says employees should have the autonomy to choose whether or not to use their cameras, and others shouldn’t make assumptions about distractedness or productivity if someone chooses to keep the camera off.

“At the end of the day, we want employees to feel autonomous and supported at work in order to be at their best. Having autonomy over using the camera is another step in that direction,” Gabriel said.

This research was co-authored by Eller doctoral student Mahira Ganster, Kristen M. Shockley with the University of Georgia, Daron Robertson with Tucson-based health care services company BroadPath Inc., Christopher Rosen with the University of Arkansas, Nitya Chawla with Texas A&M University and Maira Ezerins with the University of Arkansas.



from ScienceBlog.com https://ift.tt/3gJQi33
