Study reveals chemical link between wildfire smoke and ozone depletion

The Australian wildfires in 2019 and 2020 were historic for how far and fast they spread, and for how long and powerfully they burned. All told, the devastating “Black Summer” fires blazed across more than 43 million acres of land, and killed or displaced nearly 3 billion animals. The fires also injected over 1 million tons of smoke particles into the atmosphere, reaching up to 35 kilometers above Earth’s surface — a mass and reach comparable to that of an erupting volcano.

Now, atmospheric chemists at MIT have found that the smoke from those fires set off chemical reactions in the stratosphere that contributed to the destruction of ozone, which shields the Earth from incoming ultraviolet radiation. The team’s study, appearing this week in the Proceedings of the National Academy of Sciences, is the first to establish a chemical link between wildfire smoke and ozone depletion.

In March 2020, shortly after the fires subsided, the team observed a sharp drop in nitrogen dioxide in the stratosphere, which is the first step in a chemical cascade that is known to end in ozone depletion. The researchers found that this drop in nitrogen dioxide directly correlates with the amount of smoke that the fires released into the stratosphere. They estimate that this smoke-induced chemistry depleted the column of ozone by 1 percent.

To put this in context, they note that the phaseout of ozone-depleting gases under a worldwide agreement to stop their production has led to about a 1 percent ozone recovery from earlier ozone decreases over the past 10 years — meaning that the wildfires canceled those hard-won diplomatic gains for a short period. If future wildfires grow stronger and more frequent, as they are predicted to do with climate change, ozone’s projected recovery could be delayed by years. 

“The Australian fires look like the biggest event so far, but as the world continues to warm, there is every reason to think these fires will become more frequent and more intense,” says lead author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT. “It’s another wakeup call, just as the Antarctic ozone hole was, in the sense of showing how bad things could actually be.”

The study’s co-authors include Kane Stone, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, along with collaborators at multiple institutions including the University of Saskatchewan, Jinan University, the National Center for Atmospheric Research, and the University of Colorado at Boulder.

Chemical trace

Massive wildfires are known to generate pyrocumulonimbus — towering clouds of smoke that can reach into the stratosphere, the layer of the atmosphere that lies between about 15 and 50 kilometers above the Earth’s surface. The smoke from Australia’s wildfires reached well into the stratosphere, as high as 35 kilometers.

In 2021, Solomon’s co-author, Pengfei Yu at Jinan University, carried out a separate study of the fires’ impacts and found that the accumulated smoke warmed parts of the stratosphere by as much as 2 degrees Celsius — a warming that persisted for six months. The study also found hints of ozone destruction in the Southern Hemisphere following the fires.

Solomon wondered whether smoke from the fires could have depleted ozone through a chemistry similar to volcanic aerosols. Major volcanic eruptions can also reach into the stratosphere, and in 1989, Solomon discovered that the particles in these eruptions can destroy ozone through a series of chemical reactions. As the particles form in the atmosphere, they gather moisture on their surfaces. Once wet, the particles can react with circulating chemicals in the stratosphere, including dinitrogen pentoxide, which reacts with the particles to form nitric acid.

Normally, sunlight breaks dinitrogen pentoxide down into various nitrogen species, including nitrogen dioxide, a compound that binds with chlorine-containing chemicals in the stratosphere. When volcanic aerosols convert dinitrogen pentoxide into nitric acid instead, nitrogen dioxide drops, and the chlorine compounds take another path, morphing into chlorine monoxide, the main human-made agent that destroys ozone.

“This chemistry, once you get past that point, is well-established,” Solomon says. “Once you have less nitrogen dioxide, you have to have more chlorine monoxide, and that will deplete ozone.”
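In simplified form, the cascade the researchers describe follows standard stratospheric chemistry (summarized here for illustration; these equations are not reproduced from the paper):

$$\mathrm{N_2O_5 + H_2O \;\xrightarrow{\text{wet particle surface}}\; 2\,HNO_3}$$

$$\mathrm{NO_2 + ClO + M \;\longrightarrow\; ClONO_2 + M}$$

$$\mathrm{Cl + O_3 \;\longrightarrow\; ClO + O_2 \qquad\quad ClO + O \;\longrightarrow\; Cl + O_2}$$

The first reaction, on moist particle surfaces, removes reactive nitrogen. With less nitrogen dioxide available, the second, reservoir-forming reaction slows, leaving more chlorine in the active chlorine monoxide form that drives the catalytic ozone-destroying cycle in the last line.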

Cloud injection

In the new study, Solomon and her colleagues looked at how concentrations of nitrogen dioxide in the stratosphere changed following the Australian fires. If these concentrations dropped significantly, it would signal that wildfire smoke depletes ozone through the same chemical reactions as some volcanic eruptions.

The team looked to observations of nitrogen dioxide taken by three independent satellites that have surveyed the Southern Hemisphere for varying lengths of time. They compared each satellite’s record in the months and years leading up to and following the Australian fires. All three records showed a significant drop in nitrogen dioxide in March 2020. For one satellite’s record, the drop represented a record low among observations spanning the last 20 years.

To check that the nitrogen dioxide decrease was a direct chemical effect of the fires’ smoke, the researchers carried out atmospheric simulations using a global, three-dimensional model that simulates hundreds of chemical reactions in the atmosphere, from the surface on up through the stratosphere.

The team injected a cloud of smoke particles into the model, simulating what was observed from the Australian wildfires. They assumed that the particles, like volcanic aerosols, gathered moisture. They then ran the model multiple times and compared the results to simulations without the smoke cloud.

In every simulation incorporating wildfire smoke, the team found that as the amount of smoke particles increased in the stratosphere, concentrations of nitrogen dioxide decreased, matching the observations of the three satellites.

“The behavior we saw, of more and more aerosols, and less and less nitrogen dioxide, in both the model and the data, is a fantastic fingerprint,” Solomon says. “It’s the first time that science has established a chemical mechanism linking wildfire smoke to ozone depletion. It may only be one chemical mechanism among several, but it’s clearly there. It tells us these particles are wet and they had to have caused some ozone depletion.”

She and her collaborators are looking into other reactions triggered by wildfire smoke that might further contribute to stripping ozone. For the time being, the major driver of ozone depletion remains chlorofluorocarbons, or CFCs — chemicals such as old refrigerants that have been banned under the Montreal Protocol, though they continue to linger in the stratosphere. But as global warming leads to stronger, more frequent wildfires, their smoke could have a serious, lasting impact on ozone.

“Wildfire smoke is a toxic brew of organic compounds that are complex beasts,” Solomon says. “And I’m afraid ozone is getting pummeled by a whole series of reactions that we are now furiously working to unravel.”

This research was supported in part by the National Science Foundation and NASA.




‘Fingerprinting’ minerals to better understand how they are affected by meteorite collisions

When a space rock survives the turbulent passage through Earth’s atmosphere and strikes the surface, it generates shockwaves that can compress and transform minerals in the planet’s crust. Since these changes depend on the pressure produced upon impact, experts can use features in Earth’s minerals to learn about the meteorite’s life story, from the moment of collision all the way back to the conditions from which the celestial bodies originate.

“If you compare an average mineral to one that’s been involved in a meteoritic impact, you’ll find some unique features in the shocked one,” says Arianna Gleason, a scientist at the Department of Energy’s SLAC National Accelerator Laboratory. “On the outside, they retain some of their original crystalline form, but inside they become disordered and full of beautiful interlocking linear formations called lamellae.”

Plagioclase, the most abundant mineral in the Earth’s crust, is one of the most commonly used minerals for painting a fuller picture of meteoritic impacts. However, the pressure at which this mineral loses its crystalline shape and becomes disordered – and how this process, called amorphization, plays out – is the subject of ongoing debate.

In a new experiment, SLAC researchers mimicked meteoritic impacts in the lab to explore how plagioclase transforms during shock compression. They discovered that amorphization begins at pressures much lower than previously assumed. They also discovered that, upon release, the material partially recrystallizes back into the original shape, demonstrating a memory effect that could potentially be harnessed for materials science applications. Their results, published today in Meteoritics and Planetary Science, could lead to more accurate models for learning about meteoritic impacts, including how fast meteors were traveling and the pressure they produced upon collision.

“The development of new tools and techniques allows us to recreate these impacts in the lab to get new information and see what’s happening in even greater detail,” says SLAC scientist Roberto Alonso-Mori, who co-led the research. “It really brings astronomy and planetary science right to our fingertips.”

Fingerprinting minerals

Using the Matter in Extreme Conditions (MEC) instrument at SLAC’s Linac Coherent Light Source (LCLS) X-ray laser, the researchers struck a sample of plagioclase with a high-power optical laser to send a shockwave through it. As the shockwave traveled through the sample, the researchers hit the sample with ultrafast X-ray laser pulses from LCLS at different points in time. Some of these X-rays then scattered into a detector and formed diffraction patterns.

“Just like every person has their own set of fingerprints, the atomic structure of each mineral is unique,” says Gleason. “Diffraction patterns reveal that fingerprint, allowing us to follow how the sample’s atoms rearranged in response to the pressure created by the shockwave.”

The researchers could also tune the optical laser to different energies to see how the diffraction pattern changed at different pressures.

“Our experiment allowed us to watch the amorphization as it actually happened,” Alonso-Mori says. “We discovered that it actually starts at a lower pressure than we thought. We also found that the starting and ending ‘fingerprints’ were very similar, giving us evidence of a memory effect in the material. It changes how we think about the different shock stages of these processes and will help us refine the models we use to understand these impacts.”

Beauty from destruction

In follow-up experiments, the researchers plan to capture and analyze information about debris kicked up during the impact. This would allow them to get a more complete picture of the impact and do side-by-side comparisons with what experts might find in the field to further improve models of meteoritic collisions. They also plan to explore other minerals and use more powerful lasers and larger volumes of material, which could provide insight into larger-scale processes such as planet formation.

Gleason adds that she’s excited about the light this research could shed on minerals found not only on Earth but also on other planets and extraterrestrial bodies. Further insights into how these minerals are affected by extreme impacts could unlock new information about astrophysical phenomena.

“I remember taking mineralogy and petrology as an undergrad and looking at these minerals through a microscope. As we changed the lighting, we illuminated all these beautiful details,” she says. “And now we’re able to understand, on an atomic level, how some of these intricate, gorgeous structures form, and in fact it correlates to this extreme, earth-shattering process. It’s fascinating that something so destructive could generate something so delicate and beautiful.”

LCLS is a DOE Office of Science user facility. This research was supported by the Office of Science.

Citation: A. Gleason et al. Meteoritics and Planetary Science, 16 February 2022 (doi.org/10.1111/maps.13785)

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.




The life of Pi: Ten years of Raspberry Pi

Eben Upton, Raspberry Pi co-founder

The most successful computer ever to come out of the UK celebrates its tenth anniversary this year.

Which is a more brutal environment – a large factory or a child’s bedroom? And what does one have to do with the other?

When Eben Upton, co-founder of Raspberry Pi and a University of Cambridge alumnus, was thinking about what he wanted an ultra-low-cost computer to be, one of the key requirements was that it be durable: able to withstand being tossed into a backpack hundreds of times.

Now, more than a decade later, a computer that was designed in part to withstand the rough and tumble of childhood has found a home in tens of thousands of industrial applications throughout the world, representing around 40% of its annual sales.

Raspberry Pi has created a whole new class of computing device, transforming the way engineers design control systems in industry, and has become a standard component of intelligent interfacing. Its adaptability, stability and low price make it ideal for applications including electric vehicle charging, the Internet of Things (IoT), and retrofitting factory machinery so that it can be monitored digitally to spot faults that slow down production.

Today, the $35 credit-card-sized device is the best-selling computer to come out of the UK. The Raspberry Pi has also created over 300 jobs, both at Sony Europe B.V.’s factory in Pencoed, Wales, where it’s manufactured, and at the Raspberry Pi Foundation, a charity based in Cambridge that promotes the study of basic computer science in schools.

Ten years after the first Raspberry Pi was shipped in 2012, more than 40 million of the devices have been sold worldwide, creating a market worth in excess of $1 billion, plus more in peripherals.

Raspberry Pi wasn’t invented to boost shareholder value or turn its founders into billionaires: it was initially created to increase the number and calibre of students applying to study computer science at Cambridge, to give young people access to programmable hardware at a low price, and to equip a new generation with the coding skills that are so important in today’s economy. All profits are returned to the Raspberry Pi Foundation to support its educational programmes worldwide.

Raspberry Pi has helped kick-start a renewed interest in coding for young people through the development of online resources, coding clubs, programmes and competitions, which have reached millions of people from more than 100 countries. It runs two large networks of after-school clubs: CoderDojo and Code Club, and produces a range of online resources to support self-directed learning around the world.

The Raspberry Pi Foundation is part of a consortium — together with STEM Learning and BCS, The Chartered Institute for IT — that is running the National Centre for Computing Education in England. This is a government-funded initiative providing comprehensive support for schools and colleges in England to offer a world-leading computing education, from Key Stage 1 through to A level.

Through the National Centre for Computing Education, Raspberry Pi is supporting teachers by providing an extensive range of professional development and certification, bursaries for training, curriculum and teaching resources, community support, and more. To date, the consortium has supported more than 26,000 teachers from over 12,000 primary and secondary schools in England.

“Given how far we’ve come, it’s sort of funny to remember how parochial our ambitions were at the start,” said Upton, who was a Director of Studies at St John’s College after completing his PhD at the Department of Computer Science and Technology in 2006.

“There was a massive decline in the number of people applying to study computer science, which was sort of heart-breaking in a place like Cambridge, with its incredibly rich computing heritage. There was a feeling that if we could get a program or a piece of hardware into the hands of young people at the right point in their lives, we might be able to do something to reverse that decline.”

Upton grew up with a BBC Micro: one of the two great Cambridge-designed computers of the 1980s (the second being the Sinclair Spectrum) that taught a whole generation of UK computer hobbyists how to code. The Micro not only sparked Upton’s love of programming, but made him into a major enthusiast for low-cost computing which young people could own for themselves.

“There were lots of conversations going around at the time, that we should remake the BBC Micro, that it might be an answer to get young people interested in coding,” said Upton. “We thought if anyone was going to do it, it should be Cambridge.”

Designing a low-cost, high-functioning computer required expertise in technology, trends, education and manufacturing. The team – including Professors Robert Mullins and Alan Mycroft from Cambridge’s Department of Computer Science and Technology, Cambridge serial entrepreneur and business angel Jack Lang, as well as other Cambridge computer scientists, engineers, and manufacturing entrepreneurs – came up with four interlocking requirements that the Raspberry Pi had to meet.

The first was that it should be a programmable piece of hardware. The second was that it should be fun: children in the 1980s might have become interested in computers because of games, but the nature of computers at the time meant that many of them got into programming almost by accident. The Raspberry Pi had to have that same degree of fun to it and be able to do all of the things that young people might expect from a PC.

The third requirement was affordability: the team settled on a price point of $25, partly because that was roughly the cost of a textbook, and it was low enough that most families could afford it, and schools would be able to subsidise the small number of families who couldn’t.

The last requirement was robustness. “We really wanted children to have their own Raspberry Pi, and that meant it had to withstand being put in a school bag and taken out a thousand times,” said Upton.

“People saw that it was stable and capable, which made it ideal for all kinds of applications,” said Roger Thornton, Raspberry Pi’s Director of Applications. “Raspberry Pi is now so much more than a fun way for young people to learn how to code. It’s reliable and robust – if you bought one of the original Raspberry Pis back in 2012, it will still work with the 2022 operating system. That longevity is important for industrial applications.”

From the design of the first prototype in 2006 to 2011, the team worked to satisfy the four requirements. Before the Raspberry Pi, no one had produced a low-cost computer that could run an operating system like Linux. The aims of the project were highly democratic: the team wanted to make ownership of an accessible and low-cost device available to all, which in turn would open up the world of programming to people from different ages and backgrounds.

The team’s design was eventually realised with just three chips. With the addition of a low-cost keyboard and a TV for display, it can act as a fully functional programmable computer. It can also be networked and used in groups for more advanced students or for industrial use.

The Raspberry Pi launched in 2012. In 2013, it won the INDEX Design Award, and in 2017, it won the Royal Academy of Engineering’s MacRobert Award, the UK’s longest-running prize for engineering innovation.

At first, the Raspberry Pi was manufactured in China, but production was moved to Sony Europe B.V.’s UK Technology Centre (Sony UK TEC) at Pencoed in south Wales in 2012. “The intention was always to make it in the UK,” said Thornton. “We can easily visit the factory and understand what’s going on there, which has been especially useful during the pandemic.”

Almost all of the actual manufacturing of Raspberry Pis is automated: the people who work in the factory mostly monitor and look after the machines. “This automated approach means that Wales is the most cost-effective place in the world to make a Raspberry Pi, with improved throughput and quality,” said Thornton. “We’ve brought good-paying jobs to the UK, while still producing the low-cost computer that was our original vision.”

“We knew from the start that keeping the cost of manufacturing as low as possible was imperative,” said Sony UK TEC Managing Director Steve Dalton OBE. “This is indeed a challenge with no simple or one-off solution.

“Finding ways to make our process more efficient today than it was the day before and exploring every aspect of our processes – from small changes in the day-to-day, to major automation projects – allows us to maintain a low manufacturing cost while continuing to grow our team.”

“Raspberry Pi is a British product, created by British engineers, that has found a huge global market,” said Upton. “South Wales has a long tradition of manufacturing, and we are pleased that the success of Raspberry Pi has secured new, skilled jobs in the area. Global success can be transformative within local communities, and we’re proud to be part of that here in Pencoed.”

Another unique aspect of the Raspberry Pi model is that it was set up as a non-profit: the Raspberry Pi Foundation. “Being set up as a non-profit means we had a different set of drivers than a for-profit company: maybe that’s encouraged us to go harder than we might have done if we’d had a narrow focus on short-term profitability,” said Upton. “But because of the success of the product and the business, the Raspberry Pi Foundation has been able to pursue a range of activities beyond anything we first imagined.”

As it enters its second decade, Raspberry Pi is now making and designing its own chips and microcontrollers, in addition to computers, opening up a whole new range of potential applications.

Raspberry Pi is part of a coalition that has delivered curriculum reform, teacher training reform, and a revival in hobbyist computing, with participation now exceeding that of the 1980s. The young coders of the 2010s and 2020s, however, are far more diverse in terms of gender, ethnicity, background and geography.

“Back in 2007, we felt a little bit like a voice in the wilderness, but we were far from the only people to perceive the gaps in computer education,” said Upton. “By the time we really started deploying Raspberry Pi at scale in 2012, there were a lot of other organisations doing the same thing, and that’s been a great thing to see.”

And what of the team’s original goal: to attract more students to study computer science at Cambridge? From the time that Upton and the team started working on the Raspberry Pi to the tenth anniversary of its launch, the number of students applying to Cambridge’s undergraduate computer science course has increased eight-fold, from a low point of around 200 applicants in 2007 to more than 1,600 in 2021.

“It’s not something Raspberry Pi can claim full or even the majority of responsibility for,” said Upton. “But it’s been wonderful to be part of a community and a coalition that’s been able to deliver that kind of change over the course of a decade.”




Molecule Produced After Skin Injury is Shown to Accelerate Hair Growth

A molecule that skin tissue produces after injury appears to accelerate hair growth, suggesting a potential new target to reverse hair loss, researchers at Duke University School of Medicine report.

In a study appearing online Feb. 24 in the journal Stem Cell Reports, the researchers describe a unique role for a molecule called thymic stromal lymphopoietin (TSLP). They found TSLP plays a key role in prompting hair growth after an injury and during the normal hair growth cycle.

“Skin repair after injury is a highly complex process,” said lead author Jessica L. Shannon, a graduate student in the departments of Dermatology and Immunology at Duke University School of Medicine. “TSLP does not show up immediately after injury, but appears about four days into the healing process. Previous studies described how TSLP triggers immune responses involved in tissue repair, so we initially questioned whether TSLP can speed up wound healing.”

Surprisingly, Shannon said, the molecule did not hasten wound repair. Instead, she and colleagues found that it interacted with stem cells residing in hair follicles, accelerating the onset of hair growth.

In laboratory studies using mice, the researchers found that an injection of TSLP between layers of the skin promoted hair growth. Conversely, when they neutralized the TSLP receptor in skin to block its action, hair growth after injury was inhibited.

“This work is a result of serendipitous observation that changed the direction of my thesis project and how we think about epithelial cell interactions with immune modulators during regeneration,” Shannon said.

Shannon said the research is now aiming to define the precise mechanisms involved, which could then lead to potential new approaches to harness the molecule to promote hair growth and regeneration.

In addition to Shannon, study authors include David L. Corcoran, John C. Murray, Steven F. Ziegler, Amanda S. MacLeod, and Jennifer Y. Zhang.

The study received support from the National Institute of Allergy and Infectious Diseases (R01-AI139207), the National Institute of Arthritis and Musculoskeletal and Skin Diseases (R01-AR068991) and the Department of Dermatology of Duke University School of Medicine.




Repurposing FDA-approved drugs may help combat COVID-19

In-cell protease assay showing the inhibition of SARS-CoV-2 Mpro protease activity by MG-101. When live cells expressing SARS-CoV-2 Mpro tagged with an emerald fluorescent protein (green) are treated with MG-101, the protease activity of Mpro is inhibited, and the fluorescent protein localizes to the endoplasmic reticulum (green). When untreated (DMSO control), the enzyme is active, and the fluorescent protein localizes to the nucleus (blue). Credit: Penn State. All Rights Reserved.

Several FDA-approved drugs — including for type 2 diabetes, hepatitis C and HIV — significantly reduce the ability of the Delta variant of SARS-CoV-2 to replicate in human cells, according to new research led by scientists at Penn State. Specifically, the team found that these drugs inhibit certain viral enzymes, called proteases, that are essential for SARS-CoV-2 replication in infected human cells.

“The SARS-CoV-2 vaccines target the spike protein, but this protein is under strong selection pressure and, as we have seen with Omicron, can undergo significant mutations,” said Joyce Jose, assistant professor of biochemistry and molecular biology, Penn State. “There remains an urgent need for SARS-CoV-2 therapeutic agents that target parts of the virus other than the spike protein that are not as likely to evolve.”

Previous research has demonstrated that two SARS-CoV-2 enzymes — the proteases Mpro and PLpro — are promising targets for antiviral drug development. Pfizer’s COVID-19 therapy Paxlovid, for example, targets Mpro. According to Jose, these enzymes are relatively stable; therefore, they are unlikely to develop drug-resistant mutations rapidly.

Katsuhiko Murakami, professor of biochemistry and molecular biology, Penn State, noted that these virus proteases, because of their capabilities to cleave, or cut, proteins, are essential for SARS-CoV-2 replication in infected cells.

“SARS-CoV-2 produces long proteins, called polyproteins, from its RNA genome that must be cleaved into individual proteins by these proteases in an ordered fashion leading to the formation of functional virus enzymes and proteins to start virus replication once it enters a cell,” Murakami explained. “If you inhibit one of these proteases, further spread of SARS-CoV-2 in the infected person could be stopped.”

The findings were published Feb. 25 in the journal Communications Biology.

The team designed an assay to rapidly identify inhibitors of the Mpro and PLpro proteases in live human cells.

“Although other assays are available, we designed our novel assay so it could be conducted in live cells, which enabled us to simultaneously measure the toxicity of the inhibitors to human cells,” said Jose.

The researchers used their assay to test a library of 64 compounds — including inhibitors of HIV and hepatitis C proteases; cysteine proteases, which occur in certain protozoan parasites; and dipeptidyl peptidase, a human enzyme involved in type 2 diabetes — for their ability to inhibit Mpro or PLpro. From the 64 compounds, the team identified eleven that affected Mpro activity and five that affected PLpro activity based on a cut-off of 50% reduction in protease activity with 90% cell viability.
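Expressed as a simple filter, that selection rule looks like the sketch below; the compound names and numbers are hypothetical placeholders, not data from the study.

```python
# Hypothetical screening records: "activity_remaining" is protease activity relative
# to untreated controls, "viability" is the fraction of cells that remain viable.
screening_results = [
    {"compound": "candidate_A", "activity_remaining": 0.35, "viability": 0.95},
    {"compound": "candidate_B", "activity_remaining": 0.60, "viability": 0.97},
    {"compound": "candidate_C", "activity_remaining": 0.20, "viability": 0.70},
]

# Hit criterion described above: at least a 50% reduction in protease activity
# (i.e. no more than 50% activity remaining) with at least 90% cell viability.
hits = [r["compound"]
        for r in screening_results
        if r["activity_remaining"] <= 0.50 and r["viability"] >= 0.90]
print(hits)  # -> ['candidate_A']
```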

Anoop Narayanan, associate research professor of biochemistry and molecular biology, monitored the activity of the compounds using live confocal microscopy.

“We designed the experiment so that if the compound was affecting the proteases, you would see fluorescence in certain areas of the cell,” said Narayanan.

Next, the team evaluated the antiviral activity of the 16 PLpro and Mpro inhibitors against SARS-CoV-2 viruses in live human cells in a BSL-3 facility, the Eva J. Pell ABSL-3 Laboratory for Advanced Biological Research at Penn State, and discovered that eight of them had dose-dependent antiviral activities against SARS-CoV-2. Specifically, they found that Sitagliptin and Daclatasvir inhibit PLpro, and MG-101, Lycorine HCl and Nelfinavir mesylate inhibit Mpro. Of these, the team found that MG-101 also hindered the virus’s ability to infect cells by inhibiting protease processing of the spike protein.

“We found that when the cells were pretreated with the selected inhibitors, only MG-101 affected the virus’ entry into cells,” said Narayanan.

In addition, the researchers found that treating cells with a combination of Mpro and PLpro inhibitors had an additive antiviral effect, providing even greater inhibition of SARS-CoV-2 replication.

“In cell culture, we showed that if you combine Mpro and PLpro inhibitors, you have a stronger effect on the virus without increasing toxicity,” said Jose. “This combination inhibition is highly potent.”

To investigate the mechanism by which MG-101 inhibits the activity of Mpro protease, the scientists, including Manju Narwal, postdoctoral scholar in biochemistry and molecular biology, used X-ray crystallography to obtain a high-resolution structure of MG-101 in complex with Mpro.

“We were able to see how MG-101 was interacting with the active site of Mpro,” said Narwal. “This inhibitor mimics the polyprotein and binds in a similar manner to the protease, thereby blocking the protease from binding to and cutting the polyprotein, which is an essential step in the virus’s replication.”

Murakami added, “By understanding how the MG-101 compound binds to the active site, we can design new compounds that may be even more effective.”

Indeed, the team is in the process of designing new compounds based on the structures they determined by X-ray crystallography. They also plan to test in mice the drug combinations that they have already shown to be effective in vitro.

Although the scientists studied the Delta variant of SARS-CoV-2, they said the drugs will likely be effective against Omicron and future variants because they target parts of the virus that are unlikely to mutate significantly.

“The development of broad-spectrum antiviral drugs against a wide range of coronaviruses is the ultimate treatment strategy for circulating and emerging coronavirus infections,” said Jose. “Our research shows that repurposing certain FDA-approved drugs that demonstrate effectiveness at inhibiting the activities of Mpro and PLpro may be a useful strategy in the fight against SARS-CoV-2.”

Other authors on the paper include Sydney A. Majowicz, graduate student, and Shay A. Toner, undergraduate student, Penn State; Carmine Varricchio, postdoctoral research associate, and Andrea Brancale, professor of medicinal chemistry, Cardiff University; and Carlo Ballatore, professor of medicinal chemistry, University of California, San Diego.

The National Institutes of Health, Welsh Government Office for Science and Huck Institutes of the Life Sciences at Penn State (COVID-19 Seed Grant for Jose Laboratory) supported this research.




Incidence of COVID-19 was 8 Times Higher in Unvaccinated vs. Vaccinated Students

Unvaccinated students had eight times the incidence of COVID-19 infection compared to vaccinated students in a North Carolina independent school, according to a study by the ABC Science Collaborative appearing online Feb. 22 in the journal Pediatrics.

Researchers analyzed COVID-19 data from more than 1,100 students in grades 6-12 from Aug. 1-Nov. 12, 2021. During the study period, the Centers for Disease Control and Prevention classified COVID-19 county transmission as high, and the Delta variant comprised more than 99% of infections in the region.

School policy required universal masking indoors after Aug. 9, 2021. The school’s ventilation system used upgraded air filters but did not install high efficiency particulate air (HEPA) filters. Physical distancing was minimal, and there was no routine surveillance testing of students or staff.

As of November 2021, the school reported 829 (73.5%) students in grades 6-12 were vaccinated and 299 (26.5%) were unvaccinated. Twenty unvaccinated students reported a COVID-19 infection during the study period, compared to seven vaccinated students. Among the unvaccinated students who tested positive for COVID-19, 16 were symptomatic, compared to five of the vaccinated students.

Of the 27 infections, only two were classified as within-school transmissions, both a result of unmasked exposures to unvaccinated cases.

Vaccine effectiveness against COVID-19 infection in this study was 88%, providing evidence that vaccination is a critical component of safely continuing in-person education.

Unvaccinated students had eight times higher incidence of documented COVID-19 infection. Less than 1% of vaccinated students reported infection.
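A back-of-the-envelope check using the counts reported above reproduces those headline figures; the study's formal analysis accounts for follow-up time and other factors, so this is only a rough illustration.

```python
# Rough check of the reported figures using the raw counts from the study period.
vaccinated, unvaccinated = 829, 299
cases_vaccinated, cases_unvaccinated = 7, 20

inc_vax = cases_vaccinated / vaccinated        # about 0.8% of vaccinated students
inc_unvax = cases_unvaccinated / unvaccinated  # about 6.7% of unvaccinated students

rate_ratio = inc_unvax / inc_vax               # roughly 8 times higher incidence
crude_ve = 1 - inc_vax / inc_unvax             # about 0.87, close to the reported 88%

print(f"Vaccinated: {inc_vax:.1%}, unvaccinated: {inc_unvax:.1%}, "
      f"ratio: {rate_ratio:.1f}, crude VE: {crude_ve:.0%}")
```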

“These findings indicate that vaccination, along with other school-based safety measures, such as masking, play a critical role in minimizing transmissions within schools and keeping students in school,” said Ibukun Kalu, M.D., assistant professor in the Department of Pediatrics at Duke University School of Medicine. “As school districts lift universal masking policies, being vaccinated becomes the strongest tool to prevent COVID-19 in students.”

Providence Day School student Pavan Thakkar, in partnership with the ABC Science Collaborative, conceptualized and designed the study, drafted the initial manuscript, reviewed and revised the manuscript, designed the data collection instruments, collected data, and carried out the initial analyses.




Colossal black holes locked in dance at heart of galaxy

Caught in an epic cosmic waltz, two supermassive black holes appear to be orbiting around each other every two years. A team of researchers has discovered the pair of supermassive black holes caught in the act of merging 13 billion light-years away.

 

The two massive bodies are each hundreds of millions of times the mass of our sun and span a distance roughly fifty times the size of our own solar system. When the pair merge in roughly 10,000 years, the collision is expected to shake space and time itself, sending gravitational waves across the universe.

The study, which uses University of Michigan data collected at the now-closed U-M Radio Astronomy Observatory at the Peach Mountain Observatory, was led by a team of astronomers at Caltech and includes U-M astronomer and research scientist Margo Aller.

The researchers discovered evidence for this scenario within a fiercely energetic object known as a quasar. Quasars are active cores of galaxies in which a supermassive black hole is siphoning material from a disk encircling it.

In some quasars, the supermassive black hole creates a jet shooting out at near the speed of light. PKS 2131-021 belongs to a subclass of quasars called blazars in which the jet is pointing towards the Earth. Astronomers knew that quasars could possess two orbiting supermassive black holes but finding direct evidence for this has proven difficult.

Reporting in The Astrophysical Journal Letters, the researchers argue that PKS 2131-021 is the second known candidate for a pair of supermassive black holes caught merging. The first candidate pair, within a quasar called OJ 287, orbits at greater separations, circling each other roughly every nine years.

“A lot of the information in the radio band of light that we have on blazars actually has come out of this wonderful project that we had at the Peach Mountain site,” Aller said. “Since you can’t resolve them, you have to look for indirect evidence for the presence of a binary supermassive black hole system, and that has come from things such as looking for the effect on motions of nearby stars or looking at spectral lines or looking for repeating flares with the same spacing and shape.”

The tell-tale evidence for the study’s finding came in the form of a light curve that spans 45 years. According to the study, a powerful jet emanating from one of the two black holes within PKS 2131-021 is shifting back and forth due to the pair’s orbital motion. This causes periodic changes in the quasar’s radio brightness.

Five different observatories and one spacecraft registered these oscillations: Caltech’s Owens Valley Radio Observatory (OVRO) made the discovery, and very strong supporting evidence that clinched it came from the University of Michigan Radio Astronomy Observatory (UMRAO), MIT’s Haystack Observatory, the Very Long Baseline Array (VLBA) of the National Radio Astronomy Observatory, Metsähovi Observatory in Finland, and the Wide-field Infrared Survey Explorer (WISE).

“When we realized that the peaks and troughs of the light curve detected from recent times matched the peaks and troughs observed between 1975 and 1983, we knew something very special was going on,” said Sandra O’Neill, lead author of the new study and an undergraduate student at Caltech who is mentored by Tony Readhead, professor emeritus of astronomy.

The combination of the radio data yields a nearly perfect sinusoidal light curve unlike anything observed from quasars before. The UMRAO data are particularly important, Aller says, because they overlap with data from the Haystack Observatory and OVRO, providing a continuous light curve. Additionally, researchers need observations of many flares for a convincing result, requiring many decades of observation.

“The problem is that you need a very long time-base of data to get a significant result—and the paper has this absolutely gorgeous light curve of 45 years,” Aller said. “The results show this behavior. It shows that it’s persistent, and 31 years of that light curve are the Michigan data. The probability of showing the same repeating pattern in this study by chance would be 1 in 635,000.”

Revealing the 45-Year Light Curve

Readhead says the discoveries unfolded like a “good detective novel,” beginning in 2008 when he and colleagues began using the 40-meter telescope at OVRO to study how black holes convert material they “feed” on into relativistic jets, or jets traveling at speeds up to 99.9% that of light. They had been monitoring the brightness of more than 1,000 blazars for this purpose when, in 2020, they noticed a unique case.

PKS 2131 was varying not just periodically, but sinusoidally, meaning that the powerful jet emanating from one of the black holes in the pair was shifting in a predictable way because of the orbital motion of the black hole. The question, Readhead says, then became how long has this sine wave pattern been going on?

The research team went through archival radio data to look for past peaks in the light curves that matched predictions based on the more recent OVRO observations. They looked through data from the VLBA and found that the data confirmed the peaks in the UMRAO and OVRO data. These data show that the variable signal comes from the inner region of the jet.

Aller began working at the UMRAO in the mid-1970s after getting her Ph.D. at U-M. Her husband and research partner Hugh Aller set up the technical aspects of the observatory, including automating the relatively small telescope, while Aller focused on choosing which celestial bodies the telescope would target.

“Because we did not have to apply for time on a public instrument, if something exciting happened, we could change our program within a few hours and observe it,” Aller said. “We have this wonderful database, which is still being used, and it’s data from that program you’re seeing in this paper. We used this relatively small telescope to probe fundamental questions about these blazars.”

Archival observations from UMRAO also identified a predicted peak in 2005 and further showed that there were no sinusoidal variations for 20 years before that time — until 1981, when another predicted peak was observed. The researchers also found that the Haystack Observatory had made radio observations of PKS 2131-021 between 1975 and 1983. These data revealed another peak matching their predictions, this time occurring in 1976.

Readhead compares the system of the jet moving back and forth to a ticking clock and says that, even though the periodic signal disappeared for 20 years, likely due to changes in the activity of the black hole, it reappeared with the same phase and period.

The reason for the sinusoidal variations was at first a mystery, but Roger Blandford, a professor of particle physics and astrophysics at Stanford University who is currently on sabbatical at Caltech, came up with a simple and elegant physical model to explain the sinusoidal shape of the variations: a blazar whose supermassive black hole is orbiting another supermassive black hole will produce sinusoidal variations of the amplitude observed, Readhead said.

Future observations using pulsar timing arrays may be able to detect gravitational waves rippling outward from the quasar, which would provide more clues about what lurks within.

“We were not expecting blazars with orbiting supermassive black holes to produce a beautiful sine wave curve like this,” Readhead said. “Nature often does something even weirder than you expect.”

The UMRAO research program was supported by a series of grants from the National Science Foundation and NASA. Funding for the operation of UMRAO was provided by U-M.




Daily Activities Like Washing Dishes Reduced Heart Disease Risk in Senior Women

Seniors take note, running or brisk walking is not the only way to reduce the risk of heart disease. Simply being “up and about” performing routine activities, referred to as daily life movement, including housework, gardening, cooking and self-care activities like showering can significantly benefit cardiovascular health.

Compared to women with less than two hours per day of daily life movement, those women with at least four hours of daily life movement had a 43% lower risk of cardiovascular disease, 43% lower risk of coronary heart disease, 30% lower risk of stroke and notably, a 62% lower risk of cardiovascular disease death.

Reporting in the Feb. 22, 2022 online edition of the Journal of the American Heart Association, a multi-institutional team led by researchers at the Herbert Wertheim School of Public Health and Human Longevity Science at University of California San Diego studied the impact of daily life movement on cardiovascular disease risk.

“The study demonstrates that all movement counts towards disease prevention,” said first author Steve Nguyen, Ph.D., M.P.H., postdoctoral scholar at the Herbert Wertheim School of Public Health. “Spending more time in daily life movement, which includes a wide range of activities we all do while on our feet and out of our chairs, resulted in a lower risk of cardiovascular disease.”

Researchers used a machine-learning algorithm to classify each minute spent while awake into one of five behaviors: sitting, sitting in a vehicle, standing still, daily life movement, or walking or running. Daily life movement encompasses activities occurring when standing and walking within a room or patio, such as when getting dressed, preparing meals or gardening.

As part of the Women’s Health Initiative Objective Physical Activity and Cardiovascular Health study, researchers measured the physical activity of 5,416 American women, who were aged 63 to 97 and who did not have heart disease at the start of the study.

Participants wore a research-grade accelerometer for up to seven days to get accurate measures of how much time they spent moving and, importantly, the types of common daily life behaviors that result in movement and are not often included in prior studies of light and moderate-to-vigorous intensity physical activity. Those prior studies typically focused on intensity and duration of activities like running and brisk walking while the current study measured smaller movements at varying intensity during activities like cooking.

Cardiovascular disease continues to be the leading cause of death among both women and men in the United States with rates highest in adults aged 65 or older.

In this study, 616 women were diagnosed with cardiovascular disease, 268 with coronary heart disease, 253 had a stroke, and 331 died of cardiovascular disease.

“Much of the movement engaged in by older adults is associated with daily life tasks, but it may not be considered physical activity. Understanding the benefits of daily life movement and adding this to physical activity guidelines may encourage more movement,” said senior author Andrea LaCroix, Ph.D., M.P.H., Distinguished Professor and chief of the Division of Epidemiology at the Herbert Wertheim School of Public Health.

Co-authors include: John Bellettiere and Loki Natarajan, UC San Diego; Guangxing Wang and Chongzhi Di, Fred Hutchinson Cancer Research Center; and Michael J. LaMonte, University at Buffalo – SUNY.

This research was funded, in part, by the National Institute on Aging (P01 AG052352, 5T32AG058529-03) and the National Heart, Lung, and Blood Institute (R01 HL105065). The Women’s Health Initiative was funded by the National Heart, Lung, and Blood Institute (75N92021D00001, 75N92021D00002, 75N92021D00003, 75N92021D00004, 75N92021D00005).

Disclosures: LaCroix has been a paid consultant on a NIH grant for the Fred Hutchinson Cancer Research Center.

DOI: 10.1161/JAHA.121.023433




A green ‘sea change’ as water transport makes its move

All aboard! Europe’s ferry industry has set sail for an emissions-free future. It’s leading the eco-friendly revolution with electric and hydrogen-powered boats that are destined to make urban transport more sustainable.

In just a few months’ time, passengers in Stavanger, Norway, will be able to begin commuting on a revolutionary ferry that doesn’t produce any greenhouse gas emissions. Called Medstraum, which means both “to go with the flow” and “with electricity” in Norwegian, it will be the first high-speed vessel in the world that runs purely on electric power, replacing a diesel-powered ferry that currently shuttles people to surrounding islands.

If the trial goes well, similar vessels could soon operate in other cities too. ‘We’re in a very exciting period,’ said Mikal Dahle, a project manager at public transport company Kolumbus AS in Stavanger, Norway, and coordinator of the TrAM project which is developing the catamaran ferry. ‘We are now finalising the vessel and getting it ready.’

Medstraum is an example of the new and sustainable modes of transport set to transform urban mobility. In the EU, emissions from transport account for about 25% of total greenhouse gas emissions and are the main cause of air pollution in cities.

Furthermore, most people use roads to get around in urban areas where traffic jams have become a huge problem and cost an estimated €110 billion a year in Europe. ‘Waterways are underused for the time being and could be a great alternative,’ said Virginie Seurat, the VP at Seabubbles, a company developing a hydrogen-powered boat.

Our waterborne travel also needs to get a lot greener to meet the EU’s goal of reducing transport-related emissions by 90% by 2050. Existing high-speed craft, for example, are typically powered by fossil fuels and produce significant amounts of emissions.

‘It’s much more polluting to travel with (conventional) fast ferries compared to aeroplanes,’ noted Dahle. ‘A proper reduction in CO2 emissions is one of the main challenges for inshore vessels.’

Rethinking electric boat production

Dahle and his colleagues in the TrAM project are tackling this challenge with a novel design and production method for zero-emission electric vessels operating in coastal waters and inland waterways. Cost is still a barrier since these vessels are more expensive to build compared to those powered by diesel fuel, but the new approach should make them more affordable.

‘The goal is to establish and validate a methodology for the design and production of (electric) craft that reduces the overall cost by 25%,’ said Dahle. ‘We want to make it possible for a large market to invest in zero-emission vessels.’

Their new approach is based on modularisation, where a boat is divided into different functional parts, such as the hull and passenger section, which are in turn subdivided into individual components, like the batteries and electrical equipment in the energy module.

The idea is that a new vessel could be designed and built by piecing together pre-existing modules instead of starting from scratch, making the process more efficient and cost-effective. ‘Some (parts) are standardised, like the seats in the vessel, so we can pick out exactly what we need for a boat at relatively low cost since they are produced in certain volumes,’ explained Dahle. ‘Then we have other things that need to be adjusted for each vessel like the hull shape and the motors.’

Setting sail for new electric vessels

Medstraum is the first vessel being created using this approach, and aims to demonstrate its feasibility. Building it from lightweight aluminium reduces energy consumption and will also allow the vessel to be easily recycled after use, contributing to the circular economy. The vessel will be able to carry around 150 passengers at speeds of up to 42 km/h and will make 16 round trips per day. The ferry’s electric battery will be charged each time it stops at Stavanger.

Dahle and his colleagues will use the same approach to develop two other boats. One will be designed to transport either passengers or goods on the River Thames in London, while the other will be used on inland waterways in Belgium and will therefore need to be adapted for different purposes and environments. The London craft will be required to travel at a higher speed and have a larger capacity than the Stavanger vessel, for example, while the boat to be used in Belgium will need to meet different rules and regulations.

A ‘flying’ boat powered by hydrogen

City commuters could also soon use a ‘flying’ water taxi to get around thanks to another team aiming to lower the environmental impact of water transport. Seurat’s colleagues have developed the first zero-emission hydrofoil craft that glides above waves powered by a hydrogen fuel cell and battery as part of the Seabubbles project.

‘The idea is to offer citizens new solutions that are a step forward in terms of a green way of life,’ said Baptiste Arribe, the strategy director at Seabubbles in Annecy, France.

The futuristic-looking craft, which is made of composite fibres, can operate in two different modes and has been developed for waterways, lakes and marine zones. When travelling at less than 12 km/h, its hydrofoils are retracted and it navigates like a conventional vessel. However, at higher speeds its foils are deployed and the hull is lifted 60 centimetres above the water’s surface, which results in a smooth ride even in choppy waters. ‘People are excited about the passenger experience because there are zero waves and no noise,’ said Seurat.

The ‘flying’ mode has environmental advantages too. It uses 35% less energy compared to the conventional mode since gliding on the foils reduces the surface area of the boat immersed in water and hence the amount of friction.

Charging versus refuelling

When they developed the prototype, the team initially planned to power the craft with electricity produced from solar panels and hydropower. But they later decided to switch to hydrogen power since a boat could travel further on a full tank of the gas compared to a single charge. It would also take less time to refuel; a hydrogen top-up that takes just a few minutes allows the boat to run for about two and a half hours.

While the latest Seabubble boat still runs on electricity, it is generated by a hydrogen fuel cell. It also contains a battery that is charged by the fuel cell when the craft is cruising to provide extra power when needed, for example during acceleration. Artificial intelligence is used to optimise the use of energy between the battery and fuel cell to make the boat as energy efficient as possible. ‘We combine the avant-garde in energy and (the latest) technology with our control system,’ Seurat explained.

Constructing the first Seabubbles

The first Seabubble boats are currently being assembled at a shipyard on the shores of Lake Annecy in France. They will be available to European buyers in a few months’ time and later to the international market.

Able to carry up to 12 passengers, the vessels are of particular interest to private services for use as a shuttle for hotels located on the waterfront or as a quiet craft to take visitors around a nature reserve without disturbing wildlife.

While these vessels could also be used for public transport, cost remains a barrier. However, the EU’s commitment to supporting the large-scale deployment of clean hydrogen technologies by 2030 should make it easier to implement Seabubbles more widely. ‘In the beginning, we need the support of governments to create the overall hydrogen infrastructure,’ concluded Seurat. ‘Then everything will come together, and I think we will change the mobility field.’

The research in this article was funded by the EU.




Using artificial intelligence to find anomalies hiding in massive datasets

Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

“In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

Probing probabilities

The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. Those data points which are least likely to occur correspond to anomalies.
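
As a rough illustration of that idea, the sketch below flags the least-likely readings in a synthetic stream of voltage data. It is only a stand-in: it uses a simple kernel density estimate and a percentile cutoff, whereas the paper’s model estimates the densities very differently, and the 230-volt readings and 1 percent threshold are invented for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal sketch of the "low probability density = anomaly" idea.
# Synthetic 1-D voltage readings with a few injected spikes.
rng = np.random.default_rng(0)
voltage = rng.normal(230.0, 2.0, size=1000)      # normal readings
voltage[::250] += rng.normal(25.0, 5.0, size=4)  # injected anomalies

# Estimate the density of the readings and evaluate it at each point.
density = gaussian_kde(voltage)
log_p = np.log(density(voltage))

# Flag the readings whose estimated density falls below a percentile
# threshold -- these are the least likely observations.
threshold = np.percentile(log_p, 1.0)
anomalies = np.where(log_p < threshold)[0]
print(anomalies)   # indices of the flagged readings
```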

Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Plus, the sensors that capture all that data depend on one another, meaning they are connected in a certain configuration and one sensor can sometimes affect others.

To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.
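
The core trick a normalizing flow relies on is the change-of-variables formula: an invertible map sends each data point to a latent variable with a simple, known density, and the determinant of the map’s Jacobian corrects for how the transformation stretches space. The snippet below shows that formula for a single affine layer with hand-picked parameters; it is a minimal illustration of the principle, not the multi-layer flow used in the study.

```python
import numpy as np

# One affine flow layer: x = z * exp(log_scale) + shift, with z standard
# normal. The change-of-variables formula gives
#   log p(x) = log p_z(f(x)) + log |det df/dx|,
# where f is the inverse map from x back to z.

def affine_flow_logpdf(x, shift, log_scale):
    z = (x - shift) * np.exp(-log_scale)              # f(x): invert the affine map
    log_p_z = -0.5 * (z**2 + np.log(2 * np.pi)).sum(axis=-1)
    log_det = -log_scale.sum()                        # log |det df/dx| (diagonal Jacobian)
    return log_p_z + log_det

x = np.array([[231.5, 49.9],                          # a typical pair of readings
              [260.0, 52.0]])                         # an outlier
shift = np.array([230.0, 50.0])                       # in practice these parameters
log_scale = np.array([np.log(2.0), np.log(0.1)])      # would be learned from data
print(affine_flow_logpdf(x, shift, log_scale))        # the outlier scores far lower
```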

They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

“The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says.

This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.
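
In generic notation (ours, not the paper’s), that factorization reads as follows, where pa(x_i) denotes the set of parent sensors of sensor i in the learned graph:

```latex
% Generic Bayesian-network factorization of a joint density over n sensor
% streams x_1, ..., x_n; pa(x_i) are the parents of node i in the graph.
p(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr),
\qquad
\log p(x_1, \dots, x_n) \;=\; \sum_{i=1}^{n} \log p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr)
```

Working in log space turns the product into a sum, which is why each conditional term can be parameterized, learned, and evaluated separately.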

Their method is especially powerful because this complex graph structure does not need to be defined in advance — the model can learn the graph on its own, in an unsupervised manner.

A powerful technique

They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.

“For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us,” Chen says.

Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.

This work was funded by the MIT-IBM Watson AI Lab and the U.S. Department of Energy.




A new, inexpensive catalyst speeds the production of oxygen from water

An electrochemical reaction that splits apart water molecules to produce oxygen is at the heart of multiple approaches aiming to produce alternative fuels for transportation. But this reaction has to be facilitated by a catalyst material, and today’s versions require the use of rare and expensive elements such as iridium, limiting the potential of such fuel production.

Now, researchers at MIT and elsewhere have developed an entirely new type of catalyst material, called a metal hydroxide-organic framework (MHOF), which is made of inexpensive and abundant components. The family of materials allows engineers to precisely tune the catalyst’s structure and composition to the needs of a particular chemical process, and it can then match or exceed the performance of conventional, more expensive catalysts.

The findings are described today in the journal Nature Materials, in a paper by MIT postdoc Shuai Yuan, graduate student Jiayu Peng, Professor Yang Shao-Horn, Professor Yuriy Román-Leshkov, and nine others.

The oxygen evolution reaction is common to many processes for the electrochemical production of fuels, chemicals, and materials. These processes include the generation of hydrogen as a byproduct of the oxygen evolution, which can be used directly as a fuel or undergo chemical reactions to produce other transportation fuels; the manufacture of ammonia, for use as a fertilizer or chemical feedstock; and carbon dioxide reduction in order to control emissions.
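
For reference, the textbook half-reactions for water splitting, written here for acidic conditions, are general electrochemistry background rather than a result of the new study:

```latex
% Standard water-splitting half-reactions in acidic conditions
% (textbook chemistry, not specific to the MHOF catalysts in this work).
\begin{align*}
\text{Anode (oxygen evolution):} \quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode (hydrogen evolution):} \quad & 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2} \\
\text{Overall:} \quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```

It is the sluggish anode step that the catalyst must accelerate, which is why its activity and cost dominate the economics of the overall process.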

But without help, “these reactions are sluggish,” Shao-Horn says. “For a reaction with slow kinetics, you have to sacrifice voltage or energy to promote the reaction rate.” Because of the extra energy input required, “the overall efficiency is low. So that’s why people use catalysts,” she says, as these materials naturally promote reactions by lowering energy input.

But until now, these catalysts “are all relying on expensive materials or late transition metals that are very scarce, for example iridium oxide, and there has been a big effort in the community to find alternatives based on Earth-abundant materials that have the same performance in terms of activity and stability,” Román-Leshkov says. The team says they have found materials that provide exactly that combination of characteristics.

Other teams have explored the use of metal hydroxides, such as nickel-iron hydroxides, Román-Leshkov says. But such materials have been difficult to tailor to the requirements of specific applications. Now, though, “the reason our work is quite exciting and quite relevant is that we’ve found a way of tailoring the properties by nanostructuring these metal hydroxides in a unique way.”

The team borrowed from research that has been done on a related class of compounds known as metal-organic frameworks (MOFs), which are a kind of crystalline structure made of metal oxide nodes linked together with organic linker molecules. By replacing the metal oxide in such materials with certain metal hydroxides, the team found, it became possible to create precisely tunable materials that also had the necessary stability to be potentially useful as catalysts.

“You put these chains of these organic linkers next to each other, and they actually direct the formation of metal hydroxide sheets that are interconnected with these organic linkers, which are then stacked, and have a higher stability,” Román-Leshkov says. This has multiple benefits, he says: it allows precise control over the nanostructured patterning and over the electronic properties of the metal, and it provides greater stability, enabling the materials to stand up to long periods of use.

In testing such materials, the researchers found the catalysts’ performance to be “surprising,” Shao-Horn says. “It is comparable to that of the state-of-the-art oxide materials catalyzing for the oxygen evolution reaction.”

Being composed largely of nickel and iron, these materials should be at least 100 times cheaper than existing catalysts, they say, although the team has not yet done a full economic analysis.

This family of materials “really offers a new space to tune the active sites for catalyzing water splitting to produce hydrogen with reduced energy input,” Shao-Horn says, to meet the exact needs of any given chemical process where such catalysts are needed.

The materials can provide “five times greater tunability” than existing nickel-based catalysts, Peng says, simply by substituting different metals in place of nickel in the compound. “This would potentially offer many relevant avenues for future discoveries.” The materials can also be produced in extremely thin sheets, which could then be coated onto another material, further reducing the material costs of such systems.

So far, the materials have been tested in small-scale laboratory test devices, and the team is now addressing the issues of trying to scale up the process to commercially relevant scales, which could still take a few years. But the idea has great potential, Shao-Horn says, to help catalyze the production of clean, emissions-free hydrogen fuel, so that “we can bring down the cost of hydrogen from this process while not being constrained by the availability of precious metals. This is important, because we need hydrogen production technologies that can scale.”

The research team included others at MIT, Stockholm University in Sweden, SLAC National Accelerator Laboratory, and Institute of Ion Beam Physics and Materials Research in Dresden, Germany. The work was supported by the Toyota Research Institute.




More sensitive X-ray imaging

Scintillators are materials that emit light when bombarded with high-energy particles or X-rays. In medical or dental X-ray systems, they convert incoming X-ray radiation into visible light that can then be captured using film or photosensors. They’re also used for night-vision systems and for research, such as in particle detectors or electron microscopes.

Researchers at MIT have now shown how one could improve the efficiency of scintillators by at least tenfold, and perhaps even a hundredfold, by changing the material’s surface to create certain nanoscale configurations, such as arrays of wave-like ridges. While past attempts to develop more efficient scintillators have focused on finding new materials, the new approach could in principle work with any of the existing materials.

Though it will require more time and effort to integrate their scintillators into existing X-ray machines, the team believes that this method might lead to improvements in medical diagnostic X-rays or CT scans, to reduce dose exposure and improve image quality. In other applications, such as X-ray inspection of manufactured parts for quality control, the new scintillators could enable inspections with higher accuracy or at faster speeds.

The findings are described today in the journal Science, in a paper by MIT doctoral students Charles Roques-Carmes and Nicholas Rivera; MIT professors Marin Soljacic, Steven Johnson, and John Joannopoulos; and 10 others.

While scintillators have been in use for some 70 years, much of the research in the field has focused on developing new materials that produce brighter or faster light emissions. The new approach instead applies advances in nanotechnology to existing materials. By creating patterns in scintillator materials at a length scale comparable to the wavelengths of the light being emitted, the team found that it was possible to dramatically change the material’s optical properties.

To make what they coined “nanophotonic scintillators,” Roques-Carmes says, “you can directly make patterns inside the scintillators, or you can glue on another material that would have holes on the nanoscale. The specifics depend on the exact structure and material.” For this research, the team took a scintillator and made holes spaced apart by roughly one optical wavelength, or about 500 nanometers (billionths of a meter).

“The key to what we’re doing is a general theory and framework we have developed,” Rivera says. This allows the researchers to calculate the scintillation levels that would be produced by any arbitrary configuration of nanophotonic structures. The scintillation process itself involves a series of steps, making it complicated to unravel. The framework the team developed involves integrating three different types of physics, Roques-Carmes says. Using this system they have found a good match between their predictions and the results of their subsequent experiments.

The experiments showed a tenfold improvement in emission from the treated scintillator. “So, this is something that might translate into applications for medical imaging, which are optical photon-starved, meaning the conversion of X-rays to optical light limits the image quality. [In medical imaging,] you do not want to irradiate your patients with too much of the X-rays, especially for routine screening, and especially for young patients as well,” Roques-Carmes says.

“We believe that this will open a new field of research in nanophotonics,” he adds. “You can use a lot of the existing work and research that has been done in the field of nanophotonics to improve significantly on existing materials that scintillate.”

“The research presented in this paper is hugely significant,” says Rajiv Gupta, chief of neuroradiology at Massachusetts General Hospital and an associate professor at Harvard Medical School, who was not associated with this work. “Nearly all detectors used in the $100 billion [medical X-ray] industry are indirect detectors,” which is the type of detector the new findings apply to, he says. “Everything that I use in my clinical practice today is based on this principle. This paper improves the efficiency of this process by 10 times. If this claim is even partially true, say the improvement is two times instead of 10 times, it would be transformative for the field!”

Soljacic says that while their experiments proved a tenfold improvement in emission could be achieved in particular systems, by further fine-tuning the design of the nanoscale patterning, “we also show that you can get up to 100 times [improvement] in certain scintillator systems, and we believe we also have a path toward making it even better.”

Soljacic points out that in other areas of nanophotonics, a field that deals with how light interacts with materials that are structured at the nanometer scale, the development of computational simulations has enabled rapid, substantial improvements, for example in the development of solar cells and LEDs. The new models this team developed for scintillating materials could facilitate similar leaps in this technology, he says.

Nanophotonics techniques “give you the ultimate power of tailoring and enhancing the behavior of light,” Soljacic says. “But until now, this promise, this ability to do this with scintillation was unreachable because modeling the scintillation was very challenging. Now, this work for the first time opens up this field of scintillation, fully opens it, for the application of nanophotonics techniques.” More generally, the team believes that the combination of nanophotonics and scintillators might ultimately enable higher resolution, reduced X-ray dose, and energy-resolved X-ray imaging.

This work is “very original and excellent,” says Eli Yablonovitch, a professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, who was not associated with this research. “New scintillator concepts are very important in medical imaging and in basic research.”

While the concept still needs to be proven in a practical device, Yablonovitch adds, “After years of research on photonic crystals in optical communication and other fields, it’s long overdue that photonic crystals should be applied to scintillators, which are of great practical importance yet have been overlooked” until this work.

The research team included Ali Ghorashi, Steven Kooi, Yi Yang, Zin Lin, Justin Beroz, Aviram Massuda, Jamison Sloan, and Nicolas Romeo at MIT; Yang Yu at Raith America, Inc.; and Ido Kaminer at Technion in Israel. The work was supported, in part, by the U.S. Army Research Office and the U.S. Army Research Laboratory through the Institute for Soldier Nanotechnologies, by the Air Force Office of Scientific Research, and by a Mathworks Engineering Fellowship.




Chemical synthesis yields potential antibiotic

Chemists at MIT have developed a novel way to synthesize himastatin, a natural compound that has shown potential as an antibiotic.

Using their new synthesis, the researchers were able not only to produce himastatin but also to generate variants of the molecule, some of which also showed antimicrobial activity. They also discovered that the compound appears to kill bacteria by disrupting their cell membranes. The researchers now hope to design other molecules that could have even stronger antibiotic activity.

“What we want to do right now is learn the molecular details about how it works, so we can design structural motifs that could better support that mechanism of action. A lot of our effort right now is to learn more about the physicochemical properties of this molecule and how it interacts with the membrane,” says Mohammad Movassaghi, an MIT professor of chemistry and one of the senior authors of the study.

Brad Pentelute, an MIT professor of chemistry, is also a senior author of the study, which appears today in Science. MIT graduate student Kyan D’Angelo is the lead author of the study, and graduate student Carly Schissel is also an author.

Mimicking nature

Himastatin, which is produced by a species of soil bacteria, was first discovered in the 1990s. In animal studies, it was found to have anticancer activity, but the required doses had toxic side effects. The compound also showed potential antimicrobial activity, but that potential hasn’t been explored in detail, Movassaghi says.

Himastatin is a complex molecule that consists of two identical subunits, known as monomers, that join together to form a dimer. The two subunits are hooked together by a bond that connects a six-carbon ring in one of the monomers to the identical ring in the other monomer.

This carbon-carbon bond is critical for the molecule’s antimicrobial activity. In previous efforts to synthesize himastatin, researchers tried making that bond first, using two simple subunits, and then adding more complex chemical groups onto the monomers.

The MIT team took a different approach, inspired by the way this reaction is performed in bacteria that produce himastatin. Those bacteria have an enzyme that can join the two monomers as the very last step of the synthesis, by turning each of the carbon atoms that need to be joined together into highly reactive radicals.

To mimic that process, the researchers first built complex monomers from amino acid building blocks, helped by a rapid peptide synthesis technology previously developed by Pentelute’s lab.

“By using solid-phase peptide synthesis, we could fast-forward through many synthetic steps and mix-and-match building blocks easily,” D’Angelo says. “That’s just one of the ways that our collaboration with the Pentelute Lab was very helpful.”

The researchers then used a new dimerization strategy developed in the Movassaghi lab to connect two complex molecules together. This new dimerization is based on the oxidation of aniline to form carbon radicals in each molecule. These radicals can react to form the carbon-carbon bond that hooks the two monomers together. Using this approach, the researchers can create dimers that contain different types of subunits, in addition to naturally occurring himastatin dimers.

“The reason we got excited about this type of dimerization is because it allows you to really diversify the structure and access other potential derivatives very quickly,” Movassaghi says.

Membrane disruption

One of the variants that the researchers created has a fluorescent tag, which they used to visualize how himastatin interacts with bacterial cells. Using these fluorescent probes, the researchers found that the drug accumulates in the bacterial cell membranes. This led them to hypothesize that it works by disrupting the cell membrane, which is also a mechanism used by at least one FDA-approved antibiotic, daptomycin.

The researchers also designed several other himastatin variants by swapping in different atoms in specific parts of the molecule, and tested their antimicrobial activity against six bacterial strains. They found that some of these compounds had strong activity, but only if they included one naturally occurring monomer along with one that was different.

“By bringing two complete halves of the molecule together, we could make a himastatin derivative with only a single fluorescent label. Only with this version could we do microscopy studies that offered evidence of himastatin’s localization within bacterial membranes, because symmetric versions with two labels did not have the right activity,” D’Angelo says.

Andrew Myers, a professor of chemistry at Harvard University, says that the new synthesis features “fascinating new chemical innovations.”

“This approach permits oxidative dimerization of fully synthetic monomer subunits to prepare the antibiotic himastatin, in a manner related to its biosynthesis,” says Myers, who was not involved in the research. “By synthesizing a number of analogs, important structure-activity relationships were revealed, as well as evidence that the natural product functions at the level of the bacterial envelope.”

The researchers now plan to design more variants that they hope might have more potent antibiotic activity.

“We’ve already identified positions that we can derivatize that could potentially either retain or enhance the activity. What’s really exciting to us is that a significant number of the derivatives that we accessed through this design process retain their antimicrobial activity,” Movassaghi says.

The research was funded by the National Institutes of Health, the Natural Sciences and Engineering Research Council of Canada, and a National Science Foundation graduate research fellowship.



