GlyNAC improves biomarkers in humans and extends lifespan in rodents

Antioxidants proved a bust for life extension almost 25 years ago, but glutathione stands out as an exception. We lose glutathione as we age, and supplementing to raise glutathione levels has multiple benefits, possibly including effects on lifespan.

Glutathione is manufactured in the body via an ancient mechanism taking as input cysteine, glutamic acid, and glycine. Supplementing N-Acetyl Cysteine (NAC) and supplementing glycine are each independently associated with health benefits, and possibly increased lifespan. Glutamate seems to be in adequate supply for most of us.

Each cell manufactures its own glutathione. (GSH is an abbreviation for the reduced form of glutathione.) Concentrations of GSH within a cell are typically 1,000-fold higher than in blood plasma. When we look for glutathione deficiency, we measure the blood level, because that is convenient; it is much harder to measure intracellular levels of GSH. Two studies [2011, 2013] demonstrated that intracellular levels decline with age more consistently and more severely than blood levels. People in their 70s have less than ¼ the glutathione (in red blood cells) that they had when they were in their 20s. The same work also found that intracellular levels of cysteine and glycine, but not glutamate, decline with age.

Supplementing with NAC is already known to boost glutathione levels. This is the motivation to try a combination of glycine and NAC, dubbed “GlyNAC”, to see if we can do even better. This work has been spearheaded by Rajagopal Sekhar.

In humans, “Supplementing GlyNAC for a short duration of 2 wk corrected the intracellular deficiency of glycine and cysteine, restored intracellular GSH synthesis, corrected intracellular GSH deficiency, lowered OxS, improved MFO, and lowered insulin resistance.” [Sekhar] Most of these benefits are theoretical, and lowering oxidation levels is a double-edged sword. MFO stands for mitochondrial fatty acid oxidation, and this benefit is on firmer footing: membranes are made of fatty acids, and mitochondrial efficiency, like most everything in the body, depends on highly selective membranes. The crowning benefit is improved insulin sensitivity, and we can be fairly confident this leads to longer healthspan.

The two recent studies, in humans and mice, are indeed impressive.

The small human study found that “GlyNAC supplementation for 24 weeks in OA [older adults] corrected RBC-GSH deficiency, OxS, and mitochondrial dysfunction; and improved inflammation, endothelial dysfunction, insulin-resistance, genomic-damage, cognition, strength, gait-speed, and exercise capacity; and lowered body-fat and waist-circumference.” Though they didn’t measure methylation age, this constellation of improvements gives us confidence that people were looking and acting younger.

In older (71-80 yo) subjects, 24 weeks of GlyNAC supplementation raised intracellular GSH levels from 0.4 mmol to 1.2 mmol, compared to 1.8 mmol in young adults. (Levels were measured in red blood cells.)

Two central players in aging are inflammation and insulin resistance; both showed excellent response.

Inflammation decreased markedly: Average C-reactive protein (CRP) dropped from 4.9 to 3.2 (compared to 2.4 for young people). IL-6 dropped from 4.8 to 1.1 (ref 0.5 for young). TNFα dropped from 98 to 59 (ref 45).
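To make the magnitude of these changes concrete, here is a small illustrative calculation using only the group means quoted above. The “fraction of the gap closed” framing is my own, not a statistic reported by the study.

```python
# Group means quoted in this article: (older adults before GlyNAC,
# older adults after 24 weeks of GlyNAC, young-adult reference).
# RBC GSH is in mmol; the inflammation markers are in the units used
# in the study, which are not specified here.
markers = {
    "RBC GSH":   (0.4, 1.2, 1.8),
    "CRP":       (4.9, 3.2, 2.4),
    "IL-6":      (4.8, 1.1, 0.5),
    "TNF-alpha": (98,  59,  45),
}

for name, (before, after, young) in markers.items():
    # 1.0 would mean the marker was fully restored to the youthful level.
    recovered = (after - before) / (young - before)
    print(f"{name}: {recovered:.0%} of the gap to the young-adult value closed")
```

Run this way, GlyNAC closed roughly half to most of the age-related gap for each marker, without fully reaching youthful levels.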

Insulin resistance fell just as dramatically, along with fasting glucose and plasma insulin.

Cognitive performance improved markedly, as did grip strength, endurance, and gait speed.

GlyNAC subjects lost a lot of weight — 9% of body weight in 24 weeks. This is both very good news and a hint that some of the benefits of GlyNAC may be caloric restriction mimetic effects, indirectly due to suppression of appetite or of food absorption.

Is all this evidence of a decrease in biological age?

But the effects faded within weeks after the treatment stopped. This, I believe, is different from resetting methylation age. There is not a lot of data yet to test this, but I believe that methylation is close to the source of aging; in other words, the body senses its age by its epigenetic state, and adjusts repair and protection levels accordingly. Thus changing epigenetics to a younger state, IMO, effectively induces an age change in the body.

If this is correct, then my guess is that GlyNAC does not set back methylation age, based on the fact that the effects must be continually renewed by daily doses of glycine and NAC. On the other hand, mitochondria are such a central player in expressing multiple symptoms of aging that it may well be that continuous treatment with GlyNAC leads to longer lifespan.

…and indeed that is what was just reported in a mouse study. Sixteen mice lived 24% longer with GlyNAC supplementation, compared to 16 controls. A 24% extension is impressive (see table below). For example, rapamycin made headlines a decade ago with an average lifespan increase of 14%. (In other studies, rapamycin was associated with even greater life extension.) The winner in this table is a Russian pineal peptide, which claims a 31% increase in lifespan. I have previously bemoaned the fact that this eye-popping work from the St Petersburg laboratories of Anisimov and Khavinson has not been replicated in the West (though Russian peptides are now commercialized in the West).

Table source: https://ift.tt/uB6TMaU
(This is a sample — not a complete list.)

Treatment           Lifespan increase
Epithalamin         31%
Thymus Peptide      28%
Rapamycin           26%
N-Acetylcysteine    24%
GlyNAC 2022         24%
Spermidine          24%
Acarbose            22%
Phenformin          21%
Ethoxyquin          20%
Vanadyl sulfate     12%
Aspirin              8%

An asterisk must be placed next to the new 24% life extension from GlyNAC. Eleven years ago, Flurkey found the same 24% life extension with NAC alone. NAC supplementation without glycine is known to increase glutathione production. Do we need glycine in addition, or is cysteine the bottleneck? Levels of both free glycine and cysteine decline with age, which would suggest that supplementing both should be more effective than supplementing NAC alone. But I was unable to find any study that asked whether GSH levels are raised to a greater extent by GlyNAC than by NAC alone.

Glycine supplementation in large amounts mimics methionine restriction, which is a known but impractical life extension strategy.

If you decide to take glycine, it should be at bedtime, and in large amounts, a teaspoon or two. (I did this for a while, using glycine as a sweetener in hot chocolate soymilk, until I decided it ruined the taste of the chocolate drink. Whether this is a sound reason for tailoring an anti-aging agenda I’ll leave you to decide.)

All this work comes out of the laboratory of Rajagopal V. Sekhar at Baylor College of Medicine in Texas. It’s time that a broader life extension community joined in the action. I’m grateful to Dr Sekhar for commenting on earlier drafts of this article.



from ScienceBlog.com https://ift.tt/feCOmpW

Protecting refugee animals is vital

A month into the Russian invasion of Ukraine, several journalists covering the refugee emergency have reported on the lengths to which some refugees are going to bring their pets across the conflict frontlines and international borders into relative safety. Ukraine’s neighbouring countries were quick to allow all refugees to bring their non-human animal household members without documentation, and non-profit organisations responded to the call to save animals caught up in the conflict or its border areas, including even farm and zoo animals.

Meanwhile, reports highlighted how after an arduous journey from their lost homes in Ukraine, some refugees were forced to give up their pets upon arrival in emergency shelters, and while airlines provided free tickets across their networks to refugees, these excluded their beloved animals, eliciting an emotional public response. These wrongs were soon addressed by other countries across the European Union easing immigration and import regulations for the Ukrainian refugee population. Requirements for veterinary vaccinations and quarantine, developed to protect animal and human health, have now been lifted EU-wide.

This focus and resource provision for the protection of refugee animals is unprecedented.
The solidarity shown today is uncommon in other refugee contexts, where displaced populations face increasing challenges in crossing borders, with animals often completely banned. Lacking formal structures, independent charities are struggling to save the pets of Syria, while other domesticated animals such as livestock are consistently excluded from humanitarian refugee responses out of concern for public health.

A more holistic view must be taken of lives, livelihoods and mental health.
Animals can provide nutritious food and support mental health, particularly during displacement. The inconsistencies in how animal displacement is handled uncover gaps in the jurisdictions and expertise of humanitarian responders in complex emergencies.

Evidence across global emergencies shows how animals, including pets and livestock, are excluded from relief and refugee camps out of public health concern, rooted in a lack of contextual knowledge and resources to provide veterinary support. This exclusion is counterproductive if it results in animal owners engaging in risky behaviours to retain access to their animals, which are often essential to their lives and livelihoods.

My research on zoonotic disease dynamics among displaced livestock keepers across continents shows that the risks taken often have unintended outcomes for the safety and wellbeing of both humans and animals, with negative consequences for food security and biosecurity.

While the protection of human refugees remains a priority, the central role that animals play in peoples’ lives needs to be better acknowledged by policy makers and humanitarian responders.
During displacement, livestock becomes a decisive factor in choices of transportation, movement route and destination.

While animal health does have an impact on human health, with over 60% of human pathogens originating in animal species, there is little evidence of refugee animals increasing zoonotic disease outbreaks. Instead these are associated with the collapse and destruction of veterinary health services and border control.

Emergencies are complex and fluid; humanitarian responses, however, follow standardised protocols built on siloed professional collaboration rather than integrated relief services. Current policy and response frameworks remain anthropocentric and are not well suited to the inclusion of animals in emergency responses. There is a need to expand policies, responses, and wider theoretical frameworks based on solidarity across species.

Refugee policies and responses need to be more inclusive, based on the principles of solidarity and compassion with all human and non-human animals across contexts and origins.
Photo by Jorge Salvador on Unsplash

Responses need to further integrate sectors and involve a range of agencies including emergency services, law enforcement, environmental health, animal charities, and veterinary professionals. Importantly, better contextualisation and support of local stakeholders is required before, during and after the emergency. This must include the affected community, which is arguably more familiar with zoonotic disease risks than humanitarian responders working within an exclusionary framework.

The welcoming response to Ukrainian refugees and their animals shows us that another humanitarian approach is possible. It is now time to extend this to all refugee contexts.

Dorien Braam is a final year PhD Candidate at the Disease Dynamics Unit at the University of Cambridge’s Department of Veterinary Medicine, focusing on zoonotic disease dynamics in displaced populations.



from ScienceBlog.com https://ift.tt/PZX68tC

An exoskeleton with your name on it

Leo Medrano, a PhD student in the Neurobionics Lab at the University of Michigan, tests out an ankle exoskeleton on a two-track treadmill. By allowing the user to directly manipulate the exoskeleton’s settings while testing it on a treadmill, preferences that are difficult to detect or measure, such as comfort, could be accounted for by the users themselves. Image credit: University of Michigan Robotics Institute

To transform human mobility, exoskeletons need to interact seamlessly with their user, providing the right level of assistance at the right time to cooperate with our muscles as we move.

To help achieve this, University of Michigan researchers gave users direct control to customize the behavior of an ankle exoskeleton.

Not only was the process faster than the conventional approach, in which an expert would decide the settings, but it may have incorporated preferences an expert would have missed. For instance, user height and weight, which are commonly used metrics for tuning exoskeletons and robotic prostheses, had no effect on preferred settings.

“Instead of a one-size-fits-all level of power, or using measurements of muscle activity to customize an exoskeleton’s behavior, this method uses active user feedback to shape the assistance a person receives,” said Kim Ingraham, first author of the study in Science Robotics, and a recent mechanical engineering Ph.D. graduate.

Experts usually tune the wide-ranging settings of powered exoskeletons to take into account the varied characteristics of human bodies, gait biomechanics and user preferences. This can be done by crunching quantifiable data, such as metabolic rate or muscle activity, to minimize the energy expended from a user, or more simply by asking the user to repeatedly compare between pairs of settings to find which feels best.
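The pairwise-comparison approach mentioned above can be sketched in a few lines of code. The candidate settings, the simulated user, and the function name below are hypothetical illustrations, not details from the study.

```python
def pairwise_tune(candidates, prefer):
    """Tune by pairwise comparison: pit each candidate setting against the
    current best and keep whichever the user prefers (n - 1 comparisons)."""
    best = candidates[0]
    for challenger in candidates[1:]:
        best = prefer(best, challenger)
    return best

# Simulated user whose (unknown) ideal normalized setting is 0.7; in a real
# experiment the comparison would come from the person's felt preference.
ideal = 0.7
prefer = lambda a, b: a if abs(a - ideal) <= abs(b - ideal) else b

settings = [i / 10 for i in range(11)]   # candidate settings 0.0 ... 1.0
print(pairwise_tune(settings, prefer))   # 0.7
```

Even this toy version shows the drawback the article raises: the number of comparisons grows with the number of candidate settings, and tuning several interacting parameters this way quickly becomes tedious.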

What minimizes energy expenditure, however, may not be the most comfortable or useful. And asking the user to select between choices for numerous settings could be too time consuming and also obscures how those settings might interact with each other to affect the user experience.

By allowing the user to directly manipulate the settings, preferences that are difficult to detect or measure could be accounted for by the users themselves. Users could quickly and independently decide what features are most important—for example, trading off comfort, power or stability, and then selecting the settings to best match those preferences without the need for an expert to retune.

“To be able to choose and have control over how it feels is going to help with user satisfaction and adoption of these devices in the future,” Ingraham said. “No matter how much an exoskeleton helps, people won’t wear them if they are not enjoyable.”

To test the feasibility of such a system, the research team outfitted users with Dephy powered ankle exoskeletons and a touch screen interface that displayed a blank grid. Selecting any point on the grid would alter the torque output of the exoskeleton on one axis, while changing the timing of that torque on the alternate axis.
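Mapping a touch point to controller settings like this amounts to a simple linear interpolation over two parameters. The sketch below assumes hypothetical parameter ranges and a made-up function name; the actual ranges and control scheme of the Dephy exoskeleton are not given in this article.

```python
def grid_to_settings(x, y,
                     torque_range=(5.0, 40.0),    # hypothetical peak torque range, N*m
                     timing_range=(45.0, 60.0)):  # hypothetical peak timing, % of stride
    """Map a touch point (x, y), each in [0, 1], onto exoskeleton settings.

    One axis scales the magnitude of the ankle torque; the other shifts
    when in the gait cycle that torque peaks, mirroring the two-axis
    grid the study's participants explored.
    """
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
        raise ValueError("grid coordinates must lie in [0, 1]")
    t_lo, t_hi = torque_range
    s_lo, s_hi = timing_range
    peak_torque = t_lo + x * (t_hi - t_lo)
    peak_timing = s_lo + y * (s_hi - s_lo)
    return peak_torque, peak_timing

# A touch at the grid centre yields mid-range assistance:
print(grid_to_settings(0.5, 0.5))  # (22.5, 52.5)
```

Keeping the grid blank, as the researchers did, means the user explores this mapping purely by feel, with no numbers to anchor on.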

When told to find their preference while walking on a treadmill, the set of users who had no previous experience with an exoskeleton were, on average, able to confirm their optimal settings in about one minute, 45 seconds.

“We were surprised at how precisely people were able to identify their preferences, especially because they were totally blinded to everything that was happening—we didn’t tell them what parameters they were tuning, so they were only selecting their preferences based on how they felt the device was assisting them,” Ingraham said.

In addition, user preference changed over the course of the experiment. As the first-time users gained more experience with the exoskeleton, they preferred a higher level of assistance. And, those already experienced with exoskeletons preferred a much greater level of assistance than the first-time users.

These findings could help determine how often an exoskeleton needs to be retuned as a user gains experience, and they support the idea of incorporating direct user input into preference settings for the best experience.

“This is fundamental work in exploring how to incorporate people’s preference into exoskeleton control,” said Elliott Rouse, senior author of the study, an assistant professor of mechanical engineering and a core faculty member of the Robotics Institute. “This work is motivated by our desire to develop exoskeletons that go beyond the laboratory and have a transformative impact on society.

“Next is answering why people prefer what they prefer, and how these preferences affect their energy, their muscle activity, and their physiology, and how we could automatically implement preference-based control in the real world. It’s important that assistive technologies provide a meaningful benefit to their user.”

The research was supported by the National Science Foundation, the D. Dan and Betty Kahn Foundation and the Carl Zeiss Foundation in cooperation with the German Scholars Organization, in addition to hardware and technical assistance from Dephy Inc. Ingraham is now a postdoctoral researcher at the University of Washington.

Written by Dan Newman



from ScienceBlog.com https://ift.tt/wzsa6OC

Women’s full participation in renewables is essential to the just transition

The shift towards clean, secure energy hinges on the participation of women. © Waraphorn Aphai, Shutterstock

The transition to energy security and climate-neutrality means we need to close the gender gap to fully involve women in a technical, scientific and business transformation.

While it has been in the works for some time, the EU’s strategy to move away from dependency on fossil fuels has gained a new impetus with geopolitical developments in Europe.

Already, on 8 March, the European Commission proposed the outline of a plan for joint European action for more affordable, secure and sustainable energy. The goal is to reduce demand for Russian gas by two-thirds by the end of this year.

The shift towards clean, secure energy supplies in Europe and efforts to tackle climate change hinge on several key factors. One factor you may not yet have thought about is a better inclusion of women in developing the technical solutions required.

Diverse thinkers

‘With the complexity and challenges of the 21st-century problems, we need diverse thinkers and diverse leaders,’ said Sandrine Dixson-Declève, co-president of the Club of Rome and thought leader in climate, energy and sustainable development.

The Club of Rome conducts research into new thinking about complex, planetary-scale problems. ‘We cannot do it with just a male perception of the world,’ she said.

Women remain underrepresented in scientific, technical and engineering (STEM) disciplines, despite growing demand. They make up just 38% of PhDs in the physical sciences and 27% in engineering. Only 24% of self-employed professionals in science, engineering and ICT are women.

Society misses out when there’s a lack of gender equality. ‘Women tend to lead with a more long-term vision in what they want to achieve, and tend to lead without just a focus on power gains, but in finding solutions,’ said Dixson-Declève.

Climate-neutral cities

Professor Doris Damyanovic at the University of Natural Resources and Life Sciences, Vienna, focuses on sustainability in urban planning and landscape planning.

She has a special interest in gender issues and climate-neutral cities. She is calling for a rethink in urban planning, with an expansion of green and open spaces.
‘The important thing is to consider gender, age, but also social and cultural background in local planning,’ she said.

‘We work on designing open public spaces with more trees or maybe to use blue infrastructure such as a water fountain,’ said Damyanovic. On hot days, water fountains could make cities more liveable by reducing temperatures.

A challenge for many European cities is to build affordable housing in locations where people want to live, with good transport links.

Dependable and affordable public transport can take people out of cars and reduce consumption of fossil fuels.

‘How can you have nice green areas, but keep housing affordable? This is always a big challenge,’ Damyanovic said.

People experience climate change differently according to their gender, age, ethnic and cultural backgrounds, noted Damyanovic. Those on low incomes, with health issues, a migratory background or a low level of education are especially dependent on climate resilient public spaces.

‘Women are not per se more vulnerable than men, but many of these vulnerability characteristics apply more frequently to them due to structure disadvantages,’ said Damyanovic.

‘Climate change has profound implications for gender equality and social justice,’ she said.

Dixson-Declève agrees that women are often bearing the brunt of climate change, while also taking leadership in terms of fighting for women’s rights and climate rights.

‘This is reflected in the youth movement today, where you see that it is being run not just by Greta [Thunberg], but also by many other young women,’ said Dixson-Declève.

Technical degrees

Dr Maria Luisa Hernandez Latorre is a Spanish industrial engineer who co-founded Ingelia in 2008, to build industrial plants that recover resources from waste biomass.

Often, this comprises leftovers from the food and beverage industries, agriculture and forestry residues and organic waste. The plants recover chemicals such as carbon, nitrogen and phosphorus. One byproduct is nutrient-rich water with potential for use by local farmers as a fertiliser.

In Hernandez Latorre’s industrial engineering course at the Polytechnic University of Valencia, women were few and far between. So too when she began her engineering career.

‘Most places I worked in, I was alone, or maybe with one other woman, along with 60 (men),’ she said. She points out that a technical background is very important in industry.

‘Take a look at who is managing companies, whether big or small,’ she said. ‘Most of them have a technical degree.’

Energy innovation

According to Eurostat, renewable energy made up 37% of gross electricity consumption in 2020, up from 34% in 2019. Greening the fuel supply is a major ambition for Europe.

Solar power is the fastest-growing sector, but it still has room to expand beyond the 14% share it provided in 2020.

‘Italy is a sunny place, and we should have more solar cells on our buildings,’ said Dr Alessandra Giannuzzi, an Italian physicist who carried out research on this technology at the University of Bologna.

She began her career with an interest in astrophysics. Following her degree, however, she devoted attention to practical problems in energy and the environment, by applying insights from optics in astronomy to solar concentrators.

These are mirror-like devices that concentrate sunlight onto a receiver which uses solar energy to generate electricity. ‘There are technological similarities between ground-based telescopes and some types of solar concentrators,’ said Giannuzzi.

She says part of the problem with the lack of women in physics lies with societal attitudes, including from women themselves.

‘A lot of people have said to me, “Oh, you studied physics, but you are a woman. No, I couldn’t do that. It is too complicated,”’ said Giannuzzi. ‘But this is a mental block. It is about intelligence and mental skills, and we are the same in that sense.’

As part of its commitment to promoting gender equality in research and innovation, last year the EU launched Women TechEU supporting 50 women-led tech start-ups with a budget of €3.8 million.

‘Women are excellent innovators. We really need to integrate women into all levels of companies,’ said Hernandez Latorre. The absence of women from technical projects and board rooms has negative repercussions for business.

Women can play a key role in ‘contributing to management bodies of companies to think out of the box, promote innovation and implement new ways of management,’ she said.

Huge change

‘On the energy transition, our perspective is that women can make a huge change,’ said Ioannis Konstas, the project manager for W4RES.

The goal of the project is to develop the role of women in the renewable heating and cooling market across Europe, through technical and business support. It also collects key data about women’s participation in the industry.

The role of women in the energy sector is evolving into that of ‘an entrepreneur, (a) person willing to pursue a career in the tech sector and make a significant change,’ he said.

Inclusivity is no longer a luxury; widespread acceptance of rapid change is essential. Recent events underline the feeling that ‘we have an elephant in the room’, said Konstas. The lack of women participating in key roles in the renewable energy sector is unsustainable.

The traditional model of for-profit management in the sector leaves other considerations behind. ‘Women tend to be more open-minded, more inclusive in their approach,’ said Konstas.

Dixson-Declève noted that while ‘gender equality is not at the level that it needs to be, it is getting better.’

She added that a more holistic, traditionally female approach to the European economy, adopted by men and women alike, is needed to shift away from power games and towards values that matter, such as the environment, health care, education and well-being.

A new study recently launched by the European Commission is designed to assess women’s participation in the field of green energy transition. It will help to determine ways to increase the role of women in the sector. Conducted by the Directorate General for Research and Innovation, the study will also aim to determine ways in which the demand for new skills in the energy sector can be met.

The research in this article was funded by the EU.



from ScienceBlog.com https://ift.tt/xTcmynK

With new industry, a new era for cities

Kista Science City, just north of Stockholm, is Sweden’s version of Silicon Valley. Anchored by a few big firms and a university, it has become northern Europe’s main high-tech center, with housing mixed in so that people live and work in the same general area.

Around the globe, a similar pattern is visible in many urban locales. Near MIT, Kendall Square, once home to manufacturing, has become a biotechnology and information technology hub while growing as a residential destination. Hamburg, Germany, has redeveloped part of its famous port with new business, recreation, and housing. The industrial area of Jurong, in Singapore, now features commerce, residential construction, parks, and universities. Even Brooklyn’s once-declining Navy Yard has become a mixed-use waterfront area.

In place after place, cities have developed key neighborhoods by locating 21st-century firms near residential dwellings and street-level commerce. Instead of heavy industry pushing residents out of cities, advanced manufacturing and other smaller-scale forms of business are drawing people back in, and “re-shaping the relationships between cities, people, and industry,” as MIT Professor Eran Ben-Joseph puts it in a new book co-authored with Tali Hatuka.

The book, “New Industrial Urbanism: Designing Places for Production,” was published this week by Routledge, providing a broad overview of a major trend in city form, from two experts in the field. Ben-Joseph is the Class of 1922 Professor of Landscape Architecture and Urban Planning at MIT; Hatuka is a planner, architect, and professor of urban planning and head of the Laboratory of Contemporary Urban Design at Tel Aviv University.

“New Industrial Urbanism is a socio-spatial concept which calls for reassessing and re-shaping the relationships between cities, people, and industry,” the authors write in the book. “It suggests shaping cities with a renewed understanding that an urban location and setting give industry a competitive advantage,” stemming from access to a skilled labor force, universities, and the effects of clustering industry firms together. 

As such, they add, “This concept calls for a [new] paradigm shift in the way we understand and address production in cities and regions.”

An opportunity to regenerate

In the book, Ben-Joseph and Hatuka place “new industrial urbanism” in contrast to earlier phases of city development. From about 1770 to 1880, in their outline, cities saw the emergence of heavy industry and smoke-spewing factories without much regard to planning.

Thus from about 1880 to 1970, some planners and architects began creating idealized forms for industrial cities and sometimes developed entirely planned industrial communities in exurban areas. By about 1970, though, a third phase took hold: deindustrialization, as residents started leaving older industrial cities en masse, while industry globalized and set up factories in some previously nonindustrial countries. Between 1979 and 2010, as Ben-Joseph and Hatuka note, the U.S. lost 41 percent of its manufacturing jobs.

In response to all this, the authors see new industrial urbanism as a fourth phase, in which city form and industry interact. The current moment, as they write, is characterized by “hybridity.” Because some forms of current industry feature cleaner and more sustainable production, formerly undesirable industrial zones can now contain a more appealing mix of advanced industry, commerce, residential units, educational and other research institutions, and recreation.

As punishing as the loss of manufacturing has been in the U.S. and other places, the emergence of higher-tech production represents “an opportunity to regenerate urban areas and redefine the role of industry in the city,” Ben-Joseph and Hatuka write.

As the authors detail, city leaders take differing approaches to the issue of revitalization. Some places feature clustering, building strength in one particular industry. This is true of Kendall Square with biotechnology, or even Wageningen, the “Food Valley” of the Netherlands, where scores of agribusiness firms have located within a compact region.

Other cities must more thoroughly reinvent a deindustrialized spot, as in Brooklyn, Hamburg, and Jurong, keeping some historic structures intact. And some places, including Barcelona and Portland, Oregon, have taken a hybrid approach within their own city limits, encouraging new businesses at many scales, and many forms of land use.  

As “New Industrial Urbanism” emphasizes, there is not one royal road toward rebuilding cities. In Munich, the headquarters of BMW rise up in a four-cylinder tower from the 1970s, a reference to the company’s vehicles. Next to the tower is a massive BMW assembly plant, sprawled out over many acres. Over time, residential growth has “gradually grown around the area,” as Ben-Joseph and Hatuka put it. Because the plant is geared toward assembly alone, not materials production, it is more environmentally feasible to see residential growth nearby. The outcome is viable industry juxtaposed with living areas. 

“Our book is trying to show the various ways by which cities can address the changing contemporary relationships between city and industry,” Ben-Joseph says. “The cases that we describe and the concepts that we put forward represent the growing recognition of the role industry plays in the world’s total economic activity. They teach us that industrial development is always contextual and culturally dependent, and it is these variants that contribute to the evolution of different types and forms of industrial ecosystems.”

Wearing it well

As Ben-Joseph and Hatuka also emphasize, the pursuit of industry to help rebuild cities does not have to focus strictly on high-tech firms. In Los Angeles’ Garment District, as the book also details, changes in zoning threatened to disperse a thriving, century-old cluster of manufacturers.

Those manufacturers soon banded into a productive business improvement district; policymakers saw the wisdom of a hybrid approach to zoning that let manufacturers stay in place. (“Like farmland, industrial land is hard to reclaim once replaced by other functions,” Ben-Joseph and Hatuka write.) As a result, about 4,000 garment manufacturers remain in Los Angeles, providing crucial income to communities that have long depended on it.

“Just as we often do with housing policies, it is essential that we design strategic land-use mechanisms that protect and enhance existing industrial uses within our cities,” Ben-Joseph adds. “Cases like downtown Los Angeles show that cities are beginning to recognize the value of centrally located industrial land and the need to address pressures to convert these areas to upscale housing and displace existing manufacturers.”

As a book, “New Industrial Urbanism” has been almost a decade in the making. Along the way, the authors hosted a symposium about the topic at MIT in 2014, and helped curate an exhibit at MIT’s Wolk Gallery on the subject the same year. Through support from MIT and Tel-Aviv University, the book is also available as an open access publication.

Experts in the field have praised “New Industrial Urbanism.” Karen Chapple, a professor and director of the University of Toronto’s School of Cities, has noted that while some people have “embraced the notion of advanced manufacturing locating in cities, the literature has lacked a compelling and detailed vision of what a new industrial urbanism would actually encompass. This comprehensive volume fills that gap, with a powerful visual analysis thoroughly grounded in economic theory and historical context.”

For his part, Ben-Joseph is pleased by the trends toward a new industrial urbanism in many parts of the globe.

“We have seen a lot of progress in most countries,” Ben-Joseph says.

Still, he observes, much more is possible, in the U.S. and beyond. As the authors write, “re-evaluating manufacturing should be a primary goal of planners, urban designers, and architects. Awareness of this goal is critical to the future development of cities worldwide.”



from ScienceBlog.com https://ift.tt/GfhWbBU

Solving the challenges of robotic pizza-making

Imagine a pizza maker working with a ball of dough. She might use a spatula to lift the dough onto a cutting board, then use a rolling pin to flatten it into a circle. Easy, right? Not if this pizza maker is a robot.

For a robot, working with a deformable object like dough is tricky because the shape of dough can change in many ways, which are difficult to represent with an equation. Plus, creating a new shape out of that dough requires multiple steps and the use of different tools. It is especially difficult for a robot to learn a manipulation task with a long sequence of steps — where there are many possible choices — since learning often occurs through trial and error.

Researchers at MIT, Carnegie Mellon University, and the University of California at San Diego have come up with a better way. They created a framework for a robotic manipulation system that uses a two-stage learning process, which could enable a robot to perform complex dough-manipulation tasks over a long timeframe. A “teacher” algorithm solves each step the robot must take to complete the task. Then, it trains a “student” machine-learning model that learns abstract ideas about when and how to execute each skill it needs during the task, like using a rolling pin. With this knowledge, the system reasons about how to execute the skills to complete the entire task.

The researchers show that this method, which they call DiffSkill, can perform complex manipulation tasks in simulations, like cutting and spreading dough, or gathering pieces of dough from around a cutting board, while outperforming other machine-learning methods.

Beyond pizza-making, this method could be applied in other settings where a robot needs to manipulate deformable objects, such as a caregiving robot that feeds, bathes, or dresses someone elderly or with motor impairments.

“This method is closer to how we as humans plan our actions. When a human does a long-horizon task, we are not writing down all the details. We have a higher-level planner that roughly tells us what the stages are and some of the intermediate goals we need to achieve along the way, and then we execute them,” says Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and author of a paper presenting DiffSkill.

Li’s co-authors include lead author Xingyu Lin, a graduate student at Carnegie Mellon University (CMU); Zhiao Huang, a graduate student at the University of California at San Diego; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences at MIT and a member of CSAIL; David Held, an assistant professor at CMU; and senior author Chuang Gan, a research scientist at the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Learning Representations.

Student and teacher

The “teacher” in the DiffSkill framework is a trajectory optimization algorithm that can solve short-horizon tasks, where an object’s initial state and target location are close together. The trajectory optimizer works in a simulator that models the physics of the real world (known as a differentiable physics simulator, which puts the “Diff” in “DiffSkill”). The “teacher” algorithm uses the information in the simulator to learn how the dough must move at each stage, one at a time, and then outputs those trajectories.

Then the “student” neural network learns to imitate the actions of the teacher. As inputs, it uses two camera images, one showing the dough in its current state and another showing the dough at the end of the task. The neural network generates a high-level plan to determine how to link different skills to reach the goal. It then generates specific, short-horizon trajectories for each skill and sends commands directly to the tools.
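DiffSkill itself couples a differentiable physics simulator with deep networks trained on camera images; purely as an illustration of the two-stage teacher–student pattern described above, here is a toy sketch. The scalar “dough” dynamics, the closed-form teacher, and the linear student are all invented stand-ins, not the paper’s method.

```python
import numpy as np

# Toy "physics": the dough state is a scalar position; an action moves it.
def step(state, action):
    return state + action

# "Teacher": solves a short-horizon task. In this linear toy world the
# optimal action has a closed form; DiffSkill instead runs trajectory
# optimization through a differentiable physics simulator.
def teacher_solve(state, goal):
    return goal - state

# Stage 1: the teacher generates demonstrations for many (state, goal) pairs.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=200)
goals = rng.uniform(-1, 1, size=200)
actions = np.array([teacher_solve(s, g) for s, g in zip(states, goals)])

# Stage 2: the "student" learns to imitate the teacher from (state, goal)
# inputs alone -- a least-squares linear model stands in for the neural
# network that DiffSkill trains.
X = np.column_stack([states, goals, np.ones_like(states)])
weights, *_ = np.linalg.lstsq(X, actions, rcond=None)

# The trained student can now act without calling the optimizer.
def student_act(state, goal):
    return np.array([state, goal, 1.0]) @ weights

new_state = step(0.2, student_act(0.2, 0.9))
print(round(float(new_state), 3))  # the student reaches the goal: 0.9
```

The payoff of the pattern is visible even in the toy: once trained, the student acts from its inputs directly, with no optimizer in the loop.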

The researchers used this technique to experiment with three different simulated dough-manipulation tasks. In one task, the robot uses a spatula to lift dough onto a cutting board, then uses a rolling pin to flatten it. In another, the robot uses a gripper to gather dough from all over the counter, places it on a spatula, and transfers it to a cutting board. In the third task, the robot cuts a pile of dough in half using a knife and then uses a gripper to transport each piece to different locations.

A cut above the rest

DiffSkill was able to outperform popular techniques that rely on reinforcement learning, where a robot learns a task through trial and error. In fact, DiffSkill was the only method that was able to successfully complete all three dough manipulation tasks. Interestingly, the researchers found that the “student” neural network was even able to outperform the “teacher” algorithm, Lin says.

“Our framework provides a novel way for robots to acquire new skills. These skills can then be chained to solve more complex tasks which are beyond the capability of previous robot systems,” says Lin.

Because their method focuses on controlling the tools (spatula, knife, rolling pin, etc.), it could be applied to different robots, but only if they use the specific tools the researchers defined. In the future, they plan to integrate the shape of a tool into the reasoning of the “student” network so it could be applied to other equipment.

The researchers intend to improve the performance of DiffSkill by using 3D data as inputs, instead of images that can be difficult to transfer from simulation to the real world. They also want to make the neural network planning process more efficient and collect more diverse training data to enhance DiffSkill’s ability to generalize to new situations. In the long run, they hope to apply DiffSkill to more diverse tasks, including cloth manipulation.

This work is supported, in part, by the National Science Foundation, LG Electronics, the MIT-IBM Watson AI Lab, the Office of Naval Research, and the Defense Advanced Research Projects Agency Machine Common Sense program.



from ScienceBlog.com https://ift.tt/QVJ0kaU

Enjoy your grubs: How nuclear winter could impact food production

Eating palm weevil grubs like these would be one way to get protein into the diet of people in the tropics in a nuclear winter, according to the researchers. Credit: GettyImages Biggereye. All Rights Reserved.

The day after lead author Daniel Winstead approved the final proofs for a study to be published in Ambio, the journal of the Royal Swedish Academy of Sciences, Russia put its nuclear forces on high alert.

“In no way, shape or form had I thought that our work — ‘Food Resilience in a Dark Catastrophe: A new Way of Looking at Tropical Wild Edible Plants’ — would be immediately relevant while we were working on it,” said the research technologist in Penn State’s College of Agricultural Sciences. “In the short term, I viewed it as an abstract concept.”

Winstead and study co-author Michael Jacobson, professor of forest resources, had to look back at the Cold War era to get information for their review.

“So, it did not enter my mind that it would be something that could happen anytime soon,” Winstead said. “This paper was published during this latest invasion by Russia into Ukraine, but our work on it began two years ago. The idea that nuclear war could break out now was unthinkable to me.”

The research acknowledges what has been widely agreed upon for decades: In higher latitude countries — such as nuclear powers the U.S. and Russia — there would be no agricultural production and little food gathering possible in a nuclear winter after an all-out conflagration. If warring countries unleashed large portions of their nuclear arsenals, the resulting global, sun-blocking cloud would turn the ground to permafrost.

A nuclear war would cause global blockage of the sun for several years due to injections of black carbon soot into the upper atmosphere, covering most of the planet with black clouds, the researchers said. Computer models predict that a large nuclear war, primarily between Russia and the U.S., could inject upwards of 165 million tons of soot into the upper atmosphere from more than 4,000 nuclear bomb explosions and ensuing wildfires.

Such a nuclear war could result in less than 40% of normal light levels near the equator and less than 5% normal light levels near the poles, with freezing temperatures in most temperate regions and severe precipitation reductions — just half of the worldwide average — according to the study. Post-catastrophe conditions, which could last 15 years in some wet tropical forests such as those in the Congo and Amazon basins, could cause a 90% reduction in precipitation for several years after such an event.

But tropical forests would offer an opportunity for limited food production and gathering by local inhabitants because, despite the dense soot clouds, the region would be warmer. In the study, researchers classified wild, edible plants into seven main categories, augmented by forest insects: fruits, leafy vegetables, seeds/nuts, roots, spices, sweets and protein.

In a nuclear winter, the study shows, the following foods would be available in varying degrees in tropical forests: konjac, cassava, wild oyster mushroom, safou, wild spinaches, vegetable amaranths, palms, mopane worm, dilo, tamarind, baobab, enset, acacias, yam and palm weevil.

The researchers chose 33 wild, edible plants from a list of 247 and considered their potential for cultivation in tropical forests in post-nuclear war conditions. Their selections were complicated by the fact that in the tropics there are relatively few food-bearing plants that are both drought tolerant and shade or low-light tolerant.

Post-catastrophe conditions would be unlivable for humans in many areas around the world, and agriculture may not be possible, the researchers concluded. This study shows how just a few of the many tropical wild, edible plants and insects could be used for short-term emergency food cultivation and foraging after an atmospheric soot injection from a catastrophic event such as a nuclear war.

The world’s tropical forests hold many underutilized crops and resources, Jacobson pointed out. This study offers a new perspective on global food security and resilience using forest foods, along with policy and preparedness recommendations.

“But regardless of the risk of nuclear war, there are numerous other existential threats, not least being climate change,” he said. “Meeting food security — and nutrition — in the face of any of these risks is clearly one of humanity’s major challenges over the next decades. To that end, it is imperative that we better understand our food production, supply and value chains to make them less vulnerable and more adaptable in times of crises.”

This study is part of a much larger project, “Research on Emergency Food Resilience,” underway at Penn State. Open Philanthropy provided funding for this work.



from ScienceBlog.com https://ift.tt/byZT4GU

Oleic acid, a key to activating the brain’s ‘fountain of youth’

Many people dread experiencing the cognitive and mood declines that often accompany reaching an advanced age, including memory disorders such as Alzheimer’s disease and mood conditions like depression. At Baylor College of Medicine and the Jan and Dan Duncan Neurological Research Institute (Duncan NRI) at Texas Children’s Hospital researchers have been investigating new ways to prevent or treat these and other related conditions.

In a study published in the Proceedings of the National Academy of Sciences, the team reports a missing piece of the puzzle of how memory and mood are sustained and regulated in the brain. Their findings may inform potential new therapeutic strategies to counteract cognitive and mood decline in patients with neurological disorders.

Neurogenesis: The brain’s ‘fountain of youth’

“Years ago, scientists thought that the adult mammalian brain was not able to repair and regenerate. But research has shown that some brain regions have the capacity of generating new neurons, a process called neurogenesis. The hippocampus region of the adult mammalian brain has the ongoing capacity to form new neurons, to repair and regenerate itself, enabling learning and memory and mood regulation during the adult life,” said co-corresponding author Dr. Mirjana Maletic-Savatic, associate professor of pediatrics and neurology at Baylor and Texas Children’s and an investigator at the Duncan NRI.

“Ever since neurogenesis was discovered, it has been envisioned as ‘the fountain of youth.’ But, with increasing age, in certain diseases or after exposure to certain drugs or insults, neurogenesis decreases and this has been associated with cognitive decline and depression,” said Maletic-Savatic.

In this study, the team searched for a way to tap into the fountain of youth, to reignite the process of neurogenesis to prevent its decline or restore it.

“We knew that neurogenesis has a ‘master regulator,’ a protein within neural stem cells called TLX that is a major player in the birth of new neurons. We however did not know what stimulated TLX to do that. Nobody knew how to activate TLX,” said co-corresponding author Dr. Damian Young, associate professor of pharmacology and chemical biology and of pathology and immunology at Baylor and Texas Children’s and member of Baylor’s Dan L Duncan Comprehensive Cancer Center. Young also is associate director of the Center for Drug Discovery at Baylor.

Oleic acid activates neurogenesis

“We discovered that a common fatty acid called oleic acid binds to TLX and this increases cell proliferation and neurogenesis in the hippocampus of both young and old mice,” said co-first author Dr. Prasanna Kandel, who was in the graduate program of Integrative Molecular and Biomedical Sciences at Baylor while working on this project. “This oleic acid is produced within the neural stem cells in order to activate TLX.”

While oleic acid is also the major component of olive oil, olive oil itself would not be an effective source because dietary oleic acid would likely not reach the brain, the researchers explained. It must be produced by the cells themselves.

TLX is now a druggable target

The finding that oleic acid regulates TLX activation has major therapeutic implications. “TLX has become a ‘druggable’ target, meaning that knowing how it is activated naturally in the brain helps us to develop drugs capable of entering the brain and stimulating neurogenesis,” Young said.

“This is incredibly exciting because this strategy could potentially provide a new way of treating Alzheimer’s disease and depression, debilitating diseases in need of effective treatments,” said Young.

“Besides the scientific progress, I am hopeful that the current findings and ongoing related work will have real impact on people who are in need of improved and effective therapies, like my mother who suffers from clinical depression,” Kandel said.


Other contributors to this work include Fatih Semerci, Rachana Mishra, William Choi, Aleksandar Bajic, Dodge Baluya, LiHua Ma, Kevin Chen, Austin Cao, Tipwarin Phongmekhin, Nick Matinyan, Alba Jiménez-Panizo, Srinivas Chamakuri, Idris O. Raji, Lyra Chang, Pablo Fuentes-Prior, Kevin R. MacKenzie, Caroline L. Benn, Eva Estébanez-Perpiñá, Koen Venken and David D. Moore. The authors are affiliated with one or more of the following institutions: Baylor College of Medicine, Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, University of Texas MD Anderson Cancer Center, Houston; Rice University, Universidad de Barcelona, Biomedical Research Institute Sant Pau and Pfizer Regenerative Medicine.

This project was supported by the Cancer Prevention and Research Institute of Texas (CPRIT) Core Facility Support Award (CPRIT-RP180672), the NIH (CA125123 and RR024574) and the BCM IDDRC Grant (P50HD10355) from the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The work was partially supported by Baylor College of Medicine start-up funds, the Albert and Margaret Alkek Foundation, the McNair Medical Institute at the Robert and Janice McNair Foundation, the CPRIT grant R1313 (V.K.); the R. P. Doherty, Jr. Welch Chair in Science (Q-0022, D.D.M.), BCM Seed Funding 1P20CA221731-01A1 (D.W.Y.); and NIGMS R01 GM120033. Further support was provided by Cynthia and Antony Petrello Endowment, Mark A. Wallace Endowment, McKnight Foundation, Dana Foundation, and BCM Computational and Integrative Biomedical Research Center seed grant.

By Ana María Rodríguez, Ph.D.



from ScienceBlog.com https://ift.tt/ypc0oZn

Fighting discrimination in mortgage lending

Although the U.S. Equal Credit Opportunity Act prohibits discrimination in mortgage lending, biases still impact many borrowers. One 2021 Journal of Financial Economics study found that borrowers from minority groups were charged interest rates that were nearly 8 percent higher and were rejected for loans 14 percent more often than those from privileged groups.

When these biases bleed into machine-learning models that lenders use to streamline decision-making, they can have far-reaching consequences for housing fairness and even contribute to widening the racial wealth gap.

If a model is trained on an unfair dataset, such as one in which a higher proportion of Black borrowers were denied loans versus white borrowers with the same income, credit score, etc., those biases will affect the model’s predictions when it is applied to real situations. To stem the spread of mortgage lending discrimination, MIT researchers created a process that removes bias in data that are used to train these machine-learning models.

While other methods try to tackle this bias, the researchers’ technique is new in the mortgage lending domain because it can remove bias from a dataset that has multiple sensitive attributes, such as race and ethnicity, as well as several “sensitive” options for each attribute, such as Black or white, and Hispanic or Latino or non-Hispanic or Latino. Sensitive attributes and options are features that distinguish a privileged group from an underprivileged group.

The researchers used their technique, which they call DualFair, to train a machine-learning classifier that makes fair predictions of whether borrowers will receive a mortgage loan. When they applied it to mortgage lending data from several U.S. states, their method significantly reduced the discrimination in the predictions while maintaining high accuracy.

“As Sikh Americans, we deal with bias on a frequent basis and we think it is unacceptable to see that transform to algorithms in real-world applications. For things like mortgage lending and financial systems, it is very important that bias not infiltrate these systems because it can emphasize the gaps that are already in place against certain groups,” says Jashandeep Singh, a senior at Floyd Buchanan High School and co-lead author of the paper with his twin brother, Arashdeep. The Singh brothers were recently accepted into MIT.

Joining Arashdeep and Jashandeep Singh on the paper are MIT sophomore Ariba Khan and senior author Amar Gupta, a researcher in the Computer Science and Artificial Intelligence Laboratory at MIT, who studies the use of evolving technology to address inequity and other societal issues. The research was recently published online and will appear in a special issue of Machine Learning and Knowledge Extraction.

Double take

DualFair tackles two types of bias in a mortgage lending dataset — label bias and selection bias. Label bias occurs when the balance of favorable or unfavorable outcomes for a particular group is unfair. (Black applicants are denied loans more frequently than they should be.) Selection bias is created when data are not representative of the larger population. (The dataset only includes individuals from one neighborhood where incomes are historically low.)

The DualFair process eliminates label bias by subdividing a dataset into the largest number of subgroups based on combinations of sensitive attributes and options, such as white men who are not Hispanic or Latino, Black women who are Hispanic or Latino, etc.

By breaking down the dataset into as many subgroups as possible, DualFair can simultaneously address discrimination based on multiple attributes.

“Researchers have mostly tried to classify biased cases as binary so far. There are multiple parameters to bias, and these multiple parameters have their own impact in different cases. They are not equally weighed. Our method is able to calibrate it much better,” says Gupta.

After the subgroups have been generated, DualFair evens out the number of borrowers in each subgroup by duplicating individuals from minority groups and deleting individuals from the majority group. DualFair then balances the proportion of loan acceptances and rejections in each subgroup so they match the median in the original dataset before recombining the subgroups.
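The paper’s exact resampling rules are not spelled out in this article; as a hedged sketch of the idea just described (equalizing subgroup sizes by duplicating rows from underrepresented subgroups and deleting rows from overrepresented ones), with all names and the tiny dataset invented for illustration:

```python
import random

random.seed(0)

# Each row: (subgroup key, accepted?). In DualFair, subgroup keys come from
# combinations of sensitive attributes and options, e.g.
# ("white", "non-Hispanic", "male").
data = [("A", True)] * 60 + [("A", False)] * 20 + \
       [("B", True)] * 10 + [("B", False)] * 10

def rebalance(rows):
    """Equalize subgroup sizes by up/down-sampling -- a simplification of
    DualFair's label-bias step (it additionally balances each subgroup's
    acceptance rate against the original dataset's median)."""
    groups = {}
    for key, label in rows:
        groups.setdefault(key, []).append((key, label))
    target = sum(len(g) for g in groups.values()) // len(groups)
    out = []
    for g in groups.values():
        if len(g) < target:                 # duplicate minority-subgroup rows
            g = g + random.choices(g, k=target - len(g))
        elif len(g) > target:               # delete majority-subgroup rows
            g = random.sample(g, target)
        out.extend(g)
    return out

balanced = rebalance(data)
sizes = {k: sum(1 for key, _ in balanced if key == k) for k in ("A", "B")}
print(sizes)  # each subgroup now has the same number of rows: 50 and 50
```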

DualFair then eliminates selection bias by iterating on each data point to see if discrimination is present. For instance, if an individual is a non-Hispanic or Latino Black woman who was rejected for a loan, the system will adjust her race, ethnicity, and gender one at a time to see if the outcome changes. If this borrower is granted a loan when her race is changed to white, DualFair considers that data point biased and removes it from the dataset.
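The counterfactual test above can be illustrated with a short sketch. The stand-in classifier below is deliberately biased so the flip is visible; in a real audit the trained lending model would be queried instead, and every name here is invented:

```python
# A deliberately biased stand-in classifier: it scores on income but
# penalizes one race value (hypothetical; for illustration only).
def biased_model(applicant):
    score = applicant["income"] - (20 if applicant["race"] == "black" else 0)
    return score >= 50

SENSITIVE = {"race": ["black", "white"],
             "ethnicity": ["hispanic", "non-hispanic"],
             "gender": ["female", "male"]}

def is_biased_point(applicant):
    """Flip each sensitive attribute one at a time; if the decision changes,
    treat the data point as biased and remove it (DualFair's selection-bias
    step, simplified)."""
    base = biased_model(applicant)
    for attr, values in SENSITIVE.items():
        for value in values:
            if value != applicant[attr]:
                flipped = dict(applicant, **{attr: value})
                if biased_model(flipped) != base:
                    return True
    return False

applicant = {"income": 60, "race": "black",
             "ethnicity": "non-hispanic", "gender": "female"}
print(is_biased_point(applicant))  # True: changing race alone flips the decision
```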

Fairness vs. accuracy

To test DualFair, the researchers used the publicly available Home Mortgage Disclosure Act dataset, which spans 88 percent of all mortgage loans in the U.S. in 2019, and includes 21 features, including race, sex, and ethnicity. They used DualFair to “de-bias” the entire dataset and smaller datasets for six states, and then trained a machine-learning model to predict loan acceptances and rejections.

After applying DualFair, the fairness of predictions increased while the accuracy level remained high across all states. They used an existing fairness metric known as average odds difference, but it can only measure fairness in one sensitive attribute at a time.
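Average odds difference is a standard group-fairness metric: the mean of the gaps in false-positive rate and true-positive rate between the unprivileged and privileged groups, with zero meaning balanced error rates. A minimal sketch (the toy arrays are invented for illustration):

```python
import numpy as np

def average_odds_difference(y_true, y_pred, privileged):
    """Mean of (FPR_unpriv - FPR_priv) and (TPR_unpriv - TPR_priv).
    Zero means the classifier's error rates are balanced across groups."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    privileged = np.asarray(privileged, dtype=bool)

    def rates(mask):
        t, p = y_true[mask], y_pred[mask]
        tpr = p[t == 1].mean() if (t == 1).any() else 0.0
        fpr = p[t == 0].mean() if (t == 0).any() else 0.0
        return tpr, fpr

    tpr_u, fpr_u = rates(~privileged)
    tpr_p, fpr_p = rates(privileged)
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))

# Privileged group: perfect predictions. Unprivileged group: qualified
# applicants (y_true == 1) are approved only half the time.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
priv   = [1, 1, 1, 1, 0, 0, 0, 0]
print(average_odds_difference(y_true, y_pred, priv))  # -0.25
```

As the article notes, a metric of this shape handles only one sensitive attribute at a time, which is what motivated the authors’ multi-attribute “alternate world index.”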

So, they created their own fairness metric, called alternate world index, that considers bias from multiple sensitive attributes and options as a whole. Using this metric, they found that DualFair increased fairness in predictions for four of the six states while maintaining high accuracy.

“It is the common belief that if you want to be accurate, you have to give up on fairness, or if you want to be fair, you have to give up on accuracy. We show that we can make strides toward lessening that gap,” Khan says.

The researchers now want to apply their method to de-bias different types of datasets, such as those that capture health care outcomes, car insurance rates, or job applications. They also plan to address limitations of DualFair, including its instability when there are small amounts of data with multiple sensitive attributes and options.

While this is only a first step, the researchers are hopeful their work can someday have an impact on mitigating bias in lending and beyond.

“Technology, very bluntly, works only for a certain group of people. In the mortgage loan domain in particular, African American women have been historically discriminated against. We feel passionate about making sure that systemic racism does not extend to algorithmic models. There is no point in making an algorithm that can automate a process if it doesn’t work for everyone equally,” says Khan.

This research is supported, in part, by the FinTech@CSAIL initiative.



from ScienceBlog.com https://ift.tt/9Feigy7

New method purifies hydrogen from heavy carbon monoxide mixtures

Chris Arges (right), Penn State associate professor of chemical engineering, proposes using high-temperature proton-selective polymer electrolyte membranes, or PEMs, to separate hydrogen from other gases in an ACS Energy Letters paper. Co-author Deepra Bhattacharya, Penn State doctoral student in chemical engineering, is seen at left. Credit: Kelby Hochreither/Penn State. All Rights Reserved.

Refining metals, manufacturing fertilizers and powering fuel cells for heavy vehicles are all processes that require purified hydrogen. But purifying, or separating, that hydrogen from a mix of other gases can be difficult, with several steps. A research team led by Chris Arges, Penn State associate professor of chemical engineering, demonstrated that the process can be simplified using a pump outfitted with newly developed membrane materials.

The researchers used an electrochemical hydrogen pump to both separate and compress hydrogen, achieving an 85% recovery rate from fuel gas mixtures known as syngas and a 98.8% recovery rate from a conventional water gas shift reactor exit stream — the highest value recorded. The team detailed their approach in ACS Energy Letters.

Traditional methods for hydrogen separation employ a water gas shift reactor, which involves an extra step, according to Arges. The water gas shift reactor first converts carbon monoxide into carbon dioxide, which is then removed in an absorption process, leaving the hydrogen behind. Then, the purified hydrogen is pressurized using a compressor for immediate use or for storage.

The key, Arges said, is to use high-temperature, proton-selective polymer electrolyte membranes, or PEMs, which can separate hydrogen from carbon dioxide, carbon monoxide and other gas molecules quickly and cost-effectively. The electrochemical pump, equipped with the PEM and other new materials Arges developed, is more efficient than conventional methods because it simultaneously separates and compresses hydrogen from gas mixtures. It also can operate at temperatures of 200 to 250 degrees Celsius — 20 to 70 degrees higher than other high-temperature-PEM-type electrochemical pumps — which improves its ability to separate hydrogen from the unwanted gases.

“This is an effective and potentially cost saving way to purify hydrogen, especially when there is a large carbon monoxide content,” Arges said. “No one has ever purified hydrogen to this extent with a gas feed that contained more than 3% of carbon monoxide using an electrochemical hydrogen pump, and we achieved it with mixtures that consist of up to 40% carbon monoxide by using a relatively new class of high-temperature PEM and electrode ionomer binder materials.”

To carry out the separation, Arges’ team created an electrode “sandwich,” where electrodes with opposing charges form the “bread” and the membrane is the “deli meat.” The electrode ionomer binder materials are designed to keep the electrodes together, like the gluten of the bread.

In the pump, the positively charged electrode, or one bread slice, breaks each hydrogen molecule down into two protons and two electrons. The protons pass through the membrane, or deli meat, to the negatively charged electrode, while the electrons travel externally through a wire that touches the positively charged electrode. At the negatively charged electrode, the protons recombine with the electrons to re-form the hydrogen.

The PEM works by permitting the passage of protons but preventing the larger molecules of carbon monoxide, carbon dioxide, methane and nitrogen from coming through, according to Arges. For the electrodes to work effectively in the hydrogen pump, Arges and his team synthesized a special phosphonic acid ionomer binder that acts as an adhesive to keep the electrode particles together.

“The binder is effective for making a mechanically robust, porous electrode that permits gas transport so hydrogen can react on the electrocatalyst surface while also shuttling protons to and from the membrane,” Arges said.

The researchers plan to investigate how their approach and tools will aid in purifying hydrogen when stored in existing natural gas pipelines. Distributing and storing hydrogen in this manner has never been accomplished, but holds great interest, according to Arges. He explained that hydrogen could aid in generating electric power via a fuel cell or turbine generator to support solar or wind energy-based systems and a variety of more sustainable applications.

“The challenge is that hydrogen has to be stored at low concentrations in the pipeline — less than 5% — because it can degrade the pipeline, but end-use applications require more than 99% pure hydrogen,” Arges said.

Arges filed two U.S. patent applications on components used in this research while he was on faculty at Louisiana State University. One is on high-temperature PEMs, and the other is on the electrochemical hydrogen pump using the high-temperature PEMs and phosphonic acid ionomer electrode binder. He is currently licensing the technology for a start-up company he co-founded with his wife, Hiral Arges, called Ionomer Solutions LLC.

Deepra Bhattacharya, Penn State doctoral student in chemical engineering, co-authored the paper. Other contributors include Gokul Venugopalan, postdoctoral researcher in the Chemistry and Nanoscience Research Center at the National Renewable Energy Laboratory in Golden, Colorado, and former doctoral student of Arges; and Evan Andrews, Luis Briceno-Mena, José Romagnoli and John Flake, chemical engineering researchers from Louisiana State University.

The U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy funded this work.



from ScienceBlog.com https://ift.tt/ojebF3g

Hundreds of new mammal species waiting to be found

At least hundreds of so-far unidentified species of mammals are hiding in plain sight around the world, a new study suggests.

Researchers found that most of these hidden mammals are small-bodied; many of them are bats, rodents, shrews, and moles.

These unknown mammals are hidden in plain sight partly because most are small and look so much like known animals that biologists have not been able to recognize that they are actually different species, said study co-author Bryan Carstens, a professor of evolution, ecology and organismal biology at The Ohio State University.

“Small, subtle differences in appearance are harder to notice when you’re looking at a tiny animal that weighs 10 grams than when you’re looking at something that is human-sized,” Carstens said.

“You can’t tell they are different species unless you do a genetic analysis.”

The study was published today (March 28, 2022) in the journal Proceedings of the National Academy of Sciences.

The team, led by Ohio State graduate student Danielle Parsons, used a supercomputer and machine-learning techniques to analyze millions of publicly available gene sequences from 4,310 mammal species, as well as data on where the animals live, their environment, life history and other relevant information.

This allowed them to build a predictive model to identify the taxa of mammals that are likely to contain hidden species.
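The article does not describe the model's internals, but the general idea of trait-based prediction can be sketched with a toy logistic score. The weights, trait names, and example values below are invented for illustration:

```python
import math

# Hypothetical sketch of trait-based prediction. The study's actual model is
# far richer; here, invented weights map three species-level traits to a
# probability that a named species conceals unrecognized lineages.
WEIGHTS = {
    "log_body_mass_g": -0.8,   # smaller-bodied taxa score higher
    "log_range_km2": 0.6,      # wider geographic ranges score higher
    "temp_variability": 0.5,   # more climatic variability scores higher
}
BIAS = -0.5

def hidden_species_score(traits):
    """Logistic score in (0, 1); higher = more likely to hide cryptic species."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in traits.items())
    return 1.0 / (1.0 + math.exp(-z))

# A tiny wide-ranging shrew scores high; a large, well-studied mammal scores low.
shrew = {"log_body_mass_g": 1.0, "log_range_km2": 3.0, "temp_variability": 2.0}
elephant = {"log_body_mass_g": 6.5, "log_range_km2": 2.0, "temp_variability": 1.0}
print(hidden_species_score(shrew), hidden_species_score(elephant))
```

The signs of the invented weights mirror the study's reported findings: small body size, wide ranges, and climatic variability all push the score up.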

“Based on our analysis, a conservative estimate would be that there are hundreds of species of mammals worldwide that have yet to be identified,” Carstens said.

That finding, in itself, would not be surprising to biologists, he said. Only an estimated 1 to 10% of Earth’s species have been formally described by researchers.

“What we did that was new was predict where these new species are most likely to be found,” Carstens said.

Results showed unidentified species are most likely to be found in the families of small-bodied animals, such as bats and rodents.

The researchers’ model also predicted hidden species would most likely be found in species that have wider geographic ranges with higher variability in temperature and precipitation.

Many of the hidden species are also likely to occur in tropical rain forests, which is not surprising because that’s where most mammal species occur.

But many unidentified species are also likely living here in the United States, Carstens said.  His lab has identified some of them.  For example, in 2018, Carstens and his then-graduate student Ariadna Morales published a paper showing that the little brown bat, found in much of North America, is actually five different species.

That study also showed a key reason why it is important to identify new species.  One of the newly delimited bats had a very narrow range where it lived, just around the Great Basin in Nevada – making its protection especially critical.

“That knowledge is important to people who are doing conservation work. We can’t protect a species if we don’t know that it exists.  As soon as we name something as a species, that matters in a lot of legal and other ways,” Carstens said.

Based on the results of this study, Carstens estimates that somewhere near 80% of mammal species worldwide have been identified.

“The shocking thing is that mammals are very well described compared to beetles or ants or other types of animals,” he said.

“We know a lot more about mammals than many other animals because they tend to be larger and are more closely related to humans, which makes them more interesting to us.”

The study was supported by the National Science Foundation and the Ohio Supercomputer Center.

Other co-authors were Tara Pelletier, assistant professor of biology at Radford University; and Jamin Wieringa and Drew Duckett, graduate students at Ohio State.



from ScienceBlog.com https://ift.tt/rAYcvpd

Fighting cancer with sound-controlled bacteria

Since its invention, chemotherapy has proven to be a valuable tool in treating cancers of many kinds, but it has a big downside. In addition to killing cancer cells, it can also kill healthy cells like the ones in hair follicles, causing baldness, and those that line the stomach, causing nausea.

Scientists at Caltech may have a better solution: genetically engineered, sound-controlled bacteria that seek and destroy cancer cells. In a new paper appearing in the journal Nature Communications, researchers from the lab of Mikhail Shapiro, professor of chemical engineering and Howard Hughes Medical Institute investigator, show how they have developed a specialized strain of the bacteria Escherichia coli (E. coli) that seeks out and infiltrates cancerous tumors when injected into a patient’s body. Once the bacteria have arrived at their destination, they can be triggered to produce anti-cancer drugs with pulses of ultrasound.

“The goal of this technology is to take advantage of the ability of engineered probiotics to infiltrate tumors, while using ultrasound to activate them to release potent drugs inside the tumor,” Shapiro says.

The starting point for their work was a strain of E. coli called Nissle 1917, which is approved for medical uses in humans. After being injected into the bloodstream, these bacteria spread throughout the body. The patient’s immune system then destroys them—except for those bacteria that have colonized cancerous tumors, which offer an immunosuppressed environment.

To turn the bacteria into a useful tool for treating cancer, the team engineered them to contain two new sets of genes. One set of genes is for producing nanobodies, which are therapeutic proteins that turn off the signals a tumor uses to prevent an anti-tumor response by the immune system. The presence of these nanobodies allows the immune system to attack the tumor. The other set of genes acts as a thermal switch, turning the nanobody genes on when the bacteria reach a specific temperature.

By inserting the temperature-dependent and nanobody genes, the team was able to create strains of bacteria that only produced the tumor-suppressing nanobodies when warmed to a trigger temperature of 42–43 degrees Celsius. Since normal human body temperature is 37 degrees Celsius, these strains do not begin producing their anti-tumor nanobodies when injected into a person. Instead, they quietly grow inside the tumors until an outside source heats them to their trigger temperature.
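The article gives only the activation window (42–43 degrees Celsius), not the circuit's actual dose-response, but the switch-like behavior can be sketched as a steep sigmoid of temperature. The threshold and steepness parameters below are hypothetical:

```python
# Hypothetical sketch of the thermal switch: expression stays near zero at
# body temperature (37 C) and turns on sharply in the 42-43 C window. A
# steep Hill-type sigmoid stands in for the real genetic circuit, whose
# parameters the article does not give.
def nanobody_expression(temp_c, threshold_c=42.5, steepness=40.0):
    """Relative expression in [0, 1) as a steep sigmoid of temperature."""
    ratio = (temp_c / threshold_c) ** steepness
    return ratio / (1.0 + ratio)

for t in (37.0, 41.0, 42.5, 43.5):
    print(f"{t:.1f} C -> expression {nanobody_expression(t):.3f}")
```

The key design property is the near-zero output at 37 degrees Celsius: the payload stays off everywhere in the body except where focused heating pushes tissue past the threshold.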

But how do you heat bacteria that are located in one specific location, potentially deep inside the body where a tumor is growing? For this, the team used focused ultrasound (FUS). FUS is similar to the ultrasound used for imaging internal organs, or a fetus growing in the womb, but has higher intensity and is focused into a tight point. Focusing the ultrasound on one spot causes the tissue in that location to heat up, but not the tissue surrounding it; by controlling the intensity of the ultrasound, the researchers were able to raise the temperature of that tissue to a specific degree.

“Focused ultrasound allowed us to activate the therapy specifically inside a tumor,” says Mohamad Abedi (PhD ’21), a former PhD student in Shapiro’s group who co-led the project and is now a postdoctoral fellow at the University of Washington. “This is important because these potent drugs, which are so helpful in tumor treatment, can cause significant side effects in other organs where our bacterial agents may also be present.”

To test whether their engineered strain of bacteria worked as intended, the research team injected bacterial cells into lab mice afflicted with tumors. After giving the bacteria time to infiltrate the tumors, the team used ultrasound to warm them.

Through a series of trials, the researchers found that mice treated with this strain of bacteria and ultrasound showed much slower tumor growth than mice treated only with ultrasound, mice treated only with the bacteria, and mice that were not treated at all.

However, the team also found that some of the tumors in treated mice did not shrink at all.

“This is a very promising result because it shows that we can target the right therapy to the right place at the right time,” Shapiro says. “But as with any new technology there are a few things to optimize, including adding the ability to visualize the bacterial agents with ultrasound before we activate them and targeting the heating stimuli to them more precisely.”

The researchers’ paper, “Ultrasound-controllable engineered bacteria for cancer immunotherapy,” appears in the March 24 issue of Nature Communications. Shapiro’s and Abedi’s co-authors include Michael S. Yao (BS ’21), formerly of Caltech and now at the University of Pennsylvania, who is co-lead author; David R. Mittelstein (MS ’16, PhD ’20), formerly of Caltech and now at UC San Diego; Avinoam Bar Zion, visitor in chemical engineering at Caltech; Margaret B. Swift of the Howard Hughes Medical Institute; Audrey Lee-Gosselin, formerly of Caltech and now at the Indiana University School of Medicine; Pierina Barturen-Larrea, research technician in Caltech’s Division of Chemistry and Chemical Engineering; and Marjorie T. Buss, graduate student in chemical engineering.

Funding for the research was provided by the Sontag Foundation, the Army Institute for Collaborative Biotechnologies, and the Defense Advanced Research Projects Agency.



from ScienceBlog.com https://ift.tt/ibSoJzc

Scorpions’ venomous threat to mammals a relatively new evolutionary step

Despite their reputation as living fossils, scorpions have remained evolutionarily nimble — especially in developing venom to fend off the rise of mammal predators. A new genetic analysis of scorpions’ toxin-making reveals recent evolutionary steps and may actually be a boon for researchers studying scorpion venom’s benefits to human health.

An international team of researchers led by University of Wisconsin–Madison biologists has assembled the largest evolutionary tree of scorpions yet, showing seven independent instances in which the distinctive eight-legged creatures evolved venom compounds toxic to mammals.

“The last major changes to their body shape, their morphology, happened about 430 million years ago, when they left the water and moved onto land,” says Carlos Santibáñez-López, a former postdoctoral researcher at UW–Madison and lead author of the new study published today in the journal Systematic Biology. “But we know now that they have evolved in very important ways much more recently.”

With the help of collaborators around the world, Santibáñez-López collected specimens representing 100 scorpion species and extracted from their venom glands samples of RNA, a strip of instructions transcribed from DNA that tells cells which proteins (like venom) to make. By collecting the RNA shortly after the scorpions killed an insect meal, Santibáñez-López was able to focus on the genes actively making toxins while the scorpion replenished its venom supply.

Building a family tree based on the differences in venom, the researchers could see that while scorpions had split into two major families around 300 million years ago — Buthidae and Iuridae, which would give rise to the 22 families of modern scorpions — that division came long before any scorpion evolved venom toxins that target mammals. And for good reason. There weren’t any mammals to speak of.
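The article doesn't detail the tree-building method, but distance-based clustering such as UPGMA illustrates the general approach behind building a tree from pairwise differences. The species labels and "venom distances" below are invented:

```python
def upgma(labels, d):
    """Average-linkage (UPGMA) clustering.

    labels: list of leaf names; d: {frozenset({a, b}): distance} for every
    pair of labels. Returns the tree as nested 2-tuples.
    """
    sizes = {l: 1 for l in labels}
    d = dict(d)
    while len(sizes) > 1:
        pair = min(d, key=d.get)          # closest pair of active clusters
        a, b = tuple(pair)
        merged = (a, b)
        for c in list(sizes):
            if c in (a, b):
                continue
            # size-weighted average distance from the new cluster to c
            dac = d.pop(frozenset({a, c}))
            dbc = d.pop(frozenset({b, c}))
            d[frozenset({merged, c})] = (
                (sizes[a] * dac + sizes[b] * dbc) / (sizes[a] + sizes[b])
            )
        del d[pair]
        sizes[merged] = sizes.pop(a) + sizes.pop(b)
    return next(iter(sizes))

# Invented pairwise "venom distances" between three stand-in lineages.
dists = {
    frozenset({"buthid_A", "buthid_B"}): 0.1,
    frozenset({"buthid_A", "iurid"}): 0.9,
    frozenset({"buthid_B", "iurid"}): 0.8,
}
tree = upgma(["buthid_A", "buthid_B", "iurid"], dists)
print(tree)  # the two buthids cluster together before joining the iurid
```

With the small invented distances between the two buthids, the algorithm groups them first, mirroring how small sequence differences place species on nearby branches.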

Closer to 70 million years ago, during the dawn of the age of mammals, the Cenozoic era, new animals like shrews — and later bats, rodents, mongooses and badgers — would develop a taste for scorpions. But the scorpions had a few tricks up their curved tails.

“We found that when these toxic scorpions diverge from their relatives, it correlates with the appearance of the mammals that prey on them,” says Santibáñez-López, now a professor at Western Connecticut State University. “It suggests that when the mammals that eat these scorpions appeared, the scorpions started developing these weapons to defend themselves.”

It didn’t hurt that the venom the scorpions used to disable the insects they ate was not that far, chemically speaking, from toxins that would work on their rising predators.

“The toolkit was there,” says Prashant Sharma, study co-author and a UW–Madison professor of integrative biology. “They had an available pool of genes to draw from that were making toxins that could target insect nervous systems. It didn’t take much change to adapt those genes to make toxins that target specific functions in mammal nerve cells.”

The breadth of the new genomic data is such that the researchers can follow the stages of development as scorpions grew more dangerous to mammals.

“We think we’ve caught, through this large data set, evidence for that stepwise acquisition of defenses,” says Sharma, whose work is supported by the National Science Foundation. “They go from scorpions that have an insect-specific toxin that’s used for prey capture, to something that at some point had aspects toxic to both insects and mammals, to finally mammal-specific toxins used as a deterrent, essentially, as a way to keep predators at bay.”

Even before that happened, scorpions had diverged into a wide range of species across much of the planet. But, according to the new genomic analysis, mammal-specific toxins evolved independently in five separate branches of the Buthidae family alone.

The distinctions are strong enough that they may help reorganize much of scorpion taxonomy. They may also help advance burgeoning research on the human medical applications of scorpion venom compounds.

Scientists have identified scorpion toxins with pharmaceutical potential, such as antimicrobial, anti-inflammatory and anti-tumor properties. Scorpion venom compounds that bind to tumor cells can be coupled with a glowing protein and used as “tumor paint,” guiding surgeons as they remove cancerous masses from patients.

The catch is, most researchers are limited to studying just a few scorpion species that are handy to them. The new study, grounded in the differences in toxic compounds between scorpion species, could open a world of venom chemistry for clinical research.

“Now those labs will have this running library of all the genes that are being expressed in 100 different species, and they can study organisms based upon what they have available in their venom glands, rather than just because they happen to be outside at the moment,” Sharma says. “We hope that will accelerate this kind of translational biology and the search for biomedical applications.”

This research was supported by grants from the National Science Foundation (IOS-1552610, 2013/50297-0 and DOB-1343578) and the National Geographic Society Expeditions Council.

###

— Chris Barncard, barncard@wisc.edu



from ScienceBlog.com https://ift.tt/EcTsfS8
