Researchers unlocking potential for next-generation medical scanning
January 5, 2018, University of York
Researchers have developed a new way to magnetise molecules found naturally in the human body, paving the way for a new generation of low-cost magnetic resonance imaging (MRI) technology that would transform our ability to diagnose and treat diseases including cancer, diabetes and dementia.
While still in the early stages, research reported today in the journal Science Advances has made significant steps towards a new MRI method with the potential to enable doctors to personalise life-saving medical treatments and allow real-time imaging to take place in locations such as operating theatres and GP practices.
MRI, which works by detecting the magnetism of molecules to create an image, is a crucial tool in medical diagnostics. However, current technology is not very efficient - a typical hospital scanner will effectively detect only one molecule in every 200,000, making it difficult to see the full picture of what's happening in the body.
Improved scanners are now being trialled in various countries, but because they operate in the same way as regular MRI scanners - using a superconducting magnet - these new models remain bulky and cost millions to buy.
The research team, based at the University of York, has discovered a way to make molecules more magnetic, and therefore more visible - an alternative method which could produce a new generation of low-cost and highly sensitive imaging techniques.
Professor Simon Duckett from the Centre for Hyperpolarisation in Magnetic Resonance at the University of York said: "What we think we have the potential to achieve with MRI could be compared to the improvements in computing power and performance over the last 40 years. While they are a vital diagnostic tool, current hospital scanners could be compared to the abacus; the recent development of more sensitive scanners takes us to Alan Turing's computer, and we are now attempting to create something scalable and low-cost that would bring us to the tablet or smartphone".
The research team has found a way to transfer the "invisible" magnetism of parahydrogen - a magnetic form of hydrogen gas - into an array of molecules that occur naturally in the body such as glucose, urea and pyruvate. Using ammonia as a carrier, the researchers have been able to "hyperpolarise" substances such as glucose without changing their chemical composition, which would risk them becoming toxic.
It is now theoretically possible that these magnetised, non-harmful substances could be injected into the body and visualised. Because the molecules have been hyperpolarised, there would be no need to use a superconducting magnet to detect them - smaller, cheaper magnets or even just the Earth's magnetic field would suffice.
If the method were to be successfully developed it could enable a molecular response to be seen in real time and the low-cost, nontoxic nature of the technique would introduce the possibility of regular and repeated scans for patients. These factors would improve the ability of the medical profession to monitor and personalise treatments, possibly resulting in more successful outcomes for individuals.
"In theory, it would provide an imaging technique that could be used in an operating theatre," added Duckett. "For example, when a surgeon extracts a brain tumour from a patient they aim to remove all the cancerous tissue while at the same time removing as little healthy tissue as possible. This technique could allow them to accurately visualise cancerous tissue at a far greater depth there and then."
The research also has the potential to bring MRI to countries in the developing world that don't have the uninterrupted power supplies or infrastructure to operate current scanners.
As well as its applications in medicine and general healthcare, the method could also provide benefits to the chemical and pharmaceutical industries in addition to environmental and molecular science.
Dr Peter Rayner, Research Associate at the University of York, said: "Our method reflects one of the most significant advances in magnetic resonance in the last decade".
Research Associate, Dr Wissam Iali added, "Given Magnetic Resonance Spectroscopy is of vital importance to the UK's chemical and pharmaceutical industries, I see significant opportunities for them to harness our approach to improve their competitiveness."
The study, "Using parahydrogen to hyperpolarize amines, amides, carboxylic acids, alcohols, phosphates and carbonates", is published in Science Advances.
More information: "Using parahydrogen to hyperpolarize amines, amides, carboxylic acids, alcohols, phosphates, and carbonates" Science Advances (2018). advances.sciencemag.org/content/4/1/eaao6250
Journal reference: Science Advances
Provided by: University of York
Want To Live Longer? Here's What You Need To Know About Longevity
January 4, 2018 — 10:40 AM
As the director of the University of Southern California’s (USC) Longevity Institute and the mind behind the ProLon Fasting-Mimicking Diet, Dr. Valter Longo is one of the world’s premier experts on health and longevity. A biochemist by training, he studies the fundamental mechanisms of aging so we can truly understand what’s happening in the body—and how to slow it down. His new book, The Longevity Diet, aims to teach us all how to eat and live for a long, active life.
Most people are discouraged and often confused by nutritional news. Nutrient groups (fats, proteins, and carbohydrates) and specific foods like eggs and coffee have all been described in scientific journals and the media as both good and bad for you. How do you decide what's right for you and your health? In fact, proteins, fats, and carbohydrates can be considered both good and bad for you depending on type and consumption. For example, proteins are essential for normal function, yet high levels of protein, particularly from red meat and other animal sources, have been associated with an increased incidence of several diseases. So we need a better system to filter out the noise and extract beneficial dietary information.
This is why I formulated the "Five Pillars of Longevity." This method is based on my own studies and also on the studies of many other laboratories and clinicians. It uses five research areas to determine whether a nutrient or combination of nutrients is good or bad for health and to identify the ideal combination of foods for optimal longevity.
I believe that many popular strategies and diets are inappropriate or only partially correct because they are based on just one or two pillars. This is important because while one nutrient may be protective against one condition or disease, it can negatively affect another, or it can protect middle-age individuals but hurt the very young or the elderly. An example: In adults age 70 and below, eating a relatively high-calorie diet will in most cases lead to weight gain and an increase in the risk for developing certain diseases. Yet in individuals over age 70, the same diet and the consequent moderate weight gain can be protective against certain diseases and overall mortality. This is why it is important to follow the advice of someone who has an in-depth understanding of the complex relationship between nutrition, aging, and disease.
The Five Pillars of Longevity create a strong foundation for dietary recommendations and a filtering system to evaluate thousands of studies related to aging and disease while also minimizing the burden of dietary change. When dietary choices are based on all of the Five Pillars, they are unlikely to be contradicted or undergo major alterations as a consequence of new findings.
1. Basic research.
Without understanding how nutrients—such as proteins and sugars—affect cellular function, aging, age-dependent damage, and regeneration, it is difficult to determine the type and quantity of nutrients needed to optimize healthy longevity. Without animal studies to determine whether a diet can in fact extend longevity, in addition to having acute effects on general health, it is difficult to translate the basic discoveries to human interventions. As I mentioned earlier, I first started working with mice and humans in Walford’s lab, but I soon discovered that a far simpler unicellular organism, yeast, could help us identify the fundamental properties of organisms. These could then be applied to humans, furnishing information related to molecular aspects of longevity—in particular, the ones linked to evolutionary principles. Using yeast, we were able to generate the differential stress resistance and sensitization theories that served as the foundation for a number of clinical trials testing the effect of fasting-mimicking diets in combination with cancer therapies. This basic research is where every one of our studies begins.
2. Epidemiology.
This is the study of the causes and important risk factors for disease and other health-related conditions in defined populations. Studying population-based risk factors is crucial to testing hypotheses generated by basic research. For example, if you hypothesize that excess sugar promotes abdominal fat storage and resistance to insulin, epidemiological research should confirm that people who consume high quantities of sugars have a high waist circumference and an increased risk for diabetes. After my initial focus on the genetics of aging, I carried out epidemiological studies related to aging and diseases, which taught me the tremendous value of understanding the health consequences of behavior in large populations.
3. Clinical studies.
Hypotheses formulated in basic and epidemiological studies eventually must be tested in randomized, controlled clinical trials. This is the gold standard to demonstrate efficacy. For example, a group of prediabetic subjects would be instructed to consume fewer sugars but otherwise maintain the same diet and calorie intake as before. The control group would be asked to maintain the same diet or reduce the intake of fat to match the calorie reduction in the reduced-sugar group. Understanding the importance of this pillar grew out of my own randomized clinical trials, and those of many others, testing the effect of a particular dietary component on risk factors for disease, such as cholesterol or fasting glucose levels, but also on a disease itself, such as cardiovascular disease.
4. Centenarian studies.
Once the data from basic, epidemiological, and clinical studies is available, there is still uncertainty about whether a specific diet or nutritional intervention is in fact safe and beneficial after long-term use, and whether it is palatable enough for people not just to adopt it but to stick with it for the rest of their lives. Studies of various centenarian populations from around the world provide long-term evidence of the safety, efficacy, and compliance associated with a particular diet (for example, a low-sugar diet). To generate data for the fourth pillar, I have studied long-lived populations in Ecuador and southern Italy and consulted the work of my colleagues focusing on other very long-lived populations in high-longevity zones around the world.
5. Studies of complex systems.
This pillar is the result of my fascination with reductionism, physics, and the need to simplify the human body’s complexity by identifying complex machines that can serve as models to teach us about the function and loss of function of human organs and systems. This last pillar can complement the others by providing reference points and useful analogies. For example, above I discuss how sugars can lead to disease. But sugars are also the most important nutrient for the human body. Sugar is to the body what gasoline is to a car—the central source of energy. So sugars are not the problem. It’s the intake of excessive quantities of sugar, in combination with proteins and certain types of fats, that contributes to disease both directly and indirectly—by activating aging-related genes, creating insulin resistance, and triggering hyperglycemia. This last pillar furthers the analysis of a human problem by taking an engineering approach to generate a relatively simple model to understand the complex interactions between food, cellular damage, and aging.
Reprinted from The Longevity Diet by arrangement with Avery, a member of Penguin Group (USA) LLC, a Penguin Random House Company. Copyright © 2018, Valter Longo.
Valter Longo is the director of the Longevity Institute at USC in Los Angeles, and of the Program on Longevity and Cancer at IFOM (Molecular Oncology FIRC Institute) in Milan.
Will artificial intelligence become conscious?
December 22, 2017
By Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University
Forget about today’s modest incremental advances in artificial intelligence, such as the increasing abilities of cars to drive themselves. Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry — and even keep humans company when other people aren’t nearby.
A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia.
Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a "person" under law and be liable if its actions hurt someone, or if something goes wrong? To think of a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If so, they would represent the culmination of evolution.
As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is.
Is awareness enough?
Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could.
On the other hand, there are physicists and philosophers who say there’s something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculations.
Yet these are not the only views of what consciousness is, or whether machines could ever achieve it.
Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. Since it takes consciousness as a given and no attempt is made to derive it from physics, the Copenhagen Interpretation may be called the “big-C” view of consciousness, where it is a thing that exists by itself – although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger.
The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain.
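For readers who want the formalism behind "equally likely to survive or die", the cat's state before observation can be written in standard Dirac notation (a textbook sketch, not part of the original article):

```latex
% Before observation, the cat is in an equal superposition of outcomes:
\[
  |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr)
\]
% By the Born rule, each outcome has probability
% |1/\sqrt{2}|^2 = 1/2; the act of observation collapses
% the superposition to one definite outcome.
```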
The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness “little-C.” It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics.
Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta – in which consciousness is the fundamental basis of reality, on par with the physical universe.
Little-C, in contrast, is quite similar to Buddhism. Although the Buddha chose not to address the question of the nature of consciousness, his followers declared that mind and consciousness arise out of emptiness or nothingness.
Big-C and scientific discovery
Scientists are also exploring whether consciousness is always a computational process. Some scholars have argued that the creative moment is not at the end of a deliberate computation. For instance, dreams or visions are supposed to have inspired Elias Howe's 1845 design of the modern sewing machine, and August Kekulé's discovery of the structure of benzene in 1865.
A dramatic piece of evidence in favor of big-C consciousness existing all on its own is the life of the self-taught Indian mathematician Srinivasa Ramanujan, who died in 1920 at the age of 32. His notebook, which was lost and forgotten for about 50 years and published only in 1988, contains several thousand formulas, without proof, in different areas of mathematics that were well ahead of their time. Furthermore, the methods by which he found the formulas remain elusive. He himself claimed that they were revealed to him by a goddess while he was asleep.
The concept of big-C consciousness raises the questions of how it is related to matter, and how matter and mind mutually influence each other. Consciousness alone cannot make physical changes to the world, but perhaps it can change the probabilities in the evolution of quantum processes. The act of observation can freeze and even influence atoms’ movements, as Cornell physicists proved in 2015. This may very well be an explanation of how matter and mind interact.
Mind and self-organizing systems
It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain’s physical structure. If so, then current machines will come up short.
Scholars don’t know if adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it’s true that only biological machines can be sufficiently creative and flexible. But then that suggests people should – or soon will – start working on engineering new biological structures that are, or could become, conscious.
Reprinted with permission from The Conversation