# **Clinical Nutrition Recent Advances and Remaining Challenges**

Edited by Ina Bergheim Printed Edition of the Special Issue Published in *Nutrients*

www.mdpi.com/journal/nutrients

## **Clinical Nutrition: Recent Advances and Remaining Challenges**


Editor

**Ina Bergheim**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor* Ina Bergheim University of Vienna Vienna Austria

*Editorial Office* MDPI St. Alban-Anlage 66 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Nutrients* (ISSN 2072-6643) (available at: https://www.mdpi.com/journal/nutrients/special_issues/Clinical_Nutrition).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-4839-5 (Hbk) ISBN 978-3-0365-4840-1 (PDF)**

© 2022 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**



## **About the Editor**

#### **Ina Bergheim**

Prof. Dr. Ina Bergheim is a Full Professor for Molecular Nutritional Sciences at the Department of Nutritional Sciences at the University of Vienna, Austria. She is a nutritional scientist by training. The main aim of her research is to determine the molecular mechanisms underlying the development of alcohol-related and non-alcoholic fatty liver diseases, as well as aging-associated liver decline, with a particular focus on the interplay of nutrition and the intestinal barrier. Furthermore, she aims to develop new nutritional strategies, involving macronutrients but also secondary plant compounds, to target the molecular alterations underlying the development of alcohol-related and non-alcoholic liver diseases as well as aging-associated liver decline.

## **Preface to "Clinical Nutrition: Recent Advances and Remaining Challenges"**

In recent years, nutrition, often in combination with physical activity, has been acknowledged as a cornerstone in the prevention and, even more so, the treatment of many diseases. Indeed, nutritional intake and dietary patterns are often regarded as a main source of wellbeing, yet the impact of nutritional intake, dietary patterns, and food-derived natural compounds on human health is at times overestimated and at times underestimated.

This book is a collection of review articles published within the Special Issue "Clinical Nutrition: Recent Advances and Remaining Challenges" of the journal *Nutrients*, and it aims to provide a broad overview and summary of recent findings and advances in various fields of clinical nutrition. A special focus is placed on new dietary approaches in the prevention and treatment of metabolic diseases, including obesity, non-alcoholic fatty liver disease, and type 2 diabetes, but also on inflammatory bowel diseases as well as celiac disease and intestinal fibrosis in children and adults. Furthermore, new strategies in the prevention and treatment of malnutrition in the elderly, in adult patients with liver cirrhosis, and in children with cholestasis are presented and discussed. Additionally, recent findings and novel dietary approaches in the treatment of patients in intensive care and palliative care, respectively, as well as of patients undergoing gastrointestinal surgery, are summarized. Moreover, the nutritional impact on the development and treatment of kidney stones, Brugada syndrome, and prostatic hyperplasia, as well as dietary approaches to their prevention, are discussed. By summarizing novel findings on the relation between nutrition and the development of these diseases, the present book aims to provide a better understanding of the impact, but also the limitations, of nutrition in disease development and treatment, and to highlight remaining questions and challenges in clinical nutrition.

> **Ina Bergheim** *Editor*

## *Review* **Nutrition Concepts for the Treatment of Obesity in Adults**

**Meike Wiechert and Christina Holzapfel \***

Institute for Nutritional Medicine, School of Medicine, Technical University of Munich, 80992 Munich, Germany; meike.wiechert@tum.de

**\*** Correspondence: christina.holzapfel@tum.de; Tel.: +49-89-289-249-23

**Abstract:** Obesity caused by a positive energy balance is a serious health burden. Studies have shown that obesity is a major risk factor for many diseases like type 2 diabetes mellitus, coronary heart diseases, or various types of cancer. Therefore, the prevention and treatment of increased body weight are key. Different evidence-based treatment approaches considering weight history, body mass index (BMI) category, and co-morbidities are available: lifestyle intervention, formula diet, drugs, and bariatric surgery. All treatment approaches require behaviour change techniques, a reduction in energy intake, and an increase in energy expenditure. Self-monitoring of diet and physical activity provides an effective behaviour change technique for weight management. Digital tools increase engagement rates for self-monitoring and have the potential to improve weight management. The objective of this narrative review is to summarize currently available treatment approaches for obesity, to provide a selective overview of nutrition trends, and to give a scientific viewpoint on various nutrition concepts for weight loss.

**Keywords:** dietary recommendation; weight loss; overweight; intermittent fasting; carbohydrate; fat; protein

#### **1. The Challenge of Obesity**

Obesity is a complex, multifactorial, and largely preventable chronic disease defined as abnormal or excessive fat accumulation [1]. Body mass index (BMI), calculated as weight in kilograms divided by the square of height in metres (kg/m<sup>2</sup>), is currently the most widely used criterion for classifying obesity [1]. People with a BMI ≥ 25 kg/m<sup>2</sup> are classified as overweight, and a BMI ≥ 30 kg/m<sup>2</sup> is categorized as obese [1–3] (Table 1). Although the BMI captures the degree of overweight and obesity, abdominal obesity, measured mainly by waist circumference, is additionally associated with health risks (Table 1). Clinical practice and medical guidelines focus on BMI and waist circumference as simple, objective, and reproducible tools to measure weight status and abdominal obesity. However, the diagnosis of obesity should not be based on BMI alone, but rather on BMI together with other anthropometric and clinical parameters. Instrumental methods (e.g., bioimpedance analysis, dual-energy X-ray absorptiometry, and magnetic resonance imaging) to assess body composition and adipose tissue depots are available but are often time- and cost-intensive.
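The BMI arithmetic and cut-offs described above can be expressed as a small helper. This is a minimal sketch; the function names are ours, and the thresholds are the ones given in the text (≥ 25 overweight, ≥ 30 obese):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2


def weight_status(bmi_value: float) -> str:
    """Classify weight status using the cut-offs stated in the text."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "below overweight cut-off"


# Example: 95 kg at 1.75 m gives a BMI of about 31.0, i.e., the obese category.
print(round(bmi(95, 1.75), 1), weight_status(bmi(95, 1.75)))
```

As the text notes, such a classification is a screening tool only; a diagnosis of obesity should additionally consider waist circumference and other anthropometric and clinical parameters.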

It is well known that overweight and obesity are the main risk factors for several diseases such as type 2 diabetes mellitus, hypertension, dyslipidaemia, cardiovascular disease, and several types of cancer [4]. Recently published data show that BMI is positively associated with severe coronavirus disease 2019 (COVID-19) outcomes [5]. Furthermore, an increased BMI might lead to a decline in quality of life and contribute to a decreased life expectancy [6–9].

**Citation:** Wiechert, M.; Holzapfel, C. Nutrition Concepts for the Treatment of Obesity in Adults. *Nutrients* **2022**, *14*, 169. https://doi.org/10.3390/ nu14010169

Academic Editor: Ina Bergheim

Received: 25 November 2021 Accepted: 27 December 2021 Published: 30 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).


**Table 1.** Body mass index and waist circumference: cut-offs and risk of co-morbidities.

The cause of obesity is a long-term energy imbalance caused by a combination of increased energy intake and reduced energy expenditure [1,6,10,11]. The National Health and Nutrition Examination Survey (NHANES) observed that from 1971 to 2000 the average daily energy intake increased by 168 kilocalories (kcal)/day in men and by 335 kcal/day in women [12]. Without any active regulation or adaptation of energy balance, this increase theoretically could explain a weight gain per year of 8 kg for men and 16 kg for women. Furthermore, energy expenditure has decreased over the last decades [12]. Bassett and colleagues reported that in 2003, an American adult walked about 5000 steps/day. Compared with people living 300 years ago, this is a difference of 13,000 steps/day for men and 9000 steps/day for women [13]. Without any physiological adaptation, this decline in physical activity could explain a yearly weight gain of 31 kg for men and 21 kg for women [10]. Daily occupation-related energy expenditure has decreased by more than 100 kcal/day in U.S. adults; that alone could explain a substantial weight gain in the population over the last five decades [14]. In addition to lifestyle factors, other contributing factors like food and environment have been identified [15].
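The back-of-the-envelope conversions in this paragraph can be reproduced with the common approximation that one kilogram of body fat corresponds to roughly 7700 kcal. This conversion factor is an assumption of ours, not stated in the text:

```python
KCAL_PER_KG_BODY_FAT = 7700  # common approximation; assumed, not given in the source


def yearly_weight_change_kg(daily_energy_surplus_kcal: float) -> float:
    """Theoretical weight change per year for a constant daily energy surplus,
    assuming no active regulation or adaptation of energy balance."""
    return daily_energy_surplus_kcal * 365 / KCAL_PER_KG_BODY_FAT


# NHANES intake increases from 1971 to 2000:
print(round(yearly_weight_change_kg(168), 1))  # men:   ~8.0 kg/year
print(round(yearly_weight_change_kg(335), 1))  # women: ~15.9 kg/year
```

With this factor, the reported 168 and 335 kcal/day intake increases map closely to the theoretical yearly gains of 8 kg and 16 kg quoted in the text, which is why real-world weight gain being far smaller points to physiological adaptation of energy balance.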

Over the past 50 years, the prevalence of obesity has increased worldwide in pandemic dimensions [3,6,16–18]. The Global Burden of Disease study, with data from 68.5 million persons, demonstrated that in 2015, 603.7 million adults were obese [19]. Since 1980, the prevalence of obesity has doubled in more than 70 countries and has continuously increased in most other countries [19]. If trends continue, by 2030, an estimated 38% of the world's adult population will be overweight, and another 20% will be obese [20]. The confinements during the COVID-19 pandemic have changed lifestyle behaviour and have promoted an obesogenic environment. Upward weight trajectories during the COVID-19 lockdown have been reported [21,22], and it is expected that the COVID-19 pandemic will reinforce the obesity pandemic, with long-term consequences for the prevalence of overweight and obesity [23]. Because of the rapid increase in the prevalence and disease burden of obesity, it is indispensable to focus on monitoring BMI and to identify, implement, and evaluate evidence-based interventions to address this health issue [19].

#### **2. Treatment Approaches**

Different evidence-based approaches considering weight history, BMI category, and co-morbidities to treat overweight and obesity are available: lifestyle intervention, formula diet, drugs, and bariatric surgery (Figure 1). The main aims of weight management are shown in Figure 2.

**Figure 1.** Treatment approaches.

**Figure 2.** Overview of weight management goals.

#### *2.1. Lifestyle Intervention*

Lifestyle interventions inducing a negative energy balance provide the basis for the treatment of overweight and obesity and are part of the standard recommendation. Different lifestyle approaches exist, with nutrition, physical activity, and behaviour as the main components. By lowering energy intake and increasing physical activity, accompanied by behavioural change techniques, a daily energy deficit of about 500 kcal is recommended for weight loss. This energy deficit can produce a moderate weight loss over one year. Energy balance changes with weight loss, making it necessary to adjust energy intake and expenditure during weight management: energy balance is dynamic, and weight loss leads to a new energy equilibrium at a lower level. Adherence to a lifestyle intervention is challenging for many people with overweight and obesity. In a systematic review and meta-analysis, three main factors were associated with improved adherence to weight loss interventions: supervision, social support, and focus on dietary intervention [24].

Nutrition is the main lifestyle factor. Therefore, nutritional aspects of decreasing energy intake and supporting weight management are highlighted in the following sections.

#### 2.1.1. Energy Intake

Key components of energy balance include energy intake, expenditure, and storage. When energy intake exceeds energy expenditure, a state of positive energy balance increases body weight. The European Food Safety Authority (EFSA) recommends a daily dietary reference intake of 45% to 65% of total kilocalories from carbohydrates, 20% to 35% from fat [25], and 0.83 g protein/kg body weight [26]. For weight loss, a daily energy deficit of 500 kcal is recommended and can be reached by avoiding energy-dense food. Fat is a high-energy macronutrient and provides more than twice the energy per gram of carbohydrates or protein. Because of this, a reduction in daily fat intake helps to lower daily caloric intake. Fat intake can be reduced by using low-fat dairy products like cheese and yogurt, choosing lean meat, and avoiding hidden fats. In Table 2, recommendations and practical examples for decreasing energy intake are shown.
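The energy-density argument can be made concrete with the standard Atwater factors (about 4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). These factors are textbook approximations, not given in the text, and the function is an illustrative sketch:

```python
# Atwater factors in kcal per gram (standard approximations; not from the source)
KCAL_PER_GRAM = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}


def energy_kcal(carb_g: float, protein_g: float, fat_g: float) -> float:
    """Energy content of a meal computed from its macronutrient masses."""
    return (carb_g * KCAL_PER_GRAM["carbohydrate"]
            + protein_g * KCAL_PER_GRAM["protein"]
            + fat_g * KCAL_PER_GRAM["fat"])


# Swapping 20 g of fat for 20 g of carbohydrate in an otherwise identical
# meal saves 20 * (9 - 4) = 100 kcal:
saving = energy_kcal(50, 20, 30) - energy_kcal(70, 20, 10)
print(saving)  # 100.0
```

The example illustrates why cutting fat is an efficient lever for reducing energy density: gram for gram, fat carries more than twice the energy of the other macronutrients.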


**Table 2.** Practical recommendations and examples to reduce energy intake.

#### 2.1.2. Macronutrients

Studies have shown that it is not the macronutrient composition of the diet but its energy content that is relevant for weight management [27,28].

Low carb diets often provide approximately 40% of daily energy from carbohydrates. The lowest carbohydrate intake is part of a ketogenic diet, where the aim is to minimize carbohydrate intake as much as possible. Epidemiological data showed that a daily carbohydrate share of 50 to 55% of energy correlates with the lowest mortality rate; both low carb and high carb diets increase the mortality risk (U-shaped association) [29]. A low carb diet includes a lower amount of plant-based food, which has health-promoting effects. A meta-analysis of eight randomized controlled studies concluded that low carb diets are superior to diets with a low amount of fat regarding lipid metabolism in people with overweight and obesity [30]. However, data from NHANES indicate that realized decreases in the percentage of energy consumed from fat were associated with increased total energy intake, caused by compensatory over-consumption of energy from sugars [31]. The stone age diet, or paleo diet, is a low carb diet, but it is also not clearly defined. In general, it is a diet with a high amount of meat and protein, and the variety of foods is limited by the exclusion of grains. Smaller short-term intervention studies with methodological limitations have examined the effects of a paleo-conform diet. Manheimer et al. evaluated data from four randomized controlled trials (RCTs) with 159 participants that compared the palaeolithic diet with other dietary patterns. Results showed that palaeolithic nutrition resulted in stronger short-term improvements of cardiovascular risk factors, like waist circumference and blood pressure, than control diets [32]. In 70 post-menopausal women with obesity, it has been shown that there was no significant difference in anthropometric changes between a palaeolithic-type diet and the Nordic nutrition recommendations after 24 months [33].

An evaluation of almost 50 studies found that the participants, regardless of the macronutrient composition of the diet, lost the same amount of weight within 6 and 12 months of intervention [34]. In another study, 811 persons were randomized into four diet groups with different energy shares from fat, protein, and carbohydrates (20, 15, and 65%; 20, 25, and 55%; 40, 15, and 45%; 40, 25, and 35%, respectively). After two years of intervention, the weight loss was about 4 kg (completers-analysis), with no significant differences between the groups [28]. Furthermore, the comparison of three different diets (low fat/low energy diet; Mediterranean/low energy diet; low carb/non-energy-reduced diet) resulted in similar findings: mean weight loss after 2 years of intervention was 3.3, 4.6, and 5.5 kg, respectively (completers-analysis) [27]. In a study with 609 adults with a BMI between 28 and 40 kg/m<sup>2</sup>, the mean weight loss was about 6.0 kg (low carb diet) and 5.3 kg (low fat diet) after one year of intervention [35]. A systematic review of systematic reviews comparing low carb diets with control diets on weight loss raised concerns about the low quality of the studies and concluded that better-quality reviews and RCTs are needed before low carb diets can clearly be recommended over other energy-reduced diets [36]. Even with the plant-based form of the Atkins diet or the Mediterranean diet, there is a moderate weight loss [27,37,38]. A meta-analysis indicates that an energy-reduced Mediterranean diet leads to moderate weight loss [39].

In summary, the macronutrient composition of a diet has no major impact on weight loss. Low carb as well as low fat concepts are effective for weight loss, provided a negative energy balance is achieved.

A meal replacement is a high protein product used to replace at least one main meal per day. These products are permitted to be marketed for weight management purposes and have specific regulatory requirements concerning supplementation with vitamins, minerals, and trace elements, as well as energy content per portion. They are available, e.g., as shakes, soups, or meal bars. A meal replacement strategy followed by a dietary change and behaviour modification strategy is popular among people trying to lose weight. One option is a very low calorie diet (VLCD) with < 800 kcal/day by total meal replacement. The other option is a low calorie diet (LCD) supplying > 800 kcal/day, generally in the range of 1200–1600 kcal/day. In a systematic review and meta-analysis on VLCDs, total weight loss ranged from 8.9 to 15 kg in persons with type 2 diabetes mellitus and 7.9 to 21 kg in persons without diabetes, over treatment durations of 4 to 52 weeks. Study duration did not appear to influence overall weight loss; the average weight loss per week was about 0.5 kg [40]. Another review investigated the effect of weight loss interventions incorporating meal replacement, compared with alternative interventions, on weight change at 1 year in adults with overweight or obesity. In this review, studies with diets providing < 800 kcal/day, and with total diet replacement, were excluded. In general, all diets incorporating meal replacement resulted in a higher mean weight change at 1 year compared to the control groups or alternative diets [41]. The Diabetes Remission Clinical Trial (DiRECT) of 306 patients with type 2 diabetes mellitus demonstrated that diet-induced weight loss by total diet replacement (825–853 kcal/day formula diet for 3–5 months), followed by food reintroduction (2–8 weeks) and structured support for long-term weight loss maintenance, effectively reversed type 2 diabetes mellitus. At 12 months, 86% of the participants who achieved a weight loss of >15 kg (24% of the intervention group) were drug-free and in remission of type 2 diabetes mellitus. Overall remission of type 2 diabetes mellitus in the intervention group was observed in 46% of patients after 1 year and in 36% after 2 years [42,43].

#### 2.1.3. Intermittent Fasting

There are several approaches to intermittent fasting. The 16:8 concept is a form of time-restricted eating in which individuals eat within a time window of 8 h and fast for the remaining 16 h daily. The 5:2 concept consists of a normal eating routine on 5 days per week, without any specific recommendations or restrictions, and 2 days of fasting with an energy intake of about 500 kcal. Conley et al. compared the 5:2 diet (2 non-consecutive days with 600 kcal and 5 days of ad libitum energy intake per week) with an energy-reduced diet. After 6 months of intervention, both groups had reduced their body weight similarly (5.3 kg (5:2) vs. 5.5 kg (standard)), with no significant difference [44]. An RCT compared the effects of alternate-day fasting with daily caloric restriction on body weight in participants with obesity. Findings demonstrate comparable weight loss after 6 months (alternate-day fasting: 6.8%, caloric restriction: 6.8%) and 12 months (6.0% versus 5.3%) [45]. In a systematic review and meta-analysis of RCTs, intermittent versus continuous energy restriction was investigated with regard to weight loss and cardio-metabolic outcomes. The eleven included trials, with durations of 8 to 24 weeks, showed similar weight loss in the two intervention groups [46]. Compared to continuous energy restriction, intermittent fasting leads to similar weight loss and similar improvement of cardio-metabolic parameters [47–49]. The recently published Cochrane review by Allaf et al. found that people lost more weight with intermittent fasting concepts than without a special dietary concept over three months (evidence from seven studies in 224 people). If intermittent fasting concepts were compared with energy-restricted diets for 3 months (10 studies; 719 people) or longer (3 to 12 months; 4 studies; 279 people), this difference in weight loss disappeared [50]. Hence, it is the energy restriction that causes the positive effect of intermittent fasting on weight loss, not fasting as a stand-alone intervention [51,52].
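The two fasting schedules described above can be sketched as simple predicates. This is illustrative only; the eating-window start time and the choice of fasting weekdays are our assumptions, not prescribed by the concepts:

```python
def in_eating_window_16_8(hour: int, window_start: int = 12) -> bool:
    """16:8 time-restricted eating: intake only within an 8-hour daily window
    (here assumed to start at 12:00)."""
    return window_start <= hour < window_start + 8


def is_fasting_day_5_2(weekday: int, fasting_days: tuple = (1, 4)) -> bool:
    """5:2 fasting: two non-consecutive low-energy days per week
    (weekday: 0 = Monday ... 6 = Sunday; here Tuesday and Friday are assumed)."""
    return weekday in fasting_days


print(in_eating_window_16_8(14))  # True: 2 p.m. lies inside a 12:00-20:00 window
print(is_fasting_day_5_2(2))      # False: Wednesday is an ad libitum day here
```

Note that neither predicate restricts energy intake itself; as the text concludes, it is the resulting energy restriction, not the schedule, that drives weight loss.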

Besides weight loss, fasting-specific effects on metabolic regulation or cardiovascular health are discussed. In lean persons, there is no statistically significant difference between daily energy restriction and alternate-day fasting with or without energy restriction concerning postprandial indices of cardio-metabolic health, gut hormones, or the gene expression in subcutaneous adipose tissue [51].

In a small study with eleven participants with overweight, early time-restricted feeding (eating between 8 a.m. and 2 p.m.) was investigated for acute effects on glucose metabolism and gene expression. In comparison to the control group (eating between 8 a.m. and 8 p.m.), 24 h glucose levels and glycaemic excursions decreased, while ketones, cholesterol, and the expression of the stress-response and aging gene sirtuin 1 (SIRT1) and the autophagy gene microtubule-associated protein 1 light chain 3 alpha (LC3A) increased in the morning before breakfast; this differed from the gene expression pattern in the evening [53]. The effect of early time-restricted feeding on cardio-metabolic health (insulin sensitivity, beta-cell responsiveness, blood pressure, oxidative stress, and appetite), independent of weight loss, has further been observed by Sutton et al. in men with prediabetes [54]. In addition, an RCT with 17 participants with normal weight compared the metabolic effects of breakfast and dinner skipping. Compared to the three-meals-per-day control group, skipping breakfast or dinner increased energy expenditure; furthermore, fat oxidation increased on the breakfast-skipping day [55]. In a prospective cohort study, it has been shown that breakfast skipping is associated with a 21% increased risk for the development of type 2 diabetes mellitus [56].

#### 2.1.4. Personalized Nutrition

In recent years, concepts of personalized nutrition have received increasing attention, especially from commercial providers offering direct-to-consumer genetic testing. One of the main drivers for personalized dietary recommendations is the inter-individual variability of the metabolic response to standardized meal challenges, suggesting that personalized diets might successfully modify elevated postprandial blood glucose and its metabolic consequences [57]. In the Personalised Responses to Dietary Composition Trial (PREDICT1), with more than 1000 twins and unrelated healthy adults in the UK, large inter-individual variability in postprandial responses of blood triglycerides (103%), glucose (68%), and insulin (59%) following identical meals was observed, and various intrinsic and extrinsic factors could be identified as predictors of this variability. In the following, the scientific evidence for gene-based and microbiome-based dietary recommendations is summarized.

#### Gene-Based Dietary Recommendations

Gene-based dietary recommendations are based on an individual's genetic make-up. The fact that body weight is, inter alia, genetically determined, and that hundreds of genetic loci have been identified as related to anthropometric parameters [58], underlines the assumption that the inter-individual variation in weight loss success also has a genetic component [59]. The *fat mass and obesity-associated* (*FTO*) gene is the gene with the largest effect on body weight. The function of the *FTO* gene is not yet fully understood, although it has been shown that the *FTO* gene inhibits brown adipose tissue genesis [60]. Numerous companies offer genotyping and provide recommendations for a healthy diet or weight loss. Furthermore, commercial offerings for deoxyribonucleic acid (DNA) methylation profiling have started to emerge. These commercially available direct-to-consumer tests stand in contrast to the lacking scientific evidence that genotypes are associated with weight loss.

In a recently published pooled analysis of weight loss data, it has been shown that single nucleotide polymorphisms play a minor role in the inter-individual variation of weight loss [61]. The Food4Me study has shown that including genotype information in dietary recommendations had no beneficial effect on weight loss [62]. The American Society of Dietetics and Nutrition clearly states that "No significant differences in weight, body mass index (BMI; calculated as kg/m<sup>2</sup>), or waist circumference were observed when results of genetic testing were incorporated into nutrition counselling compared with counselling or care that did not incorporate genetic results" [63], and that the "use of nutrigenetic testing to provide dietary advice is not ready for routine dietetics practice" [64]. Present research cannot provide adequate evidence that individuals with a defined genetic make-up benefit especially from specific dietary recommendations concerning weight loss [65]. A systematic review on gene–diet interactions and weight change concluded that there is no evidence that gene–diet interactions are a main determinant of obesity treatment success [66]. For that reason, further human studies are required to establish the clinical evidence of gene-based dietary recommendations for weight loss [59]. Furthermore, the investigation of single nucleotide polymorphisms will be replaced by the investigation of polygenic scores to characterize humans according to their genetic predisposition. Khera et al. have developed and validated genome-wide polygenic scores for five diseases (coronary heart disease, atrial fibrillation, type 2 diabetes mellitus, inflammatory bowel disease, and breast cancer) [67]. In a further data analysis, a polygenic predictor for body weight has been developed and validated [68].

#### Microbiome-Based Dietary Recommendations

Various correlations between the gut microbiota and individuals' nutrition, as well as the occurrence of diseases like obesity, have been shown [59]. These correlations indicate that personalized nutrition based on the microbiome is a further starting point for weight loss [59]. Increasing evidence suggests that changes in an individual's microbiome during dietary intervention are person-specific, and that this heterogeneity, in addition to the individual's physiology, is due to a unique microbiome signature [69]. The integration of microbiome information in combination with other person-specific factors seems to have the potential to improve the understanding of these complex interactions [70]. Von Schwarzenberg et al. demonstrated that a VLCD led to a decrease in bacterial abundance and a restructuring of the gut microbiome. After this VLCD, the microbiota were transplanted into mice, which led to a decrease in body weight. This study reveals that diet–microbiome interactions modulate energy metabolism [71]. Independent of weight loss, there is no doubt that dietary intake influences gut microbiota structure [72]. In a literature review, the direct and indirect mechanisms behind the influence of the gut microbiome have been discussed. The composition of the microbiota, the presence of specific microbes, and their metabolic activity have to be considered in future human intervention studies to investigate the potential of targeting the microbiota for improving health [73]. Similar to the genetic direct-to-consumer tests, some companies offer microbiome analyses for personalized dietary recommendations, although scientific evidence is still lacking. The current knowledge about the microbiome's role in diet-mediated effects on health is too limited to provide evidence for microbiome-based dietary recommendations [74].

#### 2.1.5. Weight Loss Programs

Multidisciplinary weight loss programs addressing nutrition, physical activity, and behaviour are available. Most of the programs are delivered in a single country or are rolled out regionally, e.g., by health insurers, health care providers, or companies. The largest global weight loss program is WW (formerly Weight Watchers). An international study has shown that the WW program resulted in a moderate weight loss at 12 and 24 months: mean weight change at the 12-month assessment was 6.65 kg (completers-analysis) [75], and mean weight loss after 2 years was 4.76 kg [76]. Furthermore, Johnston et al. have shown that persons using all provided intervention tools (weekly meetings, WW mobile application, and WW online tools) lose more weight than persons who did not use all three ways to access the treatment [77]. Another well-known program is OPTIFAST, which also provides scientific evidence for efficacy after 12 months. The 12-month OPTIFAST concept includes total dietary replacement for three months, followed by lifestyle recommendations and professional group sessions for a further nine months. Comparing the effectiveness of the OPTIFAST program with a food-based dietary plan resulted in 10.5% versus 5.5% weight loss at 52 weeks [78]. In this study, there was an active weight maintenance phase in which meal replacement was allowed.

#### 2.1.6. Support

Self-monitoring of diet and physical activity provides an effective behaviour change technique for weight management [79] and is a core component of behavioural obesity treatment [80]. It has been demonstrated that dietary self-monitoring itself, as well as the frequency of self-monitoring, is linked to weight loss [81]. Furthermore, self-monitoring tends to positively impact weight loss when combined with other self-regulation techniques, such as goal setting and feedback [82–84]. Engagement rates for dietary self-monitoring were higher in digital than in paper-based self-monitoring [81,85].

Digital tools like online tools and applications (apps), tracking technologies, or even internet-based support have become attractive for teaching and supporting long-term behaviour change techniques [86]. In an RCT, Carter et al. examined the acceptability and feasibility of a self-monitoring weight management intervention delivered by a smartphone app compared to a website and a paper diary. Results showed a mean weight change at 6 months of 4.6 kg in the smartphone app group, 2.9 kg in the paper diary group, and 1.3 kg in the website group; additionally, the app was rated highly for satisfaction and acceptability [85]. Weight management apps may have positive effects on weight-related outcomes, although the methodological quality of many studies is low [87,88]. A meta-analysis by Villinger et al. with more than 6300 participants showed that app-based mobile interventions can be effective in changing nutrition behaviours and nutrition-related health outcomes [89]. Digital tools like apps are a time- and cost-effective method for the collection of health-related data, with the potential for wide distribution and scalability [85,90,91]. Although many apps are available for weight loss, digital offerings for weight management rarely include evidence-based strategies for behaviour change [92,93].

#### **3. Drugs**

Several agents are available for the pharmacologic therapy of obesity, acting by decreasing appetite, gastric emptying, or nutrient absorption, or by increasing satiety. Some of these have gained marketing authorization during the last six years, and others are still in development [94,95]. Currently, the European Medicines Agency (EMA) has authorized three drugs (orlistat, naltrexone/bupropion, and liraglutide), and the US Food and Drug Administration (FDA) has approved four drugs (orlistat, phentermine/topiramate, naltrexone/bupropion, and liraglutide) for obesity treatment [96]. The purpose of using pharmacotherapy to manage obesity is to increase patient adherence to lifestyle changes and to overcome the biological adaptations that occur with weight loss [97].

Increasing evidence has shown that behaviour-based interventions combined with an anti-obesity medication can result in greater weight loss than usual care alone [96]. The efficacy of available anti-obesity drugs is often limited to a reduction of 5–10% of body weight over a 1-year period, and weight loss typically does not continue beyond 6–8 months [98]. In a systematic review and network meta-analysis, five weight loss medications (orlistat, lorcaserin, naltrexone-bupropion, phentermine-topiramate, and liraglutide) were compared regarding efficacy on weight loss. In total, 28 RCTs with 29,018 patients were included. Participants in the placebo group had statistically significantly lower odds of achieving 5% weight loss after one year than participants taking the drugs. Weight loss in excess of placebo was 8.8 kg for phentermine-topiramate, 5.3 kg for liraglutide, 5.0 kg for naltrexone-bupropion, 3.2 kg for lorcaserin, and 2.6 kg for orlistat [99].

#### **4. Bariatric Surgery**

Bariatric surgery is appropriate for persons with severe obesity. Indications for bariatric surgery vary across countries; in most countries, access is restricted, e.g., to persons for whom other weight loss measures have failed. There are two primary mechanisms: restriction and malabsorption of ingested food. A restrictive approach physically limits the quantity of food that can be ingested by reducing the size and capacity of the stomach while leaving the remainder of the gastrointestinal tract intact. Malabsorption of calories and nutrients occurs when a portion of the gastrointestinal tract is bypassed or removed: kilocalories and nutrients are absorbed to a lesser extent because ingested food travels a shorter distance through the gut. The International Federation for the Surgery of Obesity and Metabolic Disorders (IFSO) has published an annual report of all bariatric surgery reported to the Global Registry [100]. Data from 51 countries with documented surgery between 2014 and 2018 were evaluated. The procedure carried out most frequently was sleeve gastrectomy (46.0%), followed by Roux-en-Y gastric bypass (38.2%), one-anastomosis gastric bypass (7.6%), and gastric banding (5.0%) [100]. In the systematic review by Puzziferri et al., 29 clinical studies with 7971 patients were evaluated; the main finding was that gastric bypass resulted in greater weight loss than the gastric band [101]. A further systematic review and meta-analysis showed that all current bariatric procedures are associated with significant weight loss, but more long-term data are needed for one-anastomosis gastric bypass and sleeve gastrectomy [102]. It is particularly worth mentioning that patients with higher adherence to visits and behaviour changes before surgery are more likely to

lose more weight after surgery [103]. In any case, good preparation before and after surgery is indispensable to ensure the best outcomes [104]. Several steps during the preoperative evaluation are necessary: assessing the individual's psychological fitness to undergo bariatric surgery, a professional nutritional assessment, and patient education to guide the patient towards the dietary modifications that are necessary after surgery [104]. Nutritional deficiencies are a long-term clinical issue in these patients owing to the modifications to the gastrointestinal tract [105]. The Clinical Practice Guidelines of the European Association for Endoscopic Surgery (EAES) recommend postoperative nutritional and behavioural advice for patients undergoing bariatric surgery [106]. Nutritional monitoring is an essential component of weight management after bariatric surgery to increase patients' adherence to healthy dietary habits and to appropriate supplementation measures [105]. In addition, monitoring reduces the risk of weight regain, makes it easier to detect possible nutritional deficiencies, and contributes to the preservation of a good quality of life [105].

#### **5. Conclusions**

This scientific viewpoint is a narrative review and not comparable with a systematic review, but it gives an overview of the various treatment approaches, which should be used and combined considering the individual's needs, preferences, weight status, and cardiometabolic risk factors. All treatment approaches must result in a negative energy balance: independent of the weight loss concept (e.g., intermittent fasting, low carb, low fat, drugs, or bariatric surgery), weight loss fails without a negative energy balance. Many trends, such as gene-based or microbiome-based dietary recommendations, still lack conclusive scientific evidence. In general, weight loss studies often have methodological limitations (e.g., study design or duration), so that results are not comparable and should therefore be interpreted with caution. With lifestyle changes, a moderate weight loss after one year is possible. Other approaches, such as bariatric surgery, lead to greater weight loss but are proven only for specific target groups. More research, especially long-term intervention studies, is needed to evaluate weight loss concepts and to obtain evidence-based tailored recommendations.

**Author Contributions:** M.W. and C.H. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the *enable* competence cluster, an interdisciplinary cluster of nutrition science, funded by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF). The manuscript was written by the research group "Personalized nutrition and eHealth" (grant number: 01EA1709). The *enable* publication number is 79.

**Acknowledgments:** The authors thank Teresa Hölzl for designing the figures.

**Conflicts of Interest:** M.W. declares no conflict of interest. C.H. is a member of the scientific advisory board of the 4sigma GmbH Oberhaching, Germany.

#### **References**


## *Review* **Endovascular Bariatric Surgery as Novel Minimally Invasive Technique for Weight Management in the Morbidly Obese: Review of the Literature**

**Giuseppe Massimo Sangiorgi 1,\*, Alberto Cereda 2, Nicola Porchetta 1, Daniela Benedetto 1, Andrea Matteucci 1, Michela Bonanni 1, Gaetano Chiricolo 1 and Antonino De Lorenzo 1**


**Abstract:** Nowadays, obesity represents one of the most unresolved global pandemics, posing a critical health issue in developed countries. According to the World Health Organization, its prevalence has tripled since 1975, reaching 13% of the world population in 2016. Indeed, as obesity increases worldwide, novel strategies to fight this condition are of the utmost importance to reduce obesity-related morbidity and overall mortality related to its complications. Early experimental and initial clinical data have suggested that endovascular bariatric surgery (EBS) may be a promising technique to reduce weight and hormonal imbalance in the obese population. Compared to open bariatric surgery and minimally invasive surgery (MIS), EBS is much less invasive, well tolerated, with a shorter recovery time, and is probably cost-saving. However, there are still several technical aspects to investigate before EBS can be routinely offered to all obese patients. Further prospective studies, and eventually a randomized trial comparing open bariatric surgery vs. EBS, are needed, powered for clinically relevant outcomes and with adequate follow-up. Yet, EBS may already appear as an appealing alternative treatment for weight management and cardiovascular prevention in morbidly obese patients at high surgical risk.

**Keywords:** bariatric surgery; cardiovascular disease; endovascular bariatric surgery; obesity; prevention

#### **1. Introduction**

Nowadays, obesity represents one of the most unresolved global pandemics, posing a critical health issue in developed countries. According to the World Health Organization, its prevalence has tripled since 1975, reaching 13% of the world population in 2016. Obesity is defined as a body mass index (BMI) greater than 30 kg/m2, while morbid obesity is defined as a BMI > 40 kg/m2. Numerous comorbidities such as major stroke, acute myocardial infarction, hypertension, type 2 diabetes, hyperlipidemia, and obstructive sleep apnea are strongly associated with this disease, as is increased all-cause mortality [1]. As a result, approximately 2.8 million deaths per year may occur in adult populations affected by obesity.
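The BMI cut-offs used above are simple to compute. As an illustrative sketch (function names and the example values are ours, not from any cited source), the definition and the categories used in this review can be expressed as:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    """Map a BMI value to the obesity categories used in this review."""
    if value > 40:
        return "morbid obesity"          # BMI > 40 kg/m2
    if value > 30:
        return "obesity"                 # BMI > 30 kg/m2
    return "below obesity threshold"

# A hypothetical person of 128 kg and 1.75 m has a BMI of about 41.8 kg/m2,
# i.e., morbid obesity by the definition above.
print(round(bmi(128, 1.75), 1), category(bmi(128, 1.75)))
```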

Given the tremendous impact on public health, with an attributable cost of nearly 150 billion dollars per year in the United States, obesity treatment has become of critical importance for the healthcare system, the medical community, and policymakers [2].

**Citation:** Sangiorgi, G.M.; Cereda, A.; Porchetta, N.; Benedetto, D.; Matteucci, A.; Bonanni, M.; Chiricolo, G.; De Lorenzo, A. Endovascular Bariatric Surgery as Novel Minimally Invasive Technique for Weight Management in the Morbidly Obese: Review of the Literature. *Nutrients* **2021**, *13*, 2541. https://doi.org/10.3390/nu13082541

Academic Editor: Ina Bergheim

Received: 30 May 2021; Accepted: 21 July 2021; Published: 25 July 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The cornerstone of obesity treatment is behavioral modification (i.e., diet and physical exercise), ideally in a highly motivated patient who should be followed by a multidisciplinary team of healthcare professionals. When successful, this strategy permits a modest and durable weight reduction of 5% to 10% [3]; however, the long-term efficacy of all behavioral therapies is limited. For those unable to reach these goals, a few drugs (orlistat, lorcaserin, phentermine/topiramate, naltrexone/bupropion, semaglutide, and liraglutide) are available as adjuvant therapy, but in general these are not free of side effects (usually dose-dependent), suffer from limited adherence (frequently due to arbitrary discontinuation of the drug), and achieve suboptimal weight reduction. Recently, non-surgical endoscopic bariatric therapies such as intragastric balloons, endoscopic gastric plication, and the endoluminal duodenal-jejunal sleeve have been offered to patients unwilling to undergo conventional bariatric surgery. However, potentially severe complications (gastric perforation, bowel obstruction, and gastrointestinal bleeding) have been reported with these techniques, and for this reason they are currently performed only in highly experienced centers.

Surgical approaches, mainly represented by Roux-en-Y gastric bypass, adjustable gastric banding, sleeve gastrectomy, and biliopancreatic diversion (Figure 1), are reserved for morbidly obese individuals or obese individuals with one or more obesity-related comorbidities (or at a lower BMI threshold for uncontrolled diabetes) who have not been able to reach the aforementioned goals with behavioral modifications and drug therapy.

**Figure 1.** Overview of different bariatric surgery options. The most effective options in weight reduction are the Roux-en-Y gastric bypass and sleeve gastrectomy. Arrow indicates an increase in efficacy.

Well-known short- and long-term complications, although uncommon, including bleeding, infections, deep venous thrombosis, gastric dumping syndrome, and internal hernia, have been reported with the different surgical techniques.

Another frequent eventuality is post-bariatric surgery anemia: in most cases it is due to iron deficiency, with vitamin B12 deficiency as a secondary cause. Iron deficiency manifests as low serum ferritin and occurs because of reduced iron absorption secondary to hypochlorhydria and the bypassing of the duodenum and proximal jejunum [4]. In addition to anemia, vitamin B12 deficiency (resulting from inadequate secretion of intrinsic factor, limited gastric acidity, and the bypassing of the duodenum) can lead to neurological disorders [4]. In the absence of adequate vitamin B12 supplementation, up to 30% of patients will be unable to maintain normal plasma B12 levels at 1 year [5].

Bariatric surgery also results in calcium/vitamin D malabsorption (from bypassing the duodenum and proximal jejunum) with secondary hyperparathyroidism, changes in fat mass, and alterations in fat- and gut-derived hormones; the final effect is accelerated bone loss [4]. Patients affected by secondary hyperparathyroidism should obtain bone benefits from oral supplementation of vitamin D [5,6]. In fact, the European Association for Endoscopic Surgery (EAES) clinical practice guidelines on bariatric surgery strongly recommend vitamin D supplementation post-surgery because the anticipated benefits outweigh the potential risks of vitamin therapy [7].

Poor protein digestion and absorption, secondary to altered biliary and pancreatic function, is involved in protein malnutrition and can be observed after bariatric surgery [4]; albumin levels can be considered a marker of protein deficiency [6].

Low serum levels of the fat-soluble vitamins A, E, and K usually occur after bariatric procedures [4,5].

As a "treatment gap" option, a novel minimally invasive procedure, i.e., endovascular embolization of the arterial supply of the gastric fundus (endovascular bariatric surgery, EBS), has been proposed as a supplementary technique, with successful results in animal models and a few recently published small clinical trials [8–13].

We report herein an updated review of the rationale, pathophysiology, and experimental and clinical outcomes of this procedure in obese patients.

#### **2. Pathophysiological Basis for EBS**

The gastric fundus is mainly supplied by the left gastric artery (LGA) and sometimes by the gastroepiploic artery. The stomach has a neurohumoral role in hunger regulation through ghrelin: this is the rationale for gastric fundus embolization. Ghrelin is a ligand of the growth hormone secretagogue receptor (GHS-R) expressed by neuropeptide Y (NPY) and agouti-related peptide (AgRP) neurons in the arcuate nucleus of the hypothalamus, with a downstream effect that inhibits the release of α-melanocyte-stimulating hormone. Ghrelin therefore acts to increase appetite and food intake, promoting weight gain (Figure 2). In practice, the ghrelin plasma level rises sharply shortly before meals, which correlates with the hunger sensation that occurs before consuming food; conversely, ghrelin falls immediately after eating, which correlates with the sense of satiation after eating [14,15]. In addition, ghrelin downregulates the anorexigenic hormone receptors for PYY, GLP-1, and cholecystokinin and reduces sensitivity to gastric distension by selectively inhibiting a subpopulation of mechanically sensitive gastric vagal afferent nerves [16,17]. Nearly 90% of body ghrelin is produced by the ghrelin-secreting cells of the stomach, mainly in the gastric fundus, with smaller amounts produced in the small intestine, brain, and pancreas. Therefore, the main goal of the EBS procedure is to reduce ghrelin production by the stomach.

**Figure 2.** Hormonal changes and diagram of the ghrelin signal pathway. (**A**) The "hunger hormone" ghrelin is secreted by the gastric fundus, whereas peptide YY (PYY), cholecystokinin (CCK), and glucagon-like peptide 1 (GLP-1) are secreted in the gut from L cells. Adipocytes produce leptin (LPT). In the fasting state, decreased food intake suppresses the release of PYY, GLP-1, CCK, and LPT, while stimulating ghrelin production from the stomach. Ghrelin binds in the hypothalamic arcuate nucleus to the growth hormone secretagogue receptor (GHSR) on neuropeptide Y (NPY) and agouti-related peptide (AgRP) neurons. NPY and AgRP bind subsequently to NPY subtype 1 and 5 (NPY Y1/Y5) and melanocortin-3 and -4 (MC3/4) receptors on proopiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART) neurons, inhibiting the release of α-melanocyte-stimulating hormone (α-MSH). By inhibiting α-MSH, ghrelin acts to increase hunger and food intake. (**B**) The EBS procedure, by reducing ghrelin production in the stomach fundus area, mimics a fed state characterized by increases in the PYY, GLP-1, CCK, and LPT hormones. As a result, appetite decreases and the feeling of satiety increases.

It has been previously reported in small retrospective studies [18,19] that patients with gastrointestinal bleeding treated with LGA embolization had significant weight loss after the procedure. Although these studies are difficult to interpret due to the small sample size, associated comorbidities, high-risk profile of the population included, and variable follow-up, they proved the concept and set the basis for gastric fundus embolization as a possible treatment for obesity.

#### **3. EBS Procedure**

The celiac trunk branches from the aorta at the level of the twelfth thoracic vertebra (T12). The LGA is the first and smallest branch of the celiac trunk, although less commonly it may arise independently from the aorta, splenic artery, common hepatic artery, gastroduodenal artery, or superior mesenteric artery. It runs along the superior portion of the lesser gastric curvature and anastomoses with the right gastric artery, which arises from the common hepatic artery. The left gastroepiploic artery (GEA) is the largest branch of the splenic artery and gives gastric branches to both surfaces of the stomach. It anastomoses with the right GEA, which arises from the gastroduodenal artery (Figure 3). Anatomic variants are frequent and can be present in up to 30% of patients.

**Figure 3.** Voxel-gradient angio-CT 3D reconstruction (**A**) and schematic drawing (**B**) of the normal anatomy of the left gastric artery and left gastroepiploic artery. Most commonly the left gastric artery originates from the celiac trunk; less frequently, the artery may arise directly from the aorta, splenic artery, common hepatic artery, or superior mesenteric artery. The superior part of the greater curvature of the stomach is supplied by the left gastroepiploic artery, and the inferior part of the greater curvature by the right gastroepiploic artery. T indicates the target for the EBS procedure.

Although there is no standard bariatric embolization procedure, similar approaches have been used in the majority of studies done so far. Through the femoral or radial artery, selective digital subtraction angiography (usually in AP, LAO 60°, or LAO 90° projections) of the celiac trunk is performed to identify the LGA and other potential embolization targets (all arteries supplying the gastric fundus and potential accessory gastric arteries); the decision to embolize other vessels, especially the gastroepiploic artery, is based on their contribution to fundal blood supply (assessed angiographically). Different types of diagnostic catheters can be used for this purpose (JR, Simmons, Cobra, SOS, among others) depending on the operator's preference and experience. Cone-beam CT has also been used by some authors to best determine fundal perfusion and the eventual need to embolize other arterial territories. After the targets have been identified, selective cannulation and

angiography using the same catheter or a microcatheter are performed. The choice of embolic material has varied across trials, with 300–500 μm and 500–700 μm microspheres and 300–500 μm or 500–700 μm polyvinyl alcohol (PVA) particles used. LGA embolization can also be performed with an occlusion balloon microcatheter (OBC) advanced into the target artery over a standard guidewire: subsequent balloon inflation at the OBC tip can be used to prevent retrograde reflux, with tip pressure/resistance monitored to prevent overembolization and antegrade reflux. Embolization is taken to stasis, defined as the visual absence of contrast flow after five heartbeats; post-embolization DSA is usually acquired to confirm the success of embolization (Figure 4). Unfortunately, there are no consensus statements to standardize this procedure.

**Figure 4.** Case example of a left gastroepiploic artery (arrow) embolization in a 55-year-old male with a BMI of 43.2. (**A**) Selective angiography with a 6F 125 cm multipurpose catheter in the splenic artery (asterisk). (**B**) A Rebar 0.27 microcatheter (Medtronic, Santa Rosa, CA, USA; asterisk) was advanced through the multipurpose catheter, selectively engaging the gastroepiploic artery. (**C**) Three contour spirals (Medtronic, Santa Rosa, CA, USA) of different sizes and lengths (4 × 40 mm; 4 × 40 mm; 5 × 30 mm) were subsequently released (arrow). (**D**) Final angiography with the target fundus zone indicated by the dashed circle.

#### **4. EBS Preclinical Evidence**

In 2007, Arepally and coauthors published the first experience of catheter embolization of the LGA to reduce systemic plasma levels of ghrelin in a swine model [20]. The study showed a reduction in ghrelin but no significant weight change. Micro-ulcers at the gastroesophageal junction were also observed in the euthanized animals. A follow-up study with a sham procedure in the control group evaluated ghrelin levels and natural weight gain over 4 weeks of follow-up: weight gain was significantly lower in the embolized animals (7.8%) than in the control group (15.1%) [21].

In a canine model, Bawudun et al. [22] showed a significant decrease in plasma ghrelin levels, abdominal subcutaneous fat, and body weight in animals embolized with Lipiodol (or a combination of Lipiodol and polyvinyl alcohol) compared with a control group. The peak effect was obtained between weeks three and four, and a compensatory rise in plasma ghrelin levels was observed seven weeks after embolization. According to the authors, this result may reflect compensatory production of ghrelin in the gastric fundus after embolization.

Paxton and colleagues obtained a 55% reduction in ghrelin associated with significant weight loss at eight weeks in a swine model using 40 μm microspheres as the embolizing agent [23]. The same authors evaluated the gastric mucosa of swine by histology after the procedure and demonstrated healed or healing ulcers in the gastric body of 50% of treated animals [24].

In 2016, to assess whether the number of fundal arteries embolized could affect the ghrelin reduction and gastric ulceration rate, the same authors delivered embolic microspheres into four arteries supplying the gastric fundus vs. two arteries vs. one artery. Only the group of swine undergoing complete embolization demonstrated a significant ghrelin reduction compared with sham control animals. Gastric ulcers were present in 50% of the animals in which four arteries were embolized, compared with 40% in the other groups [25].

In 2017, Kim et al. [26] performed EBS in five swine by selectively infusing 50–150 or 150–250 μm PVA particles into the gastric arteries, while five animals were treated with saline infusion as a sham procedure. Endoscopy was performed three weeks after EBS to check whether any gastric ulcer had occurred, and celiac trunk angiography was performed eight weeks after EBS [27]. No statistically significant difference in ghrelin levels was observed after eight weeks, despite a reduction compared with the control group. In the embolized group, ulcerations were identified in three animals (60%), and recanalization of the embolized arteries was identified on follow-up angiography in three animals (60%), suggesting that the EBS procedure with PVA particles can transiently suppress ghrelin levels in embolized animals. However, ulcerations of the gastric fundus and recanalization of the embolized arteries were present at long-term follow-up.

A comparison of the studies involving animal models is reported in Table 1.



CTRL = control; LDSM = low-dose sodium morrhuate; HDSM = high-dose sodium morrhuate; EO = ethiodized oil (Lipiodol); PVA = polyvinyl alcohol; MS = microspheres; EA (*n*) = number of embolized arteries.

#### **5. EBS Human Clinical Evidence**

The available clinical evidence on bariatric embolization has been thoroughly described in previous reports [10,27,28]. The first human experience was published in 2013 by Kipshidze and colleagues [11], with its two-year follow-up published in 2015. In this five-patient series (mean age 44.7 ± 7.4 years), the average weight decreased from 128.1 ± 24.4 kg to 106 ± 21 kg at two years. All procedures were performed through femoral access using 6F JR 4 catheters for angiography and an Excelsior 1018 microcatheter (Boston Scientific Corp., Cork, Ireland) for selective cannulation. Embolization was done with 300 to 500 μm embospheres (Biocompatibles UK Limited, Surrey, United Kingdom). There were no major procedural complications. Three of the five patients complained of abdominal discomfort after the procedure, all with unremarkable follow-up gastroscopies.

The GET LEAN trial [13] (Gastric Artery Embolization Trial for the Lessening of Appetite Nonsurgically), published in 2016, analyzed the safety and efficacy of LGA embolization at six months. With four patients included (mean age 41 years, range 30–54), average weight loss was 9.2 kg (range 2.7–17.2) at six months of follow-up. Ultrasound (US)-guided right femoral or left radial access and a 4–5 F Simmons 1 catheter for angiography were used, and embolization was performed with 300 to 500 μm microspheres through a microcatheter. No major complications were reported. Three patients complained of mild nausea, occasional vomiting, and mild epigastric discomfort immediately following the procedure, which resolved within 24 h for two patients and within 3–4 days for the other.

In another series, published by Bai [8] in 2018, that included five patients with a mean age of 42.8 ± 13.9 years, weight loss averaged 12.9 ± 14.66 kg at nine months of follow-up. Operators used US-guided femoral access in all patients, a 5F standard catheter for angiography, and a Progreat 2.7 F microcatheter (Terumo) for selective cannulation; embolization was done with 500–710 μm PVA particles (Cook Incorporated, Bloomington). No major complications were reported: four patients experienced slight epigastric discomfort in the first hours after the procedure, which resolved within 48 h. One patient developed a small ulceration (3 mm in length) below the cardia (grade II according to CTCAE v4.0), which had healed endoscopically 30 days later after treatment with omeprazole (20 mg).

In 2019, Pirlet et al. [12] studied seven morbidly obese patients (mean age 48 ± 7 years, mean BMI 52 ± 8 kg/m2) who were referred for coronary angiography. Weight loss averaged 13 ± 17 kg (median loss: −11 kg [0, −25]) at up to 12 months after the procedure. The procedures were done through 5–6F right radial access, and selective cannulation of the LGA was done with a 5F JR catheter in all but one patient, in whom a 3F Renegade microcatheter (Boston Scientific Corp., Cork, Ireland) was used. Embolization with 300–500 μm PVA particles (Cook Medical, Bloomington, Indiana, USA) was performed. No major complications were observed in this series. Six patients had transient epigastric discomfort that resolved with proton pump inhibitors (PPIs).

In a retrospective series published by Elens and colleagues [9] in 2019 that included 16 patients who underwent embolization of the LGA, the mean weight loss was 8 ± 5.12 kg. The procedure was successful in all but one patient. Four patients (25%) were lost to follow-up. Femoral access was used in all patients; the celiac trunk angiogram was done with a 5F Cobra catheter (Cook Medical, Bloomington, Indiana, USA), LGA angiography with a 5F MP catheter, and selective cannulation with a Progreat 2.7F (Terumo, Tokyo, Japan) microcatheter; embolization was performed with 300–500 μm embospheres (Merit Medical) in the first patient and 500–700 μm embospheres in the remaining patients. The patient treated with 300–500 μm embospheres had gastric ulceration on control endoscopy that resolved at three months. One major complication was reported in this series: a patient who ended up in the intensive care unit with pancreatitis, splenic infarction, and late gastric perforation.

The 1-year results of the BEAT trial [29] (Bariatric Embolization of Arteries for the Treatment of Obesity) included 20 patients (mean age 44 ± 11 years), and the procedure was successful in all of them. The mean excess weight loss at one year was 11.5% (95% CI: 6.8%–16%; *p* < 0.001). In this series, patients underwent two weeks of pre-treatment

with omeprazole and sucralfate, which was continued up to 6 weeks post-procedure. Operators used femoral access and celiac trunk angiography plus cone-beam CT to evaluate the gastric arterial distribution. Catheterization was performed with a 5-F SOS guiding catheter (AngioDynamics, Latham, NY), and a 2.9-F high-flow microcatheter (Maestro; Merit Medical) was utilized for selective cannulation. After administration of 200 μg of nitroglycerin and 2.5 mg of verapamil (only into the LGA), embolization was performed with 300–500 μm embospheres (Merit Medical). No major complications were reported in this trial. Eleven minor adverse events occurred in eight participants. One participant had subclinical pancreatitis, evident as a transient elevation of lipase levels during the hospital stay, which was treated with supportive care; the patient was discharged within 48 h with no further clinical sequelae.
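For context, "excess weight loss" (the outcome reported in the BEAT trial) is conventionally computed relative to the weight above an ideal reference weight. A minimal sketch, assuming a reference BMI of 25 kg/m2 as the ideal weight (the patient numbers below are hypothetical, not from the trial):

```python
def percent_excess_weight_loss(initial_kg: float, current_kg: float,
                               height_m: float, ref_bmi: float = 25.0) -> float:
    """Percent excess weight loss (%EWL): weight lost divided by the excess
    weight above the reference ('ideal') weight, times 100."""
    ideal_kg = ref_bmi * height_m ** 2      # ideal weight at the reference BMI
    excess_kg = initial_kg - ideal_kg       # weight above the ideal weight
    return 100.0 * (initial_kg - current_kg) / excess_kg

# Hypothetical example: a 1.70 m patient starting at 120 kg who loses 8 kg.
# Ideal weight = 25 * 1.70**2 = 72.25 kg, so excess weight = 47.75 kg
# and %EWL = 100 * 8 / 47.75, i.e., about 16.8%.
print(round(percent_excess_weight_loss(120, 112, 1.70), 1))
```

Note that the same absolute weight loss therefore yields a smaller %EWL in heavier patients, which is why trials report it alongside total body weight loss.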

Zaitoun and collaborators recently published a pilot study of EBS in 10 patients with obesity (BMI > 30 kg/m2) and prediabetes (hemoglobin A1c (HbA1c) level 5.7%–6.4%) [30]. Statistically significant reductions in HbA1c (from 6.1% ± 0.2 to 4.7% ± 0.6; *p* < 0.0001), mean body weight (from 107.4 kg ± 12.8 to 98 kg ± 11.6; *p* < 0.0001), and mean BMI (from 37.4 kg/m<sup>2</sup> ± 3.3 to 34.1 kg/m<sup>2</sup> ± 3; *<sup>p</sup>* < 0.0001) were observed at 6 months of follow-up. Embolization was performed with 300–500 μm microspheres. The authors also reported a significant positive correlation between BMI and HbA1c levels (r = 0.91; CI, 0.66–0.98; *p* = 0.0002).

In 2020 the LOSEIT group, led by Vivek Reddy of the Icahn School of Medicine at Mount Sinai in New York City, published the first in-human, sham-controlled, randomized clinical trial of EBS [31]. In this trial, 40 patients (BMI 35.0 to 55.0 kg/m<sup>2</sup>, age 21 to 60 years) were randomized 1:1 to either sham treatment or transcatheter bariatric embolotherapy (TBE) targeting the left gastric artery; patients randomized to sham were unblinded at 6 months, and crossover to TBE was allowed. Patients in both groups received lifestyle support (diet and behavioral education for weight loss). TBE was performed via femoral arterial access using standard 6-F guiding catheters for celiac artery angiography. In one case the left gastric artery supplying the gastric fundus originated from the gastroduodenal artery, and in one patient the embolization target vessel was the left hepatic artery. An occlusion balloon microcatheter (Endobar Solutions LLC, Orangeburg, New York) was advanced into the target artery, and the balloon at the tip was inflated to prevent retrograde reflux. Embolization was achieved by delivering 300 to 500 μm microspheres (BeadBlock, Biocompatibles Ltd., Farnham, United Kingdom) into the LGA. The procedure was repeated until adequate angiographic stasis was achieved with the balloon deflated over 5 cardiac cycles. Patients randomized to sham received propofol only, without arterial access. At 6 months, in both the intention-to-treat and per-protocol populations, total body weight loss was greater with TBE (7.4 kg/6.4% and 9.4 kg/8.3% loss, respectively) than with sham (3.0 kg/2.8% and 1.9 kg/1.8%, respectively; p = 0.034/0.052 and p = 0.0002/0.0011, respectively); the total body weight loss was maintained with TBE at 12 months (intention-to-treat 7.8 kg/6.5% loss, per-protocol 9.3 kg/9.3% loss; p = 0.0011/0.0008 and p = 0.0005/0.0005, respectively).
After 1 week, all patients underwent endoscopic examination: five cases of asymptomatic ulceration occurred in the treatment group (four small superficial ulcers in the sub-cardiac region of the stomach and one superficial ulcer on the greater curvature).

Bariatric Embolization of Arteries With Imaging Visible Embolics (BEATLES) (BAE2) is an ongoing, randomized, sham-controlled study sponsored by Johns Hopkins University [32]. The aim of the trial, with an estimated enrollment of 59 participants, is to evaluate the change in body weight 12 months after randomization in the bariatric embolization arm versus the control (sham) arm; the estimated study completion date is December 2023.

A comparison of the studies involving humans is reported in Table 2.


**Table 2.** Clinical Studies Evaluating the BES Procedure for Weight Reduction.

#### **6. Current and Future Perspectives**

Considering the huge impact of obesity on public health worldwide, novel treatments are of utmost importance to reduce the morbidity and mortality associated with this condition and its complications. Early experimental and initial clinical data have suggested that BES may be a promising technique to reduce weight and hormonal imbalance in the obese population. Compared to bariatric surgery, BES seems to be less invasive and well tolerated, with a shorter recovery time, and is probably cost-saving; further randomized clinical trials are needed to confirm this hypothesis and to evaluate possible consequences for the metabolic and nutritional status of patients undergoing EBS. The duration of the BES procedure varies from 80 to 100 min, with a fluoroscopy time of 32 ± 14 min. Current clinical evidence shows that bariatric LGA embolization is an effective treatment associated with statistically significant weight loss during follow-up. In addition to weight change, some clinical series have also demonstrated a reduction in serum levels of ghrelin and/or leptin [8,11,29,31], hemoglobin A1c [29,30] and mean total cholesterol [29]; a statistically significant mean decrease in diastolic blood pressure in the embolization arm was observed in one trial [31]. Furthermore, the procedure is associated with improvements in quality of life after embolization, especially in the domains of physical function, self-esteem, sexual life, and public distress. The most frequent side effects after the procedure are nausea, vomiting and epigastric pain, and minor complications such as transient superficial mucosal ulcers are common after LGA embolization (usually treated with proton pump inhibitors, with spontaneous healing within 4 weeks to 3 months and no need for further hospitalization); only one case of major complications after the procedure (pancreatitis, splenic infarct and late gastric perforation) was reported by Elens et al. [9].

Distal embolizing agents (mostly embospheres and PVA particles) have been the main embolic materials used in recent trials. Although bariatric embolization seems to be safe in this initial phase and the weight-loss results are promising, further data are needed on patients subsequently treated with conventional bariatric surgery (given that the gastroesophageal junction is de-vascularized) to establish whether BES can be used as a bridge therapy to surgery in this population. The ideal embolic material, offering the best long-term balance of safety and efficacy, remains to be identified. A larger, well-designed RCT, ideally with a sham-controlled group, is eagerly awaited to establish the future role of BES in promoting weight loss and hormone homeostasis and in preventing obesity-related morbidity and overall mortality. A trial comparing BES with optimal medical therapy, including new drugs such as semaglutide [33], is also needed. Finally, no data are available comparing BES with surgical techniques to establish whether BES should be considered an alternative in patients who are not candidates for surgery (multimorbid patients).

**Author Contributions:** Conceptualization, G.M.S. and A.C.; Methodology, G.M.S.; Validation, G.M.S., A.C. and N.P.; Formal Analysis, G.M.S., A.C. and G.C.; Investigation, G.M.S., A.C. and G.C.; Data Curation, A.M. and M.B.; Writing—Original Draft Preparation, G.M.S. and N.P.; Writing—Review & Editing, all authors. Visualization, all authors. Supervision, G.M.S. and N.P.; Project Administration, G.M.S.; Funding Acquisition, G.M.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** Authors wish to thank Sara Sguotti from Medtronic Corporation for her invaluable technical assistance during the procedures and in manuscript preparation. All the authors have consented to the acknowledgement.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Nutrition in Patients with Type 2 Diabetes: Present Knowledge and Remaining Challenges**

**Maria Letizia Petroni 1,2, Lucia Brodosi 1,2, Francesca Marchignoli 1, Anna Simona Sasdelli 1, Paolo Caraceni 1,2, Giulio Marchesini 2,\* and Federico Ravaioli 1,2**


**Abstract:** Unhealthy behaviours, including diet and physical activity, coupled with genetic predisposition, drive type 2 diabetes (T2D) occurrence and severity; the present review aims to summarise the most recent nutritional approaches in T2D, outlining unmet needs. Guidelines consistently suggest reducing energy intake to counteract the obesity epidemic, but weight loss frequently results in sarcopenic obesity, a condition associated with poorer metabolic control and cardiovascular disease. Various dietary approaches have been proposed with largely similar results, with a preference for the Mediterranean diet; the best practice is the diet that patients feel confident of maintaining in the long term, based on individual preferences. Patient adherence is indeed the pivotal factor for weight loss and long-term maintenance, requiring intensive lifestyle intervention. The consumption of nutritional supplements continues to increase even though international societies do not support their systematic use. Inositols and vitamin D supplementation, as well as micronutrients (zinc, chromium, magnesium) and pre/probiotics, result in modest improvements in insulin sensitivity, but their use is not systematically suggested. To reach the desired goals, patients should be actively involved in the collaborative development of a personalised meal plan associated with habitual physical activity, aiming at normal body weight and metabolic control.

**Keywords:** behaviour; diet; lifestyle; nutrition supplements; sarcopenia; type 2 diabetes

#### **1. Introduction**

Diabetes mellitus, namely type 2 diabetes (T2D), constitutes a significant challenge for health systems worldwide. According to the 2019 Diabetes Atlas of the International Diabetes Federation [1], 463 million adults are currently living with diabetes (1 in 11 individuals worldwide, but 1 in 5 among those aged over 65). The total number is expected to rise to 700 million by 2045. The economic impact is huge, driven by the direct costs of treatment and complications, the indirect costs of disability and premature death, and the intangible costs of poor quality of life.

Although T2D is often characterized as a disease of affluence, nutritional problems are frequent in these patients. Unhealthy lifestyles, expressed as overnutrition and/or scarce physical activity and leading to overweight and obesity, add to genetic defects in the pathogenesis of the disease. Dietary restrictions are prescribed to reduce the incidence of T2D as well as to improve metabolic control [2], but weight loss is burdened by the loss of muscle mass [3], and sarcopenia adds to age-dependent muscle wasting [4], increasing frailty [5]. These two opposite needs make a correct nutritional approach mandatory to reduce the disease burden, improve metabolic control, limit pharmacologic treatment and reduce the risk of impending cardiovascular disease.

**Citation:** Petroni, M.L.; Brodosi, L.; Marchignoli, F.; Sasdelli, A.S.; Caraceni, P.; Marchesini, G.; Ravaioli, F. Nutrition in Patients with Type 2 Diabetes: Present Knowledge and Remaining Challenges. *Nutrients* **2021**, *13*, 2748. https://doi.org/ 10.3390/nu13082748

Academic Editor: Ina Bergheim

Received: 12 July 2021 Accepted: 6 August 2021 Published: 10 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

National and international guidelines for nutritional and lifestyle recommendations are available [5–9], together with protocols to guide weight loss to produce long-term T2D remission [10]. The proposed strategies (dietary prescription, lifestyle counselling, cognitive behaviour therapy), although all-inclusive of nutritional components, are markedly different in their approach and goals and should be known by clinicians approaching patients with T2D (Table 1) [11]. The present review is intended to summarize the most recent nutritional approaches in T2D, also outlining unmet needs.


**Table 1.** Comparison of strategies and goals of different dietary interventions.

Note that enrolment into counselling and behaviour therapy may be facilitated by motivational interviewing. Treatment may be provided either in individual or in group settings; group strategies are likely to enhance the coping skills of the participants, via relational and interpersonal communication with people experiencing similar difficulties.

#### **2. Methods and Areas of Research**

#### *2.1. Literature Search*

The literature on T2D is immense. A PubMed search in June 2021, limited to the period 2016–2021 and using the string "Type 2 diabetes" [MeSH Terms] AND "nutrition" [All Fields] AND "human" [MeSH Terms], retrieved 4865 references, including 887 review articles (234 systematic reviews), 255 meta-analyses and 760 clinical trials. The authors used the search to identify the most relevant data and unmet treatment needs. The reference lists of selected articles were used to retrieve older documents in order to provide a complete overview of present problems.
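As a sketch of how such a search could be reproduced programmatically, the snippet below builds a request URL for NCBI's E-utilities ESearch endpoint (a real public API). It only constructs the URL, without performing any network call, and the date parameters are an assumption about how the 2016–2021 limit would be encoded.

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint; this sketch only builds the
# request URL and does not contact the server.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, mindate, maxdate, retmax=100):
    """Build an ESearch URL restricted by publication date."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # restrict by publication date (assumption)
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": retmax,
        "retmode": "json",
    }
    return EUTILS + "?" + urlencode(params)

term = ('"Type 2 diabetes"[MeSH Terms] AND "nutrition"[All Fields] '
        'AND "human"[MeSH Terms]')
print(pubmed_search_url(term, "2016", "2021"))
```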

#### *2.2. Diabetes, Obesity and Sarcopenia*

The association between T2D and obesity is so close that the term "diabesity" was used as early as 1980, in a JAMA editorial, to indicate the dreadful association of the two conditions [12]. The term was later taken up by Astrup and Finer [13], as well as by Zimmet et al. [14], and is now largely accepted within the metabolic community. Obesity is characterized by the accumulation of body fat, but it is diagnosed by a formula (the body mass index, i.e., weight (kg)/height<sup>2</sup> (m<sup>2</sup>)) that does not directly measure body fat. Muscle mass is frequently increased in obesity but may be relatively scarce in quantity and quality compared to body fat.
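For concreteness, the BMI formula mentioned above can be sketched as follows; the cut-offs are the conventional WHO classes, and the example values are hypothetical.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

# Conventional WHO cut-offs (illustrative only). The text's point is
# that BMI ignores body composition, so two people with the same BMI
# may differ widely in fat vs. muscle mass.
def bmi_class(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obesity"

print(round(bmi(98, 1.62), 1), bmi_class(bmi(98, 1.62)))  # → 37.3 obesity
```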

Sarcopenia is particularly common in older patients, synergistically driven by age and obesity; body fat increases until the seventh decade of life (the median age of patients with diabetes attending diabetes centres) and decreases thereafter [15]. At the same time, sedentariness progressively reduces muscle mass, finally resulting in sarcopenic obesity [16], frequently associated with cardiometabolic disorders [17].

By definition, sarcopenia implies a quantitatively reduced muscle mass, as measured by dual-energy X-ray absorptiometry (DXA), the commonly accepted gold standard. Several studies have validated the use of bioelectrical impedance analysis (BIA), an easy, time-saving, and cost-effective bedside technique for assessing regional muscle mass and body composition [18,19]. BIA-assessed sarcopenia is defined by the skeletal muscle mass index (SMI), calculated as total appendicular skeletal mass (ASM, kg) divided by body weight (kg) × 100. These measurements do not consider qualitative muscle mass, and most recent guidelines suggest that functional measurements (e.g., low muscle strength by handgrip) should be primarily used to characterize sarcopenia, with quantitative data as supportive measures [20].
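The BIA-based index defined above (ASM divided by body weight, times 100) can be sketched in a few lines; the input values are hypothetical.

```python
def skeletal_muscle_index(asm_kg, body_weight_kg):
    """SMI (%) = total appendicular skeletal mass (kg) / body weight (kg) * 100,
    as defined in the text for BIA-assessed sarcopenia."""
    return asm_kg / body_weight_kg * 100

# Hypothetical example values, not taken from any cited study.
print(round(skeletal_muscle_index(19.5, 82.0), 1))  # → 23.8
```

Note that, per the guidelines cited above, such quantitative measures are supportive; functional measurements such as handgrip strength come first.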

The prevalence of sarcopenia in diabetes has been extensively investigated. In a recent narrative review, the prevalence of sarcopenia varied between 7% and 29% [21], according to age and metabolic control, but higher figures are frequently reported. A systematic review with meta-analysis including 15 studies confirmed a prevalence of up to 50% [22], again driven by age and metabolic control. A study with BIA concluded that patients with T2D have increased ectopic fat at the expense of skeletal muscle, i.e., relative sarcopenia [23], and lower muscle mass is coupled with decreased muscle strength [24], which also predicts diabetes in the general population [25]. The contribution of diabetes duration remains controversial [21,22], but older patients with T2D, with an expected longer duration of disease, show a larger decline in appendicular lean mass, muscle strength, and functional capacity compared with normoglycemic controls [26]. Notably, when compared with matched control populations, the risk of sarcopenia increased systematically in the presence of T2D (odds ratio (OR) 1.55; 95% confidence interval (CI) 1.25–1.91; *p* < 0.001 [22] and OR 1.63; 95% CI 1.20–2.22; *p* = 0.002 [27]). This indicates a need for preventive measures to limit quantitative and qualitative muscle defects through effective nutritional treatments.

#### *2.3. Metabolic Control*

The primary defect in T2D is insulin resistance, a condition in which normal insulin levels are associated with reduced metabolic effects, or in which higher-than-normal insulin levels are needed to elicit a normal metabolic response. Insulin resistance accounts for a diffuse impairment in the whole body, as well as for selective defects in different organs and tissues (liver, muscle, adipose tissue).

Whole-body insulin resistance mainly reflects muscle insulin resistance [28], reducing glucose and amino acid uptake in the postprandial phase, as well as accelerating glycogen and amino acid release in the post-absorptive state, further accelerated by glucagon release [29]. Glucagon constitutes the link between muscle and liver in substrate disposal; by stimulating hepatic glucose production and ketogenesis, glucagon favours the utilization of substrates released in the periphery, whereas high insulin concentrations favour hepatic fat deposition. In both obese and nonobese subjects, higher plasma insulin levels have been associated with a linear increase in the rates of hepatic de novo lipogenesis [30], as supported by the hypoglycaemic effects of glucagon suppression by glucagon-receptor antagonists [31,32]. In the hepatocytes, fatty acids may be derived from de novo lipogenesis, uptake of non-esterified fatty acids and low-density lipoproteins, or lipolysis of intracellular triacylglycerol. Their accumulation, due to higher synthesis and decreased export in the presence of high insulin concentrations in the portal vein, is the likely cause of fatty liver disease, occurring in up to 73% of patients with T2D [33].

The link between muscle tissue and the liver is exerted by amino acids (Figure 1) [34]. Branched-chain amino acids, bypassing the liver in the post-prandial state, serve as nitrogen carriers to the periphery, whereas alanine and glutamine are used to carry nitrogen from the periphery to the liver, intestine and kidney. In insulin-resistant states, including obesity [35], the post-load uptake of branched-chain amino acids is impaired, possibly leading to defective amino acid supply to the muscle tissue and sarcopenia. In summary, the complex trafficking of glucose, lipid and amino acid in response to insulin resistance should be considered in the treatment of diabetes.

**Figure 1.** Interorgan amino acid exchange in the postabsorptive state and after meals in diabetes. Note the importance of BCAAs (valine, isoleucine and leucine) as nitrogen carriers to the muscle tissue (lean mass) in the post-prandial period (blue arrows) and the reverse importance of alanine and glutamine as nitrogen carriers to central organs in the post-absorptive state (liver, kidney, intestine) (green arrows). In this context, the regulatory role of the pancreas (altered secretion of insulin and glucagon) and the adipose tissue (lipolysis, release of free fatty acids and inflammatory adipokines in the general circulation, particularly in the post-absorptive state) is pivotal for the regulation of hepatic and whole-body homeostasis (red arrows).

#### **3. Medical Nutrition Therapy for Type 2 Diabetes**

The foundation of medical nutrition therapy (MNT) in T2D is to keep glucose, lipid, and blood pressure levels within the target range to prevent, delay or manage microvascular and macrovascular complications [36,37].

MNT plays a pivotal role in the overall management of diabetes, and patients with T2D should be actively involved with their healthcare team in the collaborative development of a personalized meal plan. If these patients are referred to a registered dietitian or a nutritionist proficient in providing diabetes-specific treatment, an absolute reduction in glycated haemoglobin (HbA1c) of up to 1.9% may be observed [8]. Continuous dietary counselling integrated with mobile apps and wearable devices has also been advocated to facilitate the real-time assessment of dietary intake, strengthen adherence, and support motivation and self-efficacy [38].

#### *3.1. Comparison between Different Guidelines*

Table 2 summarizes the main nutritional recommendations for patients with T2D, derived from guidelines, and the dietary patterns with a high degree of evidence [5–9]. All proposed interventions are designed to reduce energy intake and promote 5–10% loss of initial body weight, leading to improved insulin sensitivity, blood glucose and blood pressure control, and reduced lipid levels [39]. Regular mealtimes and a healthy diet should be combined with increased physical activity [4].

**Table 2.** Summary of nutritional recommendations for type 2 diabetes, as derived from international guidelines.


The optimal distribution of macronutrients as a percentage of total energy is highly variable, from 45 to 60% for carbohydrates, from 15 to 20% for proteins and from 20 to 35% for fats, suggesting that there is no ideal percentage of calories from macronutrients [7]. As to carbohydrates, high-fibre sources (30–50 g/day of dietary fibre, ≥30% as soluble fibres) and minimally processed, low-glycaemic-index carbohydrates should be preferred to improve glycaemic control, LDL-cholesterol and cardiovascular (CV) risk. Overall, reducing carbohydrate intake for individuals with T2D has been shown to improve blood glucose [6]; a systematic review and meta-analysis (9 studies with 734 patients) confirmed a beneficial effect of low-carb diets vs. normal- or high-carb diets on HbA1c and on short-term weight loss, but not on long-term weight loss [40]. Food plans should emphasize the consumption of non-starchy vegetables, fruits, whole grains, and dairy products, with minimal added sugars [41]. Using non-nutritive sweeteners as substitutes for added sugar (sucrose, high-fructose corn syrup, fructose, glucose) can reduce daily calories and total carbohydrates. For those who regularly consume sugary drinks, a low-calorie or unsweetened drink can be an alternative, but both should be consumed with caution.
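As a worked example of translating the energy-share ranges above into daily gram targets, the sketch below uses the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat); the 1800 kcal target and the mid-range split are illustrative assumptions, not guideline prescriptions.

```python
# Standard Atwater energy factors (kcal per gram of macronutrient).
ATWATER = {"carbohydrate": 4, "protein": 4, "fat": 9}

def macro_grams(total_kcal, shares):
    """Convert energy shares (fractions of total energy, summing to ~1)
    into grams per day for each macronutrient."""
    return {m: round(total_kcal * f / ATWATER[m]) for m, f in shares.items()}

# Mid-range split from the quoted guideline ranges (carbohydrate 45-60%,
# protein 15-20%, fat 20-35%) applied to an assumed 1800 kcal/day target.
print(macro_grams(1800, {"carbohydrate": 0.50, "protein": 0.20, "fat": 0.30}))
# → {'carbohydrate': 225, 'protein': 90, 'fat': 60}
```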

Additionally, recommendations on protein intake do not differ from those for the general population (1.0–1.2 g/kg body weight, or corrected body weight for patients with overweight/obesity); protein intake should be reduced to 0.8 g/kg body weight in subjects with chronic diabetic nephropathy [36]. At present, there is some inconsistency across guidelines from different countries as to protein sources (some do not limit animal proteins) and as to the maximal allowed protein intake (1.2–1.5 g/kg/day) [42]. A recent meta-analysis of 54 RCTs (4344 participants) showed a significant effect of moderately high-protein diets (20–45% of total energy) vs. low-protein diets (10–23%) on weight loss and weight-loss maintenance, total fat mass reduction and cardiometabolic risk [43]. The authors suggest that the effects might also be due to the blood-pressure-lowering effect of bioactive peptides that inhibit angiotensin-converting enzyme activity, observed in protein isolates [44].

Among dietary fats, it is recommended to avoid trans-fatty acids as much as possible and to consume less than 7–9% of the total daily energy from saturated fatty acids (SFAs). SFAs should be replaced with polyunsaturated fatty acids (PUFAs), mainly from mixed omega-3/omega-6 sources, and with monounsaturated fatty acids (MUFAs) of vegetable origin, whole grains, nuts and seeds (rich in alpha-linolenic acid) [36,45].

The recommendations have largely focused on the quality of the diet and the importance of a healthy eating pattern containing nutrient-rich foods, with less attention to the percentage of specific nutrients, and with a reduction in daily caloric intake (250–500 kcal) for subjects with overweight and obesity [6]. Several dietary patterns have been studied and proposed, but no single dietary pattern should be preferred [8]. Individual preferences and treatment goals will determine the long-term use of these models; systematic reviews and meta-analyses have shown that a Mediterranean-style dietary pattern significantly improves hard outcomes such as glycaemic control, systolic blood pressure, total cholesterol, HDL-cholesterol and triglycerides [46]. The Mediterranean diet is characterised by a moderate-to-low carbohydrate intake, entirely covering micronutrient needs [47]. Additionally, a low-fat diet such as the DASH diet, promoted for the prevention of cardiovascular disease and the treatment of high blood pressure [48], has also reached consensus [49]. In a review comparing low-carbohydrate and ketogenic diets, the vegan diet, and the Mediterranean diet, all diets improved glycaemic control and weight loss, but patient adherence and long-term manageability were the pivotal factors for the efficacy of each diet [50].

#### *3.2. Intensive Lifestyle Intervention*

Intensive lifestyle intervention (ILI) that supports behaviour changes, as initially experienced in the Finnish Diabetes Prevention Study and the U.S. Diabetes Prevention Program [51,52], represents the recommended approach to prevent and/or delay the onset of T2D in prediabetic patients [5]. The ILI behaviour approach combines diet and physical activity interventions with the goal of achieving and maintaining a 7% loss of initial body weight and increasing moderate-intensity physical activity to at least 150 min/week. The effect of ILI has also been investigated in the treatment of T2D. The Look AHEAD study randomized 5145 individuals with T2D and associated overweight or obesity to either ILI or diabetes support and education (the control group), with cardiovascular outcomes as the primary goal. Weight loss was achieved by reducing caloric intake to 1200–1800 kcal/day, depending on baseline weight, using portion-controlled meal plans, calorie-counting techniques, and meal replacements, combined with moderate physical activity of ≥175 min/week. ILI was delivered as individual and group sessions over the first year, with a median follow-up of 9.6 years [53]. After one year, the average weight loss in the ILI group was 8.6%, compared with 0.7% in the control group, with 55% of ILI participants having lost ≥7% of their initial body weight vs. 7% of controls. This led to remission of T2D in 11.2% of ILI participants vs. 2.0% of controls. However, by the fifth year of follow-up, ILI participants had regained half of their initial weight loss, and the study was closed at the end of follow-up (10 years) after an interim analysis had shown that the intervention had failed its primary outcome [54]. Thus, the critical point becomes how to achieve long-term weight-loss maintenance, a difficult task in the general population [55] and a core problem in T2D treatment with approaches based on lifestyle changes.
Although more effective than behaviour change in inducing and sustaining remission of T2D, bariatric surgery also suffers from reduced durability over time [56].

A novel approach was tested in the DiRECT trial, a primary care-led management intervention in patients with T2D diagnosed less than 6 years earlier and not receiving insulin. The ILI strategy was preceded by a commercial very-low-calorie diet followed by stepwise food reintroduction. Primary outcomes were weight loss ≥15 kg and T2D remission. At 12 months, almost half of the participants achieved T2D remission off all glucose-lowering medications [57]; this percentage dropped to 36% at 24 months [58]. Notably, the maintenance of diabetes remission paralleled weight-loss maintenance, and particularly fat removal from the liver and pancreas, suggesting recovered insulin secretion [59]. Within the limits of durability, all these data support the use of ILI, including dietary interventions, as an effective adjuvant treatment to improve glycaemic control [60].

Another approach is so-called intermittent fasting, which has gained increasing popularity for treating T2D on the basis of a very limited literature [61]. The term encompasses various eating behaviours that avoid (or limit) nutrient and energy intake for a significant amount of time (a full day, or time-restricted feeding within a 6 to 8 h window) on a regular intermittent schedule. Intermittent fasting is claimed to improve glucose control and insulin resistance and to induce weight loss by generating a 'metabolic switch', i.e., a sort of rejuvenation of metabolic homeostasis, leading to increased health span and longevity [62], but no advantage over conventional caloric restriction has been proven. Moreover, this regimen could carry a risk of hypoglycaemia even when a medication dose-change protocol is followed, and should only be used under strict medical supervision and/or continuous glucose monitoring [63].

Finally, the use of mobile apps and wearable devices has recently gained consensus as a means to facilitate weight loss. These devices allow a direct analysis of daily calorie intake and physical activity (daily steps), translated into calorie consumption [64]. This provides immediate feedback and is likely to support long-term adherence to well-defined goals [38]. Several commercial apps are available and have been tested in the prevention and treatment of diabetes in trials mimicking the U.S. Diabetes Prevention trial [52]. Toro-Ramos et al. confirmed a modest weight-loss efficacy of app use after 6 and 12 months in subjects with prediabetes compared with usual care [65], and similar studies are available with the most recent apps, which also support interactivity through tailored messages [66]. Although all these supports are expected to improve long-term weight loss, and a few patients may achieve impressive results [67], their use is biased by high attrition rates [68]. Nonetheless, the possibility of reaching a larger audience makes this approach a useful opportunity.

#### **4. Nutritional Supplements for Metabolic Control**

International diabetes societies do not support the use of nutritional supplements in diabetes, but their use continues to increase in several countries despite the lack of evidence and uncertainty about safety [36]. A complete analysis of the available products (combinations may number several hundred) is outside the scope of this review, but a few of them are of interest. Their putative mechanism(s) of action are summarized in Table 3 [69–89]. They are not expected to replace diet and glucose-lowering drugs but might be used with confidence, provided their safety is proven.

**Table 3.** Putative mechanism(s) responsible for the beneficial effects of nutrition supplements and micronutrients on diabetes risk and glycaemic control.


Abbreviations: HOMA-IR, homeostasis model assessment of insulin resistance; IL, interleukin; TNF, tumor-necrosis factor.

#### *4.1. Inositols*

Several reviews and meta-analyses have been published on the treatment of gestational diabetes with myo-inositol (MI) or D-chiro-inositol (DCI) [70,90–93]. A Cochrane review was inconclusive [94]; MI supplementation did not reduce the need for insulin or produce any significant effect on blood glucose. Conflicting data have also been reported using DCI or the combination of MI and DCI, and the optimum dosage to achieve a significant effect on glucose metabolism remains unsettled [91]. A position statement of the two largest Italian diabetes societies concluded that MI (at the dose of 4 g/day) might be safely used for the prevention and treatment of gestational diabetes [95], but the level of evidence and the strength of recommendations are low. No data are available on the use of MI or DCI to treat insulin resistance outside gestational diabetes. Studies are in progress on the combined use of MI and myo-inositol hexa-phosphate (IP6), or phytic acid, showing more effective anti-oxidant and glucose-lowering activity in experimental animals [96], but no clinical data are available.

The use of inositol(s) in polycystic ovary syndrome is not considered in the present review; in that setting, specific hormonal activity is likely to produce clinical effects [97].

#### *4.2. Vitamin D*

Vitamin D levels are frequently suboptimal in T2D, probably driven by overweight/obesity, and specifically by visceral adiposity [98], and have been associated with chronic inflammation and insulin resistance, as well as impaired insulin release [99]. Epidemiological studies support the existence of a relationship between low vitamin D levels and the presence of T2D, metabolic syndrome [100,101], nonalcoholic fatty liver disease (NAFLD) [102], cardiovascular risk factors [103] and insulin resistance, also tested by glucose clamp [75]. However, a clear association between vitamin D levels and insulin and glucose metabolism has not been systematically confirmed by intervention studies, and a causal association has never been established [104]. In a subset of the RECORD trial, a placebo-controlled trial of oral vitamin D3 and/or calcium supplementation for the secondary prevention of osteoporotic fractures in older people, vitamin D3 at the daily dose of 800 IU, with or without 1000 mg of calcium, did not prevent the development of T2D and did not reduce the need for glucose-lowering drugs in T2D patients [105]. Although the effects on insulin sensitivity have long been conflicting [73], a recent systematic review with meta-analysis confirmed that vitamin D supplementation resulted in a significant improvement in HOMA-IR (standardized mean difference = −0.57; 95% CI: −1.09 to −0.04), particularly when vitamin D was administered in large doses and for a short period of time to nonobese, vitamin D-deficient patients, or to individuals with optimal glucose control at baseline [106]. These data were confirmed in another recent study of vitamin D-deficient adults randomized to high-dose vitamin D supplementation: the HOMA index of insulin resistance was significantly reduced, and a lower rate of progression toward diabetes was observed vs. the control group (3% vs. 22%; *p* = 0.002) [107].
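For reference, the HOMA-IR index discussed above is derived from fasting values; the sketch below uses the standard homeostasis model assessment formula, which is not stated in the text, and the input values are hypothetical.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance.
    Standard formula (an assumption; not given in the text):
    fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Hypothetical values: fasting glucose 6.0 mmol/L, fasting insulin 12 uU/mL.
print(round(homa_ir(6.0, 12.0), 2))  # → 3.2
```

Higher values indicate greater insulin resistance, which is why trials report reductions in HOMA-IR as an improvement.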

Of note, vitamin D has been extensively used also to treat sarcopenia, considering the role of insulin resistance extending from glucose metabolism to protein and amino acid metabolism, as discussed below.

#### *4.3. Niacin*

Niacin, also known as vitamin B3, is a water-soluble pyridine derivative present in several forms (namely nicotinic acid and nicotinamide) and is frequently combined with inositol as inositol hexanicotinate. The effects on insulin release from islet β-cells have been extensively investigated in T2D with secondary failure of sulfonylureas, where niacin at the daily dose of 1.5 g significantly restored C-peptide release [108]. However, a meta-analysis of eight trials in which niacin was used to treat hyperlipidemia in 2110 T2D patients showed no significant effects on plasma glucose (weighted mean difference (WMD), 0.18 mmol/L; 95% CI, −0.14 to 0.50) or HbA1c levels (WMD, 0.39%; 95% CI, −0.15 to 0.94) [109]. Niacin appeared to cause a deterioration of glucose control, in keeping with data observed in a meta-analysis of 11 trials in patients without diabetes at entry, where niacin was used to treat dyslipidaemia and prevent cardiovascular events [110] (relative risk of de novo T2D: 1.34; 95% CI 1.21–1.49). Similar results were provided by the large trial of combination treatment with niacin plus laropiprant [111], where niacin (2 g/day for a median of 3.9 years) was associated with an increased incidence of de novo T2D (rate ratio, 1.32; 95% CI 1.16–1.51) and deterioration of metabolic control in subjects with diabetes (1.55; 1.34–1.78) [112]. This deleterious effect is similar to the well-known, mild negative effect of statins on glucose metabolism and adds to the poor tolerability of niacin due to flushing at pharmacologic doses.

#### *4.4. Nutraceuticals*

Natural compounds derived from plant extracts, spices, herbs, and essential oils have been tested for alleged benefits in managing patients with metabolic syndrome [77,113]. They include Mediterranean diet components, olive oil and its anti-oxidant components, natural legumes and cereals, as well as specific compounds, alone or in combination. Curcumin [114], cinnamon [115,116], berberine [117,118], citrus flavonoids [119,120], quercetin [121,122], the bioactive compounds of garlic [123,124], red yeast rice [125] and neem extracts [126] have all demonstrated some activity on insulin sensitivity, but the studies are usually of poor quality and very few compounds have received extensive validation, even when supported by systematic reviews [119]. They may be included in dietary recommendations but should never replace pharmacologic treatment.

Resveratrol, a polyphenol present in plants such as grapes and nuts, and mainly in derived products (wine), merits a specific mention [127–129]. A recent Cochrane review identified three RCTs with a total of 50 participants who received graded doses of daily oral resveratrol for 4–5 weeks vs. placebo. The studies had a low risk of bias, but the analysis did not demonstrate any significant effect on glucose or HbA1c levels, with the limitation of a short observation period. The authors identified eight more ongoing RCTs with approximately 800 participants, likely to contribute more solid results [128]. Clinical studies in patients with insulin resistance and NAFLD have shown promising results [130], but even moderate alcohol intake is questioned in these patients because the negative effects of alcohol on hepatic and extrahepatic cancers outweigh its possible beneficial effects on the cardiovascular system, largely derived from retrospective studies [131]. Finally, alcohol provides extra calories that should be considered in patients on dietary restriction, the pivotal intervention to reduce body weight and the NAFLD burden.

Probiotics and/or prebiotics could be a promising approach to improve insulin sensitivity through modification of the gut microbiota. Clinical data specifically refer to gestational diabetes [132,133]: in these women, four high-quality RCTs (288 participants) showed that treatment was associated with a significant reduction in insulin resistance (HOMA-IR: −0.69%; 95% CI −1.24, −0.14, *p* = 0.01), but not in fasting glucose (−0.13 mmol/L; 95% CI −0.32, 0.06, *p* = 0.18) or LDL-cholesterol (−0.16 mmol/L; 95% CI −0.45, 0.13, *p* = 0.67) [133]. In the general diabetes population, the most recent review identified 38 studies totalling 2086 participants fitting pre-defined criteria for inclusion in a meta-analysis [134]. Overall, the use of prebiotics, probiotics or synbiotics reduced fasting glucose (−0.58 mmol/L; 95% CI −0.86, −0.30; *p* < 0.01), total cholesterol (−0.14 mmol/L; 95% CI −0.26, −0.02, *p* = 0.02) and triglyceride levels (−0.11 mmol/L; 95% CI −0.20, −0.02, *p* = 0.01) and increased HDL-cholesterol (0.04 mmol/L; 95% CI 0.01, 0.07, *p* < 0.01), but failed to reach the significance threshold for HbA1c (−2.17 mmol/mol; 95% CI, −4.37 to 0.03; *p* = 0.05) and had no effect on LDL-cholesterol [134].

Fructans are compounds acting as prebiotics, i.e., non-digestible food ingredients that are neither metabolized nor absorbed while passing through the upper gastrointestinal tract and are fermented by bacteria in the colon. Prebiotics include fructo-oligosaccharides, galacto-oligosaccharides, lactulose and large polysaccharides (inulin, resistant starches, cellulose, hemicellulose, pectin and gum) [135,136]. Diets rich in fructans might improve glucose metabolism in T2D also via decreased food intake and intestinal absorption, in addition to modifications of the gut microbiota [137,138]. A systematic review with meta-analysis of 25 studies did not provide evidence for a beneficial effect on BMI, but inulin-type carbohydrate supplementation reduced fasting glucose (−16.4 mg/dL; 95% CI, −17.6 to −15.2), HbA1c (−0.58%; 95% CI, −0.78 to −0.39), and HOMA-IR (−0.99%; 95% CI, −1.76 to −0.2). However, large heterogeneity was demonstrated, raising doubts about the validity of the data [139].

#### *4.5. Other Micronutrients*

4.5.1. Zinc

Zinc deficiency is common in T2D [140], likely as an effect of both hyperzincuria [141] and reduced intestinal absorption [142], and it contributes to insulin resistance [143]. Zinc's antioxidant role further strengthens the importance of zinc levels for diabetes control and the prevention of microvascular complications [144].

In the clinical setting, a systematic review with meta-analysis of 12 studies in T2D patients showed that zinc supplementation resulted in a significant reduction of fasting blood glucose (pooled mean difference, −18.1 mg/dL; 95% CI −33.8 to −2.41) and HbA1c (−0.54%; 95% CI, −0.86 to −0.21), accompanied by a systematic reduction of total and LDL-cholesterol levels [145]. Among diabetes-related complications, zinc supplementation was shown to reduce lipoperoxidation [146] and to decrease urinary albumin excretion, independently of glucose control [147,148]. However, a few studies failed to demonstrate any positive effect of zinc supplementation on the metabolic control of T2D patients [146], even with long-term supplementation and low zinc levels at baseline [149]. Zinc supplementation might prove useful only in specific settings. In zinc-deficient patients with cirrhosis, independently of diabetes status, zinc treatment (zinc sulfate, 200 mg three times per day) was associated with improved non-insulin-mediated glucose disposal (so-called glucose effectiveness) [150], as well as an improved alanine-stimulated urea synthesis rate, a measure of amino acid utilization in tissues [151], also resulting in decreased ammonia levels and improved mental state. All these complementary effects might be important in subjects with T2D who have progressed to NAFLD cirrhosis [152].

No relevant side effects of zinc supplements have been reported in chronic diseases [153].

#### 4.5.2. Chromium

A possible role of deficient chromium levels as a risk factor for T2D has long been suggested, based on chromium's insulin-sensitising activity, but the effects on human disease remain uncertain. In a large case-control study involving 4443 Chinese individuals (nearly half with either newly diagnosed T2D or newly diagnosed pre-diabetes), plasma chromium levels were approximately 10% lower in the T2D and pre-diabetes groups vs. controls, and the risk of T2D and pre-diabetes decreased across quartiles of chromium [154]. This evidence fits with smaller studies reporting decreased chromium levels and/or increased chromium excretion in T2D [141,155].

The effects of chromium supplementation have been tested in multiple review articles with pooled analyses or meta-analyses [156–159]. Based on 25 RCTs of chromium supplementation, Suksomboon et al. concluded that chromium supplementation has positive effects on glucose control in patients with diabetes, with no increased risk of adverse events compared with placebo [156]. On the contrary, Yin et al., in a meta-analysis of 14 trials (875 participants, mean age range 30 to 83 years, 8 to 24 weeks of follow-up), did not demonstrate any significant effect of chromium (either as Cr chloride, Cr picolinate, or Cr yeast) on HbA1c levels [157]. In a review limited to patients with T2D, very few studies reached clinically meaningful goals, defined as fasting plasma glucose (FPG) ≤7.2 mmol/L, a decline in HbA1c to values ≤7%, or a decrease of ≥0.5% from baseline levels [158]. Finally, in the most recent and largest analysis in T2D (28 studies, 1295 participants, heterogeneous chromium supplements with daily intakes ranging up to 3000 μg for 6–24 weeks), the authors concluded that Cr supplements have a positive effect on glucose metabolism and recommended including them in the treatment of T2D [159], despite uncertainty about long-term use. Treatment reduced fasting glucose (WMD, −0.99 mmol/L; 95% CI, −1.72 to −0.25) and HbA1c (WMD, −0.54%; 95% CI, −0.82 to −0.25), reduced triglycerides and increased HDL-cholesterol. The effects were mainly reported with both chloride and picolinate formulations and were independent of treatment duration.
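
The weighted mean differences (WMDs) quoted throughout these meta-analyses are typically obtained by inverse-variance pooling of per-study effects; a minimal fixed-effect sketch (study values below are illustrative, not the cited data):

```python
import math

def pool_fixed_effect(mean_diffs, std_errors):
    """Fixed-effect inverse-variance pooling of per-study mean differences.
    Each study is weighted by 1/SE^2; returns the pooled WMD and its 95% CI."""
    weights = [1.0 / se ** 2 for se in std_errors]
    wmd = sum(w * d for w, d in zip(weights, mean_diffs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return wmd, (wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled)

# Three hypothetical trials reporting fasting-glucose change (mmol/L)
wmd, ci = pool_fixed_effect([-1.2, -0.8, -0.5], [0.4, 0.3, 0.5])
```

More precise studies (smaller standard errors) dominate the pooled estimate, which is why a few large trials can outweigh many small ones in the reviews cited above; random-effects models additionally widen the interval when heterogeneity is present.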

#### 4.5.3. Magnesium

Insulin modulates the shift of magnesium from the extracellular to the intracellular space; in turn, intracellular Mg2+ concentration modulates insulin action, as well as blood pressure [160]; thus, low magnesium induces insulin resistance, and insulin resistance further decreases magnesium levels [161]. In the past 20 years, several epidemiological and clinical studies have demonstrated the protective role of magnesium against the risk of diabetes. In U.S. women aged ≥45 years (Women's Health Study) with no previous history of T2D, an inverse association was found between dietary magnesium and incident T2D, which was significant among women with increasing grades of overweight/obesity (P for trend, 0.02) and was associated with a progressive decline of insulin levels (P for trend, 0.03) [162]. These data were confirmed in 1122 individuals (20–65 years of age) enrolled between 1996 and 1997 and re-examined about 10 years later: the relative risks of new-onset prediabetes and T2D were increased in the presence of low magnesium levels at baseline [163].

Oral magnesium supplementation in subjects with T2D and low magnesium levels has been reported to improve insulin sensitivity and metabolic control [164–166]. In a meta-analysis of 40 prospective cohort studies enrolling more than 1 million participants, with follow-up periods ranging from 4 to 30 years, dietary magnesium intake was associated with a 19% reduction in the relative risk of T2D (RR 0.81; 95% CI, 0.77–0.86 per 100 mg/day increment) [167]. In a different analysis of 28 studies involving 1694 subjects (834 in the treatment arm and 860 in the placebo arm), magnesium supplementation produced favourable effects on blood glucose (WMD, −4.64 mg/dL; 95% CI −7.60 to −1.68), as well as on HDL- and LDL-cholesterol, triglycerides and systolic blood pressure, also reducing cardiovascular risk [168].

No safety concerns have been raised for magnesium supplements either; Verma and colleagues argue that large trials should be performed to validate the use of magnesium supplements to prevent and treat T2D [168], but no consensus exists in the community [169].

#### **5. Prevention and Treatment of Diabetes-Related Sarcopenia**

Optimal energy intake, healthy food choices and sufficient protein intake, coupled with habitual physical activity, especially resistance training, are the cornerstones for metabolic control and the prevention of frailty in T2D. Despite the mounting evidence of the negative impact of sarcopenia on the natural history [170] and quality of life of T2D patients [171], there is a surprising dearth of intervention studies addressing T2D-related sarcopenia. Therefore, we must rely on findings from general intervention studies on sarcopenia and/or sarcopenic obesity.

Resistance training represents the most effective intervention for prevention and treatment and can be safely carried out even in fragile patients [172]. High-protein (1.2–1.4 g/kg) hypocaloric diets—either exclusively food-based or including protein supplements, both as an adjunct to resistance training—have proven effective for preventing muscle mass loss during weight-reduction diets in women with obesity [173]. In the elderly, to reach the anabolic threshold, the protein supplement should be provided at meals rather than between meals, and the optimal protein dose (including food protein and proteins from supplements) should be 30–45 g per serving [174]. However, a high protein load cannot be recommended to T2D patients with chronic kidney disease (CKD) [175].
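
As a worked example of the dosing ranges above (illustrative numbers, assuming normal renal function):

```python
def daily_protein_g(body_weight_kg: float, g_per_kg: float = 1.2) -> float:
    """Daily protein target for a high-protein diet (1.2-1.4 g/kg range)."""
    return body_weight_kg * g_per_kg

def per_meal_g(daily_total_g: float, meals: int = 3) -> float:
    """Split the daily target across main meals, aiming at the
    30-45 g per-serving anabolic threshold for the elderly."""
    return daily_total_g / meals

# A 75 kg patient at 1.3 g/kg -> 97.5 g/day, i.e. 32.5 g per meal
# (within the 30-45 g anabolic window)
total = daily_protein_g(75, 1.3)
```

The same arithmetic shows why protein must be concentrated at meals rather than spread across snacks: six small servings of the same daily total would fall below the per-serving anabolic threshold.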

Whey proteins, rich in the anabolic amino acid leucine, are the most frequently used protein supplements. Additionally, BCAA supplements or the leucine metabolite β-hydroxy-β-methylbutyrate have been proposed. These supplements are generally ineffective as sole treatment in patients without diabetes [173,176,177] and must be combined with resistance training to improve already-established sarcopenia (associated or not with obesity). Leucine has strong insulinotropic properties, and leucine-rich supplements may increase the availability of amino acids for protein synthesis and reduce protein breakdown in the muscle, at the same time enhancing glucose disposal and glycaemic control, but solid data are lacking [178]. Notably, BCAA treatment has proven effective both in preventing and in improving sarcopenia in patients with liver cirrhosis, also independently of physical exercise/resistance training [179,180].

Finally, vitamin D has also been proposed as a nutritional supplement to control sarcopenia. Activation of the vitamin D receptor present in muscle cells promotes their differentiation, proliferation and hypertrophy. Vitamin D deficiency is associated with reduced muscle mass and strength in the elderly [181], and vitamin D supplementation has increased muscle strength, particularly in vitamin D-deficient cases and in the elderly [181]. These data were not confirmed by a Cochrane review in patients with liver disease [182]; no data are available in T2D, and trials are eagerly awaited.

#### **6. Management of Other Comorbidity in Patients with T2D**

#### *6.1. Cirrhosis*

Nutrition therapy in cirrhosis has already been discussed in this Special Issue of Nutrients. Nonetheless, its association with T2D deserves a special focus considering the high prevalence—up to two-thirds of patients with cirrhosis listed for liver transplantation have T2D [183]—and its importance as a risk factor for the development of complications (ascites, hepatic encephalopathy, bacterial infections, renal insufficiency, hepatocellular carcinoma) [184]. Nutrition treatment becomes extremely challenging since additional determinants of malnutrition may be present, including reduced food intake and/or defective absorption of nutrients and impaired albumin synthesis. Sarcopenia—accelerated by upregulation of myostatin due to hyperammonaemia—becomes a predictor of morbidity and mortality, is aggravated by obesity (sarcopenic obesity) [185,186], and is difficult to treat. Bariatric surgery is frequently contraindicated [187]; pharmacologic treatment with GLP-1 agonists favouring weight loss [188], such as liraglutide, may also be contraindicated by the presence of varices at risk of bleeding [189], leaving dietary treatment as the sole possibility.

Unfortunately, there are no specific guidelines for the nutritional treatment of T2D associated with cirrhosis, and individualized, structured nutritional programs are suggested to meet the need for sodium and fluid restriction [190]. Due to the accelerated depletion of glycogen stores, it is important to provide frequent (3 to 5) meals containing carbohydrates, plus a late-evening carbohydrate snack to prevent muscle protein catabolism [191,192].

Protein restriction is not systematically advocated, as these patients usually tolerate a normal protein intake. Besides hypoalbuminemia, which potentially requires a higher protein intake, albumin glycation is present in T2D [193]. The structurally damaged albumin molecule is also dysfunctional, and albumin administration may be required to reduce ascites. Although the specific indications for use are clearly defined by international guidelines [194], albumin is frequently administered outside evidence-based indications, including for nutritional support [195]. At present, no studies have shown a direct link between albumin administration and nutritional correction in decompensated cirrhosis; it can only be hypothesized that the clinical improvement seen with long-term albumin treatment could indirectly improve nutritional status through different mechanisms, including the control/resolution of ascites and whole-body edema, or the reduction of systemic inflammation [196].

#### *6.2. Renal Failure*

In T2D patients with CKD, protein restriction may be advised; low-protein diets (daily protein intake reduced to 0.8 g/kg b.w.) have shown a beneficial impact on the trajectory of renal function, attenuating the progression of CKD and delaying the initiation of dialysis, an important goal for patients [197–199]. However, protein restriction may worsen sarcopenia, and its use should be limited as far as possible. According to the National Kidney Foundation/Kidney Disease Outcomes Quality Initiative (NKF-KDOQI) guidelines, protein intake must actually be increased up to 1.2 g/kg body weight in patients undergoing maintenance dialysis because of the substantial additional amino acid losses occurring in dialysate [200,201].

Different sources of dietary protein may have a different impact on CKD-related complications; meat intake increases the production of nitrogenous end products, worsens uraemia and may increase the risk of constipation, with consequent hyperkalaemia associated with the low fibre intake [199]. A predominantly plant-based diet, fibre-rich and low in protein content (0.6–0.8 g/kg/day), can produce favourable changes in the intestinal microbiome, thus modulating the generation of uremic toxins and slowing the progression of CKD, finally reducing cardiovascular risk [202]. Carbohydrates from sugars should be limited to less than 10% of energy intake [203], and saturated fatty acids, trans fats, and cholesterol should be replaced by polyunsaturated and monounsaturated fats, which are associated with more favourable outcomes [204]. Dietary sodium restriction should be considered, but an excessively low sodium intake (less than 1.5–2.0 g/day) carries the risk of hyponatremia, leading to reduced insulin sensitivity and prediabetes [205]. T2D patients with advanced CKD progressing to end-stage renal disease may be prone to the "burnt-out diabetes" phenomenon (i.e., spontaneous resolution of hyperglycaemia and frequent hypoglycaemic episodes); further studies in this frail population on chronic hemodialysis are particularly needed to determine the safety and effectiveness of dietary manipulations [206].

#### **7. Conclusions**

T2D is the paradigm of conditions where genetic, behavioural and individual factors drive disease occurrence and severity. Despite decades of epidemiological studies and randomized trials, several unmet needs remain (Table 4). The goal of an optimal nutritional approach is to maintain or regain a body weight within the normal range, while providing an adequate intake of macronutrients and micronutrients to reduce the risk of sarcopenia. Various dietary approaches have been proposed to improve outcomes, with the Mediterranean diet supported by solid evidence. However, as long-term adherence is the main goal to be achieved, the dietary plan and calorie restriction that patients feel confident they can maintain life-long should always be preferred. At present, supplementation with inositols, vitamin D and micronutrients (zinc, chromium, magnesium) is not systematically suggested, but might be considered in individual patients.


**Table 4.** Principal unmet needs for optimal nutritional treatment of patients with type 2 diabetes.

Although advances in nutrigenomics and metabolomics offer the rationale for tailored precision medicine, a personalized meal plan, supported by continuous dietary counselling by registered dietitians remains at present the key strategy for long-term success in weight and glycaemic control [37], particularly in individual high-risk cases [38].

**Author Contributions:** Conceptualization, M.L.P., G.M. and F.R.; literature search, M.L.P., L.B., F.M., A.S.S., P.C., G.M. and F.R.; writing—original draft preparation, M.L.P., G.M. and F.R.; writing—review and editing, M.L.P., L.B., F.M., A.S.S., P.C., G.M. and F.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** F.M. is supported by a contract financed by the Italian Ministry of Health and Italian Regions (NET-2016-02364191).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Beyond the Paradigm of Weight Loss in Non-Alcoholic Fatty Liver Disease: From Pathophysiology to Novel Dietary Approaches**

**Angelo Armandi 1,2 and Jörn M. Schattenberg 2,3,\***


**Abstract:** Current treatment recommendations for non-alcoholic fatty liver disease (NAFLD) rely heavily on lifestyle interventions. The Mediterranean diet and physical activity, aiming at weight loss, have shown good results in achieving an improvement of this liver disease. However, concerns related to compliance and food accessibility limit the feasibility of this approach, and data on the long-term effects on liver-related outcomes are lacking. Insulin resistance is a central aspect in the pathophysiology of NAFLD; therefore, interventions aiming at the improvement of insulin sensitivity may be preferable. In this literature review, we provide a comprehensive summary of the available evidence on nutritional approaches in the management of NAFLD, involving low-calorie diets, isocaloric diets, and the novel schemes of intermittent fasting. In addition, we explore the harmful role of single nutrients on liver-specific key metabolic pathways, the role of gene susceptibility and microbiota, and behavioral aspects that may impact liver disease and are often underreported in the clinical setting. At present, the high variability in terms of study populations and liver-specific outcomes within nutritional studies limits the generalizability of the results and highlights the urgent need for a tailored and standardized approach, as seen in regulatory trials in Non-Alcoholic Steatohepatitis (NASH).

**Keywords:** insulin; lifestyle; non-alcoholic; steatohepatitis; fibrosis; metabolic syndrome; weight loss; time-restricted feeding; intermittent fasting; low-carb diet; liver disease

#### **1. Introduction: Rationale for Lifestyle Interventions in NAFLD**

Non-alcoholic fatty liver disease (NAFLD) represents the leading form of chronic liver disease, with a worldwide prevalence of 25% [1] and a disease burden projected to increase dramatically by 2030 [2], in parallel with the pandemic of metabolic diseases, mainly obesity and type 2 diabetes (T2DM). Overall, the economic and social burden of NAFLD in Europe is high [3].

NAFLD is a multi-systemic disease which involves multi-directional metabolic derangements [4]. The term NAFLD covers different disease activities and stages, ranging from simple steatosis (non-alcoholic fatty liver, NAFL) to the histological evidence of lobular inflammation and ballooning (non-alcoholic steatohepatitis, NASH), the progressive form of metabolic-related liver damage that can potentially lead to cirrhosis and further complications, including hepatocellular carcinoma (HCC) [5]. Progression along this disease continuum is driven by alternating inflammatory bouts that foster excessive fibrogenesis and promote scarring of the liver parenchyma. Progression through the histological fibrosis stages to cirrhosis has been recognized as the most relevant prognostic factor in NAFLD [6]. In addition, resolution of NASH is more difficult to achieve when compared to NAFL, where no inflammatory activity is detected upon histology. Presently, liver biopsy is the reference standard to reliably grade NASH and stage fibrosis. Hence, in the absence of accepted non-invasive markers, an accurate assessment of treatment efficacy requires liver biopsy. This is frequently not performed in lifestyle intervention trials, and thus results are not directly comparable with current registration studies exploring pharmacotherapy options.

**Citation:** Armandi, A.; Schattenberg, J.M. Beyond the Paradigm of Weight Loss in Non-Alcoholic Fatty Liver Disease: From Pathophysiology to Novel Dietary Approaches. *Nutrients* **2021**, *13*, 1977. https://doi.org/10.3390/nu13061977

Academic Editor: James King. Received: 16 May 2021; Accepted: 7 June 2021; Published: 8 June 2021.

Currently, no liver-directed pharmacological therapy is approved for the treatment of NAFLD, and management relies on lifestyle interventions. The majority of individuals with NAFLD are overweight or obese and suffer from T2DM [7]. Therefore, weight loss, aiming at ameliorating the liver injury and decreasing fibrosis, is suggested by all guidelines. However, NAFLD can also occur in lean individuals, where other factors, such as an unfavorable genetic background, may lead to a comparable liver phenotype. Patients with so-called lean NAFLD can develop all disease stages of NASH, supporting the concept that steatohepatitis and fibrosis can occur even in the absence of overt obesity [8]. In all NASH patients, a balance between harmful (metabolic) co-factors, intrinsic genetic factors, and extrinsic lifestyle choices impacts the clinical phenotype. This variability also adds to the inhomogeneity of responses observed in lifestyle interventions.

The seminal studies conducted by Vilar-Gomez et al. on 293 NAFLD individuals undergoing intensified lifestyle changes (a low-fat hypocaloric diet) with psychological coaching for 52 weeks highlighted the effects that nutrition and physical activity can have on liver histology: weight loss of more than 10% of the initial body weight led to a resolution of NASH in 90% of cases and a regression of fibrosis in 45% of cases [9]. Unfortunately, only 10% of all intensively guided participants reached this endpoint. Similar data resulted from another randomized, controlled trial conducted on 31 overweight or obese patients with biopsy-proven NASH undergoing lifestyle intervention for 48 weeks, with significant improvements in histological inflammatory activity in those who achieved at least a 7% weight loss [10]. In a routine clinical setting offering less intensive counseling, the effects of lifestyle modifications are clearly lower, which is linked to the lower rate of adherence.

Evidence from morbidly obese patients undergoing bariatric surgery has provided insightful results on the impact of rapid weight loss on liver damage. Although NAFLD in bariatric cohorts typically shows lower disease activity and stages, the pronounced and sustained weight loss that occurs highlights the degree of effect that can be achieved in a short time frame. Studies conducted by Lassailly et al. showed that NASH resolves in 85% of cases at 1 year after bariatric surgery [11], and an additional beneficial effect on fibrosis can be observed across 5 years following surgery [12].

Histological improvements resulting from weight loss are also observed across clinical trials with investigational drugs. In a phase II trial conducted on patients with NASH, liraglutide, a synthetic agonist of glucagon-like peptide-1 (GLP-1) approved for the treatment of T2DM, achieved a resolution of inflammatory activity in 39% of cases along with a weight reduction of 15% from the initial value [13]. Empagliflozin, a sodium–glucose co-transporter-2 (SGLT2) inhibitor used for the treatment of T2DM, produced a 20% reduction in liver fat content as measured by magnetic resonance spectroscopy, accompanied by a placebo-corrected weight reduction of 2.5 kg [14]. On the contrary, pioglitazone, a peroxisome proliferator-activated receptor (PPAR)-γ agonist, has been shown to improve NASH in up to 34% of cases despite a mean weight gain of 4.7 kg [15].

In addition, data extrapolated from clinical trials have demonstrated the significant influence of lifestyle interventions in modulating the course of NAFLD. Dietary recommendations and physical activity are suggested for all participants in experimental studies, in which proper, systematic designs may help to identify strong endpoints, limit heterogeneity, and predict the size of placebo responses [16]. Therefore, results emerging from placebo arms mirror the effective action of lifestyle modifications, which are strengthened by higher compliance in the setting of close clinical visits. One recent meta-analysis of the placebo groups from 39 randomized controlled trials of adults with NASH showed a significant response: up to 25% of patients given a placebo had improvements in all the histology scores, including fibrosis, with moderate heterogeneity among the different studies. Moreover, patients given the placebo had a significant reduction in steatosis, as measured by proton magnetic resonance spectroscopy, and a significant decrease in serum transaminases [17].

#### **2. Response to Lifestyle Intervention between Genes and Environment**

Diverse genetic susceptibility, as expressed by different single nucleotide polymorphisms (SNPs) in targeted genes, as well as the additional impact of metabolic co-morbidities and lifestyle habits, are together responsible for the high variability in the efficacy of lifestyle interventions.

Genome-wide association studies (GWAS) have identified vulnerable sites in many gene loci, such as Patatin-like phospholipase domain-containing protein 3 (PNPLA3) [18], Membrane-bound O-acyltransferase domain-containing protein 7 (MBOAT7) [19] and Transmembrane 6 Superfamily Member 2 (TM6SF2) [20], which are related to a more aggressive disease phenotype and a higher risk of developing advanced liver disease. However, post hoc analyses conducted in a randomized controlled trial of lifestyle modifications in NAFLD populations have shown that variants in PNPLA3 were associated with better improvements in weight loss, better impacts on dyslipidemia, and greater reductions in intrahepatic fat, as evaluated by proton magnetic resonance spectroscopy [21]. These data suggest that the presence of the PNPLA3 variant may induce a better response to lifestyle intervention; favorable results also come from a study by Sevastianova et al., where the same variant did not prevent the decrease in liver fat upon undertaking a hypocaloric low-carbohydrate diet for 6 days [22].

This evidence may translate into a reduced need for drug therapy in patients with PNPLA3 variants, and further studies investigating the role of other variants could potentially bring stronger results. Currently, polygenic risk scores combining different gene variants are being evaluated to assess their potential role in predicting hard endpoints [23], and the response to lifestyle interventions might be included in this novel approach.

In addition, gene expression is continuously shaped by environmental agents, which act as epigenetic modulators of specific protein translation and synthesis. In one study conducted in mice, a high-cholesterol diet was associated with down-regulation of key genes involved in cholesterol metabolism, such as the farnesoid X receptor (FXR), which exerts anti-inflammatory and anti-fibrotic effects on the liver [24]. Hence, an unhealthy dietary environment may predispose the liver to unfavorable gene expression, with relevant implications in clinical settings.

The close connection between the liver and the gut supports the concept that the gut microbiota (GM) correlates with liver disease. However, these interactions are multidimensional and complex. Alterations of the gut vascular barrier and intestinal epithelial barrier are in reciprocal interaction with the GM composition and contribute to liver injury in murine models [25]. Gut integrity—a function of the intestinal epithelium and dietary composition—also affects the balance between bacterial species and microbial gene richness, which may give rise to hepatic inflammation [26]. Obese people present with low microbial gene richness, and dietary intervention has been shown to improve this pattern [27]. Additionally, reductions in GM heterogeneity lead to a decrease in short-chain fatty acids and increased lipopolysaccharide (LPS)—both factors associated with insulin resistance [28]. Adding to this evidence, one longitudinal study conducted in 307 males observed a reduced risk of cardio-metabolic disease in those individuals who adhered to a Mediterranean-style dietary pattern, and this was associated with a specific GM taxonomy [29].

When considering the diverse responses to lifestyle treatment, ethnicity plays a relevant role. A recent meta-analysis of over two million Chinese individuals highlighted a dramatic increase in the prevalence of NAFLD, reaching 29% in the 2010s. A greater predominance of PNPLA3 variants was observed, partly explaining the 10% of NAFLD cases diagnosed in non-obese individuals. This might also affect the response to lifestyle treatment. On the other hand, a higher prevalence of NAFLD was observed in Westernized areas of Asia, underlining the combined impact of environmental factors [30]. The balance between genes and environment with regard to the response to lifestyle treatment remains underreported in NAFLD. Randomized controlled trials of lifestyle interventions in Asian NAFLD populations have shown that both aerobic and resistance training, alone or in combination with dietary changes, have a positive impact on weight loss and reductions in liver fat and inflammation, however with substantial heterogeneity among studies involving different ethnicities [31]. Therefore, ethnic differences may be responsible for different outcomes of lifestyle interventions and may affect the interpretation of data.

#### **3. Improving Insulin Resistance as Metabolic Endpoint for Lifestyle Intervention**

Insulin resistance has been widely assessed as the major trigger of liver damage in NAFLD [32,33]. Defective insulin action in peripheral tissues reduces insulin-mediated glucose uptake in skeletal muscle, resulting in persistent hyperglycemia, and enhances lipolysis in adipose tissue, increasing levels of free fatty acids. The liver captures this overflow of free fatty acids, accumulating esterified fats in lipid droplets within the cytoplasm of hepatocytes. Insulin resistance within the liver, linked to alterations in post-receptor insulin signaling, leads to increased gluconeogenesis, with further harmful impacts on glycemic homeostasis.

A mechanistic link between insulin resistance and chronic liver inflammation has been explored in studies conducted on animal models. In mice, overexpression of cytochrome P450 2E1 (CYP2E1), as a result of oxidative stress-derived liver inflammation, impairs intrahepatic insulin signaling by decreasing the tyrosine phosphorylation of insulin receptor substrates (IRS), a key step in the insulin metabolic pathway [34]. Hence, this interference prevents the liver from mounting a proper response to insulin. Interestingly, CYP2E1-derived inhibition of insulin signaling was partially mediated by downstream c-Jun N-terminal kinase (JNK), tightly connected to the activation of apoptosis signal-regulating kinase 1 (ASK1). These are two major components of the hepatic inflammasome that promote apoptosis, inflammation and fibrosis, and their inhibition has been shown to improve liver damage [35], potentially acting as a target to treat insulin resistance [36].

Moreover, it has been shown that the crosstalk between adipose tissue and the liver, mediated by free fatty acids, results in the activation of Kupffer cells, promoting inflammation and fibrogenesis. This pathophysiological milieu develops in the absence of obesity and T2DM, as an independent mechanism of disease [37]. One study conducted on non-obese, non-diabetic NASH individuals showed a direct association between saturated fat intake, derived indices of insulin resistance, and the postprandial rise in triglycerides, suggesting a shared pathological ground between nutrition, fats, and insulin activity, in the absence of overt metabolic comorbidities [38].

Moreover, high heterogeneity has been observed in the obese population with NAFLD. Obesity is not a single phenotype, and the different compartments of adipose tissue, e.g., subcutaneous versus abdominal/visceral adipose tissue, contribute differently to the disease. Visceral adipose tissue is characterized by proinflammatory activity and contributes to insulin resistance in peripheral tissues, particularly in skeletal muscle. Muscle insulin resistance is linked to a worse metabolic phenotype. Overall, these individuals have a different clinical phenotype compared to obese patients who do not exhibit visceral adiposity and the accompanying insulin resistance; the latter are currently considered metabolically healthy obese patients. This difference may also have important implications in terms of prognosis, as well as in the response to lifestyle treatment.

High insulin levels are therefore the result of multiple drivers, involving both environmental and genetic factors, whose balance determines the phenotype and natural history of liver disease. A diet rich in saturated fats, sucrose-sweetened beverages, refined carbohydrates, high glycemic index foods, high fructose and high-calorie foods, typical of the Western diet, together with harmful eating habits and sedentary lifestyles, promotes hyperinsulinemia and NAFLD [39]. In particular, the detrimental effect of fructose in the context of a hypercaloric diet leads to increased de novo lipogenesis and lipotoxicity.

These two major factors are involved in the progression of NAFLD to fibrosing NASH [40]. Hence, improving insulin resistance would be the preferable endpoint for lifestyle interventions, with consequent beneficial effects on lipid profiles, cardio-metabolic parameters, and anthropometric measures. This complex crosstalk may be influenced by genes, as well as by co-existing metabolic conditions, which likely confer variable responses to dietary approaches.

The importance of insulin resistance in the clinical setting is supported by lifestyle interventions examining the improvement of NAFLD. Secondary or co-primary endpoints of most interventional studies include changes in insulin resistance indices. In one randomized crossover trial conducted on obese, non-diabetic, biopsy-proven NAFLD individuals, a lifestyle intervention using a Mediterranean diet led to a significant improvement in insulin sensitivity, as determined by a euglycemic clamp, in parallel with a reduced liver fat content, without relevant weight loss [41]. In another study, adolescents with NAFLD undergoing a low-fructose diet showed a reduction in liver transaminases, accompanied by a significant improvement in systolic blood pressure and the Homeostatic Model Assessment for Insulin Resistance (HOMA-IR) [42]. The tight connection between liver outcomes and improvements in insulin resistance emerging across clinical studies suggests that insulin sensitivity should be adopted as a solid endpoint of lifestyle interventions.
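Since HOMA-IR recurs as an endpoint throughout the studies cited here, a minimal sketch of the calculation may be helpful. The function name and example values below are illustrative, not from the source; the standard formula divides the product of fasting glucose (mmol/L) and fasting insulin (µU/mL) by 22.5 (with glucose in mg/dL, the divisor is 405):

```python
def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Homeostatic Model Assessment for Insulin Resistance.

    HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5
    Cut-offs for insulin resistance vary by population; ~2.5 is a
    commonly quoted threshold (illustrative, not from the source).
    """
    return glucose_mmol_l * insulin_uu_ml / 22.5

# Example: fasting glucose 5.5 mmol/L, fasting insulin 12 uU/mL
print(round(homa_ir(5.5, 12.0), 2))  # -> 2.93
```

Note that the trials cited above report *changes* in HOMA-IR under intervention, so the same formula is simply applied at baseline and follow-up.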

#### **4. Quantitative and Qualitative Aspects of Nutrition**

Although association studies have highlighted a link between nutritional components, in particular hypercaloric diets, and liver injury in NAFLD, the specific dietary patterns, or more generally the type of lifestyle intervention, that will reverse the disease phenotype beyond adherence to a hypocaloric diet remain less clear. Resolution of steatohepatitis and regression of fibrosis are viewed as the most relevant endpoints in clinical trials. However, data on long-term outcomes, including the incidence of concomitant metabolic conditions and overall mortality, are lacking. Hence, most data are extrapolated from the efficacy of certain dietary patterns in clinical trials, as well as from the diverse harmful impact of different micro- or macronutrients in animal models or on surrogate endpoints in humans.

Recently, the evaluation of these interconnected pathways has been explored through the Geometric Framework for Nutrition (GFN), a dimensional model that graphically integrates key aspects of nutritional systems and maps the relationship between nutrient intake and health outcomes [43]. One experimental study conducted in mice showed, using the GFN model, that a carbohydrate intake of less than 25 kJ/day and a protein intake of more than 10 kJ/day were associated with a lower probability of developing fatty liver disease, while increased fat content was positively associated with fatty liver [44]. This evidence suggests that not only the reduction in calories, but also the quality and energy content of nutritional components, contribute to a successful dietary intervention.

#### *4.1. Mediterranean Diet versus Western Diet*

Dietary patterns that approximate the Mediterranean diet have been repeatedly assessed in patients with metabolic diseases. In a large meta-analysis of 50 studies comprising 534,906 individuals, adherence to the Mediterranean diet was associated with a reduced risk of metabolic syndrome and, importantly, reduced overall mortality. This was accompanied by a lower waist circumference, higher glucose tolerance, higher levels of high-density lipoprotein (HDL) cholesterol, and lower systolic and diastolic blood pressure [45].

In treating NAFLD, it is likely that simple, easy-to-follow dietary patterns will be most beneficial for patients. One recent Italian randomized controlled trial, conducted by Franco et al. in 144 non-diabetic NAFLD patients, showed that a low glycemic index Mediterranean diet significantly improved hepatic steatosis, assessed by the controlled attenuation parameter (CAP), as well as markers of insulin resistance, as determined by HOMA-IR [46]. Interestingly, this dietary pattern ameliorates insulin resistance even in the absence of weight loss, acting on a pathophysiological level [41], and improves blood levels of transaminases (alanine aminotransferase, ALT) as surrogate markers of hepatic necroinflammatory activity, as well as liver stiffness at elastography after 6 months of treatment [47].

One European prospective population study, conducted on 2288 Swiss individuals without baseline hepatic steatosis followed for a mean of 5.3 years, confirmed the previous evidence on a larger scale: higher adherence to the Mediterranean diet was associated with a reduced risk of developing NAFLD [48].

The Mediterranean food pattern comprises a high consumption of vegetables, fruits, mainly unrefined grains, low-fat milk, and nuts; low glycemic index carbohydrates; higher proportions of monounsaturated fatty acids (MUFA) and polyunsaturated fatty acids (PUFA) with minimal saturated fats; weekly consumption of fish, legumes, poultry and eggs; daily consumption of olive oil and moderate consumption of red wine with meals as sources of polyphenols; and a more sporadic consumption of potatoes, red meat, and sweets [49]. The model of the Mediterranean diet is based on low glycemic index foods, which favor low insulin levels and, more generally, a lower risk of developing insulin resistance. Therefore, the beneficial effect of the Mediterranean diet on NAFLD is likely to depend on the improvement of insulin sensitivity. However, an excessive intake of any nutrient should be avoided, regardless of its beneficial impact on health. For instance, the anti-inflammatory benefits attributed to unsaturated fatty acids are linked to extra-virgin olive oil, which is widely used in this dietary pattern. At the same time, an excessive intake of fatty acids (including PUFA) is associated with increased fat deposition, with harmful effects on health, highlighting the importance of balance in a dietary pattern.

Notably, the spread of the Western diet has eroded the original paradigms of the Mediterranean diet, leading to less strict adherence in recent decades. In one study conducted on an Italian population, a greater use of animal proteins, processed and sugary foods, and a higher intake of simple sugars and saturated fats were observed among young individuals, compared with older subjects who continued with the original pattern of the Mediterranean diet [50].

In general, a lower intake of fiber and a higher intake of carbohydrates, saturated fats, fructose, and animal proteins favor the onset of NAFLD, particularly within the Western diet. One prospective study of 14-year-old adolescents reported a higher incidence of fatty liver at the age of 17 in those who followed a Western dietary pattern, rich in take-away foods, refined cereals, and processed meats [51].

#### *4.2. Behavioral Aspects That Contribute to Liver Damage*

Regardless of the type of diet, behavioral aspects need to be considered. Several mechanisms involve the frequency and number of meals, and in many cases the underlying psychological background is not fully investigated. Conceiving of food either as a reward or as a way to cope with recurrent frustrations can result in altered nutritional behavior: carbohydrate craving, sweet-eating, night-eating, and emotional eating are some of the labels used to identify these patterns. In particular, a propensity for snacking increases intrahepatic and abdominal fat, independently of the caloric content of meals [52]. Eating before bedtime seems to be associated with a higher risk of developing NAFLD in otherwise healthy individuals [53], as does high perceived stress [54]. Fast eating leads to an increase in total calorie intake, in particular of carbohydrates, impacting the incidence of NAFLD [52,55], and should be investigated as a potential driver of liver disease in lean individuals as well [56]. Moreover, sleep disorders are common in the NAFLD population, with delayed sleep onset, poor sleep quality, and shortened sleep duration, resulting in daytime sleepiness and diminished quality of life. In one study, food intake times shifted towards the night were partially responsible for sleep disturbances [57]. On the contrary, binge eating disorder was not associated with the severity of liver disease, with respect to NASH and fibrosis [58], although a higher prevalence of binge eating has been observed in NAFLD populations [59].

As with alcohol consumption, which is often difficult to estimate correctly in the clinical setting, dietary misbehavior is frequently underreported among obese individuals. This may be a cause for concern when investigating the failure of a lifestyle intervention, possibly caused by the discrepancy between actual and reported calorie intake [60]. In recent years, mobile technology has offered novel approaches to reduce underreporting, in particular image-assisted methods that can improve the accuracy of dietary assessment [61]. With this system, food information is extracted directly from images and the calorie content is calculated automatically through portion size estimation [62]. In one study, the eating patterns of healthy individuals were monitored with a mobile app. Most subjects ate frequently and irregularly over more than 14 h of the day. Regulation of the eating pattern and restriction of eating duration, assisted by a novel system of data visualization (the "feedogram"), resulted in reduced body weight and increased wellness [63]. These approaches might help clinicians gather comprehensive reports of dietary patterns and habits, with the possibility of defining more precise lifestyle interventions.

#### **5. Diverse Impact of Nutrients on NAFLD**

Dietary patterns are highly conditioned by geographical area, with subsequent differences in nutrient availability, but also by cultural heritage, socio-economic status, and food accessibility as determined by local institutions. Moreover, the variability of outcomes across clinical studies, as well as the unknown effect of single nutrients on liver histology, make it hard to quantify the harm attributable to one specific reported dietary pattern. This aspect is of crucial relevance in clinical practice, because it would allow a tailored dietary pattern after addressing potentially harmful nutrients (Figure 1).

**Figure 1.** Impact of gut microbiota, behavioral aspects and specific nutrients on features of liver damage leading to Non Alcoholic Fatty Liver Disease/Non Alcoholic Steatohepatitis (NAFLD/NASH). Abbreviations: AGEs, advanced glycation end-products; LPS, lipopolysaccharide.

#### *5.1. The Harmful Effect of Fructose Intake*

For instance, in one cross-sectional study conducted on 789 individuals undergoing screening colonoscopy, the consumption of red and processed meat was associated with both NAFLD and insulin resistance [64]. Similarly, a high intake of fructose or high glycemic index foods leads to enhanced hepatic fat synthesis, while a high intake of saturated fats predisposes the liver to fat accumulation. The subsequent lipid-driven toxic damage represents the main driver of the hepatic insulin resistance that arises along with NAFLD. Exogenous and endogenous advanced glycation end-products (AGEs) represent a further source of oxidative stress that might interfere with hepatic glucose metabolism in the onset of liver inflammation. Likewise, low fiber intake shifts intestinal bacterial species towards dysbiosis, increasing proinflammatory bacterial products that directly impact liver metabolism.

Fructose is among the main drivers of liver disease, because it feeds into hepatic de novo lipogenesis, thus increasing the amount of steatosis and consequent lipotoxicity, and moreover promoting insulin resistance in the context of a hypercaloric diet. One study conducted on NAFLD adolescents showed how a 6-month reduction in overall fructose intake, together with a low glycemic index diet, improved both metabolic parameters (systolic blood pressure and HOMA-IR) and liver biochemistry (ALT) [42]. However, specific patterns of fructose intake have been investigated: it is not fructose ingestion per se, but rather its addition to sweet beverages, that has been linked to metabolic derangements. Indeed, one recent randomized, double-blind, placebo-controlled trial evaluated the impact of consuming beverages containing fructose, sucrose (a glucose-fructose disaccharide) or glucose. Interestingly, fructose- and sucrose-containing, but not glucose-containing, beverages increased the hepatic synthesis of fatty acids, even in the basal state and at a stable average total energy intake. These findings support the hypothesis of long-term fructose-induced intrahepatic metabolic changes, leading to adaptive pathways and increased basal lipogenic activity. During the 7 weeks of intervention, the authors did not detect any differences in metabolic outcomes (fasting plasma triglycerides, glucose and insulin concentrations, HOMA-IR modifications, arterial hypertension rates), concluding that chronic exposure to fructose-containing beverages impacts only liver lipogenesis, the first hallmark in the onset of metabolic liver disease [65].

#### *5.2. Exploration of High-Glycemic Index Nutrients*

The detrimental effect of high glycemic index nutrients on NAFLD has been shown in multiple studies. High glycemic index foods (potatoes, white rice, white bread, honey) rapidly increase blood insulin levels, with a subsequent sharp decrease in blood glucose levels. This metabolic pattern is linked to two main aspects: hyperinsulinemia, involved in the long-term deleterious adaptation of insulin-sensitive tissues, and the rapid onset of hunger caused by hypoglycemia, which increases caloric intake and can potentially lead to eating disorders. This eating pattern leads to hepatic fat accumulation and higher glycogen stores [66]. As with fructose consumption, a high glycemic index food pattern acts on an early step of liver disease, without impacting other metabolic parameters (namely, blood glucose levels, triglycerides, and high-density lipoprotein cholesterol) [67]. However, longitudinal studies are needed to assess the long-term impact of this eating pattern on the multiple components of the metabolic syndrome, as well as on the evolution of NAFLD.

In diabetic populations, high glycemic index food patterns are particularly unfavorable, and a long-term low glycemic index diet has been shown to improve glycated hemoglobin and fasting blood glucose compared with controls [68]. This nutritional strategy may therefore be tailored to diabetic NAFLD patients, in order to improve overall metabolic disruption.

#### *5.3. AGEs and Oxidative Stress*

Oxidative stress, induced by reactive oxygen species (ROS), is part of the pathophysiology of NAFLD [69]. Overproduction of ROS occurs when the mitochondrial oxidative capacity is exhausted by an excess of substrates, including saturated fats undergoing β-oxidation. This leads to impaired mitochondrial oxidative phosphorylation, which is associated with reduced ATP synthesis and caspase-mediated apoptosis. Excessive ROS can also cause lipid peroxidation, which is a strong driver of lipid-derived liver inflammation.

The susceptibility to ROS-driven hepatic oxidative stress is also influenced by the reduced availability of antioxidant molecules, such as glutathione (GSH). GSH is an ROS scavenger that enables the activity of the key antioxidant enzyme GSH peroxidase. In mice fed a high-fat diet, a reduction in GSH and GSH peroxidase was detected, with a reduced ability to control inflammation and mitochondria-derived oxidative stress, perpetuating the damage. Diet-induced weight loss reduced hepatic fat content and restored the regular transcription and synthesis of antioxidant enzymes [70]. Similar results have been observed in NAFLD patients undergoing bariatric surgery. One year after the intervention, significant differences in plasma and liver markers of oxidative stress were detected in comparison with baseline evaluations [71].

In addition, AGEs are one source of oxidative stress. These are a biologically active group of molecules formed through non-enzymatic reactions between reducing sugars and proteins, lipids and nucleic acids [72]. Their synthesis greatly depends on the concentration of reactants and the half-life of the proteins involved; the longer the half-life, the more prone the protein is to forming AGEs. Low rates of AGE formation are normal in the metabolism of healthy individuals, but production increases in the presence of carbohydrate overload. Hence, in the context of T2DM, these products are increasingly synthesized. In addition, a Western dietary pattern, in particular one with a high consumption of red meat and refined grains, sustains the onset of pro-inflammatory activity [73] and hence the development of oxidative stress, which favors AGE formation. Furthermore, AGEs can be formed exogenously in processed foods (heat treatment, caramel production, bread baking), which are prevalent in Western diets [74].

Accumulation of AGEs results in the activation of pro-inflammatory and pro-fibrotic pathways. In fact, the liver is responsible for the clearance of AGEs, through the interaction with specific receptors (RAGEs) expressed by Kupffer cells and endothelial cells [75]. This link activates intracellular signaling involved in oxidative stress, thus perpetuating the damage.

Interestingly, AGEs have been shown to discriminate between healthy individuals and those with NAFLD [76], as well as between minimal and moderate steatosis; therefore, they may serve as non-invasive biomarkers in high-risk populations [77]. In one randomized, placebo-controlled trial, the simple restriction of oral AGE intake was shown to ameliorate insulin resistance in obese individuals, suggesting that this could be a valuable nutritional intervention acting on a pathophysiological level [78].

#### *5.4. Contribution of Lipids to Liver Damage*

The exploration of lipid profiles in NAFLD has provided notable results. As discussed above, carbohydrates act mainly by increasing hepatic de novo lipogenesis, whereas an excessive intake of saturated fats increases intrahepatic triglyceride content, compared to unsaturated fats, and increases the rate of lipolysis. Moreover, saturated fats seem to have the highest impact on insulin resistance and stimulate the synthesis of ceramides, which are centrally involved in the processes of lipotoxicity and oxidative stress [79]. Similarly, diacylglycerols (DAGs), intermediates of dietary fat oxidation, are directly implicated in disrupted hepatic glucose metabolism. By interfering with downstream regulatory factors of the insulin pathway, DAGs promote insulin resistance and lipid-mediated hepatocellular damage [80].

In patients with NASH, long-term supplementation of PUFA was shown to decrease blood triglyceride levels and to improve or stabilize intrahepatic inflammatory activity [81], as well as to increase circulating levels of adiponectin, an adipose tissue-derived anti-inflammatory hormone [82]. The effect of unsaturated fats is most evident in the setting of the Mediterranean diet; nonetheless, one study conducted on an Asian Indian population showed an improvement in fatty liver grading and HOMA-IR in NAFLD patients consuming MUFA from vegetable oil, regardless of the overall dietary pattern [83]. In addition, fish oil, which has been widely used as a source of unsaturated fats for the modulation of dyslipidemia and inflammation, has shown benefits in the setting of liver disease. In a randomized, double-blind, placebo-controlled trial of patients with NAFLD and dyslipidemia, fish oil intake was associated with a significant improvement in glucose, cholesterol and triglyceride levels, as well as the normalization of transaminases and increases in adiponectin levels. Moreover, a further beneficial effect on inflammation was reported, with reductions in prostaglandins and tumor necrosis factor-α (TNF-α) [84].

#### *5.5. Current Evidence on the Role of Fiber Supplementation*

One further aspect that needs to be highlighted is the contribution of fiber intake to metabolic equilibrium. Fibers modulate the gut microbiota, serving as substrates for the production of short-chain fatty acids (acetate, propionate and butyrate), and improve intestinal motility and enterocyte function [85]. Moreover, their positive effect on satiety leads to better control of body weight [86]. The Western diet is characterized by a lower intake of fiber, and the resulting reduced modulation of the gut microbiota may lead to an altered metabolic host status and impaired regulatory pathways of enteric hormones, favoring the onset of obesity and T2DM [87]. Although these two conditions are the key metabolic factors inducing NAFLD, evidence of a direct impact of fiber on NAFLD is lacking, due to the low level of scientific evidence [88]. Fiber supplementation seems to improve NAFLD-related surrogate outcomes, namely Body Mass Index (BMI), HOMA-IR and transaminases [89], although prospective studies are needed to evaluate the long-term impact of fiber on hard NAFLD outcomes.

#### *5.6. Alcohol and NAFLD: What Are the Proper Recommendations?*

Alcohol consumption is a co-factor that exerts an injurious action on liver health. A small intake of red wine with meals (below a threshold of 30 g/day for men and 20 g/day for women) has been considered safe with regard to substantial harm from chronic liver disease. Additionally, the regular intake of red wine has been promoted for its beneficial effects related to the antioxidant action of resveratrol. However, in individuals presenting with NAFLD, the beneficial action of red wine becomes questionable. Cross-sectional studies have suggested that modest consumption of alcohol is associated with a reduced degree of severity (comprising histological necroinflammation and fibrosis) compared with abstinence [90,91]. However, in the complex metabolic scenario in which NAFLD is embedded, the compound effect of concomitant conditions has to be considered, because some features of alcohol metabolism (e.g., oxidative stress, hypertriglyceridemia) may have a considerable and distinctive impact. One large prospective population study involving 6732 individuals with metabolic syndrome and different grades of alcohol consumption showed that even low amounts of alcohol independently predict a severe phenotype of NAFLD [92]. Currently, most recommendations advise staying below moderate levels of alcohol, defined as 210 g/week for men and 140 g/week for women, with absolute abstinence in patients with cirrhosis.
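The weekly thresholds above can be related to typical servings with simple arithmetic. The sketch below is illustrative only: the helper names and the serving assumptions (150 mL of wine at 12% alcohol by volume) are ours, not from the source; ethanol density is taken as 0.789 g/mL.

```python
# Sex-specific "less than moderate" thresholds cited in the text (g/week).
THRESHOLD_G_PER_WEEK = {"male": 210.0, "female": 140.0}

def weekly_ethanol_grams(drinks_per_week: float,
                         ml_per_drink: float = 150.0,
                         abv: float = 0.12) -> float:
    """Grams of pure ethanol per week, assuming ethanol density ~0.789 g/mL."""
    return drinks_per_week * ml_per_drink * abv * 0.789

def below_moderate(sex: str, drinks_per_week: float) -> bool:
    """True if the reported intake stays under the cited weekly threshold."""
    return weekly_ethanol_grams(drinks_per_week) <= THRESHOLD_G_PER_WEEK[sex]

# Seven 150 mL glasses of 12% wine per week:
print(round(weekly_ethanol_grams(7), 1))  # -> 99.4 (g/week)
```

Note that such arithmetic only screens reported intake against the guideline numbers; it does not address the underreporting problem discussed in Section 4.2.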

#### **6. Alternative Dietary Approaches: Which Is the Best Strategy?**

The Mediterranean diet, despite its comprehensively beneficial action on NAFLD and metabolic syndrome, has several limitations, due to geographical and ethnic disparities across the globe. Therefore, different approaches have been evaluated in order to assess the benefit of a specific dietary pattern on the glycometabolic profile, regardless of the actions of single nutrients (Table 1).

Reducing overall daily calories leads to weight loss through a negative energy balance, strengthened by adding aerobic physical activity. In one prospective study conducted on obese individuals, a low-fat restricted-calorie diet, a Mediterranean restricted-calorie diet, and a low-carbohydrate non-restricted diet were randomly assigned for 2 years. The impact of a specific dietary pattern on weight loss varied by gender, being higher for the low-carbohydrate group in males (mean reduction of 4.9 kg in men versus 2.4 kg in women) and for the Mediterranean diet group in women (mean reduction of 4.0 kg in men versus 6.2 kg in women). The low-fat diet resulted in smaller changes (3.4 kg in men and 0.1 kg in women). The low-carbohydrate diet showed the greatest reduction in the total cholesterol/HDL ratio (20% versus 12% in the low-fat arm), whereas the best impact on glucose and insulin levels, in particular in the subgroup of diabetic individuals, was observed in the Mediterranean diet group [102]. These results suggest that a reduction in carbohydrates may be the preferable strategy in dietary patterns. In addition, the different impacts on metabolic parameters emphasize the importance of a tailored approach.


**Table 1.** Clinical studies evaluating the impact of different dietary patterns on both hepatic and metabolic outcomes. Abbreviations: NASH, non-alcoholic steatohepatitis; PNPLA3, patatin-like phospholipase domain-containing 3; RCT, randomized controlled trial; TNF-α, tumor necrosis factor-α; US, ultrasound.

Therefore, given the impact of weight loss in ameliorating liver histology, the common strategy has so far been the hypocaloric diet (approximately 1000 kcal/day or less). In particular, the very low calorie diet model (800 kcal/day), whose benefits have been well assessed in diabetic populations, was applied in 45 patients with NAFLD and comprised 19.4% fat, 43.4% carbohydrate and 33.7% protein. Overall, 34% of participants achieved a 10% weight loss, while 51% of participants achieved a weight reduction of more than 7%. Notably, this dietary pattern was well tolerated and provided long-term benefits after the end of the study. Concomitant significant improvements in HOMA-IR, transaminases and liver stiffness were observed [98].
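As a back-of-envelope check, the macronutrient shares reported for the 800 kcal/day regimen can be converted into daily grams using standard Atwater factors (9 kcal/g for fat, 4 kcal/g for carbohydrate and protein). This sketch is illustrative and not part of the study protocol:

```python
# Energy density per gram (standard Atwater general factors).
ATWATER_KCAL_PER_G = {"fat": 9.0, "carbohydrate": 4.0, "protein": 4.0}

# Shares of total energy reported for the very low calorie diet.
ENERGY_SHARES = {"fat": 0.194, "carbohydrate": 0.434, "protein": 0.337}

def grams_per_day(total_kcal: float = 800.0) -> dict:
    """Convert energy shares into approximate grams of each macronutrient."""
    return {
        nutrient: round(total_kcal * share / ATWATER_KCAL_PER_G[nutrient], 1)
        for nutrient, share in ENERGY_SHARES.items()
    }

print(grams_per_day())
# roughly 17.2 g fat, 86.8 g carbohydrate, 67.4 g protein per day
```

The shares sum to 96.5% rather than 100%, as reported in the source, so the gram figures are approximate.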

Reducing carbohydrates results in significant improvements in intrahepatic triglycerides, hepatic insulin sensitivity and glucose production when compared to an approach based on reducing fats, although long-term evaluations have not shown differences between the two approaches [103]. It would seem that mere calorie restriction helps in achieving the endpoint, regardless of nutrient composition. In fact, one randomized controlled trial conducted on 60 overweight or obese NAFLD patients showed the impact of a food pattern low in saturated fat and dairy products (DASH, Dietary Approaches to Stop Hypertension, with less than 1000 kcal/day). After 8 weeks of intervention, greater weight loss was observed in the DASH group compared to the control arm (3.8 kg versus 2.3 kg), as was a greater reduction in BMI (−1.3 points versus −0.8 points). A significant reduction in HOMA-IR was observed in the DASH group (−0.8 points versus −0.2 points in the control arm). Accordingly, serum triglycerides, the total cholesterol/HDL ratio, transaminases and markers of oxidative stress improved significantly in the interventional arm. In this study, the proportion of nutrients was almost the same (55% carbohydrates, 15% proteins, 30% fats), with the sole difference being fat composition [97].

Alternatively, isocaloric approaches have been proposed, aiming at modifications in the proportions of nutrients while keeping average daily calories constant. Increasing protein intake (to 30% of the total), whether animal- or plant-derived, has been shown to reduce liver fat independently of body weight and to improve insulin resistance [100]. The relative reduction in carbohydrate intake might also have contributed to this outcome. Moreover, reducing fat intake within an isocaloric dietary pattern has been shown to positively affect liver fat, as evaluated by proton magnetic resonance spectroscopy: limiting fat intake to less than 16% of total daily calories led to a 20% reduction in liver fat [104], and similar results were achieved when combining low saturated fats with low-glycemic-index foods [101]. Furthermore, one 12-week interventional study compared the impacts of a low-fat diet and a Mediterranean diet on hepatic steatosis and showed no differences between the two strategies in terms of benefits, with stronger adherence to the latter [94]. However, one isocaloric low-carbohydrate approach with a relative increase in protein intake seemed to provide the same results, with additional evidence of improvements in beta-oxidation and de novo lipogenesis [99].

Overall, two observations can be made. Firstly, there seems to be no clear evidence favoring the reduction of one nutrient over another, and the heterogeneity of the studies does not allow reliable conclusions to be drawn. Moreover, combined modifications in nutrient proportions suggest that the endpoint is better achieved with a varied and balanced diet. One large, prospective, interventional study showed, using proton magnetic resonance spectroscopy, that resolution of hepatic fat could be obtained after 12 months of a dietary approach based on a variety of foods, with an emphasis on fruits and vegetables, moderate carbohydrate intake, low fats, low-glycemic-index foods and appropriate portion sizes [93].

Secondly, the short duration of the studies does not provide strong evidence of an impact on metabolic health and liver outcomes. This last point is of crucial relevance when approaching clinical management of the disease. Long-term adherence to a dietary approach may be difficult to achieve, and solid outcomes need to be addressed. One large multicenter study conducted in the United States (the Look AHEAD (Action for Health in Diabetes) Study) enrolled overweight or obese patients with T2DM and randomly assigned them to either an intensive lifestyle intervention aiming at weight loss (with both decreased caloric intake and increased physical activity) or to diabetes support and education. The primary endpoint was death from cardiovascular causes and the incidence of cardiovascular events over a maximum follow-up of 13.5 years. The study was stopped early, at 10 years, for futility: no difference between the two groups in the occurrence of the primary outcome was observed [105]. Again, this evidence highlights the still-unmet need for clear outcomes of dietary patterns, tied to specific, disease-related hard endpoints.

#### **7. Intermittent Fasting to Improve Metabolic Health**

Metabolic modifications with regard to lipid and glycemic profiles can also be achieved through other manipulations, in particular by acting on mealtimes rather than on meal composition. Protracted fasting, in particular, causes profound modifications in metabolic pathways that persist beyond refeeding, and may thus be a valuable alternative for shaping energy intake. With this approach, energy restriction is not required to obtain metabolic improvement, nor is weight loss regarded as a key endpoint [106].

This approach is defined as intermittent fasting (IF) and covers different strategies, including alternate-day fasting, the 5:2 diet, or the fast-mimicking diet. In particular, the model of time-restricted feeding (TRF) [107] looks attractive from the liver and metabolic health perspective. TRF allows food intake within a defined window of hours, following hormonal circadian rhythms. The concept of "chrononutrition" has progressively gained attention, because daily metabolic rhythms were found to be dictated by molecular "clocks" acting on specific subsets of genes [108]. This complex metabolic crosstalk is hierarchically overseen by the hypothalamus, in response to exogenous and endogenous stimuli (the sleep–wake cycle following light–dark courses being the paradigm). Peripheral organs such as the liver, pancreas, skeletal muscle and adipose tissue, the key drivers of systemic metabolic homeostasis, allow a precise coordination (a true "synchronization") between the environment and biochemical processes. Glucose homeostasis and insulin sensitivity are examples of time-related regulation; mealtimes, in turn, regulate adiposity and body weight [109,110].

In rodents, TRF has been shown to attenuate the harm of obesogenic diets [111] in proportion to the fasting duration, reversing or stabilizing preexisting obesity and insulin resistance [112], as well as improving hepatic steatosis [113] and inflammatory markers [114]. Furthermore, TRF seems to have a modulating effect on the gut microbiome, contributing to cyclical changes and diversity that impact host metabolism [115]. Moreover, a long-term protective impact of TRF, related to underlying gene regulation that provides durable effects, was detected even when TRF was temporarily interrupted in favor of unlimited access to food [116].

In humans, biological clocks suggest that metabolism is optimized for food intake in the morning, with insulin sensitivity and beta-cell responsiveness being higher in the morning than in the afternoon or evening. Individuals with T2DM who follow this pattern show a significant reduction in postprandial hyperglycemia [117], whereas insulin-resistant individuals show significant improvements in insulin sensitivity and, interestingly, a reduction in hunger [117]. In addition, overweight or obese individuals are more prone to weight loss following this alimentary timing [118], and a better modulation of adipokines (an increase in adiponectin and a decrease in leptin) has been reported [119]. Sutton et al. performed an elegant proof-of-concept study showing the benefits of early TRF (allowing food intake within a 6-h window ending before 3 p.m.). This was a 5-week, randomized, crossover, isocaloric and eucaloric controlled feeding study conducted in overweight, pre-diabetic individuals. TRF improved insulin levels, insulin sensitivity, beta-cell responsiveness, blood pressure and oxidative stress, even in the absence of weight loss [120]. The improvement in blood pressure, even though participants had mean values in the pre-hypertensive range, is of particular interest in terms of endpoints, as is the lesser importance given to weight loss in the amelioration of metabolic health. Given the central role of insulin resistance in the pathogenesis of NAFLD, the TRF approach might provide interesting results.

#### **8. Outlook and Summary—Open Issues in the Clinical Setting**

The wide heterogeneity of lifestyle intervention approaches, together with the diverse endpoints assessed in clinical studies and the high variability in study populations and treatment durations, considerably complicates translation into clinical practice.

In addition, a uniform response to a specific lifestyle intervention can hardly be expected. One large study evaluated changes in glucose levels in 800 overweight and obese non-diabetic individuals following identical meal intakes. High variability in postprandial glucose levels was observed, highlighting that individualized recommendations might be required [121].

More than one approach seems beneficial in NAFLD populations, but a lack of standardization prevents clinicians from making reliable decisions, in particular with regard to the short-term assessment of either compliance with or success of the chosen intervention. Weight loss remains the primary endpoint of "bedside" recommendations, but more appropriate, tailored outcomes are required. In particular, in the absence of long-term hard endpoints in NAFLD populations, validation of simple tools may help in identifying success or failure in clinical settings. A reduction in CAP after 12 weeks of treatment is one example explored in the above-mentioned studies [46], as is the remission of steatosis assessed by proton magnetic resonance spectroscopy after 12 months of intervention [93]. Alternatively, surrogate measures of insulin resistance, such as short-term improvements in HOMA-IR, might be used to assess the benefit of the intervention, even though the effect size may vary according to the chosen approach [97,103].

In fact, the assessment of an endpoint seems critically tied to the interventional strategy. The fatty liver index (FLI) and the NAFLD-liver fat score (NAFLD-LFS) are non-invasive liver fat indices introduced into clinical practice for the early detection of NAFLD, and they are best validated in cross-sectional settings. However, their longitudinal accuracy has been shown to be diet-specific: changes in liver fat obtained by a low-fat diet intervention correlate moderately with changes in FLI and NAFLD-LFS, but this is not the case with a low-carbohydrate approach [96]. Additionally, one 6-month clinical trial in NAFLD patients following a Mediterranean diet showed significant improvements in FLI and NAFLD-LFS [95]. The impact of lifestyle interventions on fibrosis is less well established, although long-term histologic evaluation has provided evidence of fibrosis amelioration [11]. In one cross-sectional study conducted in biopsy-proven NAFLD patients with T2DM, liver fibrosis was inversely associated with adherence to the Mediterranean diet in multivariate analysis including multiple putative factors (age, gender, BMI, glycated hemoglobin) [122]. In addition, one prospective observational Greek study reported the same inverse association when evaluating liver fibrosis with non-invasive scores (namely, the fibrosis-4 score (FIB-4), the AST-to-platelet ratio index (APRI) and the BARD (BMI, AST/ALT Ratio, Diabetes) index) through cross-sectional analysis [123]. An inverse association between the Mediterranean diet and liver fibrosis also emerged through elastography after 6 months of intervention, suggesting a possible role for liver stiffness, together with CAP, in the assessment of treatment progress [47].
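The non-invasive fibrosis scores mentioned above are simple arithmetic indices computed from routine laboratory values. A sketch of the standard published formulas for FIB-4 and APRI follows; the function names, example values and the assumed AST upper limit of normal (40 U/L) are ours, and cut-offs vary between studies:

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    """FIB-4 = (age [years] * AST [U/L]) / (platelets [10^9/L] * sqrt(ALT [U/L]))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def apri(ast_u_l, platelets_10e9_l, ast_uln_u_l=40.0):
    """APRI = ((AST / upper limit of normal) * 100) / platelets [10^9/L]."""
    return (ast_u_l / ast_uln_u_l) * 100.0 / platelets_10e9_l

# Illustrative values only (not taken from the cited studies)
print(round(fib4(60, 40, 40, 150), 2))  # higher values suggest advanced fibrosis
print(round(apri(80, 100), 2))          # -> 2.0
```

Both indices are inexpensive and computable from routine blood work, which is exactly the kind of non-invasive tool the text argues should accompany nutritional interventions.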

In conclusion, the nutritional landscape of interventional approaches for NAFLD treatment is burdened by heterogeneity in study populations, treatment durations and the diversity of selected endpoints. The lack of longitudinal studies of nutritional patterns on the natural history of NAFLD makes it hard to establish robust benefits of either nutrients or dietary approaches, whose evidence relies on surrogate markers or short-term evaluation. The complex picture of the metabolic syndrome requires a careful, multidisciplinary approach, because multiple causal agents impact liver disease, and systemic morbidities are in turn worsened by liver damage. The relatively strong evidence of beneficial effects of the Mediterranean diet is counterbalanced by a lack of studies of other dietary models, whose usefulness is crucial for populations in which the Mediterranean diet would be undertaken with difficulty due to geographical and cultural differences in food accessibility.

In addition, evidence of the harmful effect of late eating, rushed eating and a propensity to snacking on liver fat content has highlighted the importance of behavioral aspects of nutrition, which are often underreported and thus require proper investigation. It is likely that more than one approach would ameliorate liver and systemic metabolism, and good communication with patients with respect to harmful nutrients, meal timing and proper behavior might produce better results per se, as well as increase compliance and understanding. Aiming for improvements in insulin sensitivity can be the common thread of different approaches, which would potentially not be limited to restrictions of food but extend to eating times. In fact, the positive results on insulin sensitivity from the novel approaches of intermittent fasting have corroborated the importance of the quality and timing of eating, with important implications for clinical management. Associating a nutritional intervention with inexpensive, non-invasive scores of either steatosis or fibrosis is essential in clinical settings, in order to assess benefit or futility. These algorithms should be included in future studies, with particular regard to liver fibrosis, which is the strongest prognostic factor in the setting of chronic liver disease.

**Author Contributions:** Conceptualization, methodology and finalization of the manuscript: J.M.S. Formal analysis, data curation, and writing—original draft preparation A.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding. J.M.S. is partly funded by the European Union Innovative Medicines Initiative 2 (IMI2) Joint Undertaking under grant agreement 777377: LITMUS (Liver Investigation: Testing Biomarker Utility in Steatohepatitis).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All data are publicly available.

**Conflicts of Interest:** J.M.S. declares Consultancy: BMS, Boehringer Ingelheim, Echosens, Genfit, Gilead Sciences, Intercept Pharmaceuticals, Madrigal, Nordic Bioscience, Novartis, Pfizer, Roche, Sanofi, Siemens Healthcare GmbH. Research funding: Gilead Sciences, Siemens Healthcare GmbH, Boehringer Ingelheim. Speakers bureau: Falk Foundation. A.A. declares no conflict of interest. The companies had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


## *Review* **Malnutrition in Patients with Liver Cirrhosis**

**Julia Traub <sup>1</sup>, Lisa Reiss <sup>1</sup>, Benard Aliwa <sup>2</sup> and Vanessa Stadlbauer <sup>2,</sup>\***


**Abstract:** Liver cirrhosis is an increasing public health threat worldwide. Malnutrition is a serious complication of cirrhosis and is associated with worse outcomes. With this review, we aim to describe the prevalence of malnutrition, pathophysiological mechanisms, diagnostic tools and therapeutic targets to treat malnutrition. Malnutrition is frequently underdiagnosed and occurs—depending on the screening methods used and patient populations studied—in 5–92% of patients. Decreased energy and protein intake, inflammation, malabsorption, altered nutrient metabolism, hypermetabolism, hormonal disturbances and gut microbiome dysbiosis can contribute to malnutrition. The stepwise diagnostic approach includes a rapid prescreen, the use of a specific screening tool, such as the Royal Free Hospital Nutritional Prioritizing Tool, and a nutritional assessment by dieticians. General dietary measures—especially the timing of meals—oral nutritional supplements, micronutrient supplementation and the role of amino acids are discussed. In summary, malnutrition in cirrhosis is common and needs more attention from health care professionals involved in the care of patients with cirrhosis. Screening and assessment for malnutrition should be carried out regularly in cirrhotic patients, ideally by a multidisciplinary team. Further research is needed to better clarify pathogenic mechanisms, such as the role of the gut–liver axis, and to develop targeted therapeutic strategies.

**Keywords:** malnutrition; cirrhosis; nutritional screening; nutritional assessment; gut–liver axis; macronutrients; micronutrients; dysbiosis

#### **1. Introduction**

The Hepahealth report from 2018 reported a prevalence of chronic liver disease and cirrhosis in Europe of between 500 and 1100 cases per 100,000 inhabitants [1]. Data from the USA show a 65% increase in cirrhosis-associated mortality between 1999 and 2016 [2]. Cirrhosis is a systemic disease, and malnutrition is a key feature as well as an important complication of the disease. This implies that the diagnosis of malnutrition is not only relevant as one of the clinical characteristics of cirrhosis, but that malnutrition also needs to be considered an important complication that warrants timely and appropriate therapy to improve prognosis. Our review highlights the importance of early diagnosis, helps to understand the pathophysiology and defines appropriate therapeutic measures. Additionally, knowledge gaps are identified. In this review, the term malnutrition is used to describe undernutrition; the discussion of overnutrition in liver disease is beyond the scope of this review.

The reported prevalence of malnutrition in cirrhosis is highly variable, ranging from 5% to 92%, indicating a knowledge gap, difficulties in diagnosing malnutrition, or both. The knowledge gap is underpinned by a survey in which only 20% of gastroenterologists gave correct answers regarding the prevalence of malnutrition in cirrhotic patients [3]. The problem of underdiagnosis can be inferred from two very large studies from the USA using the national inpatient sample, which showed a much lower prevalence of 6–12%, whereas studies that used active screening for malnutrition exhibited higher rates of detection [4,5]. Supplementary Table S1 summarizes studies describing the prevalence of malnutrition in cirrhosis over the past five years, depicting the large differences depending on the study design and the tools used for diagnosis. Malnutrition should therefore be routinely screened for and assessed in this group of high-risk patients to avoid underdiagnosis. However, screening for malnutrition in liver cirrhosis patients is challenging because of the influence of fluid retention, ascites and peripheral edema [5]. The prevalence of malnutrition, as a key feature of cirrhosis, increases with increasing disease severity [6,7]. However, patients with compensated liver disease are also frequently malnourished, and malnutrition is a considerable risk factor for mortality even in patients with a model of end-stage liver disease (MELD) score < 15 [8]. Furthermore, patients with chronic liver disease but without cirrhosis are also frequently malnourished [9]; in this population, malnutrition may often be masked by obesity [9]. Malnutrition can also be seen as a complication of cirrhosis, since it has a negative impact on disease progression and outcome. Rates of hospitalization and mortality are doubled in malnourished compared with adequately nourished patients, and malnutrition is an independent predictor of outcome [10–12]. Malnutrition is a predictor of other complications of cirrhosis [13]; infections and hepatic encephalopathy in particular are associated with malnutrition [14,15]. Furthermore, other malnutrition-related diagnoses, namely sarcopenia, hepatic osteodystrophy and frailty, are commonly found in liver cirrhosis. The presence of sarcopenia further impairs prognosis [16–25]. The interplay between malnutrition, sarcopenia and frailty has recently been reviewed [26].

**Citation:** Traub, J.; Reiss, L.; Aliwa, B.; Stadlbauer, V. Malnutrition in Patients with Liver Cirrhosis. *Nutrients* **2021**, *13*, 540. https://doi.org/10.3390/nu13020540

Academic Editors: Ina Bergheim and Roberto Iacone. Received: 4 December 2020; Accepted: 4 February 2021; Published: 7 February 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

This review summarizes the current knowledge on the pathogenesis of malnutrition in cirrhosis and discusses the best clinically applicable strategies to diagnose malnutrition, in order to raise awareness of this still often underappreciated complication of cirrhosis. Treating malnutrition through a multidisciplinary team improves survival rates and quality of life in patients with liver cirrhosis [27], and nutritional education leads to more nutrition consultations and a lower 90-day readmission rate in cirrhosis [28]. Therefore, this review also discusses current and potential future therapeutic options to treat malnutrition in cirrhosis.

#### **2. Pathogenesis**

The etiology of malnutrition in liver cirrhosis is multifactorial (Figure 1). Decreased energy and protein intake, inflammation, malabsorption, altered nutrient metabolism, hormonal disturbances, hypermetabolism and gut microbiome dysbiosis can contribute to malnutrition. Additionally, fasting periods and external factors such as alcohol consumption also have an impact on malnutrition.

**Figure 1.** Factors contributing to malnutrition in cirrhosis. Created with BioRender.com.

#### *2.1. Decreased Energy and Protein Intake*

In patients with liver cirrhosis, decreased energy and protein intake is the most common cause of malnutrition [29–35]. The percentage of patients with inadequate energy intake ranges from 9.2% to 100% across studies, depending on the method of assessment and the patient population. Energy intake is reduced by 13–34%, again indicating large variation between studies. Supplementary Table S2 shows the design and results of different studies assessing nutritional intake. Several upstream mechanisms are known to cause decreased energy and protein intake (Figure 2). Impaired gastric motility and relaxation due to portal hypertension lead to reduced nutritional intake [36,37]. The presence of ascites can reduce food intake due to an early feeling of fullness [38]. A decreased sense of smell and/or dysgeusia, which can be caused by micronutrient deficiencies, can also be responsible for decreased intake [39,40]. Additionally, recommended dietary restrictions, such as a low-salt diet, are discussed as possible factors for inadequate nutritional intake [41,42]. Interestingly, high levels of ghrelin have been observed in cirrhosis [31,43,44]. Ghrelin is the only known peripherally derived orexigenic hormone that normally increases appetite and food intake. However, despite high ghrelin levels, appetite is not increased in cirrhotic patients; high ghrelin levels can therefore be considered an ineffective compensatory mechanism in cirrhosis [31]. To date, it is unknown which of these factors plays the most important role. Therefore, all factors should be considered during the assessment of nutritional intake in each patient, as a personalized approach to detect and adequately treat the most likely reasons for reduced nutritional intake.

**Figure 2.** Reasons for decreased energy and protein intake in cirrhosis. Created with BioRender.com.

#### *2.2. Malabsorption and Altered Metabolism of Macro- and Micronutrients*

Fat malabsorption is commonly seen in cirrhosis [45,46]. Impaired bile acid metabolism affects the formation of the micelles that are necessary for fat digestion and the absorption of fat-soluble vitamins [47,48], and small intestinal bacterial overgrowth, which is common in liver cirrhosis patients [49], can lead to fat malabsorption via deconjugation of bile acids [50]. Chronic pancreatitis, secondary to alcohol abuse and common in liver cirrhosis patients, may contribute to fat malabsorption as well [51,52]. Protein loss due to portal hypertensive enteropathy has been described [53–55]. No data are available regarding impairment of carbohydrate absorption in cirrhosis. Malabsorption needs to be considered in the nutritional assessment and diagnosed using biomarkers such as fecal elastase or fecal alpha-1-antitrypsin, together with tests for micronutrient deficiencies (see below). A useful stepwise diagnostic algorithm, starting with noninvasive routine blood tests and specific biomarkers, was proposed by Nikaki in 2016 [56].

In addition to altered absorption, fat, protein and carbohydrate metabolism are also altered in cirrhosis, with differing mechanisms depending on the etiology. Much research has been done to elucidate these mechanisms: chronic alcohol consumption alters lipid metabolism by stimulating lipogenesis, decreasing the export of very-low-density lipoprotein, activating de novo lipogenesis and inhibiting fatty acid oxidation, which also contributes to alcoholic fatty liver disease [57–61]. Alcohol consumption also impairs fatty acid catabolism, predominantly through inhibition of mitochondrial ß-oxidation; this is the most significant contributor to alcohol-induced hepatic lipid accumulation and leads to triglyceride accumulation in the liver [62–64]. In non-alcoholic fatty liver disease, too, adipose tissue and hepatic triglyceride metabolism are altered [65]. Protein metabolism is impaired due to increased protein catabolism and decreased protein synthesis. Furthermore, decreased serum branched-chain amino acid (BCAA) concentrations and increased levels of aromatic amino acids are observed in cirrhosis, which play a role in the pathogenesis of hepatic encephalopathy and muscle wasting [66–70]. Glucose metabolism is severely altered as well: peripheral insulin resistance with normal or enhanced uptake into the liver, alterations in glycolytic enzymes, and changes in glucose and insulin transporters have been described. This contributes to decreased hepatic glucose production and lower hepatic glycogen reserves, associated with increased gluconeogenesis from amino acids and secondary protein breakdown. The metabolic abnormalities in carbohydrate metabolism described above also lead to a state of accelerated starvation already after an overnight fast [71–76].

The role of portal hypertension and portosystemic shunting in protein and energy metabolism is not yet fully elucidated: on the one hand, the placement of a transjugular intrahepatic portosystemic shunt can lead to an improvement in fat-free mass and thereby in prognosis [77–79]. On the other hand, there is also evidence that portosystemic shunting may have deleterious nutritional effects due to a reduction in hepatic nutrient flow [80]. While the molecular principles of the changes in macronutrient metabolism in cirrhosis are well described, the direct therapeutic implications of these findings are not yet well defined. The timing and composition of meals as therapeutic measures to account for changes in macronutrient metabolism are described below. Further research is needed to understand the effect of portal hypertension and portosystemic shunting on protein anabolism and catabolism in humans, including the role of the gut–liver axis.

Not only macronutrient metabolism is altered in liver cirrhosis patients. Deficiencies in trace elements, minerals and vitamins are common, due to fat malabsorption, diuretic use and inadequate intake. In addition, liver dysfunction itself can lead to alterations in trace element metabolism [81,82]. Zinc, selenium, iron and magnesium are commonly decreased in liver cirrhosis [83–88], whereas copper and manganese can be increased [81,89]. Fat-soluble vitamin deficiencies are common in liver cirrhosis [90], and these can in turn impair the absorption of other nutrients, such as protein and fat [91]. For the absorption of fat-soluble vitamins, bile acids are required to form micelles, which are absorbed by enterocytes into the circulation. Inadequate delivery of bile acids, as is common in liver cirrhosis patients, can lead to a deficiency of fat-soluble vitamins, especially in jaundiced patients [92,93]. Trace element and vitamin deficiencies can in turn negatively impact nutritional intake, indicating a vicious cycle of malnutrition in cirrhosis: zinc and vitamin A deficiency can impair taste and olfaction and therefore impair food intake [39,94]. Vitamin D deficiency is of prognostic relevance, since it is associated with poor outcome, increased mortality and higher complication rates; however, it is as yet unclear whether vitamin D levels are a mere surrogate of advanced liver disease or whether there is a direct pathophysiological relation [95–98]. Water-soluble vitamins, especially vitamins C, B1, B2, B6 and folic acid [99–101], are decreased, whereas vitamin B12 levels can be falsely increased in liver cirrhosis, possibly due to a flooding of vitamin B12 from damaged liver cells into the circulation [102,103]. For a summary of the changes in micronutrients in cirrhosis, see Supplementary Table S3.

Not only are absorption and metabolism altered; energy expenditure also contributes to malnutrition: 15–30% of cirrhotic patients are hypermetabolic, with a resting energy expenditure of >120% of the predicted value, which negatively affects nutritional status [104–107]. Hypermetabolism compromises overall transplant-free and early post-transplant survival [108–110]. The cause of hypermetabolism in liver cirrhosis has not yet been clarified in full detail. From rheumatoid disease it is known that inflammation drives hypermetabolism [111]. Elevated levels of interleukin-1, interleukin-6 and transforming growth factor are also common in chronic alcoholic liver diseases [112–116]. Therefore, inflammation can be considered a contributing factor to hypermetabolism and malnutrition [117], along with increased beta-adrenergic activity [72]. Additionally, elevated levels of proinflammatory cytokines may be directly responsible for decreased appetite [118,119]. Since inflammation in cirrhosis is tightly linked to changes in the gut–liver axis [120–122], its relation to hypermetabolism needs further research to elucidate the pathophysiology and define possible therapeutic interventions.
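The >120% threshold above compares measured resting energy expenditure (REE, e.g. from indirect calorimetry) against a predicted value. As an illustration only, the sketch below uses the original Harris–Benedict equations as one common choice of predictive reference; the coefficients follow the classic formulation, the cut-off is taken from the text, and the cited studies may have used other prediction equations:

```python
def predicted_ree_kcal(sex, weight_kg, height_cm, age_years):
    """Predicted resting energy expenditure (kcal/day), original Harris-Benedict equations."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years

def is_hypermetabolic(measured_ree_kcal, predicted_kcal, threshold=1.2):
    """Hypermetabolic if measured REE exceeds 120% of the predicted value."""
    return measured_ree_kcal / predicted_kcal > threshold

pred = predicted_ree_kcal("male", 70, 175, 50)   # roughly 1567 kcal/day
print(is_hypermetabolic(2000, pred))  # measured REE from indirect calorimetry
```

In practice, measured REE is mandatory for this classification; prediction equations alone cannot detect hypermetabolism.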

Hormones, as the superordinate control of nutritional intake and metabolism, also affect nutrition in cirrhosis. The role of ghrelin in appetite regulation was described above. In addition, ghrelin has a wide spectrum of other metabolic functions in glucose metabolism and weight control, and posttranslational modification is essential for it to exert its metabolic function. The diverse roles of ghrelin in liver disease have recently been extensively reviewed [123]. Ghrelin as well as leptin are known to influence energy expenditure [124,125]. Leptin, which helps to regulate energy balance, circulates in free and bound forms. Basal concentrations of leptin are higher in patients with liver cirrhosis and can lead to inadequate energy expenditure [126]. Hyperinsulinemia and insulin resistance are also common in liver cirrhosis; increased insulin levels induce satiety, leading to a reduction in energy intake [127]. Testosterone, which is reduced in about 90% of men with liver cirrhosis [128], plays an important role in protein synthesis and breakdown [129].

#### *2.3. Gut Microbiome Dysbiosis as Potential Contributor to Malnutrition*

Altered nutritional status is associated with distinct gut microbiome dysbiosis in cirrhosis [130]. The gut microbiome is a nutrient signal transducer with the capacity to synthesize or modify nutrient signaling molecules such as short-chain fatty acids (SCFA) and branched-chain amino acids (BCAA) [131,132]. Several bacterial genera are known to produce SCFA, including *Bacteroides*, *Faecalibacterium*, *Succinivibrio* and *Butyricimonas* [133,134]. Undernourished children, for example, showed a lower abundance of different *Bacteroides* species, suggesting a loss of SCFA-producing species [135,136]. In cirrhosis, SCFA-producing bacterial species are reduced [137]. The observed alteration in gut microbiome composition in cirrhosis is associated with increased protein catabolism mediated by inflammatory responses, leading to muscle loss [138]. Gut microbiome dysbiosis is further associated with increased gut permeability and bacterial translocation, which in turn are associated with inflammation [122] and complications of cirrhosis [122,139,140]. It is not known to date whether gut microbiome dysbiosis precedes the development of malnutrition in cirrhosis or whether it is a consequence of the disease and its drug treatment. Answering this question would be highly relevant for developing microbiome-targeted therapeutic strategies to improve malnutrition in the clinical setting.

#### **3. Diagnosis**

Since malnutrition is a common key feature and complication of liver cirrhosis and is related to a poor prognosis, early diagnosis is important. Unfortunately, the common models used to determine prognosis in patients with liver cirrhosis, such as the model for end-stage liver disease (MELD) [141] and the Child–Pugh score [142], do not include nutritional screening or assessment. Of note, the original score developed by the surgeons Child and Turcotte contained malnutrition as a variable, which was later substituted by prothrombin time [143]. All liver cirrhosis patients should be rapidly prescreened for the risk of malnutrition at each contact by assessing the Child–Pugh score and BMI. When patients are at high risk (Child–Pugh class C irrespective of BMI, or BMI < 18.5 kg/m<sup>2</sup> irrespective of the Child–Pugh score), a nutritional assessment, including assessment of sarcopenia as a complication of malnutrition, should be completed immediately to confirm the presence and determine the severity of malnutrition [144–146]. This prescreening can be done by skilled personnel from different disciplines, since it relies on routine clinical data that are normally collected at each outpatient visit or at hospitalization. In the future, automated prescreening combining routinely assessed data from electronic patient records is conceivable. A specific screening tool validated for patients with liver cirrhosis is advised, to account for special circumstances such as fluid retention in cirrhosis. The Royal Free Hospital–Nutritional Prioritizing Tool (RFH–NPT) fulfills this requirement [145,147]. Common malnutrition screening tools for the general hospital population do not perform well in cirrhosis [148]. If a general malnutrition tool is intended to be used in cirrhosis, it needs to be validated first.
In patients with medium or high risk in the RFH–NPT (1 point or more), a detailed nutritional assessment using cirrhosis-specific assessment tools such as the Subjective Global Assessment or the Royal Free Hospital Global Assessment, as well as a detailed assessment of dietary intake, is required [145,147]. In patients who are at high risk for malnutrition, the assessment of sarcopenia in addition to a detailed nutritional assessment is recommended to confirm and characterize complications of malnutrition and to identify modifiable variables for nutrition support. Methods for the assessment of sarcopenia in liver cirrhosis have recently been reviewed [149]. In obese patients with cirrhosis Child–Pugh A or B (BMI > 30 kg/m<sup>2</sup>), a nutritional and lifestyle intervention targeting obesity is indicated [145,147]. As nutritional assessment is more comprehensive and time-consuming and requires interpretation of multiple nutrition indicators, it should be performed by a dietitian [150]. In patients with a low risk of malnutrition, rescreening should be performed every year; in all other patients, the assessment should be repeated every one to six months in the outpatient setting, and for inpatients at admission and periodically during the hospital stay [145]. Figure 3 shows a comprehensive algorithm to screen for and assess malnutrition in cirrhosis, adapted from the European Association for the Study of the Liver (EASL) clinical practice guidelines [145].
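The screening and follow-up logic described above can be sketched as two simple decision functions. This is an illustrative sketch only: the function names and return strings are our own, and the thresholds (Child–Pugh C, BMI < 18.5 kg/m<sup>2</sup>, RFH–NPT ≥ 1) are taken from the text, not from any executable guideline.

```python
# Illustrative sketch of the EASL-adapted prescreening/assessment flow
# described above. Not a clinical tool; names and strings are invented.

def prescreen(child_pugh_class: str, bmi: float) -> str:
    """Rapid prescreen at every patient contact.

    High risk: Child-Pugh class C irrespective of BMI, or
    BMI < 18.5 kg/m^2 irrespective of the Child-Pugh score.
    """
    if child_pugh_class == "C" or bmi < 18.5:
        return "high risk: immediate nutritional assessment incl. sarcopenia"
    return "apply cirrhosis-specific screening tool (RFH-NPT)"

def rfh_npt_followup(rfh_npt_score: int) -> str:
    """RFH-NPT: 1 point or more means medium/high risk."""
    if rfh_npt_score >= 1:
        return "detailed assessment (SGA or RFH-GA) + dietary intake review"
    return "low risk: rescreen yearly"

print(prescreen("C", 24.0))
print(rfh_npt_followup(0))
```

The two-stage structure mirrors the text: a cheap prescreen on routinely collected data, then the validated cirrhosis-specific tool only for those not already flagged as high risk.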

**Figure 3.** Algorithm to screen for and assess malnutrition in cirrhosis, adapted from the European Association for the Study of the Liver (EASL) clinical practice guidelines [145]. Created with BioRender.com.

#### **4. Therapeutic Strategies**

#### *4.1. Diet*

The positive effect of dietary interventions on prognosis in cirrhosis has been clearly shown: optimizing the nutritional status of liver cirrhosis patients improves morbidity and mortality [13,17,151–153], even in patients with acute-on-chronic liver failure [154]. Nutritional therapeutic interventions by a multidisciplinary team, especially dietary counseling by dieticians, improve biomarkers of malnutrition [155,156], quality of life [27,157] and survival rate [27].

The recommended macronutrient composition in cirrhosis mainly focuses on protein intake. Cirrhotic patients have an increased protein requirement owing to increased protein turnover and catabolism [158,159]. A high protein intake improves nutritional status [155,160]. Even patients with hepatic encephalopathy, for whom protein restriction was advocated in the past [161], benefit from a normal to high protein intake [157,162–165]. The recommendations regarding protein intake differ slightly, but not relevantly, between guidelines, depending on nutritional status, and range from 1.2 to 1.5 g protein/kg body weight per day (Table 1). There are no specific recommendations regarding carbohydrate and fat intake for patients with liver cirrhosis.
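As a quick worked example, the 1.2–1.5 g/kg/day range above translates into an absolute daily protein target as follows. The helper function is hypothetical and purely illustrative, not a dosing tool.

```python
# Back-of-the-envelope conversion of the 1.2-1.5 g/kg/day guideline range
# into an absolute daily protein target. Illustrative only.

def protein_target(weight_kg: float, low: float = 1.2, high: float = 1.5):
    """Return (min, max) daily protein in grams for a given body weight."""
    return weight_kg * low, weight_kg * high

lo, hi = protein_target(70.0)  # e.g., a 70 kg patient
print(f"{lo:.0f}-{hi:.0f} g protein/day")  # 84-105 g protein/day
```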


**Table 1.** Summary of dietary recommendations in cirrhosis from different societies.

HE, hepatic encephalopathy; ONS, oral nutritional supplements. <sup>1</sup> European Association for the Study of the Liver (EASL). <sup>2</sup> American Association for the Study of Liver Diseases (AASLD). <sup>3</sup> European Society for Clinical Nutrition and Metabolism (ESPEN). <sup>4</sup> International Society for Hepatic Encephalopathy and Nitrogen Metabolism Consensus (ISHEN).

Current evidence suggests that the timing and frequency of meals are important for improving malnutrition in cirrhosis. After overnight fasting, glycogen stores in cirrhotic livers are emptied [74]. A late evening snack with 50 g of complex carbohydrates can improve nitrogen metabolism, increase lean body mass and reverse anabolic resistance and sarcopenia [166–169]. Eating breakfast improves cognitive function in cirrhosis, indicating that, depending on personal dietary preferences, an individualized approach to the timing of meals should be developed during dietary counselling [170]. A higher frequency of 5–6 meals/day also shortens episodes of catabolism during the day [171]. It is particularly important to pay attention to the timing and frequency of meals in hospitalized patients to avoid long and often unnecessary "nil per os" periods due to planned diagnostic tests.

The main emphasis in dietary counselling should be on ensuring adequate oral intake. If oral intake, including oral nutritional supplements (see below), is insufficient despite adequate nutritional advice, enteral tube feeding may be considered for cirrhotic patients to achieve their nutritional and energy goals [172]. Nonbleeding esophageal varices are not a contraindication for the placement of a nasogastric tube [163,173]. An endoscopic gastrostomy, on the other hand, is associated with a higher risk of complications, especially bleeding, and is therefore not recommended for patients with advanced chronic liver disease [174]. In moderately or severely malnourished cirrhosis patients who are unable to take food orally or cannot be fed sufficiently by the enteral route, parenteral nutrition should be started according to the ESPEN recommendations [175]. Additionally, parenteral nutrition should be given when fasting periods last longer than 72 h [175]. Since cirrhotic patients are more prone to sepsis and infections, care should be taken to avoid infections from central venous lines [176].

When ascites is diagnosed in patients with liver cirrhosis, a "no added salt" diet restricted to 90 mmol of salt per day (5.2 g) is recommended by many guidelines. However, as we recently reviewed in detail, although a low-sodium diet leads to a faster disappearance of ascites and less need for diuretics, it can also lead to poor dietary adherence because of the impaired taste of meals, reduced energy and protein intake and an increased risk of malnutrition [42]. Therefore, the risks and benefits of salt restriction have to be weighed carefully in each patient, again showing the necessity of a multidisciplinary team approach to treat malnutrition in cirrhosis.
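The 90 mmol ≈ 5.2 g equivalence quoted above follows directly from the molar mass of NaCl, as a quick arithmetic check shows:

```python
# Check of the figure in the text: 90 mmol of salt (NaCl) per day.
# Molar masses: Na 22.99 g/mol, Cl 35.45 g/mol.

NACL_MOLAR_MASS = 22.99 + 35.45           # 58.44 g/mol

def mmol_nacl_to_grams(mmol: float) -> float:
    return mmol * NACL_MOLAR_MASS / 1000  # mmol -> mol -> grams

grams = mmol_nacl_to_grams(90)
print(f"{grams:.2f} g NaCl/day")  # 5.26 g, i.e., the ~5.2 g quoted above
```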

#### *4.2. Oral Nutritional Supplements and Micronutrients Supplementation*

Oral nutritional supplements can help to achieve nutritional goals in cirrhosis. A significant improvement in anthropometric nutritional parameters such as lean muscle mass and body mass index, as well as in serum proteins, can be achieved with oral nutritional supplements [160,177]. A meta-analysis concluded that oral nutritional supplements may also improve outcome [152]. Furthermore, an improvement in quality of life, functional status and rehabilitation of malnourished cirrhosis patients can be achieved [157,178]. In terms of administration time, nocturnal oral nutritional supplements improve total body protein status more effectively than daytime administration [166] (Table 2). Since oral nutritional supplements also contain micronutrients and vitamins, this may be of additional benefit in patients with cirrhosis; however, so far no clear benefit of micronutrient supplementation has been shown [179]. Moreover, no clinical data comparing different products are available; the choice can therefore be based on personal preference and price.

Although some studies demonstrate positive effects of micronutrient supplementation, no clear recommendations can be made due to the lack of robust data. Additional supplementation is currently only recommended in cases of confirmed or clinically suspected deficiency. Data on zinc supplementation are conflicting: some studies report positive effects of zinc supplementation in hepatic encephalopathy [180–182], while others report no significant improvements [183,184]. Zinc supplementation in case of deficiency improves liver function and nutritional status [185,186] and may even have a positive impact on clinical outcome [187,188]. Normalizing zinc and vitamin A levels can also indirectly improve nutritional status through a positive effect on the sense of taste and thereby increased food intake [94,189]. A meta-analysis found no evidence to support or refute antioxidant supplements such as beta-carotene, vitamins A, C and E, and selenium in liver disease [179]. The evidence for vitamin D supplementation is also insufficient. Vitamin D deficiency is common in cirrhosis and associated with increased mortality [190], and the deficiency can be corrected by oral supplementation in cirrhosis [191,192]. However, due to inadequate overall data quality, there is not sufficient evidence to prove or disprove an effect of vitamin D supplementation on morbidity and quality of life [193] (Table 2).

**Table 2.** Summary of recommendations for oral nutritional supplements, micronutrient supplementation and branched-chain amino acid (BCAA) supplementation from different societies.


HE, hepatic encephalopathy; BCAA, branched chain amino acids. <sup>1</sup> European Association for the Study of the Liver (EASL). <sup>2</sup> American Association for the Study of Liver Diseases (AASLD). <sup>3</sup> European Society for Clinical Nutrition and Metabolism (ESPEN). <sup>4</sup> International Society for Hepatic Encephalopathy and Nitrogen Metabolism Consensus (ISHEN).

#### *4.3. Amino Acids*

BCAA serum levels are low in cirrhosis because BCAAs are preferentially used as energy substrates, although they are also essential for protein synthesis and ammonia detoxification [194]. BCAA supplementation has been shown to prevent lipolysis and proteolysis, improve nitrogen balance, muscle mass, nutritional status, complication-free survival and quality of life, and reduce hepatocellular carcinoma risk [194–202]. However, despite this quite clear evidence of positive effects, current guidelines recommend BCAA supplementation only in decompensated cirrhosis, when adequate protein intake cannot be achieved by oral diet, or in case of complications. For hepatic encephalopathy, vegetarian protein, which is rich in BCAAs, is considered the ideal protein source, not only because it is better tolerated than animal protein [164,203,204] but also because vegetarian protein may positively influence gut microbiome composition [205]. BCAAs can also be used as a late evening snack to improve nutritional status [73], especially in protein-intolerant patients [144–146,206] (Table 2).

#### **5. Summary**

Malnutrition is a common and dangerous key feature and complication of liver cirrhosis. Its diagnosis is challenging, and it is often overlooked. Every center should implement a repetitive diagnostic workup: a rapid screen for malnutrition using the Child–Pugh score and BMI, the RFH-NPT in selected patients to stage patients as low, medium or high risk, and a detailed nutritional assessment, including assessment of the complications of malnutrition, in patients at medium and high risk. Education on malnutrition in cirrhosis for all health care professionals who treat patients with liver cirrhosis, and the availability of trained dieticians, seem crucial in the future management of malnutrition in cirrhosis, to account for the complexity of the disease and the need for individualized management. From a pathophysiological point of view, more work is needed to identify the drivers of malnutrition, especially considering the complex interplay between the gut microbiome and nutrient metabolism. Therapeutic efforts should consider alternative pathophysiological mechanisms in the development of malnutrition, such as the role of inflammation and dysbiosis, to identify potential therapeutic targets beyond the pure increase of nutritional intake. Further clinical studies, ideally as multicenter, multidisciplinary initiatives, are needed to diagnose and adequately treat malnutrition in cirrhosis.

**Supplementary Materials:** The following are available online at https://www.mdpi.com/2072-6643/13//540/s1.

**Author Contributions:** Conceptualization, V.S.; writing—original draft preparation, V.S., J.T., L.R., B.A.; writing—review and editing, V.S., J.T., L.R., B.A.; visualization, J.T., L.R.; supervision, V.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by a grant from the Austrian Science Fund (FWF) KLI 741 to VS.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Malnutrition in Pediatric Chronic Cholestatic Disease: An Up-to-Date Overview**

**Maria Tessitore 1,†, Eduardo Sorrentino 1,†, Giuseppe Schiano Di Cola 1, Angelo Colucci 1, Pietro Vajro <sup>1</sup> and Claudia Mandato 2,\***


**Abstract:** Despite recent advances, the causes of and effective therapies for pediatric chronic cholestatic diseases remain elusive, and many patients progress to liver failure and need liver transplantation. Malnutrition is a common complication in these patients and is a well-recognized, tremendous challenge for the clinician. We undertook a narrative review of both recent and relevant older literature, published during the last 20 years, for studies linking nutrition to pediatric chronic cholestasis. The collected data confirm that malnutrition and failure to thrive are associated with increased risks of morbidity and mortality, and they also affect the outcomes of liver transplantation, including long-term survival. Malnutrition in children with chronic liver disease is multifactorial, with multiple potential nutritional deficiencies. To improve life expectancy and the quality of life, patients require careful assessments and appropriate management of their nutritional statuses by multidisciplinary teams, which can identify and/or prevent specific deficiencies and initiate appropriate interventions. Solutions available for the clinical management of these children in general, as well as those directed to specific etiologies, are summarized. We particularly focus on fat-soluble vitamin deficiency and malnutrition due to fat malabsorption. Supplemental feeding, including medium-chain triglycerides, essential fatty acids, branched-chain amino acids, and the extra calories needed to overcome the consequences of anorexia and high energy requirements, is reviewed. Future studies should address the need for further improving commercially available and nutritionally complete infant milk formulae for the dietary management of this fragile category of patients. The aid of a specialist dietitian, educational training regarding nutritional guidelines for stakeholders, and improving family nutritional health literacy appear essential.

**Keywords:** cholestasis; chronic liver diseases; malnutrition; nutritional needs; pediatrics

#### **1. Introduction**

Cholestasis is regarded as reduced bile formation or flow, leading to a decreased concentration of bile acids in the intestine and the retention within the blood and the liver of biliary substances that are normally excreted into bile [1]. In the pediatric population, cholestasis predominantly affects neonates and infants, around 1 in every 2500 term infants [2]. A substantial proportion have a chronic disease course (chronic cholestatic liver diseases (CCLD)). Despite recent advances, effective therapies remain elusive, so that many conditions progress to liver failure and may necessitate early liver transplantation. The most common causes of prevalently extrahepatic (surgical) cholestatic jaundice are biliary atresia (BA) (25–40%) and choledochal malformations. As shown in Figure 1, the etiologies of intrahepatic (medical) disorders are more numerous and include a mounting group of genetic disorders related to bile acid transport or synthesis, inborn errors of protein and carbohydrate metabolism, syndromes, mitochondrial and endocrine diseases, hepatic infections, and parenteral nutrition-associated cholestasis (PNAC) [3–5].

**Citation:** Tessitore, M.; Sorrentino, E.; Schiano Di Cola, G.; Colucci, A.; Vajro, P.; Mandato, C. Malnutrition in Pediatric Chronic Cholestatic Disease: An Up-to-Date Overview. *Nutrients* **2021**, *13*, 2785. https://doi.org/10.3390/nu13082785

Academic Editor: Antonio Colecchia

Received: 13 July 2021; Accepted: 11 August 2021; Published: 13 August 2021

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Figure 1.** Main etiologies of prevalently extrahepatic ("surgical") or intrahepatic ("medical") pediatric chronic cholestatic liver disease. HFI, hereditary fructose intolerance; PFIC, progressive familial intrahepatic cholestasis.

Malnutrition is a common complication of CCLD, which may increase morbidity and mortality at all ages, particularly in pediatric patients due to their specific developmental aspects. The proportion of malnutrition due to markedly reduced bile acid-dependent absorption of fats and fat-soluble nutrients is, however, difficult to quantify compared with that caused by fibrosis/cirrhosis per se.

Chronic enteropathy secondary to associated portal hypertension, poor nutrient intake, increased energy needs, and endocrine dysfunction are further mechanisms [6,7]. Pediatricians and dietitians should be specifically experienced in pediatric CCLD. Data show that improved nutritional consultations after stakeholders' educational training on nutritional guidelines were associated with lower readmissions of adult patients [8]. Similarly, parental health illiteracy necessitates appropriate interventions targeting social and cultural family circumstances [9].

In this context, we performed a comprehensive review of the literature in the PubMed and Google Scholar databases for studies published during the past 20 years linking nutrition to pediatric chronic cholestatic liver disease. This search was complemented by personal knowledge, a snowball strategy of searching for relevant previous studies within the reference lists of the analyzed articles, and/or citation tracking by a computer-aided manual search (Figure 2). Here, we report and put together the multiple pieces of the malnutrition puzzle in children and summarize solutions that are available in everyday clinical practice.

**Figure 2.** Flow chart showing the selection process to identify studies included in the article.

#### **2. Causes of Malnutrition in Cholestatic Children**

As outlined earlier, malnutrition in CCLD depends on several cooperating factors (Figure 3).

**Figure 3.** Main factors that determine malnutrition in pediatric chronic cholestatic liver disease. GH, growth hormone; IGFB3P, insulin-like growth factor binding protein 3; IGF-1, insulin-like growth factor-1.

#### *2.1. Metabolic Changes*

The liver plays an important role in the regulation of nutrient metabolism. Therefore, when the liver is damaged by any type of injury, nutrient digestion, absorption, storage, and utilization are affected. The decrease in hepatic and muscular glycogen reserves results in the activation of alternative metabolic pathways with an increased release of amino acids (and hyperammonemia), increased fat oxidation, and rapid use of fat, resulting in hypercholesterolemia and low triglyceride levels. The associated cholestatic component results in further specific problems.

#### *2.2. Poor Nutrient Intake*

The reduced dietary intake in cholestatic children may result from anorexia, nausea, vomiting, changes in taste perception, and early satiety [10].

Anorexia may presumably be the result of a change in amino acid metabolism with increased tryptophan levels and ensuing increases in brain serotonergic activity, which are reportedly involved in the regulation of eating behavior [11]. Nausea and vomiting are triggered by increased pro-inflammatory cytokines [12], organomegaly, and ascites, resulting in reduced gastric capacity and consequent early satiety [13].

Chronic pruritus, due to accumulation of bile acid (BA) and other pruritogenic substances, may be particularly severe and disturbing especially in PFIC and Alagille syndrome. It may dramatically reduce patients' quality of life and therefore warrants serious and prompt attention with adequate treatment [1,14–18].

Finally, a deficiency in zinc or magnesium contributes to a change in taste perception, which may be further aggravated by the use of poorly palatable special formulas [8] and/or dietary modifications with sodium, fluid, or protein restrictions.

#### *2.3. Increased Requirements or Malabsorption/Maldigestion of Multiple Nutrients*

#### 2.3.1. Increased Energy Needs

Children with cholestatic disease have often (but not unanimously) [19] been reported to suffer from a *hypermetabolic state*, with increased energy expenditure, probably due to the intracellular activation of thyroid hormone by bile acids [20,21]. In biliary atresia, a 29% increase in energy requirements compared to healthy controls has been calculated [20]. In children with end-stage liver disease, the energy demand may further increase, up to 150% of the predicted normal value for a given height and weight, especially in cases of complications such as episodes of sepsis from peritonitis, cholangitis, or variceal bleeding [10,22].

Because the equations commonly used to calculate predicted resting energy expenditure (cREE), i.e., the Food and Agriculture Organization/World Health Organization/United Nations University and Schofield (weight, and weight-and-height) equations [23] (Table 1), perform inadequately, especially in end-stage cholestatic liver disease, indirect calorimetry should be used when available to guide energy provision, particularly in children who are already malnourished [24].

**Table 1.** Weight-based equations for calculating energy requirements (kcal/day) according to the FAO/WHO/UNU [23].
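For illustration, the weight-based predictions referenced above can be evaluated in code. The coefficients below are the commonly tabulated FAO/WHO/UNU (1985) values for children (kcal/day); they are reproduced here from general knowledge as an assumption and should be verified against reference [23] before any use.

```python
# Sketch of the FAO/WHO/UNU weight-based REE equations for children.
# Coefficients are the commonly tabulated 1985 values (an assumption
# here -- verify against reference [23]); kcal/day, weight in kg.

COEFFS = {
    # (sex, age band in years): (slope, intercept)
    ("M", "0-3"): (60.9, -54), ("F", "0-3"): (61.0, -51),
    ("M", "3-10"): (22.7, 495), ("F", "3-10"): (22.5, 499),
    ("M", "10-18"): (17.5, 651), ("F", "10-18"): (12.2, 746),
}

def predicted_ree(sex: str, age_band: str, weight_kg: float) -> float:
    """Predicted resting energy expenditure (kcal/day)."""
    slope, intercept = COEFFS[(sex, age_band)]
    return slope * weight_kg + intercept

# As noted in the text, demand in end-stage liver disease may reach
# ~150% of the predicted value:
ree = predicted_ree("M", "0-3", 8.0)  # 60.9*8 - 54 = 433.2 kcal/day
print(round(ree), round(ree * 1.5))   # baseline vs. 150% stress factor
```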


#### 2.3.2. Water and Electrolytes

Fluid and electrolyte requirements are normal for maintaining weight, unless restriction is needed because of ascites or edema. In infancy, a sodium intake of at least 1 mmol/kg/day and a potassium intake of about 2 mmol/kg/day are usually appropriate [6].

Sodium should not be added to correct hyponatremia in patients with advanced liver disease who have developed ascites, since this hyponatremia is dilutional, resulting from systemic vasodilation and arterial underfilling.

#### 2.3.3. Carbohydrates

For CCLD patients, carbohydrates are a major source of energy (about two-thirds of non-protein energy) as they are better accepted than lipids or proteins due to a more pleasant taste [25]. They can be given as monomers, polymers, and starch. Short-chain polymers (maltodextrin) are generally used because of their low osmotic load, which prevents the onset of diarrhea. Starch is a possible alternative, but it can cause abdominal bloating and diarrhea because it has poor digestibility due to the enzymatic immaturity of amylases at an early age.

#### 2.3.4. Proteins

In patients with biliary atresia, increased oxidation of both exogenous and endogenous proteins has been shown, which results in a negative nitrogen balance leading to muscle proteolysis [20,25,26]. Children with advanced CCLD require a higher protein intake for catch-up growth (a protein/energy ratio of 10%). The specific needs in cholestasis are currently not known; infants with severe cholestatic liver disease need a protein intake of approximately 2–3 g/kg/day to ensure a positive nitrogen balance and proper growth [27]. In childhood, the requirement becomes progressively lower, except when complications (e.g., cholangitis) increase proteolysis and lead to a negative protein balance [28].

In the absence of encephalopathy, hyperammonemia up to 120 μmol/L is well tolerated, without evident side effects, and does not require protein restriction [29]. Protein restriction is rarely needed in children with cholestatic liver disease, unless there is urea cycle disorder (UCD)-related cholestasis. Protein restriction can be considered if encephalopathy occurs as a result of liver failure and of portal hypertension-related portosystemic shunts. In this case, protein intake should be limited to 0.5–1.0 g/kg/day, although restriction to <2 g/kg/day should not be continued in the long term, as it can induce endogenous muscle protein consumption. If unresponsive, patients may benefit from branched-chain amino acid (BCAA) supplementation [27].

Children with cholestatic liver disease have low levels of serum BCAAs and elevated ratios of aromatic amino acids to BCAAs, due to an alteration in amino acid kinetics and increased expenditure of BCAAs in the muscle [30]. In other words, low levels of BCAAs reflect increased BCAA utilization in muscle. Therefore, diets rich in BCAAs may have significant advantages. In experimental biliary atresia, a formula enriched with BCAAs improved weight gain, protein mass, muscle mass, nitrogen balance, body composition, and bone mineral density [31]. In addition, a diet enriched in BCAAs in children with end-stage liver disease resulted in an improvement in nutritional status and total body potassium [32]. To improve palatability without reducing energy intake, an ideally formulated product should preferably include whey protein (3 g/kg/day), at 2.6 g/100 mL in reconstituted formula, enriched in BCAAs to 10% [32–35].
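As a rough plausibility check of the reconstitution figures above (a whey protein target of 3 g/kg/day delivered at 2.6 g protein per 100 mL of reconstituted formula), the implied daily formula volume can be computed. This is purely illustrative; clinical prescriptions must be individualized.

```python
# Implied daily formula volume for the reconstitution described in the
# text: 3 g/kg/day of whey protein at 2.6 g protein per 100 mL.
# Illustrative arithmetic only, not a feeding prescription.

def daily_formula_volume_ml(weight_kg: float,
                            protein_g_per_kg: float = 3.0,
                            protein_g_per_100ml: float = 2.6) -> float:
    grams_needed = weight_kg * protein_g_per_kg
    return grams_needed / protein_g_per_100ml * 100  # mL/day

print(round(daily_formula_volume_ml(5.0)))  # ~577 mL/day for a 5 kg infant
```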

#### 2.3.5. Lipids and Bile–Acid-Dependent Absorption of Fats and Fat-Soluble Nutrients

Lipids are the main source of energy supplied through breast milk. During the first years of life, lipids are necessary for growth, development, and providing essential polyunsaturated fatty acids (PUFAs) and fat-soluble vitamins [36]. Although lipids are less palatable than other macronutrients, they are an important supplement because of their high energy content, low osmolarity, and content of essential PUFAs [23,37]. In cholestasis, the considerable decrease in long-chain triglyceride absorption due to impaired micellization requires balancing energy losses with an extra energy supply. Despite the possible development of steatorrhea, animal studies suggest that overall nutritional status benefits from a high rather than a restricted fat intake [38]. Oral bile salt substitution therapy would be cumbersome and impractical. Some cholestatic diseases are, however, treated with ursodeoxycholic acid (UDCA) (20–30 mg/kg/day), as it increases bile formation and counteracts the hydrophobic effect of retained bile acids on cell membranes; however, UDCA has no effect on micelle formation and lipid absorption.

#### 2.3.6. Medium Chain Triglycerides and Long-Chain Triglycerides

Although medium-chain triglycerides (MCTs; C-8 to C-12 fatty acids) have a lower energy content (about 16% lower than long-chain triglycerides, LCTs), they are used as a lipid supplement (MCT-enriched formula) because their shorter chains allow them to diffuse passively through the gastrointestinal tract and be absorbed directly into the portal circulation (bound to albumin). In fact, unlike LCTs, MCTs do not require micellar solubilization and re-esterification [39] because they completely bypass the lymphatic system [40], with approximately 95% bioavailability even in very cholestatic children [41]. Unless MCT levels exceed the metabolic capacity of the liver, they undergo hepatic metabolism with energy release. This happens independently of carnitine, which is required for the transport of long-chain fatty acids through the mitochondrial membrane. However, MCTs may reduce appetite, probably due to interaction with peptide YY and cholecystokinin, with possible interference with the metabolism of adipose tissue [42].

In CCLD children, the ideal fat content and ratio of MCTs to LCTs are difficult to determine. The optimal proportion of total lipids given as MCTs for nutritional management is between 30 and 50% [10,43]. A much higher MCT content in the diet (i.e., >80%) without adequate PUFA supplementation should be avoided, since it can lead to a deficiency in essential fatty acids [44]. Limited data have shown that infants fed with 30 or 70% MCT instead of a 50/50% mixture of MCT/LCT have better fat solubilization and growth [45,46]. For these reasons, most infant formulae with MCTs have an MCT/LCT ratio of about 1/1 and are supplemented with essential fatty acids. Lipid intake and the MCT ratio should be tailored for optimal weight gain and growth [46].

#### 2.3.7. Essential Fatty Acids

Essential fatty acids (EFAs) are macronutrients that must necessarily be included in the diet because humans cannot synthesize their precursors. Linoleic acid (C18:2n–6) and linolenic acid (C18:3n–3) are the two main EFAs. They undergo hepatic and cerebral elongation into long-chain polyunsaturated fatty acids (LCPs, PUFAs, i.e., arachidonic acid (AA) and docosahexaenoic acid (DHA)). The latter are important for the growth and development of membrane-rich tissues such as the brain and retina [44]. PUFAs are also precursors of eicosanoids, which improve immune function, reduce systemic inflammation [47], and participate in platelet aggregation. These important biological roles make PUFA deficiency in cholestatic liver disease a concern. PUFA and EFA deficiency may derive from low intake, fat maldigestion/malabsorption, and inefficient elongation of EFA precursors secondary to dysfunctional hepatocytes and enhanced peroxidation of lipids [6]. It can also be iatrogenic, when diets are exceedingly high in MCT and low in LCT [48]. Clinically, PUFA and EFA deficiencies present with dry and rough skin, poor growth, numbness, paresthesia, and vision impairment. These clinical signs may go unrecognized or be misdiagnosed as vitamin deficiencies [49]. Testing for PUFA and EFA deficiency, which requires the total fatty acid profile in red blood cells, is costly and not commonly available. It is recommended every 3–6 months especially in the case of severe maldigestion/malabsorption or if the diet comprises exclusive MCT lipids (>80%), or lower in severe cholestasis [6]. Attention is required when linoleic acid, α-linolenic acid, eicosapentanoic acid, and/or docosahexaenoic acid levels are low and either clinical signs or severe deficiencies in fat-soluble vitamins are resistant to supplementation. Importantly, the classic marker of a ratio of triene/tetraene >0.2 is not a sufficient marker when testing for PUFA and EFA deficiency [50]. 
Unlike breast milk, formulae contain little LCP [51], which can lead to malnutrition unless egg yolks or fish oil are administered at weaning [22,51].

Estimating appropriate PUFA and LCP intakes for healthy infants and children is not easy and is even more difficult for children with cholestasis. Supplementation should exceed 10% of total energy, but even this may not be sufficient. In fact, when liver damage increases, hepatic PUFA conversion to LCPs is impaired, and LCP deficiency occurs even with an adequate EFA supply. In advanced CCLD, LCP deficiency is difficult to correct; it has been documented that even 1 year after liver transplantation, the LCP status may still not be entirely reversed [52].

That said, there are currently no studies showing the functional effects of LCP supplementation in cholestatic children based on which exact recommendations can be provided. In infants, breast milk can be supplemented with breast milk fortifiers. If breastfeeding is not possible, an MCT-enriched formula should be used. To better meet the energy needs of infants, another option is to increase the caloric density of the formula to 0.8–1 kcal/mL [53]. Older children can improve their EFA intake by adding canola oil, sunflower oil, soybean oil, fish oil, and egg yolks to their diet [22,54].
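The effect of concentrating a formula, as described above, is simple arithmetic. Below is a minimal Python sketch; the 0.67 kcal/mL "standard formula" density and the 600 kcal/day target are assumptions for illustration, while the 0.8–1 kcal/mL range is the one cited in the text:

```python
def daily_feed_volume_ml(energy_target_kcal, density_kcal_per_ml):
    """Volume of formula (mL/day) needed to meet an energy target at a
    given caloric density. Illustrative arithmetic only; 0.8-1 kcal/mL
    is the concentrated-formula range cited in the text."""
    if density_kcal_per_ml <= 0:
        raise ValueError("density must be positive")
    return energy_target_kcal / density_kcal_per_ml

# Concentrating an assumed ~0.67 kcal/mL standard formula to 1 kcal/mL
# cuts the volume needed for a hypothetical 600 kcal/day target:
print(round(daily_feed_volume_ml(600, 0.67)))  # 896 (mL/day)
print(round(daily_feed_volume_ml(600, 1.0)))   # 600 (mL/day)
```

The point of the calculation is that a higher caloric density delivers the same energy in a smaller, better-tolerated volume.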

#### 2.3.8. Fat-Soluble Vitamins

In cholestasis, the reduced secretion of bile acids into the intestinal lumen also induces the malabsorption of fat-soluble vitamins (FSVs) (vitamins A, D, E, and K). This is more frequent when direct-reacting (i.e., conjugated) bilirubin serum levels are greater than 2 mg/dL [55]. In infants with biliary atresia, total serum bilirubin appears to be a better, although still imperfect, predictor of FSV deficiency [56]. The role of serum bile acid as a surrogate marker for guiding the monitoring of FSV deficiency in chronic intrahepatic cholestasis is still undefined.

When cholestasis begins in infancy, vitamin stores present at birth are rapidly depleted; this results in biochemical and clinical signs of fat-soluble vitamin deficiency as early as 4–12 months of age if supplementation has not begun [27]. The serum levels of vitamins and prothrombin time should be monitored to allow the proper adjustment of dosages to the patient's specific needs and the prompt treatment of possible side effects. In particular, vitamin A levels must be closely monitored during supplementation because high levels may cause neurologic and hepatic damage [1].

Vitamin D insufficiency, defined as 25-OH vitamin D less than 30 ng/mL, is present in more than half of cholestatic patients, and this is positively correlated with serum calcium [57]. Decreased bone mineral density (BMD) was present in more than half of the studied cholestatic patients and was correlated with low serum calcium rather than vitamin D levels. Decreased BMD and dental disorders in cholestatic children are related to the level of hyperbilirubinemia [57]. Table 2 synoptically summarizes the effects of individual fat-soluble vitamin deficiencies and toxicities as reported in pediatric practice [25,27,33,57,58].

**Table 2.** Synopsis of fat-soluble vitamin deficiencies and toxicities.




AP, alkaline phosphatase; IM, intramuscular; PTH, parathyroid hormone; Vit, vitamin.

#### 2.3.9. Water-Soluble Vitamins and Minerals

Cholestatic children may also present with malabsorption of water-soluble vitamins and minerals. Therefore, it is necessary to supplement vitamins normally present in the diet, using standard pediatric multivitamins. The doses recommended are twice the RDAs [59].

The liver and gut microbiota play important roles in regulating most trace elements; therefore, the impairment of liver function and/or dysbiosis can negatively affect their metabolism. In addition, the administration or depletion of these trace elements may itself cause liver dysfunction [60]. Zinc, selenium, calcium, phosphate, magnesium, and iron may be deficient as well, and they should be supplemented according to the plasma levels. Zinc and selenium are important antioxidants. Reduced zinc concentrations lead to poor linear growth, hypogeusia, anorexia, impaired immune function [42], skin rashes, and diarrhea [61,62]. Identifying zinc deficiency may be difficult because plasma zinc levels do not correlate with tissue zinc content; thus, clinicians should suspect zinc deficiency on the basis of gastrointestinal and dermatological manifestations [63].

Calcium and phosphate deficiency may also develop during cholestasis, leading to bone abnormalities that are unresponsive to the normalization of vitamin D status [27]. Magnesium deficiency may occur as well, contributing to the metabolic bone disease of CCLD. Liver transplantation has favorable effects on osteopenia and vitamin D deficiency. It has been shown that in infants and children <2 years of age, the bone mineral content normalizes approximately 11 months after transplantation, provided there is a sufficient period of normal serum 25-OH-D levels [64].

Iron deficiency is uncommon in chronic liver disease. It may occur, however, in cases of reduced intake and chronic blood loss from gastrointestinal bleeds (esophageal varices or portal hypertensive gastropathy) [22]. As iron can promote oxidative stress, carcinogenesis, and fibrogenesis, it should not be supplemented on a regular basis [65]. On the contrary, chronic cholestasis may be accompanied by the excessive accumulation of copper in the liver [66] and possibly contribute to its further damage.

#### *2.4. Endocrine Dysfunction*

The liver is the main source of insulin-like growth factor 1 (IGF-1) and its major circulating binding protein, IGF-binding protein 3 (IGF-BP3). IGF-1 mediates the anabolic actions of growth hormone (GH). In cholestatic liver diseases, IGF-1 and IGF-BP3 production is reduced, resulting in an impaired GH/IGF-1 axis. IGF-1 levels further decrease due to GH resistance, caused by the downregulation of GH receptors [22]. This condition aggravates growth failure in these children, who may already have an underlying predisposition to short stature (e.g., Alagille syndrome) [67].

#### **3. Issues in the Nutritional Management of Children with Cholestasis**

According to the above considerations, it is evident that CCLD negatively affects nutritional status in infancy (i.e., when growth rates are the highest), thus compromising clinical outcomes for cholestatic children with end-stage liver disease [68]; malnutrition is present in about 80% of cases [69]. Malnutrition increases the mortality and morbidity associated with the underlying diseases and significantly influences the outcomes of liver transplantation in children [70]. It is therefore necessary to ensure proper nutritional support to prevent further liver damage and improve the likelihood of successful liver transplantation [36]. All children with cholestatic liver disease should have a clinical nutritional evaluation with an intervention and follow-up plan recorded in their care records, with a frequency appropriate to the patient's clinical course. The evaluation of the nutritional status of these patients is based on anthropometric, biochemical, and instrumental indicators [71,72], as summarized below and in Figure 4.




**Figure 4.** Nutritional assessment of pediatric chronic cholestatic liver diseases.

#### *3.1. Anthropometric Measurements*

In children with cholestatic liver disease, the use of weight-for-age and weight-for-height measurements can be inaccurate for nutritional evaluation, owing to visceromegaly, subclinical edema, and/or ascites resulting from excessive tissue sequestration of water, which in turn results from an abnormal intravascular colloid osmotic status, renal retention of salt and water, and hyperaldosteronism [1,73].

Stunting is determined by serial measurements of the height-for-age index (height-for-age ≤ −2 SD of the WHO child growth standards median). Length (for children < 2 years) or height (for children > 2 years) may be a good indicator of chronic malnutrition, while short-term changes in nutritional status need to be tracked using other parameters such as the mid-upper arm circumference (MUAC) and triceps skin fold (TSF). These measures provide information regarding the patient's body composition and are less affected by fluid overload or other complications of end-stage liver disease [10], and they are reported to decline before changes in weight or height become apparent [74].

The MUAC captures both muscle and adipose tissue mass, and it is relatively stable in the first years of life. An absolute value < 12.5–13.0 cm [75] or a Z score < −2 [76] indicates moderate to severe malnutrition. Therefore, by using the MUAC as a screening tool, it is possible to quickly identify children with moderate to severe malnutrition, who are at increased risk of death and need nutrient supplementation and treatment of their underlying disease [36]. The TSF reflects adiposity and is a good indicator of energy reserve depletion during cholestatic liver disease [77]. It has been shown to be more sensitive to acute malnutrition and wasting than weight-for-height Z scores [78].
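The anthropometric cutoffs above can be expressed as a simple screening helper. The following is a minimal, illustrative Python sketch (not a clinical tool); it assumes the conservative 12.5 cm lower bound of the cited 12.5–13.0 cm MUAC range, and the function name is hypothetical:

```python
def screen_malnutrition(muac_cm=None, muac_z=None, height_for_age_z=None):
    """Flag findings using cutoffs cited in the text:
    stunting = height-for-age Z <= -2 (WHO growth standards);
    moderate-to-severe malnutrition = MUAC < 12.5 cm (lower bound of
    the cited 12.5-13.0 cm range) or MUAC Z score < -2.
    Illustrative screening sketch only, not a clinical tool."""
    findings = []
    if height_for_age_z is not None and height_for_age_z <= -2:
        findings.append("stunting")
    if (muac_cm is not None and muac_cm < 12.5) or \
       (muac_z is not None and muac_z < -2):
        findings.append("moderate-to-severe malnutrition")
    return findings

print(screen_malnutrition(muac_cm=12.0, height_for_age_z=-2.5))
# ['stunting', 'moderate-to-severe malnutrition']
```

Each measurement is optional, mirroring the text's point that different indices capture chronic (height-for-age) versus short-term (MUAC, TSF) changes.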

Globally, children with cholestatic liver disease have lower MUAC, TSF, and mid-upper arm muscle area Z scores than healthy controls [79]. The frequency of measurement varies with the grade of nutritional status and can range from every 2 weeks to every 3 months [6].

#### *3.2. Biochemical Markers*

Serum protein levels (albumin, prealbumin, transferrin, and retinol-binding protein) alone cannot be used as indicators of malnutrition, as their production is influenced by hepatic disease, sepsis, inflammation, and hydration status [80]. Albumin is not sensitive to acute changes in the nutritional status of children with chronic liver disease, as it has a long half-life (18–20 days); in addition, it may be depressed by inflammation or acute physiologic stress [81]. Prealbumin is a more sensitive marker of the severity of malnutrition and/or the adequacy of nutritional support [74]: it has a half-life of 2 days [15] and low body reserves, it responds quickly to changes in nutritional status [80], and its hepatic production is maintained until late in liver disease [82]. Retinol-binding protein has a half-life of 12 h, which makes it the best indicator of recent dietary changes [80].

#### *3.3. Other Investigations*

The evaluation of nutritional status in cholestatic children also includes instrumental investigations that assess body composition: dual-energy X-ray absorptiometry (DXA), bioelectrical impedance analysis (BIA), and indirect calorimetry. DXA and BIA provide measures of fat and fat-free mass, which are helpful when designing a nutritional rehabilitation approach, although their accuracy can decrease with fluid overload [83].

Indirect calorimetry measures oxygen consumption and can be used to estimate resting energy expenditure (REE), which increases in children with chronic liver disease. Although non-invasive, it is a technically difficult procedure and may not be easily performed in uncooperative children [80].

Predictive equations, however, appear to perform poorly in infants and young children with ESLD. Efforts should be made to guide energy provision with the aid of indirect calorimetry, especially when malnutrition is already present [24].

Accurate monitoring of the nutritional status of these patients is essential for early interventions to correct deficiencies, thus improving growth and reducing both morbidity and mortality.

Oral nutrition should always be encouraged whenever possible because it is physiologic, maintains gastrointestinal tract immunity and gut barrier integrity, and reduces bacterial overgrowth [41]. As discussed in detail above, ready-to-use commercial preparations for infants with chronic liver disease should contain higher calorie amounts, MCT fat contents of about 50%, and branched-chain amino acids. Powder formulations allow one to customize the nutritional contributions in relation to the patient's needs. More concentrated formulae containing maltodextrin may be useful for improving carbohydrate intake [25,53].

A commercially available, nutritionally complete powdered infant milk for the dietary management of acute and chronic liver disease is composed of dried glucose syrup, 49% MCT, soya and canola oil, whey and casein, vitamins, minerals, and trace elements. It contains more branched-chain amino acids (30% of total protein) and less sodium (0.56 mmol/100 mL) than other infant formulae. Its low lactose content, which avoids saturating intestinal lactase capacity, is a further advantage when high volumes must be administered to increase caloric input (e.g., by enteral feeding). Apart from one report on its safety and tolerability, no published studies on this product are available, and there was no evidence of improved growth parameters when cholestatic children were given the product as their primary energy source [84].

Oral feeding is preferable only if sufficient energy and nutrient supply can be secured. In infants, except in some cases such as those due to galactosemia, breastfeeding should be encouraged for its numerous advantages, provided that supplementation with a breast milk fortifier containing MCT and proteins assures adequate intake of calories and nutrients. Later in life, children with CCLD require a complete nutritional assessment and nutritional therapy. Again, besides some rare inborn errors of metabolism (IEM) that may need special treatment, a standardized, hypercaloric diet with MCT should be used for preventing and/or correcting malnutrition. Small and frequent feeding may be useful to reduce anorexia and early satiety, prevent hypoglycemia, and avoid muscle catabolism.

When oral nutrition does not guarantee adequate nutritional intake and growth falters because a large intake cannot be tolerated via the oral route, patients should start nutritional supplementation with moderate calorie and high protein intake. In this case, constant attention to palatability, osmolarity, and exceedingly high lipid levels is necessary. When rapid growth is needed (e.g., before liver transplantation), enteral feeding with a nasogastric tube may be the best option. Bolus feeding is preferred in the first instance because it is more physiological. When bolus feeding is not tolerated, continuous infusion administered by a peristaltic pump for up to 20 h/day is appropriate [22]. One study comparing an MCT-fortified formula administered orally versus by enteral nutrition (EN) showed that EN prevented malnutrition and growth impairment in infants with BA awaiting liver transplantation [46].

Night-time nasogastric tube feeding is another option; the child is allowed to eat ad libitum during the day and receives extra energy while sleeping, thereby maintaining some normal feeding behavior [38]. Importantly, regular non-nutritive sucking and oral stimulation should be implemented to reduce the development of oral aversion. Feeding aversion may be prevented, in part, by promoting daytime oral intake and encouraging children to experiment with various age-appropriate flavors and textures [41]. Last, but not least, the oral route has a trophic action on the intestine and liver, and it protects against infections.

When nasogastric tube feeding is not feasible or is inadequate to meet caloric needs, jejunal tube feeding [85] or even an endoscopic gastrostomy (if portal hypertension is mild) [86–88] may be proposed, provided a multidisciplinary team can offer active follow-up and care for the child and the device is available. Gastrostomy feeding, however, should generally be avoided in children with CCLD because of the risk of peritoneal infection and variceal bleeding (if portal hypertension is present) [36,89]. Newer approaches, such as low-profile gastrostomy, designed to avoid worsening existing portal hypertension and peristomal variceal complications, remain scarcely documented in this category of pediatric patients [90].

Some children with severe liver disease may require parenteral nutrition (PN). These include children who cannot tolerate oral nutrition due to osmotic diarrhea or vomiting, and those with recurrent variceal bleeding. PN is also indicated when enteral feeding fails to realize growth targets. The use of PN in children with compensated liver disease follows standard principles, and if it is a short-term indication, it is not associated with hepatobiliary dysfunction or worsened cholestasis [91]. On the other hand, in children with decompensated liver disease (usually also awaiting liver transplantation), prolonged PN might lead to worsening liver disease (parenteral nutrition-associated liver disease (PNALD)) or cholestasis (PNAC) [92]. The risk is generally higher in infants, especially if premature, and in those with associated extreme short gut [93]. To prevent the onset of PNALD or PNAC, it is advisable to monitor bilirubin and to administer, when possible, simultaneous enteral calories, even in small volumes (so-called "minimal enteral intake"). Enteral feeding, at least intermittently, is protective for liver function and may restore mucosal integrity, minimizing small intestinal bacterial overgrowth and promoting bile flow [92]. In addition, the administration of UDCA has been shown to have a beneficial effect in PNAC in adults and children, and it may therefore be indicated in these patients [94,95]. The prophylactic use of UDCA for parenteral nutrition cholestasis is of unproven efficacy but acceptable. PN requires strict monitoring, as it may result in fluid and sodium overload and worsening ascites, and central line-associated bloodstream infections have also been reported [96]. PN with a partially or completely fish-oil-based lipid content may be advisable for halting and reversing liver disease [97].

Altogether, PN has pros and cons; it may significantly improve the nutritional status prior to LT, with a beneficial effect on the outcome [96,98], but in some cases, it may aggravate jaundice [92] and introduce a risk of fatal vascular access-related complications in an already fragile subject [99]. The home management of enteral/parenteral nutrition is possible after appropriate family education [100,101].

#### **4. Special Diets in Some Common or Special IEM Causing Cholestasis**

#### *4.1. Tyrosinemia Type 1*

Tyrosinemia type 1 (OMIM # 276700) is a rare autosomal recessive disorder of the tyrosine degradation pathway caused by a deficiency of fumarylacetoacetate hydrolase, resulting in the accumulation of toxic metabolites (e.g., succinylacetone). The clinical manifestations vary and include neonatal cholestasis with acute liver failure, hepatocellular carcinoma, growth retardation, renal dysfunction, and a porphyria-like syndrome with neuropathy [102]. The current specific treatment consists of 2-(2-nitro-4-trifluoromethylbenzoyl)-1,3-cyclohexanedione (NTBC) paired with a tyrosine- and phenylalanine-restricted diet. Many commercial products containing tyrosine- and phenylalanine-free amino acid mixtures supplemented with vitamins and minerals are available [103]. The aim of combined treatment (NTBC and diet) is to provide adequate nutrition that allows normal growth and development while keeping tyrosine levels in the blood and tissues under control [104].

#### *4.2. Galactosemia*

Classic galactosemia (OMIM # 230400) is a rare inborn disorder of carbohydrate metabolism caused by a defect in the GALT gene encoding galactose-1-phosphate uridyltransferase, resulting in an inability to metabolize galactose to glucose. If not treated with a lactose- and galactose-free diet, this condition can cause neonatal cholestasis, liver failure, kidney damage, sepsis, mental retardation, and death. A galactose-restricted diet should be promptly initiated using soy formulas, not only in symptomatic infants but also in those with a highly suspicious newborn screening result [105]. With the introduction of solid foods, nutritional management becomes more complicated, as trace amounts of hidden galactose are naturally found in fruits, vegetables, bread, and legumes [106]. The treatment guidelines for galactosemia, published in 2017, recommend eliminating sources of lactose and galactose from dairy products but allowing any amount and type of fruit, legume, vegetable, or mature cheese (with galactose content <25 mg/100 g), as these contain only small amounts of galactose [107]. The restriction of dairy products is associated with inadequate calcium intake and a risk of diminished bone mineral density unless proper supplementation is initiated [108,109].

#### *4.3. Hereditary Fructose Intolerance*

Hereditary fructose intolerance (OMIM# 229600) is a disorder that arises in infancy, at the start of weaning, when fructose and sucrose are introduced into the diet, or even before if an infant is given medications containing sucrose. Generally, the child presents with recurrent vomiting, abdominal pain, (sometimes fatal) hypoglycemia, growth retardation, liver failure, and renal tubulopathy (in the case of long-term fructose exposure). Older patients who survive infancy develop a natural avoidance of sweets and fruits [110].

#### *4.4. Citrin Deficiency*

Citrin deficiency, or neonatal-onset type II citrullinemia (OMIM # 605814), is a genetic disorder caused by homozygous or compound heterozygous mutations in the SLC25A13 gene (603859), causing various metabolic abnormalities. Its clinical manifestations vary from infancy to adulthood, ranging from cholestasis, fatty liver, and growth retardation in infancy to liver dysfunction and neuropsychiatric symptoms in childhood and adulthood. Although in almost all infants it is a self-limiting condition, with symptoms and biochemical markers improving with age, severe hepatic dysfunction requiring liver transplantation has also been described [111,112]. It is recommended to maintain a low carbohydrate intake and a protein- and fat-rich diet, with a protein–fat–carbohydrate ratio of 15–25%:40–50%:30–40% [113]. In fact, due to the impairment of NADH shuttling and glucose metabolism, patients have an aversion to carbohydrates and a peculiar preference for high-protein and high-fat foods, in contrast to patients with other urea cycle defects [114]. Excessive carbohydrate intake causes symptoms such as fatigue, anorexia, weight loss, psychiatric symptoms, and liver failure, and it is the most significant risk factor for adult-onset type II citrullinemia [113,115]. In patients with hypergalactosemia, the use of lactose-free milk containing MCT is associated with improved energy metabolism and liver function [116]. The administration of sodium pyruvate (0.1–0.3 g/kg/day) [83] improves clinical symptoms and helps in weight recovery [117,118].
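The reported macronutrient split can be checked arithmetically. The following is a minimal Python sketch, assuming the 15–25%:40–50%:30–40% ratio refers to shares of total energy and using standard Atwater factors (4/9/4 kcal/g); the example day's intake is hypothetical:

```python
ATWATER = {"protein": 4.0, "fat": 9.0, "carbohydrate": 4.0}  # kcal/g

# Target energy shares reported in the text for citrin deficiency:
# protein 15-25%, fat 40-50%, carbohydrate 30-40%.
TARGETS = {"protein": (0.15, 0.25), "fat": (0.40, 0.50),
           "carbohydrate": (0.30, 0.40)}

def energy_shares(grams):
    """Convert grams of each macronutrient to fractional energy
    shares using standard Atwater factors."""
    kcal = {k: grams[k] * ATWATER[k] for k in ATWATER}
    total = sum(kcal.values())
    return {k: v / total for k, v in kcal.items()}

def within_targets(grams):
    """Check a day's intake against the reported target ranges."""
    shares = energy_shares(grams)
    return {k: lo <= shares[k] <= hi for k, (lo, hi) in TARGETS.items()}

# Hypothetical day: 60 g protein, 60 g fat, 100 g carbohydrate
# (240 + 540 + 400 = 1180 kcal; shares ~20%, ~46%, ~34%).
day = {"protein": 60, "fat": 60, "carbohydrate": 100}
print(within_targets(day))  # {'protein': True, 'fat': True, 'carbohydrate': True}
```

This only illustrates the arithmetic behind the cited ratio; actual dietary planning for citrin deficiency requires metabolic specialist input.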

#### **5. Pre- and Post-Transplant Nutritional Status of Children with End-Stage Cholestatic Liver Disease**

Although some causes of neonatal cholestasis have no specific treatment, affected children may benefit from appropriate nutritional support to prevent malnutrition and to correct macro/micronutrient deficiencies. This is paramount because a better pre-transplant nutritional status is associated with better post-transplant outcomes, and lower mortality and morbidity [18,68,119–121]. The nutritional needs of children with liver disease are outlined schematically in Figure 5.

A comparison of nutritional needs of children with chronic cholestatic liver diseases before and after liver transplantation [6,41,54,120] is shown in Table 3.

**Figure 5.** Nutritional needs of children with liver disease. BCAA, branched-chain amino acids; CHO, carbohydrates; LCP, long chain polyunsaturated fatty acids; MCT, medium-chain triglycerides; PUFA, polyunsaturated fatty acids.

**Table 3.** Nutritional needs of children with chronic cholestatic liver diseases before and after liver transplantation.


BCAA, branched-chain amino acids; EAR, estimated average requirement; LCPUFA, long-chain polyunsaturated fatty acids; MCT, medium-chain triglycerides; PUFA, polyunsaturated fatty acids. Bold characters indicate the different nutrient groups. Modified from Yang et al. (2017) [41].

Useful literature on the nutritional management of children undergoing liver transplantation is limited. No pediatric studies have shown which approach to feeding (e.g., nasogastric, oral, or nasojejunal) is superior or what impact it has on post-liver transplantation outcomes. However, if the child eats orally up until the transplant, oral feeding should be continued with a healthy, age-appropriate diet, also offering foods that the child disliked before the transplant, as taste changes are common. Post-transplant feeding should start as soon as possible, ideally within the first 72 h if the child is stable [54]. After liver transplantation, weight gain seems to recover completely despite previous malnutrition; even mid-arm muscle mass and mid-arm fat start to improve rapidly within 3–6 months [41,120]. Although marked catch-up growth is observed in those who are more stunted before transplantation, transplanted children generally achieve final heights below their genetic targets [120,122]. Rejection, re-transplantation, and metabolic disease are independent risk factors associated with shorter stature; the post-transplant use of steroids for autoimmune (e.g., de novo autoimmune hepatitis) or allergic complications also contributes [123,124]. In addition, underlying genetic conditions (e.g., Alagille syndrome) may contribute to permanently low stature.

#### **6. Post-Transplant Obesity with Fatty Liver and MetS Risk: The Malnutrition in Excess Paradox**

Most children catch up to their peers in weight by one year post-transplant [125–127], but it may take up to five years to catch up in height [93]. This imbalance may lead to overweight and obesity within two years post-transplant. Careful nutritional counselling and close follow-up to identify children at risk for persistent overweight/obesity may enable targeted interventions to prevent not only obesity [128] but also long-term obesity-related comorbidities, such as metabolic syndrome [129,130] and non-alcoholic fatty liver disease (NAFLD), which has now been more correctly renamed metabolic-associated fatty liver disease (MAFLD) [131].

Post-transplant metabolic syndrome (PTMS) is increasingly recognized as a significant contributor to long-term morbidity and even mortality after solid-organ transplantation [132,133]. However, data on the prevalence of PTMS in children are scarce. Hypertension and glucose intolerance seem to be most common in the early post-transplant period; even five to ten years post-transplant, however, these comorbidities are much more common in children after liver transplantation than in the general population [134]. Liver transplant recipients are predisposed to these comorbidities predominantly because of the side effects of the immunosuppressive therapy used to prevent rejection. The side effects of glucocorticoids (GCs) are widely documented: GCs cause glucose intolerance by reducing peripheral insulin sensitivity (particularly in muscle, the liver, and adipose tissue) and decreasing glucose utilization, and they also induce dyslipidemia, hypertension, and hepatic steatosis [135–137]. The high prevalence of impaired glucose tolerance (IGT) in liver transplant recipients is also due to the prolonged use of calcineurin inhibitors (CNIs). Tacrolimus is more diabetogenic than cyclosporine [138], as it impairs beta cell function [139] and/or promotes insulin resistance [140]. CNI exposure is also associated with systolic hypertension, owing to direct nephrotoxicity, enhanced sodium reabsorption, and systemic vasoconstriction [141,142], and with increased circulating LDL cholesterol [143]. PTMS, influenced by weight gain and drug-induced hyperlipidemia and diabetes, is a crucial factor in the development of de novo NAFLD in liver transplant recipients [144]. Guidelines from the American Association for the Study of Liver Diseases and the American Society of Transplantation recommend annual screening for obesity, hypertension, dyslipidemia, and diabetes mellitus with physical examination and fasting blood tests [145,146].
For this reason, it is essential to identify and test interventions to improve the screening and management of PTMS in order to improve long-term outcomes.

#### *Sarcopenia*

Sarcopenia is a reduction in muscle mass (muscle wasting). Few studies have evaluated its impact in this special pediatric population. Reduced muscle mass may be assessed using several methods that require the subject to stay motionless during measurement, which is challenging for infants and young children. These include dual-energy X-ray absorptiometry (DEXA), CT, MRI, and bioelectric impedance analysis (BIA). Radiation exposure restricts the use of CT unless it is already part of clinical staging (e.g., before OLT), while radiation-free MRI is expensive and may require sedation in pediatric-age subjects. DEXA is a useful substitute, but whether whole-body skeletal muscle mass and appendicular lean mass perform equally well as markers of sarcopenia remains controversial. The poor availability of age- and gender-matched normative data is a further obstacle.

CCLD pediatric patients may develop liver insufficiency and need early liver transplantation. As in adults, both malnutrition and sarcopenia compromise pediatric posttransplantation outcomes [147,148]. In liver-transplanted children, sarcopenia is associated with relevant clinical outcomes (growth retardation, length of hospitalization, and rate of readmission) [149]. The use of an internal comparison for the diagnosis of sarcopenia may cause different series to have different cutoffs, which hampers the generalization of results [150,151].

Myopenia, caused by decreased protein synthesis and increased protein degradation, is another clinical manifestation of end-stage liver disease (ESLD). It is associated with adverse clinical outcomes, confirming the importance of rehabilitation strategies [152].

Although several valid malnutrition screening tools for identifying nutritional risk are available, an increased risk of protein malnutrition may be masked by edema, and further work on tools to distinguish between protein–energy malnutrition (PEM), sarcopenia, and cachexia is required for the pediatric age group. Functional assessment of muscle sarcopenia in pediatric CCLD is not generally performed because appropriate muscle function tests have not been developed for early childhood, and a standardized assessment of muscle function for the diagnosis of sarcopenia in young pediatric patients is currently lacking [150,151].

CT-derived body metrics (e.g., the skeletal muscle index (SMI), psoas muscle index (PMI), psoas muscle surface area (PMSA), and subcutaneous fat index (ScFI)) are measurable components of sarcopenia, frailty, and nutrition. The relationship between preoperative metrics and post-LT outcomes in pediatric recipients <1 year old with cirrhotic liver disease is unknown. Fragmentary data suggest that preoperative values are significantly reduced compared with controls and correlate with moderate to severe postoperative infections and with longer hospital and ICU stays [153]. A higher re-operation rate and longer hospital stays following transplantation, but not waitlist mortality, were associated with a lower PMI [154] or PMSA [155]. Sarcopenia, evaluated by total psoas muscle area (tPMA) measurements from computed tomography (CT) imaging, was prevalent in pediatric patients with ESLD awaiting LT, highlighting the need for nutritional support before and/or after LT in the PICU [156]. Only one study [150] examined both muscle mass and muscle function in children with CLD, as recommended by the recent EWGSOP consensus [157].

#### **7. Conclusions**

The causes of malnutrition in CCLD are varied. Although some causes of cholestasis have no specific treatment, all children with CCLD should have a periodic evaluation of clinical, anthropometric, biochemical, and instrumental nutritional indicators. In particular, it is important to identify specific nutritional needs and any deficiencies that require early preventive or corrective treatment. Supplemental nutrition, including MCT, essential fatty acids, branched-chain amino acids, and vitamins, is crucial for overcoming anorexia and preventing growth retardation.

Malnutrition is associated with increased risks of morbidity and mortality, and it also affects the outcomes of liver transplantation and long-term survival; failure of nutritional care is therefore an indication for reviewing liver transplant timing. To improve life expectancy and quality of life, patients with CCLD need careful assessment of their nutritional status by a multidisciplinary team capable of initiating appropriate interventions. The complex dietary management of this type of malnourished patient requires new studies to improve commercially available, nutritionally complete infant milk formulae.

**Author Contributions:** Conceptualization C.M. and P.V.; literature search and data curation, M.T., E.S., A.C. and G.S.D.C.; writing—original draft preparation M.T., E.S., C.M., G.S.D.C. and A.C.; writing—review and editing C.M. and P.V. All authors were involved in the writing of the draft. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Malnutrition in Older Adults—Recent Advances and Remaining Challenges**

**Kristina Norman 1,2,3,4,\*, Ulrike Haß 2,3 and Matthias Pirlich <sup>5</sup>**

<sup>5</sup> 12159 Berlin, Germany; pirlich@kaisereiche.de

**Abstract:** Malnutrition in older adults has been recognised as a challenging health concern associated not only with increased mortality and morbidity, but also with physical decline, which has wide-ranging acute implications for activities of daily living and quality of life in general. Malnutrition is common and may also contribute to the development of the geriatric syndromes in older adults. Malnutrition in the old is reflected by either involuntary weight loss or low body mass index, but hidden deficiencies such as micronutrient deficiencies are more difficult to assess and therefore frequently overlooked in the community-dwelling old. In developed countries, the most cited cause of malnutrition is disease, as both acute and chronic disorders have the potential to result in or aggravate malnutrition. Therefore, as higher age is one risk factor for developing disease, older adults have the highest risk of being at nutritional risk or becoming malnourished. However, the aetiology of malnutrition is complex and multifactorial, and the development of malnutrition in the old is most likely also facilitated by ageing processes. This comprehensive narrative review summarizes current evidence on the prevalence and determinants of malnutrition in older adults, spanning from age-related changes to disease-associated risk factors, and outlines remaining challenges in the understanding, identification and treatment of malnutrition, which in some cases may include targeted supplementation of macro- and/or micronutrients when diet alone is not sufficient to meet age-specific requirements.

**Keywords:** malnutrition; ageing; inflammaging; sarcopenia; anorexia of aging; micronutrients; DoMAP; GLIM criteria

#### **1. Introduction**

The World Health Organization (WHO) has declared Healthy Ageing a priority of its work on ageing between 2016 and 2030 and has developed a policy framework which emphasizes the need for action across multiple sectors [1]. The aim of the program is to enable older persons to develop and maintain the functional ability that permits wellbeing and allows them to partake in society [1]. Older adults (individuals aged 65 or over) are the fastest growing age group, and projections from the United Nations predict that by 2050 the number of adults aged 65 or over will be twice the number of children under the age of five and will also surpass the number of adolescents aged between 15 and 24 years. By 2050, improvements in survival are expected to add approximately five years to the world's life expectancy at birth, which was 72.6 years in 2019 [2].

**Citation:** Norman, K.; Haß, U.; Pirlich, M. Malnutrition in Older Adults—Recent Advances and Remaining Challenges. *Nutrients* **2021**, *13*, 2764. https://doi.org/10.3390/nu13082764

Academic Editor: Ina Bergheim

Received: 29 June 2021; Accepted: 10 August 2021; Published: 12 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The biology of ageing is understood as the time-related decline of physiological functions, leading to changes in the functional performance of different organ systems as well as to reduced resilience to physical, cognitive and mental stressors; however, there are great individual differences in these changes. Advanced age is associated with reduced adaptive and regenerative capacity, which leads to higher rates of morbidity [3]. On the other hand, the presence of age-associated diseases in middle-aged individuals has been interpreted as a sign of accelerated ageing [4]. Maintaining an adequate nutritional status as well as a sufficient nutrient intake is key to health and quality of life and as such is one prerequisite for wellbeing in higher age and a modulator of healthy ageing as defined by the WHO. However, older adults are susceptible to nutritional problems and ultimately also to malnutrition through a variety of mechanisms [5]. As age is one main risk factor for the development of chronic disease, older persons are particularly susceptible to disease-related weight loss, loss of muscle mass and strength (i.e., sarcopenia) and, ultimately, the frailty syndrome, all of which can fundamentally impact recovery from disease and clinical outcome in general [6–8]. Weight loss, a marker of macronutrient deficiency and/or catabolism, is a common key initial phenomenon in old patients, which sets off a catabolic cascade of unfavourable events resulting in higher morbidity and mortality. The causes of weight loss in higher age are multifactorial but can in part be attributed to disease processes such as catabolic events, disease- or age-related anorexia ("anorexia of aging") and subsequently insufficient dietary intake, as well as to increased inflammatory status, depressive or cognitive disorders [9] and decreased socio-economic status [10]. Even outside the context of disease or manifest disorders, both ageing processes and age-associated changes can slowly impact physiology and metabolism and thus effect a gradual change in the nutritional status of older adults [11].

The treatment of malnutrition requires early identification and multimodal intervention, in hospitalized patients as well as in community-dwelling older adults. However, the choice of treatment modality still poses a challenge for nutritional therapy, with several questions remaining open [12,13]. This review summarizes the current state of evidence on the complex aetiology of malnutrition in older adults, considering both the effects of ageing processes and disease-related factors. Remaining challenges in the identification and treatment of malnutrition in the old are also outlined.

#### **2. Impact of Malnutrition in the Old**

Although clinical malnutrition predominantly occurs in patients in hospitals, care situations or nursing homes, malnutrition, nutritional risk and, in particular, specific nutrient deficiencies are also common albeit frequently overlooked in community-dwelling old people [14,15].

Consequences of malnutrition are deleterious and far reaching and have been described in detail [6]. While disease-related malnutrition is not limited to older adults, it is more frequent in higher age, and the consequences appear to be more severe in older persons due to their impaired regenerative capacity.

Malnutrition in general has serious implications for clinical outcome and for recovery from disease, trauma and surgery; it is associated with increased morbidity and mortality in both acute and chronic disease [6] and has thus been acknowledged as a serious burden for the health care system [16].

Depending on the type of malnutrition, protein catabolism can be pronounced. Disease-related malnutrition therefore leads to rapid skeletal muscle wasting, whereas age-related malnutrition is associated with a slower but progressive loss of muscle mass. The effects of protein catabolism are prominently reflected by lower muscle mass, muscle strength and function with severe implications for physical performance [17]. At the same time, malnutrition [18] but also reduced dietary protein intake per se [19] are associated with a decrease in bone mineral mass in higher age. Together with poor physical performance and coordination resulting in higher fall risk, these factors accelerate the age-related risk of osteoporosis and osteoporotic fractures [20]. Taken together, the risk of falling and the subsequent loss of independence and disability are greatly increased.

Malnutrition-related protein catabolism and micronutrient deficiency have been prominently linked to the impairment of immune function [21], which is already affected in higher age. In older malnourished adults, this manifests as the loss of cell-mediated immunity in particular [22], increasing the risk for infection and delaying recovery from disease [23]. Several studies have thus shown a close relationship between malnutrition and risk for infection, such as healthcare-associated infections [24], infectious complications and subsequent longer stays in intensive care units (ICU) and increased ICU mortality in older malnourished patients [25].

Wound healing as well as tissue recovery are also impaired in malnutrition, which in part can be attributed to micronutrient deficiency [26]. This clearly predisposes older adults to wound healing disorders and chronic wounds which are a great burden to patients as well as associated with decreased quality of life and furthermore with higher expenditure in the health care setting [27].

One particularity of malnutrition in older adults is that the impact appears to be more severe than in younger adults. Studies have not only shown that changes in body composition in malnutrition occur to a greater extent in older compared to younger adults [28], but also that recovery of low body cell mass or muscle mass is impaired in higher age following weight loss [17]. This predisposes malnourished older adults to the risk of developing the so-called geriatric syndromes which have been described as multifactorial syndromes typical in higher age [29]. Geriatric syndromes greatly compromise health status, cognitive functioning, functional ability, and compensatory capacity [29] and result in higher mortality, albeit predominantly in the younger old [30].

#### *2.1. Role of Malnutrition in the Geriatric Syndromes Frailty, Fatigue, Sarcopenia*

Malnutrition plays an important role in the development of certain geriatric syndromes. Geriatric syndromes are complex multifactorial conditions occurring in higher age with serious implications for health [29] and have been described as "phenotypical presentations of accumulated and underlying ageing-related dysfunctions spanning over different organ systems" [31]. They include (but are not limited to) dementia and delirium, depression, incontinence, fall risk, visual as well as hearing impairment, wound healing disorders, frailty, and sarcopenia [32].

Involuntary weight loss, a hallmark of malnutrition, is inevitably associated with loss of skeletal muscle mass, which appears to occur to a greater extent in higher age. This increases the risk of developing sarcopenia, a phenomenon characterized by the loss of both muscle mass and muscle strength and function. As these two entities frequently occur together, this has led to the new term "sarcopenia malnutrition syndrome" [33], and a need for new screening tools which reliably identify both conditions has been voiced [34].

A phenomenon which is often overlooked in this context but is nonetheless important in an increasingly obese society is sarcopenic obesity [35], which describes a condition with reduced lean mass and increased fat mass, resulting in a high-risk body composition phenotype. This underlines the importance of evaluating muscle mass independently from weight loss, since age-related changes in body composition, such as increases in (visceral) fat mass, may well mask low lean mass. Furthermore, ectopic fat infiltration in the muscle lowers the quality of skeletal muscle and thereby impairs muscle functionality [35].

As sarcopenia has been called the biological substrate of frailty, the close relationship between malnutrition and sarcopenia suggests a link between malnutrition and frailty as well. Not surprisingly, involuntary weight loss, which is an indicator of catabolism, is a major risk factor for developing physical frailty [36]. Weight loss is therefore not only one of the five factors (together with fatigue, weakness, slow gait speed and low physical activity) which constitute frailty as defined by the frailty phenotype [37], but is also linked causally to the other four factors [36]. As such, there is a close and well-documented relationship between malnutrition and frailty [38], although they are considered conceptually distinct conditions. Numerous studies have shown a significant overlap of both entities in hospital patients [39] and in the community-dwelling old [40,41]. There is even evidence that consequences frequently attributed to malnutrition might in reality be due to the effects of frailty. An evaluation of 2804 community-dwelling older adults from the Singapore Longitudinal Aging Study II revealed that functional decline, measured by decreased (instrumental) activities of daily living, disability, impaired quality of life and, in particular, long-term mortality, was more apparent in older adults with physical frailty, with or without malnutrition [42].

Moreover, fatigue is also one of the core elements of frailty. Fatigue has been described as a relentless exhaustion affecting the ability to carry out physical and mental activities [43] and has been linked to age-related mitochondrial dysfunction [44,45]. Old patients with severe involuntary weight loss at discharge from hospital had a significantly higher risk of severe fatigue, which in turn compromises post-hospital recovery [46]. Nutritional status, including both malnutrition and obesity, has moreover been identified as an important modulator of fatigue [47], and more evidence is warranted on the role of dietary approaches, including anti-inflammatory diets, in the treatment of fatigue [48].

Due to the proposed relationship between long-term dietary patterns and cognitive function, malnutrition has also been linked to cognitive impairment [49,50], although the relationship is complex and difficult to disentangle and more studies are needed on this subject [50]. Similarly, there is a close interaction between malnutrition and depression [51–53], but causality is difficult to establish, as the relation is most likely mutual.

#### *2.2. Prognostic Impact of Malnutrition in the Old on Mortality*

It is well established that malnutrition is associated with increased mortality in both acute and chronic disease, and this effect is also observed in older adults. The risk for both short-term mortality in acute conditions [54] and long-term mortality in chronic disease is significantly increased [55]. One large cohort study included 1767 older individuals with a variety of diseases ranging from cancer to diseases of the circulatory or respiratory system and revealed that the increased risk of mortality due to malnutrition existed irrespective of the cause of death [56].

#### **3. Malnutrition: Definition and Types and How to Screen for Them**

Despite an ongoing debate, there is still no universally accepted definition of malnutrition [57]. Therefore, in 2016, the world's four leading clinical nutrition societies (ESPEN, ASPEN, FELANPE, and PENSA), representing more than 70 national scientific societies, started a consensus process in order to develop criteria for malnutrition which could be used in all clinical settings on a global scale. The resulting concept of the Global Leadership Initiative on Malnutrition (GLIM) [58,59] was published in 2019 and considers three phenotypic criteria for the diagnosis of malnutrition: weight loss (>5% within the past 6 months, or >10% beyond 6 months), low body mass index (BMI) (<20 kg/m2 if <70 years, or <22 kg/m2 if >70 years), and reduced muscle mass (according to validated body composition techniques), as well as two etiologic criteria: reduced food intake or assimilation (≤50% of energy requirements for >1 week, or any reduction for >2 weeks, or any chronic gastro-intestinal condition that adversely impacts food assimilation or absorption), and inflammation (acute disease/injury or chronic disease-related). It is proposed that the diagnosis of malnutrition is based upon the presence of at least one phenotypic and one etiologic criterion; in a second step, different thresholds of the criteria can be used for severity grading of malnutrition. The GLIM criteria are the subject of numerous ongoing validation studies. A recently published study on community-dwelling older adults participating in a long-term osteoporosis trial in Hong Kong demonstrated that the GLIM criteria were associated with a higher risk for sarcopenia, frailty, and mortality during a 14-year follow-up period [60]. While the GLIM criteria are not age-specific, they do include age as a risk factor among the components.

For geriatric patients, the guidelines on enteral nutrition in geriatrics by the European Society for Clinical Nutrition and Metabolism (ESPEN) have defined clinical malnutrition as the presence of weight loss reflecting a catabolic state (>5% in six months) and/or a low BMI (i.e., below 20 kg/m2) representing depleted physiological stores [61].
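
The GLIM decision rule described above can be sketched as a small helper. The thresholds are those quoted in the text; all function and variable names are our own and not part of the GLIM publication.

```python
# Minimal sketch of the GLIM two-step logic: at least one phenotypic
# AND at least one etiologic criterion must be present. Severity
# grading (the second step) is not modelled here.

def glim_phenotypic(weight_loss_pct, loss_within_6_months, bmi, age,
                    reduced_muscle_mass):
    """Return the list of phenotypic criteria that are met."""
    met = []
    if (loss_within_6_months and weight_loss_pct > 5) or \
       (not loss_within_6_months and weight_loss_pct > 10):
        met.append("weight loss")
    if (age < 70 and bmi < 20) or (age >= 70 and bmi < 22):
        met.append("low BMI")
    if reduced_muscle_mass:
        met.append("reduced muscle mass")
    return met

def glim_etiologic(reduced_intake_or_assimilation, inflammation):
    """Return the list of etiologic criteria that are met."""
    met = []
    if reduced_intake_or_assimilation:
        met.append("reduced intake/assimilation")
    if inflammation:
        met.append("inflammation")
    return met

def glim_malnourished(phenotypic, etiologic):
    # Diagnosis requires at least one criterion from each domain.
    return len(phenotypic) >= 1 and len(etiologic) >= 1

# Example: 75-year-old with 7% weight loss within 6 months, BMI 21.5,
# normal muscle mass, acute inflammation, adequate intake.
p = glim_phenotypic(7, True, 21.5, 75, False)
e = glim_etiologic(False, True)
print(glim_malnourished(p, e))  # True
```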

Nutritional risk is less well defined but is commonly understood to be a condition in which the present nutritional status is at imminent risk of impairment due to a range of factors, such as medical history, comorbidities or drugs, which might increase dietary requirements or interfere with nutrient absorption or metabolism. Further factors may include physical, mental or cognitive impairments which might prevent the older person from properly caring for themselves, as well as socio-economic factors which hinder access to a varied, high-quality diet [61].

Screening tools suitable for use in older adults need to focus on the most important risk factors for malnutrition in high age, in order not only to diagnose manifest malnutrition but also to capture the risk of developing malnutrition as described above. That way, screening tools can be used to identify patients early in order to initiate nutritional treatment. Several screening tools have been established for the specific use in older adults, but the most widely used and most studied instrument is the Mini Nutritional Assessment (MNA) in its long or short form. However, due to the broad range of topics it covers, the specificity of the MNA has been questioned, as it is associated with a high risk of "over-diagnosing" malnutrition in the old [62]. This problem might be countered by complementing the MNA with the GLIM criteria, as suggested [59].

Screening for and treating malnutrition has also been acknowledged as one of the first necessary steps in the identification and treatment of sarcopenia [63]. Recent studies have compared the ability of malnutrition screening tools such as the MNA (short or long form) with that of newer diagnostic tools such as the GLIM criteria [58,59] to predict the development of sarcopenia. In the SarcoPhAge cohort, the GLIM criteria did predict incident sarcopenia, whereas neither of the MNA forms did [17].

#### *3.1. Macronutrient Deficiencies*

Clinical malnutrition results from an imbalance between macronutrient intake and requirement [64], which causes a measurable reduction in tissue and ultimately in weight. Because protein and energy intake are frequently insufficient, it is also commonly referred to as protein-energy malnutrition (PEM) or protein-energy undernutrition (PEU). Consequently, there is a concomitant need for both adequate energy intake (25–30 kcal/kg body weight, depending on the individual situation) and higher protein intake [65]. There is overwhelming evidence that protein requirements are generally higher in older age (1.0–1.2 g/kg body weight) and that adequate protein intake [65,66] is crucial in order to prevent malnutrition and sarcopenia. Recommendations regarding protein intake are mainly based on the concept of anabolic resistance, which describes an impaired capacity of the muscle to respond to anabolic stimuli in higher age. However, protein intake in community-dwelling older adults has been reported to be frequently well below the recommended intake [67,68], which is associated with a higher risk for the development of malnutrition.

Also, depending on the type of malnutrition and underlying disease, protein catabolism can be pronounced. Protein requirements are therefore further increased in older adults with malnutrition or disease (1.2–1.5 g/kg body weight) [65,66].
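
For illustration, the intake targets quoted above (25–30 kcal/kg energy; 1.0–1.2 g/kg protein, rising to 1.2–1.5 g/kg with malnutrition or disease) can be turned into a rough per-day calculator. This is a sketch of the cited ranges with illustrative function names, not clinical guidance.

```python
# Daily energy and protein target ranges per the figures quoted in
# the text. Returns (energy_range_kcal, protein_range_g) tuples.

def daily_targets(weight_kg, malnourished_or_ill=False):
    energy = (25 * weight_kg, 30 * weight_kg)          # kcal/day
    if malnourished_or_ill:
        protein = (1.2 * weight_kg, 1.5 * weight_kg)   # g/day, increased need
    else:
        protein = (1.0 * weight_kg, 1.2 * weight_kg)   # g/day, general older age
    return energy, protein

# Example: 70 kg older adult without acute illness.
energy, protein = daily_targets(70)
print(energy)   # (1750, 2100) kcal/day
print(protein)  # approx. (70.0, 84.0) g/day
```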

#### *3.2. Micronutrient Deficiencies*

One specific form of malnutrition that is frequent in older adults is micronutrient deficiency [10,69]. In contrast to quantitative malnutrition, which is reflected by weight loss, micronutrient deficiencies are much harder to screen for and identify, in part due to methodological issues such as the lack of suitable markers of stored or available micronutrients. The majority of dietary intake surveys have, however, identified an inadequate intake of a broad range of micronutrients in older adults [70]. Since deficiencies of micronutrients such as iron (Fe), vitamins C and D, vitamins B6 and B12, folic acid and the trace element zinc (Zn) have been prominently linked to the impairment of immune function [71], the following chapter focusses on recent findings regarding these nutrients in particular.

The reasons for insufficient micronutrient intake vary, including not only a low overall amount of food but typically also the choice of foods and a lack of variety. Price and the limited availability of foods rich in vitamins, trace elements and minerals can also reduce the intake of micronutrients. On the other hand, ageing is associated with changes that facilitate deficiencies in calcium, vitamin D, vitamin B12, Fe, magnesium, and Zn, among other important nutrients. Large studies on micronutrient serum status in older adults are rare due to the cost and effort involved. The population-based KORA-Age Study, a representative cohort study, recently showed subclinical micronutrient deficiencies in the community-dwelling old [69]: 52.0% of the 1079 older study participants had a vitamin D deficiency (<50 nmol/L), 27.3% had low vitamin B12 levels (<221 pmol/L), 11.0% had insufficient Fe levels (men <11.6 μmol/L, women <9.0 μmol/L), and 8.7% had low folate levels (<13.6 nmol/L). Among the risk factors for these subclinical deficiencies were advanced age, frailty, lack of physical activity and no or irregular use of dietary supplements. Dietary supplement intake is an easily modifiable risk factor, and recent studies have confirmed that regular use of dietary supplements is associated with lower rates of subclinical deficiencies for nearly every micronutrient investigated in the study [72]. Age-specific requirements remain unclear, but higher intakes have been recommended due to the frequently impaired intestinal absorption or distribution [73]. It has also recently been postulated that diet alone might not be enough to meet the age-specific requirements of older adults, and that tailored supplementation of micronutrients might well be necessary [74], for example to support immune function. The intake of drugs may also interfere with the metabolism of micronutrients [75,76], which further compounds the problem, as polypharmacy is common in higher age.
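
As a minimal sketch, the KORA-Age cutoffs quoted above can be expressed as a small screening helper. The cutoff values come from the text; the structure, marker names and function names are illustrative assumptions.

```python
# Flag serum markers that fall below the study cutoffs quoted above.
# This is a didactic sketch, not a validated screening instrument.

CUTOFFS = {
    "vitamin_d_nmol_l": 50.0,      # vitamin D deficiency below 50 nmol/L
    "vitamin_b12_pmol_l": 221.0,   # low vitamin B12 below 221 pmol/L
    "folate_nmol_l": 13.6,         # low folate below 13.6 nmol/L
}
IRON_CUTOFF_UMOL_L = {"m": 11.6, "f": 9.0}  # sex-specific iron cutoff

def subclinical_deficiencies(labs, sex):
    """Return the list of markers below the study cutoffs.

    labs: dict of marker name -> measured value (missing markers skipped)
    sex:  "m" or "f" (used for the sex-specific iron cutoff)
    """
    flags = [name for name, cutoff in CUTOFFS.items()
             if labs.get(name) is not None and labs[name] < cutoff]
    iron = labs.get("iron_umol_l")
    if iron is not None and iron < IRON_CUTOFF_UMOL_L[sex]:
        flags.append("iron_umol_l")
    return flags

labs = {"vitamin_d_nmol_l": 42.0, "vitamin_b12_pmol_l": 300.0,
        "folate_nmol_l": 15.0, "iron_umol_l": 10.0}
print(subclinical_deficiencies(labs, "m"))  # ['vitamin_d_nmol_l', 'iron_umol_l']
```

Note that the same iron value (10.0 μmol/L) is flagged for a man but not for a woman, reflecting the sex-specific cutoffs reported in the study.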

Moreover, it is known that inflammation, which is common in older adults, affects trace element status and their biomarkers, resulting in depleted plasma stores of Fe, Zn, and manganese, a phenomenon which is partly known as nutritional immunity and serves as a defence against invading pathogens [77].

Due to the multifactorial role of micronutrients in various metabolic processes as well as immune functioning, cell proliferation and growth, signalling processes and genomic stability, micronutrient deficiencies are in turn involved in the pathogenesis of a variety of conditions and age-related diseases [78]. One of the most well studied trace elements in higher age is Zn, and dietary Zn intake has also been reported to be frequently insufficient in older adults [78]. Age-associated alterations of intestinal absorption, problems regarding chewing and swallowing, drug-interactions, and impaired subcellular processes in Zn metabolism can further contribute to low Zn absorption and availability. In turn, low dietary Zn intake with subsequent Zn deficiency has been linked to depressive disorders [79], loss of appetite and cachexia in age-advanced old [80], and to increased muscle catabolism via inflammatory cytokine activation [81] as well as to immunosenescence [82] and frailty [83]. Moreover, in older adults, an inadequate intake of antioxidant micronutrients such as vitamin E, carotenoids, and vitamin C has also been linked to the development of impaired muscle strength and physical performance [84–87].

#### **4. Prevalence of Malnutrition**

Despite the body of evidence describing the personal and clinical consequences of malnutrition and its economic impact on the health care system, malnutrition in the old remains a considerable problem with high reported frequencies, especially in situations of dependency [88]. This has been attributed to poor awareness and a lack of time or education among medical as well as nursing staff, but the recognition and treatment of malnutrition in older adults is undeniably a challenge even when identified early. All in all, it is estimated that roughly a quarter of European adults over the age of 65 are at high risk of malnutrition across various settings [89,90].

Prevalence of malnutrition, however, strongly depends on the setting, on underlying or accompanying diseases as well as on the screening and assessment methods used. Numerous studies have investigated the prevalence of malnutrition in hospital and nursing home settings, and recently several meta-analyses have been published which also use meta-regression to further explore its determinants. Although the specific prevalence figures differ between meta-analyses, the main findings are comparable, showing the lowest percentage of malnourished individuals in the community setting and the highest percentage in acute and subacute care settings, higher prevalence rates in higher age, and sex-specific differences, with women at the highest risk. The region and the instrument used to identify malnutrition also had a significant impact on prevalence. One systematic review and meta-analysis of prevalence data of malnutrition and nutritional risk in older adults across different healthcare settings showed a wide range of malnutrition, from 3% in the community setting to approximately 30% in rehabilitation and subacute care, even though the review only included studies using the MNA [91]. This study was recently complemented by two systematic reviews and meta-analyses [89,90]. Leij-Halfwerk et al. included further studies using 22 malnutrition screening tools validated for use in adults aged 65 years or older. They reported pooled prevalence rates of high malnutrition risk across all countries and screening tools which ranged from 8.5% in the community setting to 28.0% in the hospital setting [89,90]. The meta-analysis and systematic review by Crichton and colleagues focussed on studies carried out in community-dwelling older adults who had been assessed with the Subjective Global Assessment (SGA), Patient-Generated SGA (PG-SGA) or MNA and found clear differences across countries, with the lowest prevalence in Northern Europe and the highest in Southeast Asia. Using meta-regression, both meta-analyses reported higher prevalence rates in adults above the age of 80 [89,90], in women, and in patients with one or more comorbidities, as well as a higher prevalence of malnutrition in rural rather than metropolitan regions [90].

Recent advances in prevalence research have also included the re-evaluation of large pooled datasets using harmonized definitions of malnutrition indicators such as low BMI, weight loss, low food intake, and combinations of these. Wolters et al. were able to show varying prevalence of malnutrition according to the different criteria used and concluded that it might be more useful to consider the criteria separately, as each may reflect a distinct nutritional problem [92].

#### **5. Determinants of Malnutrition**

Malnutrition in older adults is of complex and multifactorial origin. A variety of factors, such as lifestyle factors, disease and ageing processes, may be involved, and interaction between these factors is common. Understanding risk factors is crucial in order to address malnutrition effectively, but the complex aetiology of malnutrition is still not fully understood.

In industrialized countries, disease is one of the most common reasons for developing malnutrition, and the onset of malnutrition can be both acute and gradual. Age in itself is an established non-modifiable risk factor for malnutrition. Higher age is associated with physiological changes which can slowly result in or further malnutrition, such as impaired taste and smell, decreased gastric flexibility, and reduced appetite. As higher age clearly increases the risk for disease, there is considerable risk for the potentiation of nutritional problems.

#### *5.1. Modifiable Determinants of Malnutrition*

Recent research has particularly focussed on the evidence for potentially modifiable risk factors of clinical malnutrition, as targeting these risk factors is a fundamental element in the prevention and treatment of malnutrition. A recent meta-analysis investigated which factors are associated with incident malnutrition and identified marital situation, hospitalisation and physical limitations as the most important predictors [93]. Another systematic review categorized the evidence grade of thirty potentially modifiable factors which have been associated with malnutrition; however, it concluded that robust evidence was lacking for most of the studied determinants and highlighted the need for high-quality prospective studies [94].

In a modified Delphi process, an international expert group therefore developed a theoretical framework for the aetiology of malnutrition, termed DoMAP (Determinants of Malnutrition in Aged Persons), which maps potential causative mechanisms [95]. The model consists of three triangle-shaped levels and illustrates the suggested direction of causality of the various risk factors for developing malnutrition. Malnutrition is at the inner core of the DoMAP model, surrounded by different layers of risk factors. The innermost layer consists of the three principal conditions which result in malnutrition (low intake, increased requirements, and impaired nutrient bioavailability). The next layer consists of factors which are believed to directly cause one of these conditions, and the outermost level comprises factors which act on the direct factors and thus indirectly cause one of the three main conditions. By illustrating the sequence of events which can cause malnutrition, the DoMAP model offers a better understanding of the aetiology, which can be used both in research and in clinical routine. By addressing factors which can potentially impact the three principal components and thus cause malnutrition, the model also offers the possibility of early identification of patients at risk for malnutrition.

#### *5.2. Age-Associated Changes as Risk Factors*

Physiological factors which may precipitate malnutrition in higher age include sensory impairments such as diminished taste or olfactory dysfunction, delayed gastric emptying, and disturbed motility, leading to a functional decline of the ageing gut [96]. Ageing is also associated with an increase in colonic transit time, increased intestinal permeability, and, ultimately, an altered intestinal microbiota [97], which includes loss of biodiversity, enrichment in opportunistic pathogens, and a concomitant reduction of health-associated species, such as short-chain fatty acid-producing species [98]. These changes in the microbiome have recently been implicated in the development of loss of appetite and frailty and as such may also promote malnutrition [99,100], but more studies are needed in this context.

Moreover, a decrease in gastrointestinal hormones (e.g., ghrelin), with concomitant adverse changes in anorectic signalling (e.g., neuropeptide Y, peptide YY (PYY), orexin A, leptin, cholecystokinin (CCK)) [101–103], has been described, leading to altered appetite regulation in higher age. While ghrelin is the only appetite-enhancing peptide, other hormones such as CCK and PYY are recognized as relevant mediators of satiety. These gut-derived peripheral hormones as well as systemic insulin, glucose and fatty acid levels regulate central appetite and hunger in the hypothalamic region in a feedback loop [104–106]. As has been shown in a postprandial setting with 14 frail and 20 non-frail old adults (>70 years) as well as 19 young adults (20–65 years), older persons have higher levels of fasting insulin and glucose as well as increased and prolonged postprandial insulin, glucose and CCK levels compared to younger adults [107]. Furthermore, frail older persons exhibited significantly less hunger in the fasting condition and impaired gastric emptying and gallbladder contraction in the postprandial period [107]. Similarly, in a small study, the suppressed postprandial hunger in the old was paradoxically accompanied by sustained ghrelin levels [108].

Last but not least, in the case of reduced smell and taste, the reward system in the central nervous system (CNS) is hampered in the perception of pleasure associated with palatable foods [109], which may in turn further decrease dietary intake.

Some of these pathway dysregulations can be attributed to the chronically elevated inflammation observed in higher age, since enhanced cytokine levels (e.g., tumour necrosis factor alpha (TNF-α), interleukin (IL)-1β) are able to affect appetite regions in the CNS [103,110]. The resulting loss of appetite and early satiety seen in older adults has been termed "anorexia of aging" and leads to an insufficient food intake with a higher risk for both quantitative and qualitative malnutrition [111,112]. One risk factor for anorexia in healthy older adults is, not surprisingly, ageing itself [113]. Furthermore, body weight dissatisfaction, which might lead to avid dieting, weight loss and eating disorders, is frequent in older adults [114] and has been studied as a possible precursor of appetite loss. Ultimately, anorexia of ageing has even been proposed to be a geriatric syndrome [115], as it significantly and independently affects nutritional and functional status in the old [116,117].

Postprandial regulation in older adults has gained increasing attention in the last decade, especially with regard to muscle protein synthesis. In the old, a blunted postprandial muscle protein synthesis in response to dietary protein has been described [118–120], whereas basal muscle protein synthesis does not appear to be affected [121]. Moreover, the lower protein balance response to hyperinsulinemia in older adults indicates insulin resistance of protein metabolism [122]. The so-called anabolic resistance seen in higher age, i.e., an impaired capacity of the muscle to respond to anabolic stimuli (such as dietary protein and resistance exercise), serves as one explanation for the slow onset of sarcopenia [123].

Recent research has also focussed on the postprandial regulation of metabolic parameters. Age-related changes in postprandial glucose and insulin are well established [107,124], but more recently, age-specific changes in the postprandial dynamics of fibroblast growth factor 21 (FGF21) have also been shown, resulting in considerably higher values in older compared to younger adults [125]. FGF21 is an important metabolic parameter which regulates glucose and lipid metabolism, but higher levels in older adults are paradoxically associated with higher mortality [126] and have been implicated in the loss of muscle [127,128] and bone mass [129] as well as in the cachexia anorexia syndrome in old hospitalized patients [130]. Altered postprandial regulation of FGF21 might help explain these higher values. Similarly, concentrations of adiponectin, a mediator of FGF21 functions, increase with age [131] and are also associated with all-cause mortality in the old [132]. Again, alterations in the postprandial adiponectin response, which were positively associated with the FGF21 response, have been shown in higher age [133].

Taken together, in contrast to disease-related mechanisms such as catabolism, which can result in acute weight loss and malnutrition, altered appetite regulation and changes in postprandial metabolism in higher age are associated with gradual changes that may not necessarily result in manifest malnutrition but can contribute to or exacerbate existing nutritional problems and facilitate nutritional deficiencies.

#### *5.3. Inflammaging as a Risk Factor for Malnutrition*

Under physiological conditions, inflammatory processes represent a desired, strictly controlled and usually self-limiting reaction. However, chronic inflammation has long-term negative effects on the entire system. Increasing knowledge indicates that ageing is accompanied by slightly but chronically elevated inflammation levels. This sterile and silent inflammation in old adults has been termed "inflammaging" [134]. Inflammaging has since been investigated in relation to a whole range of age-related diseases within the so-called "network theory of aging". Recently, during the coronavirus disease 2019 (COVID-19) pandemic, inflammaging gained additional attention in view of the cytokine storm and autoimmunity observed during infection among older patients, which can result in multiple organ failure, is in general associated with a worse disease outcome [135,136], and potentially increases the risk for malnutrition, though data are lacking [137]. This severe systemic inundation of cytokines might be further triggered by the dysregulated immune system in old age (immunosenescence), which is expressed in an imbalanced homeostasis of pro- and anti-inflammatory mediators [135].

Although the underlying mechanisms are still not fully elucidated, inflammaging has been characterized as a chronic low-grade inflammatory status, which is reflected not only by higher pro-inflammatory levels but also by a remarkably imbalanced cytokine network [138,139]. Research in centenarians has shown that they have a lower burden of the typical age-related (co-)morbidities and that these "long-livers" have better immunological coping strategies, which are attributed to a balanced and more anti-inflammatory system [4,139]. The senescence-associated secretory phenotype (SASP) is considered to support key cellular processes in inflammaging, since the secretion of these pro-inflammatory mediators not only disrupts local tissue structures and functions but primarily perpetuates the vicious inflammatory circle. Furthermore, the accumulation of waste products or inadequate elimination of cellular damage, as well as age-associated mitochondrial damage resulting in a higher release of reactive oxygen species and, in particular, extracellular release of mitochondrial DNA, belong to the damage-associated molecular patterns which activate the innate immune system [140,141].

Meanwhile, it is widely recognized that inflammaging is partly responsible for triggering many age-related diseases, e.g., Alzheimer's disease ("neuroinflammation") [142], atherosclerosis and cardiovascular events [143,144], type 2 diabetes mellitus ("metaflammation") [145], (osteo-)sarcopenia and frailty [146,147] as well as cancer [148–150]. Therefore, there is considerable potential that inflammaging also contributes to an impaired nutritional status [151], but more studies are needed to explore this relationship. In the old, increased levels of pro-inflammatory cytokines have early on been associated with cachexia ("geriatric cachexia") [152]. Cachexia is a well-described complex wasting syndrome which is driven by inflammation and reflected by a striking loss of skeletal muscle mass and function [153]. It is the most pronounced condition within the disease-related PEM spectrum and occurs in catabolic diseases such as cancer, chronic kidney disease, HIV/AIDS, and chronic obstructive pulmonary disease. Pro-inflammatory cytokines, such as TNF-α (initially called "cachectin" [154]) and IL-6 (also known as the "cytokine for gerontologists" [155,156]), play a dominant role in this phenomenon. The underlying inflammatory processes actively fuel muscle protein breakdown via pro-inflammatory cytokines (TNF-α, IL-6, IL-1β), which on the one hand stimulate the ubiquitin-proteasome signalling pathway and on the other hand inhibit anabolic pathways [157]. Although cachexia is generally associated with severe disease, there is definite overlap in the mechanisms that have been used to explain malnutrition in the old caused by inflammaging [158] and in geriatric cachexia [152]. Inflammatory and related immunological processes in older persons are clearly associated with a higher risk for malnutrition (as indicated by the MNA) [158].

Taken together, the aetiology of malnutrition is complex, and a multitude of risk factors can contribute to or aggravate the development of malnutrition, as illustrated in Figure 1. While it is important to further understand age-associated changes which impact nutritional intake, nutrient absorption, digestion and metabolism, many of these factors are not easily identified or modified. They nevertheless need to be accounted for when assessing the risk of malnutrition and in the concept of treatment.

#### **6. Treatment of Malnutrition**

Acknowledging the different and complex risk factors which can result in, aggravate, or contribute to the risk of developing malnutrition, it becomes clear that its treatment is equally complicated and challenging. Depending on both the setting and the situation of the older adult, different therapy approaches are warranted.

Given the overwhelming evidence that protein requirements are higher in older age [65,66], adequate protein with appropriate energy intake is crucial in order to prevent malnutrition and sarcopenia. Further, due to the frequent micronutrient deficiencies in higher age, targeted supplementation of micronutrients might be useful when diet alone is not sufficient to meet the age-specific requirements. However, there are still considerable gaps concerning the evidence on non-pharmacological treatment of malnutrition.

Available studies so far have delivered conflicting evidence on the benefits of nutritional therapy. Recently, an international panel of nutrition experts identified the shortcomings of these studies, which include heterogeneous study populations and too short or varying treatment durations as well as issues relating to study methodology, such as inadequately described control groups, lack of placebo, no blinding of study personnel, frequent absence of intention-to-treat analyses, and selective outcome reporting [13]. Therefore, it remains unresolved which interventions are most effective in which patient groups and whether specific approaches are needed depending on the aetiology of malnutrition [13].

**Figure 1.** Model of downward spiral in the development of malnutrition in higher age.

A recent meta-analysis, which did not find convincing evidence for the use of oral nutritional supplements in the treatment of malnutrition in older patients, also highlighted the need for more and larger trials to identify suitable and effective interventions for treating malnutrition [159]. Pooling data from nine randomized controlled trials from different care settings [160], Reinders and colleagues found evidence that dietary counselling with or without oral nutritional supplements was more effective across the settings than oral nutritional supplements alone; however, the outcome parameters were limited to energy intake and body weight, and other relevant outcomes such as improved muscle strength or function were not considered.

#### **7. Remaining Challenges**

Despite the overwhelming evidence regarding the negative outcomes associated with malnutrition in older adults, there are still many remaining challenges in the understanding, identification and treatment of malnutrition in this group. These concern both the prevention of slow-onset age-associated malnutrition and the treatment of disease-related malnutrition. Figure 2 summarizes open questions and remaining challenges in this field.

**Figure 2.** Remaining challenges in the effective prevention and sustainable treatment of malnutrition in old adults. ↑↑↑ indicates a high prevalence, whereas ↑ indicates a lower prevalence.

One particular concern regarding the prevention of malnutrition is that some nutritional requirements are not well established in higher age. Micronutrient and trace element concentrations have been shown to change with age [74] and have even been implicated to play a role in ageing processes [161]. Changes in body composition and physical activity may lower energy requirements, but not the requirements of nutrients in general; a high nutrient density in the diet is therefore required to ensure a sufficient supply of essential nutrients. The requirements of micronutrients which are frequently deficient in older adults thus clearly need elucidation, given their multifactorial role in promoting health. Also, while the need for some nutrients may change in higher age, ageing processes themselves may impact the requirements of certain nutrients, in particular in the presence of inflammation. As inflammation alters the trace element profile (increasing copper and decreasing selenium and zinc, among other trace elements), an inflammation-adapted intake of these trace elements may be necessary. However, more research is needed to elucidate the inflammation-related needs of the older population and the possible benefit of tailored supplementation. Furthermore, as higher age is frequently associated with disease, disease-specific nutrient requirements may further complicate the picture.

Taken together, a need to critically appraise current nutrient recommendations for the older age group has been identified, and the WHO has been challenged to issue guidelines [162]. There is also an increasing demand worldwide for WHO guidelines which competent national authorities can use to address the nutritional needs of their growing older populations [70].

Moreover, although recent research on determinants of malnutrition has added considerable new evidence, the impact of ageing-related changes on the long-term development of malnutrition still needs further elucidation. One example is the ageing gut and its altered microbiome, which have gained attention in recent years. Emerging technologies, such as high-throughput culturing, will further research on the ageing microbiome, so its role in the development of malnutrition may soon be elucidated. Novel dietary approaches which modulate the ageing microbiome, such as the Mediterranean diet [163], are of interest in the prevention of malnutrition, and more studies are warranted. Considering the importance of muscle mass maintenance [164], nutritional interventions need to focus on muscle mass and be combined with exercise. Therefore, more studies on nutritional therapy combined with different kinds of exercise regimens are clearly needed.

Lastly, not only the prevention but also the treatment of malnutrition needs more research, in the form of well-designed and adequately powered clinical trials able to identify true treatment effects. Moreover, relevant outcome parameters need to be included, and a careful selection of target populations with well-defined malnutrition is necessary. Also, more research is needed on treatment approaches which specifically target the underlying causes of malnutrition themselves [165], i.e., causative treatment. This, however, requires not only a longer time frame and a broader interdisciplinary approach, but also more research regarding the most relevant causes and their common pathophysiology, which would allow a "causation-oriented" multimodal treatment [13].

#### **8. Conclusions**

Taken together, more research is needed to understand which ageing-related changes are early predictors/precursors of malnutrition that in turn can be addressed in order to prevent the development of nutritional deficiencies. For clinically manifest malnutrition, more studies need to be performed in older adults in order to identify the suitable treatment for the various settings.

**Author Contributions:** Conceptualization, K.N.; writing—original draft preparation, K.N. and U.H.; writing—review and editing, K.N. and M.P.; visualization, U.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**



#### **References**


## *Communication* **Enteral Nutrition by Nasogastric Tube in Adult Patients under Palliative Care: A Systematic Review**

**Eduardo Sánchez-Sánchez 1,2,\*, María Araceli Ruano-Álvarez 3, Jara Díaz-Jiménez 4, Antonio Jesús Díaz 5 and Francisco Javier Ordonez 6**


**Abstract:** Nutritional management of patients under palliative care can lead to ethical issues, especially when Enteral Nutrition (EN) is prescribed by nasogastric tube (NGT). The aim of this review is to determine the current status of the management of EN by NG tube in patients under palliative care, and its effect on their wellbeing and quality of life. The following databases were used: PubMed, Web of Science (WOS), Scopus, Scielo, Embase and Medline. In total, 403 articles were identified initially; after inclusion and exclusion criteria as well as quality screenings were applied, three articles published between 2015 and 2020 were selected for this review. The use of NGT caused fewer diarrhea episodes and more restrictions than observed in the group that did not use NG tubes. Furthermore, the use of tubes increased attendances to the emergency department, although no contrast between NGT and PEG devices was made. No statistical difference was found between the use of tubes (NGT and PEG) and no use with respect to the treatment of symptoms, level of comfort, and satisfaction at the end of life. Nevertheless, NGT use improved hospital survival compared with other procedures, and differences were found in hospital stays in relation to the use of other tubes or devices. Finally, there are not enough quality studies to provide evidence that EN through NGT improves the health status and quality of life of patients receiving palliative care. For this reason, decision making in this field must be carried out individually, weighing the benefits and harms to the quality of life of the patients.

**Keywords:** artificial nutrition; enteral nutrition; nasogastric feeding; nasogastric tube; palliative care

#### **1. Introduction**

Initially, the aim of Palliative Care (PC) was to relieve suffering at the end of life. However, it is nowadays considered a model of care for patients for whom there is no curative treatment, and is therefore being implemented at earlier stages. PC was originally focused on cancer patients, but it currently covers other conditions such as advanced dementia, HIV/AIDS, heart disease, etc. [1].

Every year, 40 million people need palliative care, but only 3 million have access to such specialized attention [2]. Currently, the goal of PC is to promote comfort and to maintain an optimal quality of life for patients under palliative care and their families [3] through the prevention and management of physical, psychosocial and spiritual issues in these patients [4]. It should not be forgotten that quality of life reflects each patient's subjective perception of the alterations or limitations that the disease imposes on the physical, psychosocial and spiritual aspects of their life [5].

**Citation:** Sánchez-Sánchez, E.; Ruano-Álvarez, M.A.; Díaz-Jiménez, J.; Díaz, A.J.; Ordonez, F.J. Enteral Nutrition by Nasogastric Tube in Adult Patients under Palliative Care: A Systematic Review. *Nutrients* **2021**, *13*, 1562. https://doi.org/10.3390/ nu13051562

Academic Editors: Ina Bergheim and Omorogieva Ojo

Received: 27 February 2021 Accepted: 29 April 2021 Published: 6 May 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Nutrition and hydration are basic elements for maintaining life, and they are considered signs of health in our society [6]. Occasionally, patients fail to maintain an oral intake adequate to meet their nutritional needs [7], and this can lead to physical and psychosocial issues such as anxiety and distress [8]. Therefore, it may be necessary to commence Artificial Nutrition (AN). In 2008, Cochrane published a review regarding the use of AN in adult patients receiving palliative care. The authors concluded that there was not enough evidence to guide the development of guidelines for practice [9]. Six years later, the update of this review reached the same conclusion, indicating that no new quality studies on this subject had been published [7].

If the patient takes in less than 50% of their nutritional requirements, there are no contraindications or bronchoaspiration risks, and their life expectancy is less than 6 weeks [10], Enteral Nutrition (EN) must be prescribed through a nasogastric (NG) tube [11]. This is a widely used and easily accessible technique, although in the case of patients with advanced dementia who receive PC, evidence supporting the use of the NG tube is limited, and this technique may have a negative impact on the quality of life of these patients [12]. The use of tubes in patients with advanced dementia does not improve survival, prevent aspiration [13], or improve functional status. In addition, the use of tubes for artificial nutrition has been associated with agitation, increased physical restrictions, and tube-related complications [14,15].

The use of EN through NG tubes in patients under PC continues to be a controversial subject [16], since there is little evidence on the role of nutritional support and whether its implementation improves quality of life. In addition, it affects the psychological sphere of patients, because it can influence their social relationships and the way they interact with others. However, Mitchell et al. reported that more than a third of nursing home residents with dementia had been fitted with a feeding tube [17]. Such decisions and/or choices may bring patients, family members, and health professionals into conflict. Therefore, a good knowledge of the benefits and harms of this technique is paramount in order to reduce ethical conflicts and to understand how its use can influence the physical, psychological and spiritual spheres, and therefore the quality of life, of patients receiving PC. Accordingly, the goal of the present study is to understand the current state of the management of EN using NG tubes in patients receiving palliative care, along with its effect on health status and quality of life.

#### **2. Materials and Methods**

A systematic review of the literature was performed. The results were obtained by direct online access through the following databases: PubMed, Web of Science (WOS), Scopus, Scielo, Embase and Medline. The aim of this review was to address the following question: Is the use of EN by NG tube appropriate in patients under palliative care?



**Table 1.** PICOS criteria (Population; Intervention; Comparison; Outcome; Study design).

The articles reviewed were published in any country, by any institution or individual investigator, and written in Spanish or English. The search was limited to articles published in the last 5 years (between 2015 and 2020).

For the documentary retrieval, the following MeSH descriptors were used: "palliative care", "enteral nutrition", "terminal care", "terminally ill". Neither Subheadings nor Entry Term classifiers were used. The search strategy was: ("Palliative Care" OR "Terminal Care" OR "Terminally ill") AND "Enteral Nutrition". The final choice of articles was made following the inclusion criteria: (a) studies published in journals indexed in international databases subject to peer review, (b) published between 2015 and 2020, and (c) written in English or Spanish; and the exclusion criteria: (a) studies based on pediatric age, (b) expert reports, editor's letters, books, monographs, clinical narratives or reviews. Due to the large number of articles found in the first search, and as a quality assessment, two screenings were carried out. The first was based on the title and abstract, eliminating those articles that dealt with a topic other than the one proposed. In the second screening, review articles, editor's letters, etc., were eliminated.

To carry out the critical reading and evaluation of the articles found, the STROBE (Strengthening the Reporting of Observational studies in Epidemiology) statement was used for the observational studies [18] and the CONSORT guide (Consolidated Standards of Reporting Trials) for randomized clinical trials [19].

#### **3. Results**

A total of 403 articles were found: 32 (7.9%) PubMed, 30 (7.4%) WOS, 20 (4.9%) Scopus, 4 (1.0%) Scielo, 151 (37.4%) Embase, and 166 (41.2%) Medline. Of these recovered papers, 223 (55.3%) were redundant.

Once the first screening was applied based on title and abstract, 168 articles were eliminated. After the second screening, nine further articles were eliminated. The number of articles selected was three, all of which were observational studies, to which the STROBE statement was applied. All of these articles fulfilled 90% of the items of the statement. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed (Figure 1).
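As a consistency check, the screening arithmetic reported above can be reproduced with a short script (all counts are taken from the text; the variable names are illustrative only, not part of the original review):

```python
# Reproduce the PRISMA screening counts reported in the review.
# All numbers are taken from the text; names are illustrative only.

records_per_database = {
    "PubMed": 32, "WOS": 30, "Scopus": 20,
    "Scielo": 4, "Embase": 151, "Medline": 166,
}

total_identified = sum(records_per_database.values())        # 403 records
after_deduplication = total_identified - 223                 # 223 redundant records removed
after_screening_1 = after_deduplication - 168                # title/abstract screening
included = after_screening_1 - 9                             # reviews, letters, etc. removed

print(total_identified, after_deduplication, included)       # 403 180 3
```

The counts are internally consistent: 403 records minus 223 duplicates, 168 first-screening exclusions and 9 second-screening exclusions leave exactly the 3 included studies.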

**Figure 1.** PRISMA (Preferred Reporting Items for Systematic Review and Meta-Analyses) diagram.

The results obtained showed different study parameters in the approach to the proposed topic (Table 2). No studies were found that compared the use of NGT solely with not using a feeding tube; there was always a third group representing the use of either Percutaneous Endoscopic Gastrostomy (PEG) or an esophageal stent. Therefore, the results relating to the NG tube group and the other groups were extracted.

In the study carried out by Bentur et al. 2015 [20], three groups were compared: subjects without feeding tubes, subjects with NG tubes, and a third group with Percutaneous Endoscopic Gastrostomy (PEG). The results related to the use of the NG tube versus the non-use of a tube or the use of PEG were taken as a reference for this review. The authors concluded that the use of a feeding tube in people with advanced dementia in the community was associated with negative outcomes and increased caregiver burden. The use of an NG tube caused less diarrhea and more restrictions than in the group that did not carry a tube. The use of feeding tubes increased attendances to the emergency department, although the study did not distinguish between NGT and PEG. No statistical difference was found between tube use (NG tube and PEG) and non-use with respect to the treatment of symptoms, comfort or satisfaction at the end of life.

Yang et al., in 2015, compared hospital stays and survival among patients with esophageal obstruction and a short life expectancy in subjects with EN by tube, with esophageal stent placement, and with nutritional support without oral intake. The results showed that the patients with NGT and esophageal stent had shorter hospital stays (19 and 12 days, respectively) and a longer median survival (*p* < 0.01) than the group with nutritional support alone. The authors concluded that enteral feeding by NG tube in palliative care was safe, inexpensive, and had a low complication rate [21].

The multicenter study carried out by Shinozaki et al. in 2017 in Japan found that 74.6% of patients in the terminal phase required EN.

These authors suggest that the nutritional intake route may play a role in quality of life. No significant difference was found in quality of life between the different study groups. However, the mean hospitalization period was significantly shorter for gastrostomy-fed patients than for nasogastric tube-fed patients (21 vs. 64 days). Patients with PEG had a shorter period between study prescription and death than patients fed through an NG tube [22].


**Table 2.** Characteristics of the included studies (sample of 100 patients with brain and neck cancer fed by NGT; quality of life assessed by 15 items related to health wellbeing; no significant difference in quality of life between the starting point and week 3 of the study among the different study groups).

#### **4. Discussion**

The results obtained show that the bibliography in the field of EN through the NG tube in patients receiving palliative care is limited. There are studies on the use of tube feeding in these patients, but without distinction between the NG tube and PEG, so it was not possible to obtain individual, differentiated results for each route of administration.

The articles included in this review represent a low level of evidence, since they are observational studies, and no randomized clinical trials (RCTs) were identified. These results coincide with those reported by other studies, such as the systematic reviews carried out by Good et al. in 2008 and later in 2014 [7,9].

Malnutrition leads to increased comorbidities and decreased performance status and quality of life [10]. Therefore, nutritional support should be integrated into palliative care, and its implications with respect to quality of life and life expectancy should be assessed [23]. Such nutritional support includes nutrition through a tube, although its use remains controversial, especially in the case of the NG tube. The emergence of research and guidelines on the management of patients under palliative care has reduced the use of tube feeding by 50% [24].

Some studies report that the use of enteral tube feeding is effective for improving the quality of life of patients [25], since it may improve physical, psychosocial and spiritual aspects. Although the quality of life of patients with NGT was not assessed in the study carried out by Bentur et al. in 2015, they did find that these patients presented more diarrhea and restrictions, which can affect the physical and even psychosocial spheres and could thus influence the quality of life of these patients.

Even though no distinction was made between patients with an NG tube and PEG, it was concluded that these patients attended the emergency department more frequently than those without any type of feeding tube, which also negatively influences their quality of life: they present more comorbidities, need to visit a health center more often, and experience changes in their daily life, as reflected in the well-being subscale of CAD-EOLED [20]. Another aspect that can negatively influence quality of life is the increase in the number of hospital stays and the decrease in survival. The use of an NG tube may decrease hospital stays and improve survival in patients receiving palliative care, and thus improve the quality of life perceived by these patients [21]. However, Shinozaki et al. concluded that subjects with an NG tube had longer hospital admissions than those using PEG. Even though the survival period was longer, no significant differences in quality of life were found among the various groups [22]. This may be due to the choice of the measurement interval, since it was performed in patients with a short life expectancy. It should be noted that the perception of quality of life is related to reality and expectations. In patients receiving palliative care, the expectations for improvement are sometimes low, especially when life expectancy is short [26]. Perhaps for this reason, no differences in quality of life were found in these investigations. The scant evidence on this topic has led to different interpretations and approaches in these patients.

The Ethics Working Group of the Spanish Society of Parenteral and Enteral Nutrition (SENPE) recently (2019) published a statement confirming that the placement of feeding tubes in patients with advanced dementia is a futile treatment that only contributes to prolonged suffering, and concluded that health care professionals should not make wide use of EN by tube [27]. Schwartz et al. considered that EN by tube could improve quality of life, but that the benefits in the last days of life were limited and did not outweigh the burdens [28].

Furthermore, there may be discrepancies between health professionals and patients when prescribing nutritional support through an NG tube. For example, Amano et al. found that 78.6% of subjects in their study did not wish to receive artificial nutrition (AN) by feeding tube, even though their intake was insufficient [29]. In the study undertaken by Pengo et al. in 2017, the numbers of doctors and nurses who agreed with the use of AN declined as life expectancy decreased [30]. These decisions can create ethical dilemmas and are related to feelings, thoughts and beliefs [31].

Sometimes, it is the patients themselves who do not wish to receive EN by NG tube [28]. Therefore, it is necessary to make an individualized decision, even though no other contraindications may be found. This respects the principles of autonomy, beneficence and non-maleficence [32]. Furthermore, the team of health care professionals looking after such patients should establish what the aims and benefits of such treatment are, whether these are achievable, and any possible damage that may be encountered [33]. In addition, the principle of autonomy recognizes the right and the capacity of a person to make their own personal decisions. Self-determination includes the right to reject EN, although this refusal may be difficult to understand for family members and healthcare professionals [3]. Perhaps the means to avoid ethical conflicts and future dilemmas is the use of anticipated instructions, where patients can reflect their decisions regarding future treatments or techniques, although the prevalence of patients who make use of such mechanisms is very low [34].

Among the limitations of this review are the lack of studies with samples large enough to describe the results, and the subjectivity of the reported outcomes.

Although it is a difficult field of research, conducting higher-quality research could result in the provision of recommendations or guidance to aid patients and healthcare professionals in decision making.

The results obtained lead us to consider the need to create a clinical practice guideline on the nutritional management of these patients, including the use of EN by NG tube. Education must continue to advance so that these differences disappear and such clinical practice becomes common to all nurses. The benefits and risks of the use of EN by NG tube in these patients should be investigated in order to provide evidence-based care. Clear evidence would help to reduce variability in the management of these patients.

#### **5. Conclusions**

There are not enough quality studies to provide evidence regarding the benefits for wellbeing and quality of life in patients under palliative care receiving EN through an NG tube.

For this reason, decision making in this field must be carried out individually, weighing the benefits and harms that such feeding can cause to the quality of life of patients.

**Author Contributions:** Conceptualization and design, E.S.-S.; study selection, E.S.-S. and J.D.-J.; data extraction, E.S.-S. and J.D.-J.; data synthesis, E.S.-S. and J.D.-J.; writing—original draft preparation, E.S.-S. and M.A.R.-Á.; writing—review and editing, E.S.-S., J.D.-J., M.A.R.-Á., A.J.D. and F.J.O.; Translation of the manuscript: E.S.-S., M.A.R.-Á., A.J.D. and F.J.O. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data were collected from the available literature. The data supporting this systematic review are available in the manuscript's tables.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Oral Nutritional Supplements and Enteral Nutrition in Patients with Gastrointestinal Surgery**

**Maria Wobith and Arved Weimann \***

Department of General, Visceral, and Oncological Surgery, Klinikum St. Georg gGmbH, 04129 Leipzig, Germany; maria.wobith@sanktgeorg.de

**\*** Correspondence: arved.weimann@sanktgeorg.de

**Abstract:** Nowadays, patients undergoing gastrointestinal surgery follow perioperative treatment in enhanced recovery after surgery (ERAS) protocols. Although, under ERAS, oral feeding is not supposed to be stopped perioperatively, malnourished patients and inadequate calorie intake are common. Malnutrition, even in overweight or obese patients, is often underestimated. Patients at metabolic risk have to be identified early to confirm the indication for nutritional therapy. The monitoring of nutritional status has to be considered postoperatively, both in the hospital and after discharge, especially after surgery in the upper gastrointestinal tract, as normal oral food intake is decreased for several months. This article gives an overview of the current concepts of perioperative enteral nutrition in patients undergoing gastrointestinal surgery.

**Keywords:** enteral nutrition; oral nutritional supplements; perioperative nutrition; malnutrition; sarcopenia; gastrointestinal surgery

#### **1. Introduction**

In an overweight or even obese society, obvious malnutrition seems to be a rare phenomenon even in patients undergoing abdominal surgery. Furthermore, nutritional therapy, via either the enteral or even the parenteral route, may seem outdated in the era of enhanced recovery after surgery (ERAS) protocols. Nevertheless, malnutrition can easily be overlooked and has been described as a "silent epidemic" [1]. This is why screening for malnutrition and, if necessary, nutritional therapy is an eminent part of ERAS protocols [2].

While the term malnutrition means the loss of body mass due to an inadequate supply of energy, the term sarcopenia describes the loss of muscle mass. Malnutrition is often associated with sarcopenia if, in addition to storage fat, a relevant proportion of muscle mass is lost. On the other hand, sarcopenia can develop, often in geriatric patients, without weight loss and thus without the criterion of malnutrition [3].

The presence of sarcopenia is often masked, especially in overweight or obese patients. In clinical terms, sarcopenic obesity is usually considerably underestimated. Hospital mortality occurred in 7% of 750 obese patients who were also malnourished [4]. The terminological differentiation between sarcopenic obesity and malnutrition in obesity is a matter of discussion.

In 2019, the Global Leadership Initiative on Malnutrition (GLIM) developed a new definition of malnutrition that is endorsed by all major nutritional and medical societies worldwide [5]. A distinction is made between phenotypic (weight loss, low body mass index, reduced muscle mass) and etiological criteria (reduced food intake or assimilation, inflammation/disease burden). For diagnosing malnutrition, a two-step procedure is recommended: In the case of positive nutritional screening, at least one phenotypic and one etiological criterion must be met.
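The two-step GLIM procedure described above amounts to a simple decision rule. The following sketch illustrates that logic only; the criterion labels and function name are hypothetical, not part of the GLIM consensus text.

```python
# Illustrative sketch of the GLIM two-step decision rule (labels are hypothetical).
PHENOTYPIC = {"weight_loss", "low_bmi", "reduced_muscle_mass"}
ETIOLOGICAL = {"reduced_intake_or_assimilation", "inflammation_disease_burden"}

def glim_malnutrition(screening_positive: bool, criteria_met: set) -> bool:
    """Step 1: positive nutritional screening.
    Step 2: at least one phenotypic AND one etiological criterion."""
    if not screening_positive:
        return False
    return bool(criteria_met & PHENOTYPIC) and bool(criteria_met & ETIOLOGICAL)

# Example: positive screening, weight loss plus reduced food intake
print(glim_malnutrition(True, {"weight_loss", "reduced_intake_or_assimilation"}))  # True
```

Note that a single phenotypic criterion without an etiological one (or vice versa) does not establish the diagnosis, which the set intersections make explicit.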

The prognostic influence of the nutritional status on the occurrence of postoperative complications and the length of hospital stay has been shown many times retrospectively and prospectively [6–8], resulting in evidence-based guidelines on nutritional therapy for the surgical patient: European Society for Clinical Nutrition and Metabolism (ESPEN) 2017 [7] and Short ESPEN Practice guideline 2021 [9].

**Citation:** Wobith, M.; Weimann, A. Oral Nutritional Supplements and Enteral Nutrition in Patients with Gastrointestinal Surgery. *Nutrients* **2021**, *13*, 2655. https://doi.org/10.3390/nu13082655

Academic Editor: Ina Bergheim

Received: 29 June 2021 Accepted: 28 July 2021 Published: 30 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The ESPEN surgical guidelines are in line with the aim of early oral feeding in ERAS [7].


It is evident that in most patients, oral/enteral feeding can be started just a few hours after abdominal surgery. This also applies to anastomoses in the upper gastrointestinal tract [6,7]. Oral food intake as early as possible is a key intervention of the ERAS program [10]. In the case of inadequate coverage of the calorie and protein requirement, perioperative nutrition therapy strives for supplementing the primarily oral diet by oral nutritional supplements (ONS) and enteral nutrition.

Recent evidence comes from a meta-analysis of 56 randomized controlled studies with 6370 patients undergoing surgery for gastrointestinal cancer. Supplementation with a glucose drink, increased protein intake, and immunonutrition was shown to reduce postoperative complications (RR 0.74, 95% CI 0.69–0.80), postoperative infections (RR 0.71, 95% CI 0.64–0.79), and non-infectious complications (RR 0.79, 95% CI 0.71–0.87), with a reduction in length of hospital stay of 1.58 days (95% CI −1.83 to −1.32). Nevertheless, there is considerable heterogeneity among the included studies (I² = 89%) [11].

#### **2. Who Will Benefit from Perioperative Nutritional Supplementation?**

#### *Screening and Assessment of Nutritional Status*

The functional status, which is pivotal for enhanced postoperative recovery, is largely determined by muscle mass. From a metabolic point of view, the combination of impaired functionality and nutritional status is crucial for postoperative mobilization and lung function. Sarcopenic patients have a significantly increased risk for severe postoperative complications (*p* < 0.0001) and for the total number of postoperative complications (*p* = 0.001) [12]. This is why sarcopenia may be considered to be "forcing a vicious circle" [13]. The prevalence of sarcopenia in cancer patients undergoing chemotherapy can be up to 83% [14].

Therefore, the preoperative identification of patients at risk is essential. In elderly people in particular, this should include functionality and nutritional status in a "complex geriatric assessment" [15,16]. For malnutrition screening, the ESPEN nutritional risk score (NRS) according to Kondrup [17] has been well validated in surgical patients.

While in surgical patients a "severe nutritional risk" (NRS score ≥3) may already be reached by a 71-year-old patient scheduled for colorectal cancer surgery without any weight loss or diminished food intake, it has been demonstrated impressively in medical patients with an NRS score ≥3 that appropriate nutritional therapy favors outcome [18].

Current data from an observational study on the implementation of the ERAS program in hospitals in the Canadian province of Alberta revealed a positive NRS as a predictor of low patient compliance in the ERAS program (<70%: OR 2.77; 95% CI 2.11–3.64; *p* < 0.001) and a trend towards longer hospital stay >5 days (OR 1.40; 95% CI 1.00–1.96; *p* = 0.052). An ERAS compliance of <70% was associated with an increased rate of postoperative complications (OR 2.69; 95% CI 2.23–3.24; *p* < 0.001) [19].

When patients are screened to be at risk for malnutrition, body composition should be examined to establish the GLIM criteria [5]. In addition to more "traditional" tools such as bioelectrical impedance analysis (BIA) and dual-energy X-ray absorptiometry (DEXA), the use of routine computed tomography (CT) at the L3 cross-section has been introduced. From the muscle and fat mass derived from this section, conclusions can be drawn for the whole body with good correlation. The skeletal muscle index at level L3 can be used as a surrogate parameter for the presence of sarcopenia [13]. During the past few years, the prognostic impact of CT-based measurement of body composition, with determination of muscle mass and muscle radiodensity, has been shown in numerous studies, especially for oncological patients undergoing surgical therapy [20–22], as shown in Table 1.

**Table 1.** Overview of the literature that shows CT-derived sarcopenia to be predictive for outcomes in patients undergoing gastrointestinal surgery.


#### **3. Preoperative Nutritional Therapy**

The indication for nutritional therapy should be based on nutritional screening and on the diagnosis of malnutrition. Depending on the degree of malnutrition, the underlying disease, and the expected period of inadequately low calorie intake, the decision on the duration and route of nutritional therapy has to be made (Figure 1). In the case of neoadjuvant treatment with an indication for major abdominal surgery, any weight loss and deterioration of the nutritional status should be prevented.

The current ESPEN guideline [7] gives a "Good Clinical Practice (GCP)" recommendation: "Perioperative nutritional support therapy is indicated in patients with malnutrition and those at nutritional risk. Perioperative nutritional therapy should also be initiated, if it is anticipated that the patient will be unable to eat for more than five days perioperatively. It is also indicated in patients expected to have low oral intake and who cannot maintain above 50% of recommended intake for more than seven days".
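The quoted GCP recommendation combines three independent triggers, which can be read as a simple indication rule. The sketch below only restates the thresholds from the quote; the function and parameter names are hypothetical, not ESPEN terminology.

```python
# Illustrative sketch of the indication logic in the quoted ESPEN GCP
# recommendation. Names are hypothetical; thresholds come from the quote.
def nutrition_therapy_indicated(malnourished: bool,
                                at_nutritional_risk: bool,
                                days_unable_to_eat: int,
                                days_below_half_intake: int) -> bool:
    """Indicated if the patient is malnourished or at nutritional risk,
    is anticipated to be unable to eat for more than 5 days, or cannot
    maintain >50% of recommended intake for more than 7 days."""
    return (malnourished
            or at_nutritional_risk
            or days_unable_to_eat > 5
            or days_below_half_intake > 7)

# Example: well-nourished patient expected to be unable to eat for 6 days
print(nutrition_therapy_indicated(False, False, 6, 0))  # True
```

Any single trigger suffices, which is why the rule is a disjunction rather than a score.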

This is in line with the updated ERAS guideline for patients undergoing elective colorectal surgery, which recommends preoperative screening for malnutrition and, if possible, oral nutrition therapy for at least 7–10 days in the case of metabolic risk or manifest deficits [2].

#### **4. Prehabilitation**

Current "pre-rehabilitation" conditioning concepts include a preadmission period of 4–6 weeks, going beyond the previously usual inpatient period of 10–14 days. The approach is frequently trimodal: physiotherapy, nutrition therapy, and psychological coaching in order to reduce anxiety and perioperative stress [36–38].

The primary goal is to make the patient "fit for ERAS" and at least to prevent further weight loss. The benefit of perioperative administration of oral nutritional supplements (ONS) has been clearly demonstrated for surgical patients, with a reduction in complications and resulting economic savings [39]. In a systematic review and meta-analysis of nine controlled studies compiled under the aspect of prehabilitation (six exclusively nutritional interventions, three multimodal), a significant reduction in the length of hospital stay by two days was found for a nutritional intervention over at least seven days, with supplementation being continued postoperatively [40].

Currently, however, there are no specific evidence-based recommendations for nutritional therapy in a prehabilitation program [36–38]. With regard to the physiology of muscle protein synthesis, a high-protein diet after exercise is recommended. Since compliance with the intake of ONS is often limited [41,42], special and repeated motivation is required.

Prehabilitation can already be considered at the start of neoadjuvant treatment. It was shown that early nutrition support during neoadjuvant treatment in patients with esophageal cancer leads to less weight loss at 12 months postoperatively [43]. A weight loss of ≥10% results in significantly higher mortality.

#### **5. Immunological Conditioning**

The stimulation of the immune system by enriching the diet with suitable immune-enhancing substrates, so-called "immunonutrition", is an interesting and controversially discussed concept.

The stimulation of anti-tumor T-cell activity has been shown in vitro for arginine. Anti-inflammatory effects can be expected from the administration of omega-3 fatty acids [44,45]. Numerous randomized clinical studies and their meta-analyses have examined the combination of arginine, omega-3 fatty acids, and ribonucleotides in an enriched oral drink supplement, as well as administered enterally [45]. Overall, clinical benefits have been shown in reducing the rate of infectious complications and the length of hospital stay, as well as costs. This also applies to an ERAS program [46]. It has been a matter of discussion whether exclusively preoperative administration offers advantages not only in comparison with a (hospital) diet but also in comparison with standard oral nutritional supplements [7].

A recent meta-analysis of the available data from 16 randomized studies with 1387 surgical patients with gastrointestinal tumors (immunonutrition *n* = 715, controls *n* = 672) addressed this question. Here, preoperative intake for 5–7 days led to a significant reduction in the incidence of infectious complications in comparison with a normal diet and with an isonitrogenous standard oral nutritional supplement (OR 0.52; 95% CI 0.38–0.71, *p* < 0.0001). The heterogeneity of the data was low (I² = 16%). There was a significant reduction in the length of hospital stay compared to normal food and a tendency compared to the standard supplement (−1.57 days, 95% CI −2.48 to −0.66, *p* = 0.0007, I² = 34%). Non-infectious complications and mortality were unaffected [47].

The results of this meta-analysis, with a focus on surgical patients with gastrointestinal cancer, show good study quality and acceptable heterogeneity: (1) oral supplementation carried out exclusively preoperatively for 5 days is effective; (2) in this context, immunonutrition is superior, at least in terms of the risk of infectious complications, to standard oral nutritional supplements. Focusing on patients undergoing esophagectomy, another recent meta-analysis including six randomized trials of perioperative immunonutrition did not show significant benefits regarding the postoperative complication rate [11]. Thus far, the ESPEN guideline recommends ONS before major operations for 5–7 days, with immunomodulating supplements being preferred [7].

#### **6. Postoperative Nutrition**

#### *Early Oral Diet*

A recent meta-analysis of five randomized controlled studies showed early enteral nutrition after major emergency surgery to be correlated with reduced mortality. Regarding the optimal timing and composition, the authors noted the need for more high-quality data [48]. In comparison with parenteral nutrition, a recent PRCT showed significant benefits regarding intestinal recovery, hospital length of stay, and immune function for patients with cholangiocarcinoma and obstructive jaundice undergoing surgery [49]. Another randomized controlled trial in patients undergoing pelvic exenteration surgery compared parenteral nutrition with trophic enteral feeding (20 mL/h) via nasogastric tube. No significant difference between the two groups was observed for time to first bowel movement. Postoperative ileus occurred significantly less frequently in enterally fed patients in the per-protocol analysis (*p* = 0.036). In regression analysis, it was shown again that the time of restriction from an oral diet was significantly associated with the time to first bowel movement and the postoperative complication rate (*p* < 0.0005) [50].

Generally, and even after surgery of the lower gastrointestinal tract, oral food intake can be started within hours (ESPEN guidelines A recommendation) [7]. The diet should be adapted to the individual tolerance and the operation performed; elderly patients require special attention [7].


#### **7. Early Oral Delivery after Esophagectomy and Gastrectomy**

A recent multicenter randomized Dutch study investigated the feasibility and safety of an early oral diet after minimally invasive esophagectomy with intrathoracic anastomosis [51]. In the intervention group (*n* = 65), oral intake took place without delay, while the control group (*n* = 67) was fed only via the enteral tube for 5 days. There was no significant difference in the primary endpoint of time to postoperative recovery (7 vs. 8 days) or in the secondary parameters of complications with anastomotic leakage (18.5% vs. 16.4%) and pneumonia rate (24.6% vs. 34.3%).

In another retrospective study in patients after gastrectomy, a group with early oral food intake from the first day (EOF, *n* = 203) was compared with a historical control group with conventional delayed oral food intake (COF, *n* = 203) by means of propensity score matching. The EOF group showed an earlier onset of flatus (2.9 vs. 3.1 days, *p* = 0.013). The length of stay in the hospital was significantly shorter (8.9 ± 5.7 vs. 12.6 ± 10.2 days, *p* < 0.01). No significant differences were observed for morbidity and mortality, with the EOF group showing a lower rate of abdominal infections (3.0% vs. 7.4%, *p* = 0.044) and anastomotic leakages (1.5% vs. 4.9%, *p* = 0.048). A subgroup analysis based on age, gender, surgical method, lymph node dissection, and tumor stage also showed no risk of increased morbidity or anastomotic leakage in the EOF group. Compliance with oral nutrition was the same in both groups [52]. In a prospective study with 50 patients undergoing major abdominal surgery, the protein and calorie intake in the first postoperative week was recorded. In the majority of patients, the energy and protein intake was insufficient (82% and 90%, respectively), leading to more Clavien-Dindo grade III complications in patients who did not meet their protein targets [53]. In terms of nutritional medicine, it must be emphasized that an early oral diet is feasible, but that the oral calorie requirement is not met over a longer period of time, including the time after discharge, thus initiating weight loss. This is an argument in favor of oral/enteral supplementation via sip feeds or tube feeding via a feeding jejunostomy (FJT) placed during surgery.

#### **8. Feeding Jejunostomy**

Zhuang et al. retrospectively analyzed the outcome of their patients undergoing esophageal resection with and without placement of a feeding jejunostomy. The feeding jejunal tube was placed if the patient was considered to be at high risk of developing anastomotic leakage. No significant difference was observed regarding the length of hospital stay, short-term mortality, and overall survival. There was a tendency in FJT patients toward faster recovery from anastomotic leakage (27.2 vs. 37.4 d, *p* = 0.073). The results suggest that FJT is safe and that placement should be considered in high-risk patients [54]. A meta-analysis comparing jejunostomy with nasoenteral tube showed benefits for jejunostomy regarding postoperative pneumonia, length of hospital stay, and dislocation of the tube [55]. Similar results were shown in a Swedish register-based study, which investigated the differences between patients undergoing esophagectomy with and without intraoperative implantation of a nutritional jejunostomy. The risk of developing severe complications (≥IIIb Clavien-Dindo) was significantly decreased in jejunostomy patients with an anastomotic leak. There was no increased risk of jejunostomy-related complications [56]. These arguments support the selective use of FJT in high-risk patients undergoing esophagectomy [57], while the preoperative identification of such patients may be a matter of debate.

#### **9. Oral Food Intake after Prolonged Intensive Care Treatment**

Despite best perioperative management and an ERAS protocol as plan A, serious complications occur, inducing catabolism and subsequently increasing the risk of postoperative deterioration in nutritional status [58]. This calls for a plan B, including enteral and even parenteral nutrition. The ESPEN guidelines state: if the energy and nutrient requirements cannot be met by oral and enteral intake alone (<50% of caloric requirement) for more than seven days, a combination of enteral and parenteral nutrition is recommended (GPP).

From a nutritional point of view, the period after transfer from intensive care to a normal ward is very vulnerable. Spontaneous oral food intake is usually inadequate. The main causes may be inappetence and fatigue with limited cooperation. In the anabolic situation after a longer period of severe catabolism, the undersupply of energy and protein may be a "metabolic catastrophe", increasing the risk of delayed recovery and rehabilitation. In a small cohort study, the food intake of 32 patients was followed up against their energy (2000; 1650–2550 kcal/d) and protein (112; 84–129 g/d) requirements after they had been transferred from the intensive care unit. A median of 1238 (869–1813) kcal/d and 60 (35–89) g protein was consumed over 227 days. Most of the patients were fed exclusively orally (55%) or with combined enteral nutrition (42%). The energy and protein intake was lower than the estimated, or even the measured (by indirect calorimetry), requirements. It was shown that energy and protein requirements could only be achieved with a combination of oral and enteral nutrition. The supplementation of an oral diet with ONS covered only about 70% of the requirement [59].

#### **10. Post-Discharge Nutrition**

In particular, after resections in the upper gastrointestinal tract, sustained weight loss as a "bariatric effect" has to be expected. Dietary counseling and follow-up monitoring of the nutritional status (minimum: BMI), including documentation of the amount of oral food intake, are essential.

A systematic review of 18 studies revealed a postoperative weight loss of 5% to 12% after 6 months. More than half of the patients lost >10% of their body weight after 12 months [60]. Therefore, these patients have to be considered at severe metabolic risk. Koterazawa et al. additionally showed that severe weight loss 3 months after esophagectomy could not be diminished by enteral tube feeding but had a significant impact on the 5-year overall survival rate [61]. The guidelines recommend the implantation of a needle catheter jejunostomy (FJT) during surgery. Our own results in patients with esophageal and gastric resection, including partial pancreatoduodenectomy, show a weight loss of >10% in 40% of patients after 6 months, even with consistent postoperative continuation of nutritional therapy via feeding jejunostomy [16]. Early postoperative weight decreased up to 3 months, while stabilization occurred between 4 and 6 months after surgery; a further decline could be prevented by continuing enteral feeding supplementation [16]. In comparison with a control group, Chen et al. demonstrated in elderly patients after esophagectomy significant benefits of home enteral nutrition for at least 8 weeks with regard to BMI, PG-SGA score, serum albumin, and immune parameters [62].

In a recent meta-analysis, home enteral nutrition (HERN) and oral nutritional supplements were compared in patients with upper gastrointestinal resection for malignancy [63]. A total of 15 RCTs involving 1059 patients were included. Home enteral nutrition seemed to be superior, with significant prevention of weight loss (−3.95 vs. −5.82 kg; SMD: 1.98 kg; 95% CI 1.24–2.73) and a reduction in the incidence of malnutrition or latent malnutrition (RR = 0.54; *p* < 0.01). Furthermore, improved levels of albumin, hemoglobin, pre-albumin, and transferrin were observed. Subgroup analysis based on the approach of home nutrition therapy showed that weight loss in the home enteral nutrition subgroup was significantly lower than in the control group (WMD = 2.69, *p* < 0.01). No significant difference could be observed between the ONS subgroup and the control group. The same results were found for albumin, physical function (WMD: 5.29; 95% CI 1.86–8.73), and fatigue (WMD: −8.59; 95% CI −12.61 to −4.58). Furthermore, QOL dimensions were significantly better in the HERN group.

In a randomized study, significantly less body weight loss was observed when ONS was administered until 12 weeks after surgery in patients with total gastrectomy, which was not seen in patients with subtotal resection [64]. In another randomized study in patients after gastrectomy, dietary counseling in combination with ONS [65] showed a significant reduction in weight loss, with a higher BMI and SMI. Fatigue and loss of appetite were also significantly lower than with dietary advice alone. Limited compliance in taking sip feeds always has to be considered [17], which may be caused by loss of appetite, taste alterations, bloating, belching, and diarrhea. A recent multicenter randomized trial of 1003 patients after gastrectomy investigated the impact of ONS at 400 kcal/d on weight loss 1 year after gastrectomy. In the ONS group, only 50.4% of the patients had an intake of more than 200 kcal/day of ONS (average 301 mL) and showed significantly less body weight loss (8.2 ± 7.2%) at 1 year compared to the control group (*p* = 0.0204) [66].

#### **11. Conclusions**

In conclusion, in patients undergoing gastrointestinal surgery, with special regard to those of the upper GI tract, nutritional monitoring and therapy may be required even after discharge from the hospital. In line with the guidelines, appropriate recommendations are summarized in Table 2. Oral nutritional supplementation and/or enteral nutrition provide clinical benefits for the attenuation of weight loss. From an ethical point of view, randomized studies may only be justified comparing oral versus enteral supplementation. Individualized home nutrition therapy requires a network with the cooperation of the surgeon, general practitioner, dietitian, and home care provider. Regular monitoring of body composition and quantitative registration of oral nutritional intake is mandatory. In the future, patient reporting and dietary counseling may be improved by virtual coaching and assisted digitalized communication by chatbot [67].

**Table 2.** Perioperative recommendations summary for patients undergoing gastrointestinal surgery.


The impact of nutritional therapy within the framework of prehabilitation and ERAS on long-term oncological outcomes in patients with gastrointestinal cancer has to be elucidated in the future [68]. Further data from observational and controlled trials, including patient-reported outcomes, are urgently needed.

**Author Contributions:** Conceptualization, M.W. and A.W.; writing—original draft preparation, A.W. and M.W.; writing—review and editing, A.W. and M.W.; supervision, A.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** Arved Weimann received lecture fees from Baxter, B. Braun, Fresenius Kabi, Ethicon, and Falk Foundation, and a research grant from Baxter, Mucos. Maria Wobith declares no conflicts of interest.

#### **References**


## *Review* **Nutrition in the Intensive Care Unit—A Narrative Review**

**Aileen Hill 1,\*, Gunnar Elke 2 and Arved Weimann 3,\***


**Abstract:** Background: While consensus exists that nutritional status has prognostic impact in the critically ill, the optimal feeding strategy has been a matter of debate. Methods: Narrative review of the recent evidence and international guideline recommendations focusing on basic principles of nutrition in the ICU and the treatment of specific patient groups. Covered topics are: the importance and diagnosis of malnutrition in the ICU, the optimal timing and route of nutrition, energy and protein requirements, the supplementation of specific nutrients, as well as monitoring and complications of medical nutrition therapy (MNT). Furthermore, this review summarizes the available evidence to optimize the MNT of patients grouped by the primarily affected organ system. Results: Due to the considerable heterogeneity of the critically ill, MNT should be carefully adapted to the individual patient, with special focus on the phase of critical illness, metabolic tolerance, leading symptoms, and comorbidities. Conclusion: MNT in the ICU is complex, requiring an interdisciplinary approach and frequent reevaluation. The impact of personalized and disease-specific MNT on patient-centered clinical outcomes remains to be elucidated.

**Keywords:** medical nutrition therapy; critical care; enteral nutrition; parenteral nutrition; micronutrients; energy; protein; review

#### **1. Introduction**

Medical nutrition therapy (MNT) is an essential part of the care for critically ill patients, but the optimal feeding strategy for patients in the intensive care unit (ICU) is still debated and often remains a challenge for the ICU team in clinical practice. Recommendations for MNT in critically ill patients vary between guidelines of the DGEM (German Society for Nutritional Medicine) [1], the ESPEN (European Society for Clinical Nutrition and Metabolism) [2], the A.S.P.E.N. (American Society for Parenteral and Enteral Nutrition) [3], and other societies [4,5], and their implementation into clinical practice may be considered a challenge [6]. This article summarizes current recommendations and discusses the available evidence. The important questions "why", "who", "when", "how", and "with what" are answered to provide a pragmatic overview. The optimal MNT for special patient groups in the ICU is presented grouped by organ systems.

This narrative review summarizes current guideline recommendations, followed by more recent evidence. For many aspects of the broad topic "nutrition in the ICU", we can only give an overview in this manuscript and refer to other references for greater detail. Pragmatic recommendations are given as a summary at the end of each chapter and represent the authors' opinion in an attempt to help the clinician focus on the most important aspects of each section.

**Citation:** Hill, A.; Elke, G.; Weimann, A. Nutrition in the Intensive Care Unit—A Narrative Review. *Nutrients* **2021**, *13*, 2851. https://doi.org/ 10.3390/nu13082851

Academic Editors: Ina Bergheim, Roland N. Dickerson and Yasuo Tsutsumi

Received: 28 June 2021 Accepted: 17 August 2021 Published: 19 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **2. "Why?"—Nutritional Status as a Prognostic Factor**

MNT in the ICU aims at avoiding malnutrition in primarily well-nourished patients and at preventing further deterioration of previously malnourished patients.

Malnutrition is a significant prognostic risk factor for critically ill patients, influencing major outcomes such as mortality, length of stay (LOS), duration of mechanical ventilation, and infection rates [7]. In a current meta-analysis of 20 studies with 1168 patients, the prevalence of malnutrition in ICU patients was 38–78% [7], highlighting the need for optimal and individually tailored MNT in the ICU. This large percentage of malnourished patients makes nutritional risk assessment upon ICU admission, followed by an appropriate MNT, mandatory. Special attention and care should be given to patients anticipated to stay in the ICU for longer than a week [8,9].

Feeding protocols should be used and have proven beneficial in the nutrition of ICU patients [10–12]. A clearly defined feeding protocol has been shown to decrease the rate of patients who cannot be enterally fed at all and to increase the delivery of calories [13]. Protocols should be locally tailored according to expertise, local barriers, facilities, and the patient subpopulation in the ICU [14].

**Pragmatic recommendations:**


#### **3. "Who?"—Assessment of Nutritional Status**

Pragmatically, according to Sundstrom et al., three patient groups may be metabolically differentiated when admitted to the ICU (although the discrimination between them is still debated) [15]:


The latter may be either patients with pre-existing malnutrition or patients with anticipated complicated courses and prolonged ICU stays. These patients benefit from an individually adjusted MNT, but their distinction from the other patient groups is not trivial. While it may be easy to identify an undernourished patient with low body weight and body mass index (BMI) during clinical routine, in the case of a normal or even elevated BMI, malnutrition is less likely to be detected [3]. In 2019, the Global Leadership Initiative on Malnutrition (GLIM) developed a new definition of malnutrition based on phenotypic and etiologic criteria [16].

There are no tools validated and recommended specifically for estimating the nutritional status of a critically ill patient. Therefore, in the case of (semi-)elective admissions, screening for malnutrition in standard care before major surgery may be practical and is recommended [12]. Tools to estimate nutrition risk include the Nutritional Risk Screening (NRS 2002), the NUTRIC (Nutrition Risk in the Critically Ill) Score, the Subjective Global Assessment (SGA), and the Malnutrition Universal Screening Tool (MUST). Many of these tools include some of the following factors [16,17]:


In many critically ill patients, the aforementioned parameters, such as nutrition history and exact BMI, are difficult to obtain. Weighing critically ill patients should be mandatory, but feasibility may be limited in the ICU setting. In addition, changes in volume status impede weight measurements and clinical examinations, such as anthropometry [2]. The ESPEN guideline defines every patient who is in the ICU for more than 48 h as being at nutritional risk [2]. The DGEM guideline recommends a combination of low BMI, unintended weight loss, and lack of oral food intake, or the SGA, for critically ill patients [1].

Regarding technical tools, computed tomography (CT) scan analysis, musculoskeletal ultrasound, and bioelectrical impedance analysis (BIA) may be available for the assessment and monitoring of nutritional status in the ICU, but have not yet been broadly implemented into clinical routine [18].

**Pragmatic recommendations:**


#### **4. "How?"—The Route of Nutrition**

#### *4.1. Enteral Nutrition (EN)*

For a long time, the issue of "enteral or parenteral" has been discussed controversially and quite emotionally [19,20]. Two large multicenter randomized controlled trials (RCTs) did not show any significant difference in mortality, while EN was associated with a higher risk of gastrointestinal (GI) complications [21,22]. An RCT by Harvey et al. including 2400 patients found no difference in 30-day mortality or infectious complications in ICU patients receiving either EN or PN, while patients receiving PN had significantly less vomiting and hypoglycaemia [21]. A more recent RCT led by Reignier et al. (NUTRIREA-2) recruiting 2410 patients also observed no difference in 28-day mortality and infection rates, but significantly more frequent GI complications (vomiting, bowel ischemia, and pseudo-obstruction) [22].

The current international nutrition guidelines uniformly recommend the preferential use of EN in the critically ill patient who is unable to maintain sufficient oral intake [2–4,23]. The physiological advantages, the paradigm of "use the gut or lose it", the adverse effects of PN in earlier decades, and increased cost effectiveness led to the uniform preference for EN. However, EN alone is often insufficient to achieve energy and protein targets, particularly in the acute phase of critical illness, due to frequent interruptions for procedures and/or GI intolerance [24], as demonstrated in a recent meta-analysis of EN+PN versus EN alone [25].

Although clear benefits are lacking regarding the optimal nutritional route, worldwide consensus among experts exists about a cautious individualized approach with 'trophic feeding' in high-risk patients without absolute contraindication, followed by a ramp-up strategy until the target is reached [20]. While severe critical illness is frequently associated with considerable GI dysfunction, even severe sepsis or septic shock have not been considered clear contraindications in the guidelines [1–4]. EN can be administered via nasogastric or nasojejunal tube. If the need for EN is expected to exceed four weeks, placement of a percutaneous endoscopic gastrostomy/jejunostomy is recommended [23].

Contraindications for EN according to the European Society of Intensive Care Medicine (ESICM) Clinical Practice Guidelines [4] particularly include hemodynamic instability (escalation of or high vasopressor medication and increased lactate) and GI intolerance from minor to major symptoms, e.g., gastric residual volume (GRV) > 500 mL/6 h or acute gastrointestinal injury grade > 2.

#### *4.2. Combination of EN with Parenteral Nutrition (PN)*

The progression of EN up to calorie/protein target is often prevented by feeding intolerance or common interruptions of EN [26–28]. Thus, particularly in the patient's first ICU week, EN alone may lead to macronutrient deficiency [29–31]. To avoid large cumulative energy and protein deficits, EN and PN may be combined, either early during the patient's ICU course (combined EN+PN), or after several days once EN is proven to be insufficient or unfeasible (supplementary PN [SPN]) [19].

The debate about early versus late PN in addition to EN remains controversial [32–34]. There are strong arguments to start PN by day 4 at the latest, particularly in malnourished patients and those with special risks [34], while respecting the potential risk of refeeding syndrome.

A recent meta-analysis by Hill et al. including 12 RCTs with 5543 patients found that treatment with combined EN and PN led to increased delivery of macronutrients in severely ill ICU patients [25]. No statistically significant effect of a combination of EN with PN vs. EN alone was observed on any of the analyzed endpoints: mortality (Risk Ratio [RR] 1.0, 95% Confidence Interval [CI] 0.79 to 1.28, *p* = 0.99), hospital LOS (Mean Difference [MD] −1.44, CI −5.59 to 2.71, *p* = 0.50), ICU LOS, and ventilation days. Trends toward improved physical outcomes were observed in two of four trials. There was a tendency toward reduced mortality in nutritionally at-risk patients in some subgroup analyses, but data were too sparse to draw further conclusions.

The ESPEN guideline recommends as a good practice point: "PN should not be started until all strategies to maximize EN tolerance have been attempted" and "In patients who do not tolerate full dose EN during the first week in the ICU, the safety and benefits of initiating PN should be weighed on a case-by-case basis" [2].

**Pragmatic recommendations:**


#### **5. "How Much?"—Energy and Protein Requirements**

It remains unclear what the optimal protein and energy targets should be and exactly when they should be reached [35–37]. Greater protein and energy intake may be associated with improved mortality in patients at nutritional risk, as stated in a recent meta-analysis, but the evidence remains controversial [25]. The amount of macronutrients for critically ill patients needs to be carefully adapted to the individual patient [1]. Aspects requiring careful consideration and regular re-evaluation are the phase of critical illness (acute vs. post-acute), GI and metabolic tolerance to exogenous substrates, the primary disease, possible comorbidities, macro- and micronutrient deficiencies, and the intraindividual disease trajectory of the patient [1–3]. Tailoring the MNT to the individual patient's metabolic tolerance is described below.

#### *5.1. Energy*

The optimal amount of energy is not yet agreed upon, as the evidence remains conflicting. What the targets should be and when they should be reached is still unclear, especially in the acute phase of critical illness, as targeting caloric adequacy alone did not show statistically significant improvements in many studies [19].

The guidelines uniformly recommend using indirect calorimetry to determine energy requirements [1–3]. Recent advances in this technique and development of modern devices may improve feasibility and usefulness in clinical practice [38]. Two meta-analyses from 2021 concluded that patients treated with calorimetry-guided isocaloric nutrition had significantly lower short term mortality rates [39,40]. Aiming at personalized nutrition, these results may further stimulate the use of indirect calorimetry in ICUs worldwide.

Otherwise, weight-adapted formulas may be used, but they should only be considered an alternative. The generally recommended calculated energy targets vary between guidelines, ranging from 24 to 30 kcal/kg/d (see Table A1, Appendix A) [1–3].

#### *5.2. Protein*

The guidelines currently recommend a protein target of 1.0–2 g/kg/d (see Table A1, Appendix A) [1–3], but the influence of protein on the outcome of critically ill patients has also been discussed controversially [41,42]. Increased protein intake was associated with improved long-term physical recovery and lower mortality in observational trials [43–46] and did not influence the duration of renal dysfunction [47]. However, a systematic review and meta-analysis of 14 RCTs did not show any impact of different amounts of protein delivery on the outcomes mortality, mechanical ventilation, infections, and length of stay [48]. It must be noted that the protein delivery in the included trials was below the guideline recommendations (mean: 67.15 vs. 42.95 g/d). Suppressed autophagy has been discussed as an explanation for the discrepancy between the observational studies and the meta-analysis [49].

The most recent evidence from RCTs regarding protein showed no significant differences in clinical outcomes. Targeting full energy and protein delivery, an RCT (FEED) including 60 patients showed significantly attenuated muscle loss and a lower number of malnourished patients [50]. In an RCT from 2021, patients receiving higher protein delivery (1.5 ± 0.5 vs. 1.0 ± 0.5 g/kgBW/d) did not show different clinical outcomes or changes in quadriceps muscle layer thickness [51]. Comparing protein delivery according to the international guidelines vs. usual care, a recent multicenter RCT including 120 patients investigated the feasibility of a high-protein enteral nutrition formula (100 vs. 63 g/L). The two groups were comparable in energy delivery. Protein delivery in the intervention group was higher (1.52 vs. 0.99 g/kg ideal body weight per day). No differences in clinical outcomes, including 90-day mortality, were observed [52].

Even in nutrition trials targeting the adequate provision of protein, EN failed to provide more than 1.5 g/kg/d in all the mentioned trials [53]. A very recent meta-analysis concluded that a higher versus a lower protein delivery (1.31 ± 0.48 vs. 0.90 ± 0.30 g/kg) did not significantly influence overall mortality (RR 0.91, 95% CI 0.75 to 1.10, *p* = 0.34) or other outcomes [54]. Large trials such as the EFFORT trial (NCT03160547) are currently ongoing to evaluate the influence of high and low protein dosages in the critically ill [41].

#### *5.3. Non-Nutritional Calories*

Calculation of total calorie intake should include non-nutritional calories. Depending on the dose, propofol used for sedation may represent a significant portion of the total calorie intake: 1% and 2% propofol both contain 0.1 g fat/mL; at a propofol infusion rate of 20 mL/h, fat intake would be 48 g/d, thus additionally providing about 450 kcal/d. In a retrospective analysis of 687 critically ill patients, sedation with propofol resulted in an additional calorie intake of 146 ± 117 kcal/d, corresponding to 17% of total calorie intake [55].
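The propofol arithmetic above can be sketched as a quick back-of-envelope check. This is an illustrative sketch only, not a clinical tool; the energy density of 9.4 kcal per g of fat in the lipid emulsion is an assumption chosen to be consistent with the ~450 kcal/d figure stated above.

```python
# Illustrative calculation of non-nutritional calories from propofol sedation.
# Assumptions: 1% and 2% propofol are formulated in a 10% lipid emulsion
# (0.1 g fat/mL), and the emulsion provides roughly 9.4 kcal per g of fat.
FAT_PER_ML = 0.1        # g fat per mL propofol emulsion
KCAL_PER_G_FAT = 9.4    # assumed energy density of the lipid emulsion

def propofol_calories(infusion_rate_ml_per_h: float, hours: float = 24.0) -> float:
    """Return kcal delivered by a propofol infusion over `hours`."""
    fat_g = infusion_rate_ml_per_h * hours * FAT_PER_ML
    return fat_g * KCAL_PER_G_FAT

# 20 mL/h for 24 h -> 48 g fat/d -> about 450 kcal/d, as in the text.
print(round(propofol_calories(20)))  # prints 451
```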

Trisodium citrate is commonly used for regional anticoagulation during renal replacement therapy (RRT). The amount of effective calories provided by citrate depends on the citrate concentration/infusion rate, the blood flow rate, the filtration fraction of the ultrafiltrate per unit time, and the type of filter. For example, a trisodium citrate solution may contain 0.59 kcal/mmol (corresponding to about 3 kcal/g); an infusion rate of 11–20 mmol/h would result in a caloric intake of 150–280 kcal/d.
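The citrate example can be checked the same way. Again, this is a sketch under the stated assumption of 0.59 kcal per mmol citrate; actual energy delivery depends on the solution used and the RRT settings listed above.

```python
# Illustrative calculation of calories from citrate anticoagulation during RRT.
# Assumption: the trisodium citrate solution provides 0.59 kcal per mmol,
# as in the worked example in the text.
KCAL_PER_MMOL_CITRATE = 0.59

def citrate_calories(infusion_rate_mmol_per_h: float, hours: float = 24.0) -> float:
    """Return kcal delivered by a citrate infusion over `hours`."""
    return infusion_rate_mmol_per_h * hours * KCAL_PER_MMOL_CITRATE

# 11-20 mmol/h over 24 h -> roughly 150-280 kcal/d, matching the text.
print(round(citrate_calories(11)), round(citrate_calories(20)))  # prints 156 283
```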

A retrospective study of 146 critically ill patients showed that the median propofol and citrate contribution to total calorie intake was 6–18% during the first seven days after ICU admission. In individual cases, however, this portion may increase up to one-third of total calorie intake [56].

**Pragmatic recommendations:**

	- **Energy: 24–30 kcal/kg/d**
	- **Protein: 1.0–2 g/kg/d**
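As a rough worked example of these weight-based targets (a sketch only; indirect calorimetry is the guideline-preferred method for energy, and all targets must be individualized and adapted to the phase of illness as described above):

```python
# Sketch: calculated guideline target ranges (24-30 kcal/kg/d energy,
# 1.0-2.0 g/kg/d protein). Illustrative only, not a prescription.
def calculated_targets(weight_kg: float) -> dict:
    """Return (low, high) daily energy and protein target ranges."""
    return {
        "energy_kcal_per_day": (24 * weight_kg, 30 * weight_kg),
        "protein_g_per_day": (1.0 * weight_kg, 2.0 * weight_kg),
    }

# For a 70 kg patient: 1680-2100 kcal/d and 70-140 g protein/d.
print(calculated_targets(70))
```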

#### **6. "When?"—The Timing of Nutrition**

#### *6.1. Early EN (EEN)*

The early initiation of EEN within 24–48 h is uniformly recommended by the guidelines for the critically ill patient who is unable to maintain sufficient oral intake [2–4,23]. Bearing in mind the traditional concept that the gut may be the "motor" of multi-organ dysfunction, EN should be started at a low feeding rate (e.g., 5–10 mL/h) and increased carefully and individually, adapted to hemodynamic stability and tolerance [12].

Regarding the effect of EEN, a meta-analysis by Tian et al. [57] (8 RCTs, 1895 patients) calculated a significantly lower rate of mortality (RR 0.68; 95% CI, 0.51 to 0.92; *p* = 0.01) and GI intolerance (RR 0.65; 95% CI 0.43 to 0.99; *p* = 0.05) in a subgroup of patients with low energy intake (33.3–66.6% of the energy target). However, GI intolerance was only reported in three studies including 452 patients.

Comparing EEN and PN, four recent meta-analyses [35,57–59] including up to 25 studies with 3816 patients provided these results:


#### *6.2. How Early Is Too Early to Add PN?*

Regarding the exact timing of initiating PN, the recommendations are contradictory. The A.S.P.E.N. guideline recommends that PN should be withheld in patients at low nutrition risk during the first 7 days following ICU admission [3]. The ESPEN guideline advises implementing PN within 3–7 days in these patients if EN is contraindicated [2]. The DGEM does not give a particular recommendation regarding the timing of starting PN but recommends using PN already in the acute phase if calorie and protein targets cannot be reached by EN alone [1].

However, for severely malnourished patients or patients at high nutrition risk, the ESPEN and A.S.P.E.N. guidelines state that early and progressive PN should be provided when EN is contraindicated [2,3]. The DGEM guideline states that PN may be the better route for malnourished patients, because they frequently experience GI dysfunction [1].

The above-mentioned meta-analysis by Hill et al. did not detect any clinical differences between early combined EN+PN and the addition of SPN several days after ICU admission regarding mortality, ICU and hospital LOS, or mechanical ventilation [25].

Therefore, the indication for PN in the critically ill has become more restrictive and individualized. The ESPEN guideline states, "In patients who do not tolerate full dose EN during the first week in the ICU, the safety and benefits of initiating PN should be weighed on a case-by-case basis" and adds as a practical point: "PN should not be started until all strategies to maximize EN tolerance have been attempted" [2].

**Pragmatic recommendations:**


#### **7. "What?"—Formula Considerations**

In most critically ill patients, standard polymeric formulas for EN should be used according to the guidelines [3,12]. For PN, all-in-one bags should be preferred [12].

#### *7.1. Energy-Dense versus Standard Formula*

Due to impaired GI tolerance in the critically ill, energy-dense EN may theoretically improve nutrient delivery. Two multicenter RCTs concluded that energy-dense EN (1.5 kcal/mL) increased macronutrient delivery without increasing adverse effects [60,61]. However, in the latter trial [61], the need for insulin was higher in the 1.5 kcal/mL group, and 90-day survival was not significantly different. In a recent RCT, scintigraphically measured gastric retention at 120 min was greater in the group receiving the energy-dense formula, while intestinal energy delivery and glucose absorption were not improved [52].

#### *7.2. Special Enteral Diets: Synbiotics*

Synbiotics refer to the combination of probiotics and prebiotics, e.g., Lactobacillus organisms alongside fiber. Synbiotics exert trophic effects in the colon, focusing on the preservation of the microbiome and promoting mucosal regeneration in balance with the microenvironment. Their administration is a challenging concept with a potential impact on diarrhea and infectious complications [62,63].

Results from the current evidence are equivocal. A meta-analysis from Batra et al. including nine studies showed a reduction in the incidence of pneumonia (RR 0.70, CI 0.56 to 0.88; *p* = 0.002), the duration of mechanical ventilation (MD −3.75, CI −6.93 to −0.58; *p* = 0.02), ICU LOS (MD −4.20, CI −6.73 to −1.66; *p* = 0.001), and in-hospital mortality (OR 0.73, CI 0.54 to 0.98; *p* = 0.04) [64]. A more recent meta-analysis emphasized the considerable heterogeneity of the studies; while no benefits for clinical outcome parameters were found, a reduced prevalence or duration of diarrhea episodes was observed [65]. A very recent RCT including 218 patients did not confirm a significant difference in infectious complications [66].

The A.S.P.E.N. guidelines suggest that a commercial mixed-fiber formula should not be used routinely and prophylactically in the adult critically ill patient to promote bowel regularity or prevent diarrhea [3]. The German DGEM guidelines recommend probiotics for patients with severe trauma and those undergoing liver transplantation [1].

**Pragmatic recommendations:**


#### **8. Supplementation with Specific Nutrients**

The supplementation of EN or PN with immune-enhancing and anti-inflammatory nutrients is a challenging and controversial issue.

#### *8.1. Arginine, Glutamine, and Omega-3 Fatty Acids*

The guidelines currently do not recommend the use of arginine, glutamine, and omega-3 fatty acids in the general critically ill patient population.

Regarding arginine, its availability is, in theory, reduced in sepsis; however, data are limited, as arginine supplementation could induce the formation of nitric oxide and aggravate hypotension in patients with septic shock [67,68].

In critical illness, the glutamine concentration in plasma may be reduced and can be considered an expression of the severity of disease and infection [69]. Supplementation of glutamine may possibly lead to a reduction in bacterial translocation from the gut, improved immune cell function, decreased proinflammatory cytokine production, and increased antioxidant capacity. The clinical impact of these findings, however, has not yet been clearly established, and in most meta-analyses, no statistical significance was reached. Because a high dosage in patients with (multi-)organ dysfunction may be detrimental [70,71], parenteral glutamine shall not be administered in unstable and complex ICU patients, particularly in those suffering from liver and renal failure [2]. An "umbrella" overview of 22 meta-analyses and a current meta-analysis of 15 randomized studies have again shown benefits of glutamine supplementation in ICU patients with regard to the rate of infectious complications and hospital LOS [72–74]. Reference was also made to the considerable and statistically significant heterogeneity of the studies and meta-analyses [72]. The new ESPEN practical guideline "Clinical nutrition in surgery" states that parenteral glutamine may be considered in patients requiring exclusive PN [12].

Fish oil or ω-3 fatty acids may have a high anti-inflammatory potential due to a shift in inflammatory mediator synthesis, but may intensify an already existing immunosuppression. In a recent meta-analysis of 49 RCTs, the risk of infection was 40% lower with ω-3 fatty-acid-enriched PN than with standard PN (RR 0.60, 95% CI 0.49 to 0.72; *p* < 0.00001). Patients given ω-3 fatty-acid-enriched PN had a reduced ICU LOS (*p* = 0.01) and a reduced hospital LOS (*p* < 0.00001). The risk of sepsis (9 RCTs) was reduced by 56% in those given ω-3 fatty-acid-enriched PN (RR 0.44, 95% CI 0.28–0.70; *p* = 0.0004) [75]. In a recent RCT of 100 mechanically ventilated ICU patients focusing on the enrichment of EN and supplemental PN with omega-3 fatty acids, neither an improvement in lung function nor a decrease in complications was observed; however, the supplemented group could be weaned earlier from catecholamine treatment and PN [76]. For patients requiring PN, the new ESPEN practical guideline "Clinical nutrition in surgery" states that postoperative PN including ω-3 fatty acids should be considered [12].

#### *8.2. Selenium*

Selenium is a known antioxidant, and its levels decrease in septic patients and after major surgery. Current evidence for the use of intravenous selenium has been inconclusive. In an RCT (SISPCT) administering high-dose selenium to 1089 patients with severe sepsis or septic shock, the 28-day mortality rate in the selenium group was not significantly affected (*p* = 0.30) [77]. A recent meta-analysis focusing on eight RCTs with low risk of bias did not find an effect of antioxidant micronutrient supplementation on the reduction of mortality [78]. Currently, an international multicenter RCT of selenium in cardiac surgery (SUSTAIN-CSX, NCT02002247) is undergoing statistical evaluation [79].

Currently, the guidelines do not recommend pharmacological supplementation with selenium [1,2].

#### *8.3. Vitamin D*

Vitamin D exerts pleiotropic effects far beyond its classic role in mineral homeostasis. Tissue actions require two enzymatic conversions, to 25-hydroxyvitamin D and then to 1,25-dihydroxyvitamin D, after which vitamin D has been shown to modulate the immune response, among other effects. Vitamin D deficiency is highly prevalent across all age groups and countries [80]. Although vitamin D deficiency has further been determined to be associated with greater illness severity (also in patients with COVID-19), a causal relationship between vitamin D deficiency and multiple organ dysfunction has not been established [81–84]. Thus, the efficacy of vitamin D as a therapeutic in critically ill patients remains controversial [85].

In a previous phase 2 trial involving 475 patients (VITdAL-ICU, underpowered for mortality as an endpoint), vitamin D supplementation administered to vitamin D-deficient, critically ill patients was associated with lower observed mortality than placebo at 28 days (21.9% vs. 28.6%, *p* = 0.14) and at 6 months (35.0% vs. 42.9%, *p* = 0.09) [86]. The subsequent randomized, double-blind, placebo-controlled, phase 3 trial (VIOLET), including 1360 patients (1078 vitamin D-deficient at baseline, defined as a 25-hydroxyvitamin D level < 20 ng/mL) receiving a single enteral dose of 540,000 IU of vitamin D or matched placebo, did not reveal clinically important differences between the groups with respect to secondary clinical, physiological, or safety endpoints. The severity of vitamin D deficiency at baseline did not affect the association between the treatment assignment and mortality.

In a recent sub-study of the VIOLET trial, long-term cognitive outcomes were measured. The adjusted median score at follow-up (median 443 days) was not significantly different (adjusted OR, 0.83; 95% CI, 0.50 to 1.38) [87].

Another randomized, placebo-controlled, double-blind, multicenter, international trial is ongoing, with a planned recruitment of 2400 adult patients with severe vitamin D deficiency (25-OH vitamin D ≤ 12 ng/mL) receiving an enteral loading dose of 540,000 IU cholecalciferol within 72 h after ICU admission, followed by 4000 IU daily for 90 days, or placebo [88].

So far, there is no clear evidence for pharmacological vitamin D supplementation in patients with established deficiency.

Given the controversial evidence, different recommendations can be found among the guidelines [1,2] (see Table 1).

**Table 1.** Current guideline recommendation regarding micronutrients, vitamins, and antioxidants.


DGEM, German Society for Nutritional Medicine; ESPEN, European Society for Clinical Nutrition and Metabolism; A.S.P.E.N., American Society for Parenteral and Enteral Nutrition.

#### *8.4. Vitamin C*

Vitamin C is a pleiotropic nutrient and powerful antioxidant. While suboptimal vitamin C status is common among critically ill patients, vitamin C is currently not recommended as pharmacotherapy for these patients. Nevertheless, a very recent meta-analysis from Patel et al. (accepted for publication, registered at PROSPERO: CRD42021244074) demonstrated a trend towards a reduction in overall mortality (RR 0.88, 95% CI 0.75 to 1.02, *p* = 0.09), which became significant when comparing high-dose vitamin C supplementation (≥10,000 mg/d) with placebo (RR 0.70, 95% CI 0.52 to 0.96, *p* = 0.03). In the CITRIS-ALI trial from Fowler et al. in 2019, vitamin C-infused patients exhibited a significant reduction in 28-day all-cause mortality (χ<sup>2</sup> = 4.84; *p* = 0.03) [89].

Regarding the "Marik cocktail", a treatment with intravenous hydrocortisone, ascorbic acid (vitamin C), and thiamine (HAT) [90], a current meta-analysis from 2021 in septic patients found no significant difference between the intervention and control groups in long-term mortality, ICU mortality, incidence of acute kidney injury, hospital and ICU LOS, or ICU-free days on day 28 [91]. There was, however, a significant difference in the reduction of the SOFA score on day 3 from baseline (MD −0.92; 95% CI −1.43 to −0.41; *p* < 0.05).

In the before-mentioned meta-analysis (PROSPERO: CRD42021244074), vitamin C monotherapy was associated with a significant reduction in overall mortality (RR 0.66, 95% CI 0.49 to 0.89, *p* = 0.006), while there was no effect on overall mortality in the trials administering vitamin C in combination with thiamine and hydrocortisone (RR 0.99, 95% CI 0.82 to 1.19, *p* = 0.91; the test for subgroup differences was significant, *p* = 0.02).

**Pragmatic recommendations:**


#### **9. Monitoring and Complications**

In addition to the clinical examination of the patient's abdomen, blood chemistry including serum glucose, triglycerides, lactate, and procalcitonin should be measured. Phosphate should be controlled to detect and treat a potential refeeding syndrome. If measurement of nitrogen balance is not feasible, the urea excretion rate per 24 h will help to estimate the extent of catabolism. For the estimation of intestinal function, the potential impact of biomarkers such as citrulline and fatty acid binding protein plasma levels has to be validated in clinical studies.

#### *9.1. Gastrointestinal Intolerance*

There is a variety of definitions of feeding intolerance, resulting in considerable uncertainty in clinical management [92]. Many critically ill patients experience feeding intolerance and motility disorders, which include delayed passage with slow gastric emptying and constipation, and accelerated passage with impaired small intestinal nutrient absorption or nutrition-related diarrhea [92,93]. According to the various guidelines of the ERAS (Enhanced Recovery After Surgery) Society [94–96], causes of GI intolerance include opioid analgesia, sedation, edema, and insufficient stimulation. Treatment options are oral nutrition/EN, prokinetics, mobilization and physical therapy, and optimization of sedation and volume status. Clinical observation of the patient's abdomen, bowel motility, and gastric reflux is mandatory.

Measurement of GRV is not standardized, and the optimal threshold is uncertain. Therefore, the value of GRV measurement has been controversial. Monitoring of GRV is still considered relevant in surgical ICU patients and severely critically ill patients with a high risk for GI dysfunction. A GRV of more than 500 mL/6 h may be considered critical [93]. In two controlled studies, omitting GRV monitoring had no significant effect on the risk of ventilator-associated pneumonia in adults receiving mechanical ventilation and EEN [97,98].

Additional possibilities to monitor GI tolerance include sonography and CT scans. If the latter shows bowel loops with intramural air accumulation, bowel ischemia must be considered. Intramucosal pHi tonometry as a tool for the measurement of splanchnic perfusion has not been used frequently in clinical practice.

A research agenda, including a core set of monitoring parameters and outcomes, has been proposed by the working group on GI failure of the European Society of Intensive Care Medicine (ESICM) [99].

#### *9.2. Metabolic Intolerance*

Hyperglycemia may be due either to hyperalimentation in the (acute) phase of insulin resistance or to underlying subclinical diabetes. Hyperglycemia is associated with increased mortality, which has been the rationale for intensified insulin therapy [100]. To avoid life-threatening hypoglycemia, glucose levels up to 180 mg/dL (10 mmol/L) are nowadays accepted [5,101]. Reduction of glucose calories should be considered before insulin is administered in nondiabetic patients at moderate dosages of 0–4 IU/h. Intensive insulin therapy is not recommended, to avoid the potential consequences of hypoglycemia [5].
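The glucose limit above is given in both unit systems; since guidelines and laboratories differ in the units they report, a minimal conversion sketch may be useful (the function and constant names are ours, not from any guideline; the conversion factor follows from the molar mass of glucose, ~180.16 g/mol):

```python
GLUCOSE_LIMIT_MGDL = 180.0  # upper glucose limit accepted by current guidelines [5,101]

def mgdl_to_mmol_l(glucose_mgdl: float) -> float:
    """Convert a glucose concentration from mg/dL to mmol/L.

    1 mmol/L of glucose corresponds to ~18.016 mg/dL.
    """
    return glucose_mgdl / 18.016

def exceeds_limit(glucose_mgdl: float) -> bool:
    """Check whether a measured glucose value exceeds the accepted limit."""
    return glucose_mgdl > GLUCOSE_LIMIT_MGDL
```

For example, the accepted limit of 180 mg/dL converts to approximately 10 mmol/L, matching the figure quoted in the guidelines.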

To prevent hypertriglyceridemia, bolus application of lipids should be avoided. During continuous lipid infusion, serum triglyceride levels should not exceed 400 mg/dL [101]. Reversible liver steatosis or hypertriglyceridemia-induced acute pancreatitis may develop if metabolic control cannot be achieved within several days.

In severely malnourished patients and patients after prolonged periods of starvation, a refeeding syndrome may occur within the first few days after starting MNT. Supply of carbohydrates and fluid will stimulate insulin secretion and an intracellular shift of glucose and electrolytes. The decrease of serum potassium, magnesium, and phosphate levels will lead to impaired neuromuscular transmission, causing life-threatening arrhythmias and convulsions. Calorie and fluid administration should be increased slowly under electrolyte and preferably ECG monitoring [102]. In high-risk patients, preventive thiamine administration (200 mg once daily for 2 days) is necessary.

**Pragmatic recommendations:**


#### **10. Tailoring MNT to the Individual Patient's Needs**

The heterogeneity of patients in the ICU complicates MNT, and many different factors such as primary disease, comorbidities, and phase of critical illness influence the patient's individual requirements. The most important aspects of the different patient groups are summarized in Figure 1 to give a pragmatic overview.

**Figure 1.** Overview of important aspects of medical nutrition therapy in different patient groups. Abbreviations: EN: enteral nutrition, GI: gastrointestinal, PN: parenteral nutrition, RRT: renal replacement therapy.

#### *10.1. Adapting MNT to the Phase of Critical Illness*

MNT must be adjusted to the metabolic tolerance, which may change considerably across the different phases of critical illness and the resulting catabolic response [1,2]. While metabolic tolerance may be extremely limited by severe inflammation in the early acute phase, bearing a high risk of overfeeding, the situation differs in the post-acute phase with chronic inflammation or beginning resolution and recovery: here, the shift to anabolism carries the risk of underfeeding. Therefore, due to the specific pathophysiologic and resulting metabolic changes, the guidelines recommend an individual adaptation of MNT to the different phases of critical illness [1,2].

Immediately after onset of the critical illness, the 'acute' phase begins, which can be divided into an 'early acute' phase (about 1–3 days post-onset, with the possibility of fatality due to the most severe illness entity) and a 'late acute' phase (lasting approximately 2–4 days if the patient survives the early acute phase). The post-acute phase can be described as a 'recovery' phase (duration >7 days), which is usually spent in the primary care hospital. During this phase of recovery from catabolic critical illness, substrate tolerance has normalized with a metabolic shift to anabolism. From a nutritional point of view, sufficient macronutrient supply in this period may be considered pivotal to the patient's recovery and long-term outcome. The amount of administered calories is considered to be 1.2–1.5 fold the calculated energy requirement. Because these patients are weaned from ventilation and able to eat, it is frequently (and falsely) assumed that they will manage to cover these requirements by the oral route. It has recently been shown that this may be achieved only by a combination of oral and enteral feeding [103]; indeed, after extubation, many patients receive no more than 700 kcal/d [104]. Reasons may be early discontinuation of MNT in favor of oral nutrition (especially in case of discharge to standard care) and limited oral intake due to post-critical weakness, fatigue, anorexia, and isolation. Documentation of the amount of oral intake is mandatory, and supplementation with oral nutritional supplements (ONS), in some cases EN or even SPN, should be considered.

After the recovery phase, the 'rehabilitation' phase (lasting several months) follows, in which, among others, the metabolic damage suffered initially is repaired slowly. Usually, patients will not go through this phase in the primary care hospital. Alternatively, the 'post-acute' phase may merge into a 'chronic' phase (of uncertain duration) characterized by persistent organ dysfunction and an uncertain prognosis. This particular course may be described as a "persistent inflammatory immunosuppressed catabolism syndrome" [105,106]. If there is a new disturbance of homeostasis, the process will start again with the acute phase.

**Pragmatic recommendations:**


#### *10.2. Critically Ill Patients with Altered Nutritional Status*

Malnutrition is a common phenomenon and includes under- as well as overnutrition. While undernourished patients with low weight and BMI can be easily recognized during clinical routine, malnutrition may be underestimated in patients with normal or elevated BMI [3]. An obese patient with weight loss in the past months or with reduced nutritional intake is at risk for malnutrition, which may not be diagnosed at first sight. Therefore, a combination of clinical evaluation and the use of a validated screening tool may be recommended [1–3,107]. If the patient is at nutritional risk, a detailed nutritional assessment should follow.

#### 10.2.1. Undernourished Patients

In patients with preexisting malnutrition, according to the DGEM guideline, the same energy and protein targets may be used as in other patients, as there is still a lack of data concerning the therapeutic relevance of MNT in this patient cohort, although the prognostic relevance of malnutrition is clear [1]. In these patients, MNT should be commenced early to avoid large cumulative macronutrient debts. This may be achieved through ONS or through SPN. Should EN not be feasible, an early hypocaloric PN should be administered (75% of caloric target, protein ≥ 1 g/kg/d) [1].

In severely malnourished patients, according to the DGEM guideline, a more aggressive MNT is not recommended to avoid GI and metabolic intolerance and potential complications such as acute hyperalimentation, refeeding syndrome, or increased rates of infection [1].

Contrasting this recommendation, the A.S.P.E.N. suggests a rapid progression of preferably EN with the aim to reach the target within 24–48 h under careful monitoring. Within 48–72 h, >80% of energy and protein goals should be achieved [3]. However, these recommendations of the A.S.P.E.N. rely on debatable evidence.

While the essential role of micronutrients for many biological processes in the human body is accentuated in the current literature, reliable evidence remains sparse. Therefore, micronutrients in pharmacological dosages should only be administered on the basis of clinically suspected or measured micronutrient deficiencies, as stated by the DGEM [1].

Special attention should be paid to the occurrence of a refeeding syndrome in severely malnourished patients.

#### 10.2.2. Obese Patients

Obese patients do not have 'reserves', as is often assumed; on the contrary, they frequently suffer from disturbances in substrate utilization, predisposing this patient group to loss of muscle mass during an ICU stay. Increased attention should be paid to the monitoring of metabolism, markers of metabolic syndrome, and possible comorbidities [3]. While there is only weak evidence, the goals of MNT are avoidance of muscle catabolism, improvement in body composition, reduction of insulin resistance and hyperglycemia, and reduction of infection rates.

For obese patients with BMI ≥ 30 kg/m2, similar guideline recommendations were issued as for other ICU patients [1–3]. The best MNT for obese patients is still debated, particularly because the evidence stems from smaller and older observational studies, as do the formulas to calculate energy demand in these patients; therefore, indirect calorimetry should be used to measure EE [1,2]. Otherwise, obese patients should be nourished weight-adapted, hypocalorically and high in protein (1.5 g protein (1.8 g amino acids)/kg ideal BW/d) [1–3,108]. For calculation of ideal BW, the so-called Peterson formula is recommended in slightly different modifications by the ESPEN and DGEM guidelines (DGEM: ideal BW = 48.4 + 77.0 × (height − 1.50 m), related to a BMI of 22 kg/m2; ESPEN: adjusted BW = (current BW − ideal BW) × 0.33 + ideal BW, with ideal BW = 2.2 × BMI + 3.5 × BMI × (height − 1.50 m), related to an "overweight" BMI of 25 kg/m2) [109]. In clinical routine, a hypocaloric high-protein nutrition may be achieved via enteral or parenteral protein supplements. Additionally, patients who experienced weight loss in the past months or who underwent bariatric surgery should receive vitamin and trace element supplements with a special focus on thiamine [1,3].
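The two guideline variants of the Peterson formula above differ only in the target BMI (22 kg/m2 for DGEM, 25 kg/m2 for ESPEN). A minimal Python sketch of the calculation; the function names and the example patient (1.70 m, 100 kg) are hypothetical illustrations, not from the guidelines:

```python
def peterson_ideal_bw(height_m: float, target_bmi: float) -> float:
    """Peterson formula: ideal body weight (kg) for a given target BMI.

    With target_bmi = 22 this reproduces the DGEM form
    48.4 + 77.0 * (height - 1.50 m); the ESPEN variant uses target_bmi = 25.
    """
    return 2.2 * target_bmi + 3.5 * target_bmi * (height_m - 1.50)

def espen_adjusted_bw(current_bw_kg: float, height_m: float) -> float:
    """ESPEN adjusted BW: ideal BW (BMI 25) plus one third of the excess weight."""
    ideal = peterson_ideal_bw(height_m, 25.0)
    return (current_bw_kg - ideal) * 0.33 + ideal

# Hypothetical example: 1.70 m, 100 kg patient
dgem_ideal = peterson_ideal_bw(1.70, 22.0)   # about 63.8 kg (DGEM ideal BW)
adjusted = espen_adjusted_bw(100.0, 1.70)    # about 81.6 kg (ESPEN adjusted BW)
protein_target_g = 1.5 * dgem_ideal          # 1.5 g protein/kg ideal BW/d
```

Note that both variants collapse to the same expression once the target BMI is fixed, which is why the DGEM form can be written with the precomputed constants 48.4 and 77.0.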

**Pragmatic recommendations:**


#### *10.3. Elderly Critically Ill Patients*

In elderly patients, the most important goal is to optimize functional capabilities to achieve the best possible quality of life and autonomy [110]. Elderly patients have higher incidences of comorbidities, malnutrition, sarcopenia, and cachexia, which may be regarded as the frailty syndrome [110]. Because the multidimensional syndrome "frailty" and other reasons for malnutrition, such as depression, anorexia, polypharmacy, low activity, catabolism of chronic diseases, and inflammation, are so common in elderly patients and affect nutritional status, all elderly patients should be screened for malnutrition and be treated accordingly [111,112]. If the elderly critically ill patient is able to feed orally, sedative measures and dietary restrictions should be minimized to avoid a reduced food intake, which may negatively affect patient outcome [110]. In obese elderly patients, a weight-reducing diet should be avoided to prevent muscle catabolism [110]. Possibilities to optimize oral nutrition include enriched nutrition, several small meals, ONS, monitoring of food intake, and company during meals. If oral nutrition is insufficient, EN and/or PN should be administered with the same indications as in other critically ill patients, while ensuring an early start of MNT if indicated [110].

In the absence of methodologically sound clinical evidence, the DGEM chose not to give special recommendations for this patient group [1]. The ESPEN assumes that an adequate macronutrient supply may lower the incidence of frailty in elderly patients and recommends a higher protein target (1.2–1.5 g/kg/d) for elderly malnourished patients [2]. In elderly critically ill patients, a hypercaloric and high-protein supplementation (30 kcal/kg/d energy and ≥1 g/kg/d protein) is likewise recommended by the ESPEN [110]. Micronutrients should be supplied as for healthy elderly patients, and specific micronutrient deficits shall be counterbalanced [110]. The A.S.P.E.N. does not provide special recommendations for elderly ICU patients.

In addition to an effective MNT, the focus should be on adequate hydration, a potential refeeding syndrome [113], as well as on preservation of muscle mass and physical activity to allow for independent living after hospital discharge [110].

**Pragmatic recommendations:**


#### *10.4. Critically Ill Patients in Shock*

Physiologic advantages of EN are the maintenance of the gut mucosa and the GI barrier, the modulation of the inflammatory reaction, and the reduction of insulin resistance. In case of compromised hemodynamics, the concern for mesenteric ischemia due to the increased metabolic demands of the gut is voiced in the guidelines [2,3]. Cautious EN in patients requiring catecholamines or vasopressors may be considered [114], but there is still a risk for nonocclusive bowel disease and a lack of data from randomized controlled trials [115–117]. In the NUTRIREA-2 trial including 2410 patients, early EN did not reduce mortality or the risk of secondary infections but was associated with a greater risk of digestive complications compared with early isocaloric PN [22].

The concept of "minimal trophic nutrition" recommends starting EN at a low flow rate (10–20 mL/h) [118]. In case of hemodynamic instability and the administration of catecholamines, limited enteral tolerance should be anticipated. Careful clinical examination of the abdomen must be performed, keeping in mind that in most such patients the energy target cannot be achieved via EN within the first ICU week. A complete stop of enteral supply should be avoided whenever possible [23].

The guidelines recommend EEN in septic patients after achieving hemodynamic stability [1–5]. The MNT should be progressed slowly to achieve more than 80% of the nutritional target within the first week [3]. If EN is contraindicated, PN should be used. According to the ESPEN, SPN shall be administered after day 3 [2]. Contrasting the ESPEN guideline, the A.S.P.E.N. and the Surviving Sepsis Campaign (SSC) do not recommend SPN or PN in the acute phase of severe sepsis due to the limited substrate utilization [3,5]. Because septic patients in the hypermetabolic phase may have an increased need for vitamins and trace elements, these should be supplemented in physiological dosages after a proven deficit or if PN is needed [1]. Immunonutrition, as well as micronutrients such as selenium, glutamine, arginine, or carnitine, should not be administered [3,5].

Attention should be paid to the macronutrient balance in the first week of sepsis, because EN alone is often hypocaloric due to limited GI and metabolic tolerance. Whether this is advantageous or not remains controversial. The GI tolerance may be improved through prokinetics and postpyloric nutrition tubes, according to the SSC guideline [5].

The guidelines furthermore state that patients with septic shock can receive EEN after stable hemodynamics are ensured (mean arterial pressure ≥60 mmHg and stable or falling lactate and vasopressor requirements), but EN should be progressed slowly and adapted to the patient's tolerance [2,3]. In patients with uncontrolled shock, no EN should be given [2,3].

**Pragmatic recommendations:**


#### *10.5. Critically Ill Patients after Trauma and Burn Injury*

Due to insufficient data in these patient groups, they should be treated according to the general recommendations for ICU patients [1]. The A.S.P.E.N. recommends that EN should be preferred to PN [3].

Patients with burn injury and exudative losses through wound surfaces have an increased need for vitamins and trace elements. A highly variable EE leads to inaccuracy of estimation formulas regarding macronutrient needs. Therefore, indirect calorimetry shall be preferred [1,3]. The ESPEN recommends—contrasting the DGEM—enteral glutamine to be supplemented (0.3–0.5 g/kg/d) for 10–15 days [2] and the A.S.P.E.N. a higher dosage of protein (1.5–2 g/kg/d) [3]. According to DGEM, protein losses via drains and dressings should be compensated [1].

For trauma patients, the ESPEN recommends glutamine (0.2–0.3 g/kg/d) for the first 5 days (10–15 days if wound healing is complicated) [2]. Immunomodulating solutions with fish oil and arginine may be considered for patients after severe trauma according to the A.S.P.E.N. [3]. The DGEM advises against the use of these substrates in the sense of pharmacotherapy [1].

**Pragmatic recommendations:**


#### *10.6. Critically Ill Patients with Central Nervous Diseases*

The guidelines state that patients with head trauma, ischemic or hemorrhagic stroke and spinal trauma should receive EEN [2–4].

Patients with **stroke** are especially vulnerable to malnutrition, dehydration, and aspiration pneumonia due to impaired consciousness, swallowing problems, and cognitive and perceptive deficits [119]. Therefore, all stroke patients should be screened for swallowing problems and malnutrition and treated accordingly [120]. ONS may support the individualized therapy [119,120]. In patients in whom impaired consciousness or dysphagia prohibits oral nutrition, EN should be started within 72 h via nasogastric tube [119].

Young patients with **head trauma** are usually not malnourished at ICU admission but have an increased nutritional risk due to long ICU stays, variable EE (up to 200%), and frequently profound muscle catabolism. A higher protein supplementation (1.5–2.5 g/kg/d) may therefore be considered [2,3]. The A.S.P.E.N. suggests the administration of formulas including arginine and omega-3 fatty acids [3].

Patients with a diagnosed **swallowing disorder**, which affects up to 60% of the general ICU population [121], should receive logopedic therapy and an optimization of food texture to optimize oral food intake, as stated in the ESPEN guideline "clinical nutrition in neurology" [119]. Special attention should be paid to oral and pharyngeal residue of food, sufficient food and fluid intake, and aspiration [119,120]. If no safe and sufficient nutrition can be ensured, EN via the nasogastric route shall be added to the oral nutrition [2,119,120]. If there is an additional risk of aspiration, EN boluses should be avoided, a postpyloric tube should be placed, the head should be elevated by 30°, and prokinetics should be considered [2,3,120]. PN is another option for these patients, especially in cases with preexisting malnutrition or if EN is not sufficient to ensure adequate nutrition and hydration [2,120].

**Pragmatic recommendations:**


#### *10.7. Critically Ill Patients with Cardiac Diseases*

Malnutrition, cachexia, and sarcopenia are common comorbidities in patients with cardiac failure [17,122,123]. In contrast to often quickly recovering patients after elective "simple" cardiac surgery, patients with preexisting malnutrition, complex heart surgery, and an elevated risk for prolonged ICU stays should receive regular screenings for nutritional risk [123,124]. At the same time, these patients are vulnerable to fluctuations of the volume status, which is why the use of energy-dense formulae seems reasonable.

For patients with mechanical assist devices (extracorporeal membrane oxygenation (ECMO)/extracorporeal life support (ECLS), or ventricular assist devices (VAD)), the same recommendations apply as for other critically ill patients. According to the guidelines, in patients with stable hemodynamics and an intact GI system, EN should be preferred [1,2,4]. Regarding monitoring, special attention should be paid to GI bleeding during therapeutic anticoagulation. In patients on ECMO/ECLS, diagnostic measures and therefore interruptions of EN are frequent. Although evidence remains sparse, SPN seems safe and should be infused via central venous catheter (as opposed to directly into the ECMO circuit) [1]. Indirect calorimetry is not reliable in this patient group due to the extracorporeal CO2 elimination; therefore, weight-based formulas should be used to estimate energy targets [1].

**Pragmatic recommendations:**


#### *10.8. Critically Ill Patients with Respiratory Diseases*

In patients with respiratory failure, in theory, formulae with increased fat content and reduced carbohydrates may be useful to influence the respiratory quotient and decrease CO2-production. The A.S.P.E.N. advises against these formulae in patients with acute respiratory failure due to the lack of evidence, but recommends avoiding hyperalimentation, because lipogenesis increases CO2-production [3].

According to the guidelines, EN should not be administered to patients with life-threatening hypoxia, hypercapnia, and acidosis as signs of respiratory decompensation. In patients with stable and compensated respiratory failure, EN can be commenced [2,4]. A restriction of fluids seems reasonable to avoid aggravating overhydration and edema; therefore, energy-dense formulae (1.5–2 kcal/mL) are recommended [3]. Hypophosphatemia is a common (and commonly unrecognized) problem and may lead to weakness of the respiratory muscles and weaning failure. Therefore, phosphate should be monitored closely and replaced as necessary [3]. A functional GI system should be used, so EEN should be performed also in patients managed in the prone position [2]. The hypothesis that abdominal compression leads to problems with transport and resorption has not been proven [1,2,4]. One possibility to increase tolerance and avoid aspiration of EN is to bring the entire bed into a 30° head elevation. Measurement of intraabdominal pressure may aid in the early detection of an abdominal/GI problem (not only) in patients in the prone position.

#### Critically Ill COVID-19 Patient

Most COVID-19 patients admitted to the ICU are at high risk of or have preexisting malnutrition [125]. According to Ochoa et al., COVID-19 patients present with three different phenotypes of nutrition risk: (1) the frail older patient, (2) the patient with severe ongoing chronic illness, and (3) the patient with severe and morbid obesity [126]. Measurement of the upper waist circumference of COVID-19 patients has recently shown that each centimeter of increase is associated with a 1.13-fold higher probability of intensive treatment and a 1.25-fold higher probability of mechanical ventilation [127].

Regarding MNT in this patient cohort, no RCTs exist so far. Instead, the ESPEN and A.S.P.E.N. have published expert statements as an adaptation of their existing guidelines [128,129].

Before admission to the ICU, anorexia secondary to infection, dyspnea, dysosmia, dysgeusia, and impaired meal preparation during quarantine may have reduced food intake. Therefore, upon ICU admission, nutritional assessment is mandatory. An individualized approach including indirect calorimetry is recommended, because persistent hypermetabolism was observed in these patients [130,131].

While EN may be performed even in prone position, Martindale et al. recommend a lower threshold for switching to PN in cases of intolerance, high risk of aspiration, or escalating vasopressor support [129]. In case of ARDS, Thibault et al. have recommended the use of EN enriched with omega-3 fatty acids, and for PN fish oil-enriched intravenous fat emulsions [125].

In a multicenter, double-blind RCT conducted at two sites in Sao Paulo, Brazil, 240 hospitalized patients with moderate to severe COVID-19 were randomized to receive a single oral dose of 200,000 IU of vitamin D3 or placebo [132]. LOS, defined as the primary endpoint, was not significantly different between the vitamin D3 and placebo groups (log-rank *p* = 0.59; unadjusted hazard ratio for hospital discharge, 1.07 [95% CI, 0.82 to 1.39]; *p* = 0.62). Furthermore, there were no significant differences in in-hospital mortality, admission to the ICU (*p* = 0.30), or need for mechanical ventilation (*p* = 0.09). Mean serum levels of 25-hydroxyvitamin D significantly increased after a single dose of vitamin D3 vs. placebo (44.4 ng/mL vs. 19.8 ng/mL; difference, 24.1 ng/mL [95% CI, 19.5 to 28.7]; *p* < 0.001).

**Pragmatic recommendations:**


#### *10.9. Critically Ill Patients with Abdominal Diseases*

Critically ill patients requiring abdominal surgery often present with anatomical and functional characteristics that require a critical evaluation and adaptation of MNT. Compromised functions include GI motility, digestion, and absorption of nutrients, which often lead to a reduced tolerance of EN. On the other hand, EN also nurtures the gut mucosa and increases intestinal perfusion and peristalsis [3,17].

#### 10.9.1. Patients after GI-Surgery

If the GI system is functional, it shall be used in patients after **GI surgery**, as recommended uniformly by all guidelines [1–4]. Therefore, in most abdominal-surgery patients, EEN within 24–48 h is recommended [1,12]. Even though the nutrition of these patients remains a controversial topic, it is not necessary to withhold EN as per standard or administer clear liquids only [3]. The MNT should be discussed in interdisciplinary teams to optimize the nutrition for each individual patient [12].

In patients after **upper GI surgery**, an intraoperatively placed nasojejunal or percutaneous postpyloric tube can allow EN into the distal GI tract without risking injury to a fresh anastomosis during feeding tube placement and without the risk of regurgitation and aspiration. In general, fresh anastomoses *per se* are not a contraindication for EN, but the individual therapy needs to be decided in the interdisciplinary team [2].

Abdominal surgery patients with **complicated courses** often accumulate great energy debts. Therefore, the ESPEN recommends considering an early SPN [2]. In patients with leaking anastomoses or with internal or external fistulas, an access to the distal part of the gut shall be used for EN. Enteroclysis shall be considered and re-evaluated on a regular basis to increase resorption of nutrients and prevent mucosal atrophy as well as bacterial overgrowth with the risk of bacterial translocation and bacteremia [2]. If EN is insufficient, the patient needs PN [2].

Patients with **open abdomen** should receive an EEN (24–48 h postinjury) in the absence of bowel injury [3,4]. Protein losses via drains and dressings should be compensated in the form of enteral protein supplements or parenteral albumin (15–30 g protein/ liter exudate) [1,3]. In addition to the above-mentioned guideline recommendations, an algorithm was proposed by Friese et al. in 2012 [133] and Moore and Burlew in 2016 [134].
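The compensation of exudative protein losses described above is simple arithmetic; a minimal Python sketch (the function name, the example exudate volume, and the chosen concentration within the cited 15–30 g/L range are hypothetical illustrations):

```python
def exudate_protein_loss_g(exudate_l_per_day: float, protein_g_per_l: float) -> float:
    """Estimate daily protein loss (g) via drains and dressings.

    The cited range is 15-30 g protein per liter of exudate [1,3];
    the caller chooses a value within that range.
    """
    if not 15.0 <= protein_g_per_l <= 30.0:
        raise ValueError("protein_g_per_l outside the cited 15-30 g/L range")
    return exudate_l_per_day * protein_g_per_l

# Hypothetical example: 1.5 L exudate/day at an assumed 20 g/L
loss = exudate_protein_loss_g(1.5, 20.0)  # 30.0 g/d to be compensated
```

The result would guide the dose of enteral protein supplements or parenteral albumin added on top of the regular protein target.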

#### 10.9.2. Patients with Liver Failure

According to the ESPEN guideline on clinical nutrition in liver disease [135] and the A.S.P.E.N. [3], patients with liver failure have an increased risk for malnutrition and may develop severe disturbances of carbohydrate, protein, and fat metabolism with impeded hepatic gluconeogenesis and lactate clearance, as well as catabolism [3,135]. Therefore, all patients with liver disease should be screened for malnutrition and treated accordingly [135]. EE may be highly variable in these patients; therefore, indirect calorimetry should be used [3,135]. If the latter is not possible, the patient's dry weight shall be used to estimate energy and protein targets, and 1.3× the resting EE should be supplied [135]. Patients with acute liver failure should receive oral nutrition as long as possible, or, in case of hepatic encephalopathy, EEN [135]. A generalized protein restriction is not recommended, in order to prevent muscle degradation, which contributes to the development of hepatic encephalopathy [3]. In patients with encephalopathy and high ammonia levels, protein supplementation can be delayed for 24–48 h [135]. A standard EN formula shall be used because of insufficient evidence for the use of branched-chain amino acids [3,135].

#### 10.9.3. Patients with Acute Pancreatitis

In patients with acute pancreatitis, there is a great range of severity. Therefore, the MNT should be re-evaluated frequently. In mild cases, patients can receive oral nutrition ad libitum. In moderate or severe cases, EEN should be commenced at a low infusion rate via the gastric or jejunal route. If the patient undergoes surgery for necrosectomy, placement of a needle catheter jejunostomy should be considered [136]. Before PN is considered one week after onset of pancreatitis, measures to increase GI tolerance can be taken, according to the A.S.P.E.N. [3].

**Pragmatic recommendations:**


#### *10.10. Critically Ill Patients with Renal Diseases*

Patients with renal failure represent a heterogeneous group with different needs regarding macro- and micronutrients. EN should be preferred in this patient group. If an EEN is not possible, SPN or TPN should be started early according to the German guideline for Patients with Kidney Disease [137].

A patient with acute kidney failure has the same needs for energy and protein and should be treated according to the standard for other critically ill patients, in contrast to patients with chronic kidney failure, who have increased energy demands, as stated in the guidelines [1,3,137]. Even though deranged potassium and phosphate values are rare in patients with acute-on-chronic kidney failure, careful electrolyte monitoring is important. In case of electrolyte derangements and no indication for RRT, special renal formulae can be used [3]. These formulae contain less fluid and protein, are high in calories, have a lower potassium and phosphate content, and can contain additional substances such as carnitine [137].

RRT increases losses of energy and water-soluble molecules (such as amino acids, electrolytes, trace elements, and vitamins) and induces systemic inflammation and protein catabolism. On the other hand, substances such as lactate and citrate are added in excess with the dialysis or hemofiltration solution. This must be taken into account in calorie calculations. The guidelines vary in their recommendations regarding energy supplementation for ICU patients with kidney failure during RRT. The DGEM guideline recommends macronutrient calculation as per the standard in other ICU patients, which should be ramped up slowly while considering macronutrient losses. The A.S.P.E.N. guideline suggests administration of high protein dosages of 2.5 g/kg/d to achieve nitrogen balance [1,3,137]. To cope with the increased need for vitamins and trace elements, an increased supply is recommended, with special regard to vitamin C, folate, and thiamine [1,137].

**Pragmatic recommendations:**


#### **11. Summary and Conclusions**

Despite ongoing research activities, the level of evidence often remains low due to a lack of data from large RCTs.

Evidence-based international guidelines are available. Nevertheless, the implementation of their recommendations into clinical routine often remains insufficient.

It should be kept in mind that every ICU patient is at risk of malnutrition; regular screening for malnutrition should be performed, and close monitoring and frequent adaptation of nutrition are necessary. Early enteral nutrition with a standard formula is preferred in almost all critically ill patients. The addition of parenteral nutrition and micronutrients should be considered individually. Feeding protocols tailored to the treating unit will improve nutritional performance.

Due to the heterogeneity of the patients, MNT should be carefully adapted to the individual patient, with special focus on the phase of critical illness, metabolic tolerance, leading symptoms, and comorbidities. Nutrition in the ICU is a complex therapy requiring an interdisciplinary approach and frequent reevaluation. This article can only be regarded as an introduction to and summary of the complex topic of nutrition in the critically ill patient and may be of help in clinical routine.

Despite recent advances, long-standing open questions remain:


**Author Contributions:** Conceptualization, A.H. and A.W. together with G.E.; Writing—original draft preparation, A.H. and A.W. together with G.E.; Writing—review and editing, A.H. and A.W. together with G.E.; Visualization, A.H.; Supervision, A.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** A.H. is currently supported by a stipend from the Ministry of Culture and Science of the State of North Rhine-Westphalia "Landesprogramm für chancengerechte Hochschulmedizin".

**Conflicts of Interest:** G.E. received lecture fees, travel support, and advisor honoraria from Fresenius Kabi within the last three years. A.W. received lecture fees from Baxter, B. Braun, Fresenius Kabi, Ethicon, and the Falk Foundation, and research grants from Baxter and Mucos. A.H. received a research grant from Fresenius Kabi.

#### **Abbreviations**


#### **Appendix A**

**Table A1.** Weight-based calculation of macronutrient requirements. \* Recommendation for the acute phase; use actual body weight for calculation.


#### **References**


## *Review* **Impact of Dietary Factors on Brugada Syndrome and Long QT Syndrome**

**Sara D'Imperio <sup>1</sup>, Michelle M. Monasky <sup>1</sup>, Emanuele Micaglio <sup>1</sup>, Gabriele Negro <sup>1</sup> and Carlo Pappone <sup>1,2,</sup>\***


**Abstract:** A healthy dietary regimen is fundamental for the prevention of cardiovascular diseases (CVD). In inherited channelopathies, such as Brugada syndrome (BrS) and long QT syndrome (LQTS), sudden cardiac death can unfortunately be the first sign of disease. Several known factors are used to stratify the risk of developing cardiac arrhythmias, although none are determinative. These risk factors can be influenced by adjusting lifestyle habits, such as following a particular diet, thereby impacting the risk of arrhythmogenic events and mortality. To date, the importance of understanding the relationship between diet and inherited channelopathies has been underrated. Therefore, we describe herein the effects of dietary factors on the development of arrhythmia in patients affected by BrS and LQTS. Modifying the diet might not be enough to fully prevent arrhythmias, but it can help lower the risk.

**Keywords:** Brugada syndrome; long QT syndrome; diet; ingredients; glucose; ketone bodies; ROS; sudden cardiac death

#### **1. Introduction**

A healthy dietary regimen is fundamental for the prevention of cardiovascular diseases (CVD), which are widely known to be responsible for 33% of all deaths worldwide [1]. One of the multiple types of cardiac disorders is arrhythmia, which refers to a group of conditions that interfere with heart rhythm. Among the different types of cardiac arrhythmias are the inherited channelopathies (IC), including long QT syndrome (LQTS), short QT syndrome (SQTS), Asian sudden unexplained nocturnal death syndrome (SUNDS), catecholaminergic polymorphic ventricular tachycardia (CPVT), and Brugada syndrome (BrS). Unfortunately, sudden cardiac death (SCD) can be the first symptom in patients affected by IC. Risk stratification and current diagnosis have primarily focused on clinically detectable changes and abnormalities in heart structure and function [2,3]. Several known markers are used to predict cardiac arrhythmias, although none are specific for patients affected by inherited channelopathies.

Several risk factors, such as obesity, diabetes, sleep apnea, anorexia nervosa, electrolyte imbalances, and unhealthy food consumption, are associated with these inherited channelopathies (Figure 1). It is therefore very important to address these risk factors in order to manage and prevent adverse outcomes. Most of the correlations are well established, and these risk factors can indeed be influenced by adjusting lifestyle habits, such as following a particular diet, thereby impacting the risk of arrhythmogenic events and mortality. Thus, the aim of this review is to present objective insights into different daily diets. Specifically, we highlight the effects of various dietary factors and their suspected roles in the development of arrhythmias in BrS and LQTS, providing a summary of the current literature and presenting questions that need to be the subject of future studies. This information is urgently needed to advise patients diagnosed with these syndromes in the clinic.

**Citation:** D'Imperio, S.; Monasky, M.M.; Micaglio, E.; Negro, G.; Pappone, C. Impact of Dietary Factors on Brugada Syndrome and Long QT Syndrome. *Nutrients* **2021**, *13*, 2482. https:// doi.org/10.3390/nu13082482

Academic Editor: Ina Bergheim

Received: 24 June 2021 Accepted: 19 July 2021 Published: 21 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

**Figure 1.** Cardiac arrhythmias, such as Brugada syndrome and long QT syndrome, can be triggered by a variety of factors, but maintaining a certain lifestyle may reduce these risks.

#### **2. Brugada Syndrome and Long QT Syndrome**

Brugada syndrome and long QT syndrome are among the most common inherited cardiac arrhythmias, with high risks of malignant arrhythmias, and they present with abnormalities in the 12-lead electrocardiogram (ECG): ST-segment elevation and prolonged QT interval, respectively.

BrS is associated with right ventricular conduction abnormalities and coved-type ST-segment elevation in the right precordial leads of the ECG [4]. The syndrome is clinically characterized by syncope episodes and SCD due to ventricular fibrillation [5]. The majority of patients are asymptomatic and are usually diagnosed by chance, owing to the dynamic nature of the BrS ECG pattern, which fluctuates throughout the day [6]. Moreover, these fluctuations can be induced by several factors, such as fever, hypercalcemia, diabetes, excessive consumption of food or alcohol, hyperkalemia, and certain drugs [7]. The coved-type ST-segment elevation is diagnostic when the "Type 1" Brugada pattern is seen, either on a spontaneous ECG or after a challenge with a class Ia antiarrhythmic drug, such as ajmaline, which can provoke the Type 1 pattern and thereby unmask the syndrome [8]. The true incidence of BrS is currently unknown, due to the limited availability of centers that can perform the drug challenge: these tests must be carried out in specialized centers capable of advanced resuscitation, because ajmaline can potentially induce life-threatening ventricular arrhythmias (VAs) [9]. For this reason, although used widely in Europe, ajmaline is not used in the United States of America. BrS is believed to be genetic and oligogenic in nature, although it may occur in a minority of families as a Mendelian condition [10,11].

LQTS is defined by a variable degree of QT prolongation in the absence of structural heart disease, and it can manifest in three main subtypes (LQTS1, LQTS2, and LQTS3). The syndrome is clinically characterized by syncope episodes and SCD due to ventricular tachyarrhythmias [12]. LQTS can be congenital or acquired; a prolonged QT interval may therefore result from genetic abnormalities, mineral imbalances, or certain medications [13]. It manifests as ECG abnormalities, including a prolonged QT interval, Torsade de Pointes (TdP), and ventricular fibrillation (VF) [14]. LQTS is usually diagnosed by measuring the QT interval on the ECG, and diagnosis can include adrenaline (epinephrine) and isoproterenol testing. The epinephrine test is able to unmask the prolongation of the QT interval in patients with

'concealed LQTS' [15]. The ECG may appear normal at rest, but, depending on the subtype, ECG abnormalities can be triggered by physical stress, emotional stress, dietary changes, certain drugs, or during sleep or rest [16,17]; LQTS has also been associated with metabolic syndromes and eating disorders [13]. LQTS primarily affects young people and is one of the main causes of SCD in this population [18].
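Since diagnosis turns on measuring QT prolongation, a minimal sketch of the heart-rate correction commonly applied to the measured QT interval (Bazett's formula, QTc = QT/√RR) may be useful. Bazett's formula is a widely used convention, not a method named by the cited studies, and the function name and example values below are illustrative assumptions of ours.

```python
from math import sqrt

def qtc_bazett(qt_ms: float, heart_rate_bpm: float) -> float:
    """Heart-rate-corrected QT interval per Bazett: QTc = QT / sqrt(RR),
    with the RR interval expressed in seconds."""
    rr_s = 60.0 / heart_rate_bpm  # RR interval in seconds
    return qt_ms / sqrt(rr_s)

# Illustrative values: a measured QT of 400 ms at 75 bpm (RR = 0.8 s)
print(round(qtc_bazett(400, 75)))  # 447
```

Cut-offs around 450 to 480 ms are commonly used in practice to flag a prolonged QTc, depending on sex and guideline.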

#### **3. BrS ECG Pattern Triggered by Food or Alcohol Intake**

Several studies have repeatedly shown that the BrS ECG pattern can appear after the consumption of alcohol [19,20] or after glucose-induced insulin secretion following the ingestion of a meal [21]. A 29-year-old BrS patient experienced several episodes of palpitations and syncope after alcohol consumption [22]. Another case report described a 25-year-old man who experienced three syncope events while consuming alcohol [23]. In yet another study, a 21-year-old man was diagnosed with BrS after he developed two episodes of syncope following the consumption of a large quantity of alcohol; specifically, his ECG during the syncope revealed ventricular fibrillation and ST-segment elevation at a high intercostal space [24]. In addition, a 56-year-old man experienced cardiac arrest while asleep after consuming one bottle of wine. Five years previously, the patient had already had an abnormal ECG showing ST-segment elevation; unfortunately, this abnormality was not considered important as a diagnostic criterion for BrS [25]. A nine-year-old boy, diagnosed with BrS by an ajmaline test, experienced syncope and cardiopulmonary arrest after the ingestion of a large hot dog [26]. Another case report described a 53-year-old man, diagnosed with BrS after four weeks of Ramadan fasting, who had syncope events and several sudden cardiac arrests after the ingestion of a large meal [27].

Intravenous infusion of glucose and insulin in patients affected by BrS results in a significant accentuation of the abnormal J-ST configuration [26]. In one study, 75% of BrS patients showed ECG fluctuations after an oral glucose tolerance test (OGTT), a higher incidence than in controls [28]. Although it is widely understood that large meals can trigger the BrS ECG pattern, the cellular mechanism by which high levels of glucose and/or insulin produce this ECG pattern in BrS is still unclear.

In conclusion, the relationship between large meal intake, alcohol, and the development of the BrS ECG pattern is still unclear. However, the interplay between them appears highly relevant and should be investigated further.

#### **4. Cortisol and Sudden Death**

The majority of sudden cardiac deaths seem to occur between 4 a.m. and 6 a.m., that is, during a period of nocturnal sleep in which cortisol concentrations tend to be lower than at other times of the day [29]. This observation suggests a role for abnormal sympathetic activity, which has been observed in a SUNDS (sudden unexplained death during nocturnal sleep) cohort; SUNDS is still considered to share common genetic causes with BrS [30,31].

Moreover, it is well known that cortisol can affect the incidence and clinical manifestations of sudden cardiac death. More recently, the role of the enzyme 11β-hydroxysteroid dehydrogenase type 1 (11β-HSD1) has been considered [32]. This enzyme is one of the most promising molecular targets for treating Type 2 diabetes mellitus and its complications [33]. 11β-HSD1 catalyzes the production of cortisol, and 11β-HSD1 levels correspond to cortisol concentrations [29]. High concentrations of cortisol have already been shown to be associated with both cardiac arrhythmias and diabetes mellitus [34]. Drug-mediated 11β-HSD1 inhibition alleviates most metabolic abnormalities associated with both diabetes and above-normal cortisol concentrations [32]. The expression of 11β-HSD1 can be induced by a high-fat diet in mouse models [35], suggesting that it would be useful to assess 11β-HSD1 activity in human patients with BrS. These mechanisms should therefore be further explored to better understand why the majority of sudden deaths occur during this time period.

#### **5. The Mechanisms behind Food and Alcohol Intake as a Trigger for BrS ECG Pattern Manifestation**

The mechanism behind the manifestation of ventricular arrhythmias after the ingestion of alcohol or large amounts of food is still uncertain. The main product of the metabolism of fatty acids (FA), carbohydrates, ketones, and amino acids [36] is adenosine triphosphate (ATP), which is essential as the energy source for cardiac work. The energy metabolic pathways include (1) glycolysis, where ATP is produced by glucose oxidation; (2) the citric acid cycle (Krebs cycle or TCA cycle), where guanosine triphosphate (GTP), nicotinamide adenine dinucleotide (NADH), and flavin adenine dinucleotide (FADH2) are produced by acetyl-CoA oxidation; (3) the electron transport chain (ETC), where most of the ATP is produced; and (4) fatty acid beta-oxidation, where FAs are broken down into acetyl-CoA, which is used by the TCA cycle. Acetyl-coenzyme A (acetyl-CoA) is produced by oxidative decarboxylation of pyruvate from glycolysis, beta-oxidation of long-chain fatty acids, or oxidative degradation of some amino acids [37]. Acetyl-CoA is generated within the mitochondria and is the main substrate for the TCA cycle, a series of enzyme-catalyzed reactions that generate energy via oxidative reactions. The NADH and FADH2 generated then enter the ETC and, together with oxygen, generate ATP through redox reactions. The ATP level in the cell is kept constant by two mechanisms: the production of ATP by oxidative phosphorylation and the hydrolysis of ATP [38]. To sustain normal cardiac activity in a healthy human heart, an increase in fatty acid oxidation is accompanied by a decrease in pyruvate oxidation, and thus in glucose oxidation, and vice versa [39].

However, dysfunction of glucose metabolism, whether hypoglycemia or hyperglycemia, can have detrimental effects on cardiomyocytes. Several studies describe the deleterious effects of high glucose and alcohol on cardiomyocytes. In addition, glucose deprivation results in cardiac myocyte apoptosis [40–42]. A study conducted on rat cardiomyocytes demonstrated that high-glucose-induced mitochondrial hyperpolarization increases cell injury [43]. Moreover, in vitro exposure to high glucose resulted in cardiomyocyte apoptosis [44]. Furthermore, several studies have demonstrated that ethanol depresses myocardial contractility in both humans and animals [45,46]. The first enzyme involved in alcohol metabolism is alcohol dehydrogenase (ADH), which catalyzes the conversion of ethanol into acetaldehyde, a reactive and toxic product that contributes to the formation of reactive oxygen species (ROS) and reduces the oxidation process in liver cells. The second enzyme involved is aldehyde dehydrogenase 2 (ALDH2), which converts acetaldehyde into acetate, which is then incorporated into acetyl-CoA and enters the TCA cycle. Indeed, a study conducted on 198 Japanese patients affected by BrS demonstrated that arrhythmic events caused by the consumption of alcohol were associated with increased activity of the alcohol-metabolizing enzyme ADH1B in BrS patients [47].

BrS is a complex disease described as having an oligogenic model of inheritance [10,48], and several possible genetic causes have been proposed and reviewed elsewhere [10,49]. It has also been hypothesized that, when people are overstressed by large meals and alcohol, especially during festivities, mutations in genes encoding SULT1A enzymes may render those enzymes unable to deactivate catecholamines in the intestine, possibly inducing cardiac arrhythmia [50]. Finally, another hypothesis is that glucose metabolism dysfunction can interfere with the homeostasis of ATP and ROS within the cardiomyocytes [51], possibly leading to arrhythmic events via mitochondrial defects and impaired intracellular cation homeostasis.

Adhering to the "Mediterranean diet" may reduce the risk of cardiovascular disease. Unfortunately, this diet is not clearly defined, but it generally consists of eating smaller portions throughout the day, such as five smaller meals instead of three larger ones. The Mediterranean diet may allow for at most one glass of red wine per day, but patients with cardiac arrhythmias are probably best advised to abstain from alcohol completely, especially if even low amounts of alcohol make them feel unwell. In our experience, even one glass of alcohol can make some patients with BrS feel unwell; in these cases, it is best to avoid alcohol completely. The Mediterranean diet includes many plant-based foods and olive oil, and tends to be low in saturated fat, meat, and dairy products. Red meat may be eaten once per week. Fish is generally included at least twice per week, but larger oily fish, such as salmon, which can contain higher amounts of mercury, should be limited to once per week, although oily fish remains beneficial because of its high content of omega-3 fatty acids. Nuts are also a good source of omega fatty acids, and legumes are a good source of protein.

#### **6. Sudden Cardiac Death and QT Prolongation Triggered by Ketogenic Diet**

While carbohydrates provide a readily available fuel for the body, fats and oils (lipids) are considered its primary source of stored energy. Fat enters the body through food and is broken down into triglycerides, and then into fatty acids and glycerol. Mitochondria provide the main source of energy; therefore, dysfunction of the two metabolic pathways of β-oxidation of fatty acids (FAO) and oxidative phosphorylation (OXPHOS) can lead to mitochondrial remodeling and the manifestation of heart failure, arrhythmias, and ventricular hypertrophy [52,53].

Acute arrhythmic events associated with ketogenic and calorie-restricted diets can include QT prolongation leading to sudden cardiac death, and these phenomena have been described in children, adolescents, and adults [54–56]. A ketogenic diet is characterized by very low carbohydrate intake, with 75% of calories derived from fat. This carbohydrate (CHO) restriction helps to reduce blood glucose and insulin [57]. However, it is known that the complications of a ketogenic diet include selenium deficiency, QT prolongation, and SCD [58–60]. A case report described a Torsade de Pointes (TdP) event in a patient with a dual-chamber implantable cardioverter-defibrillator (ICD) affected by LQT2, triggered by an uncommon factor: the ketogenic diet. Specifically, while the patient was following a ketogenic diet, she experienced four episodes of ventricular fibrillation, due to TdP, over the course of only three weeks. Selenium, ketone body, and alcohol levels were all within normal limits. One month later, she stopped the ketogenic diet, and the ICD subsequently recorded no further arrhythmic episodes [17].
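To make the "75% of calories from fat" figure concrete, here is a small illustrative calculation. The 2,000 kcal daily intake is an assumed example value, not a figure from the review; 9 kcal/g for fat is the standard Atwater factor.

```python
def keto_fat_grams(total_kcal: float, fat_fraction: float = 0.75) -> float:
    """Grams of fat supplying `fat_fraction` of total daily calories,
    using the standard Atwater factor of 9 kcal per gram of fat."""
    return total_kcal * fat_fraction / 9.0

# Assumed example: a 2000 kcal/d ketogenic diet with 75% of calories from fat
print(round(keto_fat_grams(2000)))  # 167
```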

The majority of SCD cases in pediatric patients are associated with QT interval prolongation and ketosis. In a report of two deaths in children on a ketogenic diet for seizure control, both patients experienced QT prolongation and suffered from selenium deficiency [58]. Another report described a correlation between QT interval prolongation and ketogenic diets in the absence of electrolyte imbalance in children [60]. Specifically, a direct correlation was observed between QT interval prolongation and β-hydroxybutyrate concentrations, and between QT interval prolongation and systemic acidosis [60]. Interestingly, in a study of 70 children with drug-resistant epilepsy receiving a ketogenic diet for a 12-month period, no deleterious effects on corrected QT interval, QT dispersion, or Tp-e interval were reported [61]. Finally, another case described a five-year-old boy who developed selenium deficiency, acute reversible cardiomyopathy, and ventricular tachycardia with a prolonged QT interval after following a ketogenic diet to treat refractory epilepsy. His clinical status improved and returned to normal after selenium supplementation [62].

In conclusion, a broad association between QT interval prolongation and/or SCD and ketosis or the ketogenic diet has been described, raising the question of whether ketosis may directly affect cardiac repolarization.

#### **7. The Mechanisms behind Ketosis as a Trigger for the Manifestation of QT Prolongation**

The mechanism behind the manifestation of ventricular arrhythmias on a high-fat, low-carbohydrate diet is still unclear. However, it is known that the heart uses fatty acids as its main energy substrate, although it can also use ketone bodies. Ketone bodies are produced by breaking down fatty acids and ketogenic amino acids in a process called ketogenesis, previously reviewed elsewhere [63]. Briefly, ketogenesis involves the

anabolic hormone insulin and the catabolic hormones glucagon, cortisol, catecholamines, and growth hormone, of which insulin and glucagon are considered the most crucial for this pathway [64,65] (Figure 2). Ketone body concentration is lowered by insulin, which promotes glucose uptake and oxidation; the reduction of circulating insulin levels is the principal trigger for accelerating ketogenesis [63]. Insulin acts on adipose tissue, the liver, and the periphery; specifically, low insulin and high glucagon levels in the blood stream trigger the release of free fatty acids (FFAs), increase the uptake of FFAs into the mitochondria, and increase the production of ketones in the liver by activating the acyltransferase system through inhibition of malonyl-CoA synthesis. Indeed, the ketogenesis pathway occurs primarily in the mitochondria of hepatocytes. FFAs are converted into fatty acyl-CoA (acyl-CoA), which enters the hepatic mitochondria through CPT1-mediated transport. Acyl-CoA then undergoes β-oxidation to produce acetyl-CoA, which is only employed to generate ATP if there is enough oxaloacetate. When carbohydrate intake is limited, as in the ketogenic diet, the liver uses the majority of its oxaloacetate to produce glucose through gluconeogenesis and therefore diverts acetyl-CoA to form ketone bodies. The thiolase enzyme (acetyl-coenzyme A acetyltransferase, ACAT) catalyzes the reversible reaction in which two molecules of acetyl-CoA are combined to generate acetoacetyl-CoA. At this point, mitochondrial β-hydroxy-β-methylglutaryl-CoA (*HMG*-CoA) synthase catalyzes a condensation reaction, adding a further acetyl-CoA molecule onto the acetoacetyl-CoA. The enzyme *HMG*-CoA lyase then cleaves the *HMG*-CoA, releasing CoA and forming acetoacetate, the first ketone body.
Within the mitochondrial matrix, acetoacetate can be converted into the two other ketone bodies: acetone, through non-enzymatic decarboxylation, and β-hydroxybutyrate (β-HB), by β-hydroxybutyrate dehydrogenase. The NADH/NAD<sup>+</sup> ratio helps to maintain an equilibrium between acetoacetate and β-HB within the mitochondrial matrix. Both acetoacetate and β-HB are fuel molecules normally found in the heart and renal cortex, whereas acetone cannot be metabolized. Because the liver lacks the enzyme beta-ketoacyl-CoA transferase, it cannot itself utilize ketone bodies [66]; acetoacetate and β-HB therefore reach the extrahepatic tissues, where β-HB is converted into acetoacetate, which is then converted back to acetyl-CoA. Acetyl-CoA goes through the TCA cycle and produces 22 ATP by oxidative phosphorylation. Due to the acidic nature of ketone bodies, their accumulation causes an anion-gap metabolic acidosis. This condition usually results in electrolyte imbalances, especially reductions in K+, Mg2+, and phosphate.

**Figure 2.** Ketogenesis pathway in hepatic mitochondria. Low amounts of insulin and high amounts of glucagon in our blood stream trigger the augmentation of free fatty acids (FFAs), and increase uptake of FFAs into the mitochondria. Acetyl CoA is generated from beta-oxidation of Acyl CoA, which derives from FFAs. The production of ketones (acetoacetate, acetone, β-HB) in the liver is then increased.
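The anion-gap metabolic acidosis mentioned above can be illustrated with the standard serum anion-gap formula, AG = Na+ − (Cl− + HCO3−). The electrolyte values below are hypothetical examples, and the >12 mmol/L cut-off is a commonly used convention rather than a figure from this review.

```python
def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Serum anion gap in mmol/L: Na+ - (Cl- + HCO3-)."""
    return na - (cl + hco3)

# Hypothetical ketoacidosis-like values: Na 140, Cl 100, HCO3 15 mmol/L
gap = anion_gap(140, 100, 15)
print(gap, gap > 12)  # 25 True: elevated gap, as expected when ketone anions accumulate
```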

A metabolomic study in individuals with arrhythmogenic cardiomyopathy (AC) identified β-HB as a possible biomarker, due to its elevated levels in plasma and heart tissue [67]. Specifically, the β-HB produced by cardiomyocytes of AC patients is released into the blood, and its levels are significantly higher compared to controls [67]. It was thereby demonstrated that cardiac ketogenesis occurs in AC, and β-HB may be used as a potential metabolic marker to predict AC.

#### **8. Oxidative Stress**

Oxidative stress is usually defined as a state in which the production of ROS and antioxidant defenses are not balanced [68]. ROS are derivatives of molecular oxygen, such as superoxide (O2−), hydrogen peroxide (H2O2), peroxynitrite (ONOO−), and hydroxyl radicals (OH) [69]. ROS are mainly produced by mitochondria, and their homeostasis is maintained by the enzyme glutathione peroxidase (GSH-Px). In damaged mitochondria, calcium ion overload drives increased ROS concentrations, which leads to excitotoxic damage [70]. Indeed, excessive generation of ROS, impaired calcium homeostasis, and diminished ATP production directly impact mitochondrial function [71].

Repetitive or prolonged oxidative stress can damage proteins and lipids within the cell, and it can result in contractile dysfunction, downregulation of gene expression, and disrupted energy transfer, which may induce cardiomyocyte apoptosis, followed by heart failure [72]. There have been limited studies into the mechanisms linking oxidative stress and arrhythmias, but it has been shown that cardiac conditions with increased arrhythmic risk are associated with an unbalanced production of ROS [73]. ROS, in addition to their role as messengers in cell signal transduction and the cell cycle, regulate both cellular metabolism and ion homeostasis in excitable cells. Elevated levels of ROS within cells can be highly toxic and can lead to arrhythmogenic triggers, such as alterations of ion channels (Na+, Ca2+, and K+), dysfunction of the mitochondria, and gap junction remodeling [74]. Therefore, excessive production of ROS combined with ineffective ROS scavenging can culminate in cell death [52].

It is well known that ketone bodies (KBs) are always present in the blood, and their concentration increases during prolonged exercise and fasting. In vitro studies demonstrated that KBs stimulate insulin release [75,76], cause lipid peroxidation, and generate oxygen radicals [77]. It is also known that KBs are able to reduce oxidative stress through the activation of multiple protective antioxidant pathways. However, we hypothesize that patients affected by inherited channelopathies could have a dysfunction of these pathways and thus be unable to reduce ROS levels. In support of this hypothesis, some studies suggest that exposure to high concentrations of KBs can provoke oxidative stress. A study conducted on calf hepatocytes suggested that both β-HB and acetoacetic acid decrease the activity of the antioxidant enzymes superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GSH-Px), and increase malondialdehyde (MDA) and nitric oxide (NO), which are markers of oxidative stress [78,79]. Another study attributed activation of the mitogen-activated protein kinase (MAPK) pathway, which is known to be activated by oxidative stress, to acetoacetate treatment in rat hepatocytes [80]. Therefore, even though several studies associate KBs with the inhibition of oxidative stress and ROS production, some studies show a correlation between KBs and the induction of oxidative stress.

It is known that nutrient overload promotes the release of free fatty acids and can also damage the mitochondria [81]. Free fatty acids might therefore be related to excess oxidative stress. Indeed, there is considerable evidence that high levels of glucose, lipids, or their combination can interfere with mitochondrial metabolism, modulate mitochondrial ATP synthesis capacity, and increase ROS production [71]. Moreover, in patients with inherited channelopathies, malignant ventricular arrhythmias and SCD occur before overt structural changes of the heart. To prevent progression of the arrhythmogenic substrate, more studies on the electrical instability of cardiomyocytes are needed. At present, many studies focus on the connection between metabolic disease and the manifestation of arrhythmic events, specifically on mitochondrial dysfunction, which can drive arrhythmic events by interfering with the electrical activity of the cardiomyocytes. The role of mitochondrial dysfunction in inherited channelopathies is still unclear and will be the subject of future studies.

#### **9. Vitamin D**

Vitamin D, also known as calciferol and commonly measured as 25-hydroxyvitamin D (25-OHD), is a fat-soluble vitamin that is naturally found in a few foods and is also produced endogenously upon exposure of the skin to ultraviolet rays from sunlight [82]. Vitamin D is responsible for increasing the intestinal absorption of calcium and for maintaining serum calcium and phosphate concentrations [82]. Vitamin D is also involved in the reduction of inflammation, in cell growth, in neuromuscular and immune function, and in glucose metabolism [83–85].

Vitamin D deficiency has been related to different cardiovascular disorders, including SCD [86]. Moreover, a decreased level of 25-OHD has been linked to structural and ionic channel remodeling, which may increase arrhythmic events [87]. Indeed, prolonged QTc is commonly induced by hypocalcemia, which can be caused by vitamin D inadequacy or resistance.

In one case, a patient with severe vitamin D deficiency developed hypocalcemia and a prolonged QTc, resulting in TdP and cardiac arrest. After the administration of vitamin D and calcium supplements, the QTc interval became normal, and the patient did not experience additional arrhythmic events [88]. A hypocalcemic teenage girl affected by hypoparathyroidism experienced a few episodes of syncope during exercise, and the ECG on admission showed a prolonged QTc. After treatment with alfacalcidol, an analog of vitamin D, and calcium supplements, the patient's QTc became normal [89]. Moreover, a 40-year-old woman who followed a vegan diet was affected by hypocalcemia due to severe vitamin D deficiency. The patient manifested palpitations, presyncope, and a long QT (556 ms). She was first treated with calcium gluconate, then with oral vitamin D and calcium supplementation. After treatment, the QTc normalized, and the symptoms disappeared [90].

Vitamin D deficiency can not only induce prolongation of the QTc, but it can also promote inflammatory reactions. It is known that cardiac contraction is affected by an overload of Ca2+ ions in myocardial cells. Studies have hypothesized that a lack of vitamin D could interfere with Ca2+ handling in myocardial cells; specifically, it can induce hypertrophy [91], increase inflammatory cytokines [92], increase fibrosis [93], and impact the production of ROS in the atria [94].

The correlation between atrial fibrillation and vitamin D is poorly understood. Some studies suggest a direct correlation between atrial fibrillation and vitamin D deficiency [95–97]. However, other studies did not find a connection between vitamin D levels and the incidence of atrial fibrillation [98–100]. It would be important to better understand the link between vitamin D and calcium, because calcium overload is known to cause myocyte apoptosis and cardiac failure [101,102]. Oxidative stress is genuinely associated with cardiovascular risk factors, and further studies are therefore needed to investigate this aspect.

In conclusion, vitamin D deficiency appears to play an important role in CVD, and it is important to investigate the possible role of vitamin D deficiency in the development of arrhythmic events.

#### **10. Omega-3 Fatty Acids**

Omega-3 fatty acids, also called ω-3 (n-3) fatty acids, are polyunsaturated fatty acids (PUFAs) and essential nutrients involved in lipid metabolism. It has been demonstrated that n-3 PUFAs, including eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), and α-linolenic acid (ALA), play an important role in the human diet and in cellular physiology, and they may have beneficial effects against CVD and its risk factors, including arrhythmias, probably through the modulation of cardiac ion channels [103]. DHA and EPA are mostly found in seafood, seaweed, and algae, while ALA is found in nuts and seeds. ALA might improve cardiac function by inhibiting apoptosis through anti-inflammatory and anti-oxidative stress effects in diabetic, but not normal, rats [104].

An investigation into whether a diet enriched with fish and PUFAs could be associated with changes in QT duration on a resting ECG showed that long-term fish consumption could favorably influence the QTc by shortening it; fish intake may therefore be considered an antiarrhythmic protection [105]. A retrospective study of men affected by BrS evaluated the correlation between serum levels of EPA and DHA and risk factors for SCD. A multivariate logistic regression analysis showed that low levels of EPA and DHA were linked to the incidence of syncope in patients affected by BrS. The same study suggested that omega-3 PUFA levels may play an important role in preventing ventricular fibrillation in BrS [106]. Furthermore, a study on 123 Langendorff-perfused rabbit hearts, used to mimic LQT2 and LQT3 syndrome, showed that PUFAs were able to prevent TdP by reverting the AP prolongation. This effect was stronger in LQT2 than in LQT3 syndrome, and the antitorsadogenic effect was more pronounced with DHA and EPA than with ALA [107].

A recent review reported that n-3 PUFAs have no significant effect on mortality or cardiovascular health [108]. For example, in a study investigating the correlation between n-3 PUFAs from fish and risks of CVD, including SCD, PUFA levels were inversely related to the QTc and JTc intervals. However, adjusting for QTc and JTc did not attenuate the inverse relationship between n-3 PUFAs and SCD risk, suggesting that this association cannot be explained by the prevention of prolonged ventricular repolarization [109]. Moreover, in a study investigating the association between mercury (Hg), EPA and DHA, and high seafood consumption on one hand, and heart rate variability (HRV) and QT interval duration on the other, the authors found a possible association between specific seafood types and arrhythmia-related measures, such as tuna steak with QTc and anchovies with HRV [110]. Finally, data from four trials suggested that a high dose (4.0 g/d) of n-3 PUFAs could increase the risk of developing atrial fibrillation [111].

In conclusion, data on the effects of omega-3 are still inconsistent, requiring further studies to assess their beneficial effects on preventing arrhythmic events.

#### **11. Arrhythmogenic Ingredients**

Several factors can increase the risk of developing arrhythmias in BrS and LQTS. Although not well studied in these two specific syndromes, one factor that could result in arrhythmias is the consumption of specific foods considered to be arrhythmogenic [112,113], as described in Table 1, which lists both arrhythmogenic and anti-arrhythmogenic dietary factors.

Several case reports have described patients experiencing arrhythmic events and cardiac arrest after the consumption of energy drinks [114–117]. Taurine has been shown to modulate ion channel activity by suppressing the activity of sodium, calcium, and potassium channels [118]. Moreover, it shortens the action potential duration and decelerates the rate of terminal repolarization of the cardiac action potential, inducing atrial and ventricular arrhythmias or cardiac arrest [116]. Atrial and ventricular arrhythmias can also be induced by caffeine, which interferes with calcium homeostasis [115,119] by increasing the intracellular calcium concentration [115]. Indeed, caffeine has been shown to stimulate calcium release from the sarcoplasmic reticulum [120], and calcium balance, particularly in sarcoplasmic reticulum calcium stores, may be altered in BrS [121–123]. A 24-year-old male developed arrhythmias and collapsed after ingesting, for the first time, a small quantity of an energy drink combined with alcohol; specifically, the drink contained 80 mg of caffeine and 1000 mg of taurine, plus vodka. He was subsequently diagnosed with BrS [116]. Another case report showed that energy drinks could have triggered an abnormal QT response in a 13-year-old female affected by LQTS1. Moreover, a 22-year-old female affected by LQTS1 experienced cardiac arrest after the consumption of a large quantity of an energy drink [124].

Grapefruit has been identified as a potentiator of drug-induced TdP and QTc prolongation, as it inhibits drug metabolism and thereby enhances drug exposure [113]. Specifically, naringenin, a flavonoid found in grapefruit, is able to block the hERG channel and induce TdP and/or QTc prolongation [113,125]. hERG channels are known targets of the drug ajmaline, which is used to provoke the diagnostic type-1 BrS ECG pattern [126]. Therefore, other ingredients could be suspected of being arrhythmogenic because they contain flavonoids: citrus fruit, parsley, onions, berries, bananas, red wine, chocolate, grains, nuts, tea, coffee, and various other fruits and vegetables [127–129], including spinach, cauliflower, broccoli, black beans, and chickpeas [130–134]. Moreover, additional ingredients such as lemon, lime, clementine, orange, and bergamot oil could be considered a risk because they contain the same organic compounds as grapefruit: furanocoumarins [129]. However, other studies have suggested that flavonoids could actually be beneficial for cardiovascular health by reducing inflammation. Flavonoids, which are polyphenolic compounds, reduce inflammation and the risk of cardiovascular disease by reducing NF-κB and the downstream transcription factors involved in the inflammatory pathway [128]. Many other natural products whose major compounds are polyphenols have been shown to have anti-inflammatory effects, such as mushrooms, honey, plant extracts, plant juices, plant powders, and essential oils [128,135,136]. Thus, further studies are needed to understand these foods and their resulting effects, in order to better advise patients about dietary supplements or restrictions.


**Table 1.** Arrhythmogenic and anti-arrhythmogenic dietary factors.

§ Honey and mushrooms have been described as both arrhythmogenic and anti-arrhythmogenic, depending on the study.

#### **12. Ingredients That Suppress Cardiac Arrhythmogenesis**

An interesting review showed that 18 active ingredients, such as alkaloids, flavonoids, saponins, quinones, and terpenes, as well as Wenxin-Keli and Shensongyangxin, could have antiarrhythmic effects [143]. In particular, the Chinese herb extract Wenxin-Keli (WK) has been reported as an effective treatment of atrial and ventricular fibrillation through inhibition of the transient outward K<sup>+</sup> current (Ito) [144]. The experiments were conducted in a canine experimental model of BrS, and the authors observed an inhibition of Ito and indirect adrenergic sympathomimetic effects using WK in combination with quinidine [144]. However, the 18 ingredients described in the review, with the exception of omega-3, have been tested in vitro or in animal models as natural drugs or in combination with other antiarrhythmic treatments, not as ingredients of regular meals. Therefore, further studies are needed.

Another ingredient with the potential to prevent arrhythmic events, in moderate doses, is resveratrol, a stilbenoid polyphenol found in grapes. It has been described as a potential inhibitor of intracellular calcium release that is able to mitigate calcium overload in AF and, therefore, to preserve cardiomyocyte contractile function [140]. Finally, based on an interesting review on antioxidant therapies for atrial fibrillation [138], it would be interesting to further study vitamin C, vitamin E, and the carotenoids found in fruits and vegetables, in their role as suppressors of ROS and, therefore, as potential anti-arrhythmic nutrients.

#### **13. Electrolytes**

Food is the main source of electrolytes, minerals that are essential for life. Electrolytes are required for maintaining osmotic pressure in cells and for generating action potentials in nerves and muscles. In particular, sodium, calcium, potassium, and magnesium play an important role in the heart. Indeed, it has been demonstrated that electrolyte imbalance can have detrimental effects on the heart, such as triggering cardiac arrhythmias or cardiac arrest [141], including what have been coined "BrS phenocopies" [145], which may provide important insights into the mechanisms involved in BrS [11]. It is important to better understand the connection between food intake, electrolyte imbalance, and BrS or LQTS.
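The dependence of membrane excitability on electrolyte concentrations can be illustrated with the Nernst equation for an ion's equilibrium potential; this is standard electrophysiology background rather than a result from the cited studies, and the potassium concentrations below are typical textbook values assumed for illustration:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_mv(z: int, conc_out: float, conc_in: float, temp_k: float = 310.0) -> float:
    """Equilibrium (Nernst) potential of an ion in millivolts.

    E = (RT / zF) * ln([out]/[in]); concentrations may be in any
    common unit since only their ratio matters.
    """
    return 1000.0 * (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# Typical K+ concentrations (~4 mM extracellular, ~140 mM intracellular)
# give an equilibrium potential near -95 mV at body temperature.
# Shifting extracellular K+ (e.g. via electrolyte imbalance) moves this
# value, and with it the resting membrane potential of cardiomyocytes.
print(round(nernst_mv(1, 4.0, 140.0)))  # -95
```

Because the potential depends on the logarithm of the concentration ratio, even modest changes in extracellular potassium shift cardiac repolarization noticeably, which is one way dietary electrolyte imbalance can become arrhythmogenic.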

#### **14. Vagal Tone Activity and Arrhythmic Events**

The heart rhythm is regulated by the cardiac parasympathetic (vagal) nerves, the sympathetic nerves, and the pacemaker cells [121]. Autonomic activity can influence ST-segment elevation [146–148]. Indeed, in the late 1990s, it was shown that nocturnal vagal activity may be involved in the cardiac arrhythmic events of BrS [149]. Moreover, high vagal activity could lead to the manifestation of ventricular tachyarrhythmias in patients with BrS or LQTS [150,151].

The relationship between autonomic modulation and arrhythmic events is complex and still unclear. However, based on the studies presented in this review, the association between vagal activity and the consumption of large meals may contribute to the manifestation of arrhythmic events.

#### **15. Conclusions**

We describe herein the effects of dietary factors in patients affected by cardiac arrhythmias, specifically BrS and LQTS. To date, the importance of understanding the relationship between diet and inherited channelopathies has been underrated. It is evident that dietary factors can influence the risk of developing arrhythmic events. Therefore, we recommend eating and drinking small portions throughout the day and trying to limit certain types of ingredients in order to prevent arrhythmic events. Modifying the diet might not be enough to fully prevent arrhythmias, but it can help lower the risk.

**Author Contributions:** S.D. conceived and drafted the paper. M.M.M. revised the paper. M.M.M., E.M. and G.N. provided useful feedback. C.P. secured funding for the project. All authors have read and agreed to the published version of the manuscript.

**Funding:** This study was partially supported by Ricerca Corrente funding from Italian Ministry of Health to IRCCS Policlinico San Donato.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


## *Review* **Relationship between Dietary Patterns with Benign Prostatic Hyperplasia and Erectile Dysfunction: A Collaborative Review**

**Giorgio Ivan Russo 1,\*, Giuseppe Broggi 2, Andrea Cocci 3, Paolo Capogrosso 4, Marco Falcone 5, Ioannis Sokolakis 6, Murat Gül 7, Rosario Caltabiano <sup>2</sup> and Marina Di Mauro 8,†**


**Abstract:** Interest in the role of dietary patterns has been consistently emerging in recent years due to much research that has documented the impact of metabolism on erectile dysfunction (ED) and/or benign prostatic hyperplasia (BPH). We conducted a non-systematic review of English articles published from 1964 to September 2021. The search terms were: ("dietary patterns" OR "diet") AND/OR ("erectile dysfunction") AND/OR ("benign prostatic hyperplasia"). In the present review, we highlight the association, documented in the literature, between dietary patterns and two of the most frequent pathologies in urology, namely erectile dysfunction and benign prostatic hyperplasia. The data suggest that a diet more adherent to the Mediterranean diet, or one that emphasizes vegetables, fruits, nuts, legumes, and fish or other sources of long-chain (n-3) fats, in addition to a reduced content of red meat, may have a beneficial role on erectile function. At the same time, the same beneficial effects can be transferred to BPH as a result of indirect regulatory effects on prostatic growth and smooth muscle tone, thus determining an improvement in symptoms. Certainly, in-depth studies and translational medicine are needed to confirm these encouraging data.

**Keywords:** prostate; diet; metabolism; benign prostatic hyperplasia; erectile dysfunction

#### **1. Introduction**

The interest in the role of dietary patterns has been consistently emerging in recent years due to much research that has documented the impact of metabolism on erectile dysfunction (ED) [1] and benign prostatic hyperplasia (BPH) [2,3].

Although much research has already demonstrated a significant association between dietary patterns and prostate cancer, including the roles of polyphenols, phytoestrogens, and phenolic acids [4–6], the evidence regarding ED and BPH has not been fully investigated.

It is estimated that the worldwide prevalence of erectile dysfunction will likely increase to 322 million men by 2025 [7]. These results also explain the increase in interest over the years in Google Trends searches for many terms, such as prosthetic surgical treatment (+1.7%), prostaglandins (+0.7%), traction (+0.6%), and shock wave therapy (+1.8%) [8].

**Citation:** Russo, G.I.; Broggi, G.; Cocci, A.; Capogrosso, P.; Falcone, M.; Sokolakis, I.; Gül, M.; Caltabiano, R.; Di Mauro, M. Relationship between Dietary Patterns with Benign Prostatic Hyperplasia and Erectile Dysfunction: A Collaborative Review. *Nutrients* **2021**, *13*, 4148. https:// doi.org/10.3390/nu13114148

Academic Editor: Lynnette Ferguson

Received: 7 November 2021 Accepted: 17 November 2021 Published: 19 November 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Although many modifiable risk factors for ED have been investigated, such as cardiovascular disease (CVD), smoking, obesity, sedentary behavior, diabetes, hypertension, hyperlipidemia, and metabolic syndrome [9], it is still unknown whether healthy dietary patterns are associated with a reduced risk (Figure 1).

**Figure 1.** Overview of mechanisms associated with erectile dysfunction.

In addition to these concepts, it is worth noting that men with ED are 1.33–6.24 times more likely to have BPH than men without ED [10].

In fact, data from the literature have shown a close relationship between BPH and ED, suggesting that pathophysiological mechanisms involved in the metabolic syndrome are key factors in both disorders [11].

In particular, several possible pathophysiological mechanisms have been discussed, including the NOS/NO (the nitric oxide synthase) and the Rho-kinase activation pathways, autonomic hyperactivity, pelvic ischemia and microvascular dysfunction, inflammatory pathways, sex hormones, and psychological factors [11].

All these premises lay the groundwork for the hypothesis that dietary patterns may have a role in exacerbating or even preventing ED and BPH [12].

For these reasons, in this non-systematic review, we aimed to address current evidence on the role of dietary patterns on ED and BPH. We also addressed the impact of diabetes, atherosclerosis, and metabolic syndrome on ED and BPH.

#### **2. Materials and Methods**

We conducted a non-systematic review of English articles published from 1964 to September 2021. The search terms were: ("dietary patterns" OR "diet") AND/OR ("erectile dysfunction") AND/OR ("benign prostatic hyperplasia").

#### **3. Erectile Dysfunction**

In recent years, interest in the quality of food has grown, paying attention not only to the individual food, but also to the micronutrients contained in the food [13,14].

#### *3.1. Studies in Animal Models*

In recent years, there have been important advances in the study of the molecular pathways that regulate the association between diet, exercise, and endothelial function in the penis. It has been demonstrated that sedentary pigs fed a high-fat diet had significantly reduced penile cGMP levels, an increase in eNOS uncoupling, and increased eNOS binding to caveolin-1 (indicating reduced NO availability), and that exercise counteracted these abnormalities [15].

These data demonstrate that unhealthy diets exert a detrimental impact on endothelial cells by increasing oxidative stress and significantly reducing nitric oxide, an essential molecule for maintaining erectile function [16].

Akomolafe et al. evaluated the effect of a diet supplemented with raw and roasted pumpkin seeds on key biochemical parameters relevant to erectile function in the cavernous tissues of male rats. Animals were divided into six groups (10 animals per group) for the evaluation of adenosine deaminase (ADA), phosphodiesterase-5 (PDE-5), arginase, and acetylcholinesterase (AChE) activity, as well as nitric oxide (NO) and malondialdehyde (MDA) levels. Group I: normal control rats fed a basal diet; Groups II and III: rats fed a diet supplemented with 5% and 10% raw pumpkin seeds, respectively; Groups IV and V: rats fed a diet supplemented with 5% and 10% roasted pumpkin seeds, respectively; Group VI: rats treated with sildenafil citrate (5 mg/kg). The diet supplemented with roasted pumpkin seeds yielded more favorable PDE-5, ADA, and arginase activities, as well as NO and MDA levels. No significant differences were observed in the AChE activities of rats treated with raw versus roasted pumpkin seeds. The authors concluded that the modulatory effects of raw and roasted pumpkin seeds on enzymes associated with erectile dysfunction suggest a biochemical rationale for their therapeutic role in improving erectile function. However, it appears that roasted pumpkin seeds (10%, *w*/*w* of the diet) have more beneficial effects than raw seeds [17]. In addition, a 2016 study by Akomolafe revealed that pumpkin seeds contain phenolic acids and flavonoids and prevent oxidative damage to testicular tissues [18].

Recent studies have suggested that the treatment of ED may benefit from the modulation of other enzymes, such as ectonucleotidases (E-NTPDase) and ADA [19–21], which regulate biomolecules such as cGMP (cyclic guanosine monophosphate), NO (nitric oxide), ATP, and adenosine that are involved in penile erection [22,23].

In a study conducted in rats with ED, the authors investigated the effects of quercetin as a promising source of dietary phytochemicals for ED management.

The authors divided the rats into different groups: rats administered normal saline, rats with cyclosporine-induced hypertension, rats administered sildenafil (5 mg kg<sup>−1</sup> day<sup>−1</sup>), and rats administered quercetin at 25 mg kg<sup>−1</sup> day<sup>−1</sup> or 50 mg kg<sup>−1</sup> day<sup>−1</sup>. They demonstrated that quercetin improved the activities of enzymes associated with better ATP bioavailability (E-NTPDase and 5′-nucleotidase). Its effects decreased ADA activity and increased NO levels [24].

These findings highlight the concept that polyphenols are phytochemicals that can promote good health and improve erectile function [25].

#### *3.2. Studies in Humans*

The importance of diet on ED arises from the assumption that this pathology is often the first symptom of coronary heart disease (CHD) [9,26,27]. In fact, the pathophysiology of ED is similar to that of atherosclerosis [28,29]. The correlation between diet and diseases of the cardiovascular system has been known for years [1]. Therefore, a possible correlation between erectile dysfunction and dietary factors is conceivable.

In the study by Nicolosi et al., 31.8% of men with a below-average level of physical activity had ED, compared to 17.5% of men with an average activity level and only 13.9% of men with an above-average level of physical activity, demonstrating an inverse association between the level of physical activity and ED [30].

Lu et al. recently studied the relationship between a plant-based diet and erectile dysfunction in 184 Chinese patients. The ED group (92 subjects) and the ED-free group (92 subjects) were similar in terms of baseline characteristics (*p* > 0.05), with the exception of lifestyle (*p* < 0.05). The plant-based diet index (PDI) and the healthful plant-based diet index (hPDI) in the ED group were significantly lower than those of the control group (*p* < 0.001). Adjusted multivariate analysis indicated that the presence of ED was negatively associated with nitric oxide, PDI, and hPDI levels (all *p* < 0.05) and was positively related to body mass index, metabolic syndrome, and E-selectin levels. Additionally, both PDI and hPDI significantly increased with increasing International Index of Erectile Function (IIEF-5) scores within the ED group (*p* < 0.05). Finally, a multimodal multivariate analysis was performed, which indicated the robustness of the results [31].

From previous studies, we know that ED is less frequent in patients who adhere to the Mediterranean diet model, characterized by the prevalent presence of fish, vegetables, fruit, whole grains, and nuts, compared to those who mainly consumed a diet containing red and processed meats and refined cereals [16,32].

Giugliano et al. assessed the relationship between adherence to the Mediterranean diet and sexual function among 555 men with type 2 diabetes, showing that men with the highest adherence scores had the lowest overall prevalence of erectile dysfunction. Additionally, men in the middle and highest tertiles had a lower prevalence of severe ED compared to men in the lowest adherence tertile [33].

Sticking to an unhealthy diet pattern can cause early endothelial damage through oxidative stress, which results in a reduction in the availability of nitric oxide, indispensable in the physiology of erection [34].

Food antioxidants have been shown to improve erectile dysfunction in men. A rich source of polyphenols is the standardized French maritime pine bark (Pinus pinaster) extract Pycnogenol (PYC). The main constituents of the polyphenols in PYC are procyanidins, which make up 70% of the extract. PYC has significant antioxidant activity and multiple biomodulatory effects, such as inhibition of the transcription factor NF-κB and of cyclooxygenase, stimulation of NO production, an antihypertensive effect through inhibition of the angiotensin-converting enzyme, antimutagenic effects, and alleviation of allergic asthma symptoms and of hyperglycemia. Trebaticky enrolled 53 patients with ED who were divided into two groups (32 with diabetes mellitus (DM), 21 without) in a randomized, blinded, placebo-controlled study. During the 3-month intervention with Pycnogenol or placebo, and one month after the end of the intervention, the patients were assessed for ED with the validated International Index of Erectile Function-5 (IIEF-5) questionnaire. Lipid profile and blood glucose were analyzed in each group. The results showed that Pycnogenol improved erectile function in the DM group by 45%, compared to the non-DM group, where the improvement was also significant, but only 22%. The levels of total cholesterol, LDL cholesterol, and glucose were lowered by Pycnogenol in patients with DM. Glucose level was not affected by Pycnogenol in non-DM patients. The placebo showed no effect on the monitored parameters in either group [35].

In a recent study, Salas-Huetos et al. (2019) reported that the consumption of nuts improved sexual function. Indeed, a secondary outcome analysis of the FERTINUTS study, a 14-week randomized controlled trial with 83 subjects consuming a Western-style diet, reported that consuming 60 g/day of nuts was associated with increased orgasmic function and sexual desire compared with the control group (avoiding nuts), although no significant differences in erectile function were observed [36].

Previously, Mykoniatis et al. enrolled 350 adult men who were asked to complete an anonymous web-based questionnaire. Erectile dysfunction was diagnosed with the International Index of Erectile Function (IIEF), and flavonoid intake was recorded using food-frequency questionnaires, with a focus on flavonoid-rich foods such as coffee and fruit. Participants were divided into two groups based on the IIEF scores: a control group without ED (IIEF score ≥ 26; n = 264) and a case group with ED (IIEF score < 26; n = 86). Men with erectile dysfunction reported a lower median monthly intake of total flavonoids (*p* < 0.001) and of all flavonoid subclasses (*p* < 0.001) than the controls. After adjusting intake for age and body mass index, it was found that the consumption of 50 mg/day of flavonoids reduced the risk of erectile dysfunction by 32% (odds ratio = 0.68, *p* < 0.001). Of all the flavonoids recorded, flavones appeared to contribute the most to healthy erectile function. Controls reported higher consumption of fruit and vegetables, lower consumption of dairy products and alcoholic beverages, and less intense smoking than the cases (*p* < 0.001) [37].
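As a quick arithmetic note on the figures quoted above (the conversion is ours, not the study's): an odds ratio below 1 maps to a percent reduction in odds of (1 − OR) × 100, so the reported OR of 0.68 corresponds to the quoted 32% reduction:

```python
def odds_reduction_pct(odds_ratio: float) -> float:
    """Percent reduction in odds implied by an odds ratio < 1."""
    return (1.0 - odds_ratio) * 100.0

# OR = 0.68 corresponds to a 32% reduction in the odds of ED,
# matching the figure quoted from the questionnaire study.
print(round(odds_reduction_pct(0.68)))  # 32
```

Note that this is a reduction in *odds*, which approximates a reduction in *risk* only when the outcome is relatively uncommon in the study population.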

Cassidy et al. were among the first authors to evaluate flavonoid intake in erectile dysfunction. The results of the Health Professionals Follow-up Study showed that a reduced incidence of ED was associated with an increased habitual intake of specific flavonoid-rich foods. The greatest benefits from increased consumption of flavanones, flavones, and anthocyanins were observed in overweight or obese young men. Of all the flavonoids recorded, flavones appeared to contribute the most to healthy erectile function [38], in agreement with Mykoniatis's study [37].

Bauer et al. also conducted a population-based prospective cohort study that included men from the Health Professionals Follow-up Study, with follow-up from January 1998 through January 2014, demonstrating that a higher-quality diet, based on adherence to a Mediterranean diet or to the Alternative Healthy Eating Index, both of which emphasize eating vegetables, fruits, nuts, legumes, and fish or other sources of long-chain (n-3) fats, as well as avoiding red and processed meats, was associated with a lower risk of developing erectile dysfunction [39].

Taking all these considerations together, we postulate that dietary patterns have a significant role in ED severity (Figure 2) and should be investigated further in future studies.

**Figure 2.** Relationship between dietary patterns and erectile dysfunction. PDE5 = phosphodiesterase-5; ADA = adenosine deaminase; NO = nitric oxide; MDA = malondialdehyde; ED = erectile dysfunction.

#### **4. Benign Prostatic Hyperplasia**

The histopathology of BPH characteristically consists of a dual hyperplasia of the epithelial and stromal compartment of the transitional zone of the prostate. Epithelial hyperplastic features include nodules composed of variably sized and sometimes cystically dilated prostatic glands with a retained basal cell layer, often exhibiting corpora amylacea and/or calcifications; stromal hyperplasia consists of nodular proliferation of bland-looking spindle cells with rounded to ovoid nuclei, frequently resembling smooth muscle cells [40].

Several biological factors, including oxidative stress, inflammation, androgens, and enhanced expression of multiple growth factors, have been associated with benign and malignant prostatic disorders [41–45] (Figure 3).

**Figure 3.** Overview of mechanisms associated with benign prostatic hyperplasia.

In this regard, as some evidence has suggested that a high-fat diet is intrinsically correlated with BPH by stimulating inflammation and oxidative stress [46], in recent decades the potential association between different dietary patterns, including both macro- and micronutrients, and the incidence of BPH has become one of the most debated topics in the scientific literature [47–49].

#### *4.1. Studies in Animal Models*

Numerous advances have been made in the comprehension of the molecular basis of the relationship between diet and BPH in animals.

Zhang et al. found that vitamin D (VD) deficiency in early life promoted BPH in middle-aged mice [50]: male pups whose dams were fed VD-deficient diets during pregnancy and lactation, and that continued to receive VD-deficient diets after weaning, showed higher incidences of BPH compared to a control group of dams and male pups that received a standard diet. In addition, the authors [50] found that VD-deficient diets induced prostatic inflammation and fibrosis through the activation of the NF-κB-mediated pathway and the production of IL-6, as well as through upregulation of the STAT3-mediated pathway, which stimulates cell proliferation and growth. Interestingly, these prostatic effects were partially reversible if the standard diet was restored.

Li et al. showed that the combination of androgens and high-fat diet-induced hyperinsulinemia promoted BPH in rats, and that activation of p-ERK1/2 could be implicated in this process [51]. In particular, higher immunohistochemical and Western blot levels of p-ERK1/2 were observed both in rats with BPH plus a high-fat diet and in rats with BPH, compared to those found in rats fed a high-fat diet alone and in a control group. However, it has also been reported that the administration of flaxseed reduced epithelial cell proliferative activity in rats with BPH [52,53]; this finding led to the supposition that different fat types and contents are involved in BPH onset and maintenance. As further evidence for this hypothesis, Kayode et al. showed that a ketogenic diet, consisting of high fat, moderate protein, and low carbohydrate consumption, ameliorated testosterone propionate-induced BPH in male Wistar rats [54]. Similarly, epigallocatechin-3-gallate (EGCG), a green tea component, has been found to play an antioxidative and anti-BPH role in a metabolic syndrome rat model [55].

Recently, Aljehani et al. investigated the role of icariin (ICA), a flavonol glycoside with marked phytoestrogenic activity, in rats with metabolic syndrome (MS)-induced BPH [56]. Animals were divided into five groups: two were fed a standard diet, and MS was induced in the remaining three. The MS rat groups were given vehicle, 25 mg/kg of ICA or 50 mg/kg of ICA, respectively. The authors found that the administration of both ICA doses had positive effects on prostate weight, prostate index and the histopathologic features of BPH. Furthermore, ICA seemed to exert antiproliferative, proapoptotic, antioxidant and anti-inflammatory functions by regulating cyclin D1, Bax, Bcl2 and tumor necrosis factor-α expression.

Mangosteen pericarp powder (MPP) is derived from mangosteen, a tropical fruit from the Malay islands and the Indonesian Moluccas that has traditionally been used to treat wounds and cutaneous infections. Recently, the consumption of MPP, whose main polyphenol compounds are xanthones, has been found to decrease prostate weight and serum testosterone and to attenuate BPH in F344 male rats [57].

#### *4.2. Studies in Humans*

BPH is a very frequent and age-related disease as it is estimated that about 50% of men over the age of 50, and 80% of those older than 70 suffer from it [58,59]. Patients with BPH often exhibit acute urinary symptoms deriving from urethra compression and/or lower urinary tract symptoms (LUTSs) [59]. Due to the huge impact of BPH on populations, multiple studies have investigated its relationship with environmental factors, including the effects of several macro- and micronutrients [60,61].

A statistically significant correlation between a higher risk of BPH and high consumption of fats and red meat, or low consumption of protein and vegetables, was found in the Prostate Cancer Prevention Trial (PCPT) within a cohort of 18,800 patients aged over 50 years. A slight relationship between a lower risk of BPH and several nutrients, such as lycopene, VD and zinc, was also established for this group; conversely, no association between the disease and antioxidant consumption was identified. Total, but not dietary, vitamin D was associated with reduced risk: compared with men in the lowest quintile of total vitamin D intake, those in the highest quintile had an 18% reduced BPH risk (*p*-trend = 0.032). Compared with men eating red meat less than once per week, men eating red meat at least daily had a 38% increased BPH risk (*p* = 0.044), and, compared with men eating fewer than one serving of vegetables per day, men eating four or more servings had a 32% decreased BPH risk (*p* = 0.011) [62]. The protective role of lycopene supplementation in BPH has also been shown by Schwarz et al., who enrolled 40 patients with histologically proven BPH. The authors divided their patients into two groups, (i) lycopene at a dose of 15 mg/day for 6 months and (ii) placebo for 6 months, and found that patients who received lycopene had decreased PSA levels, whereas neither further prostate enlargement nor greater amelioration of disease symptoms, as assessed by the International Prostate Symptom Score (IPSS) questionnaire, was found compared to those given the placebo [63]. Further evidence of the anti-BPH effect of lycopene had previously been provided by Kim et al. in a clinical study of prostate cancer patients, in which lycopene was found capable of inducing apoptosis in tumor-free prostatic tissue exhibiting the histologic features of BPH [64].

Rohrmann et al. prospectively investigated the effects of fruits, vegetables and micronutrients on BPH and found an inverse correlation between BPH and the intake of vegetables, especially those rich in beta-carotene, lutein and vitamin C (VC). Interestingly, fruit intake appeared to be unrelated to the onset of the disease [65]. Conversely, the study by Lagiou et al., using a food frequency questionnaire in a cohort of 420 patients, all permanent residents of the Athens area, reported that the consumption of fruits rich in beta-carotene, lutein and VC was inversely correlated with BPH risk, while a high-fat diet, especially with increased intake of butter and margarine, was positively correlated with the disease [66]. Based on these findings, the exact effects of a fruit-rich diet on BPH are still to be elucidated. It may be hypothesized that fruit consumption, depending on the type and quantity, influences BPH onset and progression in a diversified manner. Similarly, little is known about the usefulness of the polyphenols contained in green tea; in this regard, it has been suggested that they could be used for the treatment of BPH-related symptoms, due to their positive effects on LUTSs [67].

Even with regard to the role of a high-fat diet, the evidence from the literature is not unequivocal; although some of the above-mentioned studies reported an increased risk of BPH in patients with high fat consumption [62,66], Suzuki et al. found that BPH risk was modestly associated with the intake of eicosapentaenoic, docosahexaenoic and arachidonic acids, but not with energy-adjusted total fat intake [68].

Similar to what has been reported in animal models [52,53], a strong utility of dietary flaxseed lignan extract in improving BPH-related LUTSs, comparable to that of alpha1A-adrenoceptor blockers and 5alpha-reductase inhibitors, has been reported by Zhang et al. [69]. These authors conducted a randomized clinical trial in which a placebo, 300 or 600 mg/day of secoisolariciresinol diglucoside, a flaxseed extract, was administered to 87 patients affected by BPH, and found a decrease in IPSS and improvements in quality-of-life score and LUTSs [69]. Conversely, it has been shown that pumpkin seed extract did not have any benefit on BPH compared to placebo over a 1-year period [70].

The above-mentioned findings lead us to emphasize that the exact relationship between different dietary patterns and BPH has not yet been fully elucidated; this is probably due to the fact that BPH is a multifactorial disease, whose pathogenesis appears to be correlated with different biological factors, including oxidative stress, androgenic stimulation, inflammation and growth factors. Finally, recent advances have also highlighted the potential role of statins in reverting BPH symptoms by improving hypercholesterolemia and metabolic syndrome [71]. As most macro- and micronutrients that have been associated with BPH risk also influence steroid concentrations, oxidative stress and inflammation, it is reasonable to suppose that they also have positive effects on BPH by regulating prostatic growth and smooth muscle tone (Figure 4).

**Figure 4.** Relationship between dietary patterns and BPH. STAT3 = Signal Transducer and Activator of Transcription 3; BPH = benign prostatic hyperplasia.

#### **5. Conclusions**

In the present review, we have highlighted the association reported in the literature between dietary patterns and two of the most frequent pathologies in urology, namely erectile dysfunction and benign prostatic hyperplasia. Evidence comes from animal studies and, in part, from human studies. The data suggest that a diet more adherent to the Mediterranean diet, or one that emphasizes vegetables, fruits, nuts, legumes, and fish or other sources of long-chain (n-3) fats, together with a reduced content of red meat, may have a beneficial role in erectile function. At the same time, the same beneficial effects may extend to BPH through indirect regulatory effects on prostatic growth and smooth muscle tone, thus determining an improvement in symptoms. Certainly, in-depth studies and translational medicine are needed to confirm these encouraging data. Future studies could address the relationship between dietary patterns and the tissue expression of markers of disease severity, such as NO and cAMP for ED and markers of inflammation for BPH.

Finally, clinical studies investigating the role of specific drugs for metabolic syndromes, such as statins or hypoglycemic drugs, together with investigations of dietary patterns could be beneficial in better understanding how to counteract ED and BPH.

**Author Contributions:** Conceptualization, G.I.R.; methodology, G.I.R.; writing—original draft preparation, G.I.R., G.B., M.D.M.; writing—review and editing, A.C., M.F., P.C., I.S., M.G., R.C., EAU-YAU Sexual, Reproductive Health Group. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** We would like to thank Giorgio Ivan Russo, Paolo Capogrosso, Marco Falcone, Ioannis Sokolakis and Murat Gül as members of the EAU-YAU Sexual and Reproductive Health Group.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Nutrition and Kidney Stone Disease**

**Roswitha Siener**

University Stone Center, Department of Urology, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany; Roswitha.Siener@ukbonn.de; Tel.: +49-228-287-19034

**Abstract:** The prevalence of kidney stone disease is increasing worldwide. The recurrence rate of urinary stones is estimated to be up to 50%. Nephrolithiasis is associated with an increased risk of chronic and end-stage kidney disease. Diet composition is considered to play a crucial role in urinary stone formation. There is strong evidence that an inadequate fluid intake is the major dietary risk factor for urolithiasis. While the benefit of high fluid intake has been confirmed, the effect of different beverages, such as tap water, mineral water, fruit juices, soft drinks, tea and coffee, is debated. Other nutritional factors, including dietary protein, carbohydrates, oxalate, calcium and sodium chloride, can also modulate the urinary risk profile and contribute to the risk of kidney stone formation. The assessment of nutritional risk factors is an essential component of the specific dietary therapy of kidney stone patients. An appropriate dietary intervention can contribute to the effective prevention of recurrent stones and reduce the burden of invasive surgical procedures for the treatment of urinary stone disease. This narrative review is intended to provide a comprehensive and updated overview of the role of nutrition and diet in kidney stone disease.

**Keywords:** calcium oxalate stone formation; diet; dietary assessment; fatty acids; fluid; oxalate; protein; sodium; uric acid; water

#### **1. Introduction**

The prevalence of urolithiasis in the general population has increased worldwide over the past decades and was reported to be 4.7% in Germany and up to 10.1% in the United States [1–3]. The recurrence rate of urinary stones is high and is estimated to be approximately 50% at 10 years [4]. Nephrolithiasis is associated with an increased risk of chronic and end-stage kidney disease, probably due to kidney injury from obstructive nephropathy [5,6]. The most common stone type is calcium oxalate (67%), followed by calcium phosphate (17%), uric acid (8%), struvite (3%) and cystine (0.4%) [7]. Urinary stone formation is a multifactorial process to which metabolic derangements, genetic factors, and anatomical and functional abnormalities may contribute, and in which nutrition plays a crucial role. Diet composition can affect the urinary risk profile and the supersaturation of stone-forming salts, thereby modifying the risk of urinary stone formation [8,9].

A reliable stone analysis is an essential precondition for specific treatment regimens, because urinary risk factors for stone formation vary with the type of stone [10,11]. Moreover, a thorough metabolic evaluation of the stone patient is required, including detailed medical history, dietary assessment as well as blood and urine analysis [11–17]. To reduce the risk of recurrent stone formation, targeted dietary treatment should be individually adjusted to the metabolic risk profile of each patient. The collection of two consecutive 24 h urines is recommended to detect frequent metabolic disturbances such as hypercalciuria, hypocitraturia, hyperoxaluria, and hyperuricosuria and to identify dietary risk factors for kidney stone formation. Specific nutritional therapy, based on dietary assessment and metabolic evaluation, has been demonstrated to be more effective than general dietary measures in preventing recurrent stone formation [18]. The aim of this narrative review is to summarize the current knowledge of the role of nutrition in kidney stone formation.

**Citation:** Siener, R. Nutrition and Kidney Stone Disease. *Nutrients* **2021**, *13*, 1917. https://doi.org/10.3390/ nu13061917

Academic Editor: Ina Bergheim

Received: 2 May 2021 Accepted: 31 May 2021 Published: 3 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **2. Fluid Intake**

A low urine volume caused by insufficient fluid consumption or excessive fluid loss is one of the most crucial risk factors for kidney stone formation [19]. Numerous physiological and pathological conditions that modify water needs may lead to dehydration. These factors include excessive sweating due to heat exposure, mental stress, a high physical activity level and occupation, as well as chronic diarrhea in the setting of fat malabsorption due to various gastrointestinal disorders [19–21]. A study of 100 steel plant workers reported a history of stone disease in 16%, with a urine osmolality of more than 700 mOsm, defined as dehydration, in more than 50% of workers [22]. Occupations at elevated risk for stone formation also include health care professionals with limited access to water. A survey of employees found that physicians working in the operating room had the highest prevalence of nephrolithiasis (17.4% vs. 9.7%) and reported higher stress levels and lower fluid intake compared to those working at other locations [23]. Poor access to fluids or to bathroom facilities is another circumstance in which occupation can affect urolithiasis, for example in professional drivers, airplane pilots or schoolteachers [24].

A randomized controlled study of 199 first-time idiopathic calcium oxalate stone patients evaluated the impact of a high fluid intake in preventing stone recurrence [25]. Patients were randomly assigned to a high water intake to achieve a urine volume of at least 2 L/day or to a control group without specific instructions. After the five-year follow-up period, the intervention group was found to have a 2.5-fold higher urine volume, a lower recurrence rate (12% vs. 27%; *p* = 0.008) and a longer time to recurrence (38.7 months vs. 25.1 months; *p* = 0.016) than the control group. Several systematic reviews and meta-analyses on the role of fluid intake in the secondary prevention of urolithiasis concluded that high total fluid intake to achieve a urine volume of greater than 2.0 to 2.5 L/day decreases the risk of stone recurrence [26–31]. While fluid intake was reported to be a protective factor in the primary prevention of kidney stone formation in cohort studies [30], no evidence was found from randomized controlled trials [31].

An adequate fluid intake is the most important nutritional measure to prevent kidney stone recurrence, regardless of urine stone composition and individual risk factors for stone formation [19]. The intake of an adequate amount of fluid increases urine dilution, thereby reducing the concentration of lithogenic constituents and encouraging the expulsion of crystals by decreasing the renal intratubular transit time [32]. According to the guidelines on urolithiasis, a copious fluid intake to maintain a urine volume of at least 2.0 to 2.5 L/24 h is recommended for most types of stones [15,16]. For the recurrence prevention of patients with cystinuria, excessive urine dilution is of the utmost importance. In adult cystine stone patients, urine volume of at least 3.0 L/24 h should be achieved to decrease urinary cystine concentration to below the solubility limit of 1.3 mmol/L at pH 6.0 [14,15,33]. A study of 27 adult patients with cystinuria demonstrated that maintaining a urine volume of more than 3.0 L/day significantly reduced recurrent stone formation [34]. The specific recommendations regarding fluid intake are summarized in Table 1.
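The dilution target for cystinuria follows directly from the solubility limit quoted above. As a minimal illustrative sketch (not part of the cited studies), the minimum 24 h urine volume can be derived from a patient's daily cystine excretion; the 4 mmol/day excretion used below is a hypothetical example value:

```python
# Solubility limit of cystine at pH 6.0, as stated in the review [14,15,33].
CYSTINE_SOLUBILITY_MMOL_L = 1.3

def min_urine_volume_l(cystine_excretion_mmol_per_day: float) -> float:
    """Minimum 24 h urine volume (L) that keeps the urinary cystine
    concentration below the solubility limit."""
    return cystine_excretion_mmol_per_day / CYSTINE_SOLUBILITY_MMOL_L

# Hypothetical patient excreting 4 mmol cystine per day:
volume = min_urine_volume_l(4.0)
print(f"{volume:.1f} L/day")  # ~3.1 L/day, consistent with the >= 3.0 L/24 h target
```

This simple ratio illustrates why the guideline target for cystine stone patients (at least 3.0 L/24 h) is higher than the 2.0 to 2.5 L/24 h recommended for most other stone types.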

**Table 1.** Recommendations for fluid intake—adapted from [14–16].




While the benefit of high fluid intake has been confirmed, the effect of different beverages is still debated. The constituents of different beverages can affect urine composition and consequently the risk of stone formation.

#### *2.1. Tap Water and Mineral Water*

The impact of the composition of tap water and mineral water on kidney stone formation is still a matter of debate. The composition of drinking water, predominantly the content of the divalent cations calcium and magnesium, varies widely between geographic regions within the same country [35]. Hard tap water, defined by a calcium carbonate concentration above 120 mg/L [36], can contribute to the daily dietary calcium intake. A study of 2295 patients from two regions in the United States found an increased risk of urinary stone formation for individuals consuming tap water from a private well [37]. However, the cause of the elevated risk among private well users is unknown. Other cohort studies have not identified an association between the hardness of water from public water suppliers and urinary stone disease [37–39].

In contrast to tap water, bicarbonate is a natural constituent of mineral water, besides calcium, magnesium and other ions. The intake of bicarbonate increases the buffering capacity of the body and has a strong alkalizing effect. Mineral water rich in bicarbonate can support alkalinization therapy and contribute to the inhibitory potential of urine by increasing urine pH and citrate excretion [40]. A randomized crossover trial in healthy subjects compared the effect of an equimolar alkali load, in the form of bicarbonate-rich mineral water or potassium citrate, on the urinary risk profile for calcium oxalate and uric acid stone formation [41]. The intake of 2 L/day of mineral water containing 1715 mg/L bicarbonate or of 2.55 g/day potassium citrate significantly increased urine pH and citrate excretion and decreased oxalate excretion. The relative supersaturation of calcium oxalate and uric acid declined significantly in both groups. A study of healthy individuals conducted under controlled dietary conditions demonstrated that a mineral water rich in bicarbonate, calcium and magnesium increased urine pH as well as the excretion of citrate and magnesium, urinary inhibitors of calcium oxalate stone formation [42]. Despite the significant increase in calcium excretion, no change in the relative supersaturation of calcium oxalate occurred.

A study of 22 idiopathic calcium oxalate stone patients evaluated the impact of three mineral waters with different contents of bicarbonate and calcium. The intake of the mineral water rich in bicarbonate and calcium resulted in a significant increase in urinary citrate and a decrease in urinary oxalate excretion, while urinary calcium excretion and the supersaturation of calcium oxalate did not change [43]. A study by Rodgers [44] found that the intake of 1.5 L/day of a mineral water with a high calcium content (202 mg/L), compared to 1.5 L/day of a tap water with a low calcium content (13 mg/L), significantly increased calcium excretion in 20 healthy individuals of each sex as well as in 20 female calcium oxalate stone formers, but not in male patients. Urinary oxalate excretion and the relative supersaturation of calcium oxalate did not change in any of the four groups. A double-blind crossover study of 34 recurrent calcium oxalate stone patients on a regular diet examined the effect of 1.5 L/day of a bicarbonate-rich mineral water (2673 mg/L) vs. 1.5 L/day of a mineral water with a low bicarbonate content (98 mg/L) on the risk of urinary stone formation [45]. The intake of the bicarbonate-rich mineral water resulted in a significant rise in urine pH, citrate and magnesium excretion compared to the control. The relative supersaturation of calcium oxalate decreased to a similar extent in both groups.

The impact of mineral water on the risk of calcium oxalate and uric acid stone formation is mainly determined by the presence of bicarbonate [40,41,45]. The effect of bicarbonate-rich mineral water corresponds to that of sodium bicarbonate in galenic form [46]. If calcium is present in the water in high concentration, the positive effect of increased urine pH and citrate excretion is neutralized by an increased urinary calcium excretion [42,44]. A randomized trial of 129 healthy women and men found that the consumption of at least 1.5 to 2.0 L/day of mineral water rich in bicarbonate (>1800 mg/L) can decrease net acid excretion by reducing the dietary acid load [47]. Urine alkalinization is an important nutritional measure in the treatment of patients with calcium oxalate, uric acid and cystine stones, but it is not indicated in calcium phosphate and struvite stone disease [14]. Comparisons of the composition of commercial bottled 'still', 'carbonated' and 'sparkling' waters from 10 European countries found a wide variation in calcium content, with concentrations reaching up to 581.6 mg/L [48,49]. As the dietary reference intake of calcium, a total of 1000 to 1200 mg per day, can already be achieved by consuming 2 L of calcium-rich water, patients should be aware of the calcium content of the water.
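The arithmetic behind this caution can be made explicit. A small illustrative calculation (the function name is ours; the 581.6 mg/L value is the maximum calcium concentration reported above for bottled waters):

```python
def calcium_from_water_mg(calcium_mg_per_l: float, volume_l: float) -> float:
    """Daily calcium contribution (mg) from drinking a given volume of water."""
    return calcium_mg_per_l * volume_l

# 2 L/day of the most calcium-rich bottled water reported in [48,49]:
intake = calcium_from_water_mg(581.6, 2.0)
print(f"{intake:.0f} mg")  # 1163 mg -> already within the 1000-1200 mg/day reference range
```

With such a water, the fluid recommendation alone can cover, or slightly exceed, the entire dietary reference intake of calcium before any food sources are counted.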

#### *2.2. Fruit Juices and Fruit Juice Beverages*

The metabolic effect of fruit juices is primarily determined by their alkali citrate content. Dietary citrate is absorbed in the gastrointestinal tract and metabolized to bicarbonate, which may then increase urine pH and citrate excretion [50]. Citrus juices, such as lemon, orange and grapefruit juice, supply large quantities of citric acid and could be a dietary alternative to the pharmacotherapy with alkalizing agents. Orange juice is among the most popular fruit juices consumed worldwide. Findings from studies on the impact of orange juice intake on urinary risk factors for stone formation are mixed.

Three cohort studies reported that the intake of orange juice is associated with a reduced risk of kidney stone formation [51]. Interventional studies under a controlled dietary regimen showed that orange juice provided an alkali load that increased urine pH and citrate excretion [52–54]. Despite the alkalizing effect, orange juice did not change the calculated risk of calcium oxalate stone formation in the majority of studies. Although the oxalate concentration of orange juice is very low (Table 2) [55,56], two of the three studies reported a significant rise in urinary oxalate excretion, which could be due to the in vivo conversion of ascorbate to oxalate [52,54]. Due to concerns over the high sugar and energy content and the lack of dietary fiber of orange juice, the nutritional advice is to opt for the whole fruit over the juice, to limit the daily consumption of fruit juice to one serving and to dilute the juice with water [14]. A randomized crossover study of 10 healthy subjects on a regular diet comparing a Crystal Light lemonade beverage and two low-calorie orange juice beverages noted a higher urine pH as the only significant change between the groups, observed when participants consumed the Kroger low-calorie orange juice beverage [57]. However, citrate excretion, a major urinary inhibitor of calcium oxalate stone formation, did not differ significantly between the groups. Moreover, concerns over health risks, especially those associated with overconsumption and with certain artificial colorings, preservatives, sweeteners and additives, such as ascorbic acid and calcium, severely limit the health benefit of these beverages for kidney stone formers [57,58].

**Table 2.** Oxalate content of beverages.




d.l., detection limit.

Compared to orange juice, lemon and lime juices were found to have higher citrate concentrations [61]. However, findings from studies on the impact of lemon juice and lemonade on urinary risk factors for stone formation are likewise inconclusive. Although several studies in stone patients with or without hypocitraturia found a significant rise in urinary citrate excretion after the administration of lemonade [62–65], others failed to confirm the citraturic effects of regular lemonade [54,66,67]. Reasons for the variability in urinary citrate excretion with lemonade could be different degrees of dilution of pure lemon juice and the higher proportion of non-alkaline citric acid, which could have neutralized the alkali potential of citrate [57,68].

Studies of other fruit juices, such as grapefruit, apple, cranberry, and blackcurrant juice, have also provided inconsistent results. The findings of two studies of healthy subjects on the impact of grapefruit juice yielded no or only a partial effect on the risk of calcium oxalate stone formation (500 mL/day: *p* < 0.05; 720 mL/day: n.s.; 1000 mL/day: n.s.), although urinary citrate excretion increased significantly [53,69]. The administration of 0.5 or 1.0 L/day of apple juice likewise increased urinary citrate excretion but did not change the relative supersaturation of calcium oxalate [53]. In healthy volunteers, the consumption of 330 mL/day of blackcurrant juice significantly increased urine pH and citrate excretion [70]. However, a simultaneous rise of urinary oxalate excretion was observed, probably due to in vivo conversion of ascorbate to oxalate. Interventional studies on the effect of cranberry juice reported that oxalate excretion decreased [71], remained unchanged [72] or increased [70] in healthy subjects and increased in calcium oxalate stone patients [72]. Overall, the relative risk of calcium oxalate stone formation was unaffected [70,72] or reduced [71] in healthy volunteers but increased in stone patients after the intake of cranberry juice [72]. Finally, a study of healthy subjects showed that the consumption of 1.9 L/day of coconut water compared to tap water significantly raised urinary citrate, potassium, and chloride excretion, without affecting urine pH [73]. Although fruit and vegetable juices could be useful in the dietary therapy of kidney stone disease, the oxalate concentration has to be taken into account.

#### *2.3. Soft Drinks*

One large randomized controlled trial in stone patients with a soft drink intake of at least 160 mL/day randomly assigned men to refrain from soft drink consumption or to a control group [74]. The study showed that the consumption of soft drinks, especially those acidified with phosphoric acid, significantly increased the risk of recurrent stone formation. The analysis of data from 194,095 participants in the Health Professionals Follow-Up Study (HPFS) and Nurses' Health Study (NHS) I and II, with a median follow-up of more than eight years, indicated significant positive associations between the risk of stone formation and sugar-sweetened cola and sugar-sweetened non-cola beverages [51]. According to findings from a cross-sectional trial based on the Third National Health and Nutrition Examination Survey (NHANES-III), sugar-sweetened soft drink consumption was positively associated with serum uric acid concentration and the frequency of hyperuricemia [75]. Accordingly, cohort studies also indicated a strong positive association between sugar-sweetened soft drinks and the risk of gout in men [76]. These results could at least partly be explained by the fructose content of sugar-sweetened soft drinks, which has been associated with an increased risk of incident kidney stone formation [30,77].

#### *2.4. Tea and Coffee*

Tea and coffee are among the most commonly consumed beverages. A systematic review and large cohort studies generally supported a potentially preventive role of both coffee and tea consumption against stone formation [30,51,78–80]. It is assumed that the beneficial effect of tea and coffee could be attributed primarily to the diuretic action of caffeine, which, at high intakes, could at least partially offset its hypercalciuric effect [78,81,82]. The European Food Safety Authority considers a habitual caffeine consumption of up to 400 mg/day, corresponding to about four cups of brewed coffee, a safe amount for healthy adults, except pregnant women [83]. An increased total fluid intake and the antioxidative effect of phytochemicals, such as polyphenols, could be other explanations for the preventive effect of tea consumption [78,84].

A limitation of the cohort studies is that no distinction was made between different coffee and tea types, such as black, green and herbal tea. Moreover, there has been concern regarding the oxalate content of coffee and tea. While the oxalate content of coffee is low [55], black tea and green tea contain varying amounts of oxalate depending on the origin, quality, time of harvest and preparation [59,60,85] (Table 2). The highest oxalate concentration was detected in black and green tea [59,60,86], whereas other types of tea such as herbal and fruit tea were found to be low in oxalate [60,86]. Therefore, the exact mechanism for the protective effect of black and green tea against stone formation remains to be elucidated.

#### **3. Protein**

The recommended intake of protein for adults is between 0.8 and 1.0 g per kg normal body weight per day [87]. High dietary intake of protein was reported to exert potential detrimental effects on urinary risk factors of stone formation. The acid load provided by a high protein intake may increase urinary calcium and reduce urine pH and citrate excretion [88,89]. A study of 18 hypercalciuric stone patients showed that a protein restriction to 0.8 g/kg body weight/day decreased urinary calcium and uric acid and increased urinary citrate excretion [90].
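The per-kilogram recommendation translates into a simple individual target. As a purely illustrative sketch (the 70 kg body weight below is an assumed example value, not from the cited study):

```python
def protein_target_g(body_weight_kg: float, g_per_kg: float = 0.8) -> float:
    """Daily protein target (g) at a given intake level per kg normal body weight;
    0.8 g/kg/day is the restricted intake used in the cited study [90]."""
    return body_weight_kg * g_per_kg

# Hypothetical 70 kg patient on the restricted intake:
print(f"{protein_target_g(70.0):.0f} g/day")  # 56 g/day
```

The same function with `g_per_kg=1.0` gives the upper end of the recommended range for adults.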

However, the evidence from systematic reviews regarding the relationship between protein intake and the risk of kidney stone formation is inconsistent [91,92]. The two cohort studies [93,94] included in the systematic review by Pedersen [91] found no association between dietary protein intake and stone formation. In contrast, another systematic review confirmed that high-protein diets were associated with increased urinary calcium excretion, which is a risk factor for calcium stone formation [95]. In healthy subjects, the administration of 1.5 g per day of L-methionine did not raise urinary calcium excretion [96], whereas the intake of 3 g per day of L-methionine significantly increased calcium excretion by about 1 mmol per day [97]. To date, there is no randomized controlled trial comparing the isolated effect of a high versus low protein intake on the risk of urinary stone formation.

While findings on the association between dietary protein consumption and the risk of stone formation are inconclusive, large observational studies found that a higher dietary net acid load was associated with a higher risk of stone formation [98]. These data suggest that the proportion of consumed vegetables and fruits relative to ingested protein, rather than total protein per se, could be a more reliable indicator of the risk of urinary stone formation. Vegetables and fruits have a distinct alkalizing potential and can neutralize the proton load metabolically generated from ingested protein [99,100]. In hypocitraturic stone patients, the introduction of vegetables and fruits increased urine pH and citrate excretion and reduced the relative supersaturation of calcium oxalate and uric acid [101]. Reduced urine pH and citrate excretion, resulting from a high nutritional proton load or high dietary acidity, are risk factors for several types of urinary stones, particularly the most common, i.e., calcium oxalate and uric acid. The higher the urine pH, the higher the stone-inhibiting citrate excretion and calcium-binding capacity, and the lower the urinary calcium excretion [102].

#### **4. Carbohydrates**

Findings from studies on the impact of carbohydrates on the risk of kidney stone formation are inconclusive. While some studies reported a similar intake of carbohydrates in stone patients and controls [103,104], others noted a higher intake of carbohydrates in stone patients than in controls [105,106]. A major limitation of these studies is that they did not distinguish between different types of carbohydrates, especially the most common disaccharide, sucrose, and its monomers, glucose and fructose.

Prospective cohort studies found a positive relationship between sucrose consumption and the risk for stone formation in women but not in men [93,94,107,108]. A previous study reported a higher rate of urinary calcium excretion after oral ingestion of 100 g glucose or sucrose in normal subjects and calcium oxalate stone formers, where the response was even more pronounced in the latter group [109]. The rise in calcium excretion after an oral glucose load has been attributed to an increased intestinal absorption and reduced renal tubular reabsorption of calcium [110–112]. It was suggested that this effect could be, at least partially, mediated by an increase in serum insulin. A study of calcium stone patients with idiopathic hypercalciuria and healthy controls conducted using a fixed metabolic diet concluded that hyperinsulinemia is unlikely to play a significant role in the pathogenesis of calcium stone formation among patients with idiopathic hypercalciuria [113].

The consumption of fructose has clearly increased over the last decades, as fructose is used as a sweetener in beverages and foods as a replacement for sucrose or glucose. A systematic review and meta-analysis noted a positive relationship between fructose intake and the risk of incident stone formation [30], but the underlying mechanisms are not well understood [114]. Fructose consumption is assumed to increase the risk of stone formation in part via effects on the urinary excretion of calcium [115] and oxalate [115,116], on urine pH [116], and via effects on uric acid metabolism [115–118]. A cohort study of men showed a positive association between fructose intake and the risk of incident gout [76]. Studies using fixed metabolic diets are required to evaluate the effect of sucrose, glucose and fructose on metabolism and on urinary risk factors for uric acid and calcium oxalate stone formation.

#### **5. Fat**

Data regarding the association between dietary fat consumption and the risk of urinary stone formation are scarce and inconsistent. While several studies reported a similar fat intake in stone patients and controls [103,104], others found higher dietary fat intake in stone formers [105,106].

Studies have suggested that the dietary fatty acid pattern, especially the ratio of n-6 to n-3 polyunsaturated fatty acids, may affect the risk of calcium oxalate stone formation through various complex mechanisms [119]. An abnormal concentration of arachidonic acid (C20:4n-6) in plasma and erythrocyte membrane phospholipids was found in idiopathic calcium oxalate stone patients compared to healthy controls [120]. Arachidonic acid in cell membrane phospholipids can be released by phospholipase enzymes and subsequently serve as a precursor of PGE2 [121,122]. An elevated PGE2 production is assumed to induce hypercalciuria by increasing intestinal calcium absorption and bone resorption [123,124], and by decreasing renal tubular calcium reabsorption [125,126]. Increased phospholipid arachidonic acid levels may also induce hyperoxaluria by activating anion carriers and consequently the intestinal and renal transport activity of oxalate [120,127]. A study of 20 healthy volunteers reported that supplementation with the n-3 polyunsaturated fatty acids DHA (22:6n-3) and EPA (20:5n-3) led to their incorporation into cell membrane phospholipids, partly at the expense of arachidonic acid [128]. A change in the membrane fatty acid pattern due to an increased dietary intake of n-3 polyunsaturated fatty acids was therefore assumed to decrease the urinary excretion of calcium and oxalate.

Analyses of 24 h urine samples and dietary records of 58 idiopathic calcium oxalate stone patients showed a positive association between the dietary arachidonic acid content and urinary oxalate excretion [129]. Several studies investigated the role of fish oil administration in the dietary management of stone formation. Fish oil supplementation was found to reduce oxalate excretion in healthy subjects [130], and to decrease urinary excretion of calcium and/or oxalate in calcium stone patients in most trials [119]. Further studies should identify those patients who benefit most from n-3 polyunsaturated fatty acid supplementation.

#### **6. Oxalate**

Urinary oxalate is regarded as an essential risk factor for calcium oxalate stone formation. Changes in urinary oxalate concentration can significantly increase the urinary supersaturation of calcium oxalate [131,132]. A prospective study of 134 recurrent calcium oxalate stone patients identified the rise in oxalate excretion as the major urinary determinant for relapse after a two-year follow-up [133]. Oxalate is an end product of metabolism. Urinary oxalate is derived from endogenous oxalate synthesis and dietary oxalate intake [134]. Endogenous oxalate metabolism occurs predominantly in the liver and is affected by dietary intake of precursors, such as ascorbic acid and hydroxyproline [135,136].

The effect of dietary oxalate intake on urinary oxalate excretion and the risk of stone formation has been examined in several interventional trials. In a study of healthy subjects, the mean contribution of dietary oxalate to urinary oxalate excretion ranged from 24% (10 mg/day dietary oxalate) to 42% (250 mg/day dietary oxalate) [137]. A study of 20 healthy women and men showed that a controlled oxalate-rich diet (600 mg/day dietary oxalate), compared to a diet with a normal oxalate content (100 mg/day), significantly increased oxalate excretion from 0.354 to 0.542 mmol/24 h, i.e., by 0.188 mmol/24 h or >50%, with dietary oxalate then accounting for 35% of total urinary oxalate excretion [138]. This study also showed that the supersaturation of calcium oxalate increases significantly with a high dietary oxalate intake. However, a prospective cohort study reported only a modest positive association between dietary oxalate intake and the risk of incident stone formation [139]. Several factors could explain these inconsistencies: the use of error-prone food frequency questionnaires to evaluate dietary oxalate intake in large cohort studies, day-to-day variation in oxalate ingestion, and the variability of the oxalate content of foods due to growth conditions, preparation and processing [59,140–142]. Therefore, studies using diets strictly controlled in their oxalate and nutrient content, together with comprehensive data on the oxalate content of raw and processed foods, are required.
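The figures reported for the controlled-diet study above can be checked in a few lines of Python; this is merely a sanity check of the reported arithmetic, not part of the original study:

```python
# Oxalate excretion on the 100 mg/day (control) vs. 600 mg/day (oxalate-rich)
# diet, in mmol/24 h, as reported in the text.
control_diet = 0.354
oxalate_rich_diet = 0.542

increase = oxalate_rich_diet - control_diet    # absolute rise in excretion
relative_rise = increase / control_diet        # rise relative to the control diet
dietary_share = increase / oxalate_rich_diet   # dietary share of total excretion

print(f"increase: {increase:.3f} mmol/24 h")           # 0.188
print(f"relative rise: {relative_rise:.0%}")           # 53%, i.e. >50%
print(f"dietary share of total: {dietary_share:.0%}")  # 35%
```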

Dietary oxalate is mainly derived from plant foods. Estimates of dietary oxalate intake vary widely, depending on the ingestion of oxalate-rich foods. It is therefore essential to identify sources of excess dietary oxalate, and detailed knowledge of the oxalate content of foods is important in the dietary therapy of calcium oxalate stone patients. However, the previous lack of accurate and complete data on the oxalate concentration of foods hindered the elucidation of the role of dietary oxalate in urinary oxalate excretion and the risk of stone formation. Analysis of the oxalate content of a wide variety of foods by an HPLC enzyme-reactor method provided a comprehensive database of the oxalate content of foods and beverages and identified a considerable number of foods with high or extremely high oxalate concentrations [55,56,59,60,143–145]. An overview of foods rich in oxalate, including vegetables, legumes, fruits, cereals and pseudocereals, nuts, herbs and spices, is presented in Table 3.



As boiling may lead to considerable losses of oxalate into the cooking water, food processing and preparation methods are important determinants of the oxalate content [141]. For example, the oxalate concentration of spinach was found to be more than five times higher raw (1959 mg/100 g) than cooked (364 mg/100 g) [55,143]. To avoid losses of water-soluble vitamins and minerals, it is recommended to steam spinach briefly, which, however, also preserves most of its oxalate content. Moreover, beverages such as vegetable and fruit juices as well as black, green and iced teas were found to contain considerable amounts of highly bioavailable oxalate (Table 2) [56,59,60]. Knowledge of the oxalate concentration of beverages is particularly crucial, as a high fluid intake is the most important nutritional measure for recurrence prevention of kidney stone disease. Kidney stone formers should be advised to avoid oxalate-rich foods and beverages.

A case-control study of 186 calcium oxalate stone patients found a significant positive association between dietary ascorbic acid intake and urinary oxalate excretion [134]. The association between ascorbic acid intake and the risk of urinary stone formation has been noted in several large cohort studies [146,147]. A study under controlled dietary conditions noted a significant increase in urinary oxalate in both calcium oxalate stone patients and healthy controls after oral supplementation of 2 g ascorbic acid daily [148]. Dietary hydroxyproline, mainly present in collagen/gelatin, may also contribute to endogenous oxalate synthesis and urinary oxalate excretion in healthy subjects and in primary hyperoxaluria [135,149,150]. A study using infusions of [15N,13C5]-hydroxyproline found that hydroxyproline contributed at least 15% to urinary oxalate excretion in healthy volunteers and could be a major source of the oxalate produced in patients with primary hyperoxaluria [149,150].

Intestinal hyperabsorption of oxalate may additionally contribute to urinary oxalate, even in the absence of intestinal diseases associated with fat malabsorption. A study under standardized conditions using [13C2]oxalate demonstrated that the mean intestinal oxalate absorption was significantly higher in 120 idiopathic calcium oxalate stone patients compared to 120 healthy subjects (10.2% vs. 8.0%; *p* < 0.001) [151]. Intestinal hyperabsorption, defined as oxalate absorption exceeding 10%, was noted in 28% of healthy volunteers and in 46% of patients. The amount of dietary oxalate that is intestinally absorbed is affected by dietary constituents, especially the calcium intake [152]. Moreover, enteric hyperoxaluria in the setting of fat malabsorption due to different gastrointestinal disorders is a crucial risk factor contributing to kidney stone formation [153–155]. Unabsorbed fatty acids bind to calcium, decreasing the intraluminal calcium concentration for complexation with oxalate [156]. With the depletion of free calcium, a larger percentage of unbound oxalate is available for absorption in the gut. A study of 51 patients with intestinal fat malabsorption found that the resection status is a major risk factor for hyperoxaluria and kidney stone formation [157].

Finally, the intestinal colonization with the Gram-negative anaerobic oxalate-degrading bacterium *Oxalobacter formigenes* can be inversely associated with calcium oxalate stone formation [158]. Although the administration of *Oxalobacter formigenes* or other probiotic preparations was suggested to reduce urinary oxalate excretion and the lithogenic risk, findings from several interventional studies are conflicting [138,159]. Future studies should include functional and nutritional aspects of the interaction between nutrients and the gut microbiota composition.

#### **7. Calcium**

Hypercalciuria is a crucial risk factor for calcium stone formation. A balanced dietary calcium intake from both dairy and non-dairy sources has been demonstrated to exert a preventive effect against urinary stone formation [160]. Dietary calcium restriction should be avoided, as it may induce bone loss and result in the hyperabsorption and hyperexcretion of oxalate. Dietary calcium restriction reduces the intestinal calcium concentration, which enhances the absorption of uncomplexed oxalate and subsequently urinary oxalate excretion [152]. A study of healthy volunteers using a standardized [13C2]oxalate absorption test showed that intestinal oxalate absorption was inversely associated with dietary calcium intake within the range of 200 (17% oxalate absorption) to 1200 mg/day calcium (2.6% oxalate absorption) [152]. Dietary calcium intake above 1200 mg per day had only a minor impact on oxalate absorption. Epidemiologic studies found an inverse relationship between dietary calcium intake and the risk of stone formation in women and men [93,107,108]. A five-year prospective randomized study of 120 men with calcium oxalate stone formation and hypercalciuria showed that recurrences were less frequent on a normal-calcium (1200 mg/day), normal-protein, low-salt diet than on a low-calcium diet (400 mg/day) [161]. For idiopathic calcium stone formers, a total dietary calcium intake of 1000 to 1200 mg/day is recommended [14–16].

#### **8. Sodium Chloride**

Dietary sodium chloride consumption increases the risk of stone formation due to its propensity to enhance urinary calcium excretion [162]. A high sodium chloride intake may promote calcium excretion by inhibiting renal tubular calcium reabsorption as a result of the sodium-induced expansion of the extracellular fluid volume [163]. Interventional studies in normal adults showed that every 100 mmol (2300 mg) increase in daily sodium intake enhances daily calcium excretion by approximately 1 mmol [164]. A correlation between dietary sodium intake and kidney stone formation was observed in one cohort study [108], but was not confirmed by others [93,94,107]. These discrepancies may reflect the difficulty of obtaining reliable estimates of dietary salt intake from food frequency questionnaires. A randomized controlled trial of 210 idiopathic calcium oxalate stone patients demonstrated that a low-salt diet effectively decreased urinary calcium excretion compared to a control diet [165]. The recommended dietary sodium intake is <100 mmol (2300 mg) per day, corresponding to 6 g of salt (sodium chloride) [14,16,17].
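The mmol-to-mass conversions used above follow directly from the standard molar masses of sodium (≈22.99 g/mol) and sodium chloride (≈58.44 g/mol); the short check below is illustrative only, not a clinical tool:

```python
# Molar masses in g/mol (equivalently mg/mmol).
NA_MG_PER_MMOL = 22.99     # sodium
NACL_MG_PER_MMOL = 58.44   # sodium chloride

sodium_mmol = 100          # recommended upper limit from the text

sodium_mg = sodium_mmol * NA_MG_PER_MMOL        # 2299 mg, i.e. ~2300 mg sodium
salt_g = sodium_mmol * NACL_MG_PER_MMOL / 1000  # 5.844 g, i.e. ~6 g salt

print(f"{sodium_mmol} mmol sodium ≈ {sodium_mg:.0f} mg sodium ≈ {salt_g:.2f} g salt")
```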

#### **9. Dietary Management**

Diet modification is an effective method to correct urinary risk factors for kidney stone formation, particularly for the most common stone type, calcium oxalate. Dietary therapy should be tailored to each individual patient according to his or her specific biochemical and dietary risk profile. A detailed nutritional assessment is an essential component of the evaluation and the main prerequisite for a successful dietary therapy of the stone-forming patient. Seven-day dietary records are regarded as the most accurate technique for assessing habitual dietary intake. A variety of dietary factors, including fluid intake, dietary protein, carbohydrates, oxalate, calcium and sodium chloride, can modulate the urinary risk profile and contribute to the risk of kidney stone formation. The specific dietary recommendations for calcium oxalate stone formers are summarized in Table 4.


**Table 4.** Dietary recommendations for calcium oxalate stone patients.

IH, idiopathic hyperoxaluria; EH, enteric hyperoxaluria.

#### **10. Conclusions**

Nutritional factors play an important role in kidney stone formation. A careful dietary assessment should identify any dietary habits that predispose the patient to urinary stone formation. An appropriate diet can modulate the urinary risk profile and contribute to reducing the risk of urinary stone formation. Specific dietary therapy, based on nutritional assessment and metabolic evaluation, has been demonstrated to be more effective than general dietary measures in preventing recurrent stone formation.

**Funding:** This research was supported in part by a grant from the Association of German Mineral Water Bottlers (VDM).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Review* **Diet and Nutrition in Pediatric Inflammatory Bowel Diseases**

**Ugo Cucinotta, Claudio Romano \* and Valeria Dipasquale**

Pediatric Gastroenterology and Cystic Fibrosis Unit, Department of Human Pathology in Adulthood and Childhood "G. Barresi", University of Messina, 98124 Messina, Italy; ugocucinotta@gmail.com (U.C.); dipasquale.valeria@libero.it (V.D.)

**\*** Correspondence: romanoc@unime.it; Tel.: +39-090-221-2919

**Abstract:** Both genetic and environmental factors are involved in the onset of inflammatory bowel disease (IBD). In particular, diet composition is suspected to contribute significantly to IBD risk. In recent years, major interest has arisen in the role of nutrition in disease pathogenesis and course, and many studies have shown a clear link between diet composition and impaired intestinal permeability. Moreover, many IBD-related factors, such as poor dietary intake, nutrient losses and drugs, interact with nutritional status, thus paving the way for many therapeutic strategies in which nutrition represents the cornerstone, either as a first-line therapy or as a means of reversing nutritional deficiencies and malnutrition in IBD patients. Exclusive enteral nutrition (EEN) is the most rigorously supported dietary intervention for the treatment of Crohn's disease (CD), but it is limited by low tolerability, especially in pediatric patients. Promising alternative regimens include the Crohn's Disease Exclusion Diet (CDED) and other elimination diets, whose use is gradually spreading. The aim of the current paper is to provide a comprehensive and updated overview of the latest evidence on the role of nutrition and diet in pediatric IBD, focusing on the different nutritional interventions available for the management of the disease.

**Keywords:** diet; enteral nutrition; inflammatory bowel disease; nutrition; nutritional therapy; prevention

**1. Introduction**

The term inflammatory bowel disease (IBD) refers to a heterogeneous group of disorders encompassing Crohn's disease (CD), ulcerative colitis (UC) and unclassified inflammatory bowel disease (IBD-U), characterized by a relapsing-remitting course and variably presenting with abdominal pain, diarrhea, rectal bleeding and weight loss. In recent decades, the incidence of IBD has grown significantly in industrialized countries, in children as well as in adults [1–3]. It is well known that IBD has a multifactorial etiology, but despite ongoing scientific efforts, its pathogenesis and pathophysiology remain unclear [4]. Both genetic and environmental factors are involved in IBD onset. In particular, the prevailing hypothesis posits a complex interaction between an exaggerated immune response in genetically predisposed individuals and environmental factors, along with alterations of the intestinal flora, eventually sustaining anomalous chronic inflammation [4,5].

**Citation:** Cucinotta, U.; Romano, C.; Dipasquale, V. Diet and Nutrition in Pediatric Inflammatory Bowel Diseases. *Nutrients* **2021**, *13*, 655. https://doi.org/10.3390/nu13020655

Academic Editor: Ina Bergheim. Received: 1 February 2021; Accepted: 13 February 2021; Published: 17 February 2021.

Several studies have been conducted in the past to evaluate the precise role of nutrition (and malnutrition) and diet composition in the risk of immune-mediated diseases, leading to the development of a new discipline, referred to as "clinical nutrition". Clinical nutrition is defined by the European Society for Clinical Nutrition and Metabolism (ESPEN) as the "discipline that deals with the prevention, diagnosis and management of nutritional and metabolic changes related to acute and chronic diseases and conditions caused by a lack or excess of energy and nutrients" [6]. It encompasses all the diseases in which nutrition plays a main role, not only as a promoter, but also as a therapeutic tool [7]. As far as IBD is concerned, the relation between diet composition and disease onset, course and management is supported by scientific evidence: (i) epidemiological studies found an association between specific dietary habits and nutrients and an increased risk of IBD; (ii) some foods and dietary components are potentially capable of either enhancing or reducing the severity of inflammation; (iii) for some pediatric patients with an established diagnosis of CD, exclusive enteral nutrition (EEN) can be considered a primary induction treatment with efficacy in achieving mucosal healing; (iv) exclusion diets could treat or prevent disease flares; (v) in children with IBD, especially those with CD, malnutrition and nutrient deficiencies are often present at diagnosis; (vi) early nutritional strategies can lead to better disease control, as well as catch-up growth, improved bone mineral density and adequate pubertal development [8–10]. The most common therapeutic choices for pediatric IBD consist of systemic and topical corticosteroids, aminosalicylates and immunosuppressants (such as thiopurines or methotrexate). Over the past decade, the use of biologic therapies has increased significantly given their clear efficacy in the pediatric setting as well, although they have been associated with a loss of response over time [11]. Besides this, the non-pharmacological management of pediatric IBD has evolved over the years, and dietary changes are currently considered major therapeutic tools [10,12,13].

The aim of the present study is to provide a comprehensive and updated overview on the latest evidence about the role of nutrition and diet in pediatric IBD, also focusing on the different nutritional interventions available for the management of the disease.

#### **2. Methods**

For the purpose of this paper, a comprehensive search of the published literature was performed in the PubMed MEDLINE and Google Scholar databases using the following keywords: "diet", "children", "enteral nutrition", "inflammatory bowel disease", "clinical nutrition", "nutritional therapy" and "malnutrition". We focused on the most relevant articles published in English between 2005 and December 2020, including meta-analyses, systematic reviews, consensus guidelines, randomized controlled trials and cohort studies. The selected papers were then analyzed in order to extract valid and updated evidence on the role of nutrition and diet in pediatric IBD, with a focus on the nutritional strategies available for the management of the disease.

#### **3. Nutrition and Diet and Intestinal Inflammation**

Although the exact pathogenesis is still unclear, IBD development results from a variable interaction between genetic susceptibility, individual immune response, gut microbiota and environmental factors [14].

The role of diet composition in intestinal inflammation has long been controversial, owing to the sometimes limited and conflicting data from retrospective and case-control studies. However, it has been demonstrated that some individuals are more susceptible than others to developing the disease, depending on specific dietary practices. The increasing "westernization" of lifestyle, mostly characterized by a high consumption of animal proteins and fats along with a poor intake of fruit, vegetables and fibers, is associated with a higher risk of IBD [15,16]. Indeed, some epidemiological studies have demonstrated an increased risk of IBD among individuals moving from low-income countries, with low IBD incidence, to high-income countries, suggesting that new environmental factors could increase the risk of developing the disease [3]. While overall fat intake does not appear to correlate with IBD, there is sufficient evidence that a low intake of omega-3 (*n*-3) and a high intake of omega-6 (*n*-6) polyunsaturated fatty acids (PUFAs) are associated with an increased risk of CD. *N*-6 PUFAs (i.e., linoleic acid) are precursors of proinflammatory eicosanoids, whereas dietary *n*-3 PUFAs inhibit the formation of proinflammatory prostaglandins and leukotrienes through the arachidonic acid pathway; a chronically imbalanced consumption may therefore lead to a pro-inflammatory state, oxidative stress and impaired intestinal mucosal permeability [17,18]. On the other hand, a long-term intake of dietary fiber, particularly from fruit, has been associated with a lower risk of CD, while no or only a conflicting association with UC has been found [19]. The most recent ESPEN guidelines recommend a diet rich in fruit and vegetables and low in *n*-6 PUFAs [20]. The Western diet also increases the risk of disease through food additives such as emulsifiers, maltodextrins, carrageenan and carboxymethylcellulose, as well as saturated fats, which have been associated with impaired intestinal permeability [21–24].

Even early-life dietary patterns can be considered risk factors. Breast milk is rich in secretory IgA (sIgA), leukocytes and antimicrobial factors (lysozyme, lactoferrin, nucleotides), and promotes immune system maturation and the creation of the T-lymphocyte pool, as well as the immune response against infections [25–27]. It has been suggested that an early discontinuation of breastfeeding may facilitate the onset of several chronic conditions later in life, such as metabolic and autoimmune diseases. A recent meta-analysis comparing the exposure to breast milk among patients with CD and UC and controls confirmed an inverse association between breastfeeding and the risk of developing IBD [28]. Moreover, a dose-dependent effect has been hypothesized, with higher protection for longer-lasting breast milk exposure [28]. Additionally, since the microbiome has been shown to differ according to the type of early diet, with a relatively higher proportion of *Firmicutes* and *Actinobacteria* in breast-fed infants, it seems reasonable to assume that dysbiosis during the first months of life may negatively affect the immunological functions of the host microbiota [29–31].

Indeed, IBD has been consistently associated with gut dysbiosis, variably determined by specific foods and dietary habits. While the commensal microbiome is physiologically abundant in *Firmicutes*, *Bacteroidetes*, *Actinobacteria* and *Proteobacteria*, IBD patients typically exhibit a low concentration of *Firmicutes*, such as *Bifidobacterium*, *Clostridia* and especially *Faecalibacterium prausnitzii*, whose protective role has been attributed to its ability to stimulate anti-inflammatory cytokines (including IL-10) [12,32,33]. Conversely, increased concentrations of *Escherichia coli* and other *Enterobacteriaceae* have been found [32,33]. Metabolic pathways are also altered: short-chain fatty acids (SCFAs), such as acetate, propionate and butyrate, normally produced by commensal bacteria through fermentation of food components, have been found to be reduced in IBD compared to controls. Further evidence includes reduced tryptophan metabolism and disrupted bile acid metabolism, with low production of bile acids, which are classically associated with anti-inflammatory activities and T-cell regulation [33]. All these alterations eventually contribute to decreased bacterial diversity, an altered host barrier, increased permeability and subsequent intestinal inflammation.

#### **4. Nutritional Strategies in Induction of Remission in Pediatric IBD**

#### *4.1. Exclusive Enteral Nutrition*

Exclusive enteral nutrition (EEN) is the most important nutritional intervention in pediatric IBD, consisting of a complete liquid formula as the sole source of the daily energy requirement for a period of 6–8 weeks [34]. Consensus guidelines of the European Society of Pediatric Gastroenterology, Hepatology and Nutrition (ESPGHAN) and the European Crohn's and Colitis Organisation (ECCO), as well as North American guidelines, recommend EEN as the first-line therapy for mild to moderate pediatric CD to induce remission, both at the first flare-up and during relapses of symptoms [20,34–37]. The main indication for EEN is active luminal CD with a solely inflammatory behavior (B1 according to the Paris classification) and low-to-medium risk at diagnosis, regardless of the disease location (Table 1) [35].

By contrast, there are insufficient data supporting EEN use for extraintestinal manifestations or for perianal disease [36]. The advantages of EEN include: (i) high rates (up to 80%) of clinical remission; (ii) steroid sparing; (iii) correction of malnutrition and micronutrient deficiencies; (iv) an increase in lean body mass; (v) improvement of growth and height Z-scores; (vi) a decreased longer-term need for steroid and anti-TNF therapies; (vii) improvement of quality of life (QoL) [38–41]. Data supporting EEN in the management of UC are still lacking, and its use is not recommended in these patients [36].

#### 4.1.1. EEN Efficacy

Many studies, including systematic reviews, meta-analyses and Cochrane reviews, have shown EEN to be as effective as steroid therapy in inducing remission over a 6–8-week period in pediatric CD patients, by inducing both mucosal and transmural healing [42–47]. The significant advantage of sparing steroids, traditionally used in this clinical setting, is the avoidance of their short- and long-term side effects, such as poor growth, increased susceptibility to infections and metabolic disorders (Cushing syndrome, osteopenia/osteoporosis, impaired glucose metabolism) [35,36,48,49]. Several trials have demonstrated that EEN may be superior to steroids in CD children with active disease for the achievement of mucosal healing, with a significant decrease of acute-phase reactants such as erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) and fecal calprotectin (FC) [50,51]. In particular, FC is a reliable, non-invasive inflammatory marker and predictor of mucosal healing, and it has been observed to decrease by up to 50% after the administration of EEN [52], although FC levels at the end of an EEN regimen do not seem to predict the length of time until a future relapse [53]. Furthermore, a study examining FC changes after food reintroduction found that FC rises to pre-treatment levels within 4 months post-EEN, indicating that subclinical inflammation rapidly recurs after food reintroduction, with free-diet foods presumably acting as triggers of inflammation [54].



EEN, Exclusive Enteral Nutrition; B1, non-stricturing non-penetrating Behavior; QoL, Quality of Life.

Although no predictive factors of EEN outcome are currently validated, disease severity and luminal disease seem to be the only predictors of EEN success, with mild to moderate disease [weighted Pediatric Crohn's Disease Activity Index (wPCDAI) < 57.5, fecal calprotectin < 500 µg/g of stool, ileal involvement, CRP > 15 mg/L] and ileal disease having demonstrated a better response to EEN [55]. However, due to insufficient data supporting an association between the efficacy of EEN and disease location [56–58], current guidelines do not recommend taking the disease phenotype into account when starting patients on EEN [20,35,36].

#### 4.1.2. Practical Issues of EEN

EEN is delivered through the administration of enteral nutrition (EN) formulas, available in a wide range of commercial formulations. These formulas are classified according to the protein source into intact proteins (polymeric), modified protein (elemental/semielemental) and disease specific formulas [39]. A nutritionally balanced polymeric formula should be the preferred choice in patients with intestinal sufficiency and in the absence of other medical conditions (e.g., co-existence of a cow's milk allergy) [36]. Its optimal nutritional properties (approximately 45–60% carbohydrate, 15–20% protein, 30–40% fat), along with better palatability and lower price, makes polymeric feed the most commonly used [59]. However, despite these advantages, no EN formula has been demonstrated to be

superior to others, with similar results in terms of achievement of clinical remission [42,58]. The oral route should be preferred, switching to a nasogastric tube only in the case of inadequate oral intake [36,60]. Volumes are calculated according to the estimated average daily requirement, taking into account the caloric content of the chosen formula (standard concentration of 0.86–1 kcal/mL). Since resting energy expenditure has not been shown to be increased in patients with CD during the different phases of disease, equations for estimating energy requirements can be used independently of the disease phase [60].
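As a minimal illustration of this volume calculation (a hypothetical helper, not part of any guideline), the daily formula volume follows directly from the estimated energy requirement and the caloric density of the chosen formula:

```python
def formula_volume_ml(energy_req_kcal: float, caloric_density_kcal_per_ml: float = 1.0) -> float:
    """Daily EEN formula volume (mL) covering a given energy requirement.

    Standard polymeric formulas range from roughly 0.86 to 1 kcal/mL (per the text).
    """
    if not 0.5 <= caloric_density_kcal_per_ml <= 2.0:
        raise ValueError("implausible caloric density")
    return energy_req_kcal / caloric_density_kcal_per_ml

# An illustrative 2000 kcal/day requirement:
print(round(formula_volume_ml(2000, 1.0)))   # 2000 mL/day with a 1 kcal/mL formula
print(round(formula_volume_ml(2000, 0.86)))  # 2326 mL/day with a 0.86 kcal/mL formula
```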

Despite the absence of evidence on the exact duration of an EEN regimen, the guidelines recommend a minimum of 6 weeks. Indeed, while symptoms may improve after a few days, and by week 3 in most cases [61], mucosal healing takes several weeks to occur. Food re-introduction is then performed over a period of 1 to 5 weeks, although no protocols for its proper management are currently available [36]. Notably, a retrospective cohort study evaluating food re-introduction after an EEN cycle during 1 year of follow-up showed that rapid (3 days) food reintroduction after EEN is as effective as standard (5 weeks) food reintroduction, with no significant differences in clinical relapse rates: rapid 50% vs. standard 47% (*p* = 0.58) [62]. Further studies are warranted to assess the best timing in the pediatric IBD population.

#### 4.1.3. Mechanism of Action of EEN

The exact mechanism through which EEN induces mucosal healing is still unclear, but some evidence points to the beneficial exclusion of specific dietary components, microbiome modulation, intestinal rest and a direct anti-inflammatory effect of EEN [63,64]. Previously published trials have shown the ineffectiveness of partial enteral nutrition (PEN) associated with a free diet in inducing clinical remission, indicating that dietary antigens, even in small quantities, are harmful and suggesting that EEN may also work by excluding them [40,65]. After EEN administration, the microbiome composition shows some modifications, including a paradoxical reduction of microbial diversity and a decreased proportion of presumably beneficial bacterial groups, such as *Bifidobacterium spp., Firmicutes* (including *Faecalibacterium prausnitzii*), *Bacteroides/Prevotella* and *Proteobacteriaceae* [64,66,67]. In particular, a significant correlation was found between the decrease of *Bacteroides/Prevotella* group bacteria and clinical improvement during EEN treatment [68]. Accordingly, regression of the major EEN-induced microbiome changes after return to a habitual free diet is well documented [67,68].

#### *4.2. Partial Enteral Nutrition*

PEN is a nutritional strategy based on the administration of a liquid enteral formula covering less than 100% of total energy requirements, along with whole foods. In trials evaluating PEN for treatment of active CD or for maintenance of remission, the volume of formula has ranged from 35% to 90% of total energy requirements, while the foods consumed belonged mainly to a free diet or to defined restrictive diets [24]. PEN in association with an unrestricted diet has not been shown to be a good treatment choice [65]. It has been found to be significantly less effective than EEN or biologics for the achievement of clinical remission [40], and its use is therefore not currently recommended as an induction strategy in active pediatric CD [35,36]. Further nutritional strategies combining PEN with specific exclusion diets are warranted.

#### *4.3. Crohn's Disease Exclusion Diet*

The Crohn's Disease Exclusion Diet (CDED) is a validated dietary intervention, first conceived in 2014 by Sigall-Boneh and colleagues, combining PEN with a specific exclusion diet [69]. The rationale of the CDED is the avoidance of certain foods and dietary components (such as additives like emulsifiers or maltodextrins, food preservatives, etc.), mostly belonging to Western diets, deemed to act as triggers of intestinal inflammation, dysbiosis, an altered intestinal mucous layer and impaired barrier function [12,21,22,70,71]. In addition, the use of real foods, albeit in a controlled way, seems to meet the request of patients

and their parents, who often blame the monotony of EEN for low adherence to treatment [72,73].

#### 4.3.1. CDED Efficacy

Sigall-Boneh and colleagues [69] first examined the effectiveness of the CDED in 47 CD patients (mean age 16 ± 5.6 years), through the administration of a polymeric formula covering 50% of their daily energy requirements and selected foods for the remaining 50%. After 6 weeks of treatment, 70% of them (33/47) achieved clinical remission, evaluated through the Harvey-Bradshaw Index (HBI) and the PCDAI. Across the following 6 weeks, the quantity of polymeric formula was gradually reduced to 25%, while more selected foods were added to the diet. Eighty percent of patients were still in remission at 12 weeks [69]. Another study demonstrated the efficacy of the CDED in inducing remission even in children failing biological therapy [74].

A recent randomized controlled trial comparing the CDED (50% PEN + CDED) to EEN in patients with luminal mild to moderate active CD demonstrated that the CDED is as effective as EEN in inducing clinical remission at 6 weeks of treatment, with a similar decrease in PCDAI and acute phase reactants [75]. Moreover, the CDED seems to be even superior to EEN in terms of sustained remission rates and tolerability, leading the authors to conclude that CDED + PEN is reasonably a good option either as first-line treatment in luminal mild-moderate active CD or for maintenance of remission in the long term [75]. However, due to the lack of endoscopic assessment of mucosal healing - a major therapeutic goal - current guidelines still do not recommend CDED + PEN either as induction therapy or for maintenance [35].

Another recent trial compared EEN to PEN plus a controlled diet rather similar to the CDED to induce mucosal healing in active pediatric CD over a period of 6 weeks [76]. The PEN group received a polymeric formula covering 75% of the daily requirement and one meal a day from an anti-inflammatory diet (AID) inspired by the CDED (avoidance of processed foods with additives, animal fat, sugar, dairy products, and gluten) (Table 2).

**Table 2.** Foods not allowed in any phase - adapted from Sigall-Boneh et al. [69].


At week 6, clinical and endoscopic remission, defined as PCDAI < 10 and Simple Endoscopic Score for CD (SES-CD) ≤ 2, respectively, were evaluated, showing similar rates of both between EEN and PEN, with higher rates of clinical remission in the PEN group (81.9%) than described so far in other trials evaluating PEN [34,40]. However, some limitations of the study should be taken into account. The PEN group received a greater amount of EN formula (75% of the daily requirement) than in other studies on PEN, perhaps explaining the higher remission rates. Moreover, the study lacked randomization, which may have affected the clinical evaluation. Additionally, only a small number of patients (*n* = 11) completed the protocol. Before recommendations can be formulated, further studies are necessary, aimed at replicating these data and evaluating the real efficacy in a larger sample of patients.

#### 4.3.2. Practical Characteristics of CDED

As previously mentioned, the CDED owes its progressive diffusion both to its high efficacy in terms of clinical remission and to its variability, leading to better acceptance by patients. The CDED is a multi-stage whole-food diet, based on the exclusion of animal and saturated fats, gluten, dairy products, highly processed foods containing emulsifiers, and all packaged products. At the same time, it provides an increased consumption of fruits, vegetables, and resistant starch. The foods are classified into three main groups: mandatory foods, allowed foods and not-allowed foods. The CDED is designed in 3 phases: the first two induction phases last 6 weeks each, while the third, starting from the 13th week, is referred to as the "maintenance phase" [69,73]. During the first phase, 50% of the nutritional requirement is given in the form of a polymeric formula low in lactose and fiber (e.g., Modulen, Nestlé), while the remaining 50% is supplied by mandatory foods (Table 3), sources of high-quality proteins, pectin, resistant starch and other beneficial fibers, all necessary for the production of SCFAs [73,77].

**Table 3.** Example of Crohn Disease Exclusion Diet - adapted from [70,73].


During the second phase, the percentage of calories provided by the liquid formula is lowered to 25%, and more foods are permitted, with higher inclusion of fruits and vegetables and small quantities of bread, red meat and legumes (found to potentially aggravate symptoms) [78]. The third and last phase still consists of 25% polymeric formula and does not have a specific duration, allowing the patient to continue with a controlled diet rich in healthy foods, with the final aim of better controlling the disease in the long term [73].
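The phase-wise energy split described above (50% of energy from formula in phase 1, 25% in phases 2 and 3) can be sketched as a small calculation; the helper function and the example requirement are illustrative only:

```python
# Fraction of daily energy supplied by the polymeric formula in each CDED phase,
# taken from the protocol described in the text.
CDED_FORMULA_FRACTION = {1: 0.50, 2: 0.25, 3: 0.25}

def cded_energy_split(total_kcal: float, phase: int) -> tuple[float, float]:
    """Return (formula_kcal, whole_food_kcal) for a given CDED phase."""
    frac = CDED_FORMULA_FRACTION[phase]
    return total_kcal * frac, total_kcal * (1 - frac)

print(cded_energy_split(2000, 1))  # (1000.0, 1000.0) - phase 1: half from formula
print(cded_energy_split(2000, 2))  # (500.0, 1500.0) - phases 2-3: a quarter from formula
```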

#### *4.4. Crohn's Disease Treatment With Eating Diet*

A newly devised nutritional intervention is the Crohn's Disease Treatment With Eating Diet (CD-TREAT), an individualized diet that aims to replicate the composition, and hence the effect on the microbiome, of EEN using ordinary foods [71,79]. As previously noted, EEN is likely to work by excluding dietary components thought to be detrimental to gut function and microbiome composition. Hence, similarly to EEN, in CD-TREAT some specific components are excluded (such as lactose, gluten, processed meat or some additives) while others are allowed (lean meats, fish, eggs, and some fruits and vegetables rich in macronutrients, vitamins and fibers) (Table 4).

The strength of CD-TREAT is its higher variability and palatability in comparison to EEN. It has shown good remission rates after 8 weeks both in adults and in children: 80% (4/5) of adults showed a clinical response (wPCDAI score change > 17.5), while 60% (3/5) of pediatric patients were in remission (wPCDAI score < 12.5) [79]. Despite the small group of patients tested, which makes it necessary to verify these results in a larger sample of individuals, such a diet seems an encouraging step forward in the future approach to CD treatment, especially as a substitute for the long-term use of EEN, whose acceptability may be limited, and as a preferable solid-food-based alternative [72].


**Table 4.** Exclusion diets with related included and excluded food groups - adapted from [71].

#### *4.5. Further Nutritional Strategies*

Further exclusion diets have been proposed, including the Specific Carbohydrate Diet (SCD), the diet low in Fermentable Oligo-, Di- and Monosaccharides and Polyols (low-FODMAP diet), the Paleolithic Diet, and the Vegan Diet. Although some of them have shown interesting results, their efficacy has frequently been evaluated in studies with several limitations. Many of these studies focused mostly on the assessment of symptom improvement without standardized clinical outcomes, with poor attention to inflammation and the achievement of mucosal healing [80]. Moreover, most data have been extrapolated from studies in adults, with sometimes conflicting results between those showing a real improvement of symptoms [81,82] and those showing no significant change or even a worsening of symptoms [83,84]. Furthermore, some of these diets expose patients to a high risk of malnutrition due to the restrictiveness of the foods allowed, so their use is not routinely recommended in children/adolescents, unless the potential benefits outweigh the potential risks of the diet [20,36].

#### **5. Nutritional Strategies in Maintenance of Remission in Pediatric IBD**

Beyond the achievement of clinical remission, one of the most important goals in the management of IBD is the maintenance of remission, thereby avoiding, or at least delaying, a future relapse. Maintenance therapy of CD has classically consisted of immunosuppressants, such as thiopurines, or, in recent years, biologics. Despite their well-known efficacy, both of these treatments are burdened by some negative aspects, including the risk of side effects for thiopurines (e.g., infections, pancreatitis), and loss of response over time and high cost for biologics [35]. For these reasons, trials investigating nutritional strategies for maintenance of remission have been carried out in recent years. To date, EEN is not recommended for maintenance of remission in pediatric patients, since its efficacy has not been thoroughly evaluated in children and long-term adherence may be challenging after a first induction cycle [36,55,80]. Although other nutritional interventions are usually intended for short-term use, it is not infrequent for patients to continue using them for longer periods of time. This behavior derives, on the one hand, from the current absence of a recommended long-term diet for IBD patients and, on the other hand, from the fear

of relapses after food reintroduction. Indeed, many patients with an established diagnosis of IBD tend to avoid a variety of foods (most commonly grains, dairy products, vegetables and fruits) believed to provoke IBD symptoms and flares, or in order to improve disease control [85,86]. Patients and their families frequently request dietary advice from physicians and are extremely interested in the proper dietary regimen to follow.

According to the recent ESPEN guidelines [20], no specific diet needs to be followed during remission phases of IBD, since none of the proposed alternative diets seems effective in maintaining remission. It is important to mention that most exclusion diets (e.g., low-FODMAP, gluten-free, lactose-free) may improve symptoms in IBD patients but have not been demonstrated to affect inflammatory activity in the long term [9,36,71]. Diets should therefore be tailored to the individual patient according to personal preference or specific intolerances (e.g., reducing high-lactose products and/or using lactase-treated products if lactase deficiency is suspected), keeping in mind the risk of malnourishment or nutritional deficiencies [20].

A recent pediatric study demonstrated that a subgroup of patients who achieved remission using EEN, and who were not on any other medications, can successfully continue with PEN supplements for maintenance of remission, with lower flare rates at 1 year of follow-up [87]. By contrast, no significant differences in relapse rates have been observed between patients using PEN plus azathioprine and those using azathioprine alone. For this reason, PEN associated with exclusion diets has been proposed for patients with mild disease and a low risk of flares, although the optimal amount, duration and timing of PEN are still unknown [36,88].

#### **6. Malnutrition**

Malnutrition is a frequent occurrence in IBD patients, especially in CD, given the sites typically affected in comparison to UC (small bowel vs. colon-rectum). The pathophysiology of malnutrition is heterogeneous, involving multiple aspects, including: i) anorexia, food avoidance and self-imposed hypocaloric diets; ii) increased energy and nutrient losses, as a result of malabsorption and gut losses [89,90].

Data on the prevalence of malnutrition in IBD children are mostly available for newly diagnosed patients, affecting approximately 60% of newly diagnosed CD children and 35% of those with UC [90]. Protein-energy malnutrition is therefore a common finding in CD at the time of diagnosis, with low weight-for-age and body mass index (BMI). Nutritional status then often fluctuates over time according to disease control and flares. Notably, different studies have highlighted a progressive decrease in the degree of malnutrition among newly diagnosed IBD patients in recent years, especially in UC patients, in whom even overweight has been observed [90].

Nutritional assessment is a central component of the management of IBD patients, since a poor nutritional status is strictly related to growth failure, poor bone accrual, anemia, disrupted pubertal development over time and short stature in adulthood, as well as increased complication rates and poor prognosis [36,91,92]. The recommended approach to IBD patients should comprise: i) a periodic dietary history through a 3- to 5-day record of qualitative and quantitative intake of foods (macro- and micronutrients); ii) evaluation of nutritional status with assessment of weight-for-age, height-for-age, and BMI z-score at every visit, and assessment of height velocity every 6 months [36,91].

#### **7. Micronutrient Deficiency**

Micronutrients and vitamins are frequently lacking in IBD, due to malabsorption linked to inflammation and/or sub-optimal nutritional intake. Micronutrient deficiencies can be present at diagnosis as well as develop during the clinical course of the disease, so IBD patients should be checked for micronutrient levels on a regular basis, and specific deficits should be appropriately corrected [92–94]. The most frequent vitamin deficiencies observed in IBD patients are those of the water-soluble vitamins B9 and B12, physiologically absorbed in the duodenum, jejunum and ileum, which are frequent sites of

inflammation [90,95]. Among minerals, iron and zinc are the most frequently lacking, with iron-deficiency anemia (IDA) being the predominant type [90,96,97].

The need for single-nutrient intervention varies according to the element/mineral considered. Zinc, selenium and magnesium deficiency should not be routinely assessed or supplemented [36]. By contrast, IDA should be periodically ruled out, especially during periods of active inflammation. Treatment varies according to the disease phase: oral iron supplementation can be the treatment of choice in case of controlled disease and a hemoglobin level > 10 g/dL, always keeping in mind the high risk of intolerance of the oral route. In case of active disease or moderate-severe anemia (hemoglobin < 10 g/dL), intravenous ferric carboxymaltose replacement should be preferred [36]. Both children and adolescents are at risk of vitamin D deficiency during the clinical course of disease and should always be investigated for it. Vitamin D deficiency is a risk factor for low bone mineral density, whose additional risk factors are cumulative corticosteroid dose, height-for-age Z-score, and BMI Z-score [20,96]. For this reason, according to current guidelines, in all steroid-treated children with CD, serum calcium and 25(OH) vitamin D should be monitored and supplemented, if required, to help prevent low bone mineral density. A vitamin D level below 50 nmol/L (20 ng/mL) suggests vitamin D deficiency, requiring proper supplementation. If encountered, osteopenia and osteoporosis should be managed according to current osteoporosis guidelines [20]. Given the risk of folate (B9) deficiency, especially in those using sulfasalazine and methotrexate, its measurement is generally recommended annually, with folate supplementation (1 mg/day or 5 mg/week) in case of documented deficiency [36].
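The numeric thresholds above can be summarized in a small sketch (illustrative only, not clinical advice; the function names are hypothetical): oral iron when disease is controlled and hemoglobin exceeds 10 g/dL, intravenous replacement otherwise, and vitamin D deficiency below 50 nmol/L, i.e., 20 ng/mL (1 ng/mL ≈ 2.5 nmol/L):

```python
def iron_route(hb_g_dl: float, disease_controlled: bool) -> str:
    """Route of iron replacement per the thresholds stated in the text."""
    if disease_controlled and hb_g_dl > 10:
        return "oral"          # controlled disease, Hb > 10 g/dL
    return "intravenous"       # active disease or moderate-severe anemia

def vitamin_d_deficient(level: float, unit: str = "nmol/L") -> bool:
    """Deficiency cut-off: 50 nmol/L (20 ng/mL; 1 ng/mL ~ 2.5 nmol/L)."""
    nmol_l = level * 2.5 if unit == "ng/mL" else level
    return nmol_l < 50

print(iron_route(11.2, True))            # oral
print(iron_route(9.0, False))            # intravenous
print(vitamin_d_deficient(18, "ng/mL"))  # True (18 ng/mL = 45 nmol/L)
```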

#### **8. Conclusions**

Dietary factors and malnutrition play a primary role in the onset and management of pediatric IBD. An appropriate diet can decrease the risk of IBD flares, and an age-adequate nutritional status can decrease the risk of complications and surgery in the long-term stage of the disease. Enteral nutrition has a significant impact on mucosal inflammation in CD, and the clinical response to an oral polymeric diet is associated with down-regulation of mucosal pro-inflammatory cytokines. Exclusive enteral nutrition and the CDED have been shown to be as effective as, or superior to, steroids in inducing clinical remission in pediatric patients.

**Author Contributions:** U.C., V.D. and C.R. equally contributed to the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Diet in Intestinal Fibrosis: A Double-Edged Sword**

**Rachel Marion-Letellier 1,2,\*, Mathilde Leboutte 1,2, Asma Amamou 3, Maitreyi Raman 4,5, Guillaume Savoye 1,2,6 and Subrata Ghosh <sup>3</sup>**


**Abstract:** The natural history of inflammatory bowel diseases, especially Crohn's disease, is frequently complicated by intestinal fibrosis. Because of the lack of effective treatments for intestinal fibrosis, there is an urgent need to develop new therapies. Factors promoting intestinal fibrosis are currently unclear, but diet is a potential culprit. Diet may influence predisposition to develop intestinal fibrosis or alter its natural history by modifying both the host immune response and intestinal microbial composition. Few studies have documented the effects of dietary factors in modulating IBD-induced intestinal fibrosis. As the mechanisms behind fibrogenesis in the gut are believed to be broadly similar to those in extra-intestinal organs, it may be relevant to investigate which dietary components can inhibit or promote fibrosis-related factors, such as myofibroblast progenitor activation, in other fibrotic diseases.

**Keywords:** fibrosis; Crohn's disease; diet; gut microenvironment

#### **1. Intestinal Fibrosis**

Inflammatory bowel diseases (IBD) are relapsing systemic inflammatory diseases, mainly affecting the gastrointestinal tract. IBD occurs in people with susceptibility genes triggered by environmental factors. It leads to an exacerbated immune response associated with gut dysbiosis. The natural history of IBD is frequently complicated by intestinal fibrosis and stricture formation. More than half of patients with Crohn's disease (CD) develop intestinal fibrosis, especially when the ileum is involved (Montreal classification L1) [1,2]. We and others have previously shown that stricturing CD may not respond well to anti-inflammatory therapy such as anti-TNFα, the gold standard in IBD treatment [3]. Eighty percent of CD patients with intestinal fibrosis and strictures undergo resection, but fibrosis frequently recurs, leading to repeated surgeries. Chronic inflammation induces remodeling of the intestinal wall through a cascade of events from intestinal epithelial damage to angiogenesis and immune and mesenchymal cell activation [4]. There is currently no specific therapy to prevent or inhibit intestinal fibrosis, which therefore constitutes an unmet need in IBD.

#### *Western Diet*

Environmental factors, especially diet, may influence predisposition to develop IBD or alter its course. Diet can target both the host immune response [5,6] and intestinal microbial composition. In addition, diet is a recurrent concern for IBD patients, and most IBD patients believe that diet may be a trigger of disease activity [5,7,8]. The IBD incidence is higher in Western countries [5] and continues to increase in the newly industrialized countries adopting the Western diet [7]. Diet plays a key role in controlling gut immune homeostasis. Factors promoting intestinal fibrosis are currently unknown, but diet is a potential culprit. Indeed, the Western diet promotes fibrosis in studies from other organs [8,9]. The Western diet is characterized by an insufficient intake of healthy foodstuffs and an excessive amount of saturated fats, sugar, and salt. Even IBD patients in remission have considerably distorted and unhealthy dietary intake, leading to an increased risk of nutritional deficiencies [10]. Very recently, a prospective cohort study demonstrated an association between ultra-processed food consumption and IBD risk [11].

**Citation:** Marion-Letellier, R.; Leboutte, M.; Amamou, A.; Raman, M.; Savoye, G.; Ghosh, S. Diet in Intestinal Fibrosis: A Double-Edged Sword. *Nutrients* **2021**, *13*, 3148. https://doi.org/10.3390/nu13093148

Academic Editor: Ina Bergheim. Received: 23 July 2021; Accepted: 7 September 2021; Published: 9 September 2021.

#### **2. Obesity**

The incidence of obesity is increasing worldwide and we previously demonstrated increasing body weight over time from 1991 to 2008 in CD as evidenced by baseline data from 40 randomized clinical trials [12]. In addition, CD patients exhibited a higher clinical disease activity and duration over the same time period [12]. Adiposity may thus play a potential role in initiating and perpetuating intestinal inflammation. In addition, obesity is associated with chronic gut inflammation and visceral fat accumulation as observed in CD. Recently, visceral obesity was associated with adverse outcomes in severe CD patients [13] while IBD patients with weight loss after bariatric surgery had fewer complications [14].

High-fat diets (HFD) have a significant impact on gut physiology and mucosal defenses (Figure 1). Thirty days of HFD is sufficient to alter the spatial distribution and composition of the microbiota [15]. Innate immunity is also altered, with: (i) decreased antimicrobial peptide production, (ii) a reduction in Paneth cells and (iii) a decrease in goblet cell number and mucus secretion [15,16]. In addition, HFD induces greater intestinal permeability [15]. A few studies have also investigated the effects of HFD on experimental colitis. Mice receiving HFD were more susceptible to chemically-induced colitis and exhibited more severe colonic inflammation [16–18]. Several mechanisms have been suggested: (i) gut barrier dysfunction, (ii) intestinal hyperpermeability, (iii) pathobiont expansion, and (iv) decreased plasma myokine irisin or adipokine levels [16–18]. Nevertheless, the effects of HFD have only been investigated in acute chemically-induced colitis, and their effects on chronic colitis and intestinal fibrosis are still unknown. In fibrosis of extra-intestinal organs, HFD has a deleterious effect. Wnt/β-catenin signaling is activated in intestinal fibrosis and can be induced through diet-induced obesity. For example, consumption of HFD induced an upregulation of β-catenin and activated epithelial-mesenchymal transition (EMT) in a murine model of colon cancer [19]. This work is in accordance with findings from numerous extra-intestinal models of fibrosis, such as hepatic [20] or renal fibrosis [21], where HFD is associated with higher EMT through TGF-β and β-catenin signaling.

Environmental factors promoting intestinal fibrosis are currently unknown. High-fat diets (HFD) may contribute to promoting fibrosis, as observed in extra-intestinal organs.

HFD have a significant impact on gut barrier function. The HFD-altered innate immune response results in: (i) lower production of antimicrobial peptides, (ii) a reduced number of Paneth cells, and (iii) a decrease in goblet cell number and mucus secretion. In addition, HFD induces higher intestinal permeability, and HFD-fed mice are more susceptible to colitis. As observed in fibrosis of extra-intestinal organs, HFD may also induce factors that promote intestinal fibrosis, such as Wnt/β-catenin signaling or epithelial-mesenchymal transition (EMT). A second putative mechanism of HFD in intestinal fibrosis is the involvement of epithelial endoplasmic reticulum stress. Diet is the main modulator of gut microbiota composition. HFD is associated with dysbiosis and may thus impact microbial components. It has been demonstrated that certain bacteria are able to activate ECM production in intestinal fibroblasts. In addition, bacterial ligands are able to promote angiogenesis, which may contribute to fibrosis development. The effects of HFD may be mediated through increased adiposity. Adiposity may thus play a potential role in initiating and perpetuating intestinal inflammation. Indeed, visceral obesity is associated with higher complication rates in patients with IBD. Adipocytes are able to secrete adipokines such as leptin or adiponectin. While

leptin may exacerbate intestinal fibrosis, adiponectin may exert anti-fibrosis properties through a reduced extracellular matrix (ECM) deposition.

**Figure 1.** How high-fat diets may contribute to intestinal fibrosis.

Involvement of epithelial endoplasmic reticulum stress has been recently suggested in CD fibrosis [22] and more precisely HFD is able to exacerbate endoplasmic reticulum stress in a model of pulmonary fibrosis [23].

Adipocytes secrete adipokines such as leptin or adiponectin into IBD mesenteric adipose tissue and serum. While leptin promotes a Th1 profile, adiponectin antagonizes TNFα and decreases adhesion molecule expression [24]. Very recently, Xie et al. investigated the effects of intraperitoneal injection of adiponectin in mice with chronic TNBS-induced colitis, and observed that adiponectin treatment reduced inflammatory markers such as colonic myeloperoxidase activity and pro-inflammatory cytokines [25]. Adiponectin treatment also reduced extracellular matrix (ECM) deposition. The authors investigated the effect of adiponectin incubation in TGF-β1-treated primary human intestinal fibroblasts and found that adiponectin reduced collagen levels and the phosphorylation of Smad2 [25]. Diet-induced obesity may thus promote intestinal fibrosis via leptin. Other mechanisms may also be involved through microbial components. It has recently been demonstrated that bacterial ligands are able to promote angiogenesis through interaction with CEACAM1 in human intestinal microvascular endothelial cells (HIMEC) [26].

It has been demonstrated that creeping fat is associated with strictures [27]. More recently, Rieder's team deciphered the mechanisms underlying the cross-talk between the adipocyte environment of creeping fat and human intestinal muscle cells [1]. Fatty acids derived from creeping fat are able to induce human intestinal muscle cell hyperplasia. In addition, co-culture of HIMEC with whole creeping fat tissue induced their proliferation. They also demonstrated that adipokines from the creeping fat of CD patients were able to induce an M2 macrophage subtype and TGF-β, a core cytokine in intestinal fibrosis [28]. Very recently, Devkota's team also demonstrated that there is specific translocation of a subset of viable bacteria, such as *C. innocuum*, from the gut microbiota to creeping fat [29]. This limits the systemic dissemination of gut bacteria but also leads to fibrosis development [29]. These

mechanisms observed in creeping fat may also occur in adipocytes from the visceral adiposity of obese IBD patients, and further studies are required to decipher the mechanisms underlying the effects of obesity on complications in IBD patients.

#### **3. High Salt**

The Western diet is characterized by a high sodium intake of >100 mmol/day [30], which is in excess of physiological need (i.e., 10–20 mmol/day). In Western countries, processed foods are the main source of sodium intake (approximately 75% of intake) [30]. A few recent studies have shown the potential of dietary salt to promote intestinal inflammation in colitis models [31–33]. This raises the possibility that dietary salt creates an environment more vulnerable to inflammatory insults. We have recently demonstrated that a high-salt diet (4%) exacerbates intestinal fibrosis in a rat model of chronic TNBS-induced colitis and fibrosis [34]. We also demonstrated that high-salt-fed colitic rats had greater undernutrition than standard-diet-fed colitic rats [34]. We investigated the effect of high salt on TGF-β-induced human colon fibroblasts and reported that NaCl promoted ECM-associated proteins in fibroblasts. Taken together, our study suggested that dietary salt can activate intestinal fibroblasts, thereby contributing to exacerbation of intestinal fibrosis. Further clinical studies are required to investigate whether dietary salt may be considered a risk factor for intestinal fibrosis.

#### **4. High Sugar**

Few studies have investigated the effect of sugar-rich diets in colitis models [35,36]. Laffin et al. fed mice a high-sugar diet (50% sucrose) for 2 days before chemical induction of colitis. These mice showed a higher susceptibility to acute colitis, increased intestinal permeability, decreased microbial diversity, and reduced production of short-chain fatty acids. In addition, macrophages from high-sugar-fed mice were more responsive to lipopolysaccharide. Interestingly, the authors were able to reduce high-sugar-mediated pro-inflammatory effects, such as histological score and epithelial damage, by supplementing the drinking water with the short-chain fatty acid acetate [35].

Khan et al. used a different approach, studying the effects of simple sugars in mice. Pre-treatment with simple sugars such as glucose, fructose, or sucrose at 10% in drinking water for 7 days increased histological scores and worsened chemically induced colitis. The authors also fed IL-10−/<sup>−</sup> mice glucose, which was associated with higher levels of colonic inflammatory mediators such as lipocalin-2 and pro-inflammatory cytokines [36]. They also observed gut dysbiosis in high-sugar-fed mice, in particular a higher abundance of the mucus-degrading bacterium *Akkermansia muciniphila* [36].

The effects of a high-sugar diet have not yet been documented in preclinical models of intestinal fibrosis. Of note, glucose can induce EMT in several extra-intestinal organs [37,38], and this mechanism may also be relevant in IBD-associated intestinal fibrosis.

#### **5. Beneficial Effects of Dietary Components**

In contrast to the potentially deleterious effects of Westernized dietary patterns, certain dietary components can prevent the development of intestinal fibrosis. These nutrients can target several mechanisms involved in intestinal fibrosis: they can inhibit or suppress inflammatory processes, engage specific receptors with anti-fibrotic properties such as PPARγ or AhR, and act at the cellular level by down-regulating EMT processes. Diet is also the main modulator of the gut microbiota, which may in turn prevent or inhibit fibrogenesis.

#### *5.1. Dietary Modulation of Receptors with Anti-Fibrosis Properties*

Some nutrients are able to target specific receptors with anti-fibrotic properties, such as PPARγ, AhR, or VDR. PPARγ is a nuclear receptor highly expressed in the colon, and its anti-fibrotic properties have been investigated using natural and synthetic ligands in IBD models.

AhR belongs to the basic helix–loop–helix superfamily of transcription factors, and nutrients such as curcumin or tryptophan metabolites can act as AhR ligands. The AhR is widely expressed in the gut, and its activation has been associated with intestinal homeostasis. The role of the vitamin D receptor (VDR) in regulating intestinal inflammation is well documented in preclinical models of IBD. More recently, its involvement in intestinal fibrosis has been reported: loss of VDR promotes the development of intestinal fibrosis in mice in response to DSS-induced colitis.

#### 5.1.1. Peroxisome Proliferator-Activated Receptor γ (PPARγ)

PPARγ is a nuclear receptor highly expressed in the colon that regulates intestinal inflammation [39]. Its anti-fibrotic properties have been investigated in IBD models. Speca et al. used GED-0507-34, a novel PPARγ agonist, and showed that preventive treatment reduced chronic colitis-induced intestinal fibrosis in mice and ECM-associated factors in TGF-β-stimulated intestinal fibroblasts and epithelial cells [40].

Many nutrients can target PPARγ [39,41] (Figure 2). Treatment with the natural PPARγ agonist curcumin at 2.5 to 10 μM reduced ECM-associated factors in TGF-β-induced intestinal fibroblasts, as did the synthetic PPARγ agonist rosiglitazone [42]. As myofibroblasts can derive from various cell types in intestinal fibrosis, the authors validated their findings in epithelial cells and found that curcumin treatment also down-regulated TGF-β-associated signaling in intestinal epithelial cells [42]. This effect was reversed by GW9662, a synthetic PPARγ antagonist, demonstrating the involvement of PPARγ in the anti-fibrotic effects of curcumin [42]. The authors then confirmed their findings in vivo, showing that curcumin at 200 mg/kg reduced chronic colitis-induced intestinal fibrosis and ECM-associated proteins such as fibronectin and CTGF [42]. This is in accordance with a preclinical study using a focal irradiation-induced fibrosis model [43], in which 100 mg/kg of curcumin by gavage reduced apoptosis in the injured area as well as intestinal and plasma IL-6 production [43]. As curcumin use is already validated in UC patients, the authors of this preclinical study hypothesized that curcumin may be relevant as a radioprotector. Novel therapeutic formulations of curcumin have been developed, such as polycurcumin [44] and nanoparticle curcumin [45]; both have been tested in preclinical IBD models and both reduced chemically induced colitis.

**Figure 2.** Dietary modulation of receptors involved in the regulation of intestinal fibrosis.

#### 5.1.2. Aryl Hydrocarbon Receptor (AhR)

AhR is a member of the basic helix–loop–helix superfamily of transcription factors and was first associated with cellular responses to xenobiotics [46,47]. More recently, nutrients such as curcumin and tryptophan metabolites have been shown to act as AhR ligands [46,47]. Upon ligand binding, a conformational change leads to translocation of AhR into the nucleus, where it heterodimerizes with ARNT to induce target gene expression.

The AhR is widely expressed in the gut, and its activation has been associated with the regulation of intestinal homeostasis [48]. Lamas et al. showed that treatment with 6-formylindolo[3,2-b]carbazole (FICZ), an AhR agonist, reduced intestinal inflammation in Card9−/<sup>−</sup> mice [49]. IBD patients exhibit a reduced production of AhR ligands by the gut microbiota [49]. As dietary components can activate AhR to modulate inflammatory responses, Monteleone et al. investigated whether FICZ exerts anti-fibrotic properties in the gut [50]. In extra-intestinal fibrotic diseases, dietary AhR ligands such as 2-(1H-indole-3-carbonyl)-thiazole-4-carboxylic acid methyl ester (ITE), L-kynurenine [51], and curcumin [42,52] are able to down-regulate ECM-associated proteins in fibroblasts.

The effects of AhR ligands on IBD-associated intestinal fibrosis are less documented. Treatment with FICZ at 100 to 400 nM decreased ECM-associated genes in stimulated fibroblasts from CD patients [50]. Similarly, 5 ng/mL of TGF-β up-regulated ECM-associated genes such as ACTA2 and COL1A1 in skin fibroblasts, whereas FICZ at 100 nM decreased them [53].

In primary cultures of human orbital fibroblasts, 1 ng/mL of TGF-β up-regulated ECM-associated proteins such as fibronectin, collagen I, and α-SMA, whereas ITE at 1 μM reduced them [54]. These data are consistent with a study in liver fibrosis, in which ITE treatment at 1 μM for 6 days inhibited ECM-associated proteins such as α-SMA in hepatic stellate cells [55].

#### 5.1.3. Vitamin D Receptor

Epidemiological studies have suggested that low serum vitamin D is associated with an increased risk of IBD [56,57]. Similarly, vitamin D and its receptor VDR mediate anti-inflammatory effects in experimental IBD models [58,59]. The role of vitamin D in intestinal fibrosis has also been investigated. Johnson et al. demonstrated that CARD-024, a vitamin D analogue, reduced ECM-associated markers in TGF-β-stimulated or stiffness-activated colonic fibroblasts [60]. In addition, down-regulation of colonic VDR is observed in patients with chronic CD and in mice with chronic DSS-induced colitis and fibrosis [61]. VDR is also reduced in fibroblasts from CD patients [62]. Mitochondrial dysfunction has been described in patients with IBD, and genetic deletion of prohibitin 1, a key inner mitochondrial membrane protein that is decreased in IBD, can provoke ileitis in mice [63]. VDR is also involved in mitochondrial dysfunction [61], and its specific role in intestinal fibrosis has recently been demonstrated [61]. The authors of this elegant study first showed that VDR expression was lower in intestinal stenotic areas of CD patients [61]. They then induced colitis-associated fibrosis by TNBS or DSS in mice with intestine-specific VDR deletion and found that VDR deletion exacerbated intestinal fibrosis in both models. To decipher the mechanism underlying these anti-fibrotic effects, they knocked down VDR in colonic fibroblasts, which led to their activation [61]. VDR knockdown also induced mitochondrial dysfunction that compromised epithelial integrity.

#### *5.2. Dietary Modulation of Anti-Fibrosis Signaling*

Nuclear factor E2-related factor 2 (Nrf2) is a transcription factor involved in the antioxidant response through the regulation of gene expression. Nrf2 signaling can regulate intestinal inflammation in the gut and has recently been proposed as a putative target in intestinal fibrosis. Nrf2 signaling can be activated by synthetic agonists and by nutrients [64]. Sesamin, derived from sesame seeds, can counterbalance oxidative stress in an intestinal epithelial cell line exposed to H2O2, and Nrf2 knockdown abolished this effect [65]. The authors of this study also evaluated sesamin at 100 mg/kg in a chemically induced colitis model and observed that it was more effective than 5-ASA at 50 mg/kg [65]. Other dietary compounds, including numerous polyphenols, have been identified as activators of Nrf2 signaling. Interestingly, biotransformation of plants by various *Lactobacillus* species yields compounds that are dietary ligands of Nrf2, and the Western diet is characterized by a low consumption of fermented foods compared with traditional dietary patterns [66].

Some fatty acid derivatives, such as anandamide and 2-arachidonoylglycerol, are partial agonists of the cannabinoid receptors (CB1, CB2) and are defined as endocannabinoids. Given its dual role in intestinal inflammation and metabolic disorders, the endocannabinoid system may be a relevant target in the context of intestinal fibrosis [67]. Indeed, treatment with the cannabinoid analogue palmitoylethanolamide (PEA) for 5 weeks counterbalanced ovariectomy-induced mild obesity, reducing food intake, body weight, and fat mass [68]. Interestingly, PEA treatment also reduced inflammation in colonic biopsies from UC patients and in mice with DSS-induced colitis, and these anti-inflammatory effects were mediated through PPARα [69]. This effect has not yet been evaluated in IBD-associated fibrosis, but effects in extra-intestinal fibrosis have been demonstrated [70,71]. Targeting the endocannabinoid system may be particularly useful in obese IBD patients.

#### *5.3. Inhibition of Pro-Fibrotic Molecules by Amino Acids*

Glutamine is a conditionally essential amino acid [72] and the preferred fuel of intestinal cells, promoting enterocyte proliferation. Glutamine also regulates tight junctions and reduces pro-inflammatory signaling [73]. Its effect in preclinical models of intestinal fibrosis has been evaluated. Glutamine enemas at 25 mg/kg reduced colonic fibrosis, the number of α-SMA-stained cells in the submucosa, and ECM-associated proteins in rats with TNBS-induced colitis [74]. These results are in accordance with a study in a radiation-induced model, in which glutamine administration at 1 g/kg/day prevented radiation-induced enteropathy in rats [75]. Similarly, glutamine enemas given from 4 to 12 weeks after surgery reduced colonoscopic and histological scores and the number of collagen fibers in tissue in an experimental model of diversion colitis [76]. Nevertheless, a recent meta-analysis of seven published studies of glutamine use in IBD found that supplementation had no effect on disease course or inflammatory markers in patients with IBD [77]; its effect on fibrosis prevention and/or inhibition has never been evaluated in IBD patients.

Arginine is also a conditionally essential amino acid, and we previously demonstrated that arginine treatment down-regulated IL-8 production in cultured intestinal biopsies from CD patients [78]. Nitric oxide (NO) is a product of the enzymatic conversion of arginine to citrulline. The role of arginine in intestinal fibrosis is not yet documented, but it may be protective through the NO pathway. Genetic deletion of iNOS accelerated the development of high-fat-diet-induced liver fibrosis and inflammation in mice [79], an effect mediated through NO-dependent NF-κB activation. Interestingly, we have shown in an intestinal epithelial cell line that arginine treatment down-regulated cytokine-induced inflammation through the NO pathway [80], and Horowitz et al. demonstrated that L-arginine treatment up-regulated NO production in HIMEC [81]. Targeting the iNOS/NO pathway may therefore be relevant in IBD-associated intestinal fibrosis.

#### *5.4. Anti-Fibrosis Properties of n-3 PUFA*

We have previously shown anti-inflammatory effects of n-3 PUFA in experimental models of IBD [82], but their effects on intestinal fibrosis are not yet documented. In extra-intestinal organs, n-3 PUFA such as EPA reduced ECM-associated markers and SMAD signaling in TGF-β-induced hepatic stellate cells [83]. These n-3 PUFA effects were reduced by PPARγ knockdown, whereas GW9662, a PPARγ antagonist, did not alter them [83]. In LPS-stimulated dermal fibroblasts, the effects of EPA and DHA on fibrosis markers were evaluated: DHA reduced mRNA levels of α-SMA and collagen III, whereas EPA did not. Interestingly, the DHA effect was reinforced when combined with the short-chain fatty acid butyrate [84]. This is in accordance with a study by Zeng et al. showing that DHA inhibits TGF-β-induced rat renal fibroblast activation in a dose- and time-dependent manner [85]. The DHA derivative resolvin D1 was also evaluated in an extra-intestinal fibrosis model [86]: resolvin D1 treatment reduced mechanical stretch-induced EMT and SMAD signaling in a murine model of lung fibrosis [86].

#### *5.5. Dietary Manipulation of the Gut Microbiota*

While IBD is strongly associated with shifts in the gut microbiome, the role of microbial factors in intestinal fibrosis is largely unexplored. In vitro, bacterial ligands can induce proliferation and migration of intestinal endothelial cells [26]. In chronic DSS colitis, epithelial damage contributes to bacterial translocation, and a recent study highlighted the role of flagellin in inducing ECM components in intestinal fibroblasts [87]. In vivo, intestinal fibrosis can be abrogated in germ-free mice, and fibrosis severity is associated with specific microbes in mice overexpressing TL1A, a member of the TNF superfamily [88]. These specific bacterial strains are able to promote fibrosis in vitro [88]. Imai et al. infected mouse models with the CD-associated pathobiont adherent-invasive *Escherichia coli* (AIEC). While healthy mice gradually eradicated the infection from the intestine, in Salmonella- or DSS-induced colitis models AIEC exploited inflammation to persist, leading to intestinal fibrosis through IL-33 receptor signaling [89]. As probiotic strains such as *Saccharomyces cerevisiae* CNCM I-3856 [90], *Lactobacillus*, or *Bifidobacterium* [91] can counterbalance AIEC-promoted inflammation, this may open novel therapeutic avenues for the treatment of intestinal fibrosis.

Diet is a strong modulator of the gut microbiota, both by affecting its composition and by serving as a substrate for microbial metabolite production. For example, curcumin treatment is also associated with gut microbiota changes: treatment with nanoparticle curcumin reduced colitis development and increased the abundance of butyrate-producing bacteria and fecal butyrate production [45].

The short-chain fatty acid butyrate has been evaluated for its effects on in vitro angiogenesis in primary cultures of HIMEC [92]. The authors of this study found that butyrate treatment reduced VEGF-induced cellular proliferation, transmigration, and tube formation of HIMEC through down-regulation of COX-2.

#### *5.6. Reduction of Myofibroblast Activation*

Recently, berberine, an alkaloid extracted from medicinal plants, was shown to inhibit EMT [93]. This study used conditioned medium from human intestinal fibroblasts to induce morphological changes and ECM-associated markers in a colonic epithelial cell line; these effects were reversed by berberine treatment at 100 μg/mL for 24 h [93]. The authors demonstrated that berberine reduced EMT by acting on TGF-β/Smad signaling [93]. Myofibroblasts can derive from various origins in intestinal fibrosis; for example, endothelial-to-mesenchymal transition (EndoMT) has been demonstrated in intestinal fibrosis [94]. Nutrients can modulate EndoMT, and numerous nutritional approaches have been evaluated. We previously evaluated DHA, a long-chain n-3 PUFA, in primary cultures of HIMEC and demonstrated that DHA pre-treatment reduced the pro-inflammatory phenotype of IL-1β-activated HIMEC [82], decreasing the adhesion molecule VCAM-1, TLR-4, and the production of the cytokines IL-6 and IL-8. Similarly, curcumin treatment at 10 μM reduced TNF- and LPS-induced or irradiation-induced VCAM-1 expression through NF-κB signaling in HIMEC [95,96].

#### *5.7. Mucosal Healing*

The potential effect of probiotics on wound healing has been evaluated. Conditioned medium from the strain *Bacillus polyfermenticus* had pro-angiogenic properties in HIMEC, increasing cell migration, permeability, and tube formation; this effect is mediated through IL-8 production and NF-κB activation [97]. These results were confirmed in vivo in an acute model of DSS colitis [97].

Very few studies have investigated complex nutrient mixtures, rather than single nutrients, in preclinical models of intestinal fibrosis. We evaluated the effects of a polymeric diet enriched in TGF-β2 in pre-pubertal rats with chronic TNBS-induced colitis and failed to reverse inflammation or intestinal fibrosis under the conditions tested [98]. Very recently, a study investigating the effects of fermented rice bran on post-colitis restoration demonstrated a reduction of ECM-associated markers and TGF-β/Smad signaling [99].

#### **6. Conclusions**

Diet may represent an underestimated risk factor for intestinal fibrosis. A better understanding of the crosstalk between nutrients and factors that promote intestinal fibrosis may provide a better rationale for dietary advice to limit complications in IBD patients. As stipulated in the latest ESPEN guidelines for clinical nutrition in IBD, all IBD patients should benefit from dietary counseling by a dietician, which will contribute to limiting nutrition-related disorders [100]. Restrictive diets in particular are very popular among IBD patients and are being evaluated in clinical trials; however, these diets may contribute to poor psychological well-being [101,102] and, unless closely supervised, lead to undernutrition [100]. Interestingly, two very recent studies highlighted the potential of the Mediterranean diet in IBD patients. While the beneficial effect of this diet on fibrosis has been demonstrated in NAFLD patients, with reduced cardiovascular and diabetes risk [103], in IBD patients a 6-month Mediterranean diet reduced malnutrition-related disorders and improved disease activity and inflammatory markers, with a concomitant increase in a quality-of-life score [104]. Further nutritional intervention trials with diets that target factors promoting intestinal fibrosis and/or address the Westernization of food in IBD-associated intestinal fibrosis are urgently required.

**Author Contributions:** All authors contributed equally. All authors have read and agreed to the published version of the manuscript.

**Funding:** Mathilde Leboutte was supported by a RIN grant from the Region Normandie and Asma Amamou by an international mobility grant from the Société Française de Nutrition Clinique et Métabolisme (SFNCM). The authors thank the "François Aupetit" Association (AFA Crohn-RCH) for its support.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

CD, Crohn's disease; ECM, extracellular matrix; EMT, epithelial–mesenchymal transition; IBD, inflammatory bowel disease; Nrf2, nuclear factor E2-related factor 2; PPARγ, peroxisome proliferator-activated receptor γ; UC, ulcerative colitis.

#### **References**


## *Review* **The Gluten-Free Diet for Celiac Disease and Beyond**

**Bara Aljada, Ahmed Zohni and Wael El-Matary \***

Section of Pediatric Gastroenterology, Department of Pediatrics and Child Health, Max Rady College of Medicine, University of Manitoba, Winnipeg, MB R3A 1S1, Canada; aljadab@myumanitoba.ca (B.A.); zohniah@gmail.com (A.Z.)

**\*** Correspondence: welmatary@hsc.mb.ca

**Abstract:** The gluten-free diet (GFD) has gained popularity beyond its main medical indication as the treatment for gluten-induced immune-mediated disorders such as celiac disease (CD), dermatitis herpetiformis, gluten ataxia, wheat allergy, and non-celiac gluten sensitivity. However, the diet carries some disadvantages such as elevated costs, nutritional deficiencies, and social and psychological barriers. The present work aims to review indications, proven benefits, and adverse events of a gluten-free diet. Close follow-up with patients following the diet is recommended. More data is needed to assess the effectiveness of the diet in managing mental and cognitive disorders and to establish a connection between the brain and gluten.

**Keywords:** celiac disease; gluten; gluten-free diet

#### **1. Introduction**

Wheat is responsible for 20% of global caloric consumption, making it among the most valuable crops worldwide. Due to its versatility, wheat can be incorporated into various foods such as bread, pasta, cereals, and baked goods, which has propelled this crop into a staple food across the temperate world [1]. Despite its traditional view as a nutritious source of proteins, vitamins, and minerals, concerns have been raised about a specific component of wheat called gluten. As an ingredient, gluten consumption dates back to 6th-century Chinese cuisine, where its popularity grew among Buddhists who used gluten as a substitute for meat. Jia Sixie's *Qimin Yaoshu*, a Chinese agricultural encyclopedia written in 544 CE, mentions the use of gluten in noodles called bótuō. References to gluten in Western literature appear much later. Bartolomeo Beccari authored *De Frumento*, an Italian treatise on wheat, in 1745, which documented the extraction of gluten from wheat flour. In 1803, John Imson defined gluten in the English language in *Elements of Science and Art* [2]. The industrial revolution played a prominent role in the rising popularity of wheat as a staple food in the Western diet. Over this time, wheat was inexpensively milled in large quantities and quickly distributed via the developing railroad systems [3,4]. The Western popularity of wheat also rose during the Great Depression and World War II, when wheat-containing products, such as bread and pasta, served as cheaper substitutes for rationed foods such as dairy and meat [5,6]. Today, global wheat consumption is increasing faster than that of all other cereals [7]. As a result, there is increasing attention to the health effects of gluten.

#### **2. Gluten and Celiac Disease**

Gluten is a mixture of water-insoluble prolamin proteins. The prolamins, a complex group of alcohol-soluble storage proteins, constitute the major seed proteins in cereals. They comprise about 80% of the starch endosperm storage proteins in mature cereal grains [8] and have yet to be found in other parts of the grain [9]. The most abundant gluten prolamins, gliadin and glutenin, are predominantly found in wheat. However, prolamins are found in different cereal species under specific names, such as in barley (called

**Citation:** Aljada, B.; Zohni, A.; El-Matary, W. The Gluten-Free Diet for Celiac Disease and Beyond. *Nutrients* **2021**, *13*, 3993. https:// doi.org/10.3390/nu13113993

Academic Editor: Ina Bergheim

Received: 14 October 2021 Accepted: 3 November 2021 Published: 9 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

hordeins), rye (secalins), oats (avenins), and other closely related grains, although each has different molecular properties [10]. Gliadins comprise four major alcohol-soluble monomers that collectively allow gluten to elongate while providing intermolecular binding sites. The α-helices and β-sheets of α/β- and γ-gliadins allow for hydrogen and disulfide bonding, whereas ω-gliadins are composed of β-turns and have no α-helices or β-sheets [11]. In contrast, glutenins are alcohol-insoluble polymers that contribute to the flexibility and stability of gluten. When flour and water are mixed, a thiol group from glutenin interacts with disulfide bonds in gliadin, resulting in a shift towards intermolecular disulfide bonds [12]. The high concentration of glutamine residues results in many inter-chain hydrogen bonds that collectively provide strength [11,12]. In addition, gluten's high proline content alters the protein structure to provide elasticity [11].

Gluten is infamous for its role in celiac disease (CD). This autoimmune condition affects 1% of the population and leads to a reversible inflammatory process in the small bowel mucosa, with acute repercussions such as diarrhea, constipation, bloating, nausea, and vomiting [13–15]. Long-term consequences of mucosal damage and inflammation include malabsorption of nutrients such as calcium, vitamin D [16], iron [17], vitamin B12, folic acid, and zinc [18], leading to debilitating consequences such as osteoporosis, anemia, and stunted growth [19]. The clinical presentation of CD can vary with age. The classic presentation in pediatric patients includes malnutrition, failure to thrive, abdominal pain, and distension. In contrast, adults commonly present with gastrointestinal symptoms of lesser severity [20], with most patients experiencing severe diarrhea [21].

Calcium and vitamin D absorption is of particular concern in the growth and development of pediatric patients with CD. Several factors influence bone mineral density, including inflammation from chronic disease, diet, absorption in the duodenum, and metabolism [22,23]. In patients with CD, mucosal damage of the small bowel impairs calcium and vitamin D absorption, leading to impaired bone health. Whereas vitamin D is involved in the hormonal regulation of bone remodeling and calcium absorption [24], calcium serves a structural role in bone as a component of hydroxyapatite [25]. Pediatric patients with CD are at risk of short stature and constitutional delay of puberty. One study [26] found CD in 2–8% of children with short stature and no gastrointestinal symptoms; after ruling out endocrine causes of short stature, the proportion with CD increased to 19–59%. On a growth chart, pediatric patients with CD typically demonstrate a decline in both weight and stature velocity, crossing several percentile lines in both categories [27]. In addition, Ludvigsson et al. [28] found that patients with CD are at increased risk of subsequent hip fracture and fracture of any kind, independent of age or sex. A lower bone mineral density is one explanation for the observed fracture risk, specifically in the femoral neck region, which Melton et al. [29] determined to be the strongest predictor of future hip fracture. Kemppainen et al. [30] supported this finding, determining that patients had significantly lower bone mineral density at the lumbar spine and femoral neck, with over 64% of male and 71% of female patients presenting with low calcifediol, a form of vitamin D produced in the liver.

The pathophysiology of CD involves a complex interplay between patients' genetics and environment [31,32] that leads to an inappropriate immune response. In turn, this maladaptive response can cause enterocyte destruction and subsequent villous atrophy [20]. Once gluten is consumed, its glutamine and proline components prevent complete hydrolysis of the immunoreactive epitope, producing peptides longer than ten amino acids [33]. Most notably, 13-, 19-, and 33-mer peptides are associated with the inflammatory reaction seen in CD [34,35]. In addition, the gliadin prolamin up-regulates production of the intestinal peptide zonulin, which increases the permeability of tight junctions in the intestine. Several studies have shown increased levels of zonulin in patients with CD, making it a leading culprit in the pathogenesis of the disease [36,37]. In turn, these changes allow increased paracellular and transcellular peptide transport into the lamina propria [38]. Once in the gut mucosa, tissue transglutaminase (tTG) recognizes the glutamine and proline components, resulting in a series of deamidation and transamidation

reactions that increase peptide affinity for antigen-presenting major histocompatibility complex class II (MHC II) molecules [20,39]. One study found human leukocyte antigens (HLA)-DQ2 and -DQ8 present in 98.4% of patients with CD and in 89.6% of their families, suggesting a genetic component to the disease [40]. Antigen-presenting cells, such as dendritic cells, present the peptides to gluten-specific T cells, triggering both the innate and adaptive immune responses. The innate response releases interleukin (IL)-15, leading to the destruction of gut epithelial cells by CD8+ (cytotoxic) T-lymphocytes [41]. The role of IL-17 in the pathogenesis of CD is still under investigation. Scaleia et al. [42] found lower levels of IL-17-producing T cells in the intra-epithelial lymphocyte (IEL) compartment of CD patients and speculate that these changes negatively affect the homeostasis of the mucosal barrier while contributing to the altered permeability of the gut mucosa. In addition, the adaptive response generates inflammatory cytokines, activating either interferon-gamma (IFN-γ)-producing T helper (Th)1 cells or Th2 cells that promote B-lymphocyte development into plasma cells. In turn, plasma cells produce anti-gliadin and anti-tissue-transglutaminase antibodies [43]. The effects of gluten on the gut mucosa of susceptible individuals vary but can include gut inflammation, villous atrophy, crypt hyperplasia, and CD4+ and CD8+ T-cell lymphocytic invasion of the intraepithelial tissue [44]. When studying the histopathological effects of CD and the response to treatment, clinicians have traditionally used the Marsh-Oberhuber classification system (Table 1), which grades biopsies of the intestinal mucosa into four categories. A diagnosis of CD is reserved for Marsh 2 and 3 biopsies, which show increased IELs, crypt hyperplasia, and villous atrophy. Marsh 3 can be divided into three subgroups based on the degree of villous atrophy [45].



**Table 1.** The Marsh-Oberhuber classification of intestinal mucosal biopsies. References [45–47].

#### **3. Gluten-Free Diet for Celiac Disease**

A lifetime gluten-free diet (GFD) is the treatment for individuals with CD [48]. Continued gluten ingestion can exacerbate clinical symptoms, further intestinal damage, and increase the risk of future cancers, including small intestinal adenocarcinoma, esophageal cancer, melanoma, and non-Hodgkin's lymphoma [49]. For best results, the diet involves complete removal of gluten-containing foods, including the gluten proteins in wheat (gliadin), barley (hordeins), rye (secalins), oats (avenins), and other closely related grains. Given such dietary cutbacks, individuals on a GFD are encouraged to incorporate other nutritious food sources such as fruits, vegetables, fish, meat, and gluten-free products. Over the years, scientific discovery, aggressive marketing, and media coverage of the benefits of a GFD have pushed food companies to produce more gluten-free options. As a result, 2016 saw over \$15.5 billion in retail sales of gluten-free foods, more than double the 2011 figure [50]. The marked increase in gluten-free substitutes allows CD patients to reproduce the dietary habits and patterns of the general population [51]. To support consumers following a GFD, the Food and Drug Administration (FDA) passed a gluten-free labeling rule that outlined the legal requirements for labeling a product "gluten-free", "free of gluten", "without gluten", or "no gluten". A gluten-free product is defined as containing <20 ppm of gluten, taking into account possible contamination during production [52]. In addition, local organic food stores commonly sell gluten-free products such as bread and pasta, albeit at a slightly higher cost and with a different taste than their gluten-containing counterparts.

#### *3.1. Efficacy of Gluten-Free Diet in Celiac Disease*

There has been extensive research on the efficacy of the GFD. A strict GFD can restore the histology of the small bowel architecture in 95% of children within two years [53], whereas 34% and 66% of adult patients experience mucosal recovery after two and five years, respectively [54]. However, some data show incomplete recovery in older patients (between 30 and 60 years) and no statistically significant recovery in individuals older than 60 years [55]. Alongside small bowel recovery, a GFD can also improve symptoms of malabsorption, including diarrhea, steatorrhea, and weight loss. In addition, several studies have demonstrated significant improvement in bone mineral density after one year of the diet [56–58], although complete reversal of osteopenia could not be observed [59]. Soliman et al. [60] found that pediatric patients on a GFD for two years demonstrated average growth in height and weight compared to age-matched controls, with significant catch-up growth (an increase in percentile position on a growth curve) in some patients. When comparing the efficacy of the GFD between patients with mild enteropathy and those with villous atrophy, Kurppa et al. established that the diet yields similar outcomes in recovery of mucosal architecture, reduction of intestinal mucosal inflammation, antibody concentrations, and symptom improvement [61]. Another study, examining the GFD in patients with borderline enteropathy not meeting the criteria of CD, demonstrated restoration of mucosal structure and marked improvement in clinical symptoms within 8–12 months of adhering to the diet compared to controls [62].

#### *3.2. Skepticism of the Gluten-Free Diet*

Despite the extensive literature on the GFD, questions and skepticism remain. Even with careful preparation and storage of gluten-free food, the likelihood of cross-contamination has raised questions about the effects of chronic low-dose gluten exposure [63]. The focus of the GFD has therefore shifted from the absolute removal of gluten from the diet to limiting gluten intake below a specific, yet-to-be-determined threshold [64,65]. To identify safe levels of gluten exposure, Akobeng and Thomas [66] reviewed thirty-five studies and found that gluten tolerability differed across studies and among study participants. While some patients had no histological abnormalities on a diet containing an average of 36 mg of gluten per day, others developed mucosal changes after consuming only 10 mg per day. The authors concluded that a daily intake of less than 10 mg is "unlikely to cause significant histological abnormalities." In comparison, definite mucosal changes were observed with daily intakes of 100 mg and 500 mg [67]. Taken together, a single threshold that produces mucosal changes in 100% of patients with CD is unlikely to be established, although a daily intake of less than 10 mg appears to be the safest. Skepticism has also been raised about the GFD's ability to completely reverse abnormal changes in the gut mucosa. Gluten activation of the immune system has been shown to produce changes in the intra-epithelial lymphocyte (IEL) compartment and is associated with increased γ/δ IELs. Recent cell sequencing work found high levels of γ/δ IELs in histologically normal-appearing tissue, suggesting that some changes persist following a GFD [68].
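To make the relationship between a labeling threshold (in ppm, i.e., mg of gluten per kg of food) and daily intake (in mg) concrete, the following is a minimal arithmetic sketch in Python; the 300 g/day portion size is an illustrative assumption, not a figure from the cited studies:

```python
def gluten_mg(ppm: float, grams_of_food: float) -> float:
    """Milligrams of gluten in a portion, given contamination in ppm (= mg/kg)."""
    return ppm * grams_of_food / 1000.0  # grams -> kg conversion

# Illustrative: 300 g/day of gluten-free products at the FDA 20 ppm limit
daily_mg = gluten_mg(ppm=20, grams_of_food=300)
print(daily_mg)  # 6.0 mg/day, below the ~10 mg/day level discussed above
```

By the same arithmetic, a product at a 100 ppm contamination level would have to be limited to about 100 g/day to stay under a 10 mg/day intake.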

#### *3.3. Challenges of a Gluten-Free Diet*

Given that gluten-containing foods represent staple dietary components in many households worldwide, a GFD represents a dramatic lifestyle change that can pose many challenges. The threat of cross-contamination is a daily issue for individuals on a GFD. Sharing cupboards, countertops, and kitchen appliances with individuals who do not follow a GFD presents contamination opportunities that impair the diet's success. For increased safety, meals should be prepared and stored away from non-gluten-free food. A similar concern extends to eating at restaurants, food courts, and food stands. Although individuals may face difficulty finding gluten-free options, restaurants are expanding their gluten-free offerings due to the rising popularity of the GFD. Of note, one study found that 32% of gluten-free labeled restaurant food tested positive for gluten, with gluten-free pizza and pasta being the most likely culprits [69]. Processed foods made from gluten-containing ingredients represent another area of concern. Potentially hidden sources of gluten include certain soups, processed meat, French fries, seasonings, and beer. Although the degree of susceptibility to gluten-containing food varies between individuals, one study suggests a safe gluten contamination cutoff of 100 ppm (100 mg/kg) in gluten-free foods [70]. Therefore, eating foods that carry a gluten-free label is generally a safe way to avoid gluten-contaminated food. Finally, adhering to a GFD can be costly. In one study, gluten-free products were 242% more expensive than their gluten-containing counterparts in the same food group [71]. Several studies echoed this sentiment, demonstrating lower availability and higher cost of gluten-free foods [72,73]. However, despite these challenges, a prospective study by Mustalahti et al. [74] found declining symptoms and a significantly improved quality of life in patients with CD on a GFD, suggesting that the diet was not particularly distressing for the majority of patients.

#### *3.4. How to Monitor a Gluten-Free Diet for Celiac Disease*

Strict adherence to a gluten-free diet is the only recommended treatment for CD [75]. As such, newly diagnosed and symptomatic patients may require more frequent assessment, especially while the gut mucosa is undergoing repair and clinical symptoms are improving. Several studies have investigated when patients should be followed up after initiating a GFD and by whom, given that there is no clear consensus. In a study examining patient preferences towards follow-up, most preferred to be seen by a dietician (with a physician available if needed), and 67% of respondents preferred annual appointments [76]. Kurppa et al. [77] found that follow-up by primary care physicians was just as successful as follow-up in tertiary centers, with average GFD adherence rates of 88%. Current guidelines recommend routine blood tests at each follow-up visit, including assessment of intestinal absorption with a complete blood count, serum calcium, ferritin, vitamin B12, and alkaline phosphatase. In addition, thyroid function tests such as thyroid-stimulating hormone and thyroid hormone levels should be checked to screen for other autoimmune conditions, alongside liver function tests such as aspartate aminotransferase and alanine aminotransferase levels to monitor for autoimmune liver disease [44]. While there are no strong recommendations favoring a particular monitoring tool, several methods exist for monitoring gluten-free diet adherence and efficacy in CD, including symptom assessment, dietetic interview, serology, stool and urine markers, and small bowel biopsy (Figure 1).

**Figure 1.** Methods for monitoring adherence to gluten-free diet.

#### 3.4.1. Symptom Assessment

The first step in monitoring the GFD in patients with CD is to identify ongoing symptoms and their severity. One study [78] found that upper gastrointestinal symptoms disappear first, while lower gastrointestinal tract symptoms, such as constipation, remain unchanged when re-evaluated 12–28 months after beginning a GFD. Abdominal bloating (51.3%), abdominal pain (45.9%), and constipation (29.7%) represent the most common symptoms at follow-up. The strongest positive predictors for ongoing symptoms at re-evaluation include experiencing symptoms for five years or more before diagnosis (OR 5.3, 95% CI 1.3 to 21.8) and having constipation at the time of diagnosis (OR 7.4, 95% CI 1.3 to 42). However, Rubio-Tapia et al. [54] found that clinical response to a GFD was an inaccurate marker for mucosal repair. Additionally, 62% of patients who experienced a clinical response to a GFD had continued mucosal damage at their follow-up biopsy, although symptomatic patients do not present with more severe histological lesions than asymptomatic patients [78]. Lahdeaho et al. [70] further underscored the limited utility of clinical response as a monitoring tool, finding that 22% of patients with significant small bowel damage had no symptoms. Nonetheless, symptomatic improvement is a potential motivator for continued adherence to a GFD and serves as a limited tool for monitoring the disease.

#### 3.4.2. Dietetic Interview

A second option for monitoring the GFD is a dietetic interview conducted by a trained dietician or physician. Various questionnaires, available in many languages, assess self-reported compliance with the GFD. The results from these surveys are often combined with visual analog scales that contain unmarked lines with anchor statements such as 'I never follow my diet' and 'I always follow my diet' at the boundaries [44,79]. Currently, the Standardized Dietician Evaluation (SDE) is the gold-standard interview format for assessing adherence to the GFD. A trained dietician conducts this interview, which consists of three main parts, with answers graded on a 6-point Likert scale. First, the dietician analyzes the patient's diet over twenty-four hours or three days. The patient then completes a food-label quiz to determine which ingredients and additives, from a list of twenty-eight, are likely to contain gluten. Finally, the patient is assessed on their ability to check the labels of medicines, supplements, and cosmetics for gluten [80]. The Celiac Dietary Adherence Test (CDAT) is another popular screening tool. Developed by gastroenterologists, dieticians, psychologists, and celiac patients, this tool grades participants' answers to seven questions regarding their knowledge, opinions, and adherence to a GFD on a 5-point Likert scale [81]. Although the CDAT is highly correlated with the SDE [80], the SDE shows a stronger correlation with serological titers and duodenal biopsies [81]. Subjectivity, fear of judgment, and under-reporting of gluten consumption represent significant limitations of the interview format [82].

#### 3.4.3. Serology

Serological testing for antibodies associated with CD is another option for monitoring the GFD. Elevated levels of tissue transglutaminase antibodies (tTG-IgA), endomysial antibodies (EMA), and deamidated gliadin peptide (DGP) antibodies can indicate poor adherence to or efficacy of a GFD. Testing for tTG-IgA is a first-line diagnostic tool in the workup of CD, with sensitivity and specificity levels above 95% [83,84]. Relative to other markers, the combination of high sensitivity, practicality, and lower cost makes tTG-IgA a preferred choice for initial serological testing. Positive tTG-IgA results are often followed up by confirmatory EMA testing, which shows a higher specificity (99.0–100%) for CD [85,86]. Some studies have shown rising serum EMA levels before the appearance of villous atrophy, making it a potential early marker in CD [61,87]. Limitations of this marker include higher operating costs and less objective results due to the use of labor-intensive, resource-demanding, and operator-dependent immunofluorescence [88]. Testing for DGP is a newer technique for CD, although it has a lower sensitivity (88%) and specificity (94%) in the general population than the markers mentioned earlier [89]. Nonetheless, new evidence suggests that testing for DGP is of better use in pediatric patients for diagnosing CD and monitoring a GFD. In an investigation of forty children less than two years of age with features of chronic enteropathy, Barbato et al. found eleven patients with normal tTG and EMA titers but elevated DGP and endoscopic changes consistent with CD [90]. In addition, Monzani et al. demonstrated that testing for DGP IgA and IgG in children had a sensitivity of 100% for screening for CD and was 52% more sensitive than tTG for monitoring GFD adherence [91]. A study by Liu et al. found that DGP levels normalized faster than tTG in children following initiation of a GFD, making DGP a possible early marker of response to a GFD [92]. However, despite these antibodies' reported high specificity and sensitivity for diagnosing CD, serology has several limitations as a marker of GFD efficacy, especially as it relates to mucosal repair. Serological markers represent the body's immune response to the disease and do not correlate directly with intestinal damage. A meta-analysis of the sensitivity and specificity of tTG IgA and EMA IgA assays determined that both serological markers had a poor correlation with mucosal damage in celiac patients undergoing a follow-up biopsy while on a GFD. In patients with villous atrophy (Marsh 3), tTG IgA had a sensitivity of 0.50 (95% CI 0.41–0.60) and a specificity of 0.83 (95% CI 0.79–0.87), while EMA IgA had a sensitivity of 0.45 (95% CI 0.34–0.57) and a specificity of 0.91 (95% CI 0.87–0.94). Although a positive test result is a good indicator of persistent villous atrophy, most patients with mucosal damage will have normal antibody titers while on the GFD, making serology an unreliable marker for following mucosal repair and monitoring adherence [93].
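The practical consequence of these sensitivity and specificity figures can be illustrated with a short predictive-value calculation; the 30% prevalence of persistent villous atrophy at follow-up is an assumed value chosen for illustration, not a figure from the meta-analysis:

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Positive and negative predictive value from sensitivity, specificity, prevalence."""
    tp = sens * prev              # true positives
    fn = (1 - sens) * prev        # false negatives: mucosal damage, normal titers
    fp = (1 - spec) * (1 - prev)  # false positives
    tn = spec * (1 - prev)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# tTG IgA vs. persistent villous atrophy: sens 0.50, spec 0.83 (meta-analysis values),
# with an assumed 30% prevalence of persistent atrophy among follow-up patients
ppv, npv = predictive_values(0.50, 0.83, 0.30)
print(round(ppv, 2), round(npv, 2))  # 0.56 0.79
```

With a sensitivity of 0.50, half of all patients with persistent atrophy test negative, which is why a normal titer on a GFD cannot be taken as evidence of mucosal healing.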

#### 3.4.4. Stool and Urine Markers

Clinicians can also use stool and urine markers for monitoring the GFD. Specific gluten peptides, such as the immunotoxic 33-mer peptide, are resistant to gastrointestinal degradation. In one study, over 30% of 33-mer peptides resisted hydrolysis during in vitro simulated gastrointestinal digestion [94]. The degree of immunotoxic peptide absorption and excretion varies among individuals and can be influenced by differences in the gut microbiome and diet [95]. Some peptides are subsequently excreted in feces and can be detected by immunochromatographic strips, competitive ELISA, and Western blot [94,96]. In turn, detection of gliadin peptides in the stool can be used as evidence of gluten consumption and as a non-invasive marker of compliance with a GFD [96–98]. Whereas immunochromatographic strips are more likely to be used as clinical standard assays in point-of-care settings, ELISA is more likely to be used for detailed quantification of gluten exposure when monitoring the efficacy of a GFD [99,100]. Comino et al. [94] found that ingestion of 50 mg of gluten was enough for detection in stool samples and that levels of gluten consumption were "roughly" correlated with gluten excretion 2–4 days after ingestion. The study concluded that these non-invasive immunologic tests could be used to monitor short-term adherence to a GFD and involuntary gluten consumption from contaminated food, and to assess the effectiveness of novel treatments for CD, such as enzymatic therapies designed to destroy toxic gluten peptides. Another multicenter study [100] examined the use of ELISA to detect gluten immunogenic peptides (GIP) in patients on a GFD for at least one year and compared the assay to other GFD monitoring tools. The researchers found that 30% of patients on a GFD had detectable GIP in their stools, suggesting they were either non-compliant with the GFD or involuntarily consuming contaminated food.
The presence of GIP was strongly associated with symptoms of gluten exposure, with up to two-thirds of patients unresponsive to a GFD having detectable GIP on ELISA. In contrast, stand-alone use of dietary questionnaires and serum anti-tTG antibody levels revealed non-compliance in only 18% of the same patients, and no significant association was found between stool GIP and dietary questionnaires or serum anti-tTG antibody levels.

Researchers have also explored the use of urine samples as a tool for monitoring compliance with and efficacy of a GFD. Moreno et al. [101] found that ingestion of more than 25 mg of gluten results in urine GIP that are detectable on immunochromatographic strips as early as four hours after ingestion and remain detectable for up to 48 h. In addition, GIP levels were positively correlated with the level of gluten intake. Moreover, 89% of patients with CD and no intestinal mucosal damage on duodenal biopsy had no detectable GIP in their urine, whereas all patients with incomplete recovery of the mucosa had quantifiable GIP. Various factors can influence GIP concentration in the urine, including diet, daily liquid intake, weight, and gut microbiota.

#### 3.4.5. Small Bowel Biopsy and Pathology

At present, assessing small bowel pathology is the most accurate method for monitoring mucosal recovery in patients on a GFD. Several studies have shown mucosal damage on biopsy in patients with normal serology and a clinical response to a GFD [54,102]. When performing a biopsy, multiple small bowel samples are collected, given the patchy nature of histological abnormalities in CD and the declining sensitivity of biopsies for CD when fewer than four samples are taken [103,104]. Current guidelines give a strong recommendation, backed by a high level of evidence, for at least one duodenal bulb biopsy and at least four biopsies of the distal duodenum [104,105]. Pathologists look for the presence of crypt elongation and villous atrophy, the density of IELs, and the villous-to-crypt ratio while classifying the specimen according to the Marsh-Oberhuber scale [105]. Rubio-Tapia et al. [54] examined the rate of mucosal recovery, defined as a villous-to-crypt ratio of 3 to 1, at the first follow-up biopsy in adult patients on a GFD. Overall, 35% of patients biopsied within two years of starting the diet showed mucosal recovery, whereas 43% showed mucosal recovery when the first biopsy was taken between two and five years after initial diagnosis. Histological improvement, characterized by an increase in the villous-to-crypt ratio of ≥2.0 points relative to baseline, was observed in 45% of patients at the first follow-up biopsy. In addition, the average time to mucosal repair was determined to be three years after starting a GFD. Although complete histological recovery is not universally achieved on a GFD, various studies suggest that mucosal healing can be seen in 57–76% of patients [44]. Compared to adults, pediatric patients showed a better response to the GFD, with up to 95% showing mucosal recovery within two years [53].
However, intestinal biopsy is more invasive and expensive than other monitoring tools and is impractical for monitoring every patient with CD. For this reason, the American College of Gastroenterology [104] gives a strong recommendation for long-term follow-up of a GFD based on history and serology alone, further suggesting that biopsies be reserved for patients showing an inadequate clinical response or a relapse in symptoms while on a GFD.

#### **4. Gluten-Free Diet for Other Health Problems**

The gluten-free diet is recognized as the standard protocol for patients diagnosed with CD. However, the diet has recently gone mainstream, and individuals without a CD diagnosis now make up most of its adherents. Choung and colleagues [106] found that between 2009 and 2014, the prevalence of CD in the American population remained constant (0.7%) while the demographic of people who avoid gluten (PWAG) grew from 0.5% to 1.7%. Since the gluten-free diet is no longer a niche treatment for a select diagnosis and is now utilized more broadly by the general population, many studies have analyzed its benefits. Beyond patients with CD, the gluten-free diet is also recognized in the treatment of gluten ataxia, dermatitis herpetiformis, cognitive impairment, inflammatory bowel disease and irritable bowel syndrome, and non-celiac gluten sensitivity (Figure 2).

**Figure 2.** The GFD for health conditions other than celiac disease.

#### *4.1. Gluten Ataxia*

Gluten ataxia is an immune-mediated disease wherein ingestion of gluten causes the body's immune system to attack nervous system tissue, specifically the cerebellum. Transglutaminase 6 (TG6) autoantibodies are more abundant in patients with gluten ataxia and have become an efficient marker for diagnosing the condition, as demonstrated by Hadjivassiliou and colleagues [107]. These antibodies are suspected to be the primary mechanism through which neurological disease develops in individuals with gluten sensitivities. A study by Dipper and colleagues demonstrated that patients placed on a GFD experienced a decrease in TG6 autoantibodies, with sustained normalization in those who continued to follow the diet [108], suggesting that a GFD can be used to contain symptoms of gluten ataxia. While the GFD has proven its efficacy in treating gluten ataxia and CD, many of its perceived benefits for other health problems remain questionable.

#### *4.2. Cognitive Impairment and Neurological and Mental Illnesses*

Recent studies have shown that there may be a correlation between gluten sensitivity and neurological diseases. Since TG6 autoantibodies are known to attack the nervous system as an immune-mediated reaction to gluten ingestion, a link may exist between this mechanism and other neurological illnesses beyond gluten ataxia. A study conducted by Hadjivassiliou and colleagues [109] analyzed the serum levels of anti-gliadin antibodies in 147 neurological patients, of whom 53 (25 ataxia, 20 peripheral neuropathy, 5 mononeuritis multiplex, 4 myopathy, 3 motor neuropathy, 2 myelopathy) had no known cause for their diagnosis despite full investigation. They were compared with a second group of 94 patients with known causes for their diagnoses, while 50 healthy blood donors served as a third group. The first group had a significantly higher rate of positive serum anti-gliadin antibodies than the other two groups (57% vs. 5% and 12%, respectively). These data suggest a strong association between gluten sensitivity and neurological illnesses. Another neurological condition of interest as it pertains to the GFD is autism, diagnoses of which have increased to approximately 1 in 88 children [110]. Patients with autism have a higher prevalence of IgG antibodies to gliadin, the same antibodies associated with CD and gluten ataxia [111]. Since many children with autism have gastrointestinal symptoms, there may be a link between autism and gluten sensitivity. In a study conducted by Ghalichi and colleagues [112], 80 children with autism spectrum disorders (ASD) received either a GFD (n = 40) or a regular diet (n = 40); 53.9% of the children reported gastrointestinal abnormalities. The ROME III questionnaire for evaluating gastrointestinal symptoms and the Gilliam Autism Rating Scale 2 (GARS-2) for assessing psychometric properties were used to compare the effects of the GFD versus the regular diet. Children placed on the GFD experienced a significant decrease in both gastrointestinal symptoms (40.57% vs. 17.10%, *p* < 0.05) and behavioral disorders (80.03 ± 14.07 vs. 75.82 ± 15.37, *p* < 0.05), whereas children on the regular diet experienced a non-significant increase in both metrics. The research, however, is somewhat conflicting on this topic: Piwowarczyk and colleagues demonstrated that a GFD did not influence autistic symptoms, maladaptive behaviors, or intellectual abilities [113]. The relief in gastrointestinal symptoms in children with autism placed on a GFD accords with most of the literature, given that patients with elevated levels of IgG antibodies to gliadin tend to experience similar effects. However, the influence of the GFD on autistic symptoms and intellectual abilities is not well established.

Some evidence has emerged on the potential benefits of the gluten-free diet for depressive disorders, although studies on this topic are scarce and further investigation is needed. Peters and colleagues [114] studied 22 patients with irritable bowel syndrome in whom CD had been excluded. The authors utilized a double-blind cross-over design consisting of 3 days of one of three dietary challenges (diet supplemented with gluten, whey, or no supplement (placebo)), followed by a 3-day washout period before crossing over. Mental state was assessed using the Spielberger State-Trait Personality Inventory (STPI), and depression scores in the gluten group were higher than in the placebo group (*M* = 2.03, 95% CI 0.55–3.51, *p* = 0.010). The whey group did not show significant differences in depression rates, cortisol secretion, or gastrointestinal symptoms. These results prompted the conclusion that a correlation could exist between depressive disorders and gluten ingestion. Another study, conducted by Zylberberg and colleagues [115], found similar results in people who avoided gluten. Data from 22,274 participants of the 2009–2014 National Health and Nutrition Examination Survey were used to compare depression, insomnia, quality-of-life variables, and psychotropic medication use in CD patients and people who avoid gluten versus controls. The results showed no increased odds of depression or sleep difficulty among CD patients; people who avoid gluten, however, had lower odds of depression compared to controls after adjustment. The study calls for further investigation into the correlation between gluten exposure and depression. Since people who avoid gluten do so by choice, they may be more health-conscious than CD patients or controls without any formal diagnosis, and given that physical health is closely associated with mental health, there could be some confounding effects [116,117].
Schizophrenia is another mental illness of interest when discussing the gluten-free diet. Some studies have shown that patients with schizophrenia tend to have elevated anti-gliadin and transglutaminase 6 antibodies [115] despite not having a CD diagnosis. A review conducted by Ergün, Urhan, and Ayer [118] found that symptoms of schizophrenia improved following the elimination of gluten from the diet. Another systematic review, conducted by Levinta and colleagues [119], searched several databases and found nine studies relevant to gluten and schizophrenia; six of the studies demonstrated beneficial effects, namely decreased symptom severity and improved functioning. However, only one of the studies was a randomized controlled trial, while seven were cross-over studies and one was an open-label pilot study, which limits the conclusions of the review. Nonetheless, there seems to be a connection between the consumption of gluten and schizophrenic disorders.

#### *4.3. Inflammatory Bowel Disease and Irritable Bowel Syndrome*

The GFD has also been utilized as a potential treatment for irritable bowel syndrome (IBS). Patients with diarrhea-dominant irritable bowel syndrome (d-IBS) tend to experience symptom relief following the introduction of a gluten-free diet. In a study conducted by Wahnschaffe and colleagues, 60% of d-IBS patients positive for human leukocyte antigen (HLA)-DQ2 T-cell haplotypes and CD-associated serum IgG had improved stool frequency, and gastrointestinal symptom scores returned to normal after 6 months of a gluten-free diet, compared to 12% of patients negative for these biomarkers [120]. Although these patients with d-IBS were positive for CD biomarkers, the antibody data were not always collected; such patients would therefore be classified as having non-celiac gluten sensitivity. Another study, conducted by Aziz et al., analyzed the effect of a 6-week gluten-free diet on patients with d-IBS (20 HLA-DQ2/8-positive and 21 HLA-DQ2/8-negative); twenty-nine patients (71%) reported symptom relief following completion of the trial [121]. These two studies demonstrate the potential benefits of the gluten-free diet for patients with d-IBS. Patients with inflammatory bowel disease (IBD) also appear to benefit from a GFD, and patients with CD are more likely to have an IBD diagnosis than the general population [122]. Herfarth and colleagues [123] conducted a study analyzing the effects of a GFD on 1647 patients with IBD. CD and non-celiac gluten sensitivity were reported by 10 (0.6%) and 81 (4.9%) respondents, respectively; 314 participants (19.1%) reported having previously tried a GFD, and 135 (8.2%) reported current use of a GFD. Overall, 65.6% of all patients who attempted a GFD described improvement in their gastrointestinal symptoms, and 38.3% reported fewer or less severe IBD flares. Patients who strictly adhered to the GFD also reported less fatigue.
In addition, Lindberg and colleagues [124] compared levels of IgG, IgA, and IgM antibodies to baker's yeast (*Saccharomyces cerevisiae*), yeast mannan, gliadin, ovalbumin, and beta-lactoglobulin in twins with IBD versus healthy controls. Twins with ulcerative colitis had higher levels of IgA antibodies to gliadin than the other twins and the healthy controls. For these reasons, the GFD may be an effective diet for managing symptoms in IBD patients with ulcerative colitis.

#### *4.4. Dermatitis Herpetiformis*

Treatment of patients with dermatitis herpetiformis (DH) with a GFD has been demonstrated to be highly effective [125,126]. In a study conducted by Reunala and colleagues [125], 81 patients with DH were treated with either a GFD or a standard diet (control); 93% of patients placed on a GFD were able to reduce their dosage of dapsone, an antibiotic used in the treatment of DH, versus 16% in the control group. In addition, 28% of the GFD group were able to eliminate the antibiotic without experiencing any symptom aggravation. Another study, conducted by Lionel et al. [126], demonstrated similar results: of twenty-four patients with DH treated with a GFD, 16 (80%) were able to reduce their dapsone usage, and ten were able to eliminate the antibiotic and remained free of any skin lesions. These two studies provide satisfactory evidence of the efficacy of a GFD in the treatment of DH.

#### *4.5. Non-Celiac Gluten Sensitivity (NCGS) and People Who Avoid Gluten*

While the benefits of the GFD in treating CD and gluten ataxia are established in the literature, many studies have sought to investigate whether the diet is viable in treating other conditions. A biopsy is generally needed to diagnose a patient with CD, which requires a gluten-free diet for treatment. In recent times, however, patients who were excluded from a CD diagnosis but had IBS-like symptoms when exposed to gluten have been grouped under the non-celiac gluten sensitivity (NCGS) umbrella. Patients with NCGS tend to have normal small intestinal permeability and experience IBS-like symptoms such as bloating, stomach pain, fatigue, rash, and discomfort upon consuming gluten. The scientific literature is not always clear when establishing a diagnosis for this condition, as the overlap with irritable bowel syndrome is strong. Patients with NCGS do not express CD-related antibodies and are generally harder to diagnose because they lack well-defined biomarkers. Still, there is strong evidence supporting the existence of this condition [127–129]. Theories have proposed that patients with NCGS may be sensitive to another component of wheat besides gluten, namely the amylase-trypsin inhibitors, which trigger a similar immune response to gluten [130]. Wheat germ agglutinin is another plant protein found in wheat that has been shown to trigger similar immune responses [131]. Thus, NCGS patients may be sensitive to wheat in general rather than specifically to gluten, and the term non-celiac wheat sensitivity may describe the condition better. To complicate matters, a study conducted by Skodje and colleagues [132] found that NCGS patients experienced worsened symptoms following consumption of fructans but not gluten. Fructans are fermentable carbohydrates often found in foods that also contain gluten. In this double-blind cross-over challenge, 59 self-diagnosed NCGS individuals following a gluten-free diet experienced more symptoms, measured with the Gastrointestinal Symptom Rating Scale for Irritable Bowel Syndrome (GSRS-IBS), after ingestion of fructans than after ingestion of gluten; no significant differences were found between gluten and placebo or between fructans and placebo. While NCGS patients may not be specifically sensitive to gluten, they could be sensitive to other factors generally found alongside gluten. Impairments in cognitive health have also been observed in some patients with gluten sensitivity before treatment. Brain fog, commonly reported by patients with NCGS, refers to problems involving memory, attention, executive function, and cognitive processing speed, and a gluten-free diet has been observed to improve some of these symptoms after one year of adherence [133].

People who avoid gluten (PWAG) is a broader term describing GFD adherents in whom both CD and non-celiac gluten sensitivity have been excluded through rechallenge tests. PWAG make up the largest demographic of GFD adherents and generally follow the diet because of its perceived benefits. However, as discussed below, the gluten-free diet does not come without adverse outcomes [50]. It is therefore important to educate individuals who adhere to the GFD without any diagnosis about its potential risks, given that these individuals do not medically require the diet.

#### **5. Adverse Events of GFD**

While the benefits of a GFD may seem alluring, it is important to consider the risks associated with the regimen. Many of the studies on its health complications appear inconclusive and even conflicting. One of the main concerns with the GFD is adherents' reduced intake of beneficial whole grains, which can be a factor in coronary heart disease [134–137]. Assessing this hypothesis, Lebwohl et al. [138] studied the development of coronary heart disease in 64,714 women in the Nurses' Health Study and 45,303 men in the Health Professionals Follow-up Study. Food diaries, updated every 4 years from 1986 through 2010, were used to assess the amount of gluten consumed. The results demonstrated an inverse relationship between gluten intake and coronary heart disease risk. On the other hand, a systematic review conducted by Potter and colleagues [139] analyzed 27 articles on patients who adopted the GFD. Findings included increases in high-density lipoprotein, fasting glycemia, total cholesterol, and body mass index, although the increases remained within healthy ranges. The review did not find any increase in triglycerides, low-density lipoprotein, or blood pressure, prompting the conclusion that the GFD is not associated with coronary heart disease. Of note, only one of the analyzed articles had a control group, and it was limited by several confounders, so proper analysis is limited. Another analysis, conducted by Heikkilä and colleagues [140], found some support for an association between coronary heart disease and the GFD; however, the authors state that the evidence base was weak and had limitations. Finally, Kim and colleagues [141] demonstrated that the GFD was beneficial in reducing waist circumference and lowering BMI, while maintaining that the diet was not associated with elevated cardiovascular disease risk.
GFD followers, who were primarily women and health-conscious, were found to have lower rates of metabolic syndrome and lower cardiovascular disease risks, although the differences were not statistically significant. Overall, most studies have called for more research to examine this hypothesis, as no conclusive findings have been made, and many lean towards excluding the GFD as a factor in cardiovascular disease risk. While the literature seems inconclusive regarding the GFD and coronary heart disease, other adversities associated with the regimen are clearer. Recent evidence suggests that the diet may worsen the gut microbiota and is associated with nutritional deficiencies in iron, calcium, and fiber [142–145]. The diet also carries a high cost due to the further processing required to produce gluten-free alternatives [146]. Finally, some research has raised concerns about the negative social and psychological impacts that many GFD adherents experience, mainly due to the diet's restrictive nature [147,148].

#### *5.1. Gluten and the Gut Microbiome*

The importance of a healthy gut microbiota in maintaining good health is becoming increasingly evident in the literature. The human gut contains two main phyla of bacteria, Bacteroidetes and Firmicutes. The roles of these bacteria are highly diverse and include the metabolism of nutrients consumed by the host, xenobiotic and drug metabolism, maintenance of the structural integrity of the gut mucosal barrier, protection against pathogens, and immunomodulation [149]. Diet can affect the health of the gut flora, along with other factors such as birthing method (vaginal vs. cesarean delivery) [150] and use of antibiotics [151]. In their study, David and colleagues [152] demonstrated that changes in diet rapidly influence the composition of the gut flora. Thus, it is important to consider the effects of gluten consumption and restriction on the health of the bacteria found in the host's gut. Golfetto and colleagues [153] analyzed the gut bacteria of 42 healthy subjects and 14 patients with CD. The study found that the patients with CD had an imbalance in their intestinal microbiota despite being on a gluten-free diet. It is unknown whether the patients with CD had this imbalance before adhering to a GFD or developed it later; regardless, the imbalance appears to persist on the diet. It is important to address these imbalances, as they can cause the gastrointestinal symptoms patients may experience when consuming gluten [154]. In another study, conducted by Palma and colleagues [155], 10 healthy subjects were introduced to a GFD and their gut microbiota was monitored for a month. The results demonstrated a reduction in beneficial gut bacteria, raising concerns over the potential risks of the GFD. If gluten exclusion from the diet results in imbalances and a reduction in healthy gut flora, it is important to address those issues by providing support.
Probiotic supplements are of particular interest as they can balance the gut flora and provide it with the nutrients it needs to remain healthy [145,156,157].

#### *5.2. Nutritional Deficiencies*

Concerns have been raised about the nutritional quality of the GFD. As the diet has gained popularity through media coverage and celebrity promotion, many people have adopted the regimen despite having no diagnosed CD. For these individuals, gluten avoidance may cause nutritional deficiencies that could otherwise be prevented. For example, abnormal intake of vitamin D has been linked to the GFD. In a study conducted by Deora and colleagues [142], the medical records of 140 children with CD were assessed, and 70% of these children had vitamin D deficiency at the time of diagnosis. After 6 months of GFD adherence, these children showed a slight improvement in their vitamin D levels, although levels remained abnormal. Given that vitamin D is crucial to the intestinal uptake of minerals, it is important to address this issue through supplementation and dietary adjustments. The diet also presents other deficiency concerns beyond vitamin D. In 2005, a survey conducted by Thompson and colleagues [143] in patients with CD found that women had a mean intake of only 46%, 44%, and 31% of their daily fiber, iron, and calcium requirements, respectively. In men, the values were 88%, 100%, and 63%, respectively. These results demonstrated that women who adhere to the GFD may be at risk of nutritional deficiencies, even more so than men. In addition, a systematic review conducted by Di Nardo et al. [158] found that all children on a GFD, regardless of whether they were diagnosed with CD, were at risk of nutritional deficiencies (insufficient fiber, iron, vitamin D, and calcium). Moreover, children with CD following a GFD had inadequate folate, magnesium, and zinc consumption, and higher consumption of high glycemic index foods. The paper suggested that therapeutic protocols should include education about these deficiencies so patients can ensure their diet is complete.

Another disadvantage of the GFD is the potentially elevated level of lipid and protein consumption. Mariani and colleagues [159], in a survey analyzing the 3-day alimentary intake of 47 adolescents with CD, found that strict adherents to the GFD had increased intakes of protein and lipids, as well as a greater prevalence of obesity (72% vs. 47% in controls). These results are expected, as gluten naturally occurs in carbohydrate-rich foods rather than protein- or lipid-rich foods. It is important to note that the quality of the lipids and proteins consumed should be of concern, not necessarily the amount. An analysis of gluten-free biscuits by Caponio et al. [160] found that they contained a sizeable mean amount of low-quality trans isomers of oleic acid (9.39%). Much of the literature suggests mitigating this negative side of the GFD by consuming more naturally gluten-free products and avoiding processed gluten-free alternatives, as the latter do not seem to provide many nutritional benefits. It is also important to note that many of the studies on the deficiencies of the GFD have examined CD patients, who suffer from gut inflammation and impaired nutrient uptake. This may be a confounding factor, as the results pertain to individuals affected by the disease and may not apply to those without CD.
With that in mind, the GFD seems to have some nutritional disadvantages, namely deficiencies in vitamin D, iron, calcium, folate, and dietary fibers, and a higher amount of low-quality lipids found in some gluten-free alternatives [142–144]. Whether afflicted with CD or not, adherents of the diet should ensure that they reach daily recommended requirements for all minerals listed above. Avoiding processed gluten-free alternatives and eating naturally-occurring gluten-free foods high in iron, such as meats, fish, and green vegetables, is a recommended solution to this dietary problem associated with the diet [158].

#### *5.3. Cost*

Cost is another challenge associated with the GFD. Most products that naturally contain gluten, such as pasta and bread, require little to no processing to produce. Gluten-containing foods have been around for thousands of years and feature in many popular recipes; bread, for example, is a staple in dishes and diets across the world. Grains are generally cheap to produce and grow in a wide range of climates, making them ideal for consumption. As these grains tend to contain gluten naturally, further processing is required to remove the protein while maintaining palatability. Because of this processing requirement, significant price disparities are found across most gluten-free alternatives to gluten-containing foods. In a study conducted by Missbach and colleagues [146], 63 gluten-free products and 126 of their gluten-containing counterparts were analyzed in 12 different Austrian supermarkets. The products included a broad range of items: bread, cereals, baking mixes, pasta, cookies, cakes, and snacks. The results showed that, on average, gluten-free foods were 205% (cereals) to 267% (bread and bakery goods) more expensive than their gluten-containing counterparts. Whether this large price gap stems from overpricing due to high demand or from processing costs is unclear. A 2-fold or greater price gap creates a tremendous burden on strict followers and may have detrimental financial effects. Other studies have confirmed this significant price difference as well [71,161]. Another study, conducted by Singh and Whelan [120], found that gluten-free products were more expensive (76–518% more expensive than comparable wheat-based products) and had limited availability in stores. Regular supermarkets stocked almost all of the gluten-free alternative products (18/20, 90%); however, corner stores and budget supermarkets stocked very few (on average 1.8/20, 9%).
Limited availability in convenience stores can further increase the cost of adherence, perhaps due to the time spent traveling to a regular supermarket that may be further away. One solution to circumvent this problem, provided by Di Nardo and colleagues [158], is to build the diet around naturally-occurring gluten-free foods and avoid the processed gluten-free alternatives altogether. This strategy can mitigate the price difference between the two counterparts and increase the number of stores one can buy from.

#### *5.4. Social and Psychological Impact*

Strict adherence to a gluten-free diet has been shown to cause some social and psychological adversities. Food is deeply embedded in cultures worldwide and can be found at the center of many social constructs. People gather and enjoy different foods to celebrate career accomplishments, weddings, religious rituals, and birthdays. Given that food exerts a significant influence on daily life, strict restrictions on dietary options can be a source of social isolation and unhappiness. In a study conducted by Zarkadas and colleagues [147], questionnaires were sent to members of the Canadian Celiac Association (5240 members) and 2681 adults (aged 16 or older) who had biopsy-proven CD. The questionnaire aimed to assess the recipients' quality of life based on celiac-associated questions and the "SF-12," a self-reported outcome measure assessing the impact of health on an individual's everyday life. It was discovered that 44% of respondents reported having difficulties following the diet for various reasons, including determining if foods were gluten-free (85%), finding gluten-free foods in stores (83%), avoiding restaurants (79%), and avoiding travel (38%). However, due to the rising popularity of the gluten-free diet, many restaurants now include labels on the menu identifying any gluten-free items. Another study, conducted by Silvester et al. [148], further demonstrated the social isolation associated with the diet. The study found that non-CD responders to the questionnaire were less likely to adhere to the diet strictly and would sometimes ingest gluten intentionally. This group was associated with more pleasure and less anger and depression than CD responders who were stricter in adherence. The study also found that social isolation was more pronounced in CD responders, and eating was mainly at home instead of in public spaces. These results further demonstrate the challenges with adhering to the diet at the psychological and sociological levels. 
In a survey conducted by MacCulloch and Rashid [161], respondents indicated that improved labeling, government financial support, and education for schools and restaurants would greatly help adherents of the diet. The social frustrations associated with a GFD can also be seen in type 1 diabetes, another autoimmune disease requiring restrictive dieting [162–164]. Patients with CD show a higher prevalence of type 1 diabetes mellitus than the general population (4.4–11.1% versus 0.5%), and Camarca and colleagues [165] found that only 50% of patients with both CD and type 1 diabetes comply with the GFD, compared to 73% of patients with CD alone. Notably, in adolescents, strict compliance has been associated with a worsened quality of life. Although not recognized by the Diagnostic and Statistical Manual of Mental Disorders, a phenomenon involving restrictive eating called orthorexia nervosa represents another cause for concern in adherents of the GFD. Orthorexia nervosa describes the behavior of healthy individuals who pursue increasingly restrictive diets despite not needing to do so (they have a healthy weight and no diagnosed condition) and who can experience a decrease in quality of life and overall health [166]. A study conducted by Wolf and colleagues [167] found that highly vigilant GFD adherents had a lower quality of life due to anxiety, putting them at risk of orthorexia nervosa as they vigorously pursue their gluten-free lifestyle. It is crucial to address these socio-psychological issues, as they tend to be harder to quantify. Close follow-up of quality of life and of the level of adherence to the GFD, together with patient education on possible risks, is essential for CD patients following the diet.

#### **6. Conclusions**

The GFD remains the primary treatment for celiac disease and may be beneficial in other health conditions. Patients with celiac disease must adhere to a lifelong GFD, as it is currently the best-known treatment. Treatment should begin at an early age, as younger individuals tend to show more significant reversal of gastrointestinal symptoms and healing of the damaged gut mucosa. While the diet is recognized as a treatment for gluten ataxia, little is known about its other benefits. Patients with d-IBS and IBD experience relief of gastrointestinal symptoms following treatment with a GFD, and patients with NCGS experience similar improvements. Maintaining a strict gluten-free lifestyle carries many challenges, including nutritional deficiencies, the high cost of adherence, and social and psychological barriers. These issues should be addressed when recommending the diet to any individual. More research is required to assess the benefits of the diet in treating mental, neurological, and cognitive disorders (depressive disorders, autism spectrum disorder, and "brain fog", respectively). Studies with large sample sizes can significantly help the current effort to assess the diet's risks and benefits, which is needed to educate individuals who follow the diet without any diagnosis; this cohort makes up the largest group of GFD adherents and usually follows the diet because of its reported benefits. Studies that provide strong evidence are needed to help individuals make well-educated decisions on whether to follow the diet.

**Author Contributions:** All authors: data curation, B.A. and A.Z.; writing—original draft preparation, W.E.-M.; writing—review and editing, W.E.-M.; visualization, W.E.-M.; supervision, W.E.-M.; project administration. All authors have read and agreed to the published version of the manuscript.

**Funding:** This review received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**



*Nutrients* Editorial Office E-mail: nutrients@mdpi.com www.mdpi.com/journal/nutrients
