Search Results (14)

Search Parameters:
Keywords = Function as a Service (FaaS)

21 pages, 431 KiB  
Article
Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing
by Mauro Femminella and Gianluca Reali
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224 - 6 Sep 2024
Viewed by 1634
Abstract
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, autoscaling plays a key role on serverless platforms, as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called “cold start” events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most widely adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We resort to the reinforcement learning algorithm named Proximal Policy Optimization to dynamically configure the value of the Kubernetes Horizontal Pod Autoscaler, trained on real traffic. This was accomplished via a state space model able to take into account resource consumption, performance values, and the time of day. In addition, the reward function definition promotes Service-Level Agreement (SLA) compliance. We evaluate the proposed agent, comparing its average latency, CPU usage, memory usage, and loss percentage with those of the baseline system. The experimental results show the benefits provided by the proposed agent, which obtains a service time within the SLA while limiting resource consumption and service loss.
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
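The reward design the abstract describes (promoting SLA compliance while limiting resource consumption and loss) can be sketched as follows. This is a minimal illustration, not the paper's actual reward: the weights, thresholds, and function name are assumptions.

```python
# Hypothetical reward shaping for an SLA-aware autoscaling RL agent:
# reward staying under the SLA, penalize resource use and lost requests.
# All weights below are illustrative, not values from the paper.

def reward(latency_ms: float, sla_ms: float,
           cpu_util: float, mem_util: float,
           loss_ratio: float) -> float:
    """Higher is better: meet the SLA, use few resources, lose no requests."""
    # +1 when within the SLA; a growing penalty proportional to the overshoot otherwise
    sla_term = 1.0 if latency_ms <= sla_ms else -(latency_ms - sla_ms) / sla_ms
    resource_term = -0.5 * (cpu_util + mem_util)   # utilizations in [0, 1]
    loss_term = -2.0 * loss_ratio                  # lost requests weigh heavily
    return sla_term + resource_term + loss_term
```

A PPO agent maximizing this signal is pushed toward the smallest autoscaler target that still keeps latency within the SLA.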

21 pages, 1402 KiB  
Article
Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing
by Urooba Shahid, Ghufran Ahmed, Shahbaz Siddiqui, Junaid Shuja and Abdullateef Oluwagbemiga Balogun
Sensors 2024, 24(13), 4195; https://doi.org/10.3390/s24134195 - 27 Jun 2024
Cited by 2 | Viewed by 1759
Abstract
Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration in the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; developers can focus on creating and deploying code efficiently. Since FaaS aligns well with the IoT, it integrates easily with IoT devices, making it possible to perform event-based actions and real-time computations. In our research, we offer a likelihood-based adaptive machine learning model for identifying the right placement of a function. We employ the XGBoost regressor to estimate the execution time of each function and the decision tree regressor to predict network latency. By encompassing factors like network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection of a placement. In our experiments, we use Docker containers, focusing on serverless node type and variety, function location, deadlines, and edge-cloud topology. The primary objectives are thus to meet deadlines and enhance the use of resources, and we observe that effective utilization of resources leads to better deadline compliance.
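The placement decision the abstract outlines, combining predicted execution time and predicted network latency against a deadline, can be sketched in a few lines. The numbers and node names are invented for illustration, and the two predictors stand in for the paper's XGBoost and decision-tree regressors.

```python
# Sketch of deadline-aware function placement: choose the node whose
# predicted execution time plus predicted network latency fits the deadline.
# The (exec_ms, latency_ms) pairs stand in for the ML regressors' outputs.

def place(deadline_ms, nodes):
    """nodes maps node name -> (predicted_exec_ms, predicted_latency_ms)."""
    feasible = {
        name: exec_ms + lat_ms
        for name, (exec_ms, lat_ms) in nodes.items()
        if exec_ms + lat_ms <= deadline_ms
    }
    # Among nodes that meet the deadline, pick the fastest end-to-end.
    return min(feasible, key=feasible.get) if feasible else None

# Hypothetical heterogeneous nodes: a nearby edge box, a busier edge box, the cloud.
nodes = {"edge-1": (80.0, 5.0), "edge-2": (60.0, 40.0), "cloud": (30.0, 90.0)}
```

With a 100 ms deadline this sketch would pick `edge-1` (85 ms end-to-end); with a 50 ms deadline no node qualifies and the caller must reject or relax the request.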

19 pages, 1008 KiB  
Article
On the Analysis of Inter-Relationship between Auto-Scaling Policy and QoS of FaaS Workloads
by Sara Hong, Yeeun Kim, Jaehyun Nam and Seongmin Kim
Sensors 2024, 24(12), 3774; https://doi.org/10.3390/s24123774 - 10 Jun 2024
Viewed by 1534
Abstract
A recent development in cloud computing has introduced serverless technology, enabling the convenient and flexible management of cloud-native applications. Typically, Function-as-a-Service (FaaS) solutions rely on serverless backend solutions, such as Kubernetes (K8s) and Knative, to leverage resource management for the underlying containerized contexts, including auto-scaling and pod scheduling. To take advantage of these features, cloud service providers also deploy self-hosted serverless services on their on-premise FaaS platforms rather than relying on commercial public cloud offerings. However, the lack of standardized guidelines on K8s abstractions for fairly scheduling and allocating resources across auto-scaling configuration options in such on-premise serverless hosting environments poses challenges in meeting the service level objectives (SLOs) of diverse workloads. This study fills this gap by exploring the relationship between auto-scaling behavior and the performance of FaaS workloads depending on scaling-related configurations in K8s. Based on comprehensive measurement studies, we derive which scaling configurations, such as the base metric and its threshold, should be applied to which workloads to maximize latency SLO compliance and the number of responses. Additionally, we propose a methodology to assess the scaling efficiency of the related K8s configurations with respect to the quality of service (QoS) of FaaS workloads.
(This article belongs to the Special Issue Edge Computing in Internet of Things Applications)
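The scaling-related configurations studied above all feed into one rule: the Kubernetes HPA computes its replica target as `ceil(currentReplicas * currentMetric / targetMetric)` (per the HPA algorithm in the Kubernetes documentation). A one-function sketch makes the threshold's effect on FaaS workloads concrete:

```python
import math

# The Kubernetes Horizontal Pod Autoscaler's core scaling rule:
#   desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """E.g., 4 pods averaging 90% CPU against a 60% target -> 6 pods."""
    return math.ceil(current_replicas * current_metric / target_metric)
```

A lower threshold (target metric) scales out earlier at the cost of idle capacity, which is exactly the latency-SLO-versus-efficiency trade-off the measurement study quantifies per workload.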

20 pages, 1178 KiB  
Article
µFuncCache: A User-Side Lightweight Cache System for Public FaaS Platforms
by Bao Li, Zhe Li, Jun Luo, Yusong Tan and Pingjing Lu
Electronics 2023, 12(12), 2649; https://doi.org/10.3390/electronics12122649 - 13 Jun 2023
Viewed by 1602
Abstract
Building cloud-native applications on public “Function as a Service” (FaaS) platforms has become an attractive way to improve business roll-out speed and elasticity, as well as to reduce cloud usage costs. Applications based on FaaS are usually designed as multiple cloud functions split by functionality, with call relationships between the functions. At the same time, each cloud function may depend on other services provided by cloud providers, such as object storage, database, and file storage services. When there is a call relationship between cloud functions, or between cloud functions and other services, a certain delay occurs, and it grows with the length of the call chain, thereby affecting the quality of application services and the user experience. Therefore, we introduce μFuncCache, a user-side lightweight caching mechanism that speeds up data access for public FaaS services. It fully exploits the delayed container destruction and over-booked memory commonly found in public FaaS platforms to reduce function call latency, without any need to perceive or modify the internal architecture of public clouds. Experiments in different application scenarios show that μFuncCache can effectively improve the performance of FaaS applications while consuming only a small amount of additional resources, achieving a latency reduction of up to 97%.
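The mechanism the abstract exploits, state that survives between warm invocations of the same container, can be sketched with a module-level cache. This is an illustration of the general technique, not μFuncCache's actual design; `fetch_from_backend` and the TTL are hypothetical.

```python
import time

# Sketch of a user-side cache inside a FaaS function: module-level state
# persists as long as the platform keeps the container warm, so repeated
# invocations can skip the backend round trip. TTL is an illustrative choice.

_CACHE = {}          # key -> (expires_at, value); lives with the container
TTL_SECONDS = 30.0

def cached_get(key, fetch_from_backend):
    now = time.monotonic()
    hit = _CACHE.get(key)
    if hit is not None and hit[0] > now:
        return hit[1]                     # warm hit: no backend call
    value = fetch_from_backend(key)       # cold path: full latency
    _CACHE[key] = (now + TTL_SECONDS, value)
    return value
```

Because the platform, not the user, decides when containers die, such a cache must treat every entry as best-effort and tolerate losing it on a cold start.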

29 pages, 2707 KiB  
Article
QuickFaaS: Providing Portability and Interoperability between FaaS Platforms
by Pedro Rodrigues, Filipe Freitas and José Simão
Future Internet 2022, 14(12), 360; https://doi.org/10.3390/fi14120360 - 30 Nov 2022
Cited by 8 | Viewed by 4216
Abstract
Serverless computing hides infrastructure management from developers and runs code on demand, automatically scaled and billed only for the code’s execution time. One of the most popular serverless backend services is Function-as-a-Service (FaaS), in which developers are often confronted with cloud-specific requirements. Function signature requirements and the usage of custom libraries that are unique to cloud providers were identified as the two main causes of portability issues in FaaS applications, leading to various vendor lock-in problems. In this work, we define three cloud-agnostic models that compose FaaS platforms. Based on these models, we developed QuickFaaS, a multi-cloud interoperability desktop tool targeting cloud-agnostic functions and FaaS deployments. The proposed cloud-agnostic approach enables developers to reuse their serverless functions across cloud providers with no need to change code or install extra software. We also provide an evaluation that validates the proposed solution by measuring the impact of a cloud-agnostic approach on a function’s performance, compared to a cloud-specific one. The study shows that a cloud-agnostic approach does not significantly impact the function’s performance.
(This article belongs to the Special Issue Distributed Systems for Emerging Computing: Platform and Application)
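The portability problem the abstract identifies, provider-specific function signatures, is commonly solved by separating provider-neutral logic from thin entry points. The sketch below shows that general pattern; the adapter signatures imitate, but simplify, real provider handler shapes and are not QuickFaaS's actual models.

```python
# Sketch of cloud-agnostic function design: one provider-neutral body,
# wrapped by thin provider-shaped entry points. Signatures are illustrative.

def business_logic(payload: dict) -> dict:
    """Provider-neutral core: plain dict in, plain dict out."""
    return {"greeting": f"hello {payload.get('name', 'world')}"}

def aws_style_entry_point(event, context):
    # AWS Lambda-shaped (event, context) signature; context unused here.
    return business_logic(event)

def http_style_entry_point(request_body: dict):
    # Generic HTTP-triggered shape used by several other platforms.
    return business_logic(request_body)
```

Moving to a new provider then means generating one more adapter, while `business_logic` ships unchanged, which is the lock-in reduction the paper measures.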

24 pages, 454 KiB  
Article
Cost and Latency Optimized Edge Computing Platform
by István Pelle, Márk Szalay, János Czentye, Balázs Sonkoly and László Toka
Electronics 2022, 11(4), 561; https://doi.org/10.3390/electronics11040561 - 13 Feb 2022
Cited by 12 | Viewed by 4448
Abstract
Latency-critical applications, e.g., automated and assisted driving services, can now be deployed in fog or edge computing environments, offloading energy-consuming tasks from end devices. Besides proximity, though, the edge computing platform must provide the necessary operation techniques to avoid added delays by all means. In this paper, we propose an integrated edge platform that comprises orchestration methods with such objectives for handling the deployment of both functions and data. We show how integrating the function orchestration solution with the adaptive data placement of a distributed key–value store can decrease end-to-end latency even when the mobility of end devices creates a dynamic set of requirements. Along with the necessary monitoring features, the proposed edge platform is capable of serving the nomadic users of novel applications with low latency requirements. We showcase this capability in several scenarios, in which we articulate the end-to-end latency performance of our platform by comparing delay measurements against the benchmark of a Redis-based setup lacking the adaptive nature of data orchestration. Our results prove that the stringent delay requisites necessitate the close integration presented in this paper: functions and data must be orchestrated in sync to fully exploit the potential that the proximity of edge resources enables.
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)

20 pages, 659 KiB  
Article
Experimental Analysis of the Application of Serverless Computing to IoT Platforms
by Priscilla Benedetti, Mauro Femminella, Gianluca Reali and Kris Steenhaut
Sensors 2021, 21(3), 928; https://doi.org/10.3390/s21030928 - 30 Jan 2021
Cited by 31 | Viewed by 5702
Abstract
Serverless computing, especially implemented through Function-as-a-Service (FaaS) platforms, has recently been gaining popularity as an application deployment model in which functions are automatically instantiated when called and scaled when needed. When the warm start deployment mode is used, the FaaS platform gives users the perception of constantly available resources. Conversely, when the cold start mode is used, containers running the application’s modules are automatically destroyed after the application has been executed. The latter can lead to considerable resource and cost savings. In this paper, we explore the suitability of both modes for deploying Internet of Things (IoT) applications, considering a low-resource testbed comparable to an edge node. We discuss the implementation and the experimental analysis of an IoT serverless platform that includes typical IoT service elements. A performance study in terms of resource consumption and latency is presented for the warm and cold start deployment modes, implemented using OpenFaaS, a well-known open-source FaaS framework whose flexibility allows testing a cold start deployment with a precise inactivity time setup. This experimental analysis allows us to evaluate the aptness of the two deployment modes under different operating conditions: exploiting the OpenFaaS minimum inactivity time setup, we find that the cold start mode can be convenient for saving the limited resources of edge nodes, but only if the data transmission period is significantly higher than the time needed to trigger container shutdown.
(This article belongs to the Special Issue Communications and Computing in Sensor Network)
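The break-even condition the study arrives at, cold start pays off only when messages arrive more slowly than the container idle timeout, can be captured in a small estimate. The formula and numbers are an illustrative simplification (assuming periodic traffic and instant teardown at timeout), not the paper's model.

```python
# Rough estimate of the fraction of time a cold-start deployment keeps the
# container torn down between periodic messages, i.e., the resource saving.
# Assumes strictly periodic traffic and teardown exactly at the idle timeout.

def idle_fraction_saved(transmission_period_s: float,
                        idle_timeout_s: float) -> float:
    if transmission_period_s <= idle_timeout_s:
        return 0.0   # container never goes cold: no saving, only warm reuse
    return (transmission_period_s - idle_timeout_s) / transmission_period_s
```

For example, sensors reporting every 60 s against a 15 s idle timeout would leave the container down roughly 75% of the time, at the price of one cold start per message.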

29 pages, 788 KiB  
Article
A Traffic Analysis on Serverless Computing Based on the Example of a File Upload Stream on AWS Lambda
by Lisa Muller, Christos Chrysoulas, Nikolaos Pitropakis and Peter J. Barclay
Big Data Cogn. Comput. 2020, 4(4), 38; https://doi.org/10.3390/bdcc4040038 - 10 Dec 2020
Cited by 4 | Viewed by 6590
Abstract
The shift towards microservices observed in recent developments of the cloud landscape for applications has led to the emergence of the Function as a Service (FaaS) concept, also called Serverless. This term describes the event-driven, reactive programming paradigm of functional components in container instances, which are scaled, deployed, executed, and billed by the cloud provider on demand. However, increasing reports of issues with Serverless services have shown significant obscurity regarding its reliability. In particular, developers and especially system administrators struggle with latency compliance. In this paper, following a systematic literature review, the performance indicators influencing traffic and the effective delivery of the provider’s underlying infrastructure are determined by carrying out empirical measurements on the example of a file upload stream on Amazon’s Web Service Cloud. This popular example was used as an experimental baseline in this study, based on different incoming request rates. Different parameters were used to monitor and evaluate changes through the function’s logs. It has been found that the so-called Cold-Start, i.e., the time to provision a new instance, can increase the Round-Trip-Time by 15% on average. A Cold-Start happens after an instance has not been called for around 15 min, or after around 2 h have passed, which marks the end of the instance’s lifetime. The research shows how these numbers have changed in comparison to earlier related work, as Serverless is a fast-growing field of development. Furthermore, emphasis is given to future research to improve the technology, algorithms, and support for developers.

19 pages, 5801 KiB  
Article
MeSmart-Pro: Advanced Processing at the Edge for Smart Urban Monitoring and Reconfigurable Services
by Antonino Galletta, Armando Ruggeri, Maria Fazio, Gianluca Dini and Massimo Villari
J. Sens. Actuator Netw. 2020, 9(4), 55; https://doi.org/10.3390/jsan9040055 - 4 Dec 2020
Cited by 15 | Viewed by 3004
Abstract
Within the MeSmart project, the Municipality of Messina is making great investments to deploy several types of cameras and digital devices across the city for carrying out different tasks related to mobility management, such as traffic flow monitoring, number plate recognition, and video surveillance. Exploiting specific devices for each task increases infrastructure and management costs and reduces flexibility. On the contrary, using general-purpose devices customized to accomplish multiple tasks at the same time can be a more efficient solution. Another important approach that can improve the efficiency of mobility services is moving computation tasks to the Edge of the managed system instead of remote centralized servers, thereby reducing delays in event detection and processing and making the system more scalable. In this paper, we investigate the adoption of Edge devices connected to high-resolution cameras to create a general-purpose solution for performing different tasks. In particular, we use the Function as a Service (FaaS) paradigm to easily configure the Edge device and set up new services. The key result of our work is the deployment of versatile, scalable, and adaptable systems able to respond to a smart city’s needs, even as those needs change over time. We tested the proposed solution by setting up a vehicle counting application based on OpenCV and automatically deploying the necessary functions onto an Edge device. The experimental results show that, under certain conditions and thanks to our reconfigurable functions, computing performance at the Edge overtakes that of a device specifically designed for vehicle counting.

22 pages, 12533 KiB  
Article
A Serverless Advanced Metering Infrastructure Based on Fog-Edge Computing for a Smart Grid: A Comparison Study for Energy Sector in Iraq
by Ammar Albayati, Nor Fadzilah Abdullah, Asma Abu-Samah, Ammar Hussein Mutlag and Rosdiadee Nordin
Energies 2020, 13(20), 5460; https://doi.org/10.3390/en13205460 - 19 Oct 2020
Cited by 19 | Viewed by 3265
Abstract
The development of the smart grid (SG) has the potential to bring significant improvements to the energy generation, transmission, and distribution sectors. Hence, adequate handling of fluctuating energy demands is required, which can only be achieved by implementing the concept of transactive energy. Transactive energy aims to optimize energy production, transmission, and distribution combined with next-generation hardware and software, making it a challenge to implement at a national level. To ensure effective collaboration in energy exchange between consumers and producers, a serverless architecture based on functionality can make significant contributions to the smart grid’s advanced metering infrastructure (SG-AMI). In this paper, a scalable serverless SG-AMI architecture is proposed based on fog-edge computing, virtualization considerations, and Function as a Service (FaaS) as the service model, to increase operational flexibility, improve system performance, and reduce the total cost of ownership. The design was benchmarked against the smart grid designs proposed by the Iraqi Ministry of Electricity (MOELC), and it was evaluated against MOELC’s traditional computing design and a related cloud computing-based design. The results show that our proposed design offers a 20% to 65% performance improvement in network traffic load, latency, and time to respond, with a 50% to 67% reduction in the total cost of ownership and lower power and cooling consumption compared to the SG design proposed by MOELC. It can be observed that a robust roadmap for an SG-AMI architecture can effectively contribute towards increasing the scalability, interoperability, automation, and standardization of the energy sector.
(This article belongs to the Section A1: Smart Grids and Microgrids)

26 pages, 17904 KiB  
Article
Analysis and Comparison of GPS Precipitable Water Estimates between Two Nearby Stations on Tahiti Island
by Fangzhao Zhang, Jean-Pierre Barriot, Guochang Xu and Marania Hopuare
Sensors 2019, 19(24), 5578; https://doi.org/10.3390/s19245578 - 17 Dec 2019
Cited by 5 | Viewed by 3284
Abstract
Since Bevis first proposed Global Positioning System (GPS) meteorology in 1992, precipitable water (PW) estimates retrieved with high accuracy from Global Navigation Satellite System (GNSS) networks have been widely used in many meteorological applications. The proper estimation of GNSS PW can be affected by the GNSS processing strategy as well as by the local geographical properties of the GNSS sites. To better understand the impact of these factors, we compare PW estimates from two nearby permanent GPS stations (THTI and FAA1) on the tropical island of Tahiti, a basalt shield volcano located in the South Pacific with a mean slope of 8% and a diameter of 30 km. The altitude difference between the two stations is 86.14 m, and their horizontal separation is 2.56 km. In this paper, Bernese GNSS Software Version 5.2 with precise point positioning (PPP) and Vienna mapping function 1 (VMF1) was applied to estimate the zenith tropospheric delay (ZTD), which was compared with the International GNSS Service (IGS) Final products. Meteorological parameters sourced from the European Center for Medium-Range Weather Forecasts (ECMWF) and a local weighted mean temperature (Tm) model were used to estimate the GPS PW over three years (May 2016 to April 2019). The results show that the difference in PW between the two nearby GPS stations is nearly constant, with a value of 1.73 mm. In our case, this difference is mainly driven by insolation differences, with the difference in altitude and the wind being only secondary factors.
(This article belongs to the Section Remote Sensors)

14 pages, 3383 KiB  
Article
A Function as a Service Based Fog Robotic System for Cognitive Robots
by Hyunsik Ahn
Appl. Sci. 2019, 9(21), 4555; https://doi.org/10.3390/app9214555 - 27 Oct 2019
Cited by 4 | Viewed by 3808
Abstract
Cloud robotics is becoming an alternative for supporting advanced services on robots with low computing power as network technology advances. Recently, fog robotics has gained attention, since the approach has the merit of relieving the latency and security issues of conventional cloud robotics. In this paper, a Function as a Service based fog robotic (FaaS-FR) model for cognitive robots is proposed. The model distributes the cognitive functions according to computational power, latency, and security between a public robot cloud and a fog robot server. In experiments with a Raspberry Pi as the edge device, the proposed FaaS-FR model shows efficient and practical performance in properly distributing the computational work of the cognitive system.
(This article belongs to the Section Computing and Artificial Intelligence)

17 pages, 5065 KiB  
Article
Serverless Computing: An Investigation of Deployment Environments for Web APIs
by Cosmina Ivan, Radu Vasile and Vasile Dadarlat
Computers 2019, 8(2), 50; https://doi.org/10.3390/computers8020050 - 25 Jun 2019
Cited by 14 | Viewed by 8043
Abstract
Cloud vendors offer a variety of serverless technologies promising high availability and dynamic scaling while reducing operational and maintenance costs. One such technology, serverless computing, or function-as-a-service (FaaS), is advertised as a good candidate for web applications, data processing, or backend services, where you only pay for usage. Unlike virtual machines (VMs), it comes with automatic resource provisioning and allocation, providing elastic and automatic scaling. We present the results of our investigation of a specific serverless candidate, a Web Application Programming Interface (Web API), deployed on virtual machines and as function(s)-as-a-service. We contrast these deployments by varying the number of concurrent users and measuring response times and costs. We found no significant response time differences between deployments when the VMs are configured for the expected load and the test scenarios stay within the FaaS hardware limitations. Higher numbers of concurrent users or unexpected user growth are effortlessly handled by FaaS, whereas additional labor must be invested in VMs for equivalent results. We conclude that, despite the advantages serverless computing brings, there is no clear choice between serverless and virtual machines for a Web API application, because one needs to carefully measure costs and factor in all components that come with FaaS.
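The cost comparison the abstract calls for comes down to two very different billing models: per request and per GB-second for FaaS, per hour for VMs. A back-of-envelope sketch, with placeholder prices that are illustrative rather than any provider's real rates:

```python
# Rough FaaS-vs-VM monthly cost model. All prices are illustrative
# placeholders, not actual provider rates.

def faas_monthly_cost(requests: float, avg_duration_s: float, memory_gb: float,
                      price_per_million_req: float = 0.20,
                      price_per_gb_second: float = 0.0000167) -> float:
    gb_seconds = requests * avg_duration_s * memory_gb
    return requests / 1e6 * price_per_million_req + gb_seconds * price_per_gb_second

def vm_monthly_cost(instances: int,
                    hourly_rate: float = 0.05,
                    hours_per_month: float = 730.0) -> float:
    # VMs bill for every hour, whether or not requests arrive.
    return instances * hourly_rate * hours_per_month
```

Under these assumed rates, a million 200 ms / 512 MB invocations per month costs a couple of dollars on FaaS versus tens of dollars for an always-on VM, while a sustained heavy load can flip the comparison, which is why the paper insists on measuring rather than assuming.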

8 pages, 226 KiB  
Article
Probiotic Supplementation in Preterm: Feeding Intolerance and Hospital Cost
by Flavia Indrio, Giuseppe Riezzo, Silvio Tafuri, Maria Ficarella, Barbara Carlucci, Massimo Bisceglia, Lorenzo Polimeno and Ruggiero Francavilla
Nutrients 2017, 9(9), 965; https://doi.org/10.3390/nu9090965 - 31 Aug 2017
Cited by 42 | Viewed by 8384
Abstract
We hypothesized that giving the probiotic strain Lactobacillus reuteri (L. reuteri) DSM 17938 to preterm, formula-fed infants would prevent an early traumatic intestinal inflammatory insult, modulating the intestinal cytokine profile and reducing the onset of feeding intolerance. Newborns were randomly allocated during the first 48 h of life to receive either a daily probiotic (10^8 colony-forming units (CFU) of L. reuteri DSM 17938) or placebo for one month. All the newborns underwent gastric ultrasound for the measurement of gastric emptying time. Fecal samples were collected for the evaluation of fecal cytokines. Clinical data on feeding intolerance and weight gain were collected, and the costs of the hospital stays were calculated. The results showed that the newborns receiving L. reuteri DSM 17938 had a significant decrease in the number of days needed to reach full enteral feeding (p < 0.01), days of hospital stay (p < 0.01), and days of antibiotic treatment (p < 0.01). Statistically significant differences were also observed in the fecal cytokine profiles: the anti-inflammatory cytokine interleukin (IL)-10 was increased in newborns receiving L. reuteri DSM 17938, whereas the pro-inflammatory cytokines IL-17, IL-8, and tumor necrosis factor (TNF)-alpha were increased in newborns given placebo. Differences in gastric emptying and fasting antral area (FAA) were also observed. Our study demonstrates an effective role for L. reuteri DSM 17938 supplementation in preventing feeding intolerance and improving gut motor and immune function development in bottle-fed, stable preterm newborns. A further benefit of probiotic use is the reduced cost for the health care service.
(This article belongs to the Special Issue Prebiotics and Probiotics)