Article

Evaluation of the Efficiency of Machine Learning Algorithms for Identification of Cattle Behavior Using Accelerometer and Gyroscope Data

1
Department of Computer Systems and Technologies, Faculty of Electrical Engineering, Electronics and Automation, University of Ruse “Angel Kanchev”, 7000 Ruse, Bulgaria
2
Department of Automation and Electronics, Faculty of Electrical Engineering, Electronics and Automation, University of Ruse “Angel Kanchev”, 7000 Ruse, Bulgaria
3
Department of Morphology, Physiology and Nutrition of Animals, Trakia University, 6000 Stara Zagora, Bulgaria
4
Agricultural Academy, Research Institute of Mountain Stockbreeding and Agriculture, 5600 Troyan, Bulgaria
*
Authors to whom correspondence should be addressed.
AgriEngineering 2024, 6(3), 2179-2197; https://doi.org/10.3390/agriengineering6030128
Submission received: 30 May 2024 / Revised: 9 July 2024 / Accepted: 12 July 2024 / Published: 16 July 2024
(This article belongs to the Section Livestock Farming Technology)

Abstract
Animal welfare is a daily concern for livestock farmers. It is known that the activity of cows characterizes their general physiological state, and deviations from the normal parameters could be an indicator of different kinds of diseases and conditions. This pilot study investigated the application of machine learning for identifying the behavioral activity of cows using a collar-mounted gyroscope sensor and compared the results with the classical accelerometer approach. The sensor data were classified into three categories describing the behavior of the animals: “standing and eating”, “standing and ruminating”, and “laying and ruminating”. Four classification algorithms were considered—random forest ensemble (RFE), decision trees (DT), support vector machines (SVM), and naïve Bayes (NB). The training relied on manually classified data with a total duration of 6 h, which were grouped into 1s, 3s, and 5s groups. The obtained results showed that the RFE and DT algorithms performed the best. When using the accelerometer data, the obtained overall accuracy reached 88%, and when using the gyroscope data, it reached 99%. To the best of our knowledge, no other authors have previously reported such results with a gyroscope sensor, which is the main novelty of this study.

1. Introduction

Today, an average of 350 million tons of animal meat is consumed globally each year [1]. Products such as meat, milk, and eggs come from a variety of animal farms around the world. Regardless of the type, size, or geographical position of the farm, every farm faces similar problems. Daily, the farmers are responsible for feeding the animals and caring for their well-being. While there are numerous methods for automated feeding, monitoring the health of the animals remains largely a manual task performed via direct human observation of cattle behavior [2]. This problem is especially prominent in small and medium farms, where the farmers cannot set aside additional resources for people whose only task is to observe the animals [3]. There are, however, alternative ways to observe the cattle, such as video monitoring, where there is no need for someone to be out with the animals and a camera is used to record them instead [2]. These video records can later be analyzed and information on the health of the animals can be gathered. Even though this method is digital and appealing, its usefulness is limited, as a person still needs to watch the video clips and go through hours of video material. This is a time-consuming task and in certain cases impractical, as there might be animals that are not caught on camera but need attention. Furthermore, if the animals are only observed via cameras whose footage is not analyzed in real time, an animal might need urgent care with no one available to identify the problem.
Regardless of the chosen method for observation, the problem becomes bigger and more complex with the expansion of the farms in terms of the number of animals and people who tend them [4]. There has been some research on the problem and the available solutions have often been based on automated systems, which provide continuous monitoring and analysis [5]. Such systems gather the data via different methods, the most common of which is through the use of sensors and the application of machine learning algorithms [6]. This approach provides the ability for enhanced understanding of health status [3], well-being [7], reproductive performance [8], etc.
Naturally, the question about the possibility of some kind of automation arises as well as questions such as “Can we make some predictions about possible future diseases among the animals?”, “Can we identify abnormal behavior?”, “Can we build a reliable semi-autonomous system that will monitor free grazing animals and will alert in real-time on possible problems?”.
When it comes to cattle, behavioral activities such as laying and rumination have been well studied because they are closely related to animal welfare [9,10]. Rumination is an essential activity for energy intake [11] and one of the main factors for early diagnosis of health disorders and optimization of reproductive processes in cows [12]. Rumination is the process of digestion in ruminants, the main role of which is to physically break down the roughage to facilitate its passage from the rumen to the small intestine [13,14]. According to some authors [13], cows ruminate approximately 450–550 min/day, and a reduction in this time is an important sign of digestive problems and compromised cow welfare. Rumination activity usually occurs during breaks between meals and at night [13]. The time an animal spends ruminating is influenced by the species, breed, physiological and health status, productivity, food intake, ration composition, etc. A ration containing a higher percentage of roughage with long fibers increases the rumination time [15]. The duration of the rumination time affects the composition and yield of milk, as well as the reproductive performance of the cows [16]. Monitoring rumination time, as well as the other cumulative behavioral times, in cow breeding can be a reliable indicator of the presence of heat stress, calving time, estrus, some diseases, as well as subclinical ketosis in dairy cows [12,17,18]. The ratio of the duration of rumination (DR) in the standing/laying position, the ratio of the duration of feeding (DF) to DR, and the time from the end of feeding to the start of rumination form the basis for calculating welfare indices.
The behavior of cows largely depends on the environment and the comfort it provides. A series of investigations [13,19] found that the total time spent resting was 36–53%, occurring mainly (73–90%) during the night. Experiments performed over many years concluded that both very low and very high temperatures lead to a reduced duration of rest, but when yards are available, cows prefer them regardless of the season [20]. Over the last 30 years, an undesirable trend has been established in Bulgaria—the relative proportion of lame dairy cows reared in modern farms has increased [21]. Ref. [22] concluded that the main factors affecting laying behavior are as follows: age, heat, illness, housing system, bedding material, tying system, and stocking density. Increases in laying time are related to increased levels of stress hormones, lameness, and injuries [21,23].
The movement of the animal characterizes the general physiological state and is associated with the processes of intake and chewing of feed and the subsequent metabolic transformations (rumination and belching). Like all living creatures, cattle also need rest, which is related to certain physiological processes affecting productivity. Usually, cows receive a ration of about 35–55 kg of different fodder—concentrated, coarse, juicy, root crops, etc. Under normal climatic conditions and in good health, cows are fed 12 to 15 times a day [24]. In the presence of changes in their surrounding environment, as well as in their state of health (heat stress, diseases, etc.), they reduce the frequency of feeding. During heat stress, cows have 3 to 5 meals per day [24,25]. The duration of the rumination periods depends on the content, type, and amount of food received, as well as on the time of day, the physiological state of the animal, and other factors. From this point of view, considering the movements of the rumen is particularly important.
Different sensors and classification approaches are used for the identification of animal behavior. In [2], a very thorough and detailed review of the devices, sensors, processing techniques, and classification methods for animal behavior pattern classification was made. The authors examined 17 papers on topics specific to cattle behavior classification; one of the most notable findings was that the accelerometer is among the most common monitoring sensors. One of the commercial solutions for cattle monitoring is RumiWatch, used for rumination, feeding, and other activities in dairy cows. Furthermore, some authors proposed the usage of combined data sources and features, such as accelerometers and geographical data from GPS, or accelerometers and pressure sensors [26]. Various datasets have been built, containing up to 120 h of labeled data and eight months of measurements [7].
In [4], data from accelerometers mounted on the right rear limb of cattle were used to study the behavioral patterns of cattle using decision trees. The study was aimed at recognizing patterns in animal behavior to detect health problems. The authors used accelerometer data for the correct classification of three cattle activities: laying, standing, and walking. Furthermore, several additional features based on the raw data from the accelerometer were used—vector magnitude max (VMM), signal magnitude area (SMA) [27,28,29], and signal vector magnitude (SVM) [29]. All of the additional features were obtained by aggregating the x, y, and z values of the accelerometer for every 3, 5, or 10 s of data. The SMA and SVM features were also reported as useful in [6] when classifying behavior based on accelerometer data.
Thereafter, the authors concluded that the use of accelerometers in animal observation is suitable for the classification of some of the basic activities, such as laying and standing. It should be noted that the placement of the sensor on the limb is a factor for correct classification. The added features are found to be useful for distinguishing between walking and standing.
In [6], an accelerometer was placed in a collar on a cow in order to identify the activities grazing, walking, laying, and standing. The usage of 61 features extracted from the original x, y, and z values is quite interesting, and the accuracy reached by the authors was around 82–89%. One of the used features is the signal magnitude axis [30], a_mag = √(a_x² + a_y² + a_z²), considered to be orientation-independent, where a_x is the acceleration on the x-axis of the accelerometer, and a_y and a_z are the accelerations on the y-axis and z-axis, respectively. Other used features include variance, standard deviation, minimum, maximum, quartiles, and range for every axis. Additional features used are signal magnitude area, average intensity [31], movement variation, skewness, kurtosis, and spectral entropy.
In [32], the authors also used a collar with an accelerometer sensor for measuring and classifying cattle movements. The support vector machine algorithm was used to obtain the following target classes—standing, laying, ruminating, feeding, normal walking, lame walking, laying down, and standing up. The generated features were based on data grouped in 10 s piles.
In [33], the authors used a deep neural network (DNN) with three hidden layers, a single-layer artificial neural network (ANN), linear discriminant analysis (LDA), SVM, KNN, a decision tree (DT), and NB for the classification of human body movements. The LDA approach was shown to have the highest mean classification accuracy (0.84) and NB the lowest (0.69). While the authors did not observe animals, their findings should be taken into consideration.
The performed analysis allows us to make several observations. The accelerometer is the most commonly used sensor when identifying animal behavior, with the most preferred machine learning models for classification being decision trees, random forest, and support vector machines. To the best of our knowledge, previous studies used gyroscope data only as support information but not as a primary source for the identification of animal behavior. Furthermore, the commonly used classification categories are laying, standing, walking, and grazing, and when building a dataset, the preferred method is the usage of sensors and visual observation as reference data for matching the sensor data with the behavior of the cows. Usually, the studies classify the cows’ behavior into a single class for a certain moment; however, there are situations in which the animals are doing two actions simultaneously, such as moving and ruminating.
This publication aimed to explore the feasibility of an intelligent system for cattle monitoring, based on data from a gyroscope sensor. This pilot study focused on leveraging data analysis and machine learning techniques to process the raw sensor data, and more precisely on the classification and future prediction of cattle behavior. By employing various advanced algorithms, this study aimed to explore, examine, and choose a suitable machine learning algorithm and appropriate approaches to data processing to discern patterns and trends within the collected data, enabling real-time insights into the health, activity, and overall behavior of the cattle. Furthermore, the study aimed to compare the performance of the classification algorithms depending on their data source—gyroscope or accelerometer. The integration of such technologies not only revolutionizes traditional livestock management but also paves the way for proactive decision-making, disease prevention, and enhanced breeding programs. This paper provides an in-depth examination of the technical nuances, challenges, and potential benefits associated with this sensor-based approach, contributing valuable insights to the intersection of agriculture, data science, and machine learning.

2. Materials and Methods

2.1. Experimental Setup and Means of the Investigation

The study was conducted on a farm for indoor cow breeding at the experimental base of the Research Institute of Mountain Stockbreeding and Agriculture of Troyan, Bulgaria. Its dimensions are 60 × 30 m, out of which 60 × 15 m is a building, and the other half is an enclosure (Figure 1). The site is located 380 m above sea level, (42°53′39″ N/24°42′57″ E) and is characterized by a temperate–continental climate with a pronounced mountain influence, four seasons, without fogs, and strong winds. The investigated cows were of the Dairy Simmental breed in the second lactation, equalized in terms of productivity and stage of lactation. This breed is adapted to the mountainous terrain climate; it belongs to the red-white, broad-forehead breeds of cattle, and is of combined productivity.
A collar with a specially designed IoT device was used for the experiment (Figure 2). It is equipped with a combined accelerometer and gyroscope BMI270 sensor by Bosch Sensortec GmbH (Reutlingen, Germany). When data is available, the IoT device combines it in packets of 5 sensor readings and sends it to the gateway over a 2.4 GHz wireless connection, which forwards it to a cloud database via a GPRS/3G connection. This way, the number of records obtained from the sensor depends on how active the animal is, and during daytime there could be more than 100 records per second. A detailed description of the IoT-based system, the communication procedure, and the timestamp management is available in [34].
For data analysis and model training, the Python programming language [35] was used. Python’s rich ecosystem of libraries and packages makes it a popular choice for data science and machine learning. This study uses the Scikit-learn [36], Pandas [37], and NumPy [38] packages, as they are versatile machine learning and data handling libraries. The Scikit-learn library has numerous built-in classification algorithms with the option of parameter fine-tuning, as well as algorithms for data processing, e.g., MinMaxScaler, StratifiedKFold, etc. This allows accommodation of the algorithms to the specific needs of the classification problem. The scripts built for this study are suitable for execution on any desktop machine that has the Python language installed.

2.2. Data Collection and Data Analysis

An experimental study was performed on 23 November 2022 and 15 May 2023 with two target cows. IoT devices were installed on their necks, as this method was recommended by previous authors [6,39]. Previous studies suggested placing the collar a day before the data collection to give the animal time to get used to it [40]. In our case, this was not a problem, as the animals had been wearing the collar with the IoT devices for several months. Figure 3 shows the conceptual diagram of the experiment. It should be noted that it is part of a bigger information system that is used for the overall monitoring of animals and pastures [41].
In order to be able to develop machine learning models for classification, a reference dataset is required. A common approach is to create the learning dataset manually, which is appropriate for limited amounts of training data; in the case of more animals and larger datasets, however, it might be too time-consuming and complex [3,6,7]. Similarly, in this study, video records with a total duration of 6 h were made, during which the monitored cows were continuously filmed. To ease the creation of the reference data, the video recordings were created with a timestamp.
This study aimed to identify three behavioral categories of the cow—“standing and eating”, “standing and ruminating”, and “laying and ruminating” (Table 1). Therefore, the videos were observed by an operator, and the cow’s behavior was manually classified into the three categories for training and validation purposes. The reference data were then imported into the database. Figure 4 shows a sample fragment from the database, where the column type identifies the sensor type (e.g., accelerometer or gyroscope), and the column label represents the animal behavior based on the manual classification.

2.3. Classification Methodology

The classification methodology adopted in this study is summarized in Figure 5 and is described below.

2.3.1. Data Collection and Preparation

The process starts with the Data Collection and Preparation, which includes:
  • data collection from the sensors
  • data storage in the database
  • data extraction (from a database to a CSV file suitable for further classification)
  • data cleaning consisting of removing rows with only zeros, duplicate rows, and ping rows (rows with zero data, used for pinging from the sensors to the server)
  • data transformation, consisting of two subprocesses—resampling the data and finding and removing the outliers in the resampled data.
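The cleaning steps above can be sketched with Pandas; the column names below are hypothetical, and the real database schema (Figure 4) also carries sensor type and label columns.

```python
import pandas as pd

# Toy fragment of sensor readings with assumed column names.
df = pd.DataFrame({
    "timestamp": [0, 0, 1, 1, 2],
    "x": [0.0, 0.0, 0.1, 0.1, 0.0],
    "y": [0.0, 0.0, 0.2, 0.2, 0.0],
    "z": [0.0, 0.0, 0.3, 0.3, 0.0],
})

# Remove "ping" rows (all three axes exactly zero) and exact duplicates.
df = df[~((df["x"] == 0) & (df["y"] == 0) & (df["z"] == 0))]
df = df.drop_duplicates()
```

Only one genuine reading survives here: the zero-only ping rows and the duplicated reading are dropped.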
The resampling of the data was done before the process of feature engineering, thus allowing the creation of more features for a model with the highest possible accuracy. The chosen bins for the resampling were one second, three seconds, and five seconds, as recommended in previous studies [6,31,32].
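As a minimal illustration of the resampling step, assuming a Pandas DataFrame indexed by timestamp (the 10 Hz rate here is a placeholder; the real rate varies with animal activity):

```python
import pandas as pd

# Toy 10 Hz stream covering 2 s of data.
ts = pd.date_range("2023-05-15 08:00:00", periods=20, freq="100ms")
df = pd.DataFrame({"x": range(20)}, index=ts)

# Group the stream into 1 s, 3 s, and 5 s bins, averaging within each bin.
binned = {size: df.resample(size).mean() for size in ("1s", "3s", "5s")}
```

Each bin then becomes one sample group from which the per-bin features are computed.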
Often considered an error or noise in the data [42], outliers are known to cause over- and underfitting in the model and thus negatively impact its effectiveness. There are numerous methods and techniques for outlier detection. Some of them are domain-oriented and applied to a specific problem, and some are broader and more generic [43]. The chosen method in this study was isolation forest [44], implemented with the PyOD package [45] and a threshold of 0.85.
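The study used the isolation forest implementation from PyOD with a threshold of 0.85; the sketch below illustrates the same idea with Scikit-learn's equivalent IsolationForest, using an assumed contamination level rather than the paper's threshold.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.1, size=(200, 3))   # dense cluster of sensor readings
X[0] = [5.0, 5.0, 5.0]                    # one planted, obvious outlier

# contamination is an assumed stand-in for the paper's 0.85 score threshold.
iso = IsolationForest(contamination=0.01, random_state=0)
labels = iso.fit_predict(X)               # -1 flags outliers, 1 flags inliers

X_clean = X[labels == 1]
```

The flagged rows are removed before feature engineering, mirroring the data transformation step above.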
The resampled, normalized datasets form the basis of the feature engineering and selection process. In this step, 29 features were created. For the scaling of the dataset, the MinMaxScaler from the Scikit-learn library was used, and for the cross-validation of the model, the StratifiedKFold variation of KFold, again from Scikit-learn, was used. StratifiedKFold ensures that each fold has the same distribution of the target classes as the whole dataset. This feature is particularly useful when working with imbalanced datasets.
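A minimal sketch of the scaling and stratified cross-validation described above, on synthetic placeholder data with an imbalance resembling the real classes:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.random((30, 4))
y = np.array([0] * 20 + [1] * 6 + [2] * 4)   # imbalanced target classes

# Scale every feature into [0, 1]. In a full pipeline, the scaler would be
# fit on the training fold only to avoid information leakage.
X_scaled = MinMaxScaler().fit_transform(X)

skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
folds = list(skf.split(X_scaled, y))
```

With two stratified folds, each test fold keeps the 20:6:4 class proportions (10, 3, and 2 samples per class).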

2.3.2. Feature Engineering and Selection

The data acquired by the sensors were the measurements for the x, y, and z-axis of the accelerometer and the gyroscope, along with the timestamp of the measurement. Based on them and the literature review, several features were obtained and added to the model. It should be noted that in the present study, the data from the accelerometer and the one from the gyroscope were viewed and processed as two different datasets, independently.
The following statistical features, used in most of the reviewed papers, were also adopted in our study: mean value, standard deviation, variance, skew, and kurtosis for every axis. Another feature obtained in the extraction process is the vector magnitude max [4], which is the highest value among x, y, and z for the period. Furthermore, the signal vector magnitude [4,29] (also called signal magnitude axis [34,35]) has been used. It is estimated using the average values for the x, y, and z-axis (respectively x_avg, y_avg, and z_avg) over a period of time (e.g., for every 1, 3, or 5 s):
Signal Vector Magnitude = √(x_avg² + y_avg² + z_avg²)   (1)
Another feature used is the signal magnitude area [12] which is estimated as below:
Signal Magnitude Area = x_avg + y_avg + z_avg   (2)
Some authors also proposed using the minimum and maximum values of every axis [30]. Those features are also added when training the model—x_min, x_max, y_min, y_max, z_min, and z_max. Furthermore, the range for each axis is also added to the features list, which is estimated according to the following:
a_range = a_max − a_min
where a is the corresponding axis, e.g., x, y, or z.
Another feature that is being used is the average intensity, which is calculated according to the following:
Average Intensity = (1/n) Σ Signal Vector Magnitude_i   (3)
In this study, we did not use the timestamp as input data for the features.
The list of all features used for the model training were as follows: n, x_mean, y_mean, z_mean, x_max, x_min, x_range, x_std, x_var, x_skew, x_kurtosis, y_max, y_min, y_range, y_std, y_var, y_skew, y_kurtosis, z_max, z_min, z_range, z_std, z_var, z_skew, z_kurtosis, svm (1), vmm [4], sma (2), ai (3). Their usage is self-explanatory as their respective names suggest.
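The per-bin features above can be illustrated with a small NumPy sketch. It covers only a subset of the 29 features, and interpreting average intensity as the mean per-sample vector magnitude within the bin is an assumption based on [31].

```python
import numpy as np

def bin_features(x, y, z):
    """A subset of the per-bin features (names follow the list above)."""
    x_mean, y_mean, z_mean = x.mean(), y.mean(), z.mean()
    return {
        "n": len(x),
        "x_mean": x_mean,
        "x_range": x.max() - x.min(),
        "svm": np.sqrt(x_mean**2 + y_mean**2 + z_mean**2),   # Equation (1)
        "sma": x_mean + y_mean + z_mean,                      # Equation (2)
        "ai": np.mean(np.sqrt(x**2 + y**2 + z**2)),           # assumed form of (3)
    }

# One toy 3-sample bin per axis.
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 0.0])
z = np.array([2.0, 2.0, 2.0])
f = bin_features(x, y, z)
```

For this bin, x_mean = 2.0, sma = 4.0, x_range = 2.0, and svm = √8.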

2.3.3. Model Training

Based on the literature review and the classification problem, four machine learning algorithms were chosen for training—random forest ensemble (RFE), decision tree (DT), support vector machines (SVM), and naïve Bayes (NB).
According to [46], the random forest ensemble (RFE) algorithm is one of the most widely used machine learning algorithms. In essence, the RFE is an ensemble of multiple decision tree algorithms whose predictions are averaged across all of the trees, resulting in far more satisfactory results than a single decision tree model. The algorithm is suitable both for regression and classification problems, and the prediction for classification with RFE is the majority vote for the class label predicted among the trees [47]. In the process of training a model with RFE, a set of random predictors is chosen for every split in a tree [48], thus ensuring that the predictors used for every split, and therefore the prediction errors, are less correlated.
Having in mind the importance of the randomness of the RFE, one can conclude that the most important tuning parameter in RFE is the number of features per split. According to [47] the optimal value for this parameter when considering a classification problem is estimated with the following:
number of features per split = √(number of total features)
Other important tuning parameters are the depth of the forest and the number of trees. Usually, a greater depth results in better performing models, and in many cases not explicitly setting a depth results in the best possible model. Regarding the number of trees, the increase in the number will not lead to overfitting, as RFE is somewhat “immune” to overfitting, according to some authors [46].
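A hedged sketch of how such an RFE might be configured with Scikit-learn, using synthetic data in place of the real sensor features; the parameter values mirror the tuning discussion above, not the exact settings of Table 2.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in with 29 features and 3 classes, mirroring the setup.
X, y = make_classification(n_samples=400, n_features=29, n_informative=10,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# max_features="sqrt" considers about sqrt(29) ~ 5 features per split;
# max_depth=None lets every tree grow fully, as discussed above.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            max_depth=None, random_state=0)
rf.fit(Xtr, ytr)
acc = rf.score(Xte, yte)
```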
The decision tree (DT) algorithm is a proven and robust ML algorithm for classification. Its low computational load and ease of understanding and usage make it a preferred solution in many cases. Because DTs are non-parametric models that do not make strong assumptions about the underlying distribution of the data, they are suitable for capturing patterns in the data, and therefore suitable for our classification problem. In this study, it is interesting to observe the difference between RFE and DT, keeping in mind that the essence of RFE is multiple decision trees.
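The RFE-versus-DT comparison mentioned above can be illustrated on synthetic data; the scores obtained here say nothing about the real datasets and only show the mechanics of the comparison.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=3, random_state=1)

# Mean cross-validated accuracy of a single tree vs. the ensemble.
dt_acc = cross_val_score(DecisionTreeClassifier(random_state=1), X, y, cv=5).mean()
rf_acc = cross_val_score(RandomForestClassifier(random_state=1), X, y, cv=5).mean()
```

Averaging many decorrelated trees typically matches or beats the single tree, which is the behavior the comparison in Section 3.4 probes on the real data.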
The nature of support vector machines (SVM) is to find a hyperplane in an N-dimensional space that distinctly separates the data points into classes. The idea is to maximize the margin between the nearest points of the classes—the points lying on the boundaries are called support vectors, and the middle of the boundary is the optimal separating hyperplane. The algorithm has several built-in kernels—linear, polynomial, RBF, and sigmoid. The polynomial and RBF kernels allow the classification of more complex data with non-linear decision boundaries. Additionally, the polynomial kernel (Equation (6)) introduces the degree parameter, which gives more control over the model complexity and performance:
K(x, y) = (xᵀy + c)^d   (6)
where
  • x and y are feature vectors of size n
  • c ≥ 0 is a free parameter trading off the influence of higher-order and lower-order terms in the polynomial
  • d is the degree.
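A minimal Scikit-learn sketch of an SVM with the polynomial kernel; note that Scikit-learn's formulation also includes a gamma factor, K(x, y) = (γ·xᵀy + coef0)^degree, so coef0 and degree only correspond to c and d in Equation (6) up to that scaling.

```python
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X = MinMaxScaler().fit_transform(X)   # SVMs are sensitive to feature scale

# coef0 plays the role of c and degree the role of d in Equation (6).
clf = SVC(kernel="poly", degree=3, coef0=1.0, gamma="scale")
clf.fit(X, y)
train_acc = clf.score(X, y)
```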
One of the classic algorithms in ML is the naïve Bayes classifier, which is based on Bayes’ theorem [49] for determining the posterior probability of an event occurring. Its main feature is the assumption that all variables are conditionally independent; this characteristic is the reason why it is called “naïve”. The algorithm is known for its simplicity, efficiency, and robustness to irrelevant features. It is a preferred choice of many scientists when dealing with multiclass classification and highly imbalanced data. A disadvantage of the algorithm is its sensitivity to outliers, as it relies on probability estimates based on the observed data; therefore, the additional outlier detection step during data transformation is important.
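A toy illustration of the naïve Bayes classifier on two artificial, well-separated classes (not the study's data), using the Gaussian variant from Scikit-learn:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Two well-separated clusters stand in for two behavior classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
               rng.normal(10.0, 1.0, (50, 3))])
y = np.array([0] * 50 + [1] * 50)

# Fit per-class Gaussians under the conditional-independence assumption,
# then classify two points near the respective cluster centers.
nb = GaussianNB().fit(X, y)
pred = nb.predict([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
```

Because the per-class probability estimates come directly from the observed data, a single extreme reading can distort them, which is why outlier removal matters for this model in particular.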
The parameters used in the training of the models are summarized in Table 2.

3. Results and Discussion

3.1. Data Resampling

The analysis of the obtained data was implemented according to the described methodology. The number of sample groups for each class after the resampling is summarized in Table 3, which allows the following observations to be made:
  • A high imbalance exists between the three classes for both sensors, which can negatively affect the performance and results. While options for artificially handling the imbalance exist, they are not entirely applicable in the current state of the dataset
  • The accelerometer provided more sample groups for the classes “Standing and Ruminating” and “Laying and Ruminating”, compared to the gyroscope
  • The number of sample groups for the class “Standing and Eating” is approximately equal for the two sensors.
From Table 3, it can be seen that in some cases the samples used for the training and testing of the model are more than 7000, and in others—around 200.

3.2. Outlier Detection

The distribution of the accelerometer data for the three classes (“Standing and Eating”, “Standing and Ruminating” and “Laying and Ruminating”) before the application of an outlier detection (OD) is presented in Figure 6a, Figure 6b, and Figure 6c, respectively. It is noticeable that there are some outliers, especially in the data marked as “Standing and Ruminating”. Therefore, an OD algorithm was applied to the training data, and the results without the filtered-out outliers are presented in Figure 6d, Figure 6e, and Figure 6f, respectively. Similarly, the distribution of the gyroscope data is presented for the three classes before (Figure 7a–c) and after (Figure 7d–f) the OD algorithm is applied. Once again it can be seen that some outliers were removed, especially in the “Standing and Ruminating” class.

3.3. Class Distribution and Class Borders

The class distribution for the two datasets (accelerometer and gyroscope) after their resampling and outlier detection is presented in Figure 8. The figure shows a visible class imbalance: the classes are not represented equally. The majority of the data in both datasets belong to the class “Standing and Eating”, while there are significantly fewer sample groups in the classes “Standing and Ruminating” and “Laying and Ruminating”, which is especially obvious for the gyroscope dataset. It is well known that for the best performance of supervised models, a relatively large, balanced, and evenly distributed dataset is required. Some studies have reported that if this condition is not met, training a model without any further data processing could lead to an accuracy paradox [50] and therefore to an unreliable model. Nevertheless, in the current study, the models were trained with the imbalanced datasets, because they correspond to the actual behavior distribution of the animals.
The gyroscope data in 5s groups are presented in Figure 9. It shows that there are distinctive borders between the three classes, bearing in mind that the samples from the “Standing and Ruminating” class are not entirely visible in this graph orientation. Such distinctive class borders make the datasets especially suitable for classification with relatively high accuracy and effectiveness.
The class imbalance shows that more data are needed to balance the classes. Furthermore, with an increase in the amount of data in the dataset, the models could be fine-tuned better, which could improve their accuracy and effectiveness. All of the results indicate that these datasets, regardless of the type of the sensor, are not entirely reliable for behavior classification. The imbalance, especially prominent in the gyroscope dataset, influences the effectiveness of the models to a high degree. The various known methods for handling imbalanced datasets are not suitable in this case. The two most used methods are under- and over-sampling, and neither will positively affect the models. If the dataset is under-sampled, quite a large number of samples will be removed, recreating the problem of insufficient data. Given the big difference between the majority and the minority classes, if the over-sampling technique is applied, many “fake” samples will be created, possibly compromising the validity of the model. Two possible solutions to the problem are collecting more real data or applying a different approach for balancing the data.

3.4. Model Training and Result Metrics

A total of 24 models were trained using the two datasets, structured in 1s, 3s, and 5s groups, with the selected classification algorithms. Their accuracy is summarized in Figure 10. The gyroscope dataset yields noticeably higher accuracy: around 10% higher for the RFE and DT algorithms, up to 20% higher for SVM, and up to 14% higher for NB. In some cases this difference was smaller (3–5% for 3s and 5s grouping with the NB algorithm), though there the overall accuracy was also significantly lower.
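As a rough illustration of the training setup, the sketch below fits the four classifiers with the hyperparameters listed in Table 2, using scikit-learn [36]. The feature matrix and labels are hypothetical placeholders standing in for the real 29-feature datasets:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((600, 29))            # placeholder for the 29 extracted features
y = rng.choice([1, 3, 5], size=600)  # placeholder behavior labels (Table 1 IDs)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Hyperparameters as listed in Table 2
models = {
    "RFE": RandomForestClassifier(n_estimators=1000, max_depth=10, max_features=6),
    "DT": DecisionTreeClassifier(criterion="entropy"),
    "SVM": SVC(kernel="poly", C=10, gamma=0.1, degree=5, class_weight="balanced"),
    "NB": MultinomialNB(alpha=1),    # requires non-negative features
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 2))
```

On the random placeholder data the scores are near chance level; the point is only the configuration, which matches Table 2.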
In general, all models show promising results; notably, the RFE and DT models perform close to perfectly on the gyroscope dataset. The SVM and NB models perform noticeably worse, as an accuracy of 70% on the accelerometer data is not satisfactory.
When dealing with imbalanced datasets, the accuracy metric can be misleading. Since imbalanced data are a prerequisite for a biased model, one should not rely on accuracy alone. Because both datasets are imbalanced, it is important to take a deeper look at the performance of the models for each class individually. Therefore, we used the F1 score, which accounts for both the precision and the recall of the model. The F1 score for each class is summarized in Figure 11. Once again, the RFE algorithm performs very satisfactorily, especially on the gyroscope dataset: even for the minority classes, the F1 score varies between 0.95 and 0.99. Approximately the same holds for the DT algorithm, whose F1 scores for the different classes with 1s data groups vary between 0.96 and 0.99.
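The contrast between overall accuracy and per-class F1 on imbalanced data can be shown with a toy example (the label proportions below are hypothetical, chosen only to make the effect visible): a classifier that always predicts the majority class scores high accuracy while failing both minority classes entirely.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical imbalanced test set: 90% majority class "Standing and Eating" (3)
y_true = [3] * 90 + [1] * 6 + [5] * 4
y_pred = [3] * 100          # degenerate classifier: always the majority class

print(accuracy_score(y_true, y_pred))   # 0.9 -- looks deceptively good
scores = f1_score(y_true, y_pred, average=None, labels=[3, 1, 5], zero_division=0)
print(scores)               # high F1 for the majority, 0.0 for both minorities
```

The per-class F1 exposes exactly the failure mode that overall accuracy hides, which is why Figure 11 reports F1 per class rather than a single aggregate.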
This supports the claim that RFE is a robust model. Since RFE is, at its core, an ensemble of decision trees, the expected differences between the results of the two models are minimal. Both Figure 10 and Figure 11 confirm this: as expected, the accuracy and F1 score are similar for the two algorithms, with RFE performing slightly better.
On the other hand, the NB model has the worst performance for both datasets and all test scenarios (accelerometer/gyroscope and 1s/3s/5s time groups). Although Figure 10 might suggest that the SVM and NB models perform almost identically, Figure 11 shows that NB performs well only for the majority class “Standing and Eating”, while it fails completely for the other two classes. While the SVM classification was still not satisfactory, it returned significantly better results than NB.
The best-performing algorithms on the accelerometer dataset are once again RFE and DT, when the data are used in groups of 1 s. They are followed by SVM, whose F1 score reaches 0.91 for the “Standing and Eating” class but is only about 0.5 for the other two classes. Once again, the NB model performed the worst, with an F1 score of 0 for the “Standing and Ruminating” class in all scenarios.
The results obtained in this study show that the RFE and DT algorithms give the highest accuracy. The authors in [4,6,33] also achieved very high accuracy with DT for various target actions. Overall, the DT algorithm is a preferred choice for cattle behavior classification, and our experiment confirms this.
In terms of targeted actions, our study examined three: “Standing and Eating”, “Standing and Ruminating”, and “Laying and Ruminating”. Notably, the studies in [4,6,32,33] focused on one target action at a time (e.g., only standing or only eating, as two separate actions), whereas our research focused on combinations of two actions together.
Two other interesting variables in cattle behavior classification are the size of the data bins and the number of features. None of the reviewed studies used a 1s data bin, and our results show that the accuracy with this particular bin size is relatively high and satisfactory. It is interesting to note that the authors in [6] also used 20s and 30s bins. Such large periods could be beneficial when classifying particular actions, such as “laying and ruminating”, but the current size of our datasets does not allow it. The number of features varied greatly among the authors: 7 in [4], 61 in [6], and 9 in [32], whereas we used 29 features, which proved sufficient for high-accuracy classification results.
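The binning step itself can be sketched as follows. This is an illustrative reconstruction, not the study's actual preprocessing code: a hypothetical raw gyroscope stream is grouped into fixed-width time bins, and simple per-axis statistics are computed as features for each bin.

```python
import numpy as np
import pandas as pd

# Hypothetical raw gyroscope stream sampled at 10 Hz (timestamps in seconds)
rng = np.random.default_rng(1)
t = np.arange(0, 30, 0.1)
df = pd.DataFrame({"t": t,
                   "gx": rng.normal(size=t.size),
                   "gy": rng.normal(size=t.size),
                   "gz": rng.normal(size=t.size)})

def bin_features(df, width):
    """Group samples into fixed-width time bins and compute per-axis statistics."""
    bins = (df["t"] // width).astype(int)       # bin index for each sample
    return df.groupby(bins)[["gx", "gy", "gz"]].agg(["mean", "std", "min", "max"])

print(bin_features(df, 1).shape)   # (30, 12): 30 one-second bins, 12 features each
print(bin_features(df, 5).shape)   # (6, 12): 6 five-second bins
```

Wider bins smooth over short movements but shrink the number of training samples, which is the trade-off discussed above for the 20s and 30s bins in [6].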
Table 4 summarizes the results from similar studies. It can be seen that the accuracy of the results obtained in this study, especially those based on the gyroscope data, is either better than or equal to that obtained in previous studies.

4. Conclusions and Future Work

The present study investigated the possibilities of successfully identifying three behavioral activities of cows: “standing and eating”, “standing and ruminating” and “laying and ruminating”. It relied on data from a combined accelerometer/gyroscope sensor, mounted on a collar around the neck of the cow. Some of the more notable conclusions are summarized below.
Regarding the hardware aspect of the system, the experiment showed the potential of using a gyroscope for this particular problem. While this type of sensor is not as popular as the accelerometer, the results from training the various models prove that it can provide accurate and reliable data. Furthermore, the accuracy and precision metrics presented in this publication show that the gyroscope data give the more satisfying results. Nevertheless, it should be noted that the current pilot study used data gathered over several hours. While this is enough for a base experiment, it is insufficient for more advanced models; more samples are needed to build more complex models and to reinforce the effectiveness of the chosen methodology. The trained models showed that the combination of these particular ML algorithms and features is suitable for the problem of cattle behavior classification.
The effects of data binning (in our case, grouping every X seconds) are in most cases negligible, except for some particular scenarios; e.g., the difference between the 1s and 5s bins with the NB algorithm is 9% in favor of the 5s dataset. Overall, the differences can be considered insignificant in terms of model efficiency, and such errors could also be considered insignificant for animal breeding purposes.
The experiments and results examined in this publication considered the data from the two sensors separately. Bearing in mind the possibility of training a model that classifies multiple actions at once, it is worth considering training a model with both datasets simultaneously. While this was beyond the scope of the current study, it is certainly an interesting direction that should be explored in future work.

Author Contributions

Conceptualization, T.M. (Tsvetelina Mladenova), I.V. (Irena Valova) and B.E.; methodology, T.M. (Tsvetelina Mladenova) and I.V. (Irena Valova); software, T.M. (Tsvetelina Mladenova); validation, T.M. (Tsvetelina Mladenova), I.V. (Irena Valova) and B.E.; formal analysis, T.M. (Tsvetelina Mladenova); investigation, B.E., T.M. (Tsvetelina Mladenova), N.V., T.M. (Tsvetan Markov) and N.M.; resources, I.V. (Ivan Varlyakov), T.M. (Tsvetelina Mladenova), S.S., L.M., N.V. and N.M.; data curation, T.M. (Tsvetelina Mladenova); writing—original draft preparation, T.M. (Tsvetelina Mladenova), I.V. (Irena Valova), B.E., I.V. (Ivan Varlyakov) and N.M.; writing—review and editing, T.M. (Tsvetelina Mladenova), I.V. (Irena Valova), B.E., I.V. (Ivan Varlyakov) and N.M.; visualization, T.M. (Tsvetelina Mladenova), I.V. (Irena Valova) and B.E.; supervision, I.V. (Ivan Varlyakov), S.S., L.M. and N.M.; project administration, I.V. (Ivan Varlyakov), B.E. and N.M.; funding acquisition, I.V. (Ivan Varlyakov) and B.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Education and Science of Bulgaria under the National Research Program “Intelligent Animal Husbandry”, grant number Д01-62/18.03.2021.

Data Availability Statement

The datasets used in this study are published under the CC BY 4.0 license and can be found at https://doi.org/10.6084/m9.figshare.25920463. The videos used for manual classification are published under the CC BY 4.0 license and can be found at https://doi.org/10.6084/m9.figshare.26145436.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. The World Counts. Available online: https://www.theworldcounts.com/challenges/consumption/foods-and-beverages/world-consumption-of-meat (accessed on 1 November 2023).
  2. da Silva Santos, A.; de Medeiros, V.W.C.; Gonçalves, G.E. Monitoring and classification of cattle behavior: A survey. Smart Agric. Technol. 2023, 3, 100091. [Google Scholar] [CrossRef]
  3. Rodriguez-Baena, D.S.; Gomez-Vela, F.A.; García-Torres, M.; Divina, F.; Barranco, C.D.; Diaz-Diaz, N.; Jimenez, M.; Montalvo, G. Identifying livestock behavior patterns based on accelerometer dataset. J. Comput. Sci. 2020, 41, 101076. [Google Scholar] [CrossRef]
  4. Robert, B.; White, B.J.; Renter, D.G.; Larson, R.L. Evaluation of three-dimensional accelerometers to monitor and classify behavior patterns in cattle. Comput. Electron. Agric. 2009, 67, 80–84. [Google Scholar] [CrossRef]
  5. Bernabucci, U.; Bifani, S.; Buggiotti, L.; Vitali, A.; Lacetera, N.; Nardone, A. The effects of heat stress in Italian Holstein dairy cattle. J. Dairy Sci. 2014, 97, 471–486. [Google Scholar] [CrossRef] [PubMed]
  6. Riaboff, L.; Aubin, S.; Bédère, N.; Couvreur, S.; Madouasse, A.; Goumand, E.; Chauvin, A.; Plantier, G. Evaluation of pre-processing methods for the prediction of cattle behaviour from accelerometer data. Comput. Electron. Agric. 2019, 165, 104961. [Google Scholar] [CrossRef]
  7. Becciolini, V.; Ponzetta, M. Inferring behaviour of grazing livestock: Opportunities from GPS telemetry and activity sensors applied to animal husbandry. In Proceedings of the 17th International Scientific Conference Engineering for Rural Development, Jelgava, Latvia, 23–25 May 2018; pp. 192–198. [Google Scholar]
  8. Berckmans, D. Precision livestock farming technologies for welfare management in intensive livestock systems. Rev. Sci. Tech. 2014, 33, 189–196. [Google Scholar] [CrossRef] [PubMed]
  9. EFSA Panel on Animal Health and Animal Welfare (AHAW); Nielsen, S.S.; Alvarez, J.; Bicout, D.J.; Calistri, P.; Canali, E.; Drewe, J.A.; Garin-Bastuji, B.; Rojas, J.L.G.; Schmidt, C.G.; et al. Welfare of dairy cows. EFSA J. 2023, 21, e07993. [Google Scholar] [PubMed]
  10. Defra. Code of Recommendations for the Welfare of Livestock: Cattle; Defra Publications: London, UK, 2003. [Google Scholar]
  11. Grant, R.J.; Dann, H.M. Biological Importance of Rumination and Its Use On-Farm. Cornell Nutrition Conference, Cornell University, 2015. Available online: https://hdl.handle.net/1813/41226 (accessed on 1 November 2023).
  12. Paudyal, S. Using rumination time to manage health and reproduction in dairy cattle: A review. Vet. Q. 2021, 41, 292–300. [Google Scholar] [CrossRef] [PubMed]
  13. Wadhwani, K.N.; Thakkar, N.K.; Islam, M.M.; Lunagariya, P.M.; Patel, J.H. Rumination Assessment: A Managemental Tool for Dairy Cattle. Indian J. Anim. Prod. Manag. 2023, 37, 88–101. [Google Scholar] [CrossRef]
  14. Beauchemin, K.A. Invited review: Current perspectives on eating and rumination activity in dairy cows. J. Dairy Sci. 2018, 101, 4762–4784. [Google Scholar] [CrossRef]
  15. Leiber, F.; Moser, F.N.; Ammer, S.; Probst, J.K.; Baki, C.; Spengler Neff, A.; Bieber, A. Relationships between dairy cows’ chewing behavior with forage quality, progress of lactation and efficiency estimates under zero-concentrate feeding systems. Agriculture 2022, 12, 1570. [Google Scholar] [CrossRef]
  16. Byskov, M.V.; Nadeau, E.; Johansson BE, O.; Nørgaard, P. Variations in automatically recorded rumination time as explained by variations in intake of dietary fractions and milk production, and between-cow variation. J. Dairy Sci. 2015, 98, 3926–3937. [Google Scholar] [CrossRef] [PubMed]
  17. Siivonen, J.; Taponen, S.; Hovinen, M.; Pastell, M.; Lensink, B.J.; Pyörälä, S.; Hänninen, L. Impact of acute clinical mastitis on cow behaviour. Appl. Anim. Behav. Sci. 2011, 132, 101–106. [Google Scholar] [CrossRef]
  18. Soriani, N.; Trevisi, E.; Calamari, L. Relationships between rumination time, metabolic conditions, and health status in dairy cows during the transition period. J. Anim. Sci. 2012, 90, 4544–4554. [Google Scholar] [CrossRef] [PubMed]
  19. Beaver, A.; Proudfoot, K.L.; von Keyserlingk, M.A. Symposium review: Considerations for the future of dairy cattle housing: An animal welfare perspective. J. Dairy Sci. 2020, 103, 5746–5758. [Google Scholar] [CrossRef]
  20. Varlyakov, I.; Slavov, T.; Grigorova, N. Ethological evaluation of a building for free housing of dairy cows. II. Behavioural activities in the winter. Agric. Sci. Technol. 2010, 2, 14–21. [Google Scholar]
  21. Varlyakov, I.; Penev, T.; Mitev, J.; Miteva, T.; Uzunova, K.; Gergovska, Z. Effect of lameness on the behaviour of dairy cows under intensive production systems. Bulg. J. Agric. Sci. 2012, 18, 126–133. [Google Scholar]
  22. Norring, M.; Valros, A. The effect of lying motivation on cow behaviour. Appl. Anim. Behav. Sci. 2016, 176, 1–5. [Google Scholar] [CrossRef]
  23. Rushen, J.; Chapinal, N.; de Passilé, A.M. Automated monitoring of behavioural-based animal welfare indicators. Anim. Welf. 2012, 21, 339–350. [Google Scholar] [CrossRef]
  24. Sammad, A.; Wang, Y.J.; Umer, S.; Lirong, H.; Khan, I.; Khan, A.; Ahmad, B.; Wang, Y. Nutritional Physiology and Biochemistry of Dairy Cattle under the Influence of Heat Stress: Consequences and Opportunities. Animals 2020, 10, 793. [Google Scholar] [CrossRef]
  25. Carabaño, M.J.; Ramón, M.; Menéndez-Buxadera, A.; Molina, A.; Díaz, C. Selecting for heat tolerance. Anim. Front. 2019, 9, 62–68. [Google Scholar] [CrossRef] [PubMed]
  26. Smith, D.; Little, B.; Greenwood, P.I.; Valencia, P.; Rahman, A.; Ingham, A.; Bishop-Hurley, G.; Shahriar, S.; Hellicar, A. A study of sensor derived features in cattle behaviour classification models. In Proceedings of the 2015 IEEE SENSORS, Busan, Republic of Korea, 1–4 November 2015; pp. 1–4. [Google Scholar]
  27. Bouten, C.V.C.; Koekkoek, K.T.M.; Verduin, M.; Kodde, R.; Janssen, J.D. A triaxial accelerometer and portable data processing unit for the assessment of daily physical activity. IEEE Trans. Biomed. Eng. 1997, 44, 136–147. [Google Scholar] [CrossRef] [PubMed]
  28. Mathie, M.J. Monitoring and Interpreting Human Movement Patterns Using a Triaxial Accelerometer. Ph.D. Thesis, UNSW Sydney, Sydney, Australia, 2003. [Google Scholar]
  29. Karantonis, D.M.; Narayanan, M.R.; Mathie, M.; Lovell, N.H.; Celler, B.G. Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans. Inf. Technol. Biomed. 2006, 10, 156–167. [Google Scholar] [CrossRef] [PubMed]
  30. Fida, B.; Bernabucci, I.; Bibbo, D.; Conforto, S.; Schmid, M. Pre-processing effect on the accuracy of event-based activity segmentation and classification through inertial sensors. Sensors 2015, 15, 23095–23109. [Google Scholar] [CrossRef] [PubMed]
  31. Barwick, J.; Lamb, D.W.; Dobos, R.; Welch, M.; Trotter, M. Categorising sheep activity using a tri-axial accelerometer. Comput. Electron. Agric. 2018, 145, 289–297. [Google Scholar] [CrossRef]
  32. Martiskainen, P.; Järvinen, M.; Skön, J.-P.; Tiirikainen, J.; Kolehmainen, M.; Mononen, J. Cow behaviour pattern recognition using a three-dimensional accelerometer and support vector machines. Appl. Anim. Behav. Sci. 2009, 119, 32–38. [Google Scholar] [CrossRef]
  33. Rajapriya, R.; Rajeswari, K.; Thiruvengadam, S.J. Deep learning and machine learning techniques to improve hand movement classification in myoelectric control system. Biocybern. Biomed. Eng. 2021, 41, 554–571. [Google Scholar]
  34. Evstatiev, B.I.; Valov, N.P.; Kadirova, S.Y.; Nenov, T.R. Implementation of a Prototype IoT-Based System for Monitoring the Health, Behavior and Stress of Cows. In Proceedings of the 2022 IEEE 9th Electronics System-Integration Technology Conference (ESTC), Sibiu, Romania, 13–16 September 2022; pp. 77–81. [Google Scholar] [CrossRef]
  35. Van Rossum, G.; Drake, F.L., Jr. Python Reference Manual; Centrum voor Wiskunde en Informatica: Amsterdam, The Netherlands, 1995. [Google Scholar]
  36. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  37. McKinney, W. Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference, Austin, TX, USA, 28 June–3 July 2010; Volume 445. [Google Scholar]
  38. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
  39. Smith, D.; Rahman, A.; Bishop-Hurley, G.J.; Hills, J.; Shahriar, S.; Henry, D.; Rawnsley, R. Behavior classification of cows fitted with motion collars: Decomposing multi-class classification into a set of binary problems. Comput. Electron. Agric. 2016, 131, 40–50. [Google Scholar] [CrossRef]
  40. Benaissa, S.; Tuyttens, F.A.; Plets, D.; Cattrysse, H.; Martens, L.; Vandaele, L.; Joseph, W.; Sonck, B. Classification of ingestive-related cow behaviours using RumiWatch halter and neck-mounted accelerometers. Appl. Anim. Behav. Sci. 2019, 211, 9–16. [Google Scholar] [CrossRef]
  41. Valova, I.; Mladenova, T. An Information System for Livestock and Pasture Surveillance. In Proceedings of the 13th National Conference with International Participation, ELECTRONICA, Sofia, Bulgaria, 19–20 May 2022; pp. 1–4. [Google Scholar] [CrossRef]
  42. Ben-Gal, I. Outlier Detection. In Data Mining and Knowledge Discovery Handbook; Springer: Berlin/Heidelberg, Germany, 2006; pp. 131–146. [Google Scholar] [CrossRef]
  43. Singh, K.; Upadhyaya, S. Outlier Detection: Applications and Techniques. Int. J. Comput. Sci. Issues 2012, 9, 307–323. Available online: https://www.proquest.com/openview/08b675f647e808f41d65e964df5b52f4/1?pq-origsite=gscholar&cbl=55228 (accessed on 1 November 2023).
  44. Liu, F.T.; Ting, K.M.; Zhou, Z.-H. Isolation Forest. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, Pisa, Italy, 15–19 December 2008; pp. 413–422. [Google Scholar] [CrossRef]
  45. Zhao, Y.; Nasrullah, Z.; Li, Z. PyOD: A Python Toolbox for Scalable Outlier Detection. J. Mach. Learn. Res. 2019, 20, 1–7. [Google Scholar]
  46. Brownlee, J. Ensemble Learning Algorithms with Python: Make Better Predictions with Bagging, Boosting, and Stacking; Machine Learning Mastery: Vermont, Australia, 2021. [Google Scholar]
  47. Kuhn, M.; Johnson, K. Applied Predictive Modeling; Springer: Berlin/Heidelberg, Germany, 2013; Volume 26. [Google Scholar]
  48. Witten, D.; James, G. An Introduction to Statistical Learning with Applications in R; Springer Publication: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  49. Berrar, D. Bayes’ theorem and naive Bayes classifier. Encycl. Bioinform. Comput. Biol. ABC Bioinform. 2018, 403, 412. [Google Scholar]
  50. Valverde-Albacete, F.J.; Peláez-Moreno, C. 100% classification accuracy considered harmful: The normalized information transfer factor explains the accuracy paradox. PLoS ONE 2014, 9, e84217. [Google Scholar]
Figure 1. General schematic of the experimental setup.
Figure 2. A cow with a collar and an experimental IoT device (a), and closeup of the experimental IoT module (b).
Figure 3. Conceptual diagram of the data collection and analysis system.
Figure 4. A sample fragment from the training database.
Figure 5. Adapted classification methodology in this study.
Figure 6. Distribution of the accelerometer data before (ac) and after (df) the outlier detection algorithm was applied.
Figure 7. Distribution of the gyroscope data before (ac) and after (df) the outlier detection algorithm was applied.
Figure 8. Class distribution for the accelerometer and gyroscope datasets with 1s piles.
Figure 9. Distribution of the gyroscope data grouped in 5s piles.
Figure 10. Comparison of the overall accuracy results for the different training algorithms, datasets, and grouping intervals.
Figure 11. F1 Score for each class individually.
Table 1. Adapted classification of the cows’ behavior.

| Behavior | Description | Database ID (Label) |
|---|---|---|
| Standing and eating | The cow is standing and actively eating grass or hay, characterized by frequent head movements. | 3 |
| Standing and ruminating | The cow is standing and ruminating, characterized by frequent bites without vertical changes in the head position. | 1 |
| Laying and ruminating | The animal is ruminating while laying. | 5 |
Table 2. Parameters for model training.

| Algorithm | Parameters |
|---|---|
| Random Forest Ensemble | Features per Split = 6 (Equation (4)), Depth = 10, Number of Trees = 1000 |
| Decision Tree | Criterion = Entropy |
| Support Vector Machines | Kernel = Polynomial, C Parameter = 10, Gamma = 0.1, Degree = 5, Class Weight = Balanced |
| Naïve Bayes | Multinomial NB, alpha = 1 |
Table 3. Distribution of the sample groups after the data resampling.

| Class | Accelerometer 1s | Accelerometer 3s | Accelerometer 5s | Gyroscope 1s | Gyroscope 3s | Gyroscope 5s |
|---|---|---|---|---|---|---|
| Standing and Eating | 7675 | 2675 | 1617 | 6824 | 2484 | 1553 |
| Standing and Ruminating | 3837 | 1304 | 782 | 970 | 368 | 246 |
| Laying and Ruminating | 5376 | 1855 | 1132 | 805 | 319 | 212 |
Table 4. Study comparison.

| Authors | ML Algorithms | Sensor | Placement | Target | Data Binning | Total # of Features | Accuracy |
|---|---|---|---|---|---|---|---|
| [4] | Decision Trees | accelerometers | right rear limb | laying, standing, walking | 3s, 5s, 10s | 7 | lying 98%, walking 67.8% |
| [6] | Decision Trees | accelerometer | collar | grazing, walking, laying, standing | 3s, 5s, 10s, 20s, 30s | 61 | 20s and 30s: 95% |
| [32] | Support Vector Machines | accelerometer | collar | standing, laying, ruminating, feeding, normal walking, lame walking, laying down, standing up | 10s | 9 | standing 87%, laying 84%, ruminating 92%, feeding 96%, normal walking 99%, lame walking 98%, laying down 100%, standing up 100% |
| [33] | Deep Neural Network, Artificial Neural Network, Linear Discriminant Analysis, Support Vector Machines, KNN, Decision Trees, Naïve Bayes | EMG sensors on people | limbs | human body movements | 5s | N/A | DNN 82%, ANN 82%, LDA 84%, SVM 82%, KNN 76%, DT 75%, NB 69% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Mladenova, T.; Valova, I.; Evstatiev, B.; Valov, N.; Varlyakov, I.; Markov, T.; Stoycheva, S.; Mondeshka, L.; Markov, N. Evaluation of the Efficiency of Machine Learning Algorithms for Identification of Cattle Behavior Using Accelerometer and Gyroscope Data. AgriEngineering 2024, 6, 2179-2197. https://doi.org/10.3390/agriengineering6030128

