**1. Introduction**

Thanks to numerous studies in computer vision applied to the medical and biomedical fields, we now have many additional tools to support specialists in their tasks [1–5]. Modern technologies have improved the acquisition, transmission, and analysis of digital images. A growing benefit also comes from the spread of fast network connections for smartphones, which allow the exchange of large amounts of clinical data, useful also for remote diagnosis or follow-up [6–8].

Segmentation and contour extraction are important steps in the analysis of digital images in the medical field, where such images are routinely used in a multitude of applications [9]. Segmentation algorithms based on structural analysis continue to be used, often as an ensemble of segmentation techniques, especially in critical applications such as lesion localization [10,11]. Other approaches, based on biased normalized cuts or lightweight techniques, have also been devised [12,13]. Many studies have also addressed the segmentation and classification of cells from digital images, almost all of them in the field of hematology. An interesting study on the classification of white blood cells (WBCs) is reported in [14]. Some studies discuss segmentation aspects only [15,16], while a neural network-based classifier of cytotypes in the hematological smear of a healthy subject was described in [17]: starting from digital scans of hematological preparations, it achieved over 95% accuracy. Many other papers report interesting results on this theme [18–20].

One of the fields that can benefit from the above technologies is nasal cytology, a branch of otolaryngology that is gaining increasing importance in the diagnosis of nasal diseases due to the simplicity and effectiveness of the diagnostic examination. In fact, nasal diseases are widespread globally: allergic rhinitis is estimated to affect 35% of the world's population, and the World Health Organization considers it a growing epidemic, as within a few years 50% of children may be allergic. Rhinosinusitis affects 4% of the world's population, nasal polyposis 5%, and non-allergic vasomotor rhinitis 15% [21].

To the best of our knowledge, to date there are no public or private laboratories that routinely examine the cell population of the nasal mucosa, as is done for hematological tests. This is for several reasons: firstly, because diagnostics based on nasal cytology have developed only recently; secondly, because the economic interest is still marginal; finally, because the spectrum of diagnosable pathologies is not as extensive as in other fields of medicine. Typically, a rhinocytologist who wants to benefit from a cytological study must independently arrange a set of personal instruments or, more frequently, carry out direct microscopic observation and manual cell counting using a special rhinocytogram.

Methods and techniques designed for hematology cannot be applied directly to nasal cytology; for example, WBCs appear in almost all cases isolated from each other, while nasal mucosa cells often appear clumped together in the smear.

The first studies on the automatic extraction and classification of cells of the nasal mucosa are reported in [22–24], where a diagnostic support system performs cell counting automatically: it uses segmentation algorithms to extract cells and a convolutional neural network to classify them. Sampling and diagnosis remain human activities carried out by the specialist, but the overall time and effort are reduced considerably, while the accuracy of the diagnoses remains unchanged or even improves. To the best of our knowledge, there are no further contributions in the literature.

A further request from stakeholders is to reduce considerably the cost of the analysis and of the instrumentation, with the aim of making the analysis itself more widely available. Therefore, the challenge we have been given is to carry out the entire evaluation of the cell population on a mass-market device, such as a smartphone, fully automatically (as shown in Appendix A). Devices with limited resources will interact with the surrounding environment and with users. Many of these devices will rely on machine learning models to decode the meaning and behavior behind sensor data, make accurate predictions, and take decisions [25]. Several research papers have focused on bringing artificial intelligence to devices with limited resources, and there have been efforts to decrease the model's inference time on the device. Machine learning developers focus on designing deep neural network models with a reduced number of parameters, thus reducing memory usage and execution latency while preserving accuracy as far as possible. It is evident that, at the moment, several problems remain to be overcome, first among which is the limited computational capacity of mobile architectures [26–34].
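To give a concrete sense of the parameter savings such designs achieve, the following sketch (an illustration of a standard mobile-oriented technique, not taken from this paper) compares the parameter count of a standard convolutional layer with that of a depthwise-separable convolution, the factorization popularized by MobileNet-style architectures:

```python
# Illustrative only: parameter counts for a standard convolution versus a
# depthwise-separable convolution (depthwise + 1x1 pointwise), a common way
# to shrink CNNs for devices with limited resources. Biases are ignored.

def standard_conv_params(k, c_in, c_out):
    # One k x k kernel spanning all input channels, per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 128, 128)        # 147,456 parameters
sep = depthwise_separable_params(3, 128, 128)  # 17,536 parameters
print(std, sep, round(std / sep, 1))           # roughly an 8x reduction
```

For a 3×3 layer with 128 input and output channels, the factorized form needs roughly one eighth of the parameters, which translates directly into lower memory use and latency on a smartphone.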

In this paper, a novel smartphone-based system is presented to support rhinocytologists during cell observation. It carries out cell extraction from the digital image of the microscopic fields. Once this is done, the specialist can either evaluate the segmented cells independently or send them to the Rhino-cyt platform [2], which also performs fully automatic classification and returns the final rhinocytogram. In this way, the specialist significantly reduces the time required for diagnosis.
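The cell-extraction step described above typically reduces, at its core, to grouping foreground pixels of a thresholded smear image into candidate cells. The following minimal sketch (an assumed, simplified illustration, not the paper's actual pipeline) shows connected-component labeling on a binary mask, where each labeled blob is a candidate cell that could then be cropped and classified:

```python
# Minimal sketch (assumed): after thresholding a stained smear image into a
# binary mask, each 4-connected blob of foreground pixels is a candidate cell.
from collections import deque

def label_cells(mask):
    """Label 4-connected foreground regions in a binary mask (list of lists).

    Returns the number of regions and a same-shaped grid of labels (0 = background).
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1                      # start a new candidate cell
                queue = deque([(y, x)])
                labels[y][x] = current
                while queue:                      # breadth-first flood fill
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
n, _ = label_cells(mask)
print(n)  # 2 candidate "cells"
```

A production system would add preprocessing (illumination correction, adaptive thresholding) and would need to split the clumped cells typical of nasal smears, but the labeling step above is the structural backbone of the extraction.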
