Article

An Open System for Collection and Automatic Recognition of Pottery through Neural Network Algorithms

by
Maria Letizia Gualandi
*,†,
Gabriele Gattiglia
and
Francesca Anichini
Department of Knowledge and Forms of Civilisation, University of Pisa, 56126 Pisa, Italy
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Heritage 2021, 4(1), 140-159; https://doi.org/10.3390/heritage4010008
Submission received: 11 December 2020 / Revised: 7 January 2021 / Accepted: 8 January 2021 / Published: 13 January 2021

Abstract:
In the last ten years, artificial intelligence (AI) techniques have been applied in archaeology. The ArchAIDE project developed an AI-based application to recognise archaeological pottery. Pottery is of paramount importance for understanding archaeological contexts; however, the recognition of ceramics is still a manual, time-consuming activity, reliant on analogue catalogues. The project developed two complementary machine-learning tools that propose identifications based on images captured on-site, optimising and economising this process while retaining the key decision points necessary to create trusted results. One method relies on the shape of a potsherd; the other on its decorative features. For shape-based recognition, a novel deep-learning architecture was employed, integrating shape information from points along the inner and outer profile of a sherd. The decoration classifier is based on relatively standard architectures used in image recognition. In both cases, training the algorithms meant facing challenges posed by real-world archaeological data: the scarcity of labelled data, extreme imbalance between instances of different categories, and the need to attend to minute differentiating features. Finally, a desktop and mobile application integrating the AI classifiers provides an easy-to-use interface for classifying pottery and storing pottery data.

Graphical Abstract

1. Introduction

Over the last decade, artificial intelligence (AI) has become widespread across science and technology. Born in 1955 [1], the different facets of AI have gone through waves of innovation before becoming ubiquitous. Machine learning (ML) algorithms were developed in the 1980s [2], but their use has become common only within the last decade, with the ability to produce huge datasets (Big Data) and the advent of neural networks. Traditionally, AI addresses tasks such as reasoning, knowledge representation, planning, learning, natural language processing (NLP), perception, and robotics. Methods include statistics, computational intelligence, and symbolic AI (AI with human-readable representations). The tools used in these tasks consist mainly of mathematical optimisation, statistical methods, and artificial neural networks (ANNs).
Within archaeology, the usefulness of AI is now being formally explored. Five to ten years ago, ML algorithms and neural networks were concepts unknown to archaeologists; now, there are sessions dedicated to AI at archaeological conferences. AI techniques have been applied in various fields of archaeology, especially for (i) the discovery of archaeological sites; (ii) the recognition and reassembly of archaeological pottery; (iii) text extraction and named entity recognition (NER); (iv) the analysis of human remains; (v) murals and graffiti drawings; and (vi) robotics. In general, archaeology benefits from AI when a vast amount of data needs to be analysed, and when complicated, subjective, highly specialised, and time-consuming activities are required (such as the identification of finds).
Artificial neural networks (ANNs) are used to manage some of the severe problems that manifest in archaeological data: incompleteness, noisiness, messiness, and non-linear relationships between the data. The techniques applied include (i) the multilayer perceptron (MLP) [3], trained with a supervised learning technique called backpropagation that permits finding the weights of a network [4]; (ii) the probabilistic neural network (PNN) [5], which works with kernel density estimation; (iii) the convolutional neural network (CNN) [6], a family of neural networks used in computer vision in which the connections between artificial neurons resemble the structure of the visual cortex; and (iv) the self-organising feature map (SOM), which employs an unsupervised competitive learning method to obtain dimensionality reduction.
Some early AI archaeological implementations focussed on the classification, seriation, and analysis of material culture, such as artistic representations [7,8], use-wear of prehistoric tools [9], historical glass artefacts, and ancient coins [10]. In recent years, the application of ML and deep learning in archaeology took a decisive turn towards the detection of archaeological sites. Examples in the detection and exploration of terrestrial and marine archaeological sites come from various projects. The Archäoprognose Brandenburg project [11] adopted a combined PNN and SOM solution to develop archaeological predictive modelling for identifying possible locations of archaeological sites in Brandenburg (Germany). In the Dzungaria Landscape project [12], a CNN was employed to detect Iron Age tombs in the Eurasian steppe; CNNs have also been used to find archaeological sites and related toponyms in historical cartography [13] and to identify pottery fragments in drone imagery [14]. A random forest algorithm has been used to detect archaeological mounds in Cholistan (Pakistan), employing a large-scale collection of multitemporal synthetic-aperture radar and multispectral images [15], and, with aerial laser scanning (ALS, lidar) data, to identify megalithic funerary structures in the region of Carnac (France) [16].
Supervised learning approaches (machine and deep learning) for the automated classification of three-dimensional (3D) architectural components (columns, facades, and more) in large datasets have also been recently explored [17]. Arch-I-Scan realised a prototype system for the detection and classification of whole pottery vessels [18].
Virtual reconstruction of artefacts from fragments has been handled in different contexts, such as automatic puzzle solving. Recently, clustering techniques have been designed to group fragments and rebuild the original image by ordering the identified pieces. An advanced variation of puzzle solving is the reassembly of archaeological artefacts. Some research teams have proposed approaches based on 3D models, using the information encapsulated in the thickness of the potsherd [19], or comparing vectors and surfaces linearly with a purpose-built algorithm (Fragmatch) [20]. The solving of archaeological puzzles using both 3D models of fragments and images has also been explored by the GRAVITATE project [21]. Reconstruction of potsherds and text has been achieved on a group of ostraka with demotic inscriptions, focusing on 2D reconstruction techniques that use a specific multilayer deep neural network (DNN) architecture, called a Siamese neural network, trained to distinguish similar pairs [22].
Archaeological texts often survive as epigraphic inscriptions. Frequently, inscriptions are damaged, fragmentary, and illegible, making them difficult to process with NLP. Pythia [23] is an automated ancient-text restoration system that recovers missing characters from damaged text using DNNs. More generally, NLP techniques have been employed in archaeology since the 1990s [24]: to identify process models from text [25], in iconographic representation research, for numismatic [26,27] and artwork studies [28], in zooarchaeology [29], and to make grey literature more accessible [30].
AI has also been applied to the study of human remains. Bewes et al. [31] developed a neural network for identifying the sex of individuals starting from 3D reconstructions of skulls based on CT scans, applying transfer learning with a pre-trained GoogLeNet fine-tuned through backpropagation. Czibula et al. [32] compared two supervised regression models, one based on an ANN and the other on genetic algorithms (GA), to estimate stature from bone measurements; the ANN achieved better results than the GA.
Geochemical [33] and archaeobotanical [34] research is now setting up different projects to develop automated identification procedures, which can boost traditionally arduous and time-consuming techniques.
ML techniques were used for automated petroglyph image segmentation with interactive classifier fusion [35], in reconstructing fresco segments [36], and with remote sensing in the Mogao Caves [37].
The use of robots has been explored in projects related to underwater exploration and museums. For the former, the VENUS project [38] used AUVs/ROVs (autonomous underwater/remotely operated vehicles) coupled with data-acquisition techniques (sonar and photogrammetry) for the underwater exploration of shipwrecks, aimed at data collection and the extraction of 3D models. A similar approach has been used to map the floor of the Mediterranean Sea around the island of Malta [39]. For the latter, many cultural institutions and museums have proposed AI solutions to engage visitors, using chatbots and robots to understand questions, communicate responses, and create paths through the museum that foster a more in-depth understanding, as well as software to automate the organisation of exhibitions. Robovie-R ver.2 [40] is a humanoid robot that delivers descriptions of artworks with movements akin to those of a human guide, using face recognition and response methods implemented with AI. The Minerva software [41] uses a multiagent system developed with distributed AI to group artefacts according to the user's criteria and arrange them in the rooms of a museum.
The present paper gives a short overview of the ArchAIDE project (Section 2), explains the methods adopted for developing the shape-based and appearance-based (decoration) recognition of potsherds from a single picture taken with a mobile device or a camera (Section 3), and discusses the results obtained and the steps followed to improve the application (Section 4). Section 5 describes the importance of data availability, and especially of open access to research data, for training the neural networks; it also points out how sharing the AI algorithms as open-source code is essential in an open-science environment. Section 6 illustrates the ease of use of the ArchAIDE system through its mobile application. The final section, Section 7, discusses the difficulties encountered and the project's future development.

2. ArchAIDE Project

Within this scenario, the ArchAIDE project (2016–2019) developed two different deep neural networks (DNNs) devoted to recognising pottery from images taken with a mobile device. One network is dedicated to image recognition (also called appearance-based recognition, for pottery decorations), the other to shape recognition (for pottery types). ArchAIDE was conceived as a response to well-defined archaeological needs. During archaeological investigations, pottery is the most common type of find, and its analysis and classification yield much information about archaeological contexts, from chronology to function and social structures. Ceramic identification is a repetitive and time-consuming activity that relies on the archaeologist's expertise and is usually performed by matching potsherds to exemplars in catalogues of archaeological typologies (Figure 1). The ArchAIDE project set out to optimise this identification process, developing a new system that simplifies the practice of pottery recognition in archaeology through an AI approach, without replacing the knowledge of domain specialists. On the contrary, ArchAIDE keeps archaeologists at the centre of the decision-making process within the identification workflow.
To achieve its goals, the ArchAIDE project created (i) a digital comparative collection for pottery types [42], decorations [43], and stamps [44], combining digital collections, digitised paper catalogues, and data acquired through photo campaigns; (ii) a semi-automated system for paper catalogues’ digitisation [45]; (iii) a multilingual thesaurus of descriptive pottery terms, mapped to the Getty Art and Architecture Thesaurus, which includes French, German, Spanish, Catalan, Portuguese, English, and Italian [46]; (iv) two distinct neural networks for appearance-based and shape-based recognition (partially discussed here [47,48]); and (v) an app connected to the AI classifiers to support archaeologists in recognising potsherds during excavation and post-excavation analysis, with an easy-to-use interface.
The ArchAIDE system is based on a pipeline where archaeologists take a picture of a potsherd and send it to the specifically trained classifier, which returns five suggested matches from the comparative collections. Once the correct type is identified, the information is linked to the photographed sherd and stored within a database that can be shared online (Figure 2).
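As a sketch, this pipeline is a thin client around a trained classifier service. The function and type names below are illustrative assumptions, not the project's actual code:

```python
from dataclasses import dataclass

@dataclass
class Match:
    type_id: str   # identifier of a pottery type in the comparative collection
    score: float   # relevancy score returned by the classifier

def classify_sherd(image_bytes: bytes, classifier) -> list:
    """Send one potsherd photo to a trained classifier and keep the five
    highest-scoring candidate types, as in the ArchAIDE pipeline."""
    scores = classifier(image_bytes)              # {type_id: score}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [Match(t, s) for t, s in ranked[:5]]

# Toy stand-in for the trained neural network (illustrative type names).
fake_classifier = lambda img: {"Conspectus 1": 0.41, "Conspectus 3": 0.22,
                               "Conspectus 12": 0.15, "Conspectus 18": 0.09,
                               "Conspectus 20": 0.07, "Conspectus 33": 0.06}
top5 = classify_sherd(b"...", fake_classifier)
```

Once the archaeologist confirms one of the five suggestions, the chosen identifier is what gets linked to the stored sherd record.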

3. Materials and Methods

The set of tools developed by the project addresses two scenarios: (i) when the pottery is undecorated, the identification relies on the shape (i.e., profile’s geometry) of the sherd; (ii) if decorations (i.e., colours and patterns) are present, classification is usually based on those, since they can provide a more reliable diagnostic than the shape of the sherd.
The first goal of ArchAIDE was to realise a proof of concept. The selection of pottery classes was based on the need (i) to find types whose identification relied on shape-based and decoration-based characteristics; and (ii) to realise a system that could have a real-world implementation. The decision was made to choose four classes: amphorae manufactured throughout the Roman world between the late 3rd century BCE and the early 7th century CE (Figure 3a); Roman Terra Sigillata manufactured in Italy, Spain, and South Gaul between the 1st century BCE and the 3rd century CE; Majolica produced in Montelupo Fiorentino (Italy) between the 14th and 18th centuries; and medieval and post-medieval Majolica from Barcelona and Valencia (Spain) (Figure 3b).

3.1. Shape-Based Recognition

Since the goal was to aid archaeologists in the field, we tackled the classification of a potsherd from a single picture of its profile. A significant challenge in building the necessary AI tools is that one cannot obtain sufficient real-world samples to train neural networks. Furthermore, given its variability, an archaeological dataset would contain only a small fraction of the possible sherds. Instead, we defined each class (i.e., a pottery type) by two-dimensional drawings of the profile of the complete vessel. Whereas the drawing describes the geometry of the entire vessel's profile, a real potsherd is a part of it (often a tiny one) that contains minimal information about the shape as a whole. Consequently, the recognition tool was designed as a two-phase process, in which the classification algorithm was first developed on one dataset and then validated on other datasets for different types of pottery. Separating the datasets avoids overfitting due to multiple hypothesis testing, giving better confidence in the results. The dataset used in the first phase was composed of 435 sketches of Terra Sigillata Italica (TSI), grouped into 65 standardised top-level classes (i.e., the top-level types defined in the Conspectus catalogue [49]). From these drawings, class-balanced synthetic data (i.e., 3D models) were created, while the real-world sherd outlines were reserved solely for testing. The real-world outlines were traced from potsherds photographed in archaeological warehouses throughout Europe using the dedicated ArchAIDE mobile app (see Section 6). The real-world test dataset contained 240 extracted outlines from 29 different top-level classes; nevertheless, the classifier was trained on all 65 classes.
On the dataset side, 3D models of each pottery type were reconstructed by automatically extracting the profile of the entire vessel from the 2D drawing, rotating the profile around its revolution axis, and shattering the resulting solid to derive synthetic sherds [45] (Figure 4). To circumvent the computational overhead of full 3D reconstruction, we treated each point of the profile as tracing a circle around the vertical axis, generated a random 3D plane, calculated where each circle intersects the plane, and connected the intersection points along the profile to generate the fracture face. To create a more realistic synthetic fracture, we reduced its size to match the dimensions of real potsherds [48].
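This geometric shortcut can be sketched as follows, assuming a profile given as (radius, height) pairs and a plane n·p = d. A horizontal circle of radius r at height z intersects the plane where n_x·r·cosθ + n_y·r·sinθ + n_z·z = d, which is solvable in closed form. The code is an illustration of the idea, not the project's implementation:

```python
import math

def circle_plane_intersections(r, z, normal, d):
    """Intersect the horizontal circle of radius r at height z (centred on
    the revolution axis) with the plane n.p = d. Returns 0, 1 or 2 angles."""
    nx, ny, nz = normal
    a, b, c = nx * r, ny * r, d - nz * z       # a cos(t) + b sin(t) = c
    rho = math.hypot(a, b)
    if rho == 0 or abs(c) > rho:
        return []                              # circle misses the plane
    phi, delta = math.atan2(b, a), math.acos(c / rho)
    return [phi - delta, phi + delta]

def synthetic_fracture_edge(profile, normal, d):
    """Walk along a (radius, height) profile, collecting one intersection
    point per circle, to approximate one edge of a fracture face."""
    edge = []
    for r, z in profile:
        for theta in circle_plane_intersections(r, z, normal, d)[:1]:
            edge.append((r * math.cos(theta), r * math.sin(theta), z))
    return edge

# A crude conical profile cut by the vertical plane x = 2.
profile = [(3.0, 0.0), (3.2, 1.0), (3.5, 2.0)]
edge = synthetic_fracture_edge(profile, normal=(1.0, 0.0, 0.0), d=2.0)
```

Connecting both intersection branches along the profile, rather than just one as here, would close the fracture face.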
The network was trained around the distinctive characteristics of archaeological profiles, including the need to separate the inner and the outer profile of the sherd, the relevance of the position of the points along the profile outline, the intrinsic noise in the tracing procedure, and the need to overcome sub-optimal data-acquisition processes [48] (Figure 5). The architecture of ArchAIDE's classifier is similar to PointNet [50]: it applies a local computation at each element and then uses pooling to achieve a representation that is invariant to the order of the elements. Under mild conditions, such pooling is the only way to obtain this invariance [50,51]. Novel applications in shape classification include PointNet++ [50] and PointCNN [52]. While most of the preceding work has been directed at the identification of 3D point clouds, the ArchAIDE network encodes a 2D outline and takes advantage of the information that arises from the position of the points along the outline.
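The order-invariance property that pooling provides can be illustrated with a minimal NumPy sketch (random, untrained weights; the real network also encodes positional and angle information, which this toy omits):

```python
import numpy as np

rng = np.random.default_rng(0)

def pointnet_embed(points, W1, W2):
    """PointNet-style encoder: a shared per-point transformation followed
    by max pooling, which makes the output invariant to point order."""
    h = np.maximum(points @ W1, 0.0)   # shared "MLP" applied to every point
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)               # symmetric pooling over all points

# 2D outline points (x, y); extra per-point features such as the local
# tangent angle could be concatenated as additional columns.
outline = rng.normal(size=(120, 2))
W1, W2 = rng.normal(size=(2, 32)), rng.normal(size=(32, 64))

emb = pointnet_embed(outline, W1, W2)
emb_shuffled = pointnet_embed(rng.permutation(outline), W1, W2)
assert np.allclose(emb, emb_shuffled)  # same embedding, any point order
```

Because the max over rows ignores row order, shuffling the outline points leaves the embedding unchanged; a classifier head on top of `emb` inherits this invariance.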
In the ranking, OutlineNet's real-world top 2 classification rate was 1.5 times its top 1 classification rate, suggesting that the classes were easily confused. Ablation experiments (i.e., a method that assesses the performance of the NN by removing specific components to understand their contribution to the model) showed that the separation of inner and outer profiles, angle information, group-hot encoding (i.e., the conversion of categorical data into a form a NN can process), and adaptive sampling each add to the overall top-K performance, even when the changes in top 1 accuracy were small. Similarly, augmentation also contributed to the top-K result, without a significant impact on the top 1 accuracy. Top-K processing returns the list of K results with the highest scores, assuming that all K results are independent; in practice, some of the top-K results can be very similar to each other or redundant. A plausible explanation is that all these modifications to the model and training matter less for samples that are carefully collected and informative, and mainly improve accuracy on the lower-quality samples.
This fits with the tool's function as a reference for pottery specialists, who would be glad to evaluate a shortlist of results as part of the obligatory expert validation, but would be disappointed by a tool where the correct result is often omitted entirely.
Following the first-phase development on the Terra Sigillata Italica (TSI) dataset, three other datasets were added. The first was a supplementary TSI dataset that includes the profiles of 96 additional sherds belonging to 11 classes not considered during the phase-I test; the other two contain Terra Sigillata Hispanica (TSH) and South Gaulish Terra Sigillata (TSSG) data. These also describe Terra Sigillata pottery, but there is no overlap in classes between TSI, TSH, and TSSG.
On the new TSI test set, using the same model from phase I (without any retraining or adaptation), the accuracy values obtained were even better than on the phase-I dataset. Additionally, for the datasets containing new typologies, similar or better accuracy (measured relative to the number of classes) was obtained using precisely the same training method, without any adaptations (Figure 6).

3.2. Appearance-Based Recognition

Pottery decorations can be classified based on the presence and combination of colours, the type of patterns, the areas that are decorated, and more. In this case, a transfer-learning technique was applied, as is common in domains characterised by a paucity of data. A pre-trained version of the ResNet-50 network [53], trained on the ImageNet collection [54], was employed. Images were scaled to 224 × 224 pixels to fit the expected input dimensions of the ResNet model. To train the network to work with varying amounts of decoration and background, we added augmented versions of each image to the original dataset, scaling it to four different sizes. From each scaled image, we created three versions: unflipped, horizontally flipped, and vertically flipped. All these images were then cropped, leaving just the centre square. As a result, 12 images were obtained from each original one, increasing the dataset from around 8000 images to about 100,000.
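A minimal sketch of this 12-fold scheme (four rescalings, each unflipped / horizontally flipped / vertically flipped, then centre-cropped), using nearest-neighbour rescaling as a stand-in for the project's actual resizing and scale factors:

```python
import numpy as np

def augment(image, scales=(1.0, 0.8, 0.6, 0.4), crop=64):
    """Produce 12 variants of one square decoration image: 4 scales x 3
    flip states, each centre-cropped to a crop x crop square."""
    variants = []
    for s in scales:
        # Nearest-neighbour rescale (illustrative; not the project's method).
        n = max(crop, int(image.shape[0] * s))
        idx = np.arange(n) * image.shape[0] // n
        scaled = image[np.ix_(idx, idx)]
        for version in (scaled, scaled[:, ::-1], scaled[::-1, :]):
            top = (version.shape[0] - crop) // 2
            variants.append(version[top:top + crop, top:top + crop])
    return variants

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
out = augment(img)
assert len(out) == 12 and all(v.shape == (64, 64) for v in out)
```

Smaller scale factors keep less background around the decoration, so the network sees the same motif at varying amounts of context.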
In the first testbeds, the most challenging factor affecting identification was varying illumination. To improve robustness, we simulated different white-balance, brightness, and contrast adjustments. The luminosity ("brightness") of all the pixels within each image was multiplied by a randomised factor to simulate different lighting conditions. An analogous random multiplicative factor was applied to each channel in the image to simulate different white-balance setups: each red/green/blue channel was multiplied by a different random constant factor, changing the ratio between the colours.
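Sketched in NumPy (the factor ranges are illustrative assumptions, not the project's values):

```python
import numpy as np

rng = np.random.default_rng(42)

def photometric_jitter(image):
    """Simulate varying lighting as described above: one random brightness
    factor for the whole image, plus an independent random factor per
    colour channel to mimic different white-balance settings."""
    brightness = rng.uniform(0.7, 1.3)                 # global luminosity
    white_balance = rng.uniform(0.8, 1.2, size=3)      # per R/G/B channel
    jittered = image * brightness * white_balance      # broadcast over HxWx3
    return np.clip(jittered, 0.0, 1.0)

img = rng.uniform(size=(8, 8, 3))   # toy RGB image with values in [0, 1]
aug = photometric_jitter(img)
```

Applied on the fly during training, each epoch then sees the same sherd under a slightly different simulated illumination.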
Moreover, the imaging conditions (i.e., background and ruler) varied significantly, leading to an inherent bias. To avoid this conditioning, the foreground was extracted automatically from the training images using the GrabCut algorithm [55] (Figure 7).

4. Results

The development of the two neural networks was extremely challenging. In particular, we faced: (i) the paucity of real-world data to train the networks; (ii) the partiality of the potsherd in comparison to the whole object and its high variability due to a random breakage process; (iii) the non-informativeness of a large portion of the sherds, among both the synthetic and real-world data; (iv) the similarity between types which can cause ambiguity in the classification; and (v) the noisiness of the acquisition process due to the procedure for extracting and scaling the profile from potsherd images (shape), and the variability in illumination and background (decorations).
Most neural network loss functions are prone to sacrificing challenging classes to improve the average accuracy across all classes. Nonetheless, a reference tool is more valuable when it achieves less obvious identifications, i.e., when it can also recognise less common types. To tackle the heterogeneous and unbalanced nature of the data, the algorithm was trained with a novel weighting technique that considers both the error on each ground-truth class and the false positives in each class (ground truth means checking the neural network's results for accuracy against the real world). This reweighting scheme accounts for both the difficulty of correctly classifying samples of a given class and the frequency with which samples are wrongly assigned to it. The achieved results show quite good recognition accuracy in the face of these challenges.
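One plausible way to realise such a weighting, sketched from the description above; this formula is hypothetical, and the project's exact scheme is described in [48]:

```python
import numpy as np

def class_weights(confusion, smoothing=1.0):
    """Upweight a class both when its own samples are often missed (low
    recall) and when other classes' samples are wrongly assigned to it
    (false positives), normalising so the weights sum to one."""
    confusion = np.asarray(confusion, dtype=float)
    per_class = confusion.sum(axis=1)               # ground-truth counts
    missed = per_class - np.diag(confusion)         # errors per true class
    false_pos = confusion.sum(axis=0) - np.diag(confusion)
    w = (missed + false_pos + smoothing) / (per_class + smoothing)
    return w / w.sum()

# Rows = true class, columns = predicted class.
conf = [[50,  2, 0],
        [10,  5, 5],
        [ 0,  1, 9]]
w = class_weights(conf)
```

On this toy confusion matrix, the hard, rare second class receives the largest weight, which is the behaviour the paragraph above argues for.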
The full development of the algorithm followed a two-phase process. In the first phase, the method was applied to one dataset of potsherds of one specific family; in the second, the same method, pipeline, and parameters were applied to three additional datasets. With the phase-I dataset (composed of 65 classes), almost 74% of the sherds were identified within the top 10 results. Adding the three new datasets (the supplementary Terra Sigillata Italica dataset, Terra Sigillata Hispanica (TSH), and South Gaulish Terra Sigillata (TSSG)), without any change to the pipeline, we reached 81%, 68%, and 60% top 10 accuracy for 65, 98, and 94 classes, respectively. Ranking is essential in information retrieval because it establishes the relative order between the classes, placing classes with a high degree of relevancy above those with a low degree of relevancy. Hence, the ArchAIDE system works as a reliable reference tool in the field, allowing archaeologists to narrow the list of relevant types to be considered for each potsherd.
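The top-K accuracy quoted throughout this section is a straightforward metric, computable as:

```python
def top_k_accuracy(ranked_predictions, true_labels, k=10):
    """Fraction of sherds whose true type appears among the first k
    ranked suggestions returned by the classifier."""
    hits = sum(true in ranked[:k]
               for ranked, true in zip(ranked_predictions, true_labels))
    return hits / len(true_labels)

# Toy example: three sherds, each with a ranked list of candidate types.
preds = [["A", "B", "C"], ["B", "A", "C"], ["C", "B", "A"]]
truth = ["A", "C", "B"]
```

By construction the metric is monotone in k, which is why the top 10 figures above are higher than the corresponding top 1 figures.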
The evaluation of the shape-based identification was done both on the captured real-world data (used in the testing phase) and in an end-to-end fashion, with users capturing new photos, annotating them, and running the classification algorithm.
The end-to-end evaluation used 381 different pictures of TSI sherds, covering 42 of the 65 types. Most images were taken with a smartphone or a tablet (as would be the case in the field), with only 25 pictures taken with a regular camera. The average mobile-app top 5 accuracy was 50.8% and the top 1 accuracy was 18.9%. This is slightly lower than the 22.0% top 1 and 57.9% top 5 accuracy measured on the test data, but these results are still useful for archaeologists.
The results on the testing data are reported in Table 1:
The assessment of the decoration recognition was performed on both the mobile and desktop applications. The results of the classification are reported in Table 2:
The evaluation was performed on 49 different genres (out of 84) using more than 820 images, taken both with mobile devices (700 by phones and tablets) and with a camera (120). Results show that accuracy in both applications was not affected by the lighting, giving similar results under artificial and natural light.

5. Open ArchAIDE

As previously discussed, one of the most complex aspects of the practical application of AI is not the development of the algorithms themselves, but the creation of the datasets used to train them. Archaeology is widely digitised, but rarely datafied [56]. Datafication is essential because AI algorithms need data, preferably Big Data, that are also FAIR (findable, accessible, interoperable, and reusable).
The ArchAIDE neural networks likewise rely on a vast amount of data from digital collections and paper catalogues (necessary for creating the digital comparative collections included in the reference database) and from photography campaigns (for creating the training datasets). The project used two main digital collections: "Roman Amphorae: a digital resource" [57], created by Simon Keay and David Williams of the University of Southampton and published as open data on the Archaeology Data Service, which includes the principal types of Roman amphorae produced between the late 3rd century BCE and the early 7th century CE; and the "CERAMALEX" database [58], a proprietary database of the German and French excavations in Alexandria, Schedia, and Marea, available thanks to a partnership with the University of Cologne. In addition to these two collections, printed catalogues in the form of books and papers were digitised to populate the ArchAIDE database.
To manage correctly the material that falls under copyright or database protection, the EU directives on copyright (2001/29/EC) and database protection (96/9/EC) were analysed [59]. The scientific-research exception permitted the implementation of the project, to the extent justified by its non-commercial purpose and provided that the source and the authors' names were mentioned. To train the algorithms, multiple photo campaigns were also carried out in several archaeological warehouses, with the aim of obtaining a dataset of images for all the chosen ceramic classes. Since it was not possible to collect all the data in one warehouse, this task required a significant effort, involving more than 30 different institutions in Austria, Italy, and Spain. Other images were collected through the participation of associates, who sent pictures of their assemblages from many countries. Detailed guidelines were prepared to help consortium partners and project associates take images of sherd profiles suitable for training the neural network. This whole procedure, which included finding, classifying, photographing, and digitally storing the sherds, was very time-consuming, as images of at least ten different potsherds for every ceramic type were needed to provide enough training information for the algorithm. It became apparent that not every top-level type and sub-type could be represented: in some instances, the rarity of certain types and the significant number of unclassified sherds in the warehouses made it impossible to reach the amount needed. Overall, 3498 sherds were photographed for training the shape-based recognition model. For appearance-based recognition, a dataset of 13,676 pictures was collected through multiple photography campaigns.
Participating in the H2020 open data pilot, ArchAIDE was committed to creating sustainable outputs wherever the project held the copyright. Unfortunately, not all the collected data could be disseminated as open data: the research exceptions allowed by the EU directives [59] do not mean the ArchAIDE project automatically holds the copyright to the newly digitised or remixed data, and negotiation with copyright holders will be necessary to make these data available outside the project. ArchAIDE demonstrates that paper catalogues, once digitised, can be actively reused, even many years after their first publication. This opens the possibility of reaching agreements with publishers and other data providers to make their resources available in new ways, "with a tangible benefit (seeing their data in use within the app), thus furthering the long-term discourse around making research data open and accessible" [60]. The data owned by the project, i.e., the multilingual vocabularies, the videos created by the project, and the 2D and 3D models created from the ADS Roman Amphorae digital resource [57], were instead made available for download [46] (Figure 8). The ArchAIDE archive contains 2D vector drawings in SVG format and interactive 3D models navigable through the 3DHOP 3D viewer [61], which can also be downloaded for 3D printing (Figure 9). These models exemplify an excellent standard of best-practice reuse: when the Roman Amphorae digital resource was deposited in 2005, creating automated 2D and 3D models for training a neural network could not have been an envisioned use. As 2D and 3D models were produced for each type included in the digital resource, it was possible to link the two archives, amplifying their mutual usefulness.
It was also hoped that the thousands of photos taken by the project for training the algorithms might result in new comparative collections that could be deposited as open research data in the ArchAIDE archive. Still, in many European countries copyright on cultural heritage is very restrictive and did not allow us to make available the images of potsherds taken by ArchAIDE partners in national and regional collections. Showing the usefulness of these data within the ArchAIDE application might help convince national cultural-heritage institutions to move towards more open data policies. Finally, the source code and neural network models are publicly available as open source in a GitHub repository [62], allowing reuse and future development by other researchers. Although all the data collected by users are, by definition, private and are not published, and all system components are designed to comply with this privacy statement, the system offers the option to publish the data as open data. Sponsoring the open data philosophy and the EU open data pilot, ArchAIDE encourages users to share their data with the community, while leaving each user free to choose whether to do so.

6. The App

Mobile and desktop applications were developed to make ArchAIDE fully operational. Their functionality was designed taking into account the workflow of pottery analysis, from discovery in the field to post-excavation examination, considering the environmental contexts in which these activities are performed (warehouses, remote places, etc.) and the related constraints. Through continuous feedback from the archaeological companies involved in the consortium and from external associates who collaborated with the project [63], it was possible to collect suggestions for automating this workflow, improving the design, and generating new prototypes. In the end, various needs were taken into consideration, from use as a recognition tool to collecting and storing data in the form of digital assemblages. The design prioritised intuitive access and ease of use (Figure 10). The final result is a digital ecosystem in which mobile and server-side applications interact through an API server that mediates all communications and activities.
The ArchAIDE desktop web server and the ArchAIDE mobile application provide search and retrieval tools to access the reference database and the classification functionalities. The choice for this requirement fell on Liferay 7.1, an open-source portal server technology widely used to build medium and large web portals. The reference database and the desktop website implement a single sign-on infrastructure based on CAS (Central Authentication Service), so that the app, the reference database, and the desktop website share the same user archive. The shape recognition and decoration recognition model servers implement pottery type prediction as a single service. In the first case, the input is an SVG file representing the outer and inner profile of a sherd fracture; in the second, the input is an image of the sherd surface. The result, returned as a JSON array, is a list of ranked ceramic type (or decoration) identifiers paired with relevancy scores. The ArchAIDE mobile application also gives access to the “my sites” area, dedicated to registered users, where it is possible to store information about sites and assemblages. The mobile application was designed to allow use where internet connectivity is lacking, such as in storehouses or remote rural areas. In these environments, the app permits storing new images of potsherds and browsing the reference database: it registers the information locally when offline and then saves it to the server once online (Figure 11).
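As a minimal sketch of how a client might consume such a response, the snippet below parses a hypothetical JSON array of ranked identifiers and relevancy scores (the field names `type_id` and `score` are assumptions, not the actual ArchAIDE API schema):

```python
import json

# Hypothetical shape of the JSON array returned by a recognition model
# server: ceramic type identifiers paired with relevancy scores.
response_body = """
[
    {"type_id": "dressel_1A", "score": 0.61},
    {"type_id": "lamboglia_2", "score": 0.09},
    {"type_id": "dressel_1B", "score": 0.22}
]
"""

def top_k(results, k=5):
    # Sort by relevancy score, best match first, as presented in the app.
    ranked = sorted(results, key=lambda r: r["score"], reverse=True)
    return [(r["type_id"], r["score"]) for r in ranked[:k]]

results = json.loads(response_body)
print(top_k(results))
# [('dressel_1A', 0.61), ('dressel_1B', 0.22), ('lamboglia_2', 0.09)]
```

In the app, each of the top-ranked identifiers would then be linked back to the corresponding entry in the reference database for verification by the user.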
To sum up, access to the reference database and the automatic classification tools is available to all users without any registration. Registration is mandatory for storing and managing information about sites, assemblages, and sherds (e.g., classification information obtained from the classifier, or the provenance of a sherd that belongs to an assemblage from a site), which is stored in the local memory of the device and on the ArchAIDE server. The ArchAIDE app is free and available for Android and iOS on the Google Play Store and Apple App Store, respectively.

7. Discussion

ArchAIDE has shown the ability of artificial intelligence to identify archaeological pottery, but it has also pointed out some of the challenges that AI applications in archaeology have to deal with. The first is the amount of data necessary for training neural networks. Despite popular perception, one of the most complex aspects of the practical application of AI is not the development of the algorithm itself, but the creation of the dataset used to train it. Archaeology is widely digitised but rarely datafied [64], and data availability represents a critical aspect of AI applications. AI algorithms need data, preferably Big Data that is also FAIR (findable, accessible, interoperable, and reusable), as well as consolidated, persistent digital infrastructures. However, this is not enough, because vast amounts of data are often unavailable in archaeology, and data is frequently unusable due to copyright or legislation. Collections accessible in digital format, both for open reuse and as comparative data for AI applications, such as the open databases of the Samian Research of the Roman-Germanic Central Museum [65], Roman Open Data [66], or the already mentioned Roman Amphorae: a digital resource [57], are extremely rare. Furthermore, producing the necessary training and comparative data is time-consuming and demanding, and until this can be addressed, the ability of archaeology to use AI to answer research questions will be irregular, producing low-quality results. In the case of ArchAIDE, this resulted in a massive effort to digitise paper catalogues and collect primary data through time-consuming photo campaigns. These allowed us to gather around 17,000 pictures in total, against minimum thresholds of 10 real-world potsherd images per type for the shape-based algorithm and 100 real-world potsherd images per decoration genre for the appearance-based algorithm.
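The minimum-threshold criterion above can be sketched as a simple filter over labelled image collections (the label lists here are hypothetical stand-ins for the photo campaigns, not project data):

```python
from collections import Counter

# Hypothetical label lists standing in for the collected photographs.
shape_labels = ["dressel_1A"] * 12 + ["dressel_1B"] * 7
decoration_labels = ["lustro"] * 120 + ["spirali"] * 80

def trainable_classes(labels, min_images):
    # Keep only classes that meet the minimum number of real-world images.
    counts = Counter(labels)
    return sorted(c for c, n in counts.items() if n >= min_images)

# Thresholds reported above: 10 images per type (shape-based),
# 100 images per decoration genre (appearance-based).
print(trainable_classes(shape_labels, 10))        # ['dressel_1A']
print(trainable_classes(decoration_labels, 100))  # ['lustro']
```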
The second challenge was that it was not feasible to design a single image recognition system that could identify pottery using decoration-based and shape-based characteristics simultaneously; it became evident that two different algorithms were necessary. If needed, ceramic classes for which both shape and appearance data are available can be recognised using the two classifiers together to obtain more detailed results. Moreover, the project represents a proof of concept, and new experiments could be conducted with other ceramic classes.
The third challenge was that archaeological classification is not based on shape or decoration alone. Archaeologists, and especially pottery specialists as domain experts, use other considerations, such as location, the composition of the assemblage, fabric, and more, as elements that permit filtering out some classes. At present, these elements are not captured in the ArchAIDE scheme; fabric, for instance, is not recognisable from a picture taken with a mobile device, owing to technological and methodological limitations. Nevertheless, fabric and other elements can be employed to filter the information on top of the class ranking predicted by ArchAIDE. Consequently, we can assume that the gap between ArchAIDE and human archaeologists in distinguishing ceramic types based on their shape or decoration is probably much smaller than the achieved error rates suggest. Moreover, the error rates are probably exaggerated by problems related to the correct labelling of potsherds: labels were gathered from catalogues and established collections, and in some cases mistakes about the exact provenance of the assignment or the ground-truth classification are likely to be present.
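Such post hoc filtering can be illustrated with a minimal sketch (not the ArchAIDE implementation; class names and scores are hypothetical), where expert knowledge about the context restricts the predicted ranking:

```python
# Hypothetical ranking produced by a classifier: (class, relevancy score).
ranking = [
    ("dressel_1A", 0.45),
    ("gauloise_4", 0.30),
    ("dressel_1B", 0.15),
]

# Classes an expert considers plausible given fabric, location,
# and the composition of the assemblage.
plausible = {"dressel_1A", "dressel_1B"}

def filter_ranking(ranked, allowed):
    # Drop predicted classes incompatible with the archaeological context,
    # preserving the original order of the remaining candidates.
    return [(cls, score) for cls, score in ranked if cls in allowed]

print(filter_ranking(ranking, plausible))
# [('dressel_1A', 0.45), ('dressel_1B', 0.15)]
```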
ArchAIDE developed a novel data generation technique, a new shape representation scheme, and an original reweighting method to deal with a large set of compounding challenges and a real-world cross-modality matching problem. Thanks to these innovations, ArchAIDE provides a working application for a real-world scenario and a case study of deep learning applied to real-world data, where the “sim2real” domain shift is broad and most conventional assumptions are widely disrupted.
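To give a flavour of what reweighting against class imbalance involves, here is a generic inverse-frequency sketch; it is illustrative only and is not the project's own reweighting method:

```python
from collections import Counter

# Hypothetical, extremely imbalanced label distribution, as is common
# with archaeological categories.
labels = ["A"] * 90 + ["B"] * 9 + ["C"]

counts = Counter(labels)
total = len(labels)
num_classes = len(counts)

# Weight each class inversely to its frequency, so that rare classes
# contribute as much to the training loss as common ones.
weights = {c: total / (num_classes * n) for c, n in counts.items()}

print(weights["C"] > weights["B"] > weights["A"])  # True
```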
Finally, ArchAIDE has demonstrated that it may be used for a variety of pottery types, and potentially other artefact types as well, if the necessary comparative data can be gathered. To keep the system fully operational and useful to archaeologists, new catalogues must be added to the reference database, together with training datasets, so that more ceramic classes become recognisable. Since the end of the project (May 2019), the MAPPALab, a research unit of the University of Pisa, has pursued this goal (Figure 12). In this period, decorations and types of Maiolica Arcaica (a medieval tin-glazed ware) produced in Pisa were added; at the moment, all these data are available to users as a comparative collection, and in the coming months, the data collected will be used to train two specific neural networks and test their performance. Bronze Age pottery from central Italy and Roman common ware are now being added by a research team composed of researchers from the Museo delle Civiltà in Rome, the University of Cassino, the Deutsches Archäologisches Institut in Rome, and the Italian General Directorate for Education, Research and Cultural Institutes of the Ministry for Cultural Heritage and Activities and Tourism; these collections will be available to users in the coming months. Work on Bronze Age pottery represents both an opportunity and a challenge for ArchAIDE. The recognition algorithms were developed with standardised pottery productions such as Terra Sigillata, benefiting from a long tradition of classification and analysis [49]. Working with Bronze Age pottery means stress-testing the algorithms to demonstrate that they can also work well with the less standardised pottery productions common in historical periods other than the Roman period and in non-Mediterranean archaeology. This could lead to an overall improvement, broader collaboration, and wider implementation of the ArchAIDE system.

Supplementary Materials

The following materials: ArchAIDE Multilingual Pottery Vocabularies (ArchAIDE Mappings; ArchAIDE Triples; ArchAIDE Wordlists); ArchAIDE 2D and 3D Pottery Models (3D Models and Vector Images); ArchAIDE Videos (Project Videos; Partner Videos) are available online at https://archaeologydataservice.ac.uk/archives/view/archaide_2019/index.cfm.

Author Contributions

Introduction, G.G.; ArchAIDE project, M.L.G.; Materials and Methods, G.G.; Results, G.G.; Open ArchAIDE, F.A.; The App, F.A.; Discussion, F.A., G.G., and M.L.G.; final revision, G.G. and F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EU Horizon 2020 programme “Reflective societies: cultural heritage and European identity”, Grant Agreement No. 693548.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in Archaeology Data Service at https://doi.org/10.5284/1050896.

Acknowledgments

We thank all the members of the ArchAIDE team (http://www.archaide.eu/teams). In relation to this paper, special thanks go to Nachum Dershowitz, Barak Itkin, and Lior Wolf for the work done in developing the neural networks; Nevio Dubbini for the statistical analysis of the applications; Massimo Zallocco and the INERA team for developing the mobile and desktop application; and Julian Richards, Holly Wright, and Tim Evans for data curation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Andresen, S.L. John McCarthy: Father of AI. IEEE Intell. Syst. 2002, 17, 84–85. [Google Scholar] [CrossRef]
  2. Crevier, D. AI: The Tumultuous History of the Search for Artificial Intelligence; Basic Books: New York, NY, USA, 1993. [Google Scholar]
  3. Baum, E.B. On the Capabilities of Multilayer Perceptrons. J. Complex. 1988, 4, 193–215. [Google Scholar] [CrossRef] [Green Version]
  4. Boden, M. A Guide to Recurrent Neural Networks and Backpropagation. Dallas Proj. 2002, 24, 1–10. [Google Scholar]
  5. MacKay, D.J.C. A Practical Bayesian Framework for Backpropagation Networks. Neural Comput. 1992, 4, 448–472. [Google Scholar] [CrossRef]
  6. Valueva, M.V.; Nagornov, N.N.; Lyakhov, P.A.; Valuev, G.V.; Chervyakov, N.I. Application of the Residue Number System to Reduce Hardware Costs of the Convolutional Neural Network Implementation. Math. Comput. Simul. 2020, 177, 232–243. [Google Scholar] [CrossRef]
  7. Barceló, J.A. Seriación de Datos Arqueológicos Ambigüos o Incompletos. Una Aplicacion de Las Redes Neuronales. Apl. Inf. Arqueol. Teoría Sist. 1995, 2, 99–116. [Google Scholar]
  8. Di Ludovico, A.; Ramazzotti, M. Reconstructing Lexicography in Glyptic Art: Structural Relations between the Akkadian Age and the Ur III Period. In Proceedings of the 51st Rencontre Assyriologique Internationale, Chicago, IL, USA, 18–22 July 2005; pp. 263–280. [Google Scholar]
  9. Van den Dries, M.H.; Archeology, F. Archaeology and the Application of Artificial Intelligence: Case-Studies on Use-Wear Analysis of Prehistoric Flint Tools. Available online: https://openaccess.leidenuniv.nl/handle/1887/13148 (accessed on 18 November 2020).
  10. Van der Maaten, L.; Boon, P.; Lange, G.; Paijmans, H.; Postma, E. Computer Vision and Machine Learning for Archaeology. In Proceedings of the 34th Computer Applications and Quantitative Methods in Archaeology, Fargo, ND, USA, 18–21 April 2006; pp. 112–130. [Google Scholar]
  11. Ducke, B. Archaeological Predictive Modelling in Intelligent Network Structures. In Proceedings of the 29th CAA Conference, Heraklion, Greece, 2–6 April 2002; pp. 267–272. [Google Scholar]
  12. Caspari, G.; Crespo, P. Convolutional Neural Networks for Archaeological Site Detection–Finding “Princely” Tombs. J. Archaeol. Sci. 2019, 110, 104998. [Google Scholar] [CrossRef]
  13. Garcia-Molsosa, A.; Orengo, H.A.; Lawrence, D.; Philip, G.; Hopper, K.; Petrie, C.A. Potential of Deep Learning Segmentation for the Extraction of Archaeological Features from Historical Map Series. Archaeol. Prospect. 2021. Unpublished work. [Google Scholar]
  14. Orengo, H.A.; Garcia-Molsosa, A.; Berganzo-Besga, I.; Landauer, J.; Aliende, P.; Tres-Martínez, S. New Developments in Drone-Based Automated Surface Survey: Towards a Functional and Effective Survey System. Archaeol. Prospect. 2021. Unpublished work. [Google Scholar]
  15. Orengo, H.A.; Conesa, F.C.; Garcia-Molsosa, A.; Lobo, A.; Green, A.S.; Madella, M.; Petrie, C.A. Automated Detection of Archaeological Mounds Using Machine-Learning Classification of Multisensor and Multitemporal Satellite Data. Proc. Natl. Acad. Sci. USA 2020, 117, 18240–18250. [Google Scholar] [CrossRef]
  16. Guyot, A.; Hubert-Moy, L.; Lorho, T. Detecting Neolithic Burial Mounds from LiDAR-Derived Elevation Data Using a Multi-Scale Approach and Machine Learning Techniques. Remote Sens. 2018, 10, 225. [Google Scholar] [CrossRef] [Green Version]
  17. Grilli, E.; Özdemir, E.; Remondino, F. Application of machine and deep learning strategies for the classification of heritage point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 447–454. [Google Scholar] [CrossRef] [Green Version]
  18. Tyukin, I.; Sofeikov, K.; Levesley, J.; Gorban, A.N.; Allison, P.; Cooper, N.J. Exploring Automated Pottery Identification [Arch-I-Scan]. Internet Archaeol. 2018. [Google Scholar] [CrossRef]
  19. Stamatopoulos, M.I.; Anagnostopoulos, C.N. 3D Digital Reassembling of Archaeological Ceramic Pottery Fragments Based on Their Thickness Profile. arXiv 2016, arXiv:1601.05824. [Google Scholar]
  20. Filippas, D.; Georgopoulos, A. Development of an Algorithmic Procedure for the Detection of Conjugate Fragments. In Proceedings of the XXIV International CIPA Symposium, Strasbourg, France, 2–6 September 2013; pp. 2–6. [Google Scholar]
  21. Derech, N.; Tal, A.; Shimshoni, I. Solving Archaeological Puzzles. arXiv 2018, arXiv:1812.10553. [Google Scholar]
  22. Ostertag, C.; Beurton-Aimar, M. Matching Ostraca Fragments Using a Siamese Neural Network. Pattern Recognit. Lett. 2020, 131, 336–340. [Google Scholar] [CrossRef]
  23. Assael, Y.; Sommerschield, T.; Prag, J. Restoring Ancient Text Using Deep Learning: A Case Study on Greek Epigraphy. arXiv 2019, arXiv:1910.06262. [Google Scholar]
  24. Richards, J.; Tudhope, D.; Vlachidis, A. Text Mining in Archaeology: Extracting Information from Archaeological Reports. In Mathematics and Archaeology; Barcelo, J., Bogdanovic, I., Eds.; CRC Press: Boca Raton, FL, USA, 2015; pp. 240–254. ISBN 978-1-4822-2681-2. [Google Scholar]
  25. Epure, E.V.; Martín-Rodilla, P.; Hug, C.; Deneckère, R.; Salinesi, C. Automatic Process Model Discovery from Textual Methodologies. In Proceedings of the 2015 IEEE 9th International Conference on Research Challenges in Information Science (RCIS), Athens, Greece, 13–15 May 2015; pp. 19–30. [Google Scholar]
  26. Tolle, K.; Klinger, P.; Gampe, S.; Peter, U. Semantic search based on natural language processing–a numismatic example. J. Anc. Hist. Archaeol. 2018, 5, 68–79. [Google Scholar] [CrossRef]
  27. Vlachidis, A.; Tudhope, D.; Wansleeben, M.; Azzopardi, J.; Green, K.; Xia, L.; Wright, H. D16. 4: Final Report on Natural Language Processing. UCL Discov. 2017. Available online: https://discovery.ucl.ac.uk/id/eprint/10069106/ (accessed on 11 December 2020).
  28. Brun, C.; Hagège, C. Labeling of Work of Art Titles in Text for Natural Language Processing. U.S. Patent 7788084B2, 31 August 2010. [Google Scholar]
  29. Talboom, L. Improving the Discoverability of Zooarchaeological Data with the Help of Natural Language Processing. Unpublished MSc Digital Archaeology Dissertation, University of York, York, UK, 2017. [Google Scholar]
  30. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K. Excavating Grey Literature: A Case Study on the Rich Indexing of Archaeological Documents via Natural Language-Processing Techniques and Knowledge-Based Resources. Aslib Proc. New Inf. Perspect. 2010, 62, 466–475. [Google Scholar] [CrossRef]
  31. Bewes, J.; Low, A.; Morphett, A.; Pate, F.D.; Henneberg, M. Artificial Intelligence for Sex Determination of Skeletal Remains: Application of a Deep Learning Artificial Neural Network to Human Skulls. J. Forensic Leg. Med. 2019, 62, 40–43. [Google Scholar] [CrossRef]
  32. Czibula, G.; Ionescu, V.-S.; Miholca, D.-L.; Mircea, I.-G. Machine Learning-Based Approaches for Predicting Stature from Archaeological Skeletal Remains Using Long Bone Lengths. J. Archaeol. Sci. 2016, 69, 85–99. [Google Scholar] [CrossRef]
  33. Oonk, S.; Spijker, J. A Supervised Machine-Learning Approach towards Geochemical Predictive Modelling in Archaeology. J. Archaeol. Sci. 2015, 59, 80–88. [Google Scholar] [CrossRef]
  34. Creating a Novel, Innovative Toolkit for the Identification of Agricultural Management Regimes in the Past Using Seed Shape|IShape3DSeed Project|H2020|CORDIS|European Commission. Available online: https://cordis.europa.eu/project/id/892502/it (accessed on 23 November 2020).
  35. Seidl, M.; Breiteneder, C. Automated Petroglyph Image Segmentation with Interactive Classifier Fusion. In Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image Processing, Mumbai, India, 16–19 December 2012; pp. 1–8. [Google Scholar]
  36. Funkhouser, T.; Shin, H.; Toler-Franklin, C.; Castañeda, A.G.; Brown, B.; Dobkin, D.; Rusinkiewicz, S.; Weyrich, T. Learning How to Match Fresco Fragments. J. Comput. Cult. Herit. 2011, 4, 1–13. [Google Scholar] [CrossRef]
  37. Kogou, S.; Shahtahmassebi, G.; Lucian, A.; Liang, H.; Shui, B.; Zhang, W.; Su, B.; van Schaik, S. From Remote Sensing and Machine Learning to the History of the Silk Road: Large Scale Material Identification on Wall Paintings. Sci. Rep. 2020, 10, 1–14. [Google Scholar] [CrossRef] [PubMed]
  38. Drap, P.; Scaradozzi, D.; Gambogi, P.; Gauch, F. Underwater Photogrammetry for Archaeology-The Venus Project Framework. In Proceedings of the Third International Conference on Computer Graphics Theory and Applications, Funchal, Portugal, 22–25 January 2008; pp. 485–491. [Google Scholar]
  39. Wu, J.; Bingham, R.C.; Ting, S.; Yager, K.; Wood, Z.J.; Gambin, T.; Clark, C.M. Multi-AUV Motion Planning for Archeological Site Mapping and Photogrammetric Reconstruction. J. Field Robot. 2019, 36, 1250–1269. [Google Scholar] [CrossRef]
  40. Iio, T.; Shiomi, M.; Shinozawa, K.; Shimohara, K.; Miki, M.; Hagita, N. Lexical Entrainment in Human Robot Interaction. Int. J. Soc. Robot. 2015, 7, 253–263. [Google Scholar] [CrossRef]
  41. Amigoni, F.; Schiaffonati, V. The Minerva Multiagent System for Supporting Creativity in Museums Organization. In Proceedings of the IJCAI 2003 Workshop on Creative Systems: Approaches to Creativity in AI and Cognitive Science, Milano, Italy, 9–10 August 2003; pp. 65–74. [Google Scholar]
  42. Types—ArchAIDE Desktop. Available online: https://archaide-desktop.inera.it/types (accessed on 9 December 2020).
  43. Decorations—ArchAIDE Desktop. Available online: https://archaide-desktop.inera.it/decorations (accessed on 9 December 2020).
  44. Stamps—ArchAIDE Desktop. Available online: https://archaide-desktop.inera.it/stamps (accessed on 9 December 2020).
  45. Dellepiane, M.; Callieri, M.; Banterle, F.; Arenga, D.; Zallocco, M.; Scopigno, R. From Paper to Web: Automatic Generation of a Web-Accessible 3D Repository of Pottery Types. In Proceedings of the EUROGRAPHICS Workshop on Graphics and Cultural Heritage 2017, Graz, Austria, 27–29 September 2017; pp. 65–70. [Google Scholar]
  46. ARCHAIDE Portal for Publications and Outputs. Available online: https://archaeologydataservice.ac.uk/archives/view/archaide_2019/ (accessed on 26 November 2020).
  47. Itkin, B.; Wolf, L.; Dershowitz, N. Computational Ceramicology. arXiv 2019, arXiv:1911.09960. [Google Scholar]
  48. Anichini, F.; Dershowitz, N.; Dubbini, N.; Gattiglia, G.; Itkin, B.; Wolf, L. The Automatic Recognition of Ceramics from Only One Photo: The ArchAIDE App. J. Archaeol. Sci. Rep. 2021. Unpublished work. [Google Scholar]
  49. Ettlinger, E.; Hedinger, B.; Hoffmann, B.; Kenrick, P.M.; Pucci, G.; Roth-Rubi, K.; Schneider, G.; Von Schnurbein, S.; Wells, C.M.; Zabehlicky-Scheffenegger, S. Conspectus Formarum Terrae Sigillatae Italico Modo Confectae (Materialien Zur Römisch-Germanischen Keramik); Habelt: Bonn, Germany, 1990. [Google Scholar]
  50. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85. [Google Scholar]
  51. Zaheer, M.; Kottur, S.; Ravanbakhsh, S.; Poczos, B.; Salakhutdinov, R.; Smola, A. Deep Sets. arXiv 2018, arXiv:1703.06114. [Google Scholar]
  52. Hua, B.-S.; Tran, M.-K.; Yeung, S.-K. Pointwise Convolutional Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 984–993. [Google Scholar]
  53. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  54. Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  55. Rother, C.; Kolmogorov, V.; Blake, A. “GrabCut”: Interactive Foreground Extraction Using Iterated Graph Cuts. ACM Trans. Graph. 2004, 23, 309–314. [Google Scholar] [CrossRef]
  56. Gattiglia, G. Think Big about Data: Archaeology and the Big Data Challenge. Archäologische Inf. 2015, 38, 113–124. [Google Scholar] [CrossRef]
  57. Roman Amphorae: A Digital Resource. Available online: https://archaeologydataservice.ac.uk/archives/view/amphora_ahrb_2005/ (accessed on 23 November 2020).
  58. CERAMALEX—Ancient Pottery in Alexandria and Its Chora. Available online: https://archaeologie.phil-fak.uni-koeln.de/en/research/research-projects/finished/ceramalex-ancient-pottery-in-alexandria-and-its-chora (accessed on 23 November 2020).
  59. Burrell, R.; Coleman, A. Copyright Exceptions: The Digital Impact, 1st ed.; Cambridge University Press: Cambridge, UK, 2005; ISBN 978-0-521-84726-1. [Google Scholar]
  60. Anichini, F.; Banterle, F.; Garrigós, J.B.I.; Callieri, M.; Dershowitz, N.; Dubbini, N.; Diaz, D.L.; Evans, T.; Gattiglia, G.; Green, K.; et al. Developing the ArchAIDE Application: A Digital Workflow for Identifying, Organising and Sharing Archaeological Pottery Using Automated Image Recognition. Available online: http://intarch.ac.uk/journal/issue52/7/index.html (accessed on 16 June 2020).
  61. 3DHOP—Home. Available online: https://3dhop.net/ (accessed on 30 December 2020).
  62. MappaLab—Overview. Available online: https://github.com/mappaLab (accessed on 30 December 2020).
  63. Associates—ArchAIDE. Available online: http://www.archaide.eu/associates (accessed on 29 December 2020).
  64. Anichini, F.; Gattiglia, G. Big Archaeological Data. The ArchAIDE project approach. In Proceedings of the Conferenza GARR_17 Selected Papers, Venezia, Italy, 15–17 November 2017; pp. 22–25. [Google Scholar]
  65. Mees, A. Available online: https://www1.rgzm.de/samian/home/frames.htm (accessed on 2 January 2021).
  66. Roman Open Data. Available online: https://romanopendata.eu/#!/ (accessed on 2 January 2021).
Figure 1. Archaeologists must spend much time classifying thousands of pottery sherds. ArchAIDE meets archaeologists’ needs by creating a portable, user-friendly tool for mobile devices that can be used everywhere, speeding up the classification phase both in the field and during work in the warehouses.
Figure 2. The double workflow for appearance-based and shape-based recognition, from an input image to the top 5 results.
Figure 3. The Roman amphorae (a) and the Majolica of Montelupo Fiorentino (b) are two of the main test classes used to train the system, chosen for their peculiar characteristics, useful to stress the algorithms for shape-based and appearance-based recognition, respectively. Thanks to the collaboration of different institutions, museums, research groups, and colleagues worldwide, it was possible to collect photos of thousands of sherds. In this figure, part of the sherds come from the Roman site of Spoletino (Viterbo, Italy), together with fragments stored in the warehouse of the Museum of Ceramics in Montelupo Fiorentino.
Figure 4. The steps from the extraction of inner and outer profiles from 2D drawings to the creation of 3D models, ready to be randomly broken to obtain synthetic sherds for training the algorithms [45].
Figure 5. The automated extraction of the outer (green) and inner (red) profiles from a real-world sherd image.
Figure 6. The shape-based algorithm’s continuous improvement from its first release (D6.2) to the final version (D6.3).
Figure 7. The appearance-based algorithm’s continuous improvement from its first release (March 2018) to the final version (February 2019).
Figure 8. The ArchAIDE portal is available at the Archaeology Data Service of the University of York. Multilingual pottery vocabularies, 2D and 3D pottery models, and all the videos produced by ArchAIDE can be freely downloaded.
Figure 9. In the ArchAIDE database, the ADS archive “Roman Amphorae: a digital resource” is now enriched by interactive 3D models, maps of the distribution of the origins and occurrences of each type, and .svg files with extracted profiles.
Figure 10. The shape recognition tool at work inside the ArchAIDE app. The app has been designed with a user-friendly interface. For both shape-based and appearance-based recognition, the system offers five results to the user at the end of the recognition process. Each item is linked to the reference database for verifying the correctness of the match. After checking, the user can flag and save the right one.
Figure 11. The ArchAIDE server components.
Figure 12. The ArchAIDE system was presented for the first time during the project’s final conference, held in Pisa in May 2019, and at numerous international events.
Table 1. Decoration-based identification.

Accuracy   TSI (# 1)   TSI (# 2)   TSH      TSSG
Top 1      22.0%       30.5%       27.6%    14.5%
Top 2      32.7%       43.6%       40.6%    25.0%
Top 5      57.9%       62.8%       58.4%    41.9%
Table 2. Comparison between mobile and desktop performances.

Accuracy   Mobile Performance   Desktop Performance
Top 1      55.2%                51.0%
Top 5      83.8%                77.2%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
