Article

Web-Based Scientific Exploration and Analysis of 3D Scanned Cuneiform Datasets for Collaborative Research

1 Department of Computer Science VII, TU Dortmund University, 44227 Dortmund, Germany
2 Department of Ancient Cultures, Ancient Near Eastern Studies, University of Würzburg, 97070 Würzburg, Germany
* Author to whom correspondence should be addressed.
Informatics 2017, 4(4), 44; https://doi.org/10.3390/informatics4040044
Submission received: 8 November 2017 / Revised: 4 December 2017 / Accepted: 9 December 2017 / Published: 12 December 2017

Abstract: The three-dimensional cuneiform script is one of the oldest known writing systems and a central object of research in Ancient Near Eastern Studies and Hittitology. An important step towards the understanding of the cuneiform script is the provision of opportunities and tools for joint analysis. This paper presents an approach that contributes to this challenge: a collaboration-compatible, web-based scientific exploration and analysis of 3D scanned cuneiform fragments. The WebGL-based concept incorporates methods for compressed web-based content delivery of large 3D datasets and high-quality visualization. To maximize accessibility and to promote the acceptance of 3D techniques in the field of Hittitology, the introduced concept is integrated into the Hethitologie-Portal Mainz, an established leading online research resource in the field of Hittitology, which until now exclusively included 2D content. The paper shows that increasing the availability of 3D scanned archaeological data through a web-based interface can provide significant scientific value while at the same time striking a trade-off between copyright-induced restrictions and scientific usability.

1. Introduction

Recent advances in 3D scanning technology have led to increased efforts to create digital reproductions of archaeological artifacts such as cuneiform tablet fragments, as shown in Figure 1, resulting in large databases of high-resolution 3D meshes [1]. However, the existence of these 3D data repositories has not led to a comparable increase in the availability of the data for associated research purposes. Accessibility for the scientific community is therefore an ongoing issue with large-scale archaeological data repositories. The reasons for this accessibility gap range from legal and political issues, tied to the copyright on the data and to data preparation times and costs, to technical challenges regarding the visualization and transfer of large amounts of data [2]. As will be detailed later on, a growing number of web-based 3D libraries and efficient 3D content transfer techniques has become available, leading to an increasing amount of 3D digital heritage content presented on the web. Because many of these projects pursue presentation rather than research objectives, they are highly suitable for museum presentation but only to a very limited degree for scientific metrological analysis. Restrictions on scientific use may originate from factors such as low scan or texture resolution, missing measurement and lighting tools, restricted viewport navigation, insufficient shading capabilities for enhancing subtle features on untextured datasets and software/data licensing incompatibilities. As the requirements on the data and its presentation vary depending on the field of research, Section 2 will introduce the use case of cuneiform fragment collation with a specific set of traditional research methods and requirements to be targeted in a scientifically oriented 3D cuneiform framework.
This set of requirements will then be used in the course of Section 3 to construct a suitable 3D framework for web-based analysis of cuneiform script. The aspects to be covered include the choice of a WebGL (Web Graphics Library) environment, analysis of 3D cuneiform data characteristics and preparation, visualization and an exemplary integration into an established online research architecture in Section 4. The subsequent evaluation in Section 5 shows that, for the application scenario of web-based 3D cuneiform script analysis, a trade-off between legal and copyright aspects on one side and free availability and scientific usability on the other side can be found to generate added scientific value. The paper closes with a short conclusion and an outlook regarding further research in Section 6.

2. Cuneiform Fragment Collation

Cuneiform script is, aside from Egyptian hieroglyphs, one of the oldest known writing systems [2]. It was used for over three millennia, starting with its invention around 3500 BCE in the Ancient Near East, by a variety of cultures such as the Sumerians, the Babylonians, the Akkadians, the Elamites, the Assyrians and the Hittites. In contrast to modern scripts, cuneiform manuscripts were usually created by imprinting an angular stylus into tablets of moist clay, resulting in small and often overlapping tetrahedron-shaped grooves, which makes cuneiform a three-dimensional script. Most known remains of cuneiform manuscripts have been preserved in the form of (in many cases fragmented) clay tablets, with over 500,000 discovered artifacts stored mostly in museum collections [1]. Aside from museum presentation, these cuneiform fragments are subject to scientific research in the field of Ancient Near Eastern Studies. Based on the decipherment of cuneiform script, the fragments are analyzed and translated in order to reassemble the manuscripts and extend our knowledge of this part of human history.
In philology, including Ancient Near Eastern Studies of cuneiform fragments, collation denotes the verification of a text against an original and is a central research method. This includes, amongst other things, the verification of transcriptions and transliterations. Ideally, “original” refers to the original manuscript in the form of a real cuneiform tablet or fragment. However, accessibility issues and artifact corrosion often result in the replacement of the original artifact by hand copies, photographic reproductions and, recently, 3D scanned digital reproductions. Some examples of these types of reproductions are shown in Figure 1. The following will detail some essential collation-related aspects to be considered during the development of suitable web-based access methods.

2.1. Application Scenarios

Despite very individualistic philological working methods and workflows, some typical application scenarios for collation tasks can be identified. One of these scenarios is the inspection of transliterations for missing signs and text lines, which occur frequently, especially on larger or difficult-to-read manuscripts. This includes checking whether the signs can be read as specified in the transliteration. Another reason for conducting a collation is well-founded doubt about the correct reading of text passages or signs. In this context, collation can operate on a more semantic level to determine how the signs, consisting of identified wedges, must be read or, on a more geometry-oriented level, to determine which wedges can be found on the artifact and which signs can be derived from them. The second case is particularly important regarding damaged text passages and gaps. Resolving these kinds of geometric issues includes checking whether the complemented signs have the correct size with respect to comparable signs on the same tablet. Additionally, the remaining geometric clues are checked against the geometry of complemented signs and possible sign alternatives, which are then validated regarding the textual and paleographic context.

2.2. Collation of Various Media Types

For collation purposes, a real artifact is usually viewed from different angles to estimate wedge depths and surface distances of damaged areas. This helps with distinguishing scratches and fracture faces from remains of cuneiform wedges. This technique is in most cases complemented by using a light source to enhance the perceptibility of geometric features through shadow casting. Aside from the depth of grooves, the surface finish can help with the differentiation of scratches and wedges. It can be determined to a limited extent by employing a magnifying glass. An additional frequent issue in this regard is the presence of non-uniformly colored surface textures on the artifacts, which can have a strongly negative effect on the visual perceptibility of geometric features.
Capability-wise, collation on real artifacts can assess the geometric shape by changing viewing direction and shadow casting using a light source. The surface finish can help with distinguishing scratches and wedges. However, due to the very small wedges, most genuine cuneiform artifacts are not accessible to manual, damage-free measurements. Additionally, non-uniform surface textures are an issue during geometric analysis.
Collation of photographic reproductions, on the other hand, enables non-depth-related measurements using image magnification, while the fixed camera position and lighting setup complicate depth-related measurements. Rudimentary depth perception is possible only through a static shadow-casting light setup. The effects of the lighting setup are, however, in many cases not separable from effects caused by surface texture color and from tone-mapping or exposure-related issues of the photographic setup used.
As opposed to the two previously mentioned collation media, collation on 3D scanned models provides damage-free access to precise 3D measurements including depth values, which is especially important as cuneiform script is a three-dimensional type of script. Aside from that, 3D models can be viewed from all directions and with arbitrary lighting setups, including high levels of magnification. An additional advantage is the separability of geometry and surface texture, which allows the examination of geometric features without obstructions from non-uniform surface colors. Regarding collaborative collation tasks, 3D scanned cuneiform data benefits, inter alia, from data mobility, reproducible viewing conditions and the conservation of the artifact state. A suitable 3D framework should therefore provide interactive visualization, flexible viewport navigation and light manipulation, feature-enhanced rendering of untextured non-2-manifold models and precise measurement tools, as well as features for collaborative research.
Table 1 shows a comparison of collation accessibility features for real artifacts, photographic reproductions and 3D scans. In this context, representation completeness rates the possible amount of data of a good reproduction compared to the original artifact at the time of its creation. Importantly, at a given time the original artifact does not necessarily contain the most complete data, as corrosion-based damage may have degraded the quality of the original artifact after the fabrication of an analog or digital reproduction. This may render an old photograph or a 3D scan the most complete data source, even though photographs may be of poor quality and 3D scans may cover only an incomplete subset of the artifact surface. The data sizes in Table 1 refer to typical dataset sizes on the Hethitologie-Portal Mainz [3]. It is worth mentioning that measurement precision on photographs is usually superior to that on the original artifact because of the possible image magnification. However, images are affected by projective distortions, exposure and lighting issues as well as visually non-separable texture colors.

3. Methods for Web-Based Visualization of 3D Cuneiform Data

This section will present an interactive web-based approach for the analysis of 3D scanned cuneiform data to address the use case of cuneiform fragment collation described in Section 2. The result will use the Nexus format in a Three.js-based visualization environment extended with high-quality visualization methods and application-scenario-specific tools for cuneiform collation tasks in the Hethitologie-Portal Mainz. Starting with a review of related work, this section will therefore detail the components of the viewer concept shown in Figure 2. Section 3.2 will discuss cuneiform data characteristics to motivate the required data preparation process. The employed visualization methods will be presented in Section 3.3, followed by a description of the design of a suitable user interface in Section 3.4. Aspects of the integration on the server will be covered in Section 4.

3.1. Related Work

An easy-to-use 3D web application should be based on a readily available and widespread 3D API without the need to install any third-party plugins. WebGL is a JavaScript 3D API with native integration in all modern browsers that satisfies these needs. Although WebGL 1.0 [4] is based on OpenGL ES 2.0 [5], some features widely used in desktop OpenGL environments are only available in the form of WebGL extensions with varying degrees of browser support. Mobile device browsers in particular lack some widely used extensions because of hardware-related incompatibilities. To facilitate many basic and recurring WebGL tasks, like loading shaders and 3D models, the use of high-level WebGL libraries is common. In particular, open-source libraries like Three.js [6] or SpiderGL [7], as opposed to commercial and more platform-oriented projects like Sketchfab [8] and Unity [9], are well suited for integrating custom framework designs and open-source-licensed algorithms.
Although most WebGL libraries offer a variety of methods to load a large number of 3D geometry file formats, many of the existing formats are not well suited for web-based loading of large 3D models. This motivates the use of web-optimized 3D formats that offer features like data streaming, geometry compression and progressive model loading. Some noteworthy formats in this regard include X3D [10] and XML3D [11] as human-readable text-based formats and the X3DOM Binary Format [12], which externalizes the mesh data in the form of directly GPU-compatible binary blobs while still containing text-based descriptions. The OpenCTM format [13] addresses the large data sizes with entropy reduction and LZMA entropy encoding, which, as a side effect, adds a decoding overhead on the client side. The WebGL-Loader format [14] combines post-transform vertex cache optimization [15] for faster rendering with sophisticated delta-encoding and quantization to output a UTF-8 encoded data stream that additionally benefits from the GZIP compression of a web server. This results in a comparatively high compression combined with very low decompression times when compared to X3D and OpenCTM [16]. X3DOM was further extended by the POP Buffer format [17], which employs a hierarchical quantization-based compression scheme enabling progressive model loading. The Nexus format [18,19], which will be detailed in Section 3.2, introduces a hierarchical mesh reduction with fast decoding and view-dependent rendering, enabling a progressive visualization of very large models. In the context of 3D scans, an additionally useful feature of the Nexus format is its capability to handle non-2-manifold meshes.
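As a concrete illustration of the quantization and delta-encoding idea behind formats such as WebGL-Loader, the following sketch round-trips vertex coordinates through a fixed-point grid. The function names and the 14-bit precision are illustrative assumptions, not part of any of the formats above:

```javascript
// Quantize an array of float coordinates to integers on a fixed grid.
// With `bits` bits, the worst-case round-trip error is half a grid step.
function quantize(values, min, max, bits) {
  const levels = (1 << bits) - 1;
  const scale = levels / (max - min);
  return values.map(v => Math.round((v - min) * scale));
}

// Delta-encode: store differences between consecutive values, which are
// small for spatially coherent vertex data and compress well under GZIP.
function deltaEncode(ints) {
  const out = [ints[0]];
  for (let i = 1; i < ints.length; i++) out.push(ints[i] - ints[i - 1]);
  return out;
}

// Inverse operations, as they would run on the client after download.
function deltaDecode(deltas) {
  const out = [deltas[0]];
  for (let i = 1; i < deltas.length; i++) out.push(out[i - 1] + deltas[i]);
  return out;
}

function dequantize(ints, min, max, bits) {
  const levels = (1 << bits) - 1;
  const scale = (max - min) / levels;
  return ints.map(q => min + q * scale);
}
```

The delta step does not reduce the data size by itself; it reshapes the value distribution so that a generic entropy coder such as the web server's GZIP stage becomes effective.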
Over recent years, various 3D frameworks for the online presentation of cultural heritage artifacts have been introduced, some of which are linked to the publication of scanning projects in museum collections. Some examples in this regard with an exclusive focus on museum presentation are the British Museum, which resorts to publishing small parts of its collection via Sketchfab [20], the Smithsonian Institution with the Memento-based [21] (now Autodesk Remake) Smithsonian X 3D framework [22] or the large collection of 3D scanned Vietnamese statues and architectural artifacts of VR3D [23]. Aside from these large-scale projects, there are also free cultural heritage presentation frameworks like 3DHOP [24], which can combine 3D and multimedia content for museum and teaching purposes and which uses the Nexus data format [18,19]. Common features of many applications of these cultural heritage viewers are the use of low- to medium-resolution models combined with medium- to high-resolution textures and a time-consuming, elaborate model preparation regarding the correction of scanning issues.
Related work directly associated with the analysis and presentation of 3D cuneiform content includes the integral-invariant-based extraction of cuneiform signs as 2D vector drawings by Mara et al. [25,26], integrated into the GigaMesh framework, and the cuneiform wedge extraction presented by the authors [27], integrated into the CuneiformAnalyser framework. However, both solutions are targeted at offline usage due to large datasets and computationally expensive geometry processing. In 2017, the Hilprecht Sammlung Jena [28] started publishing its large collection of cuneiform tablets online, benefiting from owning both the original artifacts and the copyright for all published datasets. Access is provided by offering downloadable datasets and by loading the uncompressed datasets into MeshLabJS [29], a web-based descendant of MeshLab [30], a popular, free mesh editing and conversion solution. Also in 2017, Collins et al. launched the Virtual Cuneiform Tablet Reconstruction Project [31], which includes a basic online viewer for manually and automatically joining cuneiform fragments. While providing a working tool for joining low-resolution fragments with fracture faces, this method would not work on incomplete high-resolution datasets with missing fracture faces. Connected with this work, Woolley et al. evaluated methods for the development of a collaborative virtual environment for 3D reconstruction of cuneiform tablets [32]. This study focuses on the analysis of interaction modalities and user groups regarding the effectiveness of the collaboration, which is an important aspect of system design.

3.2. 3D Cuneiform Data Characteristics and Preparation

Most 3D scanned cuneiform datasets can be classified into two main categories, often tied to the origin of the data. The first category is mostly created by museums digitizing their own inventory, which predominantly produces complete, high-quality datasets including texture information. This is possible due to unlimited artifact access and significant amounts of time spent scanning and post-processing the data. A second type of dataset is mainly produced by scientists with limited access to artifacts owned by foreign institutions, which can significantly change the priorities during data acquisition. For the metrological analysis of cuneiform script, as targeted in the BMBF project ‘3D-Joins und Schriftmetrologie’, capturing the script-covered parts of the artifacts at a sufficient resolution, with partial coverage of fracture faces, was combined with the goal of scanning as many fragments as possible during the available time slots with access to the fragments. In this process, the geometrical completeness of non-script-covered areas and the acquisition of texture data were only secondary objectives. Therefore, the resulting scans often contain unscanned surface areas, holes and non-2-manifold geometry. A suitable geometry data format must be able to deal with these deficiencies, handle and stream large geometries, and satisfy the philological requirements for collation tasks described in Section 2. In addition, the data format should allow the deployment of methods to obstruct full-quality model extraction to a certain extent when needed for legal reasons.
An interactive web-based scenario requires an optimization of the latency for viewing a 3D model while the scientific usability requires a minimum model quality to enable precise measurements. In view of the typical data sizes of cuneiform models, ranging from several megabytes to several gigabytes, and the current speed of internet connections, this implies the use of streaming and state of the art compression techniques. Another important aspect is client side decoding time, which must be low enough to not outweigh the reduction of transfer time gained by data compression.
Of the methods mentioned in Section 3.1, the Nexus format [18,19] was the only one to feature high compression rates, progressive and local model loading and the possibility to adapt the level of detail to less capable hardware. In addition, the Nexus format is readily extensible to custom data characteristics and features a well-documented open-source tool set for creation and modification.
The Nexus format [18,19] is a progressive format that uses a patch-based multiresolution structure in which the patch geometry is reduced and compressed independently. Patches of neighboring resolution levels must not share common mesh borders, so that the quadric simplification algorithm can achieve a uniform reduction over multiple resolution levels. A suitable volume partition is created by using a kd-tree with an alternating split ratio. The geometry encoding for individual patches uses a rule-based connectivity compression and a quantization-based geometry and attribute compression with entropy encoding. Without lossy compression, generated Nexus files for typical cuneiform scans are around 1.7 times the size of the original model, which results from the generated multi-resolution data. Lossy compression rates, however, depend on the quantization settings. A usable compression rate of around 4.1 can be achieved for cuneiform data, as will be shown in Section 5. Perceived loading times are very short, as a low-resolution approximation from the highest level of the compression hierarchy is shown almost immediately. The model is then progressively refined based on a predefined draw budget, a maximum cache size and a view-dependent target error.
On 3D scanned cuneiform data, the incremental patch-based geometry reduction of the Nexus format is an important aspect for producing high-quality low-resolution meshes. This results from the fact that many mesh simplification methods, like the quadric edge decimation used here, tend to create meshes of poor quality when the target edge length is much smaller than the average edge length of the original mesh and the original mesh is non-2-manifold. Non-2-manifoldness is, however, a frequent property of many 3D scanned cuneiform datasets. As an important factor for less capable hardware, the above-mentioned possibility to define a draw budget allows limiting the amount of graphics memory used during model rendering. The cache size and the definition of a view-dependent target error, on the other hand, allow limiting the main memory consumption and the visual level of detail. This enables the viewing of datasets on hardware that would otherwise not be capable of visualizing datasets of this size.
Before the scan data can be converted into the Nexus format, the vertex data has to be extended with the ambient occlusion and curvature information required for visualization, as described in the following section. The mesh data produced by the scanner software used [33] only includes monochromatic color information. Therefore, the additional data can be color-coded and stored in the, up to this point, unused color channels. As a positive side effect, this avoids the introduction of additional data channels into the Nexus format. Also, the data-enriched intermediate PLY files remain readable by most standard mesh processing software like MeshLab [34].
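The packing of precomputed per-vertex scalars into the otherwise unused 8-bit color channels can be sketched as follows. The channel assignment (ambient occlusion in red, curvature in green) and the value ranges are assumptions for illustration, not the exact layout used by the portal:

```javascript
// Pack ambient occlusion in [0, 1] and a tanh-normalized curvature in
// [-1, 1] into two 8-bit color channels of a PLY vertex.
function packVertexAttributes(ao, curvature) {
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  const r = Math.round(clamp(ao, 0, 1) * 255);
  // Remap [-1, 1] to [0, 1] before quantizing to 8 bits.
  const g = Math.round((clamp(curvature, -1, 1) * 0.5 + 0.5) * 255);
  return { r, g, b: 0 };
}

// Inverse mapping, as a shader or loader would apply it.
function unpackVertexAttributes(color) {
  return {
    ao: color.r / 255,
    curvature: (color.g / 255 - 0.5) * 2,
  };
}
```

Since each channel offers only 8 bits, the quantization error is at most half a step of the respective value range, which is sufficient for shading purposes but not for metrological use.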

3.3. Visualization Methods

A scientific visualization environment for 3D cuneiform data has to address several basic requirements, the most important of which is a high-quality visualization of large, high-resolution, untextured meshes. This results from the fact that many 3D scanned cuneiform datasets represent damaged cuneiform fragments with subtle details. Depending on the scanning methods, many of the resulting 3D models are untextured, lack color information or include color information that exerts a negative influence on the visual perceptibility of geometrical details. Therefore, methods should be used that optimize the visual spatial representation of subtle details while at the same time providing a realistic, high-quality visualization of the 3D models. The rendering is complemented by tools for taking measurements, user-controllable lighting, a stylized autography mode and an intuitive user interface including viewport navigation and parameter controls.
As the set of visualization methods used in previous work of the authors [27] has proven its usefulness for offline analysis of cuneiform datasets, the framework uses a combination of ambient occlusion, radiance scaling and lit-sphere shading to render the datasets. Vertex-based ambient occlusion [35] approximates the self-shadowing properties of the surface, which can significantly improve the depth perception of surface structures when used to control the luminosity of the surface shading. Although a real-time approximation for ambient occlusion can be computed in screen space [36], this framework uses a precomputed, view-independent, GPU-accelerated ambient occlusion term, as described in [37], for improved accuracy.
Realistic shading is achieved by using precomputed lighting in the form of lit spheres. In a preparation step, an orthographic image of a shaded sphere, including a complex surface material, an arbitrary number of lights and a reflective environment, is rendered into a square texture. The shading is then computed by projecting the normalized surface normal or the reflected view direction onto the lit-sphere texture. The process of pre-rendering a texture can be replaced by taking photographs of spherical objects in real-world material and lighting conditions and cropping the resulting image to the extents of the sphere. It is worth mentioning that the resulting lit-sphere textures must not contain local surface features, as opposed to components of the shading function like global material color, lighting and reflections. Therefore, local surface features must be removed by appropriate image processing methods like smoothing or cloning.
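The projection of the surface normal onto the lit-sphere texture can be sketched as follows. This is the standard MatCap-style mapping of a view-space normal onto the square [0, 1] × [0, 1] texture domain; the function name is illustrative:

```javascript
// Map a view-space surface normal to lit-sphere texture coordinates.
// The x/y components of the unit normal span [-1, 1] and are remapped
// to the [0, 1] texture range; in the viewer this runs per fragment.
function litSphereUV(normal) {
  const len = Math.hypot(normal[0], normal[1], normal[2]);
  const nx = normal[0] / len;
  const ny = normal[1] / len;
  return [nx * 0.5 + 0.5, ny * 0.5 + 0.5];
}
```

Because only the x/y components enter the lookup, front- and back-facing normals with the same silhouette direction sample the same texel, which is why the texture must be free of local surface features as noted above.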
To address cuneiform script analysis specifically, several types of lit-sphere textures were generated to resemble clay and metal type materials. This was realized by taking photos of clay spheres, including the necessary post-processing, as well as by rendering spheres with complex shaders using a high-quality offline renderer. The associated lighting setups were chosen to resemble frequently used lighting situations found in traditional photographic cuneiform reproductions to enhance wedge visualization. An especially common lighting setup includes a directional light source positioned at a 45-degree angle at the top left of a fragment, as this light position casts advantageous shadows on many cuneiform wedge types.
A selection of three types of lit-spheres used in the framework and the resulting visual appearance including the lit-sphere textures is shown in Figure 3a–c. The depicted lit-sphere textures have been additionally processed to counteract rendering artifacts by extrapolating the color at the sphere border to the square texture border.
To visually enhance geometrical details on the scanned surfaces, the radiance scaling technique [38] is employed, which correlates surface variations with aspects of the shading function. As the name suggests, radiance scaling adjusts the reflected light intensity depending on the surface curvature and material properties. This is realized by applying an independent monotonic scaling function to the original Bidirectional Reflectance Distribution Function. As detailed in [38], its design depends on an invariance point α, in order not to affect planar regions, and a scaling magnitude γ. The scaling function is parameterized using a normalized reflectance and a normalized curvature. In an OpenGL- and GLSL-based environment, the reflectance can be obtained by computing the length of the color vector resulting from an arbitrary shading function like Phong shading, lit-sphere shading or a shading function component such as specularity or ambient occlusion. The normalized curvature is obtained as a hyperbolic-tangent-mapped mean curvature, as described in [39].
In this work, the scaling function is parameterized with a scaling invariance point α = 0.1 and a user-controllable scaling magnitude γ ∈ [0, 2]. The scaling is applied only to the color term of the shading function. Additional components, like the ambient occlusion, are applied independently, as they have been found to exert a negative influence on the rendering of geometric details within cuneiform wedges. As both the normal-based gradient and curvature calculations require accessing a local neighborhood of a pixel to be shaded, a deferred shading architecture is used.
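The behavior of such a scaling function can be illustrated with a simplified stand-in that satisfies the stated properties: monotonic in curvature, leaving planar regions (κ = 0) and the invariance-point reflectance α unchanged, and controlled by a magnitude γ. This is not the exact function published in [38], only an illustrative sketch:

```javascript
// Illustrative curvature- and reflectance-dependent scaling (stand-in
// for the radiance scaling function of [38], not the published formula).
const ALPHA = 0.1; // invariance point: this reflectance is never scaled

function radianceScale(reflectance, kappa, gamma) {
  // kappa: tanh-mapped mean curvature in [-1, 1]
  // gamma: user-controlled scaling magnitude in [0, 2]
  // Planar regions (kappa = 0) and reflectance equal to ALPHA pass
  // through unchanged; elsewhere the reflected intensity is scaled
  // up or down depending on the curvature sign.
  const s = 1 + gamma * kappa * (reflectance - ALPHA);
  return reflectance * Math.max(s, 0);
}
```

In the actual viewer this computation runs per pixel in the deferred shading pass, with the curvature read from the precomputed vertex attributes.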
Figure 4a–d shows the visual improvement over simple Phong shading on an exemplary cuneiform fragment. The effects of detail enhancement using radiance scaling on cuneiform wedges are shown in Figure 5. The method has proven to be particularly useful on abraded and corroded surface areas with subtle details.

3.4. User Interface Design for Cuneiform Analysis

Aside from the aspects directly tied to transfer and rendering of the 3D models, the user interface of the viewer is designed with two main objectives in mind. The interface should provide an intuitive access to the models, while at the same time addressing most of the requirements of collation related tasks.
Philologists often analyze geometric details on real artifacts by varying the position of a directed light source. Therefore, the framework features a light direction mode that can be used to directly control the direction of lighting with the mouse pointer on a virtual sphere. While Phong shading offers unrestricted control of the light position, the lighting position of lit-sphere shading can only be influenced by rotating the lit-sphere textures in the image plane. Also inspired by traditional cuneiform analysis methods, the framework integrates an autography mode, which renders a stylized black and white image of the geometry, as shown in Figure 5c, based on precomputed maximum curvature values. This rendering style resembles hand-drawn autographies of cuneiform fragments, like the one depicted in Figure 1a, which can be helpful during the transcription process.
As the orientation of scanned cuneiform fragments and the contained texts in 3D space is not known a priori, the viewport navigation is realized using the quaternion-based arcball method. This allows turning the models in any desired direction without experiencing the gimbal-lock-related side effects of Euler rotations and, as opposed to the trackball method, offers an intuitive way of rotating the viewport in the image plane. The latter is especially important to align cuneiform text lines horizontally in the viewport. The initial zoom is coupled with the size of the geometry to always display the full 3D model when opening a new dataset. To avoid depth buffer issues with the required wide zoom range, the framework uses an optimized logarithmic depth buffer implementation [40], as used in the Outerra Engine.
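The logarithmic depth mapping can be sketched as follows; this follows the Outerra-style variant of [40], where clip-space depth is distributed logarithmically so that very close and very distant geometry share one depth buffer without z-fighting. Names are illustrative; in the viewer this would run in the vertex shader:

```javascript
// Map a positive clip-space w (roughly the eye-space distance) to a
// logarithmically distributed depth value in (-1, 1].
function logarithmicDepth(wClip, far) {
  const fcoef = 2.0 / Math.log2(far + 1.0);
  // The max() guards against log2 of values <= 0 for degenerate input.
  return Math.log2(Math.max(1e-6, 1.0 + wClip)) * fcoef - 1.0;
}
```

Compared to the standard hyperbolic depth mapping, this spends depth precision roughly evenly per octave of distance, which suits zoom ranges from full-tablet views down to sub-millimeter wedge details.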
To assist with complex measurement tasks, the framework integrates a measurement mode and an orthographic camera mode with a dynamic scale object. The measurement mode allows positioning multiple measurement tapes on the surface of the object, as shown in Figure 6. The measured distances are displayed as text overlays on the screen and in the form of a millimeter-sized stripe pattern on the measurement tape object. It has to be noted that the measurement mode works only on scanned objects with a known system of measurement. Although most scans are measured in millimeter scale, scans created using photometric reconstruction techniques, like light domes [41], may exhibit significant distortions that prevent precise measurements. Much like in many traditional photographic reproductions of cuneiform fragments, the scale object, as can be seen in Figure 6, consists of an alternating dual-row pattern of black and white squares. The scale of the squares and the associated measurement unit are adjusted dynamically as the user zooms the viewport. This way, measurements can be taken even in printed screenshots. To further support collaborative work on the 3D models, the viewer supports the export of links that reopen models with the current camera configuration, which is also useful for inventory-related tasks. For instance, occurrences of sign forms could be directly referenced from a 3D sign inventory database.
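The dynamic adjustment of the scale-object squares can be sketched as follows: given the current zoom (screen pixels per model millimeter), pick a "nice" square edge length from a 1-2-5 series so the checker pattern stays readable. The target pixel size and the function name are assumptions for illustration:

```javascript
// Choose a readable square size (in mm) for the dynamic scale object,
// given the current zoom level in screen pixels per model millimeter.
function chooseScaleStep(pixelsPerMm, targetPx = 40) {
  const targetMm = targetPx / pixelsPerMm;      // ideal square size in mm
  const exp = Math.floor(Math.log10(targetMm)); // decade of the target
  const base = Math.pow(10, exp);
  // Snap to the closest 1-2-5 step (in log space) within the decade.
  const candidates = [1, 2, 5, 10].map(m => m * base);
  let best = candidates[0];
  for (const c of candidates) {
    if (Math.abs(Math.log(c / targetMm)) < Math.abs(Math.log(best / targetMm))) {
      best = c;
    }
  }
  return best; // square edge length in millimeters
}
```

Rounding in log space keeps the chosen step within a constant factor of the ideal size at every zoom level, so the squares never degenerate to a few pixels or fill the viewport.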

4. Integration into the Hethitologie-Portal Mainz

The web-based 3D approach is integrated into the Hethitologie-Portal Mainz, the leading online research resource in the field of Hittitology. One central feature of the portal is the concordance of Hittite cuneiform tablets. This online-accessible database includes a large searchable catalog of known cuneiform tablets linked with a photographic collection, join sketches, transcriptions, transliterations, publication and provenience data. To ensure an intuitive data integration, the 3D models can be accessed directly from the concordance query results, much like the traditional photographic reproductions. The possibility of accessing a 3D model depends not only on the existence of a respective 3D scan but also on the publication state of the corresponding cuneiform manuscript, as access is only permitted to already published manuscripts. Figure 7 shows a representative query result in the concordance, listing cuneiform documents which match the search criteria. Beginning on the left-hand side, the column “Inventarnummer” contains the inventory number, an icon indicating the existence of a join layout sketch, a red icon indicating the existence of photographic reproductions, and the text “3D” if an interactive 3D scan is available. Layout sketches, 2D reproductions and 3D scans are directly linked from these elements. The remaining columns, if available, include a publication number (with linked autography images), an index to the Catalogue des Textes Hittites (a genre classification of Hittite documents), a place of discovery, a dating and a set of annotations regarding manuscript content and citations. The list-based presentation allows quick access to fragments belonging to a common manuscript and to all available data of an object. This facilitates comparative tasks involving join layout sketches, transcriptions, photographic reproductions and now also 3D scans.
The latter in particular provide detailed data for proofreading questionable aspects of manual transliterations and hand copies.
In contrast to the 2D reproductions, the 3D scans are not linked with preview thumbnails, as the orientation of the scans in 3D space is not known a priori. Instead, access to the 3D scans is coupled with a splash screen listing the fragments contained in the respective scan, the size of the dataset and the associated copyright terms.
To account for legal and copyright peculiarities regarding the 3D scanned datasets, access to the 3D data is coupled with a user registration that obligates users to accept a set of data licensing terms. A central aspect of this data license is to restrict the use of the 3D datasets to research and personal purposes and to prohibit unauthorized publication, commercial use or reproduction of the 3D datasets. Reproduction of 3D datasets via 3D printing devices is an especially critical aspect, as many museum institutions tie their permission to take 3D scans of their artifacts to a responsible handling of the resulting data. This also includes not creating unauthorized 3D printed copies of the artifacts for commercial purposes. Although the creation of precise 3D prints from the cuneiform datasets is complicated by incomplete scanning and the non-watertight, non-2-manifold data characteristics of the cuneiform 3D scans, the 3D web viewer employs an additional layer of access protection. For this purpose, the 3D data is stored at a non-web-accessible server location and the data access is piped through a data servlet that controls data access depending on user authentication and data access patterns, to ensure the data is being accessed with the 3D viewer. This access regulation also includes a retrospective modification of the data streaming format, to make any kind of unintended data extraction more difficult. All access control mechanisms are kept independent of the data storage and data visualization, which keeps the 3D viewer compatible with usage scenarios that do not require data access control.
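The gatekeeping logic of such a data servlet might be reduced to a pure decision function like the one below. This is a hypothetical sketch: the session fields, the chunked request model and the forward-moving-offset heuristic are all our assumptions for illustration, not the portal's actual implementation.

```javascript
// Decide whether a requested chunk of a 3D dataset may be streamed.
// `session` is assumed to carry the authentication and license state;
// `lastOffset`/`requestedOffset` model the viewer's progressive,
// forward-moving range requests into a nexus file. A bulk downloader
// that asks for arbitrary ranges would violate this pattern.
function allowChunk(session, lastOffset, requestedOffset) {
  if (!session || !session.authenticated || !session.acceptedLicense) {
    return false;
  }
  return requestedOffset >= lastOffset;
}
```

A real servlet would additionally log rejected patterns and feed them into the blocking of suspicious accounts mentioned in Section 5.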

5. Evaluation

The web-based framework presented in this paper has been evaluated with regard to performance aspects and suitability for the application. Deployability primarily depends on the hardware and software requirements of the framework. On the hardware side, the 3D viewer requires an environment that supports WebGL, including a specific set of WebGL extensions, which should be available on OpenGL 3.3 capable hardware. As the memory requirements for the NEXUS format can be manually restricted, the viewer also works on hardware with small amounts of main and graphics memory. The viewer has been tested on gaming- and office-class hardware and achieved interactive frame rates even on a 10-year-old mobile Intel Core2Duo P8700 with an Nvidia GeForce GTS 160M. Since the typical philologist’s workplace is not expected to include high-end gaming hardware, this is an important factor for increasing the usability.
On the software side, the viewer primarily requires a WebGL enabled browser with a minimum set of supported WebGL extensions, such as EXT_frag_depth, WEBGL_depth_texture and WEBGL_draw_buffers, which are necessary for the radiance scaling and picking features. Unlike Microsoft Internet Explorer 11, most newer versions of the Mozilla Firefox and Google Chrome browsers support these features by default. However, it has to be noted that on some older browser versions the features have to be activated manually.
Regarding viewer performance, compression rates and access times are essential elements, which may vary depending on the use case even when using the same compression techniques. Table 2 shows performance measurements for a set of four exemplary cuneiform fragments, which have been selected to cover a wide range of data sizes. Aside from the vertex count, which ranges from below 300,000 to above 77 million vertices, the file sizes for storage in the PLY format and in the compressed nexus format are included. While the PLY format is the preferred format for storing the scanned triangle meshes for geometrical analysis with the CuneiformAnalyser [27], the compressed nexus format can be directly accessed by the 3D web viewer. The quantization of the nexus compression was parameterized to use 18 color bits per vertex, 10 normal bits and an adaptive quantization error boundary of 0.1. With this parameterization, the nexus format typically achieves compression rates of around 4.1 on the cuneiform datasets of the Hethitologie-Portal Mainz, which is a good starting point for web-based data transfer.
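The per-fragment compression rates implied by Table 2 can be checked directly by dividing the PLY size by the nexus size; the individual ratios fall between roughly 3.7 and 4.2, consistent with the reported corpus-wide average of around 4.1. A quick sketch of this arithmetic:

```javascript
// Compression ratio as PLY size divided by compressed nexus size.
function compressionRatio(plyMB, nexusMB) {
  return plyMB / nexusMB;
}

// Values taken from Table 2 of this paper.
const ratios = [
  compressionRatio(11.22, 2.70),    // 104/b
  compressionRatio(36.28, 9.81),    // 7/a
  compressionRatio(131.84, 31.31),  // Bo71/222
  compressionRatio(3089.50, 763.25) // Bronzetafel
];
```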
A more user-relevant view of data compression can be benchmarked by measuring the time and the transfer size at the point where a sufficiently large part of the respective dataset has been loaded, resulting in a usable interactive model of sufficient resolution. As data in the nexus format is loaded progressively and asynchronously, this point is reached long before the complete model has been transferred. Low transfer sizes have proven especially useful at remote locations, like excavation sites or museums, with comparatively slow internet connections or limited transfer volume. With observed loading times of less than 15 s for typically sized models on a speed-limited connection, the 3D viewer achieves usable results.
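The loading times in Table 2 are dominated by the raw transfer time on the 2000 kbit/s test connection, which can be verified with a back-of-the-envelope calculation (ignoring protocol overhead and decompression): 1.4 MB at 2000 kbit/s takes 5.6 s, matching the observed "<6 s" for fragment 104/b, and 20 MB takes 80 s, matching the Bronzetafel row.

```javascript
// Idealized transfer time: megabytes -> megabits -> kilobits, divided
// by the connection speed in kbit/s.
function transferSeconds(sizeMB, kbitPerSec) {
  return (sizeMB * 8 * 1000) / kbitPerSec;
}
```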
The low hardware requirements of the 3D web viewer enable the user to open multiple fragments simultaneously. This allows for a direct visual comparison of script features or for referencing occurrences of cuneiform signs. Here, the dynamic scale object can be used to match the scale of multiple viewer instances. In addition, the measurement tapes can be employed to compare features relevant for fragment joining, such as the distances of paragraph and column separators or the mutual distance of feature points on the fracture faces. Thus, the viewer can be used for virtual join verification, as shown in Figure 8.
For practical use, the 3D viewer enables philologists to conduct high precision measurements on the scans of cuneiform fragments of the Hethitologie-Portal Mainz, with a close integration into established Hittitology research resources. The collation of fragments particularly benefits from the now examinable depth data of the cuneiform artifacts. Subtle features, as shown in Figure 9, can be enhanced using detail improvement techniques like radiance scaling, while at the same time offering realistic shading combined with established examination techniques like a variable light system. The possibility to generate direct links to specific views of the artifacts is an important tool for both collaborative work and the generation of 3D sign reference databases. Within the Hethitologie-Portal Mainz, direct referencing is a feature unique to the 3D content.
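A view link of this kind essentially serializes the camera state into URL parameters so that a collaborator can reopen the exact same view. The sketch below shows one possible shape of such a link; the parameter names and camera fields are hypothetical, chosen only to illustrate the mechanism.

```javascript
// Serialize a camera state (orientation quaternion, look-at target,
// zoom factor) into a shareable viewer URL.
function viewLink(baseUrl, cam) {
  const params = new URLSearchParams({
    q: cam.quaternion.join(","), // arcball orientation as x,y,z,w
    t: cam.target.join(","),     // look-at point in model space
    z: String(cam.zoom)          // zoom factor
  });
  return `${baseUrl}?${params}`;
}
```

On load, the viewer would parse these parameters back and restore the arcball state before the first frame is rendered.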
The given concept provides a good trade-off between open access to the cuneiform data on the one side and the copyright-related issues connected to the data on the other. To address the open access aspect, the fragments can be accessed by the scientific community, together with a sufficient tool set for many collation-related tasks. To accommodate copyright concerns, the access is limited in a way that complicates data extraction aimed at creating counterfeit reproductions of the scanned artifacts. This is realized by multiple techniques, such as providing only incomplete scans of problematic objects and using a data format that, like the NEXUS format, transfers only segmented or low-resolution data. In addition, the data transfer is checked for non-viewer-specific access patterns, and a mesh format obfuscation layer is employed to further complicate external data reconstruction. Beyond that, binding data access to a user registration facilitates blocking suspicious user activities. Overall, the employed safety precautions were perceived to present a sufficient obstacle against the unauthorized creation of digital artifact reproductions to allow publishing the 3D data in this form. It is worth pointing out that this could not have been achieved by simply lowering the resolution of the 3D models. Precise script feature measurements, scan quality assessment and collation tasks require the models to be accessible in full resolution. However, accessing only sections of a model at a time, in a format that takes disproportionate effort to convert into a 3D-printable representation, may represent an acceptable compromise.

6. Conclusions and Outlook

This paper demonstrates a concept for increasing the availability of 3D scanned archaeological data through a web-based interface. Despite existing copyright-induced data restrictions, it is possible to find a working trade-off between legal and copyright aspects on one side and scientific usability on the other. The exemplary realization, in the form of a specialized 3D viewer, was integrated into the Hethitologie-Portal Mainz to promote the usage of 3D collation techniques through a central research resource in the field of Hittitology.
The collaborative features integrated so far represent only a basic set, especially regarding direct user-to-user interaction. This concerns the flexibility of existing features like the measurement mode as well as the integration of additional features for cuneiform script analysis. The current state should, however, provide a solid base for evaluating collaborative concepts for cuneiform script analysis similar to the work of Woolley et al. [32].
In a future viewer version, the measurement tools could be extended to support polyline measurements on curved surfaces, as commonly found on smaller cuneiform tablets. This could be complemented by an input mode that allows placing point and polygonal markers on the tablet surface to flag points of interest and mark cuneiform signs. To yield a significant benefit, these features should be complemented by the possibility to annotate and share these markers. Without server interaction for user management and data persistence, this could be realized by adding an export/import feature for XML-based user content description files, leaving open the future possibility of adding a server-side database for management and storage of user data. Further work on this topic also covers the integration of the large amounts of cuneiform segmentation data generated in the course of the BMBF project ‘3D-Joins und Schriftmetrologie’. The large amount of segmentation and user data then requires the introduction of data layers for content filtering when working on the level of wedges, cuneiform signs, text paragraphs or manuscripts. For the future, it is also planned to integrate automatic methods for 3D script feature extraction and to allow an integrated analysis of photographic (2D) and 3D scans. These features will require methods for query specification and query result presentation.
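One possible shape for such an XML-based marker export is sketched below. The element and attribute names are invented for illustration (no schema is defined in the paper), and the sketch omits XML escaping for brevity.

```javascript
// Serialize a list of user-placed markers into a minimal XML document.
// Each marker has a text label and a 3D position on the tablet surface.
function markersToXml(markers) {
  const items = markers
    .map(m => `  <marker label="${m.label}" x="${m.pos[0]}" y="${m.pos[1]}" z="${m.pos[2]}"/>`)
    .join("\n");
  return `<markers>\n${items}\n</markers>`;
}
```

An import feature would parse this document back into marker objects, allowing users to exchange annotations without any server-side persistence.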

7. Materials

The data repository of the Hethitologie-Portal Mainz including its web-based 3D extension, as described in this paper, can be freely accessed at www.hethiter.net. The code of the 3D viewer is available at the same location under the GNU General Public License. A demo version including a selection of cuneiform fragments is available at webglviewer.cuneiform.de.

Acknowledgments

The work on this paper has been supported by the University of Würzburg, the Technical University of Dortmund and the Gisela and Reinhold Häcker Foundation. This work builds upon parts of the research supported by the German Federal Ministry of Education and Research within the BMBF project ‘3D-Joins und Schriftmetrologie’.

Author Contributions

Denis Fisseler conceived and designed the experiments; Denis Fisseler and Gerfrid G. W. Müller performed the experiments; Denis Fisseler and Gerfrid G. W. Müller analyzed the data; Gerfrid G. W. Müller contributed reagents/materials/analysis tools; Denis Fisseler, Gerfrid G. W. Müller and Frank Weichert wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cuneiform Digital Library Initiative; University of California, Los Angeles; Max-Planck-Institut für Wissenschaftsgeschichte, Berlin, Germany. Available online: https://cdli.ucla.edu/ (accessed on 7 December 2017).
  2. Babeu, A. Rome Wasn’t Digitized in a Day: Building a Cyberinfrastructure for Digital Classics; Council on Library and Information Resources: Washington, DC, USA, 2011. [Google Scholar]
  3. Müller, G.G.W. Hethitologie-Portal Mainz. 2017. Available online: http://www.hethiter.net/ (accessed on 7 December 2017).
  4. Khronos Group. WebGL Specification. 2011. Available online: https://www.khronos.org/webgl/ (accessed on 7 December 2017).
  5. Khronos Group. OpenGL ES 2.0 Specification. 2007. Available online: https://www.khronos.org/registry/OpenGL/specs/es/2.0/es_full_spec_2.0.pdf (accessed on 7 December 2017).
  6. Cabello, R. Three.js. 2010. Available online: https://threejs.org/ (accessed on 7 December 2017).
  7. Di Benedetto, M.; Ponchio, F.; Ganovelli, F.; Scopigno, R. SpiderGL: A JavaScript 3D Graphics Library for Next-Generation WWW. In Proceedings of the 15th International Conference on Web 3D Technology, Los Angeles, CA, USA, 24–25 July 2010. [Google Scholar]
  8. Pinson, C.; Denoyel, A.; Passet, P.A. Sketchfab. 2012. Available online: https://sketchfab.com/ (accessed on 7 December 2017).
  9. Unity Technologies. Unity. 2005. Available online: https://unity3d.com/ (accessed on 7 December 2017).
  10. X3D Working Group. X3D Specifications. 2006. Available online: http://www.web3d.org/standards/ (accessed on 7 December 2017).
  11. Sons, K.; Klein, F.; Rubinstein, D.; Byelozyorov, S.; Slusallek, P. XML3D: Interactive 3D Graphics for the Web. In Proceedings of the 15th International Conference on Web 3D Technology, Los Angeles, CA, USA, 24–25 July 2010; ACM: New York, NY, USA, 2010; pp. 175–184. [Google Scholar]
  12. Behr, J.; Jung, Y.; Franke, T.; Sturm, T. Using Images and Explicit Binary Container for Efficient and Incremental Delivery of Declarative 3D Scenes on the Web. In Proceedings of the 17th International Conference on 3D Web Technology, Los Angeles, CA, USA, 4–5 August 2012; ACM: New York, NY, USA, 2012; pp. 17–25. [Google Scholar]
  13. Geelnard, M. OpenCTM. 2009. Available online: http://openctm.sourceforge.net/ (accessed on 7 December 2017).
  14. Chun, W. WebGL Models: End-to-End. In OpenGL Insights; Cozzi, P., Riccio, C., Eds.; CRC Press: Boca Raton, FL, USA, 2012; pp. 431–453. Available online: http://www.openglinsights.com/ (accessed on 7 December 2017).
  15. Forsyth, T. Linear-Speed Vertex Cache Optimisation. 2006. Available online: https://tomforsyth1000.github.io/papers/fast_vert_cache_opt.html (accessed on 7 December 2017).
  16. Limper, M.; Wagner, S.; Stein, C.; Jung, Y.; Stork, A. Fast Delivery of 3D Web Content: A Case Study. In Proceedings of the 18th International Conference on 3D Web Technology, San Sebastian, Spain, 20–22 June 2013; ACM: New York, NY, USA, 2013; pp. 11–17. [Google Scholar]
  17. Limper, M.; Jung, Y.; Behr, J.; Alexa, M. The POP Buffer: Rapid Progressive Clustering by Geometry Quantization. Comput. Graph. Forum 2013, 32, 197–206. [Google Scholar] [CrossRef]
  18. Ponchio, F. Multiresolution Structures for Interactive Visualization of Very Large 3D Datasets. Ph.D. Thesis, Clausthal University of Technology, Clausthal-Zellerfeld, Germany, 2009. [Google Scholar]
  19. Ponchio, F.; Dellepiane, M. Fast decompression for web-based view-dependent 3D rendering. In Proceedings of the 20th International Conference on 3D Web Technology, Heraklion, Greece, 18–21 June 2015; pp. 199–207. [Google Scholar]
  20. The British Museum. The British Museum’s 3D models on Sketchfab. Available online: https://sketchfab.com/britishmuseum/models (accessed on 7 December 2017).
  21. Autodesk. Autodesk ReMake. 2017. Available online: http://remake.autodesk.com (accessed on 7 December 2017).
  22. Smithsonian Institution. Smithsonian X 3D. 2013. Available online: https://3d.si.edu/ (accessed on 7 December 2017).
  23. VR3D CENTER TECH. VR3D. 2017. Available online: http://vr3d.vn/en/ (accessed on 7 December 2017).
  24. Potenziani, M.; Callieri, M.; Dellepiane, M.; Corsini, M.; Ponchio, F.; Scopigno, R. 3DHOP: 3D Heritage Online Presenter. Comput. Graph. 2015, 52, 129–141. [Google Scholar] [CrossRef]
  25. Mara, H.; Krömker, S.; Jakob, S.; Breuckmann, B. GigaMesh and Gilgamesh—3D Multiscale Integral Invariant Cuneiform Character Extraction. In Proceedings of the 11th International Conference on Virtual Reality, Archaeology and Cultural Heritage, Paris, France, 21–24 September 2010; pp. 131–138. [Google Scholar]
  26. Mara, H. Multi-Scale Integral Invariants for Robust Character Extraction from Irregular Polygon Mesh Data. Ph.D. Thesis, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany, 2012. [Google Scholar]
  27. Fisseler, D.; Weichert, F.; Cammarosano, M.; Müller, G.G.W. Towards an interactive and automated script feature analysis of 3D scanned cuneiform tablets. In Scientific Computing and Cultural Heritage; Springer: New York, NY, USA, 2013. [Google Scholar]
  28. Krebernik, M. Hilprecht Archive Online. 2017. Available online: https://hilprecht.mpiwg-berlin.mpg.de/ (accessed on 7 December 2017).
  29. Cignoni, P.; Visual Computing Lab, ISTI-CNR. MeshLabJS. 2006. Available online: http://www.meshlabjs.net/ (accessed on 7 December 2017).
  30. Cignoni, P.; Corsini, M.; Ranzuglia, G. MeshLab: An Open-Source 3D Mesh Processing System. ERCIM News 2008, 73, 45–46. [Google Scholar]
  31. Collins, T.; Woolley, S.; Ch’ng, E.; Hernandez-Munoz, L.; Gehlken, E.; Nash, D.; Lewis, A.; Hanes, L. A Virtual 3D Cuneiform Tablet Reconstruction Interaction. In Proceedings of the British HCI Conference, Sunderland, UK, 3–6 July 2017; Available online: http://virtualcuneiform.org (accessed on 7 December 2017).
  32. Woolley, S.I.; Ch’ng, E.; Hernandez-Munoz, L.; Gehlken, E.; Collins, T.; Nash, D.; Lewis, A.; Hanes, L. A Collaborative Artefact Reconstruction Environment. In Proceedings of the British HCI Conference, Sunderland, UK, 3–6 July 2017. [Google Scholar]
  33. AICON 3D Systems. OPTOCAT. Available online: https://www.aicon3d.de (accessed on 7 December 2017).
  34. ISTI-CNR. MeshLab. Available online: https://www.meshlab.net (accessed on 7 December 2017).
  35. Pharr, M.; Green, S.; Fernando, R. Ambient Occlusion. In GPU Gems; Addison-Wesley: Boston, MA, USA, 2004; pp. 279–292. [Google Scholar]
  36. Mittring, M. Finding Next Gen: CryEngine 2. In Proceedings of the ACM SIGGRAPH 2007 Courses, San Diego, CA, USA, 5–9 August 2007; ACM: New York, NY, USA, 2007; pp. 97–121. [Google Scholar]
  37. Sattler, M.; Sarlette, R.; Zachmann, G.; Klein, R. Hardware-accelerated ambient occlusion computation. In Proceedings of the Vision, Modeling, and Visualization 2004, Stanford, CA, USA, 16–18 November 2004; Girod, B., Magnor, M., Seidel, H.P., Eds.; Akademische Verlagsgesellschaft Aka GmbH: Berlin, Germany, 2004; pp. 331–338. [Google Scholar]
  38. Vergne, R.; Pacanowski, R.; Barla, P.; Granier, X.; Schlick, C. Radiance Scaling for Versatile Surface Enhancement. In Proceedings of the Symposium on Interactive 3D Graphics and Games, Washington, DC, USA, 19–21 February 2010. [Google Scholar]
  39. Vergne, R.; Pacanowski, R.; Barla, P.; Granier, X.; Schlick, C. Improving Shape Depiction under Arbitrary Rendering. IEEE Trans. Vis. Comput. Graph. 2011, 17, 1071–1081. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Kemen, B. Outerra: Maximizing Depth Buffer Range and Precision. 2012. Available online: http://outerra.blogspot.de/2012/11/maximizing-depth-buffer-range-and.html (accessed on 7 December 2017).
  41. Willems, G.; Verbiest, F.; Moreau, W.; Hameeuw, H.; Van Lerberghe, K.; Van Gool, L. Easy and cost-effective cuneiform digitizing. In Proceedings of the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2005), Prato, Italy, 8–11 November 2005; Eurographics Association: Geneva, Switzerland, 2005; pp. 73–80. [Google Scholar]
  42. Data Arts Team, Google Creative Lab. dat.GUI, JavaScript Controller Library. 2017. Available online: https://github.com/dataarts/dat.gui (accessed on 7 December 2017).
Figure 1. Examples of common cuneiform representations. An autography of cuneiform fragment 262/p is shown in (a), with a corresponding photographic documentation in (b) and a visualization of a 3D scan in (c). Source: [3].
Figure 2. Outline of the viewer concept. The data preparation step extends the scan data and performs the conversion to the nexus format. The server hosts the nexus files, delivers obfuscated data and performs access control. The WebGL client is responsible for 3D visualization and providing a user interface. Sharing views and screenshots are manual collaborative features for exchange between multiple WebGL clients.
Figure 3. A set of different materials, with the corresponding square lit-sphere texture shown in the bottom right, including diffuse clay (a), glossy clay (b) and polished gold (c).
Figure 4. Rendering improvement over Phong shading (a) with lit-sphere shading (b), additional ambient occlusion (c) and additional ambient occlusion and radiance scaling (d) on cuneiform fragment E971.
Figure 5. Section of cuneiform fragment E971 without (a) and with radiance scaling (b) and a stylized rendering (c) of the same section in autography mode.
Figure 6. The graphical user interface of the framework, including quick access buttons on the top left, the dat.GUI [42] based configuration of the rendering pipeline on the top right, the dynamic scale object at the bottom left and several measurement tapes on the model.
Figure 7. Integration of the 3D viewer in a representative query result of the concordance in the Hethitologie-Portal Mainz.
Figure 8. Comparing feature distances on fracture faces in two concurrent instances of the 3D viewer for joining fragments Bo69/331 and Bo8060.
Figure 9. Section with subtle features on fragment E219 without (a) and with radiance scaling (b).
Table 1. Collation accessibility features on different types of original media.
Aspect | Real Object | Photography | 3D Scan
lighting | variable | fixed | variable
viewing direction | variable | fixed | variable
measurements | x, y | x, y | x, y, depth
manual measurement precision | + | + | +
separable texture colors |  |  | 
accessibility | real-world fixed location | web-based | web-based
data size | real object | medium | large
representation completeness | corrosion dependent | + | +
Table 2. Performance measurements on an exemplary set of four cuneiform fragments. Nexus size denotes the size of a compressed nxs file, while loading time and transfer size refer to the measuring point where a subset of data resulting in a usable model has been transferred on a 2000 kbit/s connection.
Fragment | #Vertices | Ply Size | Nexus Size | Transfer Size | Loading Time
104/b | 289,002 | 11.22 MB | 2.70 MB | <1.4 MB | <6 s
7/a | 938,815 | 36.28 MB | 9.81 MB | <2.0 MB | <7 s
Bo71/222 | 3,372,959 | 131.84 MB | 31.31 MB | <2.5 MB | <11 s
Bronzetafel | 77,211,022 | 3089.50 MB | 763.25 MB | <20.0 MB | <80 s

Share and Cite

MDPI and ACS Style

Fisseler, D.; Müller, G.G.W.; Weichert, F. Web-Based Scientific Exploration and Analysis of 3D Scanned Cuneiform Datasets for Collaborative Research. Informatics 2017, 4, 44. https://doi.org/10.3390/informatics4040044
