Article

A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds

Institute of Geodesy, University of Warmia and Mazury in Olsztyn, 10-719 Olsztyn, Poland
* Authors to whom correspondence should be addressed.
Sensors 2018, 18(3), 791; https://doi.org/10.3390/s18030791
Submission received: 20 December 2017 / Revised: 2 March 2018 / Accepted: 4 March 2018 / Published: 6 March 2018
(This article belongs to the Special Issue Sensors for Deformation Monitoring of Large Civil Infrastructures)

Abstract
The paper presents the results of testing a proposed image-based point cloud measuring method for determining the geometric parameters of a railway track. The study was performed on the basis of a configuration of digital images and a reference control network. A DSLR (Digital Single-Lens Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway track. The dense point clouds and the 3D mesh model were generated with two software systems, RealityCapture and PhotoScan, which implement different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. The final 3D mesh models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds generated by dense image matching were compared with the results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition for measurement and inspection of rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.

1. Introduction

Non-invasive 3D measurement techniques in the railway sector use active and passive measurement methods. For such surveys and inspections, mainly mobile systems are used, based on terrestrial laser scanning (TLS) and often supported by sequences of digital images captured by video cameras (e.g., the Lynx Mobile Mapper by Teledyne Optech, Vaughan, Canada). Mobile LiDAR data have served as the basic data source for railway line surveys [1], for automated detection and modelling of rails [2], for automated recognition of railroad infrastructure [3], for automatic extraction of railroad centerlines [4], and for measuring the railway clearance gauge [5].
Vision-based methods using digital sensors and data processing have been applied, among others, for defect detection on rail surfaces [6], for measurement of railroad clearance obstacles [7], for continuous monitoring and assessment of the condition of rail tracks [8], and for automated visual inspection of railroad tracks [9].
The photogrammetric reconstruction of 3D objects is currently performed on the basis of point clouds generated by dense image matching techniques. One of the most commonly implemented methods is the SfM (Structure from Motion) technique with extensions [10]. Dense point clouds and advanced data processing make it possible to recognize and completely reconstruct 3D objects, and then to measure and extract geometric and semantic information [11]. The image-based point cloud measurement method may be used to measure shapes [12] as well as displacements and deformations of objects in static [13], dynamic [14], and multi-temporal processes [15].
An important task and research problem is testing the accuracy of point clouds generated by dense image matching and of the data processing for 3D surface reconstruction in close range, and searching for new areas of practical implementation of this method. This task was initially addressed in our previous work [16]. In this paper, we extend it with the following: the proposed photo configuration, the results of a study on 3D point cloud accuracy, a visual and analytical assessment of the quality and accuracy of the railway track 3D models, a discussion of the obtained results in relation to technical standards, the scheme of a proposed mobile vision system concept, and the final results of testing the proposed image-based point cloud measuring method.
A new possible application is proposed through a novel approach to the use of point clouds and mesh models generated from sequences of digital images for 3D reconstruction and measurement of selected geometric parameters of railway tracks with high accuracy (error m < 1 mm). To the authors’ knowledge, no mobile vision system or other scientific work currently exists that uses image-based point clouds alone as a measuring method for railway purposes and achieves such high accuracy.

2. Geometric Parameters of a Railway Track

Railway tracks are 3D elongated objects which consist of elements with cross-sections similar to the Double T-Bar (DTB). At present, most complete measurements of the geometric parameters of railway tracks are performed using a track geometry measuring trolley (TGMT) [17,18,19,20], coupled with a one-man station. For rectilinear sections of track, the system measures the standard geometric components (Figure 1): the spatial location of the tracks, the track gauge (G), and the cant (C), also called cross level or superelevation. The track gauge is a slope distance measured between the rail heads, typically 14 mm below the running surface, and the cant is the difference in elevation between the rail heads [21,22]. Measurements of the geometric parameters are performed at a defined interval (I) of track distance [22,23]. According to the Polish branch railway instruction Id-14 (D-75) [22], the rail track gauge and cant are measured every 5 m on straight sections and every 2.5 m on curves with a radius of less than 300 m, but the computation of other geometric parameters (twist, gauge gradient) requires a 1.0 m measuring interval. In addition, the guard check gauge and the guard face gauge are measured for turnouts. Examples of mobile measurement platforms include the Trimble GEDO CE [24,25] and the Topcon GG-05 [25] systems. Geometric parameters of tracks may also be measured using a hand track gauge and a cant measuring device, i.e., manually operated devices (MOD) [26], e.g., the Graw DTG Track or the Sola SW 9182.
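As a minimal numeric illustration of these two parameters, the gauge can be computed as the 3D distance between the gauge points on the inner faces of the rail heads, and the cant as the elevation difference between the cant points. The coordinates below are hypothetical values for a 1435 mm standard-gauge track, not data from this study:

```python
import math

def track_gauge(gp_left, gp_right):
    # Gauge: slope distance between the gauge points on the inner
    # faces of the rail heads (14 mm below the running surface).
    return math.dist(gp_left, gp_right)

def track_cant(cp_left, cp_right):
    # Cant (cross level): difference in elevation (Z) between the
    # cant points on the two rail heads.
    return cp_left[2] - cp_right[2]

# Hypothetical gauge and cant points (X, Y, Z in metres)
gp_l, gp_r = (0.0, 0.0, 0.145), (1.435, 0.0, 0.145)
cp_l, cp_r = (-0.035, 0.0, 0.1590), (1.470, 0.0, 0.1562)

print(track_gauge(gp_l, gp_r))  # gauge in metres
print(track_cant(cp_l, cp_r))   # cant in metres
```

With these invented coordinates, the gauge is 1.435 m and the cant 2.8 mm.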

3. Acquisition, Measurement and Processing of Digital Images Set

The presented study was performed on the basis of a configuration of terrestrial photos and a reference control network, which closely simulated the operating conditions of the proposed concept of a mobile vision platform for measuring selected geometric parameters of a railway track.

3.1. Test Object

The test object was a rectilinear section of railway track, 1 m in length. The dataset adopted for testing included the signalized control points and uncalibrated imagery for the generation of photogrammetric dense point clouds.
Artificially signalized control points (CrlP) were placed on each rail head in 3-1-3 groups. The CrlP set approximately simulated a reference control point template (constant, fixed). The distances between the control points were measured in all possible combinations using an ACCUD ABS 118-080-11 caliper (2 m in length) with a practical reading accuracy of SD = 0.05 mm. Then, a free adjustment of the linear network was performed using the WinKalk application [27], yielding a mean planar error of SXY = 0.22 mm for the 14 control points. Elevations of the control points were determined by precise levelling (PL) with an accuracy of SZ = 0.10 mm (Leica DNA03) [28]. The geodetically measured set of control points was used for point cloud orientation and calibration.
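The adjustment of such a distance network can be sketched as an iterative least-squares (Gauss-Newton) solution. The sketch below fixes two points to define the datum, which is a simplification of the free-network adjustment actually performed in WinKalk, and uses an invented four-point network rather than the paper's 14-point data:

```python
import numpy as np

def adjust_distance_network(xy0, obs, fixed=(0, 1), iters=10):
    """Gauss-Newton least-squares adjustment of a 2D distance network.

    xy0   : (n, 2) approximate point coordinates
    obs   : list of (i, j, measured distance)
    fixed : indices of two points held fixed to define the datum
            (a simplification of a true free-network adjustment)
    """
    xy = np.array(xy0, dtype=float)
    free = [k for k in range(len(xy)) if k not in fixed]
    col = {p: c for c, p in enumerate(free)}
    for _ in range(iters):
        A = np.zeros((len(obs), 2 * len(free)))
        w = np.zeros(len(obs))
        for r, (i, j, d) in enumerate(obs):
            dx, dy = xy[j] - xy[i]
            dij = np.hypot(dx, dy)
            w[r] = d - dij                      # observed minus computed
            if i in col:
                A[r, 2 * col[i]: 2 * col[i] + 2] = [-dx / dij, -dy / dij]
            if j in col:
                A[r, 2 * col[j]: 2 * col[j] + 2] = [dx / dij, dy / dij]
        corr, *_ = np.linalg.lstsq(A, w, rcond=None)
        xy[free] += corr.reshape(-1, 2)
    return xy

# Invented 4-point network (a 1 m square) with all 6 distances observed
true = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
obs = [(i, j, float(np.hypot(*(true[j] - true[i]))))
       for i in range(4) for j in range(i + 1, 4)]
approx = true + np.array([[0, 0], [0, 0], [0.01, -0.02], [-0.015, 0.01]])
adjusted = adjust_distance_network(approx, obs)
```

Starting from approximate coordinates perturbed by 1–2 cm, the adjustment recovers the consistent network geometry.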
At the initial stage of the research, many different configurations of multiple photos were tested (see, for example, [16]) to obtain the best results of image-based point cloud generation in commonly used software systems, and only the best one is presented in this paper. The following optimization criteria were assumed: the smallest number of photos, the deviations on control points, and the quality of the generated point clouds for 3D rail reconstruction. The authors chose a configuration of six photos (nearly vertical, oblique, and oblique–convergent).
The digital SLR (Single-Lens-Reflex) Nikon D5100 camera, equipped with a CMOS (Complementary Metal-Oxide Semiconductor) sensor (23.6 × 15.6 mm size, resolution 4928 × 3264, pixel size p’xy = 4.8 μm) and a lens with optical stabilization [29], focused (focal f = 18 mm) on imaging distance YF = 2.0 m, was used to acquire the photos (Figure 2).
The research was characterized by the following real conditions of photogrammetric acquisition and processing of digital images:
  • long-shaped object,
  • six photos in two strips,
  • short focal length of the wide-angle camera lens,
  • full longitudinal overlap,
  • base/distance ratio υ = 1/2 to 1,
  • use of a reference system with 14 control points (SXYZ = 0.24 mm),
  • self-calibration of DSLR Nikon D5100 camera 16 MP (megapixels),
  • image radiometric quality dependent on weather (dry and wet rails) and lighting conditions.
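From the imaging geometry listed above, an a priori depth precision can be estimated with the standard normal-case stereo formula σ_Z = Y²/(f·B)·σ_img. The sketch below plugs in the stated values (Y = 2.0 m, f = 18 mm, 4.8 μm pixel, 0.15 pixel image measurement accuracy); it is only a rough order-of-magnitude estimate, not the authors' accuracy analysis:

```python
def stereo_depth_precision(Y, f, B, sigma_img):
    # Normal-case stereo: sigma_Z = Y**2 / (f * B) * sigma_img,
    # with all lengths (including image measurement accuracy) in metres.
    return Y ** 2 / (f * B) * sigma_img

Y = 2.0                        # imaging distance [m]
f = 0.018                      # focal length [m]
sigma_img = 0.15 * 4.8e-6      # 0.15 pixel at a 4.8 um pixel pitch [m]

for ratio in (0.5, 1.0):       # base/distance ratio from 1/2 to 1
    B = ratio * Y              # stereo base [m]
    sz = stereo_depth_precision(Y, f, B, sigma_img)
    print(f"B/Y = {ratio}: sigma_Z is roughly {sz * 1000:.2f} mm")
```

Under these assumptions the expected depth precision is on the order of 0.1–0.2 mm, consistent with the sub-millimetre accuracy reported later.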
Depending on the local imaging scale [30], the diameter of the signalized control points in the images varied from 7 to 21 pixels. The signalized control points were measured using the center-weighted method (centroid operator) with the authors’ original Matching software [31]. The calculated mean subpixel accuracy of this automatic measurement was sx’y’ = 0.15 pixel.
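The center-weighted (centroid) measurement can be sketched as an intensity-weighted mean of the pixel coordinates inside a patch around the target. This is a generic illustration on a synthetic spot, not the original Matching software:

```python
import numpy as np

def centroid_subpixel(patch):
    # Intensity-weighted centroid of a patch containing a bright target;
    # returns subpixel (row, col) coordinates within the patch.
    patch = patch.astype(float)
    patch -= patch.min()                    # remove the background level
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Synthetic 9 x 9 Gaussian spot centred at row 4.3, column 3.8
rr, cc = np.indices((9, 9))
spot = np.exp(-((rr - 4.3) ** 2 + (cc - 3.8) ** 2) / 2.0)
print(centroid_subpixel(spot))
```

For a well-exposed, roughly symmetric target the centroid recovers the subpixel centre to a small fraction of a pixel, in line with the 0.15 pixel accuracy quoted above.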

3.2. Point Clouds and Mesh Generation

The practical verification of the quality and reliability of the 3D rail track model generation was performed using two methods implemented in two different software packages:
  • the Scalable Multi-View Stereo (MVS) in RealityCapture v. 1.0.2.2256 RC (commercial license) of Capturing Reality s.r.o [32,33,34],
  • the modified stereo Semi-Global Matching (SGM) in PhotoScan v. 1.2.6 (commercial license) of Agisoft LLC [12,35,36].
RealityCapture is a relatively new application on the market but has received very good user reviews, while PhotoScan is a well-known and very popular suite. We had tested them against some other applications before [16,37] and subsequently decided to use them to verify our measuring method. Both applications are dedicated primarily to the reconstruction of 3D objects from a large number of variously oriented digital images. They also allow automatic determination of the camera interior orientation parameters and the spatial orientation elements of the photos, generation of dense point clouds and a 3D mesh model, and finally of a DSM (Digital Surface Model) and an orthomosaic.
Agisoft PhotoScan was used by the authors as an additional computation and research tool to verify the digital processing results from RealityCapture. This methodology provides additional information about the reliability and functionality of the applications in the field of railway optical metrology, but benchmarking is not the main focus of this paper.
The selection of functions and the configuration and modification of data processing options are performed interactively and manually in the RealityCapture and PhotoScan software. The different functionalities of the two tools required different methodologies for the advanced digital processing. In both cases, processing parameters were set without image scaling and with an unlimited number of feature objects (the “Max. features objects” option), which means that the computations take longer and the applications find as many key points as possible, albeit with a larger number of less reliable points. These should be rejected at the tie point estimation stage [38].
Higher accuracy and automation of processing could possibly be achieved if structured light were used to project artificial profiles onto the rail track (the area of interest would then be unambiguously defined). This should ensure optimum exposure parameters, independently of external illumination conditions.
Therefore, the computations of the 3D rail track models were performed in two variants of dense point clouds generation:
  • Basic model (BM) processed without an externally defined profile on the track (Section 4.1),
  • Profile model (PM) performed with an externally defined (in situ) artificial profile that simulated structured light on the track (Section 4.2).
The study workflow of the geodetic and photogrammetric measurements and processing is presented in Figure 3.

3.2.1. RealityCapture Settings

Image matching, construction, and coloring of a model are required to generate a mesh draped on the point cloud in RealityCapture [37]. Image matching was performed using the “Alignment” option. Detector sensitivity was set to “Ultra”, max. features per megapixel to 800,000, max. features per image to 3 million, max. feature reprojection error to 2.0 pixels, image overlap to “High”, and the image downscale factor to 1.0. The minimal distance between two vertices was d = 0.1 mm. The camera calibration model was selected by defining the number and approximate values of the parameters to be determined (f, x’o, y’o, K1 ÷ K3, P1, P2, C1, C2). The construction of the model was performed with the “Normal” variant and manual settings identical to the parameters of the “High detail” option. This made it possible to avoid CUDA driver errors on the old workstation graphics card; after a RealityCapture update, the problem no longer appears (older CUDA graphics cards are also supported). During model generation, the image downscale factor for depth map calculation and model colorizing was set to 1.0, and the coloring method was set to multi-band.

3.2.2. PhotoScan Settings

When the PhotoScan application is used to generate a 3D model, image matching, generation of a dense point cloud, construction of a model, and texture building must be performed in several steps, as shown in numerous papers [30,36]. For these purposes, the processing accuracy was set to “Highest”, with tie points searched for in each possible stereo pair (the “Pair preselection: disabled” option). The key point limit was set to 3 million and the tie point limit to 800,000. The dense point cloud was generated at the “Ultra high” setting with aggressive depth filtering. The mesh model was constructed using the “Arbitrary” surface type and the “High” face count, without interpolation. Texture building was performed with the “Generic” mapping model and the “Average” blending mode, which means that PhotoScan tries to create as uniform a texture as possible for arbitrary geometry and uses the weighted average value of all pixels from the individual photos for texturing [38] (p. 18).

3.3. Mesh Filtering and Measurement

The 3D mesh models of the test section of railway track, generated from dense point clouds with the RealityCapture and PhotoScan software, were further processed using MeshLab [39,40]. Model filtration consisted of eliminating mesh edges longer than 2 cm and removing the areas surrounding the test section. The final section of the railway track model was limited to the width of the railway sleeper and to the distance between three successive sleepers.
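The edge-length filtering step can be sketched in a few lines. The function below drops every triangle with an edge above the 2 cm threshold; it is a generic illustration of the idea, not the MeshLab filter itself:

```python
import numpy as np

def drop_long_edges(vertices, faces, max_edge=0.02):
    # Keep only triangles whose longest edge is at most max_edge
    # (2 cm here, matching the threshold used in the filtration).
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    e1 = np.linalg.norm(v[f[:, 0]] - v[f[:, 1]], axis=1)
    e2 = np.linalg.norm(v[f[:, 1]] - v[f[:, 2]], axis=1)
    e3 = np.linalg.norm(v[f[:, 2]] - v[f[:, 0]], axis=1)
    keep = np.maximum(np.maximum(e1, e2), e3) <= max_edge
    return f[keep]

# Tiny example mesh: the second triangle has a 0.5 m edge and is dropped
verts = [[0, 0, 0], [0.01, 0, 0], [0, 0.01, 0], [0.5, 0, 0]]
faces = [[0, 1, 2], [0, 1, 3]]
kept = drop_long_edges(verts, faces)
print(kept)
```

Long edges typically bridge gaps between disconnected surfaces, so removing them separates the rails from spurious background geometry.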
The track gauge and cant values were determined for the railway track section located in the center of the test section using CloudCompare [41]. Within the area where the control points were located, a tracing line was marked along the rail for the section d = 1.0 m, and cross-sections were created at the interval Δd = 0.1 m. The areas of interest in the cross-sections were the cant points (CP), located in the center of the rail heads, and the gauge points (GP), defined on the side of the rail heads, 15 mm below the CP point. The scheme for determining the CP and GP points is presented in Figure 4. The CP points were determined first, followed by the GP points.
The point on the rail head closest to the rail center and the location of the GP point were determined manually by the user. The difference in elevation between the CP points gave the cant value. The gauge value was calculated analytically as the distance between the GP points. The measured gauge and cant values were compared with the results of direct geodetic measurements.
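A simplified, automated version of this CP/GP selection (which was performed manually in CloudCompare in this study) could look as follows. The synthetic rail-head slice and the selection rules below are assumptions made purely for illustration:

```python
import numpy as np

def cross_section_points(section, side):
    # section: (N, 3) slice of one rail head, X across the track, Z up.
    # CP: highest point of the slice (running surface, simplified);
    # GP: inner-face point closest to 15 mm below the CP elevation.
    # side: +1 if the inner face is towards +X (left rail), -1 otherwise.
    cp = section[np.argmax(section[:, 2])]
    inner = section[side * section[:, 0] > side * cp[0]]
    gp = inner[np.argmin(np.abs(inner[:, 2] - (cp[2] - 0.015)))]
    return cp, gp

# Synthetic left-rail slice: slightly crowned head top plus inner face
top = np.array([[x, 0.0, 0.160 - 0.5 * x ** 2]
                for x in np.linspace(-0.03, 0.03, 7)])
face = np.array([[0.03, 0.0, z] for z in np.linspace(0.159, 0.130, 30)])
cp, gp = cross_section_points(np.vstack([top, face]), side=+1)
print(cp, gp)
```

With the CP and GP extracted for both rails, the cant follows as the CP elevation difference and the gauge as the GP-to-GP distance.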

4. Results of the 3D Model Processing

RealityCapture and Agisoft PhotoScan applications were used for processing on the workstation with the processor Intel® Core™ i7-950 (8 MB Cache, 3.06 GHz), 24 GB RAM DDR3-1333 MHz memory, NVIDIA® Quadro® 4000 graphic card and SATA3 7200 rpm disk.

4.1. Basic Model Analysis

Table 1 presents parameters of processing of generated point clouds and 3D mesh models. Calculated RMS values are related to deviations on control points of the 3D model.
Visualizations of 3D mesh models, generated with the use of MVS and SGM techniques implemented in RealityCapture and PhotoScan applications, respectively, are presented in Figure 5 and Figure 6.
The processing time of the railway track reconstruction was shorter in the case of RealityCapture. In the authors’ opinion, this is related to the algorithms used by the applications. PhotoScan uses pairwise matching [38], while RealityCapture uses loop-closing techniques based on SURF-based visual words [42] and tf-idf scores [43]. Furthermore, in RealityCapture most of the computations are performed by GPU cores, while PhotoScan uses the CPU and random-access memory (RAM), with only some calculations performed on the GPU.
The 3D surface reconstruction obtained using the RealityCapture application is of better visual quality, except for model noise in the fragments containing the control point (CrlP) triplets. This noise was apparently caused by the amorphous texture of the background of the signalized control points (Figure 7).
PhotoScan generated a denser point cloud based on a smaller number of tie points. Using the “highest quality” processing parameters does not eliminate defects in the 3D mesh model. Measurements on the model were performed using the CloudCompare application. During analysis of the 3D model generated using PhotoScan, double edges at an approximate distance of 3 mm were observed on the rail head (Figure 8).
Table 2 presents the track gauge (G) and cant (C) values measured directly in the field using a caliper and precise levelling (PL), the values measured (in CloudCompare) on the 3D models generated in the RealityCapture (GRC, CRC) and PhotoScan (GPS, CPS) software, and additionally their differences (∆GRC, ∆CRC, ∆GPS, ∆CPS). On the 1 m section of the railway track, cross-sections were defined at 0.1 m intervals; these do not coincide with the points used for global geo-referencing of the 3D reconstruction (excluding CrlP 1 and 7).

4.2. Profile Model Analysis

Based on the visual analysis and the time needed to process the BMs, the MVS algorithm (RealityCapture) appears to provide more stable computation, and it was the only algorithm used in this part of the study. For the PM computation, the parameters and results of the digital processing were almost identical to those of the BM; they are shown in Table 1.
Figure 9 presents a 3D extended model of the railway track with the simulated structured light (SSL) profile. It also shows how the SSL profile coincides with the computed cross-section that is perpendicular to the rail.
In the next step, the two models, BM and PM, were compared (Figure 10 and Figure 11). The comparison is analyzed in more detail in Figure 11, where both 3D models are superimposed. Places where fragments of the BM cover the PM are marked in green.
In places on the BM where the running surface was generated with some deformations, a refinement of the rail head edges can be observed on the PM (Figure 12). The measured distances at the gauge points between the 3D models are ΔGP (L) ≈ 1.0 mm for the left rail and ΔGP (R) ≈ 0.1 mm for the right rail, and ΔCP ≤ 0.3 mm for the running surfaces.
Track gauge and cant measurements on 3D models compared to geodetic survey are presented in Table 3.
The obtained RMS differences of the BM and PM geometric parameters in the horizontal (gauge G) and vertical (cant C) planes, in relation to the geodetic measurements made with the caliper and precise levelling, amount to RMS∆BM = 0.38 mm and RMS∆PM = 0.48 mm, respectively. These differences are statistically insignificant.
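For reference, the RMS of such differences is the square root of the mean squared difference over all measured cross-sections. The sample values below are hypothetical, not the paper's data:

```python
import math

def rms(diffs):
    # Root mean square of a set of differences (model minus survey)
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical per-section gauge differences in millimetres
print(rms([0.30, -0.50, 0.20, 0.45, -0.35]))
```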

5. Discussion

The experimental study of the proposed concept of an image-based point cloud measuring method for determining the geometric parameters of a railway track was performed on the basis of a photogrammetric configuration of six terrestrial photos (nearly vertical, oblique, and oblique–convergent). Under these conditions, the measurement potential of close-range 3D object reconstruction using photogrammetrically generated point clouds was also tested.
Two methods, implemented in the RealityCapture and PhotoScan applications, respectively, were used to generate the dense point clouds and the 3D mesh models of the test section of railway track. The study found that appropriate 3D models can be generated using point clouds from dense image matching. In addition, a discussion of the results of the processing and measurements with both applications is presented. The processing results proved the greater usefulness of the RealityCapture application for railway purposes: it generated the 3D model faster, with higher visual quality and greater robustness.
The comparison of the basic model and the model produced with a simulated structured light profile on the track shows that both modes can be used to compute the geometric parameters of the track. The main difference is the independence from lighting and reflection conditions when structured light is used to define the artificial profile.
The test results of the track gauge and cant values obtained using the image-based point cloud measuring method were compared with the results of direct precise geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) planes amounted to RMS∆ < 0.45 mm, which is significantly lower than the allowable values defined in the European technical norm EN 13848-4:2011 (track gauge ±1 mm, cant ±2.5 mm) [23]. The proposed measuring method also fulfils the tolerances (track gauge and cant ±2 mm) of the TGMT system that was used to measure the rails of the Zürich–Thalwil Tunnel, where the projected operational speed was 200 km/h [44] (p. 141). Furthermore, in comparison [45] with other geodetic measurements using fixed points, chord measuring systems, and inertial measuring systems, described in [46], the results obtained with the photogrammetric method are satisfactory. They are also compliant with the EN 13848 series [21,23,45,47,48]. Based on the track quality class (TQC) and track quality index (TQI) parameters [48], which depend on speed and vehicle ride quality, the proposed measuring method can be used for the evaluation of 5th class track geometry quality.
The first positive experimental results of the presented novel approach encourage the authors to continue work on the construction of a dedicated, low-cost mobile vision measuring system. The proposed designs for implementing our method are presented in Figure 13, which also shows two possible uses of the measuring method: as a mobile trolley and as a railway platform. Both use reference templates to compute the true geometry of the model. The possible placement and layout of the control network is included (item six in Figure 13), but it is needed only for the transformation from the local railway coordinate system (mileage) to the national coordinate system or ETRF2000. Because track inspection measurements are geolocated by mileage, the control network is mentioned only as an extension, e.g., for drawing maps or for future railway line modernization. The standard layout of the control network is two points every 500 m, which is sufficient for this purpose.

6. Conclusions

The presented results concern the testing of a measuring method that uses point clouds from dense image matching for the 3D reconstruction and measurement of selected geometric parameters (gauge and cant) of a railway track. The performed tests have initiated the design of a mobile vision measuring platform. This concept represents an alternative to common railway measuring systems. A continuously working system could in practice be a competitive solution to the currently used manually operated devices (MOD) [26], expensive geodetic mobile measurement trolleys (TGMT) [25,44], dynamic inertial measuring systems [49], and other vehicle reaction measuring systems [46]. Moreover, the photogrammetric method enables faster detection of track geometry irregularities and the acquisition of numeric and semantic big data about the railway and its surrounding area.
The achieved accuracy meets the accuracy condition of measurements and inspection of railway tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) [22] and the European technical norm EN 13848-4:2011 [23].

Acknowledgments

Open Access publication was financially supported by the University of Warmia and Mazury in Olsztyn funds (statutory activity).

Author Contributions

G.G. and P.S. conceived and designed the experiments; G.G. and P.S. performed the experiments; G.G. and P.S. analyzed the data; G.G. and P.S. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Leslar, M.; Perry, G.; Mcnease, K. Using mobile lidar to survey a railway line for asset inventory. In Proceedings of the American Society for Photogrammetry and Remote Sensing (ASPRS) Annual Conference, San Diego, CA, USA, 26–30 April 2010; pp. 1–8. [Google Scholar]
  2. Elberink, S.O.; Khoshelham, K.; Arastounia, M.; Benito, D.D. Rail track detection and modelling in mobile laser scanner data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-5/W2, 223–228. [Google Scholar] [CrossRef]
  3. Arastounia, M. Automated recognition of railroad infrastructure in rural areas from LiDAR data. Remote Sens. 2015, 7, 14916–14938. [Google Scholar] [CrossRef]
  4. Elberink, S.O.; Khoshelham, K. Automatic extraction of railroad centerlines from mobile laser scanning data. Remote Sens. 2015, 7, 5565–5583. [Google Scholar] [CrossRef]
  5. Mikrut, S.; Kohut, P.; Pyka, K.; Tokarczyk, R.; Barszcz, T.; Uhl, T. Mobile laser scanning systems for measuring the clearance gauge of railways: State of play, testing and outlook. Sensors 2016. [Google Scholar] [CrossRef] [PubMed]
  6. Deutschl, E.; Gasser, C.; Niel, A.; Werschonig, J. Defect detection on rail surfaces by a vision based system. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 507–511. [Google Scholar] [CrossRef]
  7. Schewe, H.; Holl, J.; Gründig, L. LIMEZ—Photogrammetric measurement of railroad clearance obstacles. In Proceedings of the 3rd Turkish-German Joint Geodetic Days—Towards a Digital Age, Istanbul, Turkey, 1–4 June 1999; pp. 721–727. [Google Scholar]
  8. Kaleli, F.; Akgul, Y.S. Vision-based railroad track extraction using dynamic programming. In Proceedings of the 12th International IEEE Conference on Intelligent Transportation Systems, St. Louis, MO, USA, 4–7 October 2009; pp. 1–6. [Google Scholar] [CrossRef]
  9. Sawadisavi, S.; Edwards, J.; Resendiz, E.; Hart, J.; Barkan, C.; Ahuja, N. Machine-vision inspection of railroad track. In Proceedings of the Conference on American Railway and Maintenance of Way Association (AREMA), Salt Lake City, UT, USA, 21–24 September 2008; pp. 1–23. [Google Scholar]
  10. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  11. Jazayeri, I.; Fraser, C.S.; Cronk, S. Automated 3D object reconstruction via multi-image close-range photogrammetry. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, XXXVIII, 305–310. [Google Scholar]
  12. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F.; Bruno Kessler Foundation. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166. [Google Scholar] [CrossRef]
  13. Weinmann, M. Reconstruction and Analysis of 3D Scenes. From Irregularly Distributed 3D Points to Object Classes, 1st ed.; Springer: Berlin, Germany, 2016; p. 233. ISBN 978-3-319-29244-1. [Google Scholar]
  14. Ribeiro, D.; Calçada, R.; Ferreira, J.; Martins, T. Non-Contact measurement of the dynamic displacement of railway bridges using an advanced video-based system. Eng. Struct. 2014, 75, 164–180. [Google Scholar] [CrossRef]
  15. Kim, M.-S. Measurement of the Wheel-rail Relative Displacement using the Image Processing Algorithm for the Active Steering Wheelsets. Int. J. Syst. Appl. Eng. Dev. 2012, 6, 114–121. [Google Scholar]
  16. Gabara, G.; Sawicki, P. Study on 3D point clouds accuracy of elongated object reconstruction in close range—Comparison of different software. In Proceedings of the 10th International Conference “Environmental Engineering”, Vilnius, Lithuania, 27–28 April 2017; pp. 1–7. [Google Scholar] [CrossRef]
  17. Jiang, Q.; Wu, W.; Li, Y.; Jiang, M. Millimeter scale track irregularity surveying based on ZUPT-aided INS with sub-decimeter scale landmarks. Sensors 2017, 17. [Google Scholar] [CrossRef]
  18. Glaus, R. Kinematic Track Surveying by Means of a Multisensory Platform. Ph.D. Thesis, Swiss Federal Institute of Technology Zurich, Switzerland, 2006. [Google Scholar] [CrossRef]
  19. Ižvolta, L.; Šmalo, M. Assessment of the Track Geometry Quality from the Aspect of Safe and Reliable Operation of the Railway Track. Procedia Eng. 2015, 111, 344–350. [Google Scholar] [CrossRef]
  20. Li, Y.; Cen, M.; Zhang, T. A novel data processing method for sectional rail measurements to detect track irregularities in high-speed railways. Proc. Inst. Mech. Eng. Part F J. Rail Rapid Transit 2016. [Google Scholar] [CrossRef]
  21. CEN/TC 256. EN 13848-1:2003+A1:2008 Railway Applications—Track—Track Geometry Quality—Part 1: Characterisation of Track Geometry; CEN: Brussels, Belgium, 2008. [Google Scholar]
  22. PKP Polish Railway Lines S.A. Instruction of Taking Measurements, Testing and Evaluating the State of Rails Id-14 (D-75). Available online: http://www.plk-sa.pl/files/public/user_upload/pdf/Akty_prawne_i_przepisy/Instrukcje/Podglad/Id-14.pdf (accessed on 5 October 2017). (In Polish).
  23. CEN/TC 256. EN 13848-4:2011 Railway Applications—Track—Track Geometry Quality—Part 4: Measuring Systems—Manual and Lightweight Devices; CEN: Brussels, Belgium, 2011. [Google Scholar]
  24. Pinter, O. Using of Trimble® GEDO CE System for Absolute Track Positioning. Master’s Thesis, Department of Special Geodesy, Faculty of Civil Engineering, Czech Technical University, Prague, Czech, 2012. [Google Scholar]
  25. Engstrand, A. Railway Surveying—A Case Study of the GRP 5000. Master’s Thesis, Division of Geodesy and Geoinformatics, Royal Institute of Technology, Stockholm, Sweden, 2011. [Google Scholar]
  26. GRAW DTG. Available online: http://www.graw.com/en/track-measurement/digital-track-and-turnout-gauge-dtg.html (accessed on 25 January 2018).
  27. Winkalk. Available online: http://www.coder.atomnet.pl/english.htm (accessed on 25 January 2018).
  28. Bureš, J.; Švábenský, O.; Vitula, A. Engineering measurements in sphere of building constructions and materials testing. Rep. Geodesy 2011, 90, 59–67. [Google Scholar]
  29. Chen, J.-F.; Fuh, C.-S. Image stabilization with best shot selector and super resolution reconstruction. In Proceedings of the 18th IPPR Conference on Computer Vision, Graphics and Image Processing (CVGIP 2005), Taipei, Taiwan, 24–26 April 2005; pp. 1215–1222. [Google Scholar]
  30. Pepe, M.; Prezioso, G. Two Approaches for Dense DSM Generation from Aerial Digital Oblique Camera System. In Proceedings of the 2nd International Conference on Geographical Information Systems Theory, Applications and Management (GISTAM 2016), Rome, Italy, 26–27 April 2016; Rocha, J.G., Cédric, G., Eds.; 2016; pp. 63–70. [Google Scholar] [CrossRef]
  31. Sawicki, P.; Ostrowski, B. Research on chosen matching methods for the measurement of points on digital close range images. Ann. Geomat. 2005, III/2, 135–144. [Google Scholar]
  32. Jancosek, M.; Pajdla, T. Segmentation based Multi-View Stereo. In Proceedings of the Computer Vision Winter Workshop, Eibiswald, Austria, 4–6 February 2009; Ion, A., Kropatsch, W.G., Eds.; PRIP; Vienna University of Technology: Vienna, Austria, 2009; pp. 1–6. [Google Scholar]
  33. Jancosek, M.; Shekhovtsov, A.; Pajdla, T. Scalable Multi-View Stereo. In Proceedings of the IEEE 12th International Conference on Computer Vision, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; pp. 1526–1533. [Google Scholar] [CrossRef]
  34. Heller, J.; Havlena, M.; Jancosek, M.; Torii, A.; Pajdla, T. 3D reconstruction from photographs by CMP SfM web service. In Proceedings of the IEEE 14th IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 18–22 May 2015; pp. 30–34. [Google Scholar] [CrossRef]
  35. Jaud, M.; Passot, S.; Le Bivic, R.; Delacourt, C.; Grandjean, P.; Le Dantec, N. Assessing the accuracy of high resolution digital surface models computed by PhotoScan® and MicMac® in sub-optimal survey conditions. Remote Sens. 2016. [Google Scholar] [CrossRef]
  36. Niederheiser, R.; Mokroš, M.; Lange, J.; Petschko, H.; Prasicek, G.; Elberink, S.O. Deriving 3D point clouds from terrestrial photographs—Comparison of different sensors and software. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B5, 685–692. [Google Scholar] [CrossRef]
  37. Gabara, G.; Sawicki, P. Accuracy study of close range 3D object reconstruction based on point clouds. In Proceedings of the IEEE 2017 Baltic Geodetic Congress (BGC Geomatics), Gdańsk, Poland, 22–25 June 2017; pp. 25–29. [Google Scholar] [CrossRef]
  38. Agisoft LLC. Agisoft PhotoScan User Manual Professional Edition, Version 1.2. Available online: http://www.agisoft.com/pdf/photoscan-pro_1_2_en.pdf (accessed on 5 October 2017).
  39. Cignoni, P.; Callieri, M.; Corsini, M.; Dellepiane, M.; Ganovelli, F.; Ranzuglia, G. MeshLab: An Open-Source Mesh Processing Tool. In Proceedings of the Eurographics Italian Chapter Conference, Salerno, Italy, 2–4 July 2008; Scarano, V., De Chiara, R., Erra, U., Eds.; The Eurographics Association: Goslar, Germany, 2008; pp. 129–136. [Google Scholar] [CrossRef]
  40. MeshLab (v. 1.3.3) (GPL Software). Available online: http://www.meshlab.net (accessed on 27 January 2016).
  41. CloudCompare (v. 2.8) (GPL Software). Available online: http://www.cloudcompare.org (accessed on 13 May 2016).
  42. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In Computer Vision—ECCV 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417. [Google Scholar]
  43. Sivic, J.; Zisserman, A. Video Google: Efficient visual search of videos. In Toward Category-Level Object Recognition; Ponce, J., Hebert, M., Schmid, C., Zisserman, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany; pp. 127–144. [CrossRef]
  44. Glaus, R. The Swiss Trolley—A Modular System for Track Surveying. Geodätisch—Geophysikalische Arbeiten in der Schweiz, 1st ed.; Akademie der Naturwissenschaften Schweiz, ETH Hönggerberg: Zürich, Switzerland, 2006; p. 199. ISBN 3-908440-14-9. [Google Scholar]
  45. CEN/TC 256. EN 13848-2:2006 Railway Applications—Track—Track Geometry Quality—Part 2: Measuring Systems—Track Recording Vehicles; CEN: Brussels, Belgium, 2014. [Google Scholar]
  46. Haigermoser, A.; Luber, B.; Rauh, J.; Gräfe, G. Road and track irregularities: Measurement, assessment and simulation. Veh. Syst. Dyn. 2015, 53, 878–957. [Google Scholar] [CrossRef]
  47. CEN/TC 256. EN 13848-5:2017 Railway Applications—Track—Track Geometry Quality—Part 5: Geometric Quality Levels—Plain Line, Switches and Crossings; CEN: Brussels, Belgium, 2017. [Google Scholar]
  48. CEN/TC 256. EN 13848-6:2014 Railway Applications—Track—Track Geometry quality—Part 6: Characterisation of Track Geometry Quality; CEN: Brussels, Belgium, 2014. [Google Scholar]
  49. Baier, M.; Rulka, W.; Abel, D. Model Based Measurement of Railway Track Irregularities. IFAC Proc. Vol. 2009, 42, 257–262. [Google Scholar] [CrossRef]
Figure 1. Graphic definition of the Cant (C), Gauge (G), and measurement interval (I).
Figure 2. Configuration of six terrestrial photos of the tested section of the track: nearly vertical photo (b), oblique photos (a,c,e), and oblique–convergent photos (d,f).
Figure 3. Measurement and processing workflow in the tests.
Figure 4. Cant point (CP) and gauge point (GP) definition and measurement of the track gauge and cant in the CloudCompare application.
Figure 5. Generated 3D mesh model of the railway track (top and solid view) in RealityCapture (a,c) and PhotoScan (b,d) software.
Figure 6. Fragment of the generated 3D mesh model (solid view), shown for the left rail, in RealityCapture (a) and PhotoScan (b) software.
Figure 7. Holes at the control point (CrlP) triplet in the 3D mesh model generated by the RealityCapture software.
Figure 8. Left (a) and right (b) rail double edges on one cross-section generated by PhotoScan.
Figure 9. Generated 3D model of railway track with simulated structured light (SSL) profile.
Figure 10. Right rail track 3D model without (a) and with (b) SSL profile.
Figure 11. Penetration maps of the 3D models: basic model (BM; green color) in relation to the profile model (PM) on the left (a) and right (b) rail.
Figure 12. Discrepancies between cross-sections on the 3D models (BM and PM) on the left (a) and right (b) rail.
Figure 13. Scheme of the proposed image-based method for railway track inspection: (1) measurement environment (configuration of six cameras and reference template); (2) reference template; (3) arrow indicating the measurement direction; (4) concept of a measuring trolley (TGMT) with the implemented method; (5) concept of a measuring platform with the implemented method; (6) possible placement of control points for geo-localization.
Table 1. Parameters and results of image processing.

| Processing Parameters & Results | RealityCapture | PhotoScan |
|---|---|---|
| Processing time | 22 min 20 s | 327 min 2 s |
| Point cloud coverage area | 6.99 m² | 6.99 m² |
| No. of used tie points | 84,728 | 59,527 |
| No. of cloud points | 5,147,830 | 25,980,265 |
| Point density per m² | 1,739,132 | 3,717,550 |
| Model coverage area | 2.96 m² | 2.96 m² |
| No. of model faces | 10,083,505 | 3,078,302 |
| No. of model vertices | 5,063,102 | 2,321,412 |
| StDev s_x′y′ on CrlP | 0.59 pix | 0.39 pix |
| RMS ΔXY Dev on CrlP | 0.213 mm | 0.280 mm |
| RMS ΔZ Dev on CrlP | 0.075 mm | 0.048 mm |
| RMS ΔXYZ Dev on CrlP | 0.226 mm | 0.284 mm |
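As a quick consistency check on Table 1, the 3D RMS deviation on the control points equals the quadratic sum of its planar (XY) and height (Z) components. A minimal Python sketch, with the values copied from the table:

```python
import math

# RMS deviations on control points (CrlP) from Table 1, in mm
rms_xy = {"RealityCapture": 0.213, "PhotoScan": 0.280}
rms_z = {"RealityCapture": 0.075, "PhotoScan": 0.048}

# 3D RMS = sqrt(RMS_XY^2 + RMS_Z^2), rounded to 0.001 mm
rms_xyz = {k: round(math.hypot(rms_xy[k], rms_z[k]), 3) for k in rms_xy}
print(rms_xyz)  # {'RealityCapture': 0.226, 'PhotoScan': 0.284}
```

Both results match the RMS ΔXYZ row of the table.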
Table 2. Track gauge (G) and cant (C) measurement results on the 3D rail track basic model. All values in mm; differences are caliper minus model value (ΔG_RC = G − G_RC, ΔC_RC = C − C_RC, ΔG_PS = G − G_PS, ΔC_PS = C − C_PS). "2 edges" marks the cross-section where PhotoScan reconstructed a double rail edge (see Figure 8); no differences were computed there.

| No. of Cross-Section | Caliper PL G | Caliper PL C | G_RC | C_RC | G_PS | C_PS | ΔG_RC | ΔC_RC | ΔG_PS | ΔC_PS |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1444.01 | 7.19 | 1444.6 | 7.1 | 1444.61 | 7.6 | −0.59 | 0.09 | −0.60 | −0.41 |
| 2 | 1444.01 | 7.19 | 1443.63 | 7.6 | 2 edges | 2 edges | 0.38 | −0.41 | – | – |
| 3 | 1444.00 | 7.18 | 1444.66 | 7.5 | 1442.19 | 7.6 | −0.66 | −0.32 | 1.81 | −0.42 |
| 4 | 1444.00 | 7.17 | 1444.12 | 7.1 | 1443.68 | 7.5 | −0.12 | 0.07 | 0.32 | −0.33 |
| 5 | 1444.00 | 7.16 | 1444.09 | 7.3 | 1441.39 | 7.3 | −0.09 | −0.14 | 2.61 | −0.14 |
| 6 | 1443.99 | 7.16 | 1444.65 | 7.5 | 1443.88 | 6.9 | −0.66 | −0.34 | 0.11 | 0.26 |
| 7 | 1443.99 | 7.15 | 1444.53 | 7.4 | 1443.68 | 7.0 | −0.54 | −0.25 | 0.31 | 0.15 |
| 8 | 1443.99 | 7.16 | 1444.36 | 7.5 | 1442.83 | 6.9 | −0.37 | −0.34 | 1.16 | 0.26 |
| 9 | 1444.00 | 7.16 | 1443.97 | 6.9 | 1443.72 | 6.8 | 0.03 | 0.26 | 0.28 | 0.36 |
| 10 | 1444.00 | 7.17 | 1443.91 | 7.4 | 1442.28 | 6.9 | 0.09 | −0.23 | 1.72 | 0.27 |
| RMS Δ | | | | | | | 0.43 | 0.27 | 1.29 | 0.30 |
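The RMS values in the last row of Table 2 are root-mean-square values of the per-cross-section differences. A short Python sketch reproducing the RealityCapture gauge column from the tabulated values:

```python
import math

def rms(diffs):
    """Root mean square of a list of residuals (n in the denominator)."""
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Caliper reference gauge vs. RealityCapture gauge (mm), cross-sections 1-10
g_ref = [1444.01, 1444.01, 1444.00, 1444.00, 1444.00,
         1443.99, 1443.99, 1443.99, 1444.00, 1444.00]
g_rc = [1444.60, 1443.63, 1444.66, 1444.12, 1444.09,
        1444.65, 1444.53, 1444.36, 1443.97, 1443.91]

dg = [ref - rc for ref, rc in zip(g_ref, g_rc)]
print(round(rms(dg), 2))  # 0.43
```

The same formula reproduces 0.27, 1.29 and 0.30 for the remaining difference columns (for PhotoScan, over the nine cross-sections with a valid reconstruction).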
Table 3. Track gauge (G) and cant (C) measurement results on the 3D rail track models. All values in mm; BM: basic model, PM: profile model; differences are caliper minus model value (ΔG_BM = G − G_BM, ΔC_BM = C − C_BM, ΔG_PM = G − G_PM, ΔC_PM = C − C_PM).

| Caliper PL G | Caliper PL C | G_BM | C_BM | G_PM | C_PM | ΔG_BM | ΔC_BM | ΔG_PM | ΔC_PM |
|---|---|---|---|---|---|---|---|---|---|
| 1444.00 | 7.18 | 1444.50 | 7.38 | 1443.74 | 7.80 | −0.50 | −0.20 | 0.26 | −0.62 |
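The difference columns in Table 3 are simply the caliper reference minus the corresponding model value; a minimal Python sketch for the tabulated row:

```python
# Table 3 row (mm): caliper reference vs. basic model (BM) and profile model (PM)
g, c = 1444.00, 7.18        # caliper gauge and cant
g_bm, c_bm = 1444.50, 7.38  # basic model
g_pm, c_pm = 1443.74, 7.80  # profile model

diffs = [round(ref - val, 2) for ref, val in
         [(g, g_bm), (c, c_bm), (g, g_pm), (c, c_pm)]]
print(diffs)  # [-0.5, -0.2, 0.26, -0.62]
```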

Gabara, G.; Sawicki, P. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds. Sensors 2018, 18, 791. https://doi.org/10.3390/s18030791