Integrated UAV photogrammetry and automatic feature extraction for cadastral mapping

Nov 2022

The principal objective of this research is to investigate the applicability of UAV photogrammetry integrated with automatic feature extraction for cadastral mapping. We present the first part of the paper in this issue; the concluding part will be published in the next issue.

Oluibukun Gbenga Ajayi

Department of Land and Spatial Sciences, Namibia University of Science and Technology, Namibia

Emmanuel Oruma

Federal University of Technology, Minna, Nigeria

Abstract

The applicability of integrated Unmanned Aerial Vehicle (UAV) photogrammetry and automatic feature extraction for cadastral or property mapping was investigated in this research paper. The multiresolution segmentation (MRS) algorithm was implemented on a UAV-generated orthomosaic, and the findings were compared with the results obtained from a conventional ground survey using Hi-Target Differential Global Positioning System (DGPS) receivers. The overlapping image pairs acquired with a DJI Mavic Air quadcopter were processed into an orthomosaic using Agisoft Metashape software, while the MRS algorithm was implemented for the automatic extraction of visible land boundaries and building footprints at different scale parameters (SPs) in eCognition Developer software. The results show that the performance of MRS improves as the SP increases, with optimal results for the extraction of building footprints obtained when the SP was set at 1000 (completeness, correctness, and overall accuracy of 92%, 95%, and 88%, respectively). A cost and time analysis shows that the integrated approach is 2.5 times faster and 9 times cheaper than the conventional DGPS approach. In addition, the automatically extracted boundaries and areas of land parcels were compared with the survey plans produced using the ground survey (DGPS) approach, and about 99% of the automatically extracted spatial information of the properties falls within the range of acceptable accuracy. These results prove that the integration of UAV photogrammetry and automatic feature extraction is applicable to cadastral mapping and offers significant advantages in terms of project time and cost.

Introduction

Unmanned Aerial Vehicles (UAVs), popularly known as drones, are aerial vehicles or aircraft controlled remotely by a human operator or autonomously by an onboard computer (http://nesac.gov.in/uav-applications/). They use a combination of Global Positioning System (GPS) navigation technology and model aircraft technology to provide fast and affordable mapping services (Barnes et al., 2014). These autonomously flying systems are typically equipped with a variety of navigation, positioning and mapping sensors, such as still and video cameras (Manyoky et al., 2011). Global Navigation Satellite System (GNSS)-enabled UAVs have prospective application for quicker, more accurate, and less costly remote data collection than piloted aerial vehicles (Ajayi et al., 2018).

The growing use of UAVs for photogrammetric mapping in aerial surveys is unprecedented. Because UAVs are relatively cheap, and given the increasing global need for access to information on land-based properties as a basis for resource planning, growth and control (Barnes et al., 2014), the utilization of UAVs in land administration and cadastral mapping is fast gaining attention and critical investigation. Using UAVs to map customary lands, urban lands, etc., high-resolution cadastral maps can easily be produced within a short time. The system is fast and easy to use, and it produces comprehensible plot representations as opposed to polygons with no graphic background. The high-resolution orthomosaics generated from the UAV-acquired 2D overlapping images allow the user to identify features that guide property identification and mapping, such as footpaths, fields of crops, and building footprints like walls, edges, or other identifiable features.

It is very important to update information about land boundaries so that changes in ownership and property division can be documented promptly. One of the benefits of using aerial imagery is that it provides a historical record of places, which may be revisited later to see what changes have occurred. Archived images can thus provide useful evidence where there are conflicts over parcel boundaries. In contrast, classical approaches to land and property surveying are time-consuming and require a great deal of effort. In remote areas, particularly mountainous ones, and when the weather is harsh, it is sometimes very difficult to carry out such surveys. In such situations, aerial photographs can be used as an alternative to classical survey methods for the acquisition of spatial information, since most measurements can be performed in the office or remotely (Eisenbeiss, 2009). Owing to its rapid development over the past few years, the UAV is now employed as a data acquisition platform for the extraction of spatial information of land-based properties (creation and update of cadastral maps), though predominantly through manual delineation of visible cadastral boundaries (Karabin et al., 2021).

Rijsdijk et al. (2013) investigated the usefulness of UAVs in the juridical verification process of cadastral borders of ownership at Het Kadaster, the national land registration service and mapping agency in the Netherlands, using AscTec Falcon 8 and Microdrone MD-4 drones. Also, Crommelinck et al. (2016) and Karabin et al. (2021) presented overviews of different case studies investigating the applicability and potential of UAVs for cadastral mapping and boundary delineation. However, most of the documented case studies deal with manual boundary delineation, with very few attempts to proffer solutions to the problem of automatic delineation of cadastral boundaries, which has recently been gaining wide attention with the advent of machine learning and computer vision. Since a large number of property boundaries are assumed to be visible, as they coincide with natural or man-made objects or features such as building footprints (Zevenbergen and Bennett, 2015; Crommelinck et al., 2017), they are potentially extractable automatically (Jazayeri et al., 2014) using computer vision methods and algorithms that detect object contours in images.

The aim of this research is to investigate the applicability and efficiency of implementing multi-resolution segmentation (MRS), an object-based approach, at different scale parameters (SPs), for the automatic extraction of visible cadastral boundaries (depicted by building footprints) from UAV images.

Imagery-based boundary detection: A review

In recent years, research efforts have shown how image-based cadastral mapping is being explored to acquire and modify data quickly and cost-effectively. Manual digitization for image-based boundary detection and delineation was performed in early practice (Manyoky et al., 2011; Ali and Ahmed, 2013; Parida et al., 2014), and the results affirmed that more landed properties can be mapped in less time using high-resolution imagery. Research has also recently shown that image processing and computer vision offer new opportunities to replace manual approaches with automated ones. Babawuro and Beiji (2012) detected field boundaries from satellite images using Canny edge detection and morphological operations, connecting the segmented boundaries with the Hough transform. Though some boundaries were not accurately captured by the method, the findings showed that implementing machine vision and integrating it with cadastral mapping brings substantial benefits such as reductions in personnel and human effort. Nyandwi (2018) used object-based image analysis (OBIA) to extract cadastral parcels using multi-resolution segmentation and a chessboard approach. An object-based approach refers to the extraction of object outlines based on clusters of pixels with similar characteristics and is applied to high-level features which represent shapes in an image (Crommelinck et al., 2016). The approach was tested using pan-sharpened WorldView-2 imagery at an urban site and a rural site in Rwanda. For precision measurement, reference lines were given a tolerance of 5 m. The method obtained an accuracy of 47.4% and completeness of 45% in rural areas. The authors argued that the findings were counterintuitive in urban areas and that the recovery of residential parcels is difficult for machine vision because the spectral reflectance of roofs, gardens, and fences in such areas varies significantly.

Puniach et al. (2018) posited that an orthomosaic and digital surface model (DSM) generated from UAV-acquired images can be used for updating and maintaining a cadastre. The findings of their research further affirmed that the UAV approach can produce significantly better results when multiple ground control points (GCPs) are used, compared to the results obtained from a GNSS survey in Real-Time Kinematic (RTK) mode.

Also, Wassie et al. (2018) implemented a mean-shift segmentation algorithm (a QGIS open-source plugin used for segmentation of objects) to delineate cadastral boundaries. Using WorldView-2 images with a resolution of 0.5 m, three rural areas in the Amhara Region of Ethiopia were used as test sites. The buffer widths from the reference boundary were 0.5 m, 1 m and 2 m, and the results showed 16.3%, 24.7% and 34.3% overall accuracy, respectively, for the first selected area (SI1). The extractions obtained with mean-shift segmentation are closed object boundaries in vector format and were found to be topologically correct. The mean-shift segmentation was applied to the full extent of the satellite images, while some of the automatic object extraction methods were also applied to UAV images (Fetai et al., 2019). They also affirmed that almost 80% of the extracted visible boundaries were adjudged correct when the buffer overlay technique was applied, which shows the potential of cadastral mapping based on UAV imagery.

Furthermore, Luo et al. (2017) investigated the extraction of cadastral boundaries in urban areas from airborne laser scanner data, designing a semi-automatic workflow with automatic feature extraction followed by post-refinement. Objects such as buildings and roads were segmented in the automatic extraction process using the Canny edge detector, alpha-shape, and skeleton algorithms, while objects such as fences were delineated using centerline-fitting techniques. Since not all extracted artefacts were cadastral boundaries, the post-refinement process involved manual support: visual interpretation was adopted for the extraction of useful line segments, and gaps between line segments were filled manually. With a tolerance of 4 m from the reference boundaries, the designed workflow achieved a completeness of 80% and a correctness of 60%. In addition, Crommelinck et al. (2017) studied the transferability of the globalized probability of boundary (gPb) contour detection technique to UAV images for the automatic detection and extraction of contours for UAV-based cadastral mapping. Their investigation, using three UAV orthoimages of different extents showing rural areas in Germany, France and Indonesia, showed that the approach is transferable to UAV data and automated cadastral mapping, obtaining completeness and correctness rates of up to 80%.

In order to speed up the process of establishing, maintaining and updating cadastres, Fetai et al. (2019) investigated the potential of high-resolution optical sensors on UAV platforms for cadastral mapping, using the feature extraction (FX) module of ENVI for data processing. The findings of the study show that about 80% of the extracted boundaries were adjudged correct, while emphasizing the importance of filtering the extracted boundary maps to improve the results. The described image processing workflow shows that the approach is mostly applicable when the UAV orthoimage is resampled to a larger ground sample distance (GSD).

Also, Crommelinck et al. (2020) developed a methodology that automatically extracts and processes candidate cadastral boundary features from UAV data, consisting of two state-of-the-art computer vision methods, namely gPb contour detection and simple linear iterative clustering (SLIC) superpixels, as well as a classification step assigning costs to each outline according to local boundary knowledge. The methodology also included a procedure for subsequent interactive delineation: a user-guided delineation that calculates least-cost paths along previously extracted and weighted lines. The approach was tested on visible road outlines in two UAV datasets obtained from rural areas in Amtsvenn and Gerleve (Germany), processed using Pix4Dmapper, and the results show that all roads could be delineated comprehensively. Compared to manual delineation, the automatic approach reduced the number of clicks per 100 m by up to 86%, while obtaining a similar localization quality.

Fetai et al. (2020) investigated the applicability of the U-Net architecture, a symmetric network containing two parts which give it a U-shaped structure, for the automatic detection of visible boundaries from UAV images captured with a DJI Phantom 4 drone. The architecture was designed in Python and implemented in the high-level neural network API Keras (François et al., 2015) running on the TensorFlow library, while training on the BSDS500 datasets (which were concatenated to increase the flexibility of the validation split and the number of training samples) was done through Google Colaboratory, which provided a stronger GPU, more memory, and efficient calculations; the evaluation metrics of the trained model indicated 0.95 overall accuracy. While the average percentage of correctly detected visible boundaries was estimated to be almost 80% for the tiled UAV images, the study found that automatic boundary detection using U-Net is applicable mostly to rural areas where the visibility of the boundaries is continuous (Luo et al., 2017). The model was evaluated by monitoring the loss and accuracy on the training and validation data: binary cross-entropy was used as the loss function, while overall accuracy was adopted as the evaluation metric.

Using Pleiades images, Taha and Ibrahim (2021) compared the performance of random forest (RF) with other classifiers, such as support vector machines (SVM), maximum likelihood, and back-propagation neural networks, in the automatic extraction of building lines or footprints. Results of the assessment showed that RF, SVM, maximum likelihood, and back propagation yielded overall accuracies of 97%, 93%, 95% and 92%, respectively, proving that RF outperformed the other classifiers. The completeness and correctness of the footprints extracted using RF also indicated that it can accurately classify 100% of the buildings.

While different algorithms have been implemented for automatic boundary delineation or property-line extraction, most such experiments have been conducted on satellite or other remotely sensed images. There is relatively sparse evidence of past documented efforts aimed at implementing object-oriented automatic feature extraction algorithms on UAV-acquired images for property or cadastral mapping, as presented in this research.

Edge Detection Techniques

Edge detection is one of the most important operations in machine vision, computer vision, and image processing and analysis (Zhang et al., 2013; Selvakumar and Hariganesh, 2016). The goal of edge detection is to extract information about object boundaries within an image by detecting discontinuities or abrupt changes in brightness. Edges provide the image’s boundaries between different regions, and the boundaries obtained are useful for recognizing objects within an image, for segmentation, and for matching purposes (Chai et al., 2012). Edge detection has also attracted enormous attention in medical imaging such as MRI (Giuliani, 2012; Aslam et al., 2015), ultrasound (Chai et al., 2012), CT (Punarselvam and Suresh, 2011; Bandyopadyay, 2012) and X-ray images (Lakhani et al., 2016), in road mapping analysis (Sirmacek and Unsalan, 2010; Qui et al., 2016), and in other applications, including the enhancement of noisy satellite images (Jena et al., 2015; Gupta, 2016).

Image segmentation and edge detection are the predominantly used algorithms for semi-automatic or automatic boundary delineation (Crommelinck et al., 2016). Segmentation refers to partitioning images into disjointed regions, where the spectral characteristics of the pixels are similar to each other (Pal and Pal, 1993). On the other hand, edge detection algorithms detect edges in brightness and colour as sharp discontinuities (Bowyer et al., 2001).
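As a concrete illustration of the edge-detection side of this distinction, the following minimal sketch flags pixels whose Sobel gradient magnitude exceeds a threshold. It is a pedagogical stand-in written in plain Python, not the detector used in any of the cited studies:

```python
def sobel_edges(image, threshold):
    """Mark pixels whose Sobel gradient magnitude exceeds `threshold`.

    `image` is a 2-D list of brightness values; returns a same-sized
    2-D list of 0/1 edge flags (border pixels are left as 0).
    """
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical Sobel responses
            gx = (image[y-1][x+1] + 2*image[y][x+1] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y][x-1] - image[y+1][x-1])
            gy = (image[y+1][x-1] + 2*image[y+1][x] + image[y+1][x+1]
                  - image[y-1][x-1] - 2*image[y-1][x] - image[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges
```

Run on a small image whose left half is dark and right half bright, the interior pixels along the brightness step are flagged as edges, which is exactly the kind of sharp discontinuity the detectors discussed above respond to.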

Object-oriented image analysis has become an important topic in the field of image processing and interpretation. The basic idea behind this approach is to segment an image into parcels, extract features from the segmented parcels, and then complete the image interpretation by classifying those features. The major advantage of object-oriented image analysis is that it deals with parcels, not pixels. These parcels are objects, which provide abundant features and spatial knowledge for the analysis (Aplin et al., 1999). Object-based image analysis methods aggregate similar pixels to obtain homogeneous objects, which are assigned to a target class.

The basic concept of creating an image object is to merge adjacent pixels so that heterogeneity is minimized while acceptability to human vision is maintained. Recently, the implementation of the MRS algorithm has been gaining wide attention across different applications. Munyati (2018) implemented MRS for the delineation of savannah vegetation boundaries, obtaining an overall mapping accuracy of 86.2%. He also posited that the successful delineation of the savannah vegetation communities indicated that pre-segmentation analysis of potential objects’ variance-based texture can provide guidance on the parameter values to be specified for the inherently iterative MRS. Chen et al. (2019) developed an approach for MRS parameter optimization and evaluation for very high resolution (VHR) remote sensing images based on mean nanoscience and quantum information (meanNSQI). Their experiments achieved a discrepancy measure of 85% accuracy, which proved that the segmentation parameter optimization and quality evaluation given by meanNSQI and the discrepancy measure are reliable.

Furthermore, Kohli et al. (2017) investigated the applicability of MRS and the estimation of scale parameter (ESP) tool, an object-based approach, for the extraction of visible cadastral boundaries from high-resolution satellite images (HRSI). A pixel-based accuracy assessment method was adopted, and the quality of feature detection and extraction in terms of errors of commission and omission was 75% and 38%, respectively, for MRS, and 66% and 58%, respectively, for ESP. Within a 41-200 cm distance from the reference boundaries, localization qualities of 71% and 73% were obtained for MRS and ESP, respectively. The results showed that it is difficult to achieve a balance between a high percentage of completeness and correctness, and the authors concluded that the resulting segments can potentially serve as a base for further aggregation into tenure polygons using participatory mapping.

Multi-resolution segmentation

MRS is a region-based (area-fusing) image segmentation algorithm (Witharana and Civco, 2014) that begins with each pixel forming an entity or region (Baatz and Schape, 2000). It is often used as a general segmentation algorithm in remote sensing applications (Neubert et al., 2007) because of its ability to generate image objects with greater geographical significance and strong adaptability (Martha et al., 2011).

The merging criterion for MRS is local homogeneity, which describes the similarity between adjacent objects. The fusion process stops when no potential fusion still meets the homogeneity requirement. MRS relies on several parameters: image layer weights, the scale parameter (SP), shape, and compactness. Image layer weights define the importance of each image layer to the segmentation process. For the experiment reported in this paper, equal weights were apportioned to the three layers of the input image (Red, Green and Blue, RGB) to ensure a more regular shape of the merged parcels, because apportioning different weights to the image layers would cause uneven segmentation, which would affect the regularity of the shapes. The most important parameter in MRS is the scale, which controls the average image object size (Dragut et al., 2014). A larger SP allows higher spectral heterogeneity within the image objects, hence allowing more pixels within one object. Defining the appropriate SP is critical to the successful implementation of the MRS algorithm, and an attempt has been made in this research to explore the effect of different SPs on the automatic extraction of visible cadastral boundaries and building footprints, with a view to identifying an appropriate SP for automatic feature extraction in cadastral mapping.
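The effect of the scale parameter can be illustrated with a deliberately simplified one-dimensional region-merging sketch: adjacent segments merge while the increase in spectral heterogeneity (here, the growth in squared deviation from the segment mean) stays below the squared scale value, so a larger scale yields fewer, larger objects. This is only an analogy to MRS’s spectral term; eCognition’s actual implementation also weighs shape and compactness:

```python
def merge_1d(values, scale):
    """Greedy 1-D region merging controlled by a scale parameter.

    Each segment is tracked as [sum, sum_of_squares, count]; the merge
    cost is the increase in the sum of squared deviations from the mean.
    Returns the mean value of each resulting segment.
    """
    segs = [[v, v * v, 1] for v in values]

    def sse(s):
        # sum of squared deviations from the segment mean
        return s[1] - s[0] * s[0] / s[2]

    merged_any = True
    while merged_any:
        merged_any = False
        for i in range(len(segs) - 1):
            a, b = segs[i], segs[i + 1]
            cand = [a[0] + b[0], a[1] + b[1], a[2] + b[2]]
            cost = sse(cand) - sse(a) - sse(b)  # heterogeneity increase
            if cost < scale ** 2:
                segs[i:i + 2] = [cand]
                merged_any = True
                break
    return [round(s[0] / s[2], 2) for s in segs]
```

With a brightness profile such as [10, 11, 10, 50, 51, 50], a scale of 5 collapses the values into two segments, while a scale of 0.5 leaves every pixel as its own object, mirroring how a larger SP admits more pixels per image object.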

Merging Criterion

A merge cost function that integrates spectral and shape heterogeneity is designed to guide the merging of parcels. The use of shape is to make the outlines of the merged parcels more regular. The merging cost function takes the form of Eq. (1) (Baatz and Schäpe, 2000):
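Eq. (1) itself is not reproduced in this version of the article. For reference, the fusion cost in Baatz and Schäpe (2000) is commonly written as a weighted combination of the spectral (colour) and shape heterogeneity changes; the notation below is a standard rendering and may differ slightly from the original Eq. (1):

```latex
f = w \,\Delta h_{\mathrm{color}} + (1 - w)\,\Delta h_{\mathrm{shape}},
\qquad
\Delta h_{\mathrm{shape}} = w_{\mathrm{cmpct}}\,\Delta h_{\mathrm{cmpct}}
    + (1 - w_{\mathrm{cmpct}})\,\Delta h_{\mathrm{smooth}}
```

where $w$ is the user-defined colour weight and $w_{\mathrm{cmpct}}$ the compactness weight; a candidate merge is accepted only while $f$ remains below the squared scale parameter.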

Materials and methods

Study area

The study area selected for this study is popularly known as Kuje Low-Cost, a residential estate located in Kuje Area Council in Nigeria’s Federal Capital Territory (FCT), Abuja. The mapped area covers 224,000 square meters (22.4 hectares). It lies within the boundary of Northings 982 870.00 mN to 982 300.00 mN and Eastings 306 000.00 mE to 306 700.00 mE. The principal characteristics that informed the choice of this study area are the configuration of the terrain, which is neither too rough nor too gentle, and the fact that the area is a planned, urbanized zone with land parcels delineated by visible linear features and clear building footprints. Figure 1 depicts the map of the study area.

Data acquisition and processing

Site reconnaissance was first conducted to identify suitable locations for the establishment of ground control points (GCPs) and check points (CPs) in the study area. During this process, the points or stations identified for the GCPs and CPs were pre-marked. DroneDeploy, a flight planning application, was used to design the flight plan used for the image data acquisition.

GCPs are very important for georeferencing the images and for qualitative analysis of the positional data. Stations pre-marked for the establishment of GCPs were properly fixed and established using Hi-Target DGPS receivers to acquire their positional data. A total of eight (8) ground stations were established within the study area, of which five (5) were used as CPs while the remaining three were used as GCPs. Markers were used to mark the points defining the established stations, a sample of which is shown in Figure 2b. For the GNSS data acquisition, two Hi-Target DGPS receivers were used, one serving as the base (mounted on the base station) while the other served as the rover (moving through the pre-marked ground stations), with the rover spending a minimum of 20 minutes of occupation time on each station. The point datasets (coordinates) recorded on the receivers were imported into Carlson with AutoCAD 2012 for further processing and plotting. Table 1 presents the date and time of observation, the standard deviations of the coordinates (Nrms, Erms and Zrms), and the position dilution of precision (PDOP) values, which describe the error caused by the relative position of the GPS satellites, for each of the established ground control stations. An elevation mask of 5° was uniformly applied to all the control points, and a uniform antenna height of 1.881 m was adopted for the measurements.

Also, a total of 785 overlapping images were captured at a flying height of 70 m using the DJI Mavic Air’s onboard camera, which has an integrated 12-megapixel CMOS sensor with an f/2.8 lens and a 35 mm-equivalent focal length of 24 mm for shooting high-quality photos and videos.

All acquired images were processed using the Agisoft Metashape digital photogrammetric software. The processing workflow included importing the photos into the software environment, aligning the imported photos, importing the GCPs, camera calibration, generation of the dense point cloud, generation of the Digital Surface Model (DSM), and generation of the orthomosaic. The ground sampling distance of the generated orthomosaic is 1.44 cm/pix.

The orthomosaic generated from the photogrammetric software was imported into ArcMap 10.5 and eCognition Developer for further processing and analysis, including the digitization of the land boundaries on the orthomosaic (to aid the positional data comparison) and the implementation of the MRS algorithm.

Automation process

The MRS algorithm was implemented in eCognition (version 9) for the automatic extraction of the visible land boundaries. MRS is a region-merging technique that starts from each pixel forming one image object or region (Baatz and Schape, 2000), which implies that improperly defined parcel boundary lines are difficult to extract automatically with this algorithm. This difficulty can, however, be overcome if the parcel boundary is defined by visible linear features such as fence lines or building footprints that are not covered by shadows or canopies, though the technique gives better results for building lines, and the results are influenced by the choice of SP. The SP is the most important parameter in the implementation of this algorithm because it controls the average image object size (Dragut et al., 2014). It is a subjective measure that controls the degree of heterogeneity within an image object (Dragut et al., 2010). Each SP can be generated independently at the pixel level, or within a hierarchy where a parent-child relationship exists between the levels. For this study, the ESP tool, which builds on the idea of the local variance (LV) of object heterogeneity within a scene, was used for scale parameter estimation within the eCognition image processing software (Dragut et al., 2010; Kohli et al., 2018).

Since the key control for the MRS algorithm is the SP, five (5) different SPs were tested in order to obtain the optimal SP for the automatic extraction of parcels. The SP values were set at 150, 400, 500, 700 and 1000 for the five experiments, with the shape and compactness parameters held constant at 1.5 and 0.8, respectively.

Accuracy assessment

For the orthomosaic accuracy assessment, the positional data (coordinates) of the CPs were extracted from the produced orthomosaic. The Easting, Northing and Height (XYZ) components of the coordinates were compared with the GNSS-acquired coordinates of the same CPs. The differences between the GNSS-acquired coordinates and the coordinates extracted from the UAV-produced orthomosaic were estimated and used for the computation of the change in planimetric coordinates using equation (6), which is the Euclidean distance formula.
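Equation (6) is not reproduced in this part of the article, but the check it describes, the Euclidean (planimetric) distance between the GNSS and orthomosaic coordinates of each CP, can be sketched as follows. The function names are illustrative, not from the paper:

```python
import math

def planimetric_error(gnss_en, ortho_en):
    """Horizontal displacement between the GNSS-observed position of a
    check point and the position extracted from the orthomosaic.
    Both arguments are (Easting, Northing) tuples in metres."""
    d_e = gnss_en[0] - ortho_en[0]
    d_n = gnss_en[1] - ortho_en[1]
    return math.hypot(d_e, d_n)  # Euclidean distance in the E-N plane

def rmse(errors):
    """Root-mean-square of the per-check-point displacements."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

For example, a check point surveyed at (306000.00 mE, 982870.00 mN) but extracted at (306000.03 mE, 982869.96 mN) has a planimetric displacement of 0.05 m; aggregating such displacements over all CPs with `rmse` summarizes the orthomosaic’s horizontal accuracy.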

Accuracy assessment of the automatic feature extraction

Since the output of the automatically extracted features is in vector format, the strategy adopted for the accuracy assessment was an object-based approach using the buffer overlay method, which was also the method adopted by Fetai et al. (2019). The object-based approach consists of a matching procedure that is twofold (Heipke et al., 1997): first, it yields those parts of the extracted data which are supposed to be boundaries, roads, building footprints, etc., and which correspond to the reference data; second, it shows which parts of the reference data indeed correspond to the extracted data.

In the first step, both networks (extracted and reference data) are split into short pieces of equal length, after which a buffer of constant predefined width (buffer width = 150 cm) is constructed around the reference property data, and the parts of the extracted data lying within this buffer are considered matched. The percentage of the extracted property data found within the buffer around the reference data is referred to as correctness, and its optimum value is 1 (i.e. 100%). Following the notation of McGlone and Shufelt (1994) and CMU (1997), the matched extracted data is denoted true positive, with length TP, affirming that the extraction algorithm has indeed found property data. The unmatched extracted data is denoted false positive, with length FP, because the extracted property-line hypothesis is incorrect, while the unmatched reference data is denoted false negative (FN).

In the second step, matching is performed the other way round. The buffer is now constructed around the extracted property data, and the parts of the reference data lying within the buffer are considered matched. The percentage of the reference data which lies within the buffer around the extracted data is known as completeness; it represents the percentage of the reference property data that was correctly extracted, and its optimum value is also 1 (Heipke et al., 1997).
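A point-sampled sketch of this two-way buffer matching, assuming both networks have already been split into short pieces represented by sample points (the function name and the point-sampling simplification are ours, not the paper’s):

```python
import math

def matched_fraction(target_pts, buffered_pts, buffer_width):
    """Fraction of `target_pts` lying within `buffer_width` of some
    point in `buffered_pts`.

    Calling it with (reference, extracted) approximates completeness;
    with (extracted, reference) it approximates correctness.
    """
    def near(p):
        # is p inside the buffer drawn around the other network?
        return any(math.dist(p, q) <= buffer_width for q in buffered_pts)
    return sum(1 for p in target_pts if near(p)) / len(target_pts)
```

With a 0.15 m buffer, a reference point that has no extracted point within that distance stays unmatched and counts against completeness, exactly as an unmatched reference piece counts as FN above.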

In order to assess the accuracy of the features automatically extracted using the MRS algorithm, the completeness and correctness of the extraction at each of the tested SPs were computed using equations 8 (a and b) and 9 (a and b), respectively, while the overall accuracy (quality) was estimated using the expression in equation (10):

where TP is the true positives, FP the false positives, and FN the false negatives (Galarreta et al., 2015), while Cp is the completeness and Cr the correctness. OA is the overall accuracy, which describes the “goodness” of the extraction; also referred to as a measure of quality, it takes into account both the completeness of the extracted data and its correctness (Heipke et al., 1997).
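Equations (8)-(10) are not shown in this part of the article, but the standard buffer-overlay formulas they refer to (Heipke et al., 1997) can be sketched directly from TP, FP and FN, here treated as matched/unmatched lengths:

```python
def extraction_quality(tp, fp, fn):
    """Completeness, correctness and overall accuracy (quality) of an
    extraction, computed from the matched/unmatched lengths TP, FP, FN."""
    completeness = tp / (tp + fn)   # share of the reference recovered
    correctness = tp / (tp + fp)    # share of the extraction that is valid
    quality = tp / (tp + fp + fn)   # overall accuracy ("goodness")
    return completeness, correctness, quality
```

As a purely hypothetical illustration, lengths of TP = 920, FP = 48 and FN = 80 metres would yield completeness, correctness and quality near the 92%, 95% and 88% reported in the abstract for the SP = 1000 run; these input values are invented for the example, not taken from the paper’s data.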

Estimation of project execution time and cost

In order to estimate the project execution time for the two methods, the entire project was subdivided into different components and the time expended on each component was recorded. The cost of each project component was also estimated by direct costing. While the approximate project time was documented in number of days, the project cost was estimated in Nigerian Naira. At the time of executing this research project (December 2019), the US Dollar exchanged for an average of 362.61 Naira. This was also around the time the minimum monthly wage of Nigerian workers was increased from 18,000 Naira to 30,000 Naira by the Nigerian government.

The paper was first published in Advances in Geodesy and Geoinformation, Vol. 71, No. 1, article No. e19, 2022, and is republished with the authors’ permission. Copyright The Author(s) 2022. Open Access: this article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

To be concluded in the next issue
