**3. Materials and Methods**

A pragmatic research design was adopted with several sequential and inter-linked steps to develop the VPA tool, as summarized in Figure 1. It employs a combination of visuals, public opinion and observations to enlist visual pollution objects (VPOs). These methods have been used in several studies to investigate public preferences [3,25,29,35,49–52]. A carefully selected panel of experts was engaged to group, rank and weight VPOs using the Analytic Hierarchy Process (AHP), a widely tested approach to handling subjectivity [30,48,53]. The AHP findings were then arranged in the form of a VPA scorecard. To measure the reliability of the tool, it was applied to locations with diverse land-uses and the inter-rater reliability of the tool was calculated. Figure 1 summarizes the employed methods, adopted processes and obtained outputs that led to the final visual pollution assessment tool. Details of each stage are given below.

First, a list of VPOs was prepared from various sources, including a literature review, personal observations, a public survey and a university-based photo competition. In the literature, only a few VPOs are frequently listed, primarily outdoor advertisements and billboards. Through a public survey (available at https://goo.gl/forms/LjKobwAK9m1wUBZc2), 107 participants were therefore asked what they find visually annoying in their neighborhood or the urban fabric around them. Similarly, a photo competition was arranged among students of urban planning to identify and capture VPOs in their surroundings.

The second step was to determine the size of the expert panel and identify its members. The literature indicates that the size of an expert panel for AHP studies may vary from just a few people to large groups, depending upon the nature of the problem and the availability of experts. Generally, AHP does not require many interviews, as results stabilize after a few responses [54]. Furthermore, when the availability of experts is limited, many studies have presented results with smaller panels: n = 5 [55,56], n = 7 [57], n = 18 [58] and n = 25 [59]. In our case, a group of 20 professionals (each with 10 or more years of experience) was selected to help with the grouping, ranking and weighting of the VPOs. By means of variability sampling, it was ensured that due representation was achieved from the various stakeholders: five members were selected from urban-planning-related academia, three from the city district government, three from development authorities, two from cantonments of the armed forces, five from private urban planning consultancies, and two from civil society with a history of expressing concern on urban planning matters. In addition to this thematic diversity, the experts were drawn from various cities of Pakistan representing a variety of cultural and urban contexts. This diversity was particularly helpful, since the experts were asked to record opinions based on their mental images of urban areas.

**Figure 1.** Visual Pollution Assessment (VPA) Tool Development Process.

In the third step, the experts classified the listed VPOs into 10 groups based on the similarity of the objects. For example, billboards, signboards, advertisement banners, posters and streamers were clubbed into the group 'outdoor advertisements'. Similarly, out-of-proportion building structures, irregular building facades and an uneven skyline were grouped into 'architecturally poor structures'. The key reason for this classification was to reduce the number of VPO groups that had to be compared when assigning ranks and weights based on their contribution to visual pollution at a site; without this categorization, it would not have been possible to handle the larger number of VPOs in AHP. AHP was employed for the ranking and weighting of the VPO groups to reduce the subjectivity associated with the measurement of VPOs, as it allows the decision maker to consider objective and subjective factors when assessing the relative importance of each VPO group through pairwise comparison.

Since there were 10 VPO groups, a 10 × 10 matrix was formed. The AHP template by Goepel [60] was adopted for the compilation of results; because the matrix size and the number of panel experts were reasonably large, a commercial spreadsheet application (Microsoft Excel) was used for this compilation. To capture their opinions, each member was asked to provide (1) a ranking of the VPO groups and (2) a pairwise comparison of all VPO groups based on each group's contribution to visual pollution.
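When several experts judge the same pair of VPO groups, their answers must be consolidated into a single matrix entry before weights can be derived. The sketch below is a minimal illustration of the geometric-mean aggregation that is standard in group AHP (and used by Goepel-style templates); the judgment values are invented for illustration and are not the study's data.

```python
import math

# Hypothetical Saaty-scale judgments from five experts for one pair,
# e.g. "outdoor advertisements vs. wall chalking". A value of 1/3
# encodes that this expert judged the SECOND item moderately more severe.
judgments = [5, 3, 4, 1 / 3, 5]

# Aggregation of individual judgments (AIJ): the geometric mean of the
# n judgments becomes the consolidated matrix entry for this pair.
consolidated = math.prod(judgments) ** (1 / len(judgments))

print(round(consolidated, 3))  # consolidated entry for the group matrix
```

The geometric mean (rather than the arithmetic mean) is used because it preserves the reciprocal property of AHP matrices: aggregating the reciprocal judgments yields exactly the reciprocal of the aggregated value.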

Each criterion (VPO group) was then compared with every other criterion by means of a pairwise comparison on the 9-point Saaty scale. Every member was thoroughly trained on the process and the definitions of the scale values. To do so, a series of dedicated sessions was conducted with the experts, where practical examples were discussed in detail alongside the scale and ranking criteria. For example, in a typical comparison, the expert decides which of outdoor advertisement (A) and wall chalking (B) is the bigger contributor to visual pollution. Suppose the expert selects A; the next question is then how much more A contributes to visual pollution than B on a scale of 1–9, where 1 means equally severe, 3 moderately more severe, 5 strongly more severe, 7 very strongly more severe and 9 extremely more severe, with the values 2, 4, 6 and 8 representing intermediate judgements. Figure 2 presents a screenshot of the Excel sheet where pairwise comparisons are recorded and used to generate the comparison matrix automatically.
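The pairwise comparisons described above populate a reciprocal matrix from which AHP derives the group weights. The sketch below shows the standard principal-eigenvector method and Saaty's consistency check on a made-up 3 × 3 matrix (the study's actual matrix was 10 × 10 and was processed in the Goepel Excel template, not in code); the group labels and judgment values are illustrative assumptions only.

```python
import numpy as np

# A[i][j] = how much more group i contributes to visual pollution than
# group j on the 1-9 Saaty scale; A[j][i] holds the reciprocal.
A = np.array([
    [1.0, 3.0, 5.0],    # e.g. outdoor advertisements (illustrative)
    [1 / 3, 1.0, 2.0],  # e.g. wall chalking (illustrative)
    [1 / 5, 1 / 2, 1.0],  # e.g. a third VPO group (illustrative)
])

# Priority weights: the principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# where RI is Saaty's random index (0.58 for n = 3). Judgments are
# conventionally accepted when CR < 0.10.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58

print(np.round(w, 3), round(CR, 4))
```

With these illustrative judgments the first group receives the largest weight and the consistency ratio falls well below the 0.10 threshold, i.e. the matrix is acceptably consistent.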

**Figure 2.** Screenshot of the AHP sheet showing the capture of pairwise comparisons and the formation of a comparison matrix for one expert. A similar process was adopted to capture the inputs of all panel members.

Parallel to the ranking and weighting of the VPOs, the characteristics of each type of VPO were listed and rubrics for these characteristics were prepared. At this stage, panel members identified, for each VPO, those characteristics that have a direct relationship with the visual impact the VPO generates, and rubric values were defined against each characteristic. Next, intra-group VPOs were weighted and, finally, the objects were arranged in the form of a scorecard to be used in the field (see Figures 7 and 8).

Finally, the validation and reliability of the VPA tool were assessed through an inter-observer/inter-rater reliability (IRR) analysis (also called inter-observer agreement) based on a pilot study. The reliability analysis determined the extent to which the scale's results remain consistent when recorded by different observers. For the IRR analysis, the tool was piloted at 20 distinct locations with different land-uses in Lahore (the second largest city of Pakistan) by a group of five trained observers (Figure 3). Twenty locations were selected to ensure coverage of different combinations of land-uses (residential, commercial, open spaces, public buildings, etc.) and land-use activity intensities. Each location was a road junction with three or more legs. The observer was positioned at the centre of the junction, or a similarly appropriate spot with a 360° view of the location, to record VPOs on the tool. The observers were final-year students of an urban and regional studies undergraduate programme who had been thoroughly trained in identifying VPOs and assessing their characteristics. Each observer completed the VPA exercise for all 20 locations, so that 5 assessments (filled VPA scorecards) were available for each location; notably, the VPA tool requires capturing 205 values to represent the characteristics of the different VPOs. Subsequently, responses were analyzed in ten observer pairs and the agreement within each pair was calculated to obtain a percentage-agreement-based IRR.
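The percentage-agreement computation over observer pairs can be sketched as follows. This is a toy illustration, not the study's data: it uses three observers and six scorecard items per site, whereas the study paired five observers (ten pairs) over 205 values per site; the observer names and scores are invented.

```python
from itertools import combinations

# Hypothetical scorecard values recorded by each observer at one site.
scores = {
    "obs1": [2, 1, 0, 3, 1, 2],
    "obs2": [2, 1, 1, 3, 1, 2],
    "obs3": [2, 0, 0, 3, 1, 1],
}

def percent_agreement(a, b):
    """Share of items on which two observers recorded identical values."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Every unordered observer pair (5 observers -> 10 pairs in the study).
pairs = list(combinations(scores, 2))
agreements = {(p, q): percent_agreement(scores[p], scores[q]) for p, q in pairs}
overall = sum(agreements.values()) / len(agreements)

for pair, agr in agreements.items():
    print(pair, round(agr, 3))
print("overall IRR:", round(overall, 3))
```

Averaging the pairwise agreements gives a single percentage-agreement IRR for the site; repeating this per location (as in the pilot) shows whether the scorecard yields consistent readings across observers.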

**Figure 3.** Map showing the distribution of sites for piloting of VPA tool and IRR analysis (each blue numbered dot represents one site).
