*3.1. Procedure*

To answer the research questions, a content analysis sheet was designed. The categories of this instrument were defined in the code book. These categories were divided into two large groups: firstly, those related to the contents of the library's website—date of publication, authorship, target, initiative, mediator, action, competences, and link to the action—and, secondly, those corresponding to the type of library—name of the library, type of ownership, and country.

To verify the reliability of the instrument, a pilot was first carried out with experts in the field: researchers, documentalists from private university libraries, coordinators of a network of public libraries, and school librarians, who were given the analysis sheet together with the code book and instructed on how to examine the websites. Five selection criteria were considered [23]: independence, professional competence, research activity, geographic diversity, and level of responsibility. The responses were collected and processed in the statistical software Stata, where Fleiss' kappa coefficient (1971) [24] was applied to assess the robustness of the instrument. This statistic yielded a significant degree of inter-judge agreement [25,26] and was statistically significant, with a *p* value below 0.05. Finally, two researchers were involved in the process of coding the content of 216 libraries.
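For readers who wish to reproduce the agreement check outside Stata, the computation behind Fleiss' kappa can be sketched in a few lines. The sketch below is illustrative only and is not the authors' script: it assumes the coders' responses have already been tallied into a matrix where each row is one website/category item and each cell counts how many judges assigned that item to a given coding category.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a ratings matrix.

    counts: list of rows, one per rated item; counts[i][j] is the number
    of judges who assigned item i to category j. Every row must sum to
    the same number of judges n.
    """
    N = len(counts)                      # number of rated items
    n = sum(counts[0])                   # judges per item (constant by assumption)
    k = len(counts[0])                   # number of coding categories

    # Per-item agreement: proportion of agreeing judge pairs.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N

    # Chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement across two items and two categories gives kappa = 1.
print(fleiss_kappa([[3, 0], [0, 3]]))    # -> 1.0
```

Values near 1 indicate strong inter-judge agreement; values at or below 0 indicate agreement no better than chance.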
