*2.2. Results*

Signers across cohorts differed in the number of signs they used to refer to objects. The CTSL-1 signers produced a total of 122 sign strings involving a total of 204 signs across all strings (repetitions excluded). The CTSL-2 signers produced 123 sign strings involving a total of 274 signs. The most frequent strings overall were single signs (38.7%, e.g., CAR, SPOON, MATCHES, or GAME CARDS), followed by two-sign (36.3%, e.g., BREAD BOARD, GLASS, VIDEO CAMERA, or POT), three-sign (13.9%, e.g., COLOGNE), and four-or-more-sign strings (11%, e.g., COOKING POT). Overall, the CTSL-1 signers used significantly shorter strings of signs (*M*CTSL-1 = 1.78, *SD*CTSL-1 = 0.62) than the CTSL-2 signers (*M*CTSL-2 = 2.25, *SD*CTSL-2 = 0.79) (*t*(25) = 2.60, *p* = 0.019). In addition, the CTSL-1 signers produced significantly more single-sign responses than the CTSL-2 signers (χ2(1) = 7.83, *p* = 0.0051). In other words, the CTSL-2 signers relied more on combinatorial strategies than on single signs (Figure 2).
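The cohort comparisons above rest on two standard tests: an independent-samples *t*-test on mean string length and a chi-square test on counts of single-sign versus multi-sign responses. As a minimal sketch of how such tests are run, the snippet below uses SciPy with invented placeholder data (the per-response string lengths and the contingency counts are hypothetical, not the CTSL data; only the procedure is meant to correspond).

```python
# Hedged illustration of the reported tests; all numbers are placeholders.
from scipy import stats

# Hypothetical per-response string lengths for each cohort
ctsl1_lengths = [1, 1, 2, 2, 1, 3, 2, 1, 2, 2]
ctsl2_lengths = [2, 3, 2, 3, 1, 2, 3, 4, 2, 3]

# Independent-samples t-test comparing mean string length across cohorts
t_stat, p_t = stats.ttest_ind(ctsl1_lengths, ctsl2_lengths)

# Chi-square test on a 2x2 table of response types (hypothetical counts):
# rows = cohort (CTSL-1, CTSL-2); columns = single-sign, multi-sign
table = [[55, 67],
         [30, 93]]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"chi2({dof}) = {chi2:.2f}, p = {p_chi:.4f}")
```

Note that with a 2×2 table, `chi2_contingency` applies Yates' continuity correction by default; whether the original analysis did so is not stated in the text.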

**Figure 2.** Distribution of sign strings. The *Y*-axis represents the proportional frequency of responses involving sign strings on the *X*-axis. The blue bars represent CTSL-1, and the orange bars represent CTSL-2 (Ntotal = 245, NCTSL-1 = 122, NCTSL-2 = 123).

While there was a difference in the number of signs and the lengths of strings, there were no differences in the implementation of the iconic strategies across cohorts. In single-sign strings (Nsingle-sign = 95), the favored strategy was *action* (46.3% of instances), followed by the simultaneous production of *action* and *object* (35.8%) and *object* (17.9%) strategies. Among the remaining signs produced in multi-sign strings (Ntotal = 383), a slightly different pattern emerged: the favored strategy was *action* (37% of instances), closely followed by the *object* (34.6%), simultaneous *action* and *object* (17%), and *deictic* (7.8%) strategies. BOX, COLOGNE, CAR, CARDS, MOTOR VEHICLE, SIEVE, and SOAP frequently elicited components involving action-based iconic strategies, while COOKING POT, GLASSES, PLATE, and STOVE elicited components involving object-based strategies, and BROOM, CELLPHONE, FORK, GLASS, MATCHES, etc. frequently elicited components involving the simultaneous use of the object- and action-based strategies (Figure 3).

In two-sign strings (Ntwo-sign = 89), *action* and *object* were equally favored (34.1% of instances each), followed by the simultaneous *action* and *object* (19.9%) and *deictic* (10.2%) strategies. The most common combination in two-sign strings involved an *object* strategy combined with an *action* strategy irrespective of their ordering (e.g., BREAD BOARD, GLASS, TEAPOT, or POT) or with a simultaneous *action* and *object* strategy to further disambiguate the target object (e.g., FORK or COPPER VESSEL). The other combinations involved strategies such as action1—action2, object1—object2, action and object—deictic, etc., with no significant difference across cohorts in either the implementation of iconic strategies or the ordering of constituents in two-sign strings (Figure 4).

**Figure 4.** Combination of strategies used in two-sign strings. The *Y*-axis represents the proportional frequency of responses involving the combination of iconic representations on the *X*-axis (Ntotal = 88, 35.9% of all strings). The categories represent the constituents irrespective of their order (i.e., the bar for action + object also includes object + action combinations).

For instance, to refer to GLASS, the signers tended to reenact drinking (*action*) and then use an *object* sign denoting the dimensions of the target object (Figure 5). For CELLPHONE, they reenacted talking on the phone (*action*) simultaneously with an object sign representing the phone, and then they used an *object* sign representing the size of the object (Figure 6).

**Figure 5.** (**a**) Stimulus item used in the task. (**b**) Reenactment of drinking (*action*). (**c**) Size or dimensions of the target object (*object*). The *action* and *object* combination depicted in (**b**,**c**) refers to a GLASS.

**Figure 6.** (**a**) Stimulus item used in the task. (**b**) Cellphone (*object*) depicted simultaneously with the reenactment of talking on the phone (*action*). (**c**) Size of the cellphone (*object*). The simultaneous *object* and *action* plus *object* combination depicted in (**b**,**c**) refers to a CELLPHONE.

#### *2.3. Summary and Conclusions*

The goal of study 1 was to investigate the developmental trajectory of an emerging lexicon in a language in its initial stages. The results show that the CTSL-1 signers produced significantly shorter responses and more single-sign strings when labeling everyday objects than the CTSL-2 signers, who produced more combinatorial responses, suggesting that the language became morphologically more complex over time. There were no significant differences across cohorts in the implementation of iconic strategies. The most common strategy produced by both cohorts across the entire task was *action*, followed by *object* and the simultaneous implementation of the *action* and *object* strategies. In two-sign strings, *action—object* was the most frequent combination for both cohorts, followed by combinations of *object* with the simultaneous *action* and *object* strategy. These findings corroborate previous studies suggesting that action simulations are the precursors of iconic forms in a manual lexicon (e.g., Cook and Tanenhaus 2009; Hostetter and Alibali 2008; Ortega and Özyürek 2020).

The same types of iconic forms were already present in CTSL-1, suggesting that they emerged quickly in the first generation of the language, whereas their combinatorial use did not emerge until CTSL-2. In line with findings in other emerging sign languages (e.g., ABSL), more established sign languages (e.g., ASL), and homesign systems, some lexical items were produced as compounds, whereas others elicited longer idiosyncratic sign strings (e.g., Tkachman and Sandler 2013; Klima and Bellugi 1979; Haviland 2013; Morgan 2015; Ergin et al. 2021). Going beyond these previous findings, this study shows that the lexical items became more combinatorial and morphologically complex as of CTSL-2.

This study provided insight into the emerging lexicon of a newly developing language. However, it was limited in that the target objects were presented in isolation, without context. This may have elicited longer descriptions of the objects rather than shorter labels.

#### **3. Study 2**

Building upon the findings of study 1, study 2 investigated the emerging lexicon of CTSL in further detail with a new set of everyday objects presented in isolation and in context. The goal of this study was two-fold: (1) to replicate the findings in study 1 and (2) to investigate whether there were any similarities or differences between labeling everyday objects when they were presented in isolation vs. in context.

#### *3.1. Materials and Methods*

**Participants.** Eight deaf signers from two successive age cohorts (four CTSL-1 signers: *M*age = 48.7, age range = 44–54; four CTSL-2 signers: *M*age = 40.5, age range = 35–44) were tested. All signers used CTSL as their primary and only means of communication. The CTSL-1 signers were the older siblings of the CTSL-2 signers. The signers in studies 1 and 2 were the same individuals.

**Stimuli and Procedure.** Deaf CTSL signers were paired with a deaf or hearing addressee fluent in CTSL. They were tested in two consecutive tasks. (1) As in study 1, the signers performed a *picture-naming task* for images of 16 everyday objects (Table 2) depicted in isolation (see Supplementary S3 for the images of the objects). Semantic categories such as tools and fruits and vegetables were intentionally avoided so as not to create a bias toward the *object* and simultaneous *object* and *action* strategies (see Hwang et al. 2017 and Ergin and Brentari 2017). The participants viewed the images on a computer screen and labeled them to an addressee. (2) The signers performed a *communicative task* in which they viewed short video clips involving the same objects (Table 2) and described the event in each clip to an addressee, who then selected the corresponding picture from an array of three pictures (see Supplementary S4 for a sample trial in the task). All data were collected in August 2014.

**Table 2.** List of objects used in study 2.


The stimulus items in task 2 involved a human agent performing a non-prototypical action on the target objects (Table 3). The rationale behind using non-prototypical actions was to minimize object incorporation into prototypical actions, which is a potential bias toward the simultaneous use of the object and action strategies (i.e., objects paired with actions like "reading a book", "drinking from a bottle", "putting on a jacket, hat, or dress", "pouring tea from a teapot", etc. were intentionally avoided). Three stimulus items (i.e., a washing basin, a plastic bag, and a box) served their prototypical function as containers: they were not directly acted upon by an agent but instead held objects that were acted upon by a human agent.

**Table 3.** List of contexts used in study 2.


**Coding Procedure.** The coding procedure was the same as in study 1 (see Section 2.1).
