**1. Introduction**

#### *1.1. Iconic Representations in Manual Systems*

The affordances of the body and hands allow iconic representations of linguistic information in manual systems. Non-arbitrary form-meaning mappings to real-world entities are a ubiquitous property of sign languages that can be observed at many levels of linguistic organization (e.g., Klima and Bellugi 1979; Emmorey 2014; Lepic and Padden 2017; Perniss et al. 2010; Padden et al. 2013; and Taub 2001). For example, iconicity plays a role in a large proportion of a signed lexicon (Pietrandrea 2002); the path, manner, and location of a sign are frequently iconic (Senghas et al. 2004); information delivery in the event structure, such as telicity, can be iconic (Wilbur 2003); and the way sign languages use or encode space can have iconic motivations (e.g., Padden 2016; Perniss 2007; and Vermeerbergen 2006).

**Citation:** Ergin, Rabia. 2022. Emerging Lexicon for Objects in Central Taurus Sign Language. *Languages* 7: 118. https://doi.org/10.3390/languages7020118

Academic Editors: Wendy Sandler, Mark Aronoff and Carol Padden

Received: 12 November 2021; Accepted: 28 March 2022; Published: 11 May 2022

The non-arbitrary nature of iconic forms may seem straightforward for perception and production. However, such forms are not readily accessible to individuals with no prior experience with communication in the manual modality (e.g., Klima and Bellugi 1979; Ortega et al. 2017; and Pizzuto and Volterra 2000). Taub (2001) proposed that there are at least several sub-processes essential to inventing an iconic form: recognizing associations between concepts and a variety of sensory (visual, auditory, and kinesthetic) images, selecting a candidate image representative of the target concept, and schematizing and encoding the selected image using a phonologically valid linguistic form. From among many possible candidates as a representative image of the target concept, *which iconic forms are recognized as more salient for selection?*

Previous investigations in the lexicons of manual systems (improvised gestural systems and emerging and established sign languages) presented evidence for systematic variation in the use of *action-based* or *object-based* iconic forms across semantic categories and event structures. These iconic forms have been discussed under a variety of terms so far, focusing on the representational role of the hands, either representing the agentive role of the signer or a salient property of the referent (e.g., Brentari et al. 2012; Ergin and Brentari 2017; Meir et al. 2013; Müller 2013; Müller et al. 2013; Ortega and Özyürek 2020; Padden et al. 2013, 2015; and Supalla 1982). Among the iconic modes of representation defined in the manual modality are, for example, *acting*, depicting how objects are manipulated (Müller 2013), and *object*, depicting entities through the shape, dimensions, or outline of an object with no action representations involved (Padden et al. 2013). Padden et al. (2013) further elaborated on the iconic patterning of lexical signs for handheld man-made artifacts ("tools") by dividing them into two groups—*handling*, representing an agent manipulating the target tool by handling it, and *instrument*, representing the manipulated tool itself—and presenting evidence for distinct iconic patterning across semantic categories in the use of these iconic forms. For example, "hammer" in American Sign Language (ASL) is expressed with a *handling*-type handshape showing how a hammer is grasped, along with the typical downward repeating movement depicting the canonical action associated with this object. "Toothbrush" in ASL is expressed with an *instrument*-type handshape with the index finger extended while the hand moves sideways back and forth near the mouth, as in the action of brushing one's teeth. Padden et al.
(2013) reported that in response to stimuli involving the images of common objects such as clothes, utensils, cosmetic products, and tools, the signers of ASL, Al-Sayyid Bedouin Sign Language (ABSL), and New Zealand Sign Language (NZSL) tend to produce instrument strategies more frequently than handling strategies, whereas American and Bedouin non-signing gesturers display the opposite pattern. These findings indicate that, *first*, despite speaking different languages and living in different regions of the world, non-signing gesturers display similar cognitive tendencies, and the types of iconic strategies they use systematically differ from the ones used by signers. *Second*, instrument forms as iconic strategies may be an important linguistic tool to expand the lexicon of sign languages by adding more handshape distinctions, as opposed to the gestural ones produced in an improvised fashion. Similar to the findings of Padden et al. (2013), in a cross-linguistic analysis of a total of eight established and emerging sign languages, including Central Taurus Sign Language (CTSL), Hwang et al. (2017) reported recurring patterns for naming entities, even if they individually varied in imagistic form: *handling* and *instrument* forms (both involving a manipulative action) are used for tools, whereas *object* forms (i.e., static forms with no action involved) are more often used for fruits and vegetables. Hou (2018) reported a similar grouping of iconic strategies for tools and foods in San Juan Quiahije Chatino Sign Language. Tools, as a category of stimuli, seem to strongly elicit forms exhibiting human agency, whereas this is less prevalent in semantic categories like fruits and vegetables.

Other studies on emerging sign languages report systematic variation across event structures. For example, Ergin and Brentari (2017) reported that CTSL signers favor *object* strategies depicting the form of an object over *handling* strategies depicting an action associated with the target object in non-agentive contexts, as opposed to agentive ones. When the object is acting on its own or not acting at all in a non-agentive context, such as "The lollipop is on the table", CTSL signers tend to use object-based iconic strategies (i.e., object handshapes) to represent the form of a lollipop. When the object is being acted upon by an agent, as in "The man puts the lollipop on the table", they tend to use action-based iconic strategies (i.e., handling handshapes) more frequently. Moreover, Ergin and Brentari (2017) reported that the use of these strategies may evolve over time in that CTSL in its first generation<sup>1</sup> favored handling strategies over object strategies, but as of the second generation, it evolved into a system favoring object strategies over handling strategies. Using the same stimuli, Goldin-Meadow et al. (2015) also reported a systematic opposition between non-agentive and agentive contexts in the use of *object* vs. *handling* strategies by Nicaraguan homesigners, the cohort 1 and cohort 2 signers of Nicaraguan Sign Language (NSL), and a group of American Sign Language (ASL) signers. All four groups, including homesigners, used object-based iconic strategies almost exclusively in non-agentive contexts and used handling strategies more frequently in agentive contexts, suggesting that systematically varying morphological constructs are fundamental properties of language that appear under a variety of environmental conditions. Another important finding of this study is that the consistency in the use of these iconic handshape types was greater for the ASL and NSL signers than for the homesigners. In other words, individuals using a sign system shared with others are more consistent in the type of iconic strategies they use across agentive vs. non-agentive contexts than those using a non-shared system. In addition, as in the case of CTSL, Goldin-Meadow et al. (2015) reported generational differences in the use of these iconic strategies. NSL cohort 2 and the ASL signers produced more handling handshapes than object handshapes in their predicates in agentive contexts, in contrast to NSL cohort 1 and the homesigners, which suggests that these iconic strategies may evolve and stabilize over time as a system matures.

An important finding of the previous studies is that sign languages exhibit cross-linguistic differences in terms of iconic patterning. For instance, in a comprehensive study conducted on 11 sign languages, Nyst et al. (2021) reported cross-linguistic differences in the use of handling vs. object strategies in response to the images of 10 common objects. Adamorobe, Nanabin, and Ghanaian Sign Language exhibit a preference for object handshapes. Ivory Coast, Malian, and Portuguese Sign Language exhibit a preference for handling handshapes. Kenyan, Ethiopian, Guinea-Bissau, and Boukako Sign Language, as well as the Sign Language of the Netherlands, form a middle group without a strong preference for either handshape<sup>2</sup>. In addition, sign languages may display differences in the developmental paths they take. For example, while CTSL begins with handling strategies and evolves into a system favoring object strategies over time, NSL follows the opposite path (Ergin and Brentari 2017 and Goldin-Meadow et al. 2015, respectively). In sum, despite showing certain shared tendencies across semantic categories (i.e., foods elicit object-based iconic forms and tools elicit action-based iconic forms), languages also display language-specific tendencies, leading to variation.

The patterning of iconic forms across semantic categories and event structures is not only a property of emerging and established signed lexicons. Recent evidence from the improvised gestures of hearing adults shows alignments between sign languages and gestural communication in that there are systematic variations in the use of iconic gestural forms, possibly shaped by similar cognitive tendencies. For example, Schembri et al. (2005) detected similar movements and locations in the manual productions of non-signing Australians and signers of Australian Sign Language in response to a task involving classifier predicates of motion, but their choices of handshapes differed significantly. In addition, in a pantomime generation task in which participants were asked to produce gestures for written words presented on a computer screen, Ortega et al. (2017) showed that Dutch speakers' gestures share varying degrees of form overlap with the signs of the Sign Language of the Netherlands (full, partial, or no overlap). Moreover, hearing participants guessed the meanings of signs with full and partial overlap more accurately, and they assigned these signs higher iconicity ratings than signs with no overlap. These findings suggest that deaf and hearing adults converge in their iconic depictions for some concepts (e.g., TO-CUT, TO-SAW, or LAPTOP), possibly as an outcome of shared conceptual knowledge and the shared manual-visual modality. Furthermore, Ortega and Özyürek (2020) found systematicity in the implementation of iconic strategies in the gestural forms of various concepts. They showed that action-based iconic forms (i.e., *acting*) reenacting the motion of the action associated with the target object were favored to refer to manipulable objects, whereas object-based forms such as recreating the form of an object with the hand (i.e., *representing*) and tracing its shape with the hands (i.e., *drawing*) were favored to refer to the static state and non-manipulable nature of an object, respectively.

In addition, several previous studies argued that action simulations are the precursors of manual iconic forms (Cook and Tanenhaus 2009 and Hostetter and Alibali 2008), with some recent empirical support for action-based iconic representations as the building blocks of an emerging lexicon in the manual modality. For example, Ortega and Özyürek (2020) presented evidence for an overwhelming tendency to use action-based forms, implying that *acting* might be a building block of an emerging lexicon in the manual modality. Similarly, Ortega et al. (2014) claimed that action-based iconic forms are developmental milestones in the language acquisition process and presented evidence that action-based signs are favored more in child-adult interactions and object-based (perceptual) signs are favored more in adult-adult interactions.

In sum, these findings provide insight into the systematic tendency to use certain *action-based* or *object-based* iconic features to refer to certain types of referents and into possible pathways for iconic forms to become linguistic tools over time in the manual modality. Specifically, the findings in favor of the dominance of action-based iconic forms in gestural productions are intriguing in that they trigger further questions regarding the perception of real-world referents and the invention of iconic forms representing them. *Are object-based iconic forms or action-based iconic forms recognized as more salient for selection for iconic representations? What forms the building blocks of an emerging lexicon in the manual modality?*

#### *1.2. Combination of Signs and Iconic Strategies*

Using multi-sign strings such as compounds in order to distinguish concepts across semantic categories is a common property of sign languages (e.g., BLUE ˆ SPOT for "bruise" in ASL (Klima and Bellugi 1979)). Evidence from emerging sign languages indicates that this mechanism is present in the initial stages of a language and that some combinations of signs used for object descriptions are systematic (e.g., Ergin et al. 2021; Meir et al. 2010; and Tkachman and Sandler 2013). Ergin et al. (2021) reported that CTSL signers frequently use multi-sign strings to refer to entities from various semantic categories (e.g., everyday objects, utensils, and fruits and vegetables). While some of these multi-sign descriptions are relatively conventionalized compounds (e.g., TEA ˆ ONE-ON-ANOTHER for "teapot"), others have the flavor of idiosyncratically longer descriptions (e.g., TEA ˆ POUR-FROM-HANDLE ˆ ONE-ON-ANOTHER, FLAME ˆ PUT-ON ˆ ONE-ON-ANOTHER). When expressing a systematic compound<sup>3</sup>, CTSL seems to follow a certain pattern in sequencing its constituents: TEA, an action-involving iconic constituent delivering information about the function of the object, frequently precedes the constituent signaling the static form or the size or shape of the target object (ONE-ON-ANOTHER) (Figure 1).

Similar results have been reported in Israeli Sign Language (ISL) and Al-Sayyid Bedouin Sign Language (ABSL): the constituents involving the size or shape (i.e., static form) information of the target object occupied the final positions in the compounds (e.g., CHICKEN ˆ OVAL-OBJECT for "egg" in ABSL or LIPSTICK ˆ SMALL-OBJECT for "lipstick" in ISL). However, in its initial stages, when a language does not have a conventionalized lexical item for a referent, longer descriptions become inevitable (e.g., WRITE ˆ ROW ˆ MONTH ˆ ROW ˆ WRITE for "calendar" in ABSL) (Meir et al. 2010; and Sandler et al. 2011). Similarly, Tkachman and Sandler (2013) reported a high tendency in both ISL and ABSL to produce compounds and longer sign strings in response to picture stimuli of unfamiliar objects which did not have a conventionalized lexical item in ISL or ABSL. Morgan (2015) also found that some compounds in Kenyan Sign Language such as BLACK ˆ PEAR-SHAPE ("avocado") display a systematic order among their components, but other multi-sign strings involve longer sequences with constituents in variable orders and with some items repeated multiple times.

**Figure 1.** (**a**) A Turkish teapot. (**b**) The sign for TEA. (**c**) The sign for ONE-ON-ANOTHER.

The findings from Zinacantec Family Homesign (Z) show that compounding is present even in a first-generation language. In response to the picture of a chicken, Z signers first use a size and shape specifier depicting how a Zinacantec typically handles a chicken, thereby demonstrating its size and shape, followed by an action depicting how Zinacantecs kill a chicken: a quick jerk to break its neck (Haviland 2013, p. 321). Haviland (2013) also reported that, despite having conventionalized lexical items such as CHICKEN, Z signers are not always consistent. For example, for a small SLEDGEHAMMER, a Z signer may produce multi-sign strings starting with a handling handshape showing how a hammer is held, which also indicates the size of the target object, followed by a pounding action and ending with four full vertical strokes. On another occasion, in response to the picture of two ordinary hammers, the same Z signer may produce three distinct vertical pounding movements.

To sum up, using more than one sign or word to refer to real-world referents is a ubiquitous feature of natural languages. Evidence from emerging sign languages and homesign systems suggests that this feature springs up quickly in the initial stages of a language. While rarely used everyday objects elicit idiosyncratically longer sequences of constituents (e.g., see Ergin et al. 2021 for "gas tank" variants in CTSL), more frequently used objects (e.g., "teapot" in CTSL) tend to elicit shorter sign strings or systematically ordered compounds. Whether there are generational differences in the combinatorial use of sign strings to refer to everyday objects and whether presenting stimuli in isolation vs. in context affects the combinatorial structures remain open questions.

#### *1.3. The Focus of This Study*

Previous studies mentioned in Sections 1.1 and 1.2 mainly focused on either object-based (i.e., static iconic forms with no action involved) or action-based iconic forms and presented evidence for their systematic variation across semantic categories or distinct event structures. This study aims to investigate *action*, *object*, and the simultaneous use of *action* and *object* as iconic strategies (see the coding procedure in Section 2.1) and their combinations used for referring to everyday objects across generations in the emerging lexicon of Central Taurus Sign Language. The motivation for this investigation is to understand (1) whether a language in its initial stages favors *action*, *object*, or the simultaneous production of *action* and *object* strategies as a more salient property to represent a target object iconically, (2) whether there are generational differences in the use of these strategies and their combinations, and (3) whether signers modulate their use of these strategies and their combinations in response to stimuli presented in isolation vs. in context.

Section 1.4 introduces Central Taurus Sign Language (CTSL). Section 2 presents the design and results of study 1, which investigates CTSL responses when the target objects are presented in isolation. Section 3 presents study 2, which compares the CTSL responses when the target objects are presented in isolation vs. context.

#### *1.4. Central Taurus Sign Language*

Central Taurus Sign Language (CTSL) is a village sign language which emerged spontaneously over the past 50 years or so in the absence of a conventionalized linguistic model. It developed in a geographically isolated area with little or no influence from Turkish Sign Language (TiD). It is mainly used in a small village located in the Central Taurus Mountain Range of southern Turkey. The deaf individuals, comprising approximately 4.6% of the village population, are connected to each other by birth or through marriage (see Supplementary S1 for the family tree). The high incidence of deafness in the village (compared with a typical incidence of deafness of approximately 0.5%) is an outcome of recessive deafness in the community and the prevalence of consanguineous marriages in families with deaf individuals. CTSL has about 25 deaf signers today, 17 of whom use CTSL as their sole language, whereas others can use Turkish Sign Language at varying proficiency levels. In addition, there are approximately 80 hearing Turkish speakers who also have some degree of fluency in CTSL.

In order to track the developmental trajectory of the language, we identify three cohorts of signers in the community. CTSL-1 is the first cohort of signers, who were born as the first deaf child in their family and who therefore would have had little or no linguistic input early in life (n = 9; age range = 49–61). CTSL-2 is the second cohort, comprising the younger deaf siblings of cohort 1 signers. They would have had more linguistic input because they had at least one older sibling who signed (n = 8; age range = 42–54). CTSL-3 is the third cohort of deaf signers from the younger generation: children of CTSL-1 and CTSL-2 signers (n = 4; age range = 24–30) (see Ergin 2017; Ergin and Brentari 2017; Ergin et al. 2018, 2020, 2021). There were also four deaf children who constituted a potential fourth cohort, though their linguistic behavior has not been documented yet.

#### **2. Study 1**

The goal of this study is to investigate object-based and action-based iconic strategies and their combinations across generations when the target objects are presented in isolation.

#### *2.1. Materials and Methods*

**Participants.** Ten deaf signers from two successive age cohorts (5 CTSL-1 signers: *M*<sub>age</sub> = 51.8, age range = 43–55; 5 CTSL-2 signers: *M*<sub>age</sub> = 41.4, age range = 35–44<sup>4</sup>) were tested. All of the participants used CTSL as their sole language, and the CTSL-2 signers were the younger siblings of the CTSL-1 signers.

**Stimuli and Procedure.** The deaf CTSL signers were tested in a *picture-naming task*. They viewed stimuli involving pictures of 26 everyday objects (Table 1) on a computer screen and labeled them for another deaf addressee or a hearing family member fluent in CTSL. A previous investigation of CTSL revealed systematic opposition across semantic categories such as tools and fruits and vegetables, which frequently elicit *handling* or *instrument* (cf. simultaneous *action* and *object* in the current coding scheme) and *object* strategies in CTSL, respectively (Hwang et al. 2017). In order not to create a bias for certain iconic strategies in the cumulative results, these semantic categories were not used in the current stimuli set. Instead, a variety of everyday objects that were not previously studied in CTSL for iconic representations were included in the stimuli set. All of the stimuli items were presented in isolation (i.e., non-agentive context) in a single randomized block (see Supplementary S2 for the pictures of the stimuli items). The data were collected in August 2013.


**Table 1.** List of objects used in study 1.

**Coding Procedure.** The responses to the stimuli were transcribed using ELAN, a tool developed at the Max Planck Institute for Psycholinguistics in Nijmegen for the analysis of spoken language, sign language, and gesture (Crasborn and Sloetjes 2008), and coded<sup>5</sup> based on the following criteria defining the iconic representations of the target stimuli items.

Action: In this strategy, the signer's hand represents a hand performing an action (cf. "acting" by Müller 2013). For example, for CAR, the sign that represents holding the steering wheel with both hands and controlling it by moving the hands up and down in opposite directions is coded as an "action".

Object: The hand represents the object itself or an aspect of the target object, such as its dimensions, size, or shape (Ergin and Brentari 2017 and Padden et al. 2013). There is no motion representing an action. For instance, COOKING POT, a sign involving two static C-handshapes representing the **shape** of the pot, is coded as an "object" sign. A sign representing the **size** of an object, as well as the simultaneous depiction of shape and size, is also coded as an "object" sign<sup>6</sup>. Object signs depicting the size or shape of an object can be either one-handed or two-handed (e.g., GLASS, an L-handshape after DRINK to represent the size of the glass, or STICK, with two extended index fingers showing the length of the stick). Finally, signs involving the hand or hands in any configuration **tracing**<sup>7</sup> the outline of an object are also coded as "object" signs (e.g., BREAD BOARD, with two flat hands moving horizontally outward to trace the surface of a bread board, or DRESS, with the flat dominant hand facing upward and tracing the length of the dress or the length of its sleeves on the signer's body).

Note that the goal was to understand whether it was the object-based or the action-based iconic representations that were more salient for selection as iconic representations. That is why the object category was not divided into further subcategories (e.g., size, shape, and tracing); rather, all static forms depicting a physical property of the target object as a whole or an aspect of it (e.g., size) were evaluated under the "object" category.

Action and Object: In this strategy, action and object strategies are used simultaneously. If it is a one-handed sign, the dominant hand is used either as an instrument or as an agent handling the object and simultaneously performing the action associated with that instrument (e.g., BROOM, with extended widespread fingers representing the object while simultaneously producing a vertical right-to-left movement of the hand representing the sweeping action, or GLASS, where the C-handshape represents handling the object or its shape, and the motion represents bringing the glass to the mouth for drinking) (cf. "instrument" and "handling" by Padden et al. 2013 and "handling" by Ergin and Brentari 2017).

If it is a two-handed sign, both hands are simultaneously used to represent an object with the non-dominant hand and to depict an action performed on that object with the dominant hand (e.g., MATCH, where the dominant hand represents the action of swiping a matchstick, and the non-dominant hand represents the surface of a match box, where the action takes place).

Deictic: Gestures involving showing, pointing, or touching the objects in the immediate physical environment with or without the object present are coded as "deictic" signs (e.g., SCARF, by touching the scarf one is wearing). The pointing can be with an open hand or extended index finger (Kita 2003).

Signs that did not fit any of the categories listed above were coded as "other" (e.g., MATCH, where the signer produces a sign with the index finger and thumb touching each other, but it is not clear whether the signer is referring to the size of the match or holding it).

Repeated signs in a response were ignored. For example, in a sign string like "action1—deictic1—action1—deictic1", the action sign and the deictic sign with the same form and function were repeated and therefore ignored; this response was counted as a two-sign string. Likewise, a string involving "object1—action1—action2—object1" was considered a three-sign string, as the same object sign was repeated at the end.
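This repeat-collapsing rule amounts to counting the distinct sign types in a response. The following is a minimal sketch of that counting step (a hypothetical helper for illustration only; the actual coding was carried out manually in ELAN), assuming each coded response is represented as a list of labels in which identical labels mark repeats of a sign with the same form and function:

```python
def sign_string_length(coded_signs):
    """Count the distinct signs in a coded response, ignoring repeats
    of a sign with the same form and function (i.e., the same label)."""
    return len(set(coded_signs))

# The two examples from the coding procedure:
two_sign = sign_string_length(["action1", "deictic1", "action1", "deictic1"])
three_sign = sign_string_length(["object1", "action1", "action2", "object1"])
print(two_sign, three_sign)  # 2 3
```

Distinct labels such as "action1" and "action2" remain separate signs; only exact repeats collapse, which matches the counting rule above.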
