Article

SL: A Reference Smartness Level Scale for Smart Artifacts

Nuno Costa, Nuno Rodrigues, Maria Alexandra Seco and António Pereira
1 Computer Science and Communication Research Centre, School of Technology and Management, Polytechnic Institute of Leiria, Campus 2, Morro do Lena—Alto do Vieiro, Apartado 4163, 2411-901 Leiria, Portugal
2 Department of Mathematics, School of Technology and Management, Polytechnic Institute of Leiria, Campus 2, Morro do Lena—Alto do Vieiro, Apartado 4163, 2411-901 Leiria, Portugal
3 INOV INESC Inovação, Institute of New Technologies, Leiria Office, Campus 2, Morro do Lena—Alto do Vieiro, Apartado 4163, 2411-901 Leiria, Portugal
* Author to whom correspondence should be addressed.
Information 2022, 13(8), 371; https://doi.org/10.3390/info13080371
Submission received: 8 June 2022 / Revised: 27 July 2022 / Accepted: 28 July 2022 / Published: 3 August 2022

Abstract

During the last two decades, many products, research projects and prototypes were announced with different characteristics and capabilities, all adopting the “smart” qualifier word. Since a smartness property cannot be defined simply as true or false, defining a suitable range is not an easy task. This issue led to the proposal of several classification models, frameworks and taxonomies for classifying such projects, but none of them provides a clear and pragmatic smartness scale able to classify smart artifacts and serve as a reference. This paper proposes a smartness scale to help research and non-research communities better quantify and easily understand the features and autonomy of smart artifacts. The proposed smartness scale considers the main function of physical-device components in smart systems. It is based on a uni-dimensional typology that defines 12 different smartness levels, created from our definition of “smart artifact” and by following an evolutionary set of capabilities ranging from traceable-only and sensing-capable artifacts to autonomous, adaptable and self-driven artifacts. To show the feasibility of the proposed smartness scale, an analytic model was defined and applied to several research- and market-based artifacts tagged as smart in order to extract their smartness levels.

1. Introduction

Since the vision of Mark Weiser about ubiquitous computing [1], we have witnessed the proliferation of special applications that merge physical things with the digital world, powered and supported by the Internet of Things (IoT) in association with the traditional Internet of Computers. Part of those applications is tailored for data gathering and device configuration and actuation—possibly focused on specific vertical markets—allowing so-called remote control, monitoring and analytics. The other part of the applications is tailored to assist humans and to bring more comfort to their lives through the creation of so-called smart spaces. Smart spaces, characterized by the employment of Information and Communications Technology (ICT) with specialized human–machine interaction, artificial intelligence and smart artifacts, can learn a user’s habits; hence, they are able to assist the user and anticipate their intent. Smart spaces can be divided into two application domains: (i) smart spaces tailored for humans in general and (ii) smart spaces that are especially tuned for a specific group of humans, such as the elderly, the disabled and people with chronic conditions, etc. These two big smart-space application domains have raised two associated research lines called Ambient Intelligence (AmI) [2] and Ambient Assisted Living (AAL) [3], respectively. Inside smart spaces, everyday artifacts are augmented with computing, communication, sensing and actuation in order to make the artifacts smart. These smart artifacts have the ability to extend ICT to the physical things users deal with every day, every time.
During the last two decades, many products, research projects and prototypes were announced with different characteristics and capabilities, all relying on the same “smart” qualifier word. At the beginning, using the word “smart” to characterize the new capabilities of an augmented everyday artifact was acceptable, but as more and more products and prototypes were released, the word became too broad. These days, the word “smart” is used interchangeably to describe both an artifact with a simple RFID tracking system and a sophisticated artifact with learning and reasoning capabilities.
This brings confusion to consumers and to the readers of scientific literature, such as students, professors and researchers. In lectures, it is common to receive questions from students regarding the different capabilities of augmented artifacts that are all tagged as smart in the same way. The lack of a reference smartness scale to qualify smart artifacts is thus the main motivation for this paper.
This issue is not currently being discussed in the literature. Moreover, the merited work already conducted has not produced results that the community has adopted. We argue that the way to solve this issue cannot rely solely on the identification of smart capabilities [4] and that a clear and effective scale should be defined on which a smart artifact can be indexed in terms of smartness. With this approach, it becomes possible to quickly identify the smartness capabilities of a smart artifact by simply looking at the product’s short description or at an article’s title, wherein the smartness level is stated with a short tag (the letters SL followed by a level number), such as in this hypothetical example:
Original title: “A Smart Connected Physical Mailbox”
Alternative title: “An SL2 Smart Connected Physical Mailbox”
Although only a short tag was added to the original title, it now highlights for the reader (or even for consumers in other situations) the smartness level of the announced mailbox artifact according to a defined scale ranging from SL1 (traceable only) to SL12 (collaborative, autonomous, adaptive and self-driven). The scale is defined later in this work and can be used to state the smartness level of existing smart artifacts (those already developed) or to serve as a guide, in terms of capability requirements, when developing new smart artifacts aiming to reach a specific level of smartness. As this is the first time a smartness scale that can serve as a reference has been proposed, this article also aims to bring more researchers to this discussion.
The rest of the paper is organized as follows. Section 2 describes the related work and presents some existing artifacts tagged as smart. Section 3 presents our definition of “smart artifact” and the role of users’ mobile devices. Section 4 provides our proposal to classify smart artifacts in terms of smartness level and a mathematical model to extract the smartness level of a specific smart artifact. Section 5 shows the effectiveness of our mathematical model by calculating the smartness level of some research-based and market-based smart artifacts. Finally, Section 6 concludes the paper.

2. Related Work

The use of the word “smart” to qualify augmented everyday artifacts and systems with advanced digital capabilities started more than two decades ago (see [5,6,7], etc.) according to the literature indexed in IEEE Xplore, the ACM Digital Library and Springer Link. At that time, the term “smart” meant the inclusion of sensing, processing and connectivity of any form into everyday artifacts. Since then, what we have known as (reactive) smart artifacts has evolved toward cognitive systems, and now, the same qualification is being applied to different artifacts with remarkably different capabilities. For instance, in [8], the author referred to a remote garage door opener as a “simple smart space”, but one could ask how simple it is and why.

2.1. Artifacts Tagged as Smart Artifacts

In [9], the author proposed augmenting a traditional mailbox with technology in order to turn it into a smart mailbox. According to the author, the smart component is associated with the action of notifying the mailbox’s owner about its internal status (whether or not it has mail), apart from making it possible to obtain the smart mailbox state from the internet and from smart speakers. In [10], the authors presented an umbrella augmented with a solar panel in order to power LED lights (during the night) and fans, to charge smartphones, etc., and they refer to it as a smart umbrella. In [11], another smart umbrella was presented that is able to guide the way to one’s destination on rainy days as a way to avoid the direct use of a smartphone and thus avoid the safety problems caused by a dispersed field of view. The physical umbrella was augmented with eight LEDs attached to the umbrella ribs, a three-axis electronic compass and Bluetooth communication; one’s smartphone sends address information to the umbrella, and then a specific LED is turned on at each moment to show the right direction to the umbrella’s owner. The authors of [12] presented a smart helmet tailored for air quality and hazardous event detection for the mining industry. The helmet artifact was augmented with (i) a ZigBee communication interface for multi-hop communication, (ii) an accelerometer sensor to detect bumps, (iii) an infrared sensor to detect when the helmet is off a worker’s head and (iv) air quality sensors. In the case of a hazard or abnormal sensor readings, the helmet flashes its front mining light several times, announcing the alert and the location (in the dark mine environment). ZigBee communication is used for notifying (hop-by-hop) a supervisor or control unit. In [13], a smart fork prototype was presented that is able to detect the food pick-up gesture and food weight. The standard artifact was augmented with an inertial measurement unit, a load cell and a Bluetooth communication interface to report measured data. The authors of [14] presented the concept of a smart dish for automatic checkout (in large-scale canteens) while providing consumers with healthy diet recommendations based on their body condition and dietary nutrition. In this case, the authors decided to tag the catering system as smart. Park et al., in [15], presented a smart chair able to record and visualize a user’s posture through a smartphone application. The physical chair was augmented with pressure and tilt sensors and a Bluetooth interface to report data to a nearby smartphone app. Finally, in [16], the authors designed a 3D-printed smart cup equipped with a wireless RF module and a single accelerometer for detecting drinking events and accurately recognizing a complete episode of drinking activities. These are some examples (among many others) that show the usage of the word “smart” in works and prototypes that expose different features and capabilities. In an attempt to solve this “one size fits all” qualification, the research community has proposed different taxonomies as a way to classify smart artifacts according to different features.

2.2. Existing Works on Classifications Based on Smartness

To the best of our knowledge, this is the first time that a smartness scale and a model to obtain a smartness level, able to serve as a reference, have been proposed. However, several very valuable published works have already proposed different ways to classify smart artifacts according to smartness, although with a purpose other than assigning a clear smartness level to existing or new smart artifacts. Despite differences in terminology across the proposals, all authors, including our team, relate an artifact’s smartness to the artifact’s capabilities, possibly inspired by the authors of [17], who stated that “product smartness consists of the dimensions of autonomy, adaptability, reactivity, multi-functionality, ability to cooperate, human-like interaction, and personality”.
In [4], the author defined a classification matrix that relates four smart capabilities (information processing, internal regulation, action in the world and knowledge acquisition) with five categories of smartness (not smart at all, scripted execution, formulaic adaptation, creative adaptation and unscripted or partially scripted invention). The four capabilities were divided into 23 dimensions; every dimension is a continuous variable extending from not smart at all to extremely smart. However, unlike our proposal, which seeks the assignment of a smartness level, this interesting and valuable work is more tailored for guiding someone to make a device or system smarter.
From the same perspective, in [18], a framework was presented for the design and classification of smart objects based on a multi-dimensional characterization of intelligence (knowledge management, reasoning, learning what, learning how, human–object interaction/object–object interaction and social relations), the aim of which, as stated by the authors, is not to provide a global score of smartness but to use the concept of smartness as a design and classification criterion.
In [19], the authors proposed a classification model for smart objects that is based on core and optional capabilities (e.g., energy harvesting, programming, rule adaption, goal-orientation, etc.), following an evolutionary path, resulting in five capability levels named essential, networked, enhanced, aware and IoT complete. However, the authors’ aim seems to be the definitions of capability levels for smart objects and not the assignment of a smartness level.
Another similar work is presented in [20], wherein the authors propose a multi-layer taxonomy of smart things that comprises 10 dimensions (sensing capabilities, acting capabilities, interaction direction, interaction multiplicity, interaction partner, thing compatibility, data source, data usage, offline functionality and main purpose). For each smart object, each dimension is assigned a percentage indicating the level of support (0% means absence and 100% means full support). The authors conclude the paper by presenting classifications of 50 real-life smart things from the Business-to-Business (B2B) context, highlighting the hit ratio percentage (the sum of all capabilities’ support percentages) for each assessed smart object. However, the authors neither employ the word smartness nor relate the hit ratio to smartness.
With the aim of evaluating the impacts of smart things on business models, the authors of [21] defined four levels of smartness (reactive, adaptive, autonomous and collaborative) and related them to three levels of connectivity (closed system, open system with a restricted protocol and open system with full interoperability). With 12 different unlabeled cells representing different smartness levels, the authors identified four levels of smart things’ impacts on business models (the aim was never the assignment of a smartness level to specific smart things).
Table 1 presents a summary of the existing and most related works, and comparisons are made with the work described in this article.

3. Definition of Smart Artifact

In the scientific literature, multiple terms are used interchangeably to refer to the nodes (sensors and/or actuators) belonging to the physical layer of the IoT architecture, such as “smart object”, “smart thing” and “smart artifact”. This section presents some of those terms’ definitions, highlights the role of connection and proposes the definition that serves as the foundation for the smartness scale presented in Section 4.

3.1. “Smart Object”, “Smart Thing” and “Smart Artifact”

In [22], smart objects are defined as the result of an artifact’s augmentation that may provide continuous access to the artifact’s physical state and context or even embody autonomous behavior.
The authors of [23] define a smart everyday object as an everyday item, such as a chair, book or medicine, that is augmented with active sensor-based computing platforms able to perceive the item’s environment, collect information about the context of a nearby user and collaborate with other objects in the vicinity using wireless technologies.
Kortuem and Kawsar, in [8], define smart objects as autonomous and self-governed objects that operate independently and can collaborate with other objects globally; hence, such objects are able to not only interpret sensor data and make decisions but also communicate and cooperate with each other.
In [24], the authors perceive smart objects as states and related behaviors produced by the transition of states. For this, a smart object must include sensors and actuators as well as a wireless communication interface and behave as a REST server that exposes APIs described both semantically and syntactically.
The authors of [25] state that all smart connectable products from home appliances to industrial equipment share three core elements: (i) physical components (electrical and mechanical parts); (ii) smart components, such as sensors, microprocessors, data storage, software, digital interfaces, etc. and (iii) connectivity components.
In [26], the authors present the definition of a smart object from the perspective of a software paradigm, not related to the physical objects connected to the internet: “A smart object is an object representation that is computationally aware—meaning self-defining and self-reflecting and, possibly, self-modifying/self-adapting”. In relation to this definition, various capabilities are identified, such as object representation, self-definition, self-reflection, self-modification, autonomous operation, understanding (AI), semantic annotation, machine learning, computational awareness and context awareness.
In [4], smart devices and systems are also defined: “Purposefully designed entity X is smart to the extent to which it performs and controls functions that attempt to produce useful results through activities that apply automated capabilities and other physical, informational, technical and intellectual resources for processing information, interpreting information and/or learning from information that may or may not be specified by its designers”. By using this definition and according to the authors, the focus is on information capturing, storing, retrieving, transmitting, manipulating and displaying plus drawing conclusions and creating new knowledge.
The IPSO Alliance ([27]) defines a smart object as a collection of reusable resources that has a well-known object ID and that represents a particular type of physical sensor, actuator, connected object or other data source. In the same way, there are slightly different definitions for smart things.
In [28], a smart thing is defined as a software artifact that can analyze its current state, infer knowledge and monitor possible changes.
In [29], smart things are defined as anything around us with the ability of sensing, processing, communicating and/or actuating, and in [30], they are defined as autonomous physical or digital objects augmented with sensing, processing, acting and network capabilities.
As can be seen, even for the same terms, there are slightly different definitions. Some definitions are more focused on hardware while others are more focused on software, digital representation and API types. As the smartness scale proposed here is specially tailored for physical everyday objects or artifacts with well-known functions, shapes and appearances, we prefer the term “smart artifact” to refer to physically augmented everyday objects.

3.2. The Connectable Role

As can be seen in the previous subsection, all authors enforce a connection requirement for a smart artifact, which at first sight makes sense. However, some odd situations may appear that promote confusion.
Let us consider (i) a flower pot with the ability to measure and report through Bluetooth its soil moisture and (ii) a non-connected fridge with an embedded large and high-resolution display able to alert a user if its door is left open for too long, measure and display its internal occupancy, identify internal items and even alert its manufacturer when it is not functioning well.
If we enforce that being connected is a mandatory requirement, the flower pot is considered a smart object (with a high or low smartness level) while the presented fridge is not, although it has advanced features when compared with the flower pot. This seems like a contradiction, but it can be easily justified if we consider the concept of a smart environment. If a smart home is considered, the connectable ability, as well as cooperation and data sharing, are paramount, while non-connectable artifacts create “smart tight islands or silos” that do not contribute to the whole smart space.
Thus, in our opinion, the connectable capability (either direct or indirect) is mandatory for the adoption of the smartness scale proposed by this paper.

3.3. Our Definition of “Smart Artifact”

The multitude of existing and new smart artifacts makes them inherently different in terms of shape (e.g., smart showers), dimensions (e.g., smart bridges), functions (e.g., smart cars), capabilities and even the identification technology employed (RFID, bar codes, beacons, etc.), but all of the artifacts have a place inside the domain of smart artifacts with a higher or lower smartness level. Thus, instead of requiring that a smart artifact must include sensors, actuators, communication interfaces, etc., we argue that not all smart artifacts must belong to the same level of smartness; there are, or there will be, artifacts with a minimal level of smartness while others can be considered fully autonomous. From this perspective, our definition of a smart artifact is the following:
“A smart artifact is a traceable everyday artifact that is directly or indirectly digitally augmented and connected in order to improve its capabilities or expose new functions.”
The provided definition first demands some sort of identification (mandatory) so that the artifact is traceable. The words directly or indirectly mean that both identification and connection (also mandatory) can be implemented in the artifact itself or by relying on a surrounding infrastructure (which will be able to create the artifact’s virtual representation in the digital world [31]). For instance, a cup tagged with RFID or a bar code meets the minimum requirement of the provided definition, as the infrastructure that includes the RFID reader can detect, identify, localize and report data about the RFID-enabled artifact. Although it is considered a smart artifact, we will later see that its smartness level is the minimum defined on the smartness scale. In contrast, a standard espresso coffee machine augmented with a low-power wireless communication interface, sensors and actuators that is able to recognize a standing person and learn his/her coffee and other habits can improve the experience of obtaining and drinking a coffee. Of course, this second example of an augmented artifact is positioned at the opposite end of the smartness scale.

3.4. The Role of Users’ Mobile Devices

Users’ mobile devices, such as smartphones, have been employed in a myriad of commercial and research application domains ranging from monitoring to diagnoses, which is a clear signal of the strength, versatility and flexibility of these personal devices. These devices are called smart because of their intrinsic capabilities and advanced operating systems providing out-of-the-box features, such as voice assistants, object recognition and activity recognition, apart from the ability to install apps to enhance or leverage new smart capabilities.
Beyond all these capabilities, smartphones also have the advantage of almost always being with their user; hence, they can also be seen as intermediaries [32,33].
In this paper, we follow the same vision for smartphones and similar devices, such as tablets and smartwatches, as they are preferably seen as intermediate devices because of their capacity to bridge the physical world with the digital world in the hands of their user in terms of data flows, notifications, remote displays and interfaces (e.g., their capacity to be remote controllers).

4. Smartness Level of Device Capabilities

As described in the related work section, multi-dimensional taxonomies have been proposed to classify smart artifacts according to their capabilities. However, no clear scale or model was defined that could be used to index smart artifacts, as SAE International [34] has done in the context of car autonomy. Our opinion is that, besides the classification of smart artifacts, a reference smartness scale is useful for the immediate presentation of a smart artifact’s smartness class. We intend for this smartness scale to provide a useful although coarse classification when comparing smart artifacts from different application domains (e.g., smart bridges vs. smart coffee machines) and a more fine-grained classification when comparing smart artifacts within the same function domain (e.g., smart alarm clocks).

4.1. Problem Statement

This section focuses on presenting a model to identify the smartness levels of existing smart artifacts following a pragmatic approach. Our proposal relies on the autonomy level of a smart artifact when accomplishing the main task associated with the physical artifact.
More concretely, a smart artifact can exhibit three general capabilities, listed in order of impact, that directly contribute to its smartness: (i) traceability, (ii) awareness and (iii) actionability. “Traceability” means that the artifact is addressable either directly or indirectly; this identification is the minimum capability required to assign the artifact a minimum level of smartness. The awareness capability means that, besides identification, an artifact knows its internal and/or external state. Finally, the actionability capability means that an artifact is able to modify the world on different levels.
According to [35], classification is the general process of grouping entities by similarity and can be either uni-dimensional or multi-dimensional. In turn, classification can be broken down into two essential approaches, typology and taxonomy, wherein the former is primarily conceptual and the latter is empirical [35]. Considering the three general capabilities above (traceability, awareness and actionability) and our definition of “smart artifact”, we defined a uni-dimensional typology of smartness that consists of 12 categories. The uni-dimensional typology and the associated requirements for each category are presented in Table 2.
The categories specified in Table 2 (SLx) represent the different smartness levels and were identified from an evolutionary vision organized according to the traceability, awareness and actionability general capabilities. For instance, according to Figure 1, an artifact considered smart at the lowest level must include some form of identification so that it is directly or indirectly traceable. This is the only mandatory criterion in Table 2, and it is supported by the smart fridge example described in Section 3.2. In the next capability set are the devices that are able to know their internal and external states. In the third capability set are the devices that have the ability to act by being remotely manually driven, self-driven under a user’s supervision or fully autonomously driven with learning, adaptability and collaborative strengths. All the inner capabilities extracted from these three ordered capability sets (Figure 1) define the ordered smartness categories that form the proposed typology (Table 2).
It is important to note that the defined smartness levels have no precedence. This means that if a smart artifact is labeled as SL4, it does not have to also be SL3 and SL2 (only SL1 is mandatory for all smart artifacts). For example, consider a movable smart artifact labeled SL4 that uses a camera to broadcast acquired images directly to an operator. In this case, the smart artifact is SL4 and SL1 because there is no need for it to be aware of its internal or external state. In the same way, a collaboratively, adaptively self-driven (SL8) smart artifact does not have to implement the reactive features from SL5 and SL6.
It is worth mentioning that the provided smartness scale is completely independent of the process of developing smart artifacts. For existing smart artifacts, the smartness scale can be used to tag them with a smartness level (according to their exhibited capabilities). For new smart artifacts, the smartness scale can be used to define the capabilities to be implemented in order to reach a specific smartness level. Another idea that is worth mentioning is that to reach the different smartness levels, there is a need to rely on different knowledge areas, such as connectivity, digital identification, sensing and data gathering, integration and interoperability, learning and reasoning, actioning/robotics, real-time systems and standardization, among others. However, all these knowledge areas are out of the scope of this article.

4.2. The Model

According to Table 2, the smartness level (column 1) is related to the capabilities achieved by a smart artifact (column 2). Thus, a way to “translate” a set of capabilities into a smartness level is required.
We propose identifying the smartness level by summing the smart artifact’s capabilities. Why use the sum operation?
Consider a smart fork labeled as SL1 (it includes an internal RFID tag). If we want to improve it in order to reach the next smartness level (SL2), we have to add the internal state capability, probably relying on a memory chip and some internal sensors. Then, if we want to improve it to reach SL3, we have to provide some hardware and/or software to be able to obtain the external context state.
Thus, if we assign some (growing) weights to each smartness level and some (growing) weights to the associated capabilities, there is the chance to determine the smartness level by summing all exposed capabilities. However, as there is no smartness level precedence (e.g., an SL4 smart artifact may have only the capabilities associated with SL4 and SL1), we have to assign integer intervals to the smartness levels instead of constant values (weights).
From that perspective, it was decided to weigh each capability using powers of two, in order to represent the “power” of each capability, and to assign an integer interval to each SL such that the sum of the exhibited capabilities’ weights must lie within a smartness level interval, as shown in the next simple example:
Capabilities: Traceable = 1, internal state = 2, context state = 4, etc.
SL intervals: SL1[0..1], SL2[2..3], SL3[4..7], etc.
(a) If we want to identify the smartness level for a device that just has the traceability capability, the sum is 1 (traceable only) and it fits into SL1[0..1];
(b) If we want to identify the smartness level of a device that is traceable and is aware of its surrounding context (1 + 4 = 5), it fits into SL3[4..7];
(c) If we want to identify the smartness level of a device that is traceable, internal-state aware and surrounding-context aware (1 + 2 + 4 = 7), it also fits into SL3[4..7].
Scenarios (b) and (c) show that this approach is compliant with the non-precedence rule.
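For readers who prefer code, the following minimal sketch reproduces the arithmetic of scenarios (a)-(c). Python is assumed here purely for illustration; the weights and the interval formula are the ones introduced above, and the function names are ours.

```python
# Power-of-two capability weights, as in the example above.
TRACEABLE = 1        # 2**0
INTERNAL_STATE = 2   # 2**1
CONTEXT_STATE = 4    # 2**2

def interval(n):
    """Integer interval [a_n, b_n] of smartness level SLn (n = 1..12)."""
    return (0, 1) if n == 1 else (2 ** (n - 1), 2 ** n - 1)

def level_of(total):
    """Return the index n whose interval SLn contains the capability sum."""
    for n in range(1, 13):
        a, b = interval(n)
        if a <= total <= b:
            return n
    raise ValueError("capability sum exceeds the defined scale")

print(level_of(TRACEABLE))                                   # scenario (a): 1 -> SL1
print(level_of(TRACEABLE + CONTEXT_STATE))                   # scenario (b): 5 -> SL3
print(level_of(TRACEABLE + INTERNAL_STATE + CONTEXT_STATE))  # scenario (c): 7 -> SL3
```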
Next, this model is presented in its mathematical form.
Let us consider 12 disjoint intervals representing the smartness levels in the form of:

$$SL_n = [a_n, b_n], \quad n = 1, \ldots, 12$$

where

$$a_1 = 0, \quad b_1 = 1,$$

and

$$a_n = 2^{n-1}, \quad b_n = a_{n+1} - 1 = 2^n - 1, \quad 2 \le n \le 12, \; n \in \mathbb{N}.$$

Therefore, the integer range of each set (SL) is as follows:

$$SL_1 = [0, 1], \quad SL_2 = [2, 3], \quad SL_3 = [4, 7], \quad \ldots, \quad SL_{12} = [2048, 4095].$$
This means that a smart artifact assigned SL3 shows that the summation of its capabilities lies in the [4..7] range.
Definition 1 (Set of weighted capabilities). Let $K$ be the set of all possible weighted capabilities, given by:

$$K = \{(c_1, w_1), (c_2, w_2), \ldots, (c_{12}, w_{12})\}$$

where $c_n$ represents the capability name extracted from Table 2, column 2, and $w_n$ is an assigned growing weight in powers of 2, such that

$$w_n = 2^{n-1}, \quad n = 1, \ldots, 12.$$

Therefore, $K$ = {(Traceability, 1), (Internal State Awareness, 2), (Context State Awareness, 4), (Remote Manual Driven, 8), (Reactive Self-driven, 16), (Collaborative Reactive Self-driven, 32), (Adaptive Self-driven, 64), (Collaborative Adaptive Self-driven, 128), (Autonomous Reactive Self-driven, 256), (Collaborative Autonomous Reactive Self-driven, 512), (Autonomous Adaptive Self-driven, 1024), (Collaborative Autonomous Adaptive Self-driven, 2048)}.
Definition 2 (Set of a specific artifact’s capabilities). Let $C$ be the set of capabilities of a specific artifact, given as:

$$C = \{(c_1, w_1), \ldots, (c_N, w_N)\}, \quad 1 \le N \le 12,$$

such that

$$C \subseteq K.$$
Definition 3 (Summation of a specific artifact’s capabilities). Let $S$ be the summation of a specific artifact’s capabilities’ weights, given as:

$$S = \sum_{i=1}^{N} w_i.$$
Definition 4 (Smartness level of a specific artifact). The smartness level of a specific artifact is obtained by finding the interval $SL_n$, $n = 1, \ldots, 12$, for which

$$a_n \le S \le b_n.$$
Although, at this stage, we are only interested in determining the smartness level of each smart artifact, the provided model is also prepared to distinguish between artifacts with the same smartness level by considering the summations of the capabilities’ weights. Thus, in the future, if a subscale is required, it can be easily provided. As an example, SLx- and SLx+ sublevels could be defined for each SLx depending on whether the summation falls in the first or the second half of the SLx interval.
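The definitions above translate directly into a short program. The sketch below assumes Python and uses illustrative names of our own (K, smartness_level, sl_sublevel are not part of the formal model); the capability names and weights mirror Table 2 and Definition 1, and sl_sublevel shows the optional SLx-/SLx+ refinement just mentioned.

```python
# Definition 1: the set K of weighted capabilities (names from Table 2, weights 2**(n-1)).
K = {
    "Traceability": 1,
    "Internal State Awareness": 2,
    "Context State Awareness": 4,
    "Remote Manual Driven": 8,
    "Reactive Self-driven": 16,
    "Collaborative Reactive Self-driven": 32,
    "Adaptive Self-driven": 64,
    "Collaborative Adaptive Self-driven": 128,
    "Autonomous Reactive Self-driven": 256,
    "Collaborative Autonomous Reactive Self-driven": 512,
    "Autonomous Adaptive Self-driven": 1024,
    "Collaborative Autonomous Adaptive Self-driven": 2048,
}

def sl_interval(n):
    """Interval [a_n, b_n] of smartness level SLn."""
    return (0, 1) if n == 1 else (2 ** (n - 1), 2 ** n - 1)

def smartness_level(capabilities):
    """Definitions 2-4: sum the weights of a subset C of K and locate the SLn interval."""
    unknown = set(capabilities) - set(K)
    if unknown:
        raise ValueError(f"capabilities not in K: {unknown}")
    s = sum(K[c] for c in capabilities)     # Definition 3: S
    for n in range(1, 13):                  # Definition 4: find n with a_n <= S <= b_n
        a, b = sl_interval(n)
        if a <= s <= b:
            return n, s
    raise ValueError("capability sum outside the defined scale")

def sl_sublevel(n, s):
    """Optional refinement: SLn- if S falls in the first half of the interval, SLn+ otherwise."""
    a, b = sl_interval(n)
    return f"SL{n}-" if s <= (a + b) // 2 else f"SL{n}+"
```

For instance, smartness_level({"Traceability", "Internal State Awareness"}) returns (2, 3), i.e., an SL2 artifact with S = 3, which matches the smart mailbox analysis in Section 5.1.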
In the next section, this model that assigns a smartness level to a specific artifact will be applied to research work, prototypes and market products tagged as smart in order to determine their smartness levels.

5. Fitting Smart Artifacts

In the previous section, a uni-dimensional typology and an analytical model were defined in order to classify artifacts in terms of smartness level. As a way to demonstrate their effectiveness, we apply the model to identify and assign the smartness levels of the surveyed research work described in Section 2 as well as of other real-world market artifacts referred to as smart.

5.1. Research Work and Prototypes

The smart mailbox [9] described in Section 2 is able to notify its owner about its internal status (whether it has mail or does not have mail) as well as allow the owner to access the smart mailbox’s state from the internet. Therefore,
C = {(Traceable,1),(Internal State Awareness,2)}
S = 3
which fits into SL2 because:
2 ≤ S ≤ 3.
This means that, according to the provided model, the smart mailbox presented in [9] is an SL2 smart mailbox because, apart from being traceable, it is only aware of its internal state. In order to exhibit its smartness level, the paper title could be (for example) “An SL2 Smart Solar-Powered IoT Connected Physical Mailbox Interfaced with Smart Devices”. Then, when readers look at the paper title, they immediately have information about the capabilities of the proposed smart artifact, which belongs to the lower levels in this case.
Let us now consider the umbrella augmented with a solar panel to power a nearby user’s gadgets and smartphones, presented in [10]. Despite being an interesting and valuable prototype, according to the information provided, this artifact cannot adopt the word smart because it is not traceable (mandatory capability).
The other smart umbrella referenced in the related work [11] includes a Bluetooth interface, a three-axis electronic compass and LEDs in each rib and is driven by a smartphone to show directions by lighting the right LEDs. Therefore,
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4),(Remote Manual Driven, 8)}
S = 15
which fits into SL4 because:
8 ≤ S ≤ 15.
According to Table 2, SL4 means that the smart artifact has the ability to be remotely driven apart from being aware of its internal and context state. In the same way as above, the authors could employ the following title in order to exhibit its smartness level to readers: “An SL4 Smart Umbrella for Safety Directions in Internet of Things”.
Let us now consider the smart helmet [12] tailored for air quality and hazardous event detection. According to the proposed smartness scale, this artifact is traceable (provides the address of the wireless communication interface), is able to report internal and context states and implements a form of actuation in response to internal or context states without user supervision:
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4)}
S = 7
which fits into SL3 because:
4 ≤ S ≤ 7.
According to Table 2, SL3 means that the smart helmet is aware and can report its internal state as well as context states, in this case by using wireless communication and/or light.
The fork [13] referenced in the related work section is able to detect the food pick-up gesture and food weight, is traceable through a wireless communication interface and is able to report internal state data (weights and pick-up gestures):
C = {(Traceable, 1),(Internal State Awareness, 2)}
S = 3
Its smartness level is SL2.
The described smart dish [14] is able to trigger automatic checkouts in canteens and inform consumers about healthy diet and nutrition; it incorporates an ID tag:
C = {(Traceable, 1)}
S = 1
Its smartness level is SL1.
As for the smart chair [15], which is able to record and visualize a user’s posture through a smartphone application, it has the following weighted capability set:
C = {(Traceable, 1),(Internal State Awareness, 2)}
S = 3
Its smartness level is SL2.
The smart cup [16] is equipped with a low-power wireless communication module and a single accelerometer for detecting drinking events and drinking activities; it has the following weighted capability set:
C = {(Traceable, 1),(Internal State Awareness, 2)}
S = 3
Its smartness level is SL2.
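As a usage illustration, the classifications above can be reproduced programmatically. The snippet below is a self-contained sketch (Python assumed; the sl helper is a compact re-implementation of the Section 4.2 model, not something provided by the cited prototypes), with the capability sets copied from the analyses in this subsection.

```python
# Weights as in Definition 1 (only the capabilities used by these prototypes are listed).
WEIGHTS = {"Traceability": 1, "Internal State Awareness": 2,
           "Context State Awareness": 4, "Remote Manual Driven": 8}

def sl(caps):
    """Sum the capability weights and derive SLn from the interval that contains the sum."""
    s = sum(WEIGHTS[c] for c in caps)
    return (1 if s <= 1 else s.bit_length()), s

prototypes = {
    "smart mailbox [9]":   {"Traceability", "Internal State Awareness"},
    "smart umbrella [11]": {"Traceability", "Internal State Awareness",
                            "Context State Awareness", "Remote Manual Driven"},
    "smart helmet [12]":   {"Traceability", "Internal State Awareness",
                            "Context State Awareness"},
    "smart fork [13]":     {"Traceability", "Internal State Awareness"},
    "smart dish [14]":     {"Traceability"},
    "smart chair [15]":    {"Traceability", "Internal State Awareness"},
    "smart cup [16]":      {"Traceability", "Internal State Awareness"},
}

for name, caps in prototypes.items():
    level, total = sl(caps)
    print(f"{name}: S = {total} -> SL{level}")
# Prints SL2, SL4, SL3, SL2, SL1, SL2, SL2, matching the levels derived above.
```

The bit_length shortcut works because, for S >= 2, the SLn interval [2^(n-1), 2^n - 1] contains exactly the integers whose binary representation has n bits.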
This subsection shows the applicability of the provided model to research prototypes and the extraction of their smartness levels. In the next subsection, we do the same for well-known market products tagged as smart in order to determine their smartness levels.

5.2. Real-World Smart Artifacts

The Nest learning thermostat [36] is addressable from related mobile apps and from several home kits; hence, it is traceable, aware of internal as well as external states (through external sensors and other devices) and self-driven (reactively, collaboratively and adaptively) without requiring user supervision:
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4),
(Collaborative Autonomous Adaptive Self-driven, 2048)}
S = 2055
which fits into SL12 because 2048 ≤ S ≤ 4095.
The Nest thermostat is an example of a smart product that shows that smartness level precedence is not mandatory (it exhibits the SL1, SL2, SL3 and SL12 capabilities but none from SL4 to SL11).
EvaDrop [37] is presented as a smart shower device that can save up to 50% of water. EvaDrop is able to sense water temperature as well as a user’s distance to adjust its water flow. It also allows the setting of timers to remind a user that it is time to get out of the shower by flashing LED lights. According to the provided information, a user needs to open their water tap and press a shower button to turn on the water flow. The device includes a mobile app for statistics and preferences and a Bluetooth communication interface:
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4)}
S = 7
Its smartness level is SL3.
Thermo [38] is presented as a smart temporal thermometer. According to the provided information, Thermo uses 16 infrared sensors that read body temperature by sensing heat on a forehead, and then an acquired temperature is shown in an embedded 20 × 5 pixel LED display. The artifact can record up to 32 temperatures, which can later be synchronized via Bluetooth or Wi-Fi with a free Thermo app:
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4)}
S = 7
Its smartness level is SL3.
In the same way, Awair’s smart indoor air monitoring device [39] allows the tracking of temperature, humidity, Carbon Dioxide (CO2), Volatile Organic Compounds (VOC) and fine inhalable particles with diameters that are generally 2.5 micrometers and smaller (PM2.5) via the Awair Home app. It also provides insights about how to improve indoor air quality:
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4)}
S = 7
Its smartness level is SL3.
Mint [40] is presented as the first smart oral health monitor that is able to analyze breath and detect indicators of harmful bacteria in the mouth. This artifact works with a smartphone (Bluetooth) to show test results, store history data and store progress data:
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4)}
S = 7
Its smartness level is SL3.
STRAFFR [41] is described as a smart resistance band able to measure speed, strength and repetitions. This artifact stores data via Bluetooth in a smartphone application that is also able to give indications to its users:
C = {(Traceable, 1),(Internal State Awareness, 2),
(Context State Awareness, 4)}
S = 7
Its smartness level is SL3.
In this section, the provided model was applied to both research and real-world smart artifacts in order to identify their smartness levels. The ease of use and the clear results obtained show the applicability of the provided uni-dimensional typology and the resulting analytic model.

6. Conclusions

The usage of the “smart” qualifier word with recent technological products and research work demands clarification. This is not an issue raised only by us, as other authors have already tried to address this concern. However, the few existing works that have done so rely on multi-dimensional taxonomies and classification matrices that define different classification perspectives without specifying a clear smartness level able to distinguish between different smart artifacts. This paper presents a clear and pragmatic scale for defining the smartness level of a smart artifact. Although this is not an easy task, because smartness cannot be reduced to being only true or false, to the best of our knowledge, we provide the first smartness scale ranging from identification and sensing capabilities to self-driven, adaptive and autonomous capabilities.
When applying the proposed classification scale model, one must be aware that, depending on a physical artifact’s functions, the scale may result in a highly complex smart artifact being classified with a lower smartness level than an artifact of lower complexity (compare, for example, a smart toaster and a smart car). Nevertheless, the assignment of a smartness level immediately tells a user about an artifact’s capabilities, and when comparing similar smart artifacts, it is possible to identify the smartest one. To show the feasibility of the provided smartness scale, various smart artifact samples from both the market and research domains were analyzed and tagged with their related smartness levels.
As this was the first attempt to provide a clear smartness scale to be applied to smart artifacts, this paper also aims to bring more researchers into this discussion, providing a basis for the future adoption of a smartness scale that effectively allows for the qualification and understanding of the characteristics and autonomy of smart artifacts.

Author Contributions

Conceptualization, N.C., N.R., M.A.S. and A.P.; investigation, N.C.; applicable existing smart artifacts, N.R. and A.P.; writing—original draft preparation, N.C. and A.P.; model, M.A.S. and N.C.; writing—review and editing, N.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by national funds through the Portuguese Foundation for Science and Technology (FCT), I.P., under the project UIDB/04524/2020 and was partially supported by Portuguese National funds through FITEC-Programa Interface with reference CIT “INOV-INESC Inovação-Financiamento Base”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAL   Ambient Assisted Living
AI    Artificial Intelligence
AmI   Ambient Intelligence
API   Application Programming Interface
B2B   Business-to-Business
CO2   Carbon Dioxide
ICT   Information and Communications Technology
ID    Identification
IoT   Internet of Things
IPSO  Internet Protocol for Smart Objects
LED   Light-Emitting Diode
RF    Radio Frequency
RFID  Radio Frequency Identification
SL    Smartness Level
VOC   Volatile Organic Compounds

References

  1. Weiser, M. The Computer for the 21st Century. Sci. Am. 1991, 265, 94–104. [Google Scholar] [CrossRef]
  2. Cassens, J.; Wegener, R. Ambient Explanations: Ambient Intelligence and Explainable AI. In Proceedings of the 15th European Conference on Ambient Intelligence (AmI 2019), Rome, Italy, 13–15 November 2019. [Google Scholar]
  3. Calvaresi, D.; Cesarini, D.; Sernani, P.; Marinoni, M.; Dragoni, A.; Sturm, A. Exploring the ambient assisted living domain: A systematic review. J. Ambient. Intell. Humaniz. Comput. 2017, 8, 239–257. [Google Scholar] [CrossRef]
  4. Alter, S. Making Sense of Smartness in the Context of Smart Devices and Smart Systems. Inf. Syst. Front. 2020, 22, 381–393. [Google Scholar] [CrossRef]
  5. Kindberg, T.; Barton, J. Towards a real-world wide web. In Proceedings of the 9th Workshop on ACM SIGOPS European Workshop: Beyond the PC: New Challenges for the Operating System, Kolding, Denmark, 17–20 September 2000; pp. 195–200. [Google Scholar]
  6. Pentland, A. Smart rooms, desks and clothes. In Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Munich, Germany, 21–24 April 1997; Volume 1, pp. 171–174. [Google Scholar]
  7. Holmquist, L.; Mattern, F.; Schiele, B.; Alahuhta, P.; Beigl, M.; Gellersen, H. Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts. In Proceedings of the UbiComp’01: 3rd International Conference on Ubiquitous Computing, Atlanta, GA, USA, 30 September–2 October 2001; pp. 116–122. [Google Scholar]
  8. Helal, S. The Monkey, the Ant, and the Elephant: Addressing Safety in Smart Spaces. Computer 2020, 53, 73–76. [Google Scholar] [CrossRef]
  9. Khan, T. A Solar-Powered IoT Connected Physical Mailbox Interfaced with Smart Devices. IoT 2020, 1, 128–144. [Google Scholar] [CrossRef]
  10. Manpreet, S.K.; Manish, K.; Bobby, K.; Priyam, T.; Harpreet, K.C. Designing and Implementation of Smart Umbrella. Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol. 2019, 5, 13–17. [Google Scholar] [CrossRef]
  11. Han, Y.; Lee, C.; Kim, Y.; Jeon, S.; Seo, D.; Jung, I. Smart umbrella for safety directions on Internet of Things. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 8–10 January 2017; pp. 84–85. [Google Scholar] [CrossRef]
  12. Behr, C.J.; Kumar, A.; Hancke, G.P. A smart helmet for air quality and hazardous event detection for the mining industry. In Proceedings of the 2016 IEEE International Conference on Industrial Technology (ICIT), Taipei, Taiwan, 14–17 March 2016; pp. 2026–2031. [Google Scholar] [CrossRef]
  13. Zhang, Z.; Zheng, H.; Rempel, S.; Hong, K.; Han, T.; Sakamoto, Y.; Irani, P. A smart utensil for detecting food pick-up gesture and amount while eating. In Proceedings of the AH ’20: 11th Augmented Human International Conference, Winnipeg, MB, Canada, 27–29 May 2020; pp. 1–8. [Google Scholar] [CrossRef]
  14. Zhou, L.; Wang, A.; Zhang, Y.; Sun, S. A smart catering system base on Internet-of-things technique. In Proceedings of the 2015 IEEE 16th International Conference on Communication Technology (ICCT), Hangzhou, China, 18–20 October 2015; pp. 433–436. [Google Scholar] [CrossRef]
  15. Park, M.; Song, Y.; Lee, J.; Paek, J. Design and Implementation of a smart chair system for IoT. In Proceedings of the 2016 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 19–21 October 2016; pp. 1200–1203. [Google Scholar] [CrossRef]
  16. Liu, K.C.; Hsieh, C.Y.; Huang, H.Y.; Chiu, L.T.; Hsu, S.J.P.; Chan, C.T. Drinking Event Detection and Episode Identification Using 3D-Printed Smart Cup. IEEE Sens. J. 2020, 20, 13743–13751. [Google Scholar] [CrossRef]
  17. Rijsdijk, S.; Hultink, E. How Today’s Consumers Perceive Tomorrow’s Smart Products. J. Prod. Innov. Manag. 2009, 26, 24–42. [Google Scholar] [CrossRef]
  18. Cena, F.; Console, L.; Matassa, A.; Torre, I. Multi-dimensional intelligence in smart physical objects. Inf. Syst. Front. 2019, 21, 383–404. [Google Scholar] [CrossRef]
  19. Pérez Hernández, M.E.; Reiff-Marganiec, S. Classifying Smart Objects using capabilities. In Proceedings of the 2014 International Conference on Smart Computing, Hong Kong, China, 3–5 November 2014; pp. 309–316. [Google Scholar] [CrossRef]
  20. Püschel, L.; Röglinger, M.; Schlott, H. What’s in a Smart Thing? Development of a Multi-layer Taxonomy. In Proceedings of the ICIS, Dublin, Ireland, 11–14 December 2016. [Google Scholar]
  21. Langley, D.J.; van Doorn, J.; Ng, I.C.L.; Stieglitz, S.; Lazovik, A.; Boonstra, A. The Internet of Everything: Smart things and their impact on business models. J. Bus. Res. 2021, 122, 853–863. [Google Scholar] [CrossRef]
  22. Kortuem, G.; Kawsar, F.; Sundramoorthy, V.; Fitton, D. Smart objects as building blocks for the Internet of things. IEEE Internet Comput. 2010, 14, 44–51. [Google Scholar] [CrossRef]
  23. Siegemund, F.; Krauer, T. Integrating Handhelds into Environments of Cooperating Smart Everyday Objects. In Proceedings of the EUSAI, Eindhoven, The Netherlands, 8–10 November 2004. [Google Scholar]
  24. Ventura, D.; Monteleone, S.; Torre, G.L.; Delfa, G.C.L.; Catania, V. Smart EDIFICE—Smart EveryDay interoperating future devICEs. In Proceedings of the 2015 International Conference on Collaboration Technologies and Systems (CTS), Atlanta, GA, USA, 1–5 June 2015; pp. 19–26. [Google Scholar]
  25. Porter, M.E.; Heppelmann, J.E. How Smart, Connected Products Are Transforming Companies. Harv. Bus. Rev. 2015, 93, 53–71. [Google Scholar]
  26. Kaisler, S.H.; Money, W.H.; Cohen, S.J. Smart Objects: An Active Big Data Approach. In Proceedings of the HICSS, Hilton Waikoloa Village, HI, USA, 3–6 January 2018. [Google Scholar]
  27. OMA SpecWorks. Available online: https://omaspecworks.org (accessed on 14 May 2021).
  28. Samaniego, M.; Deters, R. Internet of Smart Things—IoST: Using Blockchain and CLIPS to Make Things Autonomous. In Proceedings of the 2017 IEEE International Conference on Cognitive Computing (ICCC), Honolulu, HI, USA, 25–30 June 2017; pp. 9–16. [Google Scholar] [CrossRef]
  29. Moraes do Nascimento, N.; de Lucena, C.J.P. Engineering cooperative smart things based on embodied cognition. In Proceedings of the 2017 NASA/ESA Conference on Adaptive Hardware and Systems (AHS), Pasadena, CA, USA, 24–27 July 2017; pp. 109–116. [Google Scholar] [CrossRef]
  30. Madakam, S. Internet of things: Smart things. Int. J. Future Comput. Commun. 2015, 4, 250–253. [Google Scholar] [CrossRef]
  31. Caswell, D.; Debaty, P. Creating Web Representations for Places. In Proceedings of the Handheld and Ubiquitous Computing; Thomas, P., Gellersen, H.W., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 114–126. [Google Scholar]
  32. Mattern, F.; Florkemeier, C. Vom Internet der Computer zum Internet der Dinge. Informatik-Spektrum 2010, 33, 107–121. [Google Scholar] [CrossRef]
  33. Kees, A.; Oberländer, A.M.; Röglinger, M.; Rosemann, M. Understanding the internet of things: A conceptualisation of business-to-thing (B2T) interactions. In Proceedings of the European Conference on Information Systems, Munster, Germany, 26–29 May 2015; Volume 23. [Google Scholar]
  34. SAE International. Levels of Driving Automation. Available online: https://www.sae.org (accessed on 14 May 2021).
  35. Bailey, K.D. Typologies and Taxonomies: An Introduction to Classification Techniques; Number 102 in Quantitative Applications in the Social Sciences; SAGE Publications: Thousand Oaks, CA, USA, 1994. [Google Scholar]
  36. Google Nest. Available online: https://www.nest.com (accessed on 22 October 2021).
  37. EvaDrop, World’s First Smart Shower. Available online: https://evadrop.com (accessed on 4 October 2021).
  38. Smart Temporal Thermometer. Available online: https://www.withings.com (accessed on 8 October 2021).
  39. Smart Indoor Air Monitor. Available online: https://uk.getawair.com (accessed on 8 October 2021).
  40. Breathometer. Available online: https://www.breathometer.com/mint (accessed on 8 October 2021).
  41. Your Gym and Personal Trainer to Go. Available online: https://en.straffr.com (accessed on 8 October 2021).
Figure 1. Evolutionary set of capabilities, growing from bottom to top.
Table 1. Summary of the existing works aimed at classifying or segmenting smart artifacts in some way.
Work | Approach | Method | Aim | Sample Output
[4] | Classification matrix | Four smart capabilities (information processing, internal regulation, action in the world and knowledge acquisition) expanded to 23 dimensions. Although the authors defined five different levels of smartness (not smart at all, scripted execution, formulaic adaptation, creative adaptation and unscripted or partially scripted invention), the 23 dimensions are evaluated with two smart levels: somewhat smart and extremely smart. | Guiding making a device or system smarter. | A table for each category and related dimensions wherein each dimension is characterized in terms of somewhat smart and extremely smart.
[18] | Framework | Six capabilities: knowledge management, reasoning, learning what, learning how, human–object interaction/object–object interaction and social relations. For each smart physical object, each capability is assigned a qualitative level of smartness. A descriptive conclusion is created according to the assigned levels of smartness. | Guiding the design and comparison of different smart physical objects. | “Smart physical object with interaction capabilities (smart and innovative input modalities) with limited reasoning capabilities enabling context awareness and adaptive reminder”, quoted from [18].
[19] | Classification model | Five levels of capabilities: level 1 (essential), level 2 (networked), level 3 (enhanced), level 4 (aware) and level 5 (IoT complete). For each capability level, a set of capabilities is defined. The capability level is reached when a smart object implements the specified capabilities. | Helping to determine what a smart object is able to do by itself and what requirements can be covered externally by applications, services, platforms and other objects. | This project defines capability levels and not smartness levels.
[20] | Multi-layer taxonomy | Ten capability dimensions (sensing, acting, direction, multiplicity, partner, thing compatibility, data source, data usage, offline functionality and main purpose); for each smart thing, a support percentage is assigned to each dimension. In the end, a hit ratio is calculated. | The authors presented calculated hit ratios for different smart things per thing and per dimension. | This project is not focused on smartness level. Instead, the multi-layer taxonomy is used to calculate hit ratios for individual smart things and for dimensions when considering multiple smart things.
[21] | Taxonomy | Matrix that relates four capabilities (reactive, adaptive, autonomous and cooperative) with three connectivity levels: closed system, open system with restricted protocol and open system with full interoperability. Twelve different unlabeled cells are defined (apparently defining 12 different smartness levels) to highlight the different implications of a smart thing for business models. | Describing the business model implications of smart things with different levels of smartness. | “Business models based on delegating decision-making to smart things”, quoted from [21].
Our proposal | Uni-dimensional typology | Twelve levels of smartness (SL1..SL12) associated with sets of capabilities: traceable, internal state awareness, context state awareness, remote manual driven, reactive self-driven, collaborative reactive self-driven, adaptive self-driven, collaborative adaptive self-driven, autonomous reactive self-driven, collaborative autonomous reactive self-driven, autonomous adaptive self-driven and collaborative autonomous adaptive self-driven. A mathematical model based on capability sets and capability weights is provided to extract the smartness level of each smart artifact. | Assigning a smartness level and guiding smart artifact development in terms of the requirements to achieve a specific smartness level. | The smart chair has an SL2 smartness level.
Table 2. Proposed uni-dimensional typology for classifying smart artifacts’ smartness levels.
Smartness Level | Required Capability | Criteria
SL1 | Traceability | The smart artifact must include a unique identification, even though it may rely on the surrounding infrastructure. Bar codes, QR codes, beaconing and RFID are some examples of identification technologies that make artifacts traceable. Any smart artifact must have at least the traceability capability.
SL2 | Internal state awareness | The smart artifact is able to report simple internal states, ranging from its battery level, temperature and vibration, etc., to more complex internal diagnosis reports.
SL3 | Context state awareness | The smart artifact is able to provide a report of its surrounding context, apart from its internal one.
SL4 | Remotely, manually driven | The smart artifact has the ability to be manually, remotely driven, either partially or totally.
SL5 | Reactively self-driven | The smart artifact is able to react by itself to its internal or external context, but under user supervision, considering its main function.
SL6 | Collaboratively, reactively self-driven | The smart artifact is able to react by itself according to its internal or external context and from a collaboration with other smart artifacts, under user supervision, considering its main function.
SL7 | Adaptively self-driven | The smart artifact is able to react and adapt itself by learning from past data and events, under user supervision, considering its main function.
SL8 | Collaboratively, adaptively self-driven | The smart artifact is able to react and adapt itself by learning from past data and events and from a collaboration with other smart artifacts, under user supervision, considering its main function.
SL9 | Autonomously, reactively self-driven | The smart artifact is able to react by itself to its internal or external context without requiring user supervision, considering its main function.
SL10 | Collaboratively, autonomously and reactively self-driven | The smart artifact is able to react by itself to its internal or external context and from a collaboration with other artifacts without requiring user supervision, considering its main function.
SL11 | Autonomously, adaptively self-driven | The smart artifact is able to react and adapt itself by learning from past data and events without requiring user supervision, considering its main function.
SL12 | Collaboratively, autonomously and adaptively self-driven | The smart artifact is able to react and adapt itself by learning from past data and events and from a collaboration with other smart artifacts without requiring user supervision, considering its main function.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
