Article

Improving Temporal Event Scheduling through STEP Perpetual Learning

1 Faculty of Innovation Engineering, Macau University of Science and Technology, Macau SAR 999078, China
2 Computer Engineering Technical College, Guangdong Polytechnic of Science and Technology, Guangzhou 510640, China
3 School of Information Engineering, Nanchang Institute of Technology, Nanchang 330029, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(23), 16178; https://doi.org/10.3390/su142316178
Submission received: 20 October 2022 / Revised: 24 November 2022 / Accepted: 29 November 2022 / Published: 3 December 2022
(This article belongs to the Special Issue Artificial Intelligence Applications for Sustainable Urban Living)

Abstract:
Currently, most machine learning applications follow a one-off learning process: given a static dataset and a learning algorithm, generate a model for a task. These applications can neither adapt to a dynamic and changing environment, nor accomplish incremental task performance improvement continuously. STEP perpetual learning, by continuous knowledge refinement through sequential learning episodes, emphasizes the accomplishment of incremental task performance improvement. In this paper, we describe how a personalized temporal event scheduling system, SmartCalendar, can benefit from STEP perpetual learning. We adopt interval temporal logic to represent events' temporal relationships and to determine whether events are temporally inconsistent. To provide strategies that approach user preferences for handling temporal inconsistencies, we design SmartCalendar to recognize, resolve, and learn from temporal inconsistencies based on STEP perpetual learning. SmartCalendar rests on several cornerstones: similarity measures for temporal inconsistencies; a sparse decomposition method to utilize historical data; and a loss function based on cross-entropy to optimize performance. The experimental results on the collected dataset show that SmartCalendar incrementally improves its scheduling performance and substantially outperforms comparison methods.

1. Introduction

To date, most machine learning applications follow a one-off metaphor: given a static dataset and a learning algorithm, generate a function or a model for a task. Such a metaphor prevents these applications from adapting to the dynamic environment and achieving incremental task performance improvement [1,2,3]. Unlike the one-off metaphor, human learning is a long-term process with continuous knowledge refinement [4]. Researchers have proposed several paradigms that model the complexity, diversity, and accumulative nature of human learning [5,6,7,8,9]: Lifelong Learning, Never-ending learning, and STEP perpetual learning.
Lifelong Learning (LL) learns to obtain knowledge continuously by carrying out sequential tasks [6,7,8]. The knowledge of past tasks is accumulated so that the learner can make use of it to help learn a new task [6]. Never-Ending Learning (NEL) is a paradigm that learns various types of knowledge, improves subsequent learning based on previously learned knowledge, and performs sufficient self-reflection [3,5]. NEL defines the problem as an ordered pair comprising a set of learning tasks and coupling constraints [3]. However, these solutions neither formally define learning stimuli nor distinguish learning episodes from working episodes.
Unlike previous studies, STEP perpetual learning (PL) is a novel paradigm that regards learning stimuli as being as important as tasks, experience, and performance measures [2,10,11,12,13]. Each time a perpetual learning agent (PeLA) encounters a stimulus, a learning episode is triggered to improve the agent's problem-solving knowledge, which leads to better performance. Therefore, STEP PL enables us to design a special PeLA, SmartCalendar, whose performance becomes progressively better and which can eventually perform its tasks satisfactorily.
This work chooses the calendar application as one usage scenario to demonstrate how a PeLA incrementally improves its performance through continuous knowledge refinement. In today's fast-paced world, many people use calendar systems to improve productivity and organize their lives. Internet giants have recognized the importance of calendar systems and released their own applications, e.g., Google Calendar, Microsoft Outlook, and Apple Calendar [14,15,16]. Despite the vast investment in these applications, they are still flawed in their core function, which is helping users schedule events. For instance, suppose the user has two events to attend on 1 October 2022: an exam from 9 a.m. to 10 a.m. in Room A201, and a meeting from 10 a.m. to 11 a.m. in Room C408. Because there is not enough time between the two events for the user to move from one location to the other, the user cannot attend the second event on time. We say there is a temporal conflict or temporal inconsistency (TI) between the two events [17] (formal definitions of events and TIs are given later). In this case, the user can reschedule events by adjusting their starting and ending points. Considering that it is difficult for users to detect all conflicts in time, especially those caused by the transition between events, an intelligent calendar system should be able to detect and resolve temporal conflicts between events. However, none of the aforementioned calendar applications has such capabilities.
Researchers have proposed several scheduler models [18,19,20,21,22] to resolve conflicts. TASHA [18] utilizes rules based on event types to resolve temporal conflicts. ADAPTS employs a Decision Tree (DT) to choose among four strategies: modify the original event, modify the new event, modify both events, and delete the original event [19,20]. As an updated version, Ref. [22] adopts a non-linear programming model whose objective is to minimize the duration changes of conflicting events. However, the above scheduler models are still far from an intelligent event scheduling system. They treat all events as equally important [21], which contradicts the highly personalized nature of the calendar scenario. Refs. [19,22] trained a DT model on the complete dataset, which is incompatible with the cumulative nature of a calendar system. Additionally, Refs. [19,20,22] oversimplify the user's actual strategies by categorizing solutions into four classes.

1.1. Challenges

Here are questions an intelligent event scheduling system must answer:
1. How to schedule temporally conflicting events using existing knowledge?
What the system knows are past events and past conflicts together with their solutions. The number of events and TIs the user has per day is limited, whereas most of the attributes of a TI are discrete attributes with dozens of possible values. Therefore, for a long time after the system is first put to use, its knowledge is insufficient for constructing a conventional ML model to solve TIs directly [1,23]. In this work, we propose three methods, replicating solutions of identical cases (RSIC), referencing solutions of similar cases (RSSC), and generating solutions with strategy distribution (GSSD), to deal with TIs in different scenarios: given a TI, RSIC looks for cases where user preferences are determined, i.e., the same TI in history, and adopts the strategy used in that TI. RSSC seeks similar TIs based on the importance of attribute values and draws on their solutions to generate a strategy. If there is no identical or similar TI, GSSD obtains a strategy from the distribution of historical user decisions. Therefore, at all stages of operation, the system chooses the most appropriate method to resolve the current conflict.
2. How to consistently improve the system's performance?
As a personalized system, the user's preference is the golden rule for addressing TIs, which requires the system to incrementally improve its performance to adapt to an individual's preferences over time. To this end, we design stimuli-driven learning processes according to the STEP PL framework: the completion of each working episode drives the system to resolve knowledge inadequacy by recording relevant data and revising the corresponding meta-knowledge, and disagreement between the user and the system on a TI solution prompts the system to address knowledge deficiency by refining the problem-solving model. As a result, the system has more data for reference and gradually approaches the user's decision preferences. Eventually, the system can generate user-acceptable solutions to TIs.

1.2. Contributions

Our work’s contributions are summarized as follows:
  • Interval temporal logic (ITL) only considers the original interval relationships between events. In real life, however, events can occur at various locations, and the transition between events cannot be ignored if an event is to be fulfilled. Thus, we propose complete temporal classes based on ITL to identify temporal relations by taking events' transitions into account.
  • To the best of our knowledge, we are the first to model a temporal event scheduling and management system under STEP PL. We introduce a theoretical model that provides guidelines for developing algorithms to recognize, resolve, and, more importantly, learn from TIs. The stimuli-driven learning processes enable the system to realize incremental performance improvement.
  • There is no existing calendar dataset that matches our definitions. We collected a five-user dataset, which is available at https://github.com/hensontang9/Temporal_conflicts (accessed on 14 September 2022). For each user, there are about 4000 events and 600 TIs. Experiments on the collected dataset verified the system's feasibility and self-improvement capability: our system exhibits significantly better performance than the comparison methods, and it incrementally improves its scheduling performance during use. A prototype is implemented on the Android platform and is available at https://github.com/hensontang9/SmartCalendar (accessed on 14 September 2022).

2. Related Work

2.1. Temporal Logics

Temporal logics are systems of rules and symbolism for temporal representation and reasoning, which are categorized in Refs. [17,24] as "propositional versus first-order", "linear versus branching", "point versus interval", etc. ITLs are first-order, linear-time logics with interval-based semantics and continuous operators [17,25,26]; they possess a richer representation formalism for defining interval relations than point-based schemes because of the additional expressiveness obtained by reasoning over time intervals [24,27]. Hence, we chose ITL to help qualitatively delineate temporal relations between events.

2.2. Temporal Scheduling Applications

To the best of our knowledge, existing temporal scheduling applications focus on scheduling periodic events [28,29,30,31], tracking events that users are interested in [32,33,34], efficiently creating and sharing appointments according to given availability preferences [35,36], or planning events so as to minimize the start-to-end time duration [37,38]. They do not cover developing an event scheduling system that acquires the capability to resolve temporal conflicts and improves its performance over time.

2.3. Incremental Performance Improvement Paradigms

M.I. Jordan et al. pointed out that one of the challenges in machine learning is to develop systems that never stop learning new tasks and improving their performance continuously [3]. Related explorations include lifelong learning [6,7,8], never-ending learning [5], perpetual learning [2,10,11,12,13], etc.
Lifelong learning agents learn one task at a time, continuously acquiring new knowledge and refining existing knowledge [6,7,39]. By adding a specific KB, the capability to identify new tasks, and the ability to learn on the job, Z. Chen et al. extended the definition of LL and employed the LL method in the field of topic modeling [8,40,41]. An NEL system can improve itself using the knowledge learned from self-supervised experience [5]. As a case study, the Never-Ending Language Learner has been equipped with 120 million candidate beliefs by extracting information from the Web [9,42,43].
There are several important differences between the above paradigms and STEP PL: First, the learning episodes in PL are discrete and triggered by stimuli, whereas learning in the aforementioned approaches is not triggered by any events and is largely continuous. Second, PL emphasizes the accomplishment of incremental task performance improvement through sequential learning episodes, whereas the above work is primarily oriented toward knowledge acquisition. In this paper, our calendar system is designed following the concept of STEP PL.

3. Preliminaries

3.1. Events

An event ε is defined to be:
ε = (ϵ, α, τ, ϑ, p, ℓ, ζ, ι)
where
- ϵ indicates the event type, ϵ ∈ {work, study, social, family, entertainment, personal};
- α represents the activity, α ∈ Act_ϵ, where Act_ϵ is the set of activities belonging to event type ϵ;
- τ is the time interval, τ = (St, St + Dur), where St is the starting point and Dur is the duration;
- ϑ denotes an event's flexibility, ϑ ∈ {rigid, flexible}. The time intervals of rigid events need to be strictly adhered to, while those of flexible events can be adjusted;
- p represents the host and participant, p = (p_host, p_part), in which
  - p_host denotes a host in the set P_host of all events' hosts, i.e., p_host ∈ P_host;
  - p_part is a participant in the set P_part of all events' participants, i.e., p_part ∈ P_part;
- ℓ indicates the location, ℓ = (name, long, la), where name, long, and la are the name, longitude, and latitude of the location;
- ζ denotes the periodicity, ζ ∈ {once, every day, every week, every month, every year};
- ι is a flag representing the whole event's importance, ι ∈ {important, normal}.
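To make the event structure concrete, the following Python sketch mirrors the definition above. The field names, value sets, and the encoding of time in minutes are illustrative assumptions, not the identifiers used in the released prototype.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Event:
    """Illustrative container mirroring ε = (ϵ, α, τ, ϑ, p, ℓ, ζ, ι)."""
    etype: str          # ϵ: work, study, social, family, entertainment, personal
    activity: str       # α: an activity belonging to etype
    start: int          # St, e.g., minutes since midnight
    duration: int       # Dur, in minutes
    flexibility: str    # ϑ: "rigid" or "flexible"
    host: str           # p_host
    participant: str    # p_part
    location: Tuple[str, float, float]  # ℓ: (name, longitude, latitude)
    periodicity: str    # ζ: once, every day, every week, every month, every year
    importance: str     # ι: "important" or "normal"

    @property
    def end(self) -> int:
        """Ending point St + Dur."""
        return self.start + self.duration
```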

3.2. Complete Temporal Classes

Based on the primitive relationship Meets [25,44,45], Figure 1 depicts the interval relationships between τ_i and τ_j (τ_i and τ_j are the time intervals of events ε_i and ε_j) [17]: Before(τ_i, τ_j), Overlaps(τ_i, τ_j), Starts(τ_i, τ_j), During(τ_i, τ_j), Finishes(τ_i, τ_j), Equals(τ_i, τ_j).
However, in the calendar scenario, events' transitions cannot be ignored since events may occur at different locations. Let τ_ij = (St_ij, St_ij + Dur_ij) denote the time interval needed to travel from ℓ_i to ℓ_j (ℓ_i, ℓ_j are the locations where ε_i, ε_j take place; St_ij and Dur_ij are the starting point and duration of the transition). If ε_i and ε_j can both be realized in a single given circumstance, then ε_i and ε_j are consistent with each other; otherwise, we say that they are inconsistent. We propose the temporal class (TC) to represent the temporal relation between two events while taking events' transitions into account.
There does not exist temporal inconsistency between ε i and ε j and both events can be scheduled without compromise if any of the following statements is true:
- Meets(τ_i, τ_j), when Dur_ij = 0 (NTI-M0)
- Before(τ_i, τ_j), when Dur_ij = 0 (NTI-B0)
- Before(τ_i, τ_j) ∧ St_j ≥ St_ij + Dur_ij, when Dur_ij > 0 (NTI-B1)
The presence of overlapping time intervals indicates a direct temporal conflict. Direct temporal conflicts can be further divided into the following two subcases. The first subcase is that the conflicting events occur at the same location and the user does not need extra time to commute (Dur_ij = 0). There is a direct temporal inconsistency between ε_i and ε_j, without the travel time complication, if any of the following conditions is true:
- Overlaps(τ_i, τ_j), when Dur_ij = 0 (DTI-O0)
- Starts(τ_i, τ_j), when Dur_ij = 0 (DTI-S0)
- Equals(τ_i, τ_j), when Dur_ij = 0 (DTI-E0)
- During(τ_i, τ_j), when Dur_ij = 0 (DTI-D0)
- Finishes(τ_i, τ_j), when Dur_ij = 0 (DTI-F0)
Let dti-nt(ε_i, ε_j) denote the following:
dti-nt(ε_i, ε_j) ≝ [DTI-O0 ∨ DTI-S0 ∨ DTI-E0 ∨ DTI-D0 ∨ DTI-F0].
The second subcase is that the locations of the conflicting events are different and the user needs extra time to travel from one location to the other (Dur_ij > 0). There is a direct temporal inconsistency between ε_i and ε_j, with the travel time complication, if any of the following conditions is true:
- Overlaps(τ_i, τ_j), when Dur_ij > 0 (DTI-O1)
- Starts(τ_i, τ_j), when Dur_ij > 0 (DTI-S1)
- Equals(τ_i, τ_j), when Dur_ij > 0 (DTI-E1)
- During(τ_i, τ_j), when Dur_ij > 0 (DTI-D1)
- Finishes(τ_i, τ_j), when Dur_ij > 0 (DTI-F1)
Let dti-tc(ε_i, ε_j) denote the following:
dti-tc(ε_i, ε_j) ≝ [DTI-O1 ∨ DTI-S1 ∨ DTI-E1 ∨ DTI-D1 ∨ DTI-F1].
Let dti(ε_i, ε_j) denote the direct temporal inconsistency between ε_i and ε_j:
dti(ε_i, ε_j) ≝ [dti-nt(ε_i, ε_j) ∨ dti-tc(ε_i, ε_j)].
Even if ε_i and ε_j do not overlap, the completion of the events will be compromised if the commuting time is longer than the gap between ε_i and ε_j. We refer to this kind of conflict as an indirect temporal inconsistency (ITI) and denote it by iti(ε_i, ε_j):
iti(ε_i, ε_j) ≝ [ITI-M1 ∨ ITI-B1],
where ITI-M1 and ITI-B1 are the cases that meet the following conditions:
- Meets(τ_i, τ_j), when Dur_ij > 0 (ITI-M1)
- Before(τ_i, τ_j) ∧ St_j < St_ij + Dur_ij, when Dur_ij > 0 (ITI-B1)
The temporal inconsistency between ε_i and ε_j, denoted as ti(ε_i, ε_j), is defined to be:
ti(ε_i, ε_j) ≝ [dti(ε_i, ε_j) ∨ iti(ε_i, ε_j)].
It is noteworthy that when a user departs for another location is subjective and not fixed. Here, we assume the user commutes immediately after completing one event, which corresponds to the shortest conflict length among all temporally conflicting cases.
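The classification above can be condensed into a small decision procedure. The following Python sketch, with hypothetical function and argument names, labels a pair of intervals as NTI, DTI, or ITI, assuming ε_i starts no later than ε_j and that the user departs immediately after ε_i ends.

```python
def temporal_class(st_i, dur_i, st_j, dur_j, commute):
    """Classify the relation of two time intervals (in minutes) given the
    commute time between their locations. Returns 'NTI', 'DTI', or 'ITI'.
    Simplified sketch of Section 3.2; assumes st_i <= st_j."""
    end_i = st_i + dur_i
    end_j = st_j + dur_j
    # Overlapping intervals (Overlaps/Starts/During/Finishes/Equals): direct TI
    if st_j < end_i and st_i < end_j:
        return "DTI"
    # Non-overlapping intervals: check whether the gap covers the commute
    gap = st_j - end_i          # 0 for Meets, > 0 for Before
    if commute > 0 and gap < commute:
        return "ITI"            # indirect TI: not enough time to travel
    return "NTI"                # both events can be realized without compromise
```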

3.3. Strategies to Overcome Temporal Conflicts

As an essential basis for adjusting events, the importance of an event is collectively influenced by attributes other than time information. These attributes are called decision attributes, including event type, activity, host, participant, location name, and periodicity.
An action  A defines the modification applied to an event’s time point:
A = (a_type, a_time),
where the action type a_type ∈ {hold, advance, postpone, abandon}. hold means keeping a time point unchanged; advance/postpone means adjusting a time point earlier/later; abandon means discarding the time interval. a_time represents the length of the adjustment, which is zero when a_type is hold or abandon.
The strategy C describes adjustments applied to a TI’s events, and is defined as:
C = (A_{1,start}, A_{1,end}, A_{2,start}, A_{2,end}),
where A_{i,start} and A_{i,end} (i = 1, 2) are the actions applied to the starting point and ending point of the i-th event in the TI.
We use the strategy type c_type to qualitatively analyze a TI, which is defined to be:
c_type = (a_{1,start}^type, a_{1,end}^type, a_{2,start}^type, a_{2,end}^type),
where a_{i,start}^type and a_{i,end}^type (i = 1, 2) are the action types of A_{i,start} and A_{i,end}, respectively.
For brevity, we use c n s y s and c n u s e r to denote the strategy type that the system suggested and the user adopted in T I n , respectively.
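A minimal sketch of the action and strategy containers follows, under the assumption that adjustment lengths are kept in minutes; the names are illustrative rather than taken from the prototype.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Action:
    atype: str   # one of: hold, advance, postpone, abandon
    atime: int   # minutes to adjust; 0 for hold or abandon

# A strategy adjusts the starting and ending points of both conflicting events.
Strategy = Tuple[Action, Action, Action, Action]  # (A1_start, A1_end, A2_start, A2_end)

def strategy_type(c: Strategy) -> Tuple[str, str, str, str]:
    """The qualitative view c_type used throughout Section 5."""
    return tuple(a.atype for a in c)
```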

4. STEP PL Framework

STEP PL focuses on developing a PeLA that can consistently and continuously improve its performance at tasks over time [2]. As depicted in Figure 2, a PeLA runs in an episodic manner, where an episode can be classified as a working or learning episode.
Working episode. Each time the agent performs a task, it first queries the corresponding knowledge in the knowledge base (KB), i.e., experience E. Based on the knowledge queried, the problem-solving component generates a result for the task. By evaluating the processing result with the metrics P, we can tell how well the agent performs. However, the agent may fail to achieve the desired performance on the task if experience E is flawed or the environment changes. Hence, the agent improves its performance through stimuli-driven learning processes, where knowledge deficiencies serve as the learning stimuli S. Once the agent detects a stimulus, it knows that its knowledge in E cannot properly and adequately handle the task. At this point, a subsequent learning episode is triggered by the stimulus.
Learning episode. In a learning episode, the learning component uses stimulus-specific algorithms or heuristics to augment or revise the existing knowledge, resulting in an improvement in P. In the long-running process, the agent refines E continuously, leading to incremental performance improvements of P at T over time. In the end, the agent can satisfactorily perform tasks in T.

5. Learning through Solving Temporal Conflicts

This section first briefly introduces the system from STEP PL perspectives, then describes approaches to resolve TIs, and finally presents the stimuli-driven learning processes.

5.1. STEP PL Perspectives of the SmartCalendar System

Figure 3 illustrates how SmartCalendar consistently and incrementally improves performance from the STEP PL perspectives. We define the Tasks, Learning stimuli, Performance measure, and Experience as follows.
- Tasks: we define tasks T = {T_1, T_2}. T_1 is the arrangement of an ordinary event that does not temporally conflict with others. T_2 is the rescheduling of temporally conflicting events.
- Stimuli: we define learning stimuli S = {S_1, S_2, S_3}. S_1 and S_2 stand for the successful processing of tasks T_1 and T_2, respectively; S_3 refers to the user disagreeing with the system's suggestion.
- Experience: we define experience E as the knowledge base that stores data regarding past events and TIs, as well as meta-knowledge including the IPR profile union, historical TIs' dvalues, the system-user strategy matrix, and model parameters.
- Performance metric: we define the performance metric P as a set containing two metrics. The first is the weighted cross-entropy, which evaluates the distance between the system's prediction and the user's decision; the second is the strategy type acceptance rate, which assesses whether the system's strategy type is compatible with the user's.
In Section 5.2, we introduce methods for resolving TIs. In Section 5.3, we propose three learning components LC_1, LC_2, and LC_3 to refine knowledge regarding events and TIs and to update models. After processing T_1, S_1 triggers LC_1. After processing T_2, the system generates a suggestion to reschedule the conflicting events. If the user accepts the system's suggestion, S_2 triggers LC_1 and LC_2 sequentially. Otherwise, S_3, the inconsistency between the system's and the user's decisions, triggers LC_1, LC_2, and LC_3 sequentially.

5.2. Resolutions for Temporal Conflicts

5.2.1. Heuristics

The system provides recommendations based on the following principles.
Principle 1: A feasible strategy is reasonable and effective.
Reasonableness demands that a strategy satisfies events’ flexibility, makes as few changes as possible, and obeys the linearity of time: the starting point should precede the ending point. Effectiveness requires a strategy conducive to the TI’s settlement.
Given a TI_n comprising ε_i and ε_j (St_i ≤ St_j), Table 1 lists all reasonable strategy types {c_(l)^type}_{l=1}^{17} when both events are flexible.
The effectiveness of a strategy type is determined by comparing the conflict length with the adjustable length of the events in the specified directions. We first describe how to calculate the adjustable length of a single event in a specific direction. Let ε_{i,day} denote the set of events on the day of ε_i that do not conflict with ε_i. Denote by ε_{i,pre} ⊆ ε_{i,day} and ε_{i,next} ⊆ ε_{i,day} the set of events that end no later than St_i and the set of events that start no earlier than St_i + Dur_i, respectively. If ε_p is the event with the largest ending point in ε_{i,pre}, then we say ε_p is the previous event of ε_i, denoted as Previous(ε_p, ε_i). If ε_p is the event with the smallest starting point in ε_{i,next}, then ε_p is the next event of ε_i, denoted as Next(ε_p, ε_i). Denote by Dur_shorten, Dur_advance, and Dur_delay the maximum lengths by which an event can be shortened, started earlier, and ended later, respectively. The value of Dur_shorten is 0 if the event is rigid, and half the event's duration otherwise. The Dur_advance and Dur_delay of ε_i, denoted as Dur_i^advance and Dur_i^delay, are given by:
Dur_i^advance =
  0, if ε_i is rigid;
  St_i − St_p − Dur_p, if ε_p satisfies Previous(ε_p, ε_i);
  St_i − Time_st(α_i), otherwise,
Dur_i^delay =
  0, if ε_i is rigid;
  St_p − St_i − Dur_i, if ε_p satisfies Next(ε_p, ε_i);
  Time_end(α_i) − St_i − Dur_i, otherwise,
where Time_st(α_i) and Time_end(α_i) are the user-defined earliest starting point and latest ending point of the activity α_i, respectively.
For brevity, we use the q-th time point (q ∈ {1, 2, 3, 4}) to refer to the first event's starting point, the first event's ending point, the second event's starting point, and the second event's ending point in a TI, respectively. For the q-th time point in TI_n, its adjustable length under c_(l)^type includes two aspects: the length obtained by shortening or moving an event, Dur_{n,q,l}^single, and the length obtained by additionally shortening the duration after moving an event, Dur_{n,q,l}^add_short, which are defined to be:
Dur_{n,q,l}^single =
  0, if the q-th action type is hold or abandon;
  Dur_i^shorten, if q ∈ {1, 2} ∧ l ∈ {5, 6, 10, 12, 14, 16};
  Dur_i^advance, if q ∈ {1, 2} ∧ l ∈ {7, 11, 17};
  Dur_i^delay, if q ∈ {1, 2} ∧ l ∈ {8, 13, 15};
  Dur_j^shorten, if q ∈ {3, 4} ∧ l ∈ {1, 10, 11, 12, 13};
  Dur_j^advance, if q ∈ {3, 4} ∧ l ∈ {2, 14, 15};
  Dur_j^delay, if q ∈ {3, 4} ∧ l ∈ {3, 16, 17},
Dur_{n,q,l}^add_short =
  Dur_i^shorten, if q = 1 ∧ l ∈ {8, 13, 15};
  Dur_i^shorten, if q = 2 ∧ l ∈ {7, 11, 17};
  Dur_j^shorten, if q = 3 ∧ l ∈ {3, 16, 17};
  Dur_j^shorten, if q = 4 ∧ l ∈ {2, 14, 15};
  0, otherwise.
Dur_{n,l}^single, the adjustable length obtained by shortening or moving events in TI_n under c_(l)^type, is defined as:
Dur_{n,l}^single =
  0, if l ∈ {4, 9};
  Dur_{n,2,l}^single + Dur_{n,3,l}^single, if l ∈ {1, 3, 6, 7, 10, 11, 16, 17};
  Dur_{n,1,l}^single + Dur_{n,4,l}^single, if l ∈ {2, 5, 8, 12, 13, 14, 15}.
Dur_{n,l}^max_adjust, the maximum length by which the events in TI_n can be adjusted under c_(l)^type, is given by:
Dur_{n,l}^max_adjust =
  Dur_{n,2,l}^single + Dur_{n,2,l}^add_short + Dur_{n,3,l}^single + Dur_{n,3,l}^add_short, if l ∈ {1, 3, 4, 6, 7, 9, 10, 11, 16, 17};
  Dur_{n,1,l}^single + Dur_{n,1,l}^add_short + Dur_{n,4,l}^single + Dur_{n,4,l}^add_short, otherwise.
The minimum length required to resolve TI_n with c_(l)^type, denoted as Dur_{n,l}^min_require, is defined as:
Dur_{n,l}^min_require =
  0, if l ∈ {4, 9};
  St_i + Dur_i + Dur_ij − St_j, if l ∈ {1, 3, 6, 7, 10, 11, 16, 17};
  St_j + Dur_j + Dur_ji − St_i, if l ∈ {2, 5, 8, 12, 13, 14, 15}.
If Dur_{n,l}^max_adjust is no smaller than Dur_{n,l}^min_require, then we say c_(l)^type is effective in resolving TI_n, and the time length to be adjusted is assigned to the time points in proportion to their adjustable lengths. Algorithm 1 formalizes this idea.
Principle 2: Adjustments should be consistent with the user’s preference
The calendar’s personalized nature dictates that the user’s preference is the golden rule for addressing TIs. Assuming that the user’s preference for the attribute values and behavior pattern is constant, the user should handle similar situations the same way.
Algorithm 1: Allocate Time Length to a Strategy Type
Input: a conflict TI_n; a strategy type c_(l)^type (l ∈ {1, 2, …, 17})
Output: a binary effectiveness flag effect_flag; the time lengths for the current strategy type Dur_c_time
1. Calculate Dur_{n,l}^max_adjust and Dur_{n,l}^min_require
2. If Dur_{n,l}^max_adjust < Dur_{n,l}^min_require:
3.   effect_flag ← False, Dur_c_time ← [−1, −1, −1, −1]
4. Else:
5.   effect_flag ← True, Dur_c_time ← []
6.   For q = 1, 2, 3, 4 do:
7.     Dur_a_time ← Dur_{n,q,l}^single · min(Dur_{n,l}^min_require / Dur_{n,l}^single, 1) + (Dur_{n,q,l}^add_short / Σ_{w=1}^{4} Dur_{n,w,l}^add_short) · max(Dur_{n,l}^min_require − Dur_{n,l}^single, 0)
8.     Append Dur_a_time to Dur_c_time
9. Return effect_flag, Dur_c_time
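The allocation step of Algorithm 1 can be sketched in Python as follows, assuming the per-time-point lengths Dur_single and Dur_add_short and the required length have already been computed for the chosen strategy type; the function and variable names are hypothetical.

```python
def allocate_time(single, add_short, min_require):
    """Sketch of Algorithm 1: distribute the required adjustment over the four
    time points of a TI in proportion to their adjustable lengths.
    single[q], add_short[q] (q = 0..3): Dur_single and Dur_add_short per time point;
    min_require: Dur_min_require for the strategy type under consideration."""
    total_single = sum(single)
    total_add = sum(add_short)
    if total_single + total_add < min_require:
        return False, [-1, -1, -1, -1]       # the strategy type is not effective
    times = []
    for q in range(4):
        # Shorten/move first, in proportion to each time point's own length.
        t = single[q] * min(min_require / total_single, 1) if total_single else 0.0
        # Any remainder is covered by additionally shortening after moving.
        extra = max(min_require - total_single, 0)
        if total_add and extra:
            t += add_short[q] / total_add * extra
        times.append(t)
    return True, times
```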
In the following, we introduce three approaches to resolve TIs: RSIC, RSSC, and GSSD. After encountering a TI, the system tries these methods in turn until it obtains a feasible strategy, as sketched below.
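A minimal sketch of this dispatch logic, assuming each method is implemented as a callable that returns None when it cannot produce a feasible strategy for the given TI:

```python
def resolve_ti(ti, kb, methods):
    """Try the resolution methods in order (e.g., RSIC, RSSC, GSSD) until one
    returns a feasible strategy; `methods` is an ordered list of callables."""
    for method in methods:
        suggestion = method(ti, kb)
        if suggestion is not None:
            return suggestion
    return None   # no feasible strategy found; the user schedules manually
```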

5.2.2. Replicating Solutions of Identical Cases (RSIC)

Two events are the same if they have the same values on all attributes other than the starting point and ending point. Two TIs are the same if they have identical temporal classes and conflict lengths, and their events are correspondingly the same. Suppose TI_m consists of two events ε_k and ε_l (St_k ≤ St_l, m < n). If TI_m and TI_n are the same, then we say that TI_m occurs again, and the strategy adopted in TI_m should be followed according to Principle 2.

5.2.3. Referencing Solutions of Similar Cases (RSSC)

According to Principle 2, the system can solve a TI by referring to its similar TI, where the similarity between TIs depends on three aspects: (1) the importance difference between conflicting events, (2) the TCs, and (3) the feasibility of the target strategy type.
The importance difference between two events is calculated from two perspectives: the importance of individual decision attribute values and the order relations between them. For brevity, the values discussed below refer to decision attribute values. We first introduce how to derive a value's importance. Values are divided into two categories: common values and rare values. A value is a common value if it occurs no less than a specified number of times, i.e., the corresponding threshold in θ_rare = (θ_etype, θ_act, θ_host, θ_part, θ_loc, θ_perio); otherwise, it is a rare value. A common value's importance is evaluated by its frequency in important events, whereas a rare value's importance is estimated using the hidden correlation [46,47], as defined below.
Given two events, one has value value_i on attribute attr_1, and the other has value value_j on attribute attr_2. If the two events have an identical value value_k on one of the remaining attributes, then we say value_k is a co-involved value for value_i and value_j. The hidden correlation [46] between value_i and value_j, denoted as HC_ij, is defined as:
HC_ij = Σ_{k∈K} (v̄(i|k) − v̄(i)) (v̄(j|k) − v̄(j)) / ( sqrt(Σ_{k∈K} (v̄(i|k) − v̄(i))²) · sqrt(Σ_{k∈K} (v̄(j|k) − v̄(j))²) ),
where v̄(i) and v̄(j) are the frequencies at which events containing value_i and value_j are important, respectively; v̄(i|k) and v̄(j|k) indicate the frequencies at which events involving both value_i and value_k, and both value_j and value_k, are important, respectively; and K is the set of all co-involved values for value_i and value_j. Given a rare value, the common values under the same attribute are its alternative values. The extent to which an alternative value matches the event is assessed by the transition probability (TP) [46]:
TP_i = Σ_{j∈CV} HC_ij,
i* = arg max_i TP_i,  s.t. TP_i ≥ θ_trans,
where i indexes the alternative value value_i, CV is the set of common values in the event, and θ_trans is the threshold above which an alternative value can be selected. A rare value's importance is temporarily substituted by that of value_{i*} if such a value_{i*} is obtained by Eq. (18), and is estimated by its frequency in important events otherwise. For two conflicting events, we construct d^fre = (d_1^fre, d_2^fre, …, d_6^fre), where d_i^fre is the importance difference between their i-th values.
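For illustration, the hidden correlation and the transition-probability selection can be computed as in the following sketch; the step that derives the frequencies from the event log is assumed to be done elsewhere, and all names are hypothetical.

```python
import math

def hidden_correlation(vi_bar, vj_bar, vik_bar, vjk_bar):
    """Pearson-style hidden correlation HC_ij.
    vi_bar, vj_bar: importance frequencies of value_i and value_j;
    vik_bar[k], vjk_bar[k]: conditional frequencies for each co-involved value k."""
    num = sum((vik_bar[k] - vi_bar) * (vjk_bar[k] - vj_bar) for k in range(len(vik_bar)))
    den = math.sqrt(sum((v - vi_bar) ** 2 for v in vik_bar)) * \
          math.sqrt(sum((v - vj_bar) ** 2 for v in vjk_bar))
    return num / den if den else 0.0

def best_alternative(candidates, event_common_values, hc, theta_trans):
    """Pick the candidate (a common value under the rare value's attribute) with
    the highest transition probability TP_i = sum of HC_ij over the event's
    common values CV, provided TP reaches θ_trans; hc[(a, b)] stores HC values."""
    scores = {c: sum(hc.get((c, j), 0.0) for j in event_common_values) for c in candidates}
    if not scores:
        return None
    best = max(scores, key=scores.get)
    return best if scores[best] >= theta_trans else None
```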
Next, we describe how to establish order relations between values. An importance preference relation (IPR) is a special case of a strict partial order on a set of events, represented by the symbol ≻ (see Appendix A for the proof) [48,49,50]. Given an important event ε_a and a normal event ε_b, we denote by ε_a ≻ ε_b the fact that the user prefers ε_a to ε_b, which is equivalent to (ϵ_a, α_a, p_a^host, p_a^part, ℓ_a^name, ζ_a) ≻ (ϵ_b, α_b, p_b^host, p_b^part, ℓ_b^name, ζ_b).
Property 1.
Provided that the operations are meaningful for the IPR they are applied to, the IPR still holds if the same value is added to or subtracted from both sides of the relation.
According to Property 1, we introduce Filter(ε_a, ε_b) to filter out duplicate information by replacing each value in ε_a that is the same as in ε_b with "null". We use IPR^L and IPR^R to denote the left-hand and right-hand sides of an IPR. The union of the left-hand or right-hand sides of two IPRs, represented by the symbol ∪, is a tuple combining the elements at corresponding positions.
Theorem 1.
Given IPR_i and IPR_j, the strict partial order still holds for the union of IPR_i^L and IPR_j^L and the union of IPR_i^R and IPR_j^R, which is denoted as IPR_i ∪ IPR_j:
IPR_i^L ∪ IPR_j^L ≻ IPR_i^R ∪ IPR_j^R.
If the left-hand and right-hand sides correspond exactly, then we say IPR_i is equal to IPR_j, denoted as IPR_i = IPR_j. The original information of IPR_i, denoted as OI(IPR_i), takes the value null if there exist historical relations IPR_m, IPR_n, …, IPR_k whose union equals IPR_i, and takes the value of IPR_i itself otherwise.
We use IPR_i^L ∖ IPR_j^R to denote the relative complement, i.e., the tuple of elements that are in IPR_i^L but not in IPR_j^R.
Theorem 2.
The extra information inferred from IPR_i and IPR_j, denoted as EI(IPR_i, IPR_j), is defined to be:
EI(IPR_i, IPR_j) =
  IPR_i^L ∖ IPR_j^R ≻ IPR_i^R ∖ IPR_j^L, if subValue(IPR_j^L, IPR_i^R) ∧ subValue(IPR_j^R, IPR_i^L);
  null, otherwise,
where subValue(IPR_j^L, IPR_i^R) takes the value True if all non-null elements of IPR_j^L are also elements of IPR_i^R, and False otherwise.
For a tuple pair ⟨tp_1, tp_2⟩, IPR_i is a regular subIPR if it satisfies subValue(IPR_i^L, tp_1) ∧ subValue(IPR_i^R, tp_2); IPR_i is an inverse subIPR if it satisfies subValue(IPR_i^L, tp_2) ∧ subValue(IPR_i^R, tp_1).
IPR profile union. An IPR profile Γ_n is the transitive closure of all IPRs in OIPR_n and EIPR_n, where OIPR_n and EIPR_n are the sets of all original information and extra information derived from TI_{n−1} to TI_n, respectively [48]. An IPR profile union Ω_n is the transitive closure of the IPRs in the IPR profile Γ_n and the past IPRs in the IPR profile union Ω_{n−1}, which is defined to be:
Ω_n = Γ_n, if n = 1; Γ_n ∪ Ω_{n−1}, otherwise.
We quantify the order relation between ε i and ε j by the following procedure:
  • Get a tuple pair F i l t e r ( ε i , ε j ) , F i l t e r ( ε j , ε i ) by filtering out duplicate information.
  • Look for regular subIPRs and inverse subIPRs of F i l t e r ( ε i , ε j ) , F i l t e r ( ε j , ε i ) .
  • Construct d^subIPR = (d_1^subIPR, d_2^subIPR, …, d_6^subIPR) for each subIPR. For a subIPR IPR_k, d_i^subIPR is defined as:
d_i^subIPR =
  0, if the i-th element of IPR_k^L is null;
  U / N_{IPR_k}, if IPR_k is a regular subIPR;
  −U / N_{IPR_k}, if IPR_k is an inverse subIPR,
where U is a hyper-parameter and N_{IPR_k} is the number of non-null values in IPR_k^L.
  • Construct d_n^IPR = (d_1^IPR, d_2^IPR, …, d_6^IPR), where d_i^IPR is the largest value on the i-th element obtained from all subIPRs.
Dvalue. We use the dvalue d_n = d_n^fre + d_n^IPR to indicate the importance difference between the two events in TI_n. Let M_n = (d_1, d_2, …, d_{n−1}) denote the matrix whose columns are the dvalues of the past n−1 TIs. Given d_n and M_n, the coefficient vector X = (x_1, x_2, …, x_{n−1})^T corresponding to M_n is obtained through the following sparse decomposition (SD) process [1,51,52,53]:
min_X ‖X‖_1,  s.t.  M_n X = d_n,
where ‖·‖_1 is the L1 norm, which yields as few non-zero coefficients as possible.
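One way to carry out this L1-minimization is basis pursuit with a convex-optimization library; the paper does not prescribe a solver, so the cvxpy-based sketch below is only one possible choice.

```python
import numpy as np
import cvxpy as cp

def sparse_decompose(M, d):
    """Solve min ||x||_1 s.t. M x = d (basis pursuit).
    M: (6, n-1) matrix whose columns are past dvalues d_1..d_{n-1};
    d: the current dvalue d_n (length 6). In practice the equality constraint
    may be relaxed to a small tolerance if d is not exactly representable."""
    x = cp.Variable(M.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(x)), [M @ x == d])
    problem.solve()
    return np.asarray(x.value)   # sparse coefficients, possibly signed
```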
We then introduce how the system utilizes previous strategies. A historical strategy can be used directly or after being modified by Trans(·), a function that transforms a strategy type in the following steps: first, swap the action types of the two events; second, swap the action types of each event's starting point and ending point and reverse the adjustment directions. Two strategy types c_(i)^type and c_(j)^type are inverse strategy types if the following condition holds:
Trans(c_(i)^type) = c_(j)^type ∧ Trans(c_(j)^type) = c_(i)^type.
The set of all inverse strategy-type pairs is {(c_(1)^type, c_(6)^type), (c_(2)^type, c_(8)^type), (c_(3)^type, c_(7)^type), (c_(4)^type, c_(9)^type), (c_(10)^type, c_(10)^type), (c_(11)^type, c_(16)^type), (c_(12)^type, c_(12)^type), (c_(13)^type, c_(14)^type), (c_(15)^type, c_(17)^type)}.
We say TI_n and TI_m have inverse flags if ζ_i ≠ ζ_j and ζ_i, ζ_j are identical to ζ_l, ζ_k, respectively. We train two calibrated support vector machines (SVMs) [1,54,55], f_R and f_I, on {(x_{i,j}, y_{i,j}^regular)} and {(x_{i,j}, y_{i,j}^inverse)}, where i, j index TI_i, TI_j, i ≠ j. x_{i,j} is the training sample that contains the TCs, feasible strategy types, and flags of TI_i and TI_j. y_{i,j}^regular and y_{i,j}^inverse are true if the feasible strategy types, flags, and the user's strategy types of TI_i and TI_j are all the same or all inverse, respectively, and false otherwise.
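A probability-calibrated SVM of this kind can be obtained, for example, with scikit-learn; the sketch below assumes the caller has already encoded the TI-pair features into numeric vectors, and the parameter choices are illustrative.

```python
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV

def fit_calibrated_svm(X, y):
    """Fit a probability-calibrated SVM such as f_R or f_I.
    X: numeric feature vectors built from the TCs, feasible strategy types and
    flags of a TI pair; y: boolean labels (regular / inverse match)."""
    model = CalibratedClassifierCV(SVC(kernel="rbf"), method="sigmoid", cv=3)
    model.fit(X, y)
    # model.predict_proba(x)[:, 1] gives the score compared against θ_TC
    return model
```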
The regular TC and inverse TC between TI_m and TI_n, denoted as TC_regular(TI_m, TI_n) and TC_inverse(TI_m, TI_n), are given by:
TC_regular(TI_m, TI_n) ≝ [f_R(x_{m,n}) ≥ θ_TC ∧ TC_m == TC_n],
TC_inverse(TI_m, TI_n) ≝ [f_I(x_{m,n}) ≥ θ_TC],
where TC_m and TC_n are the temporal classes of TI_m and TI_n, respectively, and θ_TC is the threshold for the SVM models.
Let STI_rsc(TI_m, TI_n) and STI_isc(TI_m, TI_n) denote the regular similar case and the inverse similar case between TI_m and TI_n:
STI_rsc(TI_m, TI_n) ≝ [c_m^user is feasible ∧ |x_m| ≥ θ_SD ∧ TC_regular(TI_m, TI_n)],
STI_isc(TI_m, TI_n) ≝ [Trans(c_m^user) is feasible ∧ |x_m| ≥ θ_SD ∧ TC_inverse(TI_m, TI_n)].
Similar TIs between TI_m and TI_n, denoted as STI(TI_m, TI_n), are defined to be:
STI(TI_m, TI_n) ≝ [STI_rsc(TI_m, TI_n) ∨ STI_isc(TI_m, TI_n)].
We use STI to denote the set of all similar TIs of TI_n. Let x_m^l and o_n^l denote the scores on c_(l)^type obtained from TI_m and from all similar TIs of TI_n, respectively. x_m^l and o_n^l are given by:
x_m^l = |x_m|, if (STI_rsc(TI_m, TI_n) ∧ c_m^user == c_(l)^type) ∨ (STI_isc(TI_m, TI_n) ∧ Trans(c_m^user) == c_(l)^type); 0, otherwise,
o_n^l = Σ_{m∈STI} x_m^l.
ŷ_n^l, the probability that the system recommends c_(l)^type, is given by:
ŷ_n^l = exp(o_n^l) / Σ_{k=1}^{17} exp(o_n^k).
Let ŷ_n = (ŷ_n^1, ŷ_n^2, …, ŷ_n^17) denote the prediction probability distribution for TI_n, and let y_n = (y_n^1, y_n^2, …, y_n^17) denote the label probability distribution, where y_n^l is 1 if the user adopted c_(l)^type and 0 otherwise. The system suggests the strategy type with the highest prediction probability, returned by Z(ŷ_n):
Z(ŷ_n) = c_(l)^type, where l = arg max_k ŷ_n^k.
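The prediction step then reduces to a softmax over the 17 scores followed by an argmax, as in the following sketch (names are illustrative):

```python
import numpy as np

def recommend(scores):
    """Turn the per-strategy-type scores o_n^l (length 17) into the probability
    distribution ŷ_n via softmax and return the index of the suggested type."""
    o = np.asarray(scores, dtype=float)
    y_hat = np.exp(o - o.max())          # subtract the max for numerical stability
    y_hat /= y_hat.sum()
    return y_hat, int(np.argmax(y_hat))  # Z(ŷ_n) picks the most probable type
```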
Algorithm 2 describes how to find similar TIs and generate strategies based on KB(n, w_n), where KB(n, w_n) specifies the contents of the KB corresponding to TI_n and w_n = (θ_n^rare, θ_n^matrix, θ_n^trans, θ_n^SD, θ_n^TC) assigns the threshold values.
Algorithm 2: Resolve Temporal Inconsistencies by Similar Circumstances
Input: a conflict TI_n; the knowledge base KB(n, w_n)
Output: the system's suggestion C_sys
Initialization:
C_sys ← [[hold, −1], [hold, −1], [hold, −1], [hold, −1]]  # −1 indicates an invalid action
STI ← []  # indexes of all similar TIs
Dur_all_c_time ← []  # adjustment lengths for all possible strategy types
1. Get d_n^fre and d_n^IPR, d_n ← d_n^fre + d_n^IPR
2. Sparse decomposition: min_X ‖X‖_1, s.t. M_n X = d_n
3. For i = 1, 2, …, n−1 do:
4.   If |x_i| ≥ θ_SD and TC_regular(TI_i, TI_n):
5.     Get effect_flag and Dur_c_time of c_i^user  # by Algorithm 1
6.   Elif |x_i| ≥ θ_SD and TC_inverse(TI_i, TI_n):
7.     Get effect_flag and Dur_c_time of Trans(c_i^user)  # by Algorithm 1
8.   Else:
9.     effect_flag ← False
10.  If effect_flag is True:
11.    Update STI and Dur_all_c_time
12. If there exist similar TI(s) of TI_n:
13.   Generate the prediction probability distribution ŷ_n  # by Eq. (33)
14.   Choose a strategy type c_(l)^type  # by Eq. (34)
15.   Get the corresponding time lengths of c_(l)^type, generate the suggestion C_sys
16. Return C_sys

5.2.4. Generating Solutions with Strategy Distribution (GSSD)

The generic solution GSSD exploits the system-user strategy matrix M_ctype:
M_ctype = [n_{1,1} … n_{1,17}; … ; n_{17,1} … n_{17,17}] = (n_{i,j})_{17×17},
where n_{i,j} is the number of times the system suggested c_(i)^type and the user adopted c_(j)^type. The main diagonal elements indicate the number of times the system and the user have agreed, while the off-diagonal elements represent discrepancies. Let P_j = n_{j,j} / Σ_{k=1}^{17} n_{j,k} denote the recommendation probability that the system suggests c_(j)^type (P_j = 0 when n_{j,j} = 0). The matrix is populated gradually, and the information it carries is partial and incomplete while the number of TIs is less than the threshold θ_matrix. In this case, the system randomly chooses one of the feasible strategies. Otherwise, the system suggests the strategy with the highest recommendation probability. Algorithm 3 formalizes this process.
Algorithm 3: Generate a Suggestion with the Strategy Distribution
Input: a conflict TI_n; the system-user strategy matrix M_ctype; the threshold for M_ctype: θ_matrix
Output: the system's suggestion C_sys
Initialization:
C_poss ← []  # all possible strategy types for TI_n
Dur_all_c_time ← []  # adjustment lengths for all possible strategy types
1. For l = 1, 2, …, 17 do:  # find all feasible strategy types
2.   Get effect_flag and Dur_c_time of c_(l)^type  # by Algorithm 1
3.   If effect_flag is True:
4.     Update C_poss and Dur_all_c_time
5. If n ≥ θ_matrix:
6.   l ← arg max_j P_j, where P_j = n_{j,j} / Σ_{k=1}^{17} n_{j,k}, c_(j)^type ∈ C_poss (P_j = 0 when n_{j,j} = 0)
7. Else:
8.   Randomly choose a strategy type c_(l)^type ∈ C_poss
9. Get the corresponding time lengths of c_(l)^type, generate the system suggestion C_sys
10. Return C_sys
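A compact sketch of Algorithm 3 follows, assuming the feasibility check of Algorithm 1 has already produced the list of feasible strategy-type indices; the names and the 0-based indexing are illustrative.

```python
import random
import numpy as np

def gssd_suggest(strategy_matrix, feasible, n_tis, theta_matrix):
    """Pick a strategy type index using the system-user strategy matrix M_ctype.
    strategy_matrix: 17x17 array of counts n_ij; feasible: list of feasible
    strategy-type indices; n_tis: number of TIs observed so far."""
    if not feasible:
        return None
    if n_tis < theta_matrix:
        return random.choice(feasible)       # matrix still too sparse: pick randomly
    probs = {}
    for j in feasible:
        row_sum = strategy_matrix[j].sum()   # times c_(j) was suggested
        probs[j] = strategy_matrix[j, j] / row_sum if strategy_matrix[j, j] else 0.0
    return max(probs, key=probs.get)         # highest recommendation probability P_j
```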

5.3. Incremental Performance Improvement through Knowledge Refinement

As depicted in Figure 3, the system uses three learning components L C 1 , L C 2 and L C 3 to refine the existing knowledge, where:
- LC_1 is the component that records and revises event-related knowledge. When LC_1 is invoked, it first records the new event. After that, it derives original and extra information from the IPRs generated from the new event and historical events, and then updates the IPR profile union accordingly. Event accumulation and IPR profile union updates lead to a better estimation of a TI's dvalue. Hence, a subsequent update of the historical TIs' dvalues is in order, which provides a better basis for using RSSC to resolve TIs.
- LC_2 is the component that records and revises TI-related knowledge. When LC_2 is invoked, it records the new TI and maintains M_ctype by aggregating the system's and user's decisions over past TIs. As the number of historical TIs grows, the probability that the system can find the same or a similar TI for the current TI increases, and thus so does the probability that the system uses the RSIC and RSSC methods. The refinement of M_ctype helps the system gradually approach the user's decision preference in the general case, which in turn makes the strategies generated by GSSD more acceptable to the user.
- LC_3 is the component that optimizes the models. When LC_3 is triggered, it retrains the calibrated SVMs and updates the parameters w. These updates enable the system to determine the similarity of TIs more accurately, thereby improving the success rate of solving TIs by RSSC.
We next define how the system learns from strategy discrepancies. If the action types of the system's and the user's strategies are correspondingly the same, then we say they agree on the solution of TI_n, denoted as c_n^sys = c_n^user. Otherwise, we say there is a strategy discrepancy due to the incompatible action types, denoted as c_n^sys ≠ c_n^user. c_n^sys is obtained from ŷ_n, which in turn is generated by the system model f:
f: TI_n × KB(n, w_n) → ŷ_n.
The strategy discrepancy on TI_n is evaluated by the weighted cross-entropy WCE(y_n, ŷ_n):
WCE(y_n, ŷ_n) = −Σ_{j=1}^{17} ( y_n^j log ŷ_n^j + (1 − P_j)(1 − y_n^j) log(1 − ŷ_n^j) ) = WCE(y_n, f(TI_n, KB(n, w_n))).
Since the data corresponding to TI_n are deterministic, we address the knowledge deficiency from the parameter perspective. We define loss to measure the strategy discrepancies accumulated over the past n TIs with specific parameters w_k ∈ W (W is the set of all possible values of w):
loss(n, w_k) = (1/n) Σ_{i=1}^{n} WCE(y_i, f(TI_i, KB(i, w_k))).
A strategy discrepancy leads the system to update the parameters to the ones that yield the minimal loss:
w_n* = arg min_{w_i} loss(n, w_i).
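The loss computation and parameter selection can be sketched as follows; the replay function, which re-runs the system model on past TIs under candidate parameters, is a hypothetical stand-in for the system's internal machinery.

```python
import numpy as np

def weighted_cross_entropy(y, y_hat, p, eps=1e-12):
    """WCE(y_n, ŷ_n) with weights (1 - P_j) on the negative terms.
    y, y_hat, p are length-17 arrays; p holds the recommendation probabilities P_j."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.sum(y * np.log(y_hat) + (1 - p) * (1 - y) * np.log(1 - y_hat))

def best_parameters(candidates, replay):
    """Pick w* = argmin_w loss(n, w). `replay(w)` re-runs the model f on the
    past n TIs with parameters w and yields (y, ŷ, P) triples for each TI."""
    losses = {w: np.mean([weighted_cross_entropy(y, y_hat, p) for y, y_hat, p in replay(w)])
              for w in candidates}
    return min(losses, key=losses.get)
```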

6. Results

6.1. Experimental Setup and Metrics

To validate the effectiveness of our system, we carried out experiments on resolving TIs involving five participants in total (four males and one female). The number of events per user ranges from 3359 to 5110, and the number of TIs ranges from 489 to 853. Table 2 gives the participant information. See Appendix B for details of the experimental data.
We adopt a heuristic Base Algorithm (BA), SC-based methods, and a traditional ML model SVM@n as benchmarks: BA randomly suggests a strategy from the feasible ones. The SC-based methods include SC_V1 (deals with TIs by RSIC and GSSD), SC_V2 (additionally employs RSSC), and SC_V3 (the SmartCalendar system with complete components). SVM@n is an SVM model that starts from the 80th TI and is retrained after every 20 additional TIs.
We propose the strategy type acceptance rate (SA) to measure the consistency between system recommendations and user decisions, which is defined to be:
SA = (1/MA) Σ_{t=0}^{MA} I(c_{n−t}^sys = c_{n−t}^user),
where I(·) is an indicator function that takes the value 1 if the statement is true and 0 otherwise. We smooth out short-term fluctuations and emphasize longer-term trends through a moving average with a window size of 200. A moving average is a series of averages of fixed-size subsets of the complete dataset.
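For reference, the moving-average SA used below can be computed as in this sketch (window size 200 as in the experiments; the names are illustrative):

```python
import numpy as np

def strategy_acceptance(sys_types, user_types, window=200):
    """Moving-average strategy type acceptance rate (SA).
    sys_types / user_types: sequences of strategy types suggested and adopted."""
    hits = np.array([s == u for s, u in zip(sys_types, user_types)], dtype=float)
    if len(hits) < window:
        return hits.cumsum() / (np.arange(len(hits)) + 1)   # running mean fallback
    kernel = np.ones(window) / window
    return np.convolve(hits, kernel, mode="valid")
```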

6.2. Experimental Results

First, we compare the distance between the SC-based methods' strategies and the user's strategies in terms of WCE. Figure 4 illustrates that the WCE of SC_V1 shows a steady decline. SC_V2 follows a similar trend, but its result is slightly lower than that of SC_V1, which validates the effectiveness of the RSSC module. Meanwhile, SC_V3 exhibits a significant fall. SC_V3's lead over SC_V2 verifies the validity of the knowledge refinements. Refer to Appendix B for a discussion of convergence.
Next, we evaluate the adaptability to user preferences of our approach, BA, and SVM@n in terms of SA. Figure 5 shows that BA and SVM@n remain at a low level and do not show an ascending trend in SA as data increase, which indicates that neither the heuristic nor the traditional ML algorithm can be directly applied to the calendar scenario. In contrast, the SC-based methods steadily increase in SA. The SA of SC_V2 is higher than that of SC_V1, indicating that the system obtains more accurate recommendations using RSSC. SC_V3 increases sharply and finally leads SC_V2 by 26.5%, 9%, 15%, 10%, and 23% for the five users, proving that the system gains greater adaptability to the dynamic and complex environment than the other approaches.
The system incrementally improves its performance through continuous knowledge refinement. Knowledge refinements can be classified as data accumulation and meta-knowledge updates. Since the occurrence of data is beyond the system’s control, we focus on how the system achieves incremental performance improvement through meta-knowledge refinement.
We present how the meta-knowledge is updated over time from the perspective of IPRs and parameters. As shown in Figure 6, the number of IPRs experiences a rapid surge in the first 20% of the data and then continues with a gradual increase. This is because, as data accumulate, the probability that an incoming event has occurred before increases; accordingly, the average amount of original information and extra information it can produce gradually decreases.
Figure 7 delineates the update of w over time for each user. The y-axis represents all sets of values of w, and the star markers highlight the moments when w changes. For each user, the parameters were updated multiple times and ended up with different values. This result shows that no single model works for every user at every stage, demonstrating the need for continuous learning.
We assess the performance of RSIC, RSSC, and GSSD in SC_V3 in terms of the percentage of successfully solved TIs. Table 3 shows that for each 10% increase in data, RSSC obtains the highest improvement rate, followed by RSIC, with GSSD the lowest. The growth of RSIC indicates that data accumulation provides a good basis for the system to refer to history. In addition, the refinement of meta-knowledge further improves the performance of RSSC and GSSD, which is why SC_V3 outperforms SC_V2 in Figure 4 and Figure 5.
Figure 8 shows that the gap between RSSC and GSSD continues to narrow as data increase, and RSSC overtakes GSSD for Users 2, 3, and 5. The fact that the system prefers RSSC over GSSD indicates that the system gradually approaches the user's decision preferences through continuous knowledge refinement, and that RSSC plays an increasingly important role in resolving TIs. In summary, continuous knowledge refinement is indispensable and effective for a PeLA to acquire self-improvement capability.

6.3. Discussions

STEP PL emphasizes incremental task performance improvement through sequential stimuli-driven learning episodes. In contrast to STEP PL, learning processes in LL and NEL are neither triggered by stimuli nor oriented toward incremental performance improvement, which makes these two paradigms inapplicable for developing an intelligent calendar system that can gradually adapt to the user’s preference.

7. Conclusions and Future Work

In contrast to the one-off metaphor, STEP PL emphasizes accomplishing incremental task performance improvement through continuous knowledge refinement. In this work, we investigated the problems that a long-term event scheduling system encounters when resolving TIs and proposed a system, SmartCalendar, based on STEP PL. First, we formally defined events, complete temporal classes, and strategies. Then, we theoretically modeled the SmartCalendar system to detect, handle, and learn from TIs, enabling it to consistently improve its problem-solving performance. Finally, we conducted experiments to validate our approach and demonstrated that it outperforms the comparison algorithms in terms of strategy type acceptance rate and self-improvement ability. The collected dataset and prototype system are open source on GitHub.
Future work can be pursued in the following directions:
  • Expand the dataset. The current dataset contains one year of events and TIs for five users. In future work, we plan to expand the dataset in terms of the number of users and the time span to validate the system’s adaptability to a more complex environment.
  • Classify learning stimuli. Learning stimuli play an essential role in a PeLA’s performance improvement. In future work, we plan to classify learning stimuli from the knowledge perspective, which provides guidelines for people to develop PeLAs in other fields.

Author Contributions

Data curation, X.S., H.Q.; Writing— original draft, J.T.; Writing—review and editing, D.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Macau Science and Technology Foundation under Grant 0025/2019/AKP.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset is available at https://github.com/hensontang9/Temporal_conflicts, (accessed on 14 September 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Proof of Strict Partial Order

For all ε_i, ε_j, ε_k ∈ ε, the binary relation ≻ on a set ε satisfies the following conditions:
(Irreflexivity): ε_i cannot be more important than itself.
(Transitivity): if the user prefers ε_i to ε_j and prefers ε_j to ε_k, then ε_i is preferred to ε_k.
(Asymmetry): if the user prefers ε_i to ε_j, then ε_j cannot be preferred to ε_i.
Therefore, the binary relation ≻ on a set ε is a strict partial order. □

Appendix A.2. Proof of Theorem 1

Given IPR_i and IPR_j, the relation still holds after adding IPR_j^L to both sides of IPR_i according to Property 1, so we get
IPR_i^L ∪ IPR_j^L ≻ IPR_i^R ∪ IPR_j^L;
similarly, we have
IPR_j^L ∪ IPR_i^R ≻ IPR_j^R ∪ IPR_i^R.
Hence, we obtain the following by transitivity:
IPR_i^L ∪ IPR_j^L ≻ IPR_i^R ∪ IPR_j^L ≻ IPR_i^R ∪ IPR_j^R.
Q.E.D. □

Appendix A.3. Proof of Theorem 2

Given IPR_i and IPR_j that satisfy subValue(IPR_j^L, IPR_i^R) and subValue(IPR_j^R, IPR_i^L), by Property 1 we get
IPR_j^L ∪ (IPR_i^R ∖ IPR_j^L) ≻ IPR_j^R ∪ (IPR_i^R ∖ IPR_j^L),
which equals
IPR_i^R ≻ IPR_j^R ∪ (IPR_i^R ∖ IPR_j^L);
with transitivity, we get
IPR_i^L ≻ IPR_i^R ≻ IPR_j^R ∪ (IPR_i^R ∖ IPR_j^L);
since, by assumption, we have
subValue(IPR_j^R, IPR_i^L),
by Property 1 we can subtract IPR_j^R from both sides and obtain
IPR_i^L ∖ IPR_j^R ≻ IPR_i^R ∖ IPR_j^L.
Q.E.D. □

Appendix B

Collecting real calendar data is difficult and time-consuming: either users record their events day by day, in which case the time span of the required experimental data dictates how long collection takes, or users recall historical events, which is nearly impossible because people forget. We therefore used an indirect collection method: first, users input their non-duplicate daily events; next, the system generates a 365-day temporal sequence in which each event is scheduled on specific days according to its periodicity; finally, users enter their solutions to the temporally conflicting events.
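The following is a minimal sketch of this indirect generation step, written in Python. It assumes each event carries a period in days and a fixed daily time slot, and it only flags pairwise interval overlaps; the actual system detects TIs with the full set of interval temporal relations, and all names and values below are illustrative rather than taken from the released dataset or prototype.
```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    period_days: int   # e.g., 1 = daily, 7 = weekly
    start_min: int     # start time, minutes from midnight
    end_min: int       # end time, minutes from midnight

def build_year_schedule(events, days=365):
    """Place each periodic event on the days implied by its period."""
    schedule = {d: [] for d in range(days)}
    for ev in events:
        for d in range(0, days, ev.period_days):
            schedule[d].append(ev)
    return schedule

def overlapping_pairs(day_events):
    """Return pairs of same-day events whose time intervals intersect."""
    pairs = []
    for i in range(len(day_events)):
        for j in range(i + 1, len(day_events)):
            a, b = day_events[i], day_events[j]
            if a.start_min < b.end_min and b.start_min < a.end_min:
                pairs.append((a, b))
    return pairs

# Illustrative events: gym every 2 days, a seminar every 14 days.
events = [Event("gym", 2, 18 * 60, 19 * 60),
          Event("seminar", 14, 18 * 60 + 30, 20 * 60)]
schedule = build_year_schedule(events)
conflicts = [(d, p) for d, evs in schedule.items() for p in overlapping_pairs(evs)]
print(f"{len(conflicts)} temporally conflicting pairs to present to the user")
```
Conflicting pairs produced this way correspond to the TIs the user is asked to resolve.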
The experimental results show that the system does not converge on the WCE metric, which relates to two factors: meta-knowledge and data. Figure 7 reveals that 81.7% of the model parameter updates occur on the first 400 TIs, because fewer strategy discrepancies arise afterward and because the system has become more aware of user preferences, so the current parameters are already the best ones for the given dataset. Nevertheless, Figure 5 and Figure 8 show that the system’s performance grows steadily. Therefore, data insufficiency is the main reason the system does not converge. Increasing the data volume, however, introduces a new problem: the longer the time span of the dataset, the higher the probability that the user’s preferences will change. We therefore chose a time span of one year to balance between a dataset that is too small and one that invites preference changes. Assuming the user’s preferences remain unchanged, the system’s WCE on additional data is expected to keep its downward trend and eventually converge.
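For reference, the WCE values discussed above are weighted cross-entropy scores computed over the system’s predicted strategy-type distributions. The snippet below is a generic sketch of such a metric, assuming per-type weights and one user-chosen strategy type per TI; the paper’s exact weighting scheme is not reproduced here, and the numbers in the usage example are made up.
```python
import numpy as np

def weighted_cross_entropy(probs, labels, weights, eps=1e-12):
    """Generic weighted cross-entropy over predicted strategy-type distributions.

    probs:   (n, k) array of predicted probabilities for k strategy types
    labels:  (n,)   array of indices of the strategy type the user actually chose
    weights: (k,)   per-type weights (assumed; not the paper's exact scheme)
    """
    picked = probs[np.arange(len(labels)), labels]      # probability of the chosen type
    return float(np.mean(weights[labels] * -np.log(picked + eps)))

# Toy usage with three strategy types and two TIs.
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
y = np.array([0, 2])
w = np.array([1.0, 1.0, 2.0])
print(weighted_cross_entropy(p, y, w))
```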

References

  1. Zhou, Z.-H. Machine Learning; Springer Nature: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  2. Zhang, D. From One-off Machine Learning to Perpetual Learning: A STEP Perspective. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 17–23. [Google Scholar]
  3. Jordan, M.I.; Mitchell, T.M. Machine Learning: Trends, Perspectives, and Prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
  4. Buckler, B. A Learning Process Model to Achieve Continuous Improvement and Innovation; The Learning Organization: Bingley, UK, 1996. [Google Scholar]
  5. Mitchell, T.; Cohen, W.; Hruschka, E.; Talukdar, P.; Yang, B.; Betteridge, J.; Carlson, A.; Dalvi, B.; Gardner, M.; Kisiel, B.; et al. Never-Ending Learning. Commun. ACM 2018, 61, 103–115. [Google Scholar] [CrossRef] [Green Version]
  6. Chen, Z.; Liu, B. Lifelong Machine Learning. Synth. Lect. Artif. Intell. Mach. Learn. 2018, 12, 1–207. [Google Scholar]
  7. Silver, D.L.; Yang, Q.; Li, L. Lifelong Machine Learning Systems: Beyond Learning Algorithms. In Proceedings of the 2013 AAAI Spring Symposium Series, Palo Alto, CA, USA, 25–27 March 2013. [Google Scholar]
  8. Chen, Z.; Liu, B. Topic Modeling Using Topics from Many Domains, Lifelong Learning and Big Data. In Proceedings of the International Conference on Machine Learning, PMLR, Beijing, China, 22–24 June 2014; pp. 703–711. [Google Scholar]
  9. Mitchell, T.; Fredkin, E. Never-Ending Language Learning. In Proceedings of the 2014 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 27–30 October 2014; p. 1. [Google Scholar]
  10. Zhang, D.; Gregoire, E. Learning through Overcoming Inconsistencies. In Proceedings of the 2016 27th International Workshop on Database and Expert Systems Applications (DEXA), Porto, Portugal, 5–8 September 2016; pp. 121–128. [Google Scholar]
  11. Zhang, D. Learning through Overcoming Temporal Inconsistencies. In Proceedings of the 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC), Beijing, China, 6–8 July 2015; pp. 141–148. [Google Scholar]
  12. Zhang, D. Learning through Explaining Observed Inconsistencies. In Proceedings of the 2014 IEEE 13th International Conference on Cognitive Informatics and Cognitive Computing, London, UK, 18–20 August 2014; pp. 133–139. [Google Scholar]
  13. Zhang, D. Learning through Overcoming Incompatible and Anti-Subsumption Inconsistencies. In Proceedings of the 2013 IEEE 12th International Conference on Cognitive Informatics and Cognitive Computing, New York, NY, USA, 16–18 July 2013; pp. 137–142. [Google Scholar]
  14. Warren, T. Microsoft Is Merging Its Outlook and Sunrise Apps. Available online: https://www.theverge.com/2015/10/28/9627014/microsoft-ios-android-apps-sunrise-outlook-combined (accessed on 23 August 2022).
  15. David, K. What I Learned About Productivity While Reinventing Google Calendar. Available online: https://observer.com/2017/01/what-i-learned-about-productivity-while-reinventing-google-calendar/ (accessed on 23 August 2022).
  16. Darrell, E. Google Acquires Timeful To Bring Smart Scheduling To Google Apps. Available online: https://social.techcrunch.com/2015/05/04/google-acquires-timeful-to-bring-smart-scheduling-to-google-apps/ (accessed on 23 August 2022).
  17. Zhang, D. On Temporal Properties of Knowledge Base Inconsistency. In Transactions on Computational Science V; Gavrilova, M.L., Tan, C.J.K., Wang, Y., Chan, K.C.C., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5540, pp. 20–37. ISBN 978-3-642-02096-4. [Google Scholar]
  18. Roorda, M.J.; Doherty, S.T.; Miller, E.J. Operationalising Household Activity Scheduling Models: Addressing Assumptions and the Use of New Sources of Behavioural Data. In Integrated Land-Use and Transportation Models; Emerald Group Publishing Limited: Bingley, UK, 2005. [Google Scholar]
  19. Auld, J.; Mohammadian, A.K. Activity Planning Processes in the Agent-Based Dynamic Activity Planning and Travel Scheduling (ADAPTS) Model. Transp. Res. Part A Policy Pract. 2012, 46, 1386–1403. [Google Scholar] [CrossRef]
  20. Auld, J.; Mohammadian, A.K.; Doherty, S.T. Modeling Activity Conflict Resolution Strategies Using Scheduling Process Data. Transp. Res. Part A Policy Pract. 2009, 43, 386–400. [Google Scholar] [CrossRef]
  21. Doherty, S.T.; Nemeth, E.; Roorda, M.; Miller, E.J. Computerized Household Activity-Scheduling Survey for Toronto, Canada, Area: Design and Assessment. Transp. Res. Rec. 2004, 1894, 140–149. [Google Scholar] [CrossRef]
  22. Javanmardi, M.; Fasihozaman Langerudi, M.; Shabanpour, R.; Mohammadian, A. An Optimization Approach to Resolve Activity Scheduling Conflicts in ADAPTS Activity-Based Model. Transportation 2016, 43, 1023–1039. [Google Scholar] [CrossRef]
  23. One in Ten Rule. Available online: https://en.wikipedia.org/w/index.php?title=One_in_ten_rule&oldid=1095296451 (accessed on 29 August 2022).
  24. Konur, S. A Survey on Temporal Logics. arXiv 2010, arXiv:1005.3199. [Google Scholar]
  25. Allen, J.F.; Ferguson, G. Actions and Events in Interval Temporal Logic. J. Log. Comput. 1994, 4, 531–579. [Google Scholar] [CrossRef] [Green Version]
  26. Allen, J.F. Towards a General Theory of Action and Time. Artif. Intell. 1984, 23, 123–154. [Google Scholar] [CrossRef]
  27. Lutz, C.; Wolter, F.; Zakharyaschev, M. Temporal Description Logics: A Survey. In Proceedings of the 2008 15th International Symposium on Temporal Representation and Reasoning, Montreal, QC, Canada, 16–18 June 2008; pp. 3–14. [Google Scholar]
  28. Schiewe, P.; Schöbel, A. Periodic Timetabling with Integrated Routing: Toward Applicable Approaches. Transp. Sci. 2020, 54, 1714–1731. [Google Scholar] [CrossRef]
  29. Borndörfer, R.; Lindner, N.; Roth, S. A Concurrent Approach to the Periodic Event Scheduling Problem. J. Rail Transp. Plan. Manag. 2020, 15, 100175. [Google Scholar] [CrossRef]
  30. Liebchen, C.; Möhring, R.H. The Modeling Power of the Periodic Event Scheduling Problem: Railway Timetables—And Beyond. In Algorithmic Methods for Railway Optimization; Springer: Berlin/Heidelberg, Germany, 2007; pp. 3–40. [Google Scholar]
  31. Liebchen, C.; Möhring, R.H. A Case Study in Periodic Timetabling. Electron. Notes Theor. Comput. Sci. 2002, 66, 18–31. [Google Scholar] [CrossRef] [Green Version]
  32. Srirama, S.N.; Flores, H.; Paniagua, C. Zompopo: Mobile Calendar Prediction Based on Human Activities Recognition Using the Accelerometer and Cloud Services. In Proceedings of the 2011 Fifth International Conference on Next Generation Mobile Applications, Services and Technologies, Cardiff, UK, 14–16 September 2011; pp. 63–69. [Google Scholar]
  33. Gkekas, G.; Kyrikou, A.; Ioannidis, N. A Smart Calendar Application for Mobile Environments. In Proceedings of the 3rd International Conference on Mobile Multimedia Communications, Nafpaktos, Greece, 27–29 August 2007; pp. 1–5. [Google Scholar]
  34. Paniagua, C.; Flores, H.; Srirama, S.N. Mobile Sensor Data Classification for Human Activity Recognition Using MapReduce on Cloud. Procedia Comput. Sci. 2012, 10, 585–592. [Google Scholar] [CrossRef] [Green Version]
  35. About Us | Calendly. Available online: https://calendly.com/about (accessed on 23 August 2022).
  36. Halang, C.; Schirmer, M. Towards a User-Centred Planning Algorithm for Automated Scheduling in Mobile Calendar Systems. INFORMATIK 2012. 2012. Available online: https://www.researchgate.net/publication/235970541_Towards_a_User-centred_Planning_Algorithm_for_Automated_Scheduling_in_Mobile_Calendar_Systems (accessed on 29 August 2022).
  37. Zaidi, A.K.; Wagenhals, L.W. Planning Temporal Events Using Point–Interval Logic. Math. Comput. Model. 2006, 43, 1229–1253. [Google Scholar] [CrossRef]
  38. Dvorák, F.; Barták, R.; Bit-Monnot, A.; Ingrand, F.; Ghallab, M. Planning and Acting with Temporal and Hierarchical Decomposition Models. In Proceedings of the 2014 IEEE 26th International Conference on Tools with Artificial Intelligence, Limassol, Cyprus, 10–12 November 2014; pp. 115–121. [Google Scholar]
  39. Chen, Z.; Ma, N.; Liu, B. Lifelong Learning for Sentiment Classification. arXiv 2018, arXiv:1801.02808. [Google Scholar]
  40. Liu, B. Lifelong Machine Learning: A Paradigm for Continuous Learning. Front. Comput. Sci. 2017, 11, 359–361. [Google Scholar] [CrossRef]
  41. Chen, Z.; Hruschka, E.R.; Liu, B. Lifelong Machine Learning and Computer Reading the Web. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 2117–2118. [Google Scholar]
  42. Carlson, A.; Betteridge, J.; Kisiel, B.; Settles, B.; Hruschka, E.R.; Mitchell, T.M. Toward an Architecture for Never-Ending Language Learning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010. [Google Scholar]
  43. Betteridge, J.; Carlson, A.; Hong, S.A.; Hruschka Jr, E.R.; Law, E.L.; Mitchell, T.M.; Wang, S.H. Toward Never Ending Language Learning. In Proceedings of the AAAI Spring Symposium: Learning by Reading and Learning To Read, Stanford, CA, USA, 23–25 March 2009; pp. 1–2. [Google Scholar]
  44. Venema, Y. Temporal Logic. Blackwell Guide Philos. Log. 2017, 203–223. [Google Scholar] [CrossRef]
  45. Brunello, A.; Sciavicco, G.; Stan, I.E. Interval Temporal Logic Decision Tree Learning. In Proceedings of the European Conference on Logics in Artificial Intelligence, Rende, Italy, 7–11 May 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 778–793. [Google Scholar]
  46. Mu, Y.; Xiao, N.; Tang, R.; Luo, L.; Yin, X. An Efficient Similarity Measure for Collaborative Filtering. Procedia Comput. Sci. 2019, 147, 416–421. [Google Scholar] [CrossRef]
  47. Anand, D.; Bharadwaj, K.K. Utilizing Various Sparsity Measures for Enhancing Accuracy of Collaborative Recommender Systems Based on Local and Global Similarities. Expert Syst. Appl. 2011, 38, 5101–5109. [Google Scholar] [CrossRef]
  48. Pereira, F.S.F.; Gama, J.; de Amo, S.; Oliveira, G.M.B. On Analyzing User Preference Dynamics with Temporal Social Networks. Mach. Learn. 2018, 107, 1745–1773. [Google Scholar] [CrossRef] [Green Version]
  49. Jagerman, R.; Markov, I.; de Rijke, M. When People Change Their Mind: Off-Policy Evaluation in Non-Stationary Recommendation Environments. In Proceedings of the Twelfth ACM International Conference on web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 447–455. [Google Scholar]
  50. Li, C.; De Rijke, M. Cascading Non-Stationary Bandits: Online Learning to Rank in the Non-Stationary Cascade Model. arXiv 2019, arXiv:1905.12370. [Google Scholar]
  51. Cheng, H.; Liu, Z.; Hou, L.; Yang, J. Sparsity-Induced Similarity Measure and Its Applications. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 613–626. [Google Scholar] [CrossRef]
  52. Ma, H.; Gou, J.; Wang, X.; Ke, J.; Zeng, S. Sparse Coefficient-Based k-Nearest Neighbor Classification. IEEE Access 2017, 5, 16618–16634. [Google Scholar] [CrossRef]
  53. Cherian, A. Nearest Neighbors Using Compact Sparse Codes. In Proceedings of the International Conference on Machine Learning, PMLR, Beijing, China, 22–24 June 2014; pp. 1053–1061. [Google Scholar]
  54. Pisner, D.A.; Schnyer, D.M. Support Vector Machine. In Machine Learning; Elsevier: Amsterdam, The Netherlands, 2020; pp. 101–121. [Google Scholar]
  55. Suthaharan, S. Support Vector Machine. In Machine Learning models and Algorithms for Big Data Classification; Springer: Berlin/Heidelberg, Germany, 2016; pp. 207–235. [Google Scholar]
Figure 1. Interval temporal relations for events ε_i and ε_j [17].
Figure 2. STEP PL framework.
Figure 3. System framework.
Figure 4. Weighted cross-entropy for five users.
Figure 5. Strategy type acceptance rate for five users.
Figure 6. IPR updates for five users.
Figure 7. Parameter updates for five users.
Figure 8. Percentage of TIs successfully solved by RSIC, RSSC, and GSSD for five users.
Table 1. All reasonable strategy types when both events are flexible. Rows list the optional pairs of action types for ε_j; columns list the optional pairs of action types for ε_i (hold-hold, postpone-hold, hold-advance, advance-advance, postpone-postpone, abandon-abandon). Each cell is the corresponding strategy type c(k)_type:
hold-hold: c(5)_type, c(6)_type, c(7)_type, c(8)_type, c(9)_type
postpone-hold: c(1)_type, c(10)_type, c(11)_type
hold-advance: c(12)_type, c(13)_type
advance-advance: c(2)_type, c(14)_type, c(15)_type
postpone-postpone: c(3)_type, c(16)_type, c(17)_type
abandon-abandon: c(4)_type
Table 2. Participant information.
User  Age  Sex  Profession       Educational Background
1     28   M    PhD student      Postgraduate
2     29   M    IT engineer      Bachelor
3     33   M    SIPI engineer    Bachelor
4     28   M    PhD student      Postgraduate
5     20   F    College student  High school
Table 3. The increase in the percentage of TIs successfully solved by RSIC, RSSC, and GSSD for each 10% increase in data.
User  RSIC     RSSC     GSSD
1     0.00670  0.01842  0.01060
2     0.01228  0.01340  0.00167
3     0.00949  0.01060  0.00725
4     0.00614  0.00670  0.00446
5     0.00223  0.01507  0.00111
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
