5.2.1. Heuristics
The system provides recommendations based on the following principles.
Principle 1: A feasible strategy is reasonable and effective. Reasonableness demands that a strategy satisfy events' flexibility, make as few changes as possible, and obey the linearity of time: the starting point should precede the ending point. Effectiveness requires that a strategy actually contribute to the TI's settlement.
Given a TI $c$ that comprises two conflicting events $e_1$ and $e_2$ ($e_1$ starts no later than $e_2$), Table 1 lists all reasonable strategy types when both events are flexible.
The effectiveness of a strategy type is determined by comparing the conflict length with the adjustable length of events in the specified directions. We first describe how to calculate the adjustable length of a single event in a specific direction. Let $E$ denote the set of events that are on the day of $e$ and that do not conflict with $e$. Denote by $E^{pre}$ and $E^{next}$ the set of events in $E$ that end no later than $e$'s starting point $e.s$, and the set of events in $E$ that start no earlier than $e$'s ending point $e.f$, respectively. If $e'$ is the event with the largest ending point in $E^{pre}$, then we say $e'$ is the previous event of $e$, denoted as $pre(e)$. If $e''$ is the event with the smallest starting point in $E^{next}$, then $e''$ is the next event of $e$, denoted as $next(e)$. Denote by $l^{sh}(e)$, $l^{se}(e)$, and $l^{el}(e)$ the maximum length by which an event can be shortened, started earlier, and ended later, respectively. The value of $l^{sh}(e)$ is 0 if the event is rigid, and half the event's duration otherwise. The $l^{se}(e)$ and $l^{el}(e)$ of $e$ are given by:

$l^{se}(e) = \max(0,\; e.s - \max\{pre(e).f,\; t^{es}\})$, $\quad l^{el}(e) = \max(0,\; \min\{next(e).s,\; t^{le}\} - e.f)$,

where $t^{es}$ and $t^{le}$ are the user-defined earliest starting point and latest ending point of the event's type, respectively.
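A minimal Python sketch of these per-event bounds follows; the field names (`start`, `end`, `rigid`, `earliest_start`, `latest_end`) and the clamping to zero are illustrative assumptions, not the paper's notation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    start: float           # starting point, e.g., minutes from midnight
    end: float             # ending point
    rigid: bool            # rigid events cannot be shortened
    earliest_start: float  # user-defined earliest start for this event type
    latest_end: float      # user-defined latest end for this event type

def shortenable_length(e: Event) -> float:
    """l^sh: 0 for rigid events, half the duration otherwise."""
    return 0.0 if e.rigid else (e.end - e.start) / 2

def start_earlier_length(e: Event, prev_end: float) -> float:
    """l^se: bounded by the previous event's end and the earliest start."""
    return max(0.0, e.start - max(prev_end, e.earliest_start))

def end_later_length(e: Event, next_start: float) -> float:
    """l^el: bounded by the next event's start and the latest end."""
    return max(0.0, min(next_start, e.latest_end) - e.end)
```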
For brevity, we use the $k$-th time point ($k = 1, 2, 3, 4$) to refer to the first event's starting point and ending point, and the second event's starting point and ending point in a TI, respectively. For the $k$-th time point in $c$, its adjustable length under $st$ includes two aspects: the length $al_k$ by which an event can be shortened or moved, and the length $al'_k$ by which an event's duration can additionally be shortened after it is moved, where $al_k$ and $al'_k$ are defined to be:
$AL(c, st)$, the adjustable length that shortens or moves events in $c$ under $st$, is defined as $AL(c, st) = \sum_{k=1}^{4} al_k$. $AL_{max}(c, st)$, the maximum length by which events in $c$ can be adjusted under $st$, is given by $AL_{max}(c, st) = \sum_{k=1}^{4} (al_k + al'_k)$. The minimum length to resolve $c$ with $st$, denoted as $L_{min}(c, st)$, is defined as:
If $AL_{max}(c, st)$ is no smaller than $L_{min}(c, st)$, then we say $st$ is effective in resolving $c$, and the time length to be adjusted is assigned to the time points in proportion to their adjustable lengths. Algorithm 1 formalizes this idea.
Principle 2: Adjustments should be consistent with the user's preference. The calendar's personalized nature dictates that the user's preference is the golden rule for addressing TIs. Assuming that the user's preference for attribute values and behavior patterns is constant, the user should handle similar situations in the same way.
Algorithm 1: Allocate Time Length to a Strategy Type
Input: a TI $c$; a strategy type $st$; the adjustable lengths $al_k$ and $al'_k$ of each time point
Output: a binary flag of effectiveness: effect_flag; time length for the current strategy type: $len$
1. Compute $AL(c, st)$, $AL_{max}(c, st)$, and $L_{min}(c, st)$
2. If $AL_{max}(c, st) < L_{min}(c, st)$:
3.     effect_flag ← False, $len$ ← invalid
4. Else:
5.     effect_flag ← True
6.     For each time point $k$ in $c$ do:
7.         Assign the shortening/moving length of $k$ in proportion to $al_k$
8.         Assign any remaining length of $k$ in proportion to $al'_k$
9. Return effect_flag, $len$
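A compact sketch of this proportional allocation, assuming the per-time-point adjustable lengths are given as plain lists (`al` for shortening/moving, `al_extra` for the additional shortening after a move):

```python
def allocate_time_length(l_min, al, al_extra):
    """Distribute l_min across time points in proportion to their
    adjustable lengths; returns (effect_flag, per-point lengths).
    A sketch of Algorithm 1's idea, not its exact bookkeeping."""
    al_total = sum(al)
    al_max = al_total + sum(al_extra)
    if al_max < l_min:
        return False, []          # the strategy type is not effective
    # Consume the shorten/move budget first, then the extra budget.
    first = min(l_min, al_total)
    rest = l_min - first
    lengths = [first * a / al_total if al_total else 0.0 for a in al]
    if rest > 0:
        extra_total = sum(al_extra)
        lengths = [x + rest * b / extra_total
                   for x, b in zip(lengths, al_extra)]
    return True, lengths
```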
In the following, we introduce three approaches to resolving TIs: RSIC, RSSC, and GSSD. After encountering a TI, the system tries these methods in turn until it obtains a feasible strategy.
5.2.3. Referencing Solutions of Similar Cases (RSSC)
According to Principle 2, the system can solve a TI by referring to a similar TI, where the similarity between TIs depends on three aspects: (1) the importance difference between conflicting events, (2) the TCs, and (3) the feasibility of the target strategy type.
The importance difference between two events is calculated from two perspectives: the importance of individual decision attribute values and the order relations between them. For brevity, the values discussed below refer to decision attribute values. We first introduce how to derive a value's importance. Values are divided into two categories: common values and rare values. A value is a common value if it occurs no less than a specified number of times, i.e., the corresponding threshold; otherwise, it is a rare value. A common value's importance is evaluated by its frequency in important events, whereas a rare value's importance is estimated using hidden correlation [46,47], as defined below.
Given two events, one has a value $v_1$ on attribute $a_1$, and the other has a value $v_2$ on attribute $a_2$. If the two events have an identical value $v_c$ on one of the remaining attributes, then we say $v_c$ is a co-involved value for $v_1$ and $v_2$.
Hidden correlation [46] between $v_1$ and $v_2$, denoted as $HC(v_1, v_2)$, is defined as:

where $p_{v_1}$ and $p_{v_2}$ are the frequencies at which events containing $v_1$ and $v_2$ are important, respectively; $p_{v_1, v_c}$ and $p_{v_2, v_c}$ indicate the frequencies at which events involving both $v_1$ and $v_c$, and both $v_2$ and $v_c$, are important, respectively; and $V_c$ is the set of all co-involved values for $v_1$ and $v_2$. Given a rare value, the common values under the same attribute are its alternative values. The extent to which an alternative value matches the event is assessed by the transition probability (TP) [46]:

where $v_a$ represents the alternative value, $V_e$ is the set of common values in the event, and $\theta_{tp}$ is the threshold at which an alternative value can be selected. A rare value's importance is temporarily substituted by that of an alternative value if one is obtained by Eq. (18), and is estimated by its own frequency in important events otherwise. For two conflicting events, we construct the importance difference vector $\Delta = (\delta_1, \ldots, \delta_m)$, where $\delta_i$ is the importance difference between their $i$-th values.
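The extracted text does not show the HC and TP formulas themselves, so the sketch below only illustrates the substitution logic for a rare value's importance, with `tp` passed in as an assumed callable implementing Eq. (18):

```python
def rare_value_importance(rare_value, alternatives, event_common_values,
                          freq_important, tp, tp_threshold):
    """alternatives: common values under the same attribute as rare_value.
    tp(v, event_common_values): transition probability (Eq. (18)),
    assumed to be implemented elsewhere.
    freq_important: value -> frequency of appearing in important events."""
    scored = [(tp(v, event_common_values), v) for v in alternatives]
    best_tp, best = max(scored, default=(0.0, None))
    if best is not None and best_tp >= tp_threshold:
        return freq_important[best]       # substitute the alternative's importance
    return freq_important[rare_value]     # otherwise use its own frequency
```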
Next, we describe how to establish order relations between values. An importance preference relation (IPR) is a special case of a strict partial order on a set of events, represented by the symbol $\succ$ (see Appendix A for proof) [48,49,50]. Given an important event $e_1$ and a normal event $e_2$, we denote by $e_1 \succ e_2$ the fact that the user prefers $e_1$ to $e_2$, which is equivalent to an IPR between their decision attribute value tuples.
Property 1. Provided that the operations are meaningful for the IPR they are applied to, an IPR still holds after adding or subtracting the same value on both sides of the relation.
According to Property 1, we filter duplicate information by replacing any value that is the same on both sides of an IPR with "null". We use $LHS(r)$ and $RHS(r)$ to denote the left- and right-hand sides of an IPR $r$. The union of the left- or right-hand sides of two IPRs, represented by the symbol $\uplus$, is a tuple combining the elements at corresponding positions.
Theorem 1. Given IPRs $r_1$ and $r_2$, the strict partial order still holds for the union of $LHS(r_1)$ and $LHS(r_2)$ and the union of $RHS(r_1)$ and $RHS(r_2)$, which is denoted as $r_1 \uplus r_2$. If the left- and right-hand sides correspond the same, then we say $r_1$ is equal to $r_2$, denoted as $r_1 = r_2$. The original information of an IPR $r$, denoted as $O(r)$, takes the value null if there exist historical relations whose union equals $r$, and takes the value $r$ itself otherwise.
We use $r_1 \setminus r_2$ to denote the relative complement of $r_2$ in $r_1$, which is the tuple of elements in $r_1$ but not in $r_2$.
Theorem 2. The extra information inferred from $r_1$ and $r_2$, denoted as $X(r_1, r_2)$, is defined to be:

where $fit(A, B)$ takes the value True if all not-null elements of $A$ are also elements of $B$, and False otherwise. For a tuple pair $(L, R)$ and an IPR $r$, the pair is a regular subIPR of $r$ if it fits $r$'s sides directly, i.e., $fit(L, LHS(r))$ and $fit(R, RHS(r))$; it is an inverse subIPR if it fits the swapped sides, i.e., $fit(L, RHS(r))$ and $fit(R, LHS(r))$.
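The tuple operations above are mechanical enough to sketch directly. In the sketch below, Python's `None` stands for "null"; the positional padding in `complement` and the same-side/swapped-side subIPR criteria are our reading of the definitions:

```python
def filter_duplicates(left, right):
    """Replace positions where the two tuples agree with None ('null')."""
    keep = [l != r for l, r in zip(left, right)]
    mask = lambda t: tuple(v if k else None for v, k in zip(t, keep))
    return mask(left), mask(right)

def union(t1, t2):
    """Position-wise union; assumes at most one side is non-null
    at each position."""
    return tuple(a if a is not None else b for a, b in zip(t1, t2))

def complement(t1, t2):
    """Relative complement: elements of t1 that do not occur in t2."""
    return tuple(a if a not in t2 else None for a in t1)

def fits(side, ipr_side):
    """True if every not-null element of side also occurs in ipr_side."""
    return all(v in ipr_side for v in side if v is not None)

def is_regular_subipr(pair, ipr):
    return fits(pair[0], ipr[0]) and fits(pair[1], ipr[1])

def is_inverse_subipr(pair, ipr):
    return fits(pair[0], ipr[1]) and fits(pair[1], ipr[0])
```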
IPR profile union. An IPR profile $P_n$ is the transitive closure of all IPRs in $O_n$ and $X_n$, where $O_n$ and $X_n$ are the sets of all original information and extra information derived from the 1st through the $n$-th TI, respectively [48]. An IPR profile union $U_n$ is the transitive closure of the IPRs in the IPR profile $P_n$ and the past IPRs in the IPR profile union $U_{n-1}$, which is defined to be:
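Both the profile and the profile union are transitive closures over a set of relations; a standard closure sketch over (lhs, rhs) pairs, assuming the paper's tuple equality reduces to ordinary equality:

```python
from itertools import product

def transitive_closure(iprs):
    """iprs: set of (lhs, rhs) pairs meaning lhs > rhs (hashable tuples)."""
    closure = set(iprs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure
```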
We quantify the order relation between the two conflicting events' value tuples by the following procedure (a sketch follows the list):
(1) Get a tuple pair by filtering out duplicate information.
(2) Look for regular subIPRs and inverse subIPRs of the tuple pair in the IPR profile union.
(3) Construct a score vector for each subIPR. For a subIPR $r$, its score is defined as:
where $\lambda$ is a hyper-parameter and $n$ is the number of not-null values in the subIPR.
(4) Construct the order-relation vector, where the $i$-th element is the largest value on the $i$-th element obtained from all subIPRs.
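Reusing the helpers from the subIPR sketch above, the procedure could look as follows; `subipr_score` stands in for the $\lambda$-based score of step (3), whose exact formula is not shown here:

```python
def order_relation_vector(left, right, profile_union, subipr_score):
    """left, right: the two events' value tuples; profile_union:
    iterable of IPRs as (lhs, rhs) tuple pairs."""
    pair = filter_duplicates(left, right)                        # step (1)
    subs = [(r, +1) for r in profile_union
            if is_regular_subipr(pair, r)]                       # step (2)
    subs += [(r, -1) for r in profile_union
             if is_inverse_subipr(pair, r)]
    best = [0.0] * len(left)
    for r, sign in subs:
        scores = subipr_score(r, sign)                           # step (3)
        best = [max(b, s) for b, s in zip(best, scores)]         # step (4)
    return best
```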
Dvalue. We use dvalue to indicate the importance difference between two events in a TI. Let $D_{n-1}$ denote the matrix that consists of the dvalues of the past $n-1$ TIs. Given $D_{n-1}$ and the current dvalue $d_n$, the coefficients $\alpha_n$ corresponding to $D_{n-1}$ are obtained through the sparse decomposition (SD) process [1,51,52,53]:

$\alpha_n = \arg\min_{\alpha} \lVert d_n - D_{n-1}\alpha \rVert_2^2 + \mu \lVert \alpha \rVert_1$,

where $\mu \lVert \alpha \rVert_1$ is the $\ell_1$-regularization term and gives as few non-zero coefficients as possible.
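The SD step is a standard lasso-style problem, so it can be sketched with scikit-learn; the solver and the regularization weight are implementation choices, not the paper's:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coefficients(D: np.ndarray, d_n: np.ndarray, mu: float = 0.1):
    """Solve min_a ||d_n - D a||_2^2 + mu * ||a||_1.
    D: (m, n-1) matrix whose columns are the past dvalues;
    d_n: (m,) dvalue of the current TI."""
    model = Lasso(alpha=mu, fit_intercept=False, max_iter=10_000)
    model.fit(D, d_n)
    return model.coef_   # mostly zeros: the few most similar past TIs
```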
We then introduce how the system utilizes previous strategies. A historical strategy can be used directly or after being modified by $inv(\cdot)$, a function that transforms a strategy type using the following steps: first, swap the action types on the two events; second, swap the action types on each event's starting point and ending point and reverse the adjustment directions. Two strategy types $st_1$ and $st_2$ are inverse strategy types, denoted as $st_1 = inv(st_2)$, if the following condition holds:

The set of all inverse strategy types is denoted $IST$.
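A sketch of the $inv(\cdot)$ transform under an assumed encoding of a strategy type as per-event (action type, direction) pairs on the starting and ending points:

```python
def inverse_strategy_type(st):
    """st: ((a1_start, a1_end), (a2_start, a2_end)), where each action
    is an (action_type, direction) pair and direction is +1 or -1.
    The encoding itself is an assumption."""
    def flip(actions):
        (t_s, d_s), (t_e, d_e) = actions
        # swap start/end actions and reverse adjustment directions
        return ((t_e, -d_e), (t_s, -d_s))
    e1, e2 = st
    return (flip(e2), flip(e1))   # actions swapped between events first
```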
We say $c_i$ and $c_j$ have inverse flags if the flags of $c_i$ are identical with the corresponding flags of $c_j$ after applying $inv(\cdot)$. We train two calibrated support vector machines (SVMs) [1,54,55], $M_{reg}$ and $M_{inv}$, on the training sets $S_{reg}$ and $S_{inv}$, where each training sample contains the TCs, feasible strategy types, and flags within a pair $(c_i, c_j)$. A sample's label in $S_{reg}$ ($S_{inv}$) is true if the feasible strategy types, flags, and the user's strategy types of $c_i$ and $c_j$ are all the same (all inverse, respectively), and false otherwise.
The regular TC and inverse TC between $c_i$ and $c_j$, denoted as $RTC(c_i, c_j)$ and $ITC(c_i, c_j)$, are given by:

where $tc_i$ and $tc_j$ are the temporal classes of $c_i$ and $c_j$, respectively, and $\theta_{svm}$ is the threshold for the SVM model.
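Calibrated SVMs with a probability threshold are readily available in scikit-learn; a sketch, with the feature encoding of TCs, strategy types, and flags left abstract:

```python
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV

def train_calibrated_svm(X, y):
    """Fit an SVM whose decision scores are calibrated into
    probabilities, so they can be compared against a threshold."""
    model = CalibratedClassifierCV(SVC(kernel="rbf"), method="sigmoid", cv=5)
    model.fit(X, y)
    return model

def tc_match(model, features, threshold):
    """A sketch of the thresholded comparison, not the exact rule."""
    return model.predict_proba([features])[0, 1] >= threshold
```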
Let $RSC(c_i, c_j)$ and $ISC(c_i, c_j)$ denote the regular similar case and inverse similar case between $c_i$ and $c_j$:

Similar TIs between $c_i$ and $c_j$, denoted as $sim(c_i, c_j)$, are defined to be:
We use STI to denote all similar TIs of the current TI $c_n$. Let $score_j$ and $score_{STI}$ denote the score on a strategy type obtained by a single similar TI $c_j$ and by all similar TIs of $c_n$, respectively; they are given by:

The possibility that the system recommends a strategy type, denoted as $P(st)$, is given by:
Let $Q$ denote the prediction probability distribution over strategy types. Let $Y$ denote the label probability distribution, whose entry for $st$ is 1 if the user adopted $st$ and 0 otherwise. The system suggests the strategy type with the highest prediction probability returned by $Q$:
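The final selection is an argmax over the prediction distribution; a minimal sketch, assuming the per-strategy-type scores have already been aggregated over all similar TIs:

```python
import numpy as np

def recommend_strategy_type(strategy_types, scores):
    """Normalize aggregated scores into a prediction distribution
    (Eq. (33)) and return the highest-probability strategy type
    (Eq. (34)). The normalization scheme is an assumption."""
    probs = np.asarray(scores, dtype=float)
    total = probs.sum()
    if total > 0:
        probs = probs / total
    return strategy_types[int(np.argmax(probs))], probs
```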
Algorithm 2 describes how to find similar TIs and generate strategies accordingly, where $KB(\cdot)$ specifies the contents of the KB corresponding to a TI and $\Theta$ assigns the values of the thresholds.
Algorithm 2: Resolve Temporal Inconsistencies by Similar Circumstances
Input: the current TI $c_n$; the knowledge base contents $KB(c_n)$; the thresholds $\Theta$
Output: a strategy for $c_n$
Initialization:
    strategy ← -1  # -1 indicates an invalid action
    STI ← []  # indexes of all similar TIs
    lengths ← {}  # adjustment lengths for all possible strategy types
1. Retrieve the dvalues of past TIs from the KB
2. Sparse decomposition: compute the coefficients $\alpha_n$
3. For each past TI $c_j$ with a non-zero coefficient do:
4.     If $c_j$ is a regular similar case of $c_n$:
5.         Get effect_flag and the time length of $st_j$  # by Algorithm 1
6.     Elif $c_j$ is an inverse similar case of $c_n$ and $inv(st_j)$ is feasible:
7.         Get effect_flag and the time length of $inv(st_j)$  # by Algorithm 1
8.     Else:
9.         effect_flag ← False
10.    If effect_flag is True:
11.        Append $c_j$ to STI and record its time length in lengths
12. If there exist similar TI(s) with effective strategy types:
13.    Generate the prediction probability distribution  # by Eq. (33)
14.    Choose a strategy type  # by Eq. (34)
15.    Get the corresponding time length of the chosen strategy type
16. Return strategy
16. | Return | | | |