Article

Random Telegraphic Signals with Fractal-like Probability Transition Rates

by
Sergio Elaskar
1,*,
Pascal Bruel
2 and
Luis Gutiérrez Marcantoni
3
1
Departamento de Aeronáutica, FCEFyN e Instituto de Estudios Avanzados en Ingeniería y Tecnología (IDIT), Universidad Nacional de Córdoba and CONICET, Córdoba 5000, Argentina
2
Centre National de la Recherche Scientifique (CNRS), Laboratoire de Mathématiques et de Leurs Applications (LMAP), Inria Cagire Team, Université de Pau et des Pays de l’Adour (UPPA), 64013 Pau, France
3
Facultad de Ingeniería y Ciencias Básicas, Fundación Universitaria Los Libertadores, Bogotá 110441, Colombia
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(9), 1175; https://doi.org/10.3390/sym16091175
Submission received: 30 July 2024 / Revised: 24 August 2024 / Accepted: 26 August 2024 / Published: 8 September 2024
(This article belongs to the Section Mathematics)

Abstract

Many physical processes feature random telegraph signals, i.e., a time signal $c(t)$ that randomly switches between two values over time. The present study focuses on the class of telegraphic processes for which the transition rates are formulated by using fractal-like expressions. By considering various restrictive hypotheses regarding the statistics of the waiting times, the present analysis provides the corresponding expressions of the unconditional and conditional probabilities, the mean waiting times, the mean phase duration, the autocorrelation function and the associated integral time scale, the spectral density, and the mean switching frequency. To assess the relevance of the various hypotheses, synthetically generated signals were constructed and used as references to evaluate the predictive quality of the theoretically derived expressions. The best predictions were obtained by considering that the waiting-time probability density functions were Dirac peaks centered on the corresponding mean values.

1. Introduction

Random processes providing time signals alternating between different possible discrete states can be found in many physical processes such as ion channels in cell membranes, electronic noise, or turbulent premixed flames in the so-called flamelet regime of combustion. The present study focuses on a random telegraph signal, i.e., a time function $c(t)$ whose value at time t can take only two possible values. Characterizing the transition rate from one state to another provides insights into the underlying physical processes responsible for the different states and state changes [1]. Markov models are often used to model and analyze such data [2,3,4]. For a signal driven by a Markov process, the probability per unit time of transitioning between states (i.e., the transition rate) is time-independent. This assumption means that the transition rates depend only on the current state, not on the elapsed time since the signal entered that state, nor on which state preceded the current state. Such a signal is called "random" because there is a non-zero probability that the state transition will occur at any time after the signal enters the current state.
In the present study, another type of random telegraph signal is considered by assuming that the probability transition rates are given by fractal-like expressions that will be detailed later on. The term "fractal-like" is deliberately used here to emphasize the fact that the present study is concerned with temporal signals and, thus, not with the kind of objects or structures in space that were originally the subject of fractal analysis. The time signals studied here were already introduced in [1,5], but the present contribution includes a new and detailed analysis of probability assessments and autocorrelation functions, as well as an examination of signal characteristics such as the integral time scale, crossing frequency, and spectrum. New hypotheses are also considered to establish the relationship between the mean frequency, integral time scale, and average waiting times. It is shown that the previously developed expressions for probabilities and autocorrelation functions, which were regarded as exact, were only approximations that could be improved by refining the underlying hypotheses.
This paper is organized as follows: In Section 2, after recalling the concept of fractals, the time signal at hand is introduced via the fractal-like formulation of the transition rates. Then, the expressions for the waiting times, probabilities, autocorrelations, spectral densities, integral time scales, and mean crossing frequencies are derived. Section 3 presents the results of the numerical experiments aimed at testing the various theoretical expressions derived in the previous section against the results obtained by directly generating and characterizing synthetic reference signals obtained for symmetric and non-symmetric transition rates. Finally, Section 4 presents the main concluding remarks along with some perspectives on further developments.

2. Time Signal with Fractal-like Transition Rates

Drawing on a rich legacy of mathematical concepts, Mandelbrot was instrumental in developing, organizing, and extending the idea of fractals [6]. The theory of fractals explores geometries that exhibit a diverse array of self-repeating shapes and structures. This theory pertains to fractal objects or entities, characterized by the power-law relationship between the size of the object being measured and the scale of resolution of the measuring tool [1,5,6]. Fractals [6] are actively applied in many different fields of research and interested readers can consult [7,8] as recent examples of their application to very different topics.
A fractal spatial object exhibits a consistent replication of a pattern when observed at varying levels of magnification. For instance, the coastline displays similar "wiggly" characteristics on maps of different scales, indicating statistical self-similarity across spatial scales. Certain mathematical objects, like the Cantor set and the von Koch curve, demonstrate exact self-similarity, revealing smaller geometric replicas of the whole object when viewed at finer scales. The fractal dimension, denoted by D, is determined by the formula $N = \delta^D$, where $\delta$ represents the scale reduction factor and N denotes the number of congruent pieces produced. In conventional spatial dimensions of length, area, and volume, D takes on the values 1, 2, or 3, respectively. However, the total length of a coast, denoted by L, varies depending on the scale $\varepsilon$ of the map used for measurement. As measurements are conducted on maps of finer scales, resulting in the resolution of finer details, the measured length in kilometers appears longer, as follows [6]:
$$L = K \varepsilon^{1-D}, \quad (1)$$
where K is a constant and D is the fractal dimension.
Alternatively, in terms of self-similarity, a fractal object can be divided into N identical smaller copies of itself, each isotropically scaled down by a factor of $\delta$. The total length of the object can be expressed as $L = N\varepsilon$, where $\varepsilon$ is the size of the smaller copy. By substituting $\varepsilon = \varepsilon_0/\delta$, we obtain $N = \delta^D$ and $L = K\varepsilon^{1-D}$.
Therefore, both definitions of the fractal dimension D are equivalent and capture the scaling behavior and self-similarity of fractal objects.
The relation of similarity can be used to determine the fractal dimension D of a self-similar figure, as follows:
$$D = \frac{\ln(N)}{\ln(1/r)}, \quad (2)$$
where N is the number of elements related to the complete figure by a relation of similarity $1/r$ [6]. Note that $r = 1/\delta$.
The length of any coastline or the length of the perimeter of the von Koch curve increases as it is measured at a finer spatial resolution.
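Equation (2) can be checked with a short numerical sketch; the function name and the example figures below are our own illustration, not part of the original derivation:

```python
import math

def fractal_dimension(n_pieces: int, scale_ratio: float) -> float:
    """Fractal dimension D = ln(N) / ln(1/r) of a self-similar figure (Equation (2))."""
    return math.log(n_pieces) / math.log(1.0 / scale_ratio)

# von Koch curve: each segment is replaced by N = 4 copies at ratio r = 1/3.
d_koch = fractal_dimension(4, 1 / 3)
# Cantor set: N = 2 copies at ratio r = 1/3.
d_cantor = fractal_dimension(2, 1 / 3)
# An ordinary line split into N = 3 pieces at r = 1/3 recovers D = 1.
d_line = fractal_dimension(3, 1 / 3)
```

For the von Koch curve this gives $D \approx 1.262$, a non-integer dimension consistent with a perimeter that keeps growing as the measurement scale is refined.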
When it comes to analyzing temporal signals, the concept of fractals has to be revisited by replacing self-similarity (revealed by isotropic geometric transformations of the spatial coordinates) with the notion of self-affinity (requiring non-isotropic transformations of the signal variables) [9,10]. For a self-affine temporal signal, after a proper expansion, i.e., a non-isotropic transformation of its dependent and independent variables, a small time segment of the signal exhibits statistical properties similar to those obtained when considering a longer time segment. If the transition rates are time-dependent, a random telegraph signal will exhibit noticeable differences from a Markov random telegraph signal, namely the following: (i) The future state of the signal will be determined by both the current and past states, with the probability transition rates being proportional to $t^{1-D}$, where D is the fractal dimension; and (ii) the probability transition rates will appear similar at all time scales, presenting no preferred time scale. In contrast, the transition rates of the Markov model determine a unique time scale for the process, which is equal to the reciprocal of the transition probability rate. In the present study, and following [1], the effective transition rates $k_{01}$ and $k_{10}$ are chosen to depend on the timescale at which they are measured, somehow transposing Equation (1) to a temporal context, namely the following:
$$k_{01} = A_{01}\, t^{1-D_{01}}, \qquad k_{10} = A_{10}\, t^{1-D_{10}}, \quad (3)$$
where $A_{01}$ and $A_{10}$ are constants, $D_{01}$ and $D_{10}$ are the fractal exponents, and t is the time spent in the current state ($c = 0$ or $c = 1$).
Therefore, the probability of jumping from state $c = 0$ to state $c = 1$ during $\delta t$ is $k_{01}\delta t$, and the probability of jumping from state $c = 1$ to $c = 0$ during $\delta t$ is $k_{10}\delta t$.
Each transition rate depends on only two parameters: the fractal dimension D, which determines the relative contribution of processes at short and long time scales, and the constant A, which influences whether the jumps occur slowly or rapidly.
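A minimal sketch of Equation (3) makes the role of the two parameters explicit; the helper name and the numerical values are our own choices for illustration:

```python
def transition_rate(t: float, A: float, D: float) -> float:
    """Fractal-like probability transition rate k = A * t**(1 - D) (Equation (3)),
    where t is the time already spent in the current state."""
    return A * t ** (1.0 - D)

# D = 1 recovers the Markov case: the rate is constant in time.
k_markov = [transition_rate(t, 2.0, 1.0) for t in (0.1, 1.0, 10.0)]
# D < 1: the rate grows with the time spent in the state.
k_grow = [transition_rate(t, 2.0, 0.5) for t in (0.1, 1.0, 10.0)]
# D > 1: the rate decays, so long stays become ever more persistent.
k_decay = [transition_rate(t, 2.0, 1.5) for t in (0.1, 1.0, 10.0)]
```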
In the following subsections, several statistical functions characterizing the time signal are studied: first, the conditional probabilities, then the unconditional probabilities followed by the autocorrelation function, the integral time scale, the mean crossing frequency, and some spectral properties.

2.1. Waiting Times PDFs

In this section, the probability density functions (PDFs) of the waiting or stay times in the $c = 0$ and $c = 1$ states are calculated. It is assumed that the transitions between these two states occur instantaneously. The waiting times are also called sojourn times [11].
Let $P_{11}(t)$ be the probability that the system remains at $c = 1$ during a time t. Then, $P_{11}(t+\Delta t)$ is the probability that $c = 1$ has lasted for a time t, multiplied by the conditional probability that c does not jump during the following interval $\Delta t$, given that $c = 1$ has remained for a duration t. Accordingly, we can write the following:
$$P_{11}(t+\Delta t) = P_{11}(t)\, P\{c \text{ does not jump from 1 to 0 during } \Delta t \mid c(t) = 1\} = P_{11}(t)\,(1 - k_{10}\Delta t), \quad (4)$$
where $k_{10}\Delta t$ is the probability that the signal leaves the current state $c = 1$ for $c = 0$ during a time interval $\Delta t$, and $(1 - k_{10}\Delta t)$ is the probability that the signal remains at $c = 1$ during $\Delta t$. Equation (4) can be reorganized as follows:
$$P_{11}(t+\Delta t) - P_{11}(t) = -P_{11}(t)\, k_{10}\,\Delta t, \quad (5)$$
then
$$\frac{P_{11}(t+\Delta t) - P_{11}(t)}{\Delta t} = -k_{10}\, P_{11}(t). \quad (6)$$
As continuity is assumed in the data series and the right-hand side has a definite limit, we can take the limit $\Delta t \to 0$. As a result, the previous discrete equation turns into a differential one, namely the following:
$$\frac{dP_{11}(t)}{P_{11}(t)} = -k_{10}\, dt = -A_{10}\, t^{1-D_{10}}\, dt, \quad (7)$$
where Equation (3) is utilized. The general solution to the previous equation can be expressed as follows:
$$P_{11}(t) = \theta\, e^{-\frac{A_{10}}{2-D_{10}}\, t^{2-D_{10}}}, \quad (8)$$
where $\theta$ depends on the initial condition. Since for $t = 0$ the current state is $c = 1$, then $P_{11}(0) = 1$ and $\theta = 1$. Accordingly, the solution of Equation (7) can be written as follows:
$$P_{11}(t) = e^{-\frac{A_{10}}{2-D_{10}}\, t^{2-D_{10}}}. \quad (9)$$
On the other hand, the equation for $c = 0$ can be obtained from Equation (7) by simply replacing $k_{10}$ with $k_{01}$.
Therefore, we obtain the following:
$$P\{c \text{ remains at 0 during } t\} = P_{00}(t) = e^{-\frac{A_{01}}{2-D_{01}}\, t^{2-D_{01}}}, \qquad P\{c \text{ remains at 1 during } t\} = P_{11}(t) = e^{-\frac{A_{10}}{2-D_{10}}\, t^{2-D_{10}}}, \quad (10)$$
where t is the duration of the stay in the corresponding state, also called the waiting time [11,12]. Note that from Equation (7), the probability transition rate $k_{ij}$ can be evaluated as follows:
$$k_{ij} = -\frac{d\left(\ln(P_{ij}(t))\right)}{dt}, \qquad i = 0, 1, \quad j = 0, 1. \quad (11)$$
Also, from Equation (9), the probability density function related to $P_{11}(t)$ for the $c = 1$ state can be obtained as follows:
$$f_{11}(t) = -\frac{dP_{11}(t)}{dt} = A_{10}\, t^{1-D_{10}}\, e^{-\frac{A_{10}}{2-D_{10}}\, t^{2-D_{10}}} = k_{10}\, e^{-\frac{A_{10}}{2-D_{10}}\, t^{2-D_{10}}} \quad (12)$$
and, similarly, we have the following:
$$f_{00}(t) = -\frac{dP_{00}(t)}{dt} = A_{01}\, t^{1-D_{01}}\, e^{-\frac{A_{01}}{2-D_{01}}\, t^{2-D_{01}}} = k_{01}\, e^{-\frac{A_{01}}{2-D_{01}}\, t^{2-D_{01}}}. \quad (13)$$
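Since Equation (10) gives the survival function of the waiting times, samples can be drawn by inverse-transform sampling. The sketch below is our own illustration (valid for $D < 2$; function name and parameter values are assumptions) and is useful when generating synthetic signals:

```python
import math
import random

def sample_waiting_time(A: float, D: float, rng: random.Random) -> float:
    """Draw a waiting time whose survival function is
    P(t) = exp(-A * t**(2 - D) / (2 - D))  (Equation (10), valid for D < 2),
    by solving P(T) = u for T, with u uniform on (0, 1)."""
    u = rng.random()
    return ((2.0 - D) * (-math.log(u)) / A) ** (1.0 / (2.0 - D))

rng = random.Random(1)
# For D = 1 (Markov case), the waiting times are exponential with mean 1/A.
samples = [sample_waiting_time(2.0, 1.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)   # close to 1/A = 0.5
```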

2.2. Average Waiting Times

From Equations (12) and (13), the average duration of the signal in state $c = 1$ (resp. $c = 0$), denoted by $\bar{t}_1$ (resp. $\bar{t}_0$), is defined by the following:
$$\bar{t}_1 = \int_0^\infty t\, f_{11}(t)\, dt = \int_0^\infty A_{10}\, t^{2-D_{10}}\, e^{-\frac{A_{10}}{2-D_{10}}\, t^{2-D_{10}}}\, dt, \qquad \bar{t}_0 = \int_0^\infty t\, f_{00}(t)\, dt = \int_0^\infty A_{01}\, t^{2-D_{01}}\, e^{-\frac{A_{01}}{2-D_{01}}\, t^{2-D_{01}}}\, dt, \quad (14)$$
which leads to the following:
$$\bar{t}_1 = \Gamma\!\left(\frac{D_{10}-3}{D_{10}-2}\right)\left(\frac{A_{10}}{2-D_{10}}\right)^{\frac{1}{D_{10}-2}} \quad (15)$$
and
$$\bar{t}_0 = \Gamma\!\left(\frac{D_{01}-3}{D_{01}-2}\right)\left(\frac{A_{01}}{2-D_{01}}\right)^{\frac{1}{D_{01}-2}}, \quad (16)$$
where $\Gamma$ is the Gamma function [13]. The integrals converge if $\frac{A_{ij}}{D_{ij}-2} < 0$, and since by definition $A_{ij} > 0$ for $i = 0, 1$ and $j = 0, 1$, this implies that $D_{ij} < 2$.
Figure 1 shows the evolution of $\bar{t}_1$ for different fractal exponents and with $A_{10} = 1$ (Equation (15)). $\bar{t}_1$ decreases as $D_{10}$ increases, and $\bar{t}_0$ behaves likewise with $D_{01}$. Furthermore, we emphasize that for $D_{10} = 1$, corresponding to a Markov process, the average waiting time satisfies $\bar{t}_1 = 1/A_{10}$.
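Equations (15) and (16) are straightforward to evaluate with the standard library. The sketch below (parameter values are purely illustrative) also reproduces the Markov limit and the decreasing trend just described:

```python
import math

def mean_waiting_time(A: float, D: float) -> float:
    """Average waiting time of Equations (15)-(16), valid for D < 2:
    t_bar = Gamma((D - 3) / (D - 2)) * (A / (2 - D))**(1 / (D - 2))."""
    return math.gamma((D - 3.0) / (D - 2.0)) * (A / (2.0 - D)) ** (1.0 / (D - 2.0))

# Markov limit D = 1: Gamma(2) = 1, so t_bar = 1/A.
t_markov = mean_waiting_time(4.0, 1.0)    # 0.25
# With A fixed, the mean waiting time decreases as D grows (cf. Figure 1).
t_low_D = mean_waiting_time(1.0, 0.5)
t_high_D = mean_waiting_time(1.0, 1.5)
```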

2.3. Probabilities

We define $P_1(t)$ (resp. $P_0(t)$) as the probability of an even (resp. odd) number of transitions during time t, starting in state $c = 1$. Therefore, we have the following:
$$P_1(t) + P_0(t) = 1, \quad (17)$$
where t is the global time of the series. Note that in Section 2.1, t is the local time in each step.
The dependence of the probability transition rates on time is a crucial factor in modifying the random function. As such, it is essential to consider and analyze various alternatives for evaluating the probabilities of finding c ( t ) = 1 and c ( t ) = 0 .
As it is assumed that the transitions between the states ($c = 0 \leftrightarrow c = 1$) are instantaneous, there exist only two probabilities: the probability of obtaining state $c = 1$ and that of obtaining $c = 0$, with their sum amounting to 1.
The theoretical probability that the system is at $c = 1$ at time $t + \Delta t$ is as follows:
$$P_1(t+\Delta t) = P\{c(t+\Delta t) = 1\} = P\{c(t) = 1\}\, P\{c \text{ remains at 1 during } \Delta t \mid c(t) = 1\} + P\{c(t) = 0\}\, P\{c \text{ switches from 0 to 1 during } \Delta t \mid c(t) = 0\}. \quad (18)$$
Therefore, the probability P 1 ( t + Δ t ) is the sum of the probabilities of two mutually exclusive events.
To simplify the previous equation, let us use the following:
$$P_0(t) = P\{c(t) = 0\}, \qquad P_1(t) = P\{c(t) = 1\}, \qquad P_0(t) = 1 - P_1(t),$$
$$P\{c \text{ remains at 1 during } \Delta t \mid c(t) = 1\} = 1 - k_{10}\Delta t, \qquad P\{c \text{ switches from 0 to 1 during } \Delta t \mid c(t) = 0\} = k_{01}\Delta t. \quad (19)$$
If these last equations are introduced in Equation (18), we have the following:
$$P_1(t+\Delta t) = P_1(t)\,(1 - k_{10}\Delta t) + (1 - P_1(t))\, k_{01}\Delta t. \quad (20)$$
Working on the previous equation, one obtains the following:
$$P_1(t+\Delta t) - P_1(t) = \Delta t \left(-P_1(t)\,(k_{10} + k_{01}) + k_{01}\right) \quad (21)$$
and the limit is as follows:
$$\frac{dP_1(t)}{dt} = \lim_{\Delta t \to 0} \frac{P_1(t+\Delta t) - P_1(t)}{\Delta t} = \lim_{\Delta t \to 0} \left(-P_1(t)\,(k_{10} + k_{01}) + k_{01}\right). \quad (22)$$
The probability transition rates given by Equation (3) and adapted to the previous equations yield the following:
$$k_{10} = A_{10}\,(t_1 + \Delta t)^{1-D_{10}}, \qquad k_{01} = A_{01}\,(t_0 + \Delta t)^{1-D_{01}}, \quad (23)$$
where $t_1$ and $t_0$ are the waiting times corresponding to the global time t. Then, Equation (22) becomes the following:
$$\frac{dP_1(t)}{dt} = \lim_{\Delta t \to 0} \left[-P_1(t)\left(A_{10}\,(t_1+\Delta t)^{1-D_{10}} + A_{01}\,(t_0+\Delta t)^{1-D_{01}}\right) + A_{01}\,(t_0+\Delta t)^{1-D_{01}}\right] = -P_1(t)\left(A_{10}\, t_1^{1-D_{10}} + A_{01}\, t_0^{1-D_{01}}\right) + A_{01}\, t_0^{1-D_{01}}. \quad (24)$$
The right-hand side of this equation has a definite limit as $\Delta t \to 0$; therefore, the left-hand side tends to the derivative $dP_1(t)/dt$ [11].
In order to solve the previous equation, it is necessary to establish a relationship between the waiting times $t_1$ and $t_0$ and the overall time t (see Figure 2). To accomplish this, three different assumptions will be considered hereafter.

2.3.1. Assumption 1

Here, the average waiting times of each state ($c = 1$ and $c = 0$) are related to the overall time t, where $\bar{t}_1$ and $\bar{t}_0$ are the average times in the states $c = 1$ and $c = 0$, respectively, which can be obtained using Equations (15) and (16).
This assumption implies from Equation (3) that $\bar{k}_{10}$ and $\bar{k}_{01}$ are as follows:
$$\bar{k}_{10} = A_{10}\,(\bar{t}_1 + \Delta t)^{1-D_{10}}, \qquad \bar{k}_{01} = A_{01}\,(\bar{t}_0 + \Delta t)^{1-D_{01}}. \quad (25)$$
Upon substituting Equation (25) into Equation (24), a new equation that integrates these two expressions is derived, as follows:
$$\frac{dP_1(t)}{dt} = -P_1(t)\left(A_{10}\,\bar{t}_1^{\,1-D_{10}} + A_{01}\,\bar{t}_0^{\,1-D_{01}}\right) + A_{01}\,\bar{t}_0^{\,1-D_{01}} = -P_1(t)\,(\bar{k}_{10} + \bar{k}_{01}) + \bar{k}_{01}. \quad (26)$$
Then, the solution of Equation (26) can be expressed as follows [14]:
$$P_1(t) = \frac{\bar{k}_{01}}{\bar{k}_{01}+\bar{k}_{10}}\left(1 - e^{-(\bar{k}_{01}+\bar{k}_{10})t}\right) + P_1(0)\, e^{-(\bar{k}_{01}+\bar{k}_{10})t}, \quad (27)$$
where $P_1(0)$ is the initial condition. If $P_1(0) = 1$, then the process becomes semi-random, and Equation (27) results in the following:
$$P_1(t) = \frac{\bar{k}_{01}}{\bar{k}_{01}+\bar{k}_{10}} + \frac{\bar{k}_{10}}{\bar{k}_{01}+\bar{k}_{10}}\, e^{-(\bar{k}_{01}+\bar{k}_{10})t}. \quad (28)$$
From Equation (17), we obtain the following:
$$P_0(t) = \frac{\bar{k}_{10}}{\bar{k}_{01}+\bar{k}_{10}}\left(1 - e^{-(\bar{k}_{01}+\bar{k}_{10})t}\right). \quad (29)$$
Note that Equations (26)–(29) are similar to those obtained for the Markov signal with the following probability transition rates:
$$\bar{k}_{10} = A_{10}\,\bar{t}_1^{\,1-D_{10}}, \qquad \bar{k}_{01} = A_{01}\,\bar{t}_0^{\,1-D_{01}}. \quad (30)$$
Therefore, this assumption presents some characteristics of a Markov signal, although the probability transition rates retain a fractal-like structure (see Equation (25)).
On the other hand, if we consider a semi-random process with initial conditions $P_0(0) = 1$ and $P_1(0) = 0$, Equations (27) and (28) yield the following probabilities:
$$P_1(t) = \frac{\bar{k}_{01}}{\bar{k}_{01}+\bar{k}_{10}}\left(1 - e^{-(\bar{k}_{01}+\bar{k}_{10})t}\right) \quad (31)$$
and
$$P_0(t) = \frac{\bar{k}_{10}}{\bar{k}_{01}+\bar{k}_{10}} + \frac{\bar{k}_{01}}{\bar{k}_{01}+\bar{k}_{10}}\, e^{-(\bar{k}_{01}+\bar{k}_{10})t}. \quad (32)$$
Independent of the initial condition, the probabilities verify $P_0(t\to\infty) = \frac{\bar{k}_{10}}{\bar{k}_{01}+\bar{k}_{10}}$ and $P_1(t\to\infty) = \frac{\bar{k}_{01}}{\bar{k}_{01}+\bar{k}_{10}}$.
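Under Assumption 1, Equation (27) can be evaluated directly. The sketch below (rate values are arbitrary, names are ours) verifies that both initial conditions relax to the same stationary probability:

```python
import math

def p1(t: float, k01b: float, k10b: float, p1_0: float) -> float:
    """Equation (27): probability of state c = 1 under Assumption 1,
    with averaged rates k01b = k01_bar, k10b = k10_bar and P1(0) = p1_0."""
    s = k01b + k10b
    return (k01b / s) * (1.0 - math.exp(-s * t)) + p1_0 * math.exp(-s * t)

k01b, k10b = 0.8, 0.2
stationary = k01b / (k01b + k10b)
# Equations (28) and (31) are the particular cases P1(0) = 1 and P1(0) = 0;
# both tend to the same limit k01_bar / (k01_bar + k10_bar).
p_from_1 = p1(50.0, k01b, k10b, 1.0)
p_from_0 = p1(50.0, k01b, k10b, 0.0)
```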

2.3.2. Assumption 2

We assume that the waiting times are equal to the global time of the process. Then, if we introduce $t_1 = t_0 = t$ in Equation (23), the probability transition rates are as follows:
$$k_{10} = A_{10}\,(t + \Delta t)^{1-D_{10}}, \qquad k_{01} = A_{01}\,(t + \Delta t)^{1-D_{01}}. \quad (33)$$
Accordingly, Equation (24) is as follows:
$$\frac{dP_1(t)}{dt} = -P_1(t)\left(A_{10}\, t^{1-D_{10}} + A_{01}\, t^{1-D_{01}}\right) + A_{01}\, t^{1-D_{01}}. \quad (34)$$
An analytical solution exists only for the case $D = D_{10} = D_{01}$, as follows:
$$P_1(t) = \frac{A_{01} + (A_{01}+A_{10})\,\theta\, e^{-\frac{A_{01}+A_{10}}{2-D}\, t^{2-D}}}{A_{01}+A_{10}}, \quad (35)$$
where $\theta$ depends on the initial condition. If we assume $P_1(0) = 1$, the particular solution results in the following:
$$P_1(t) = \frac{A_{01}}{A_{01}+A_{10}} + \frac{A_{10}}{A_{01}+A_{10}}\, e^{-\frac{A_{01}+A_{10}}{2-D}\, t^{2-D}}, \qquad P_1(0) = 1. \quad (36)$$
Note that this assumption implies that the system is in a state $c = 1$ (or $c = 0$) for a time interval as long as the overall time of the series, which makes this hypothesis physically less consistent than Assumption 1 (Section 2.3.1). However, we introduced this assumption because it was used in Ref. [5] as an exact solution to calculate the autocorrelation function. We recall that Equation (36) is only an approximate expression for $P_1(t)$ based on the following hypothesis: the waiting times are equal to the global time of the process.
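Equation (36) can be cross-checked by integrating the ODE of Equation (34) numerically. The sketch below uses parameter values of our own choosing (with $D_{10} = D_{01} = D$) and starts slightly after $t = 0$ because the rate is singular at the origin for $D > 1$:

```python
import math

A01, A10, D = 0.7, 0.3, 1.4   # illustrative parameters, D10 = D01 = D < 2

def p1_exact(t: float) -> float:
    """Equation (36): closed-form P1(t) for P1(0) = 1."""
    s = A01 + A10
    return A01 / s + (A10 / s) * math.exp(-s / (2.0 - D) * t ** (2.0 - D))

# Forward-Euler integration of dP1/dt = -P1*(A10 + A01)*t**(1-D) + A01*t**(1-D),
# i.e., Equation (34) with equal fractal exponents.
dt = 1e-4
t, p = dt, 1.0
while t < 2.0:
    rate = t ** (1.0 - D)
    p += dt * (-p * (A10 + A01) * rate + A01 * rate)
    t += dt

err = abs(p - p1_exact(t))   # Euler and the closed form agree closely
```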

2.3.3. Assumption 3

We assume that the waiting times are zero, $t_1 = t_0 = 0$; this can be understood as the opposite case of Assumption 2.
From Equation (3), the probability transition rates are as follows:
$$k_{10} = A_{10}\,\Delta t^{1-D_{10}}, \qquad k_{01} = A_{01}\,\Delta t^{1-D_{01}}. \quad (37)$$
Then, Equation (24) reduces to the following:
$$\frac{dP_1(t)}{dt} = \lim_{\Delta t \to 0} \left[-P_1(t)\left(A_{10}\,\Delta t^{1-D_{10}} + A_{01}\,\Delta t^{1-D_{01}}\right) + A_{01}\,\Delta t^{1-D_{01}}\right]. \quad (38)$$
From the last equation, three different situations emerge, as follows:
$$0 < D_{10} < 1 \ \text{and}\ 0 < D_{01} < 1 \;\Rightarrow\; \frac{dP_1(t)}{dt} = 0 \;\Rightarrow\; P_1(t) = \text{constant}, \quad (39)$$
$$D_{10} > 1 \ \text{or}\ D_{01} > 1 \;\Rightarrow\; \frac{dP_1(t)}{dt} \to \infty, \quad (40)$$
$$D_{10} = 1 \ \text{and}\ D_{01} = 1 \;\Rightarrow\; \frac{dP_1(t)}{dt} = -P_1(t)\,(A_{10} + A_{01}) + A_{01}. \quad (41)$$
The solution $P_1(t) = \text{constant}$ holds only when the probability transition rates are equal to zero. In our current research, devoted to random telegraphic signals, this result is of little significance. Therefore, it is pertinent to focus the analysis specifically on Equation (41).
Equation (41) corresponds to a Markov signal with probability transition rates $A_{10}$ and $A_{01}$, which is consistent with Equation (3) if $D_{10} = 1$ and $D_{01} = 1$. Therefore, $P_1(t)$ results in the following:
$$P_1(t) = \frac{A_{01}}{A_{10}+A_{01}} + \frac{A_{10}}{A_{10}+A_{01}}\, e^{-(A_{10}+A_{01})t}, \quad (42)$$
where it is assumed that $P_1(0) = 1$.

2.4. Autocorrelation Function

For a random function, the autocorrelation $R(t_1, t_2)$ is the expected value of the product $c(t_1)\,c(t_2)$ [11,15], as follows:
$$R(t_1, t_2) = \iint c(t_1)\, c(t_2)\, f_2\big(c(t_1), c(t_2); t_1, t_2\big)\, dc(t_1)\, dc(t_2), \quad (43)$$
where $f_2$ is the joint probability density function of $c(t_1)$ and $c(t_2)$. $R(t_1, t_2)$ determines whether $c(t_1)$ and $c(t_2)$ are correlated.
For the telegraphic signal, the autocorrelation can be written as follows:
$$R(t_1, t_2) = \overline{c(t_1)\,c(t_2)} = \sum_i \sum_j c_i\, c_j\, P\{c(t_1) = c_i\}\, P\{c(t_2) = c_j \mid c(t_1) = c_i\}. \quad (44)$$
Since c can only take on the values 0 and 1, $c_i\, c_j \neq 0$ only for $c_i = c_j = 1$. Accordingly, the last equation reduces to the following:
$$R(t_1, t_2) = \overline{c(t_1)\,c(t_2)} = P\{c(t_1) = 1\}\, P\{c(t_2) = 1 \mid c(t_1) = 1\}, \quad (45)$$
where
$$P\{c(t_1) = 1\} = P_1(t_1), \qquad P\{c(t_2) = 1 \mid c(t_1) = 1\} = P_1(t_2 \mid c(t_1) = 1). \quad (46)$$
A priori, to calculate $P_1(t_1)$ and $P_1(t_2 \mid c(t_1) = 1)$, one may rely on either Assumption 1 or Assumption 2 previously described. The first option is retained here because it is physically more consistent, while the second option was used in [5].
Assumption 1 employs Dirac peak probability distribution functions centered on the average waiting times for $c = 1$ and $c = 0$ ($\bar{t}_1$ and $\bar{t}_0$) to characterize the waiting times. This approach effectively integrates the signal's physical information, describing the time spent in each step. In contrast, Assumption 2 utilizes time intervals equivalent to the overall time of the series as waiting times, leading to a loss of information regarding state changes throughout the signal's duration.

2.4.1. Assumption 1

The analysis starts with Equation (28) to describe the probability $P_1(t)$, which assumes $P_1(0) = 1$. Assuming that all statistical properties are independent of time shifts (stationary random function [15]) leads to $R(t_1, t_2) = R(\tau)$, where $\tau = t_2 - t_1$. Hence, one obtains the following [4]:
$$R(\tau) = \frac{\bar{k}_{01}}{(\bar{k}_{10}+\bar{k}_{01})^2}\left(\bar{k}_{01} + \bar{k}_{10}\, e^{-(\bar{k}_{01}+\bar{k}_{10})\tau}\right). \quad (47)$$
Since $\bar{k}_{10}$ and $\bar{k}_{01}$ are positive, the autocorrelation function decreases exponentially with $\tau$.
For $\tau \to \infty$, the autocorrelation tends to the following:
$$R(\tau\to\infty) = \frac{\bar{k}_{01}^2}{(\bar{k}_{10}+\bar{k}_{01})^2} \quad (48)$$
and for $\tau = 0$:
$$R(0) = \frac{\bar{k}_{01}}{\bar{k}_{10}+\bar{k}_{01}} = \sqrt{R(\tau\to\infty)}. \quad (49)$$
The dimensionless autocorrelation function can be defined as follows:
$$R^*(\tau) = \frac{R(\tau) - R(\tau\to\infty)}{R(0) - R(\tau\to\infty)} = \frac{\overline{c(t)\,c(t+\tau)} - R(\tau\to\infty)}{R(0) - R(\tau\to\infty)}. \quad (50)$$
Introducing Equation (49) in the last equation gives the following:
$$R^*(\tau) = \frac{R(\tau) - R^2(0)}{R(0)\,\left(1 - R(0)\right)}. \quad (51)$$
From Equations (50) and (51), the dimensionless autocorrelation function satisfies the following:
$$R^*(0) = 1, \qquad R^*(\tau\to\infty) \to 0. \quad (52)$$
Finally, if Equations (48) and (49) are introduced into Equation (50), $R^*(\tau)$ reads as follows:
$$R^*(\tau) = \frac{(\bar{k}_{01}+\bar{k}_{10})^2\, R(\tau) - \bar{k}_{01}^2}{\bar{k}_{01}\,\bar{k}_{10}}. \quad (53)$$
On the other hand, we calculate the autocorrelation function for $P_1(0) = 0$, as follows:
$$R(\tau) = \frac{\bar{k}_{01}^2}{(\bar{k}_{10}+\bar{k}_{01})^2}\left(1 - e^{-(\bar{k}_{01}+\bar{k}_{10})\tau}\right). \quad (54)$$
From this equation, we obtain the following:
$$R(\tau\to\infty) = \frac{\bar{k}_{01}^2}{(\bar{k}_{10}+\bar{k}_{01})^2} \quad \text{and} \quad R(0) = 0. \quad (55)$$
Hence, the autocorrelation function does not depend on the initial condition for large values of τ .
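The properties collected in Equations (47)–(53) can be verified numerically. This sketch (rate values arbitrary, names are ours) also makes explicit that under Assumption 1 the dimensionless autocorrelation reduces to a pure exponential:

```python
import math

k01b, k10b = 0.6, 0.4            # illustrative averaged rates k01_bar, k10_bar
s = k01b + k10b

def R(tau: float) -> float:
    """Equation (47): autocorrelation under Assumption 1 with P1(0) = 1."""
    return (k01b / s ** 2) * (k01b + k10b * math.exp(-s * tau))

def R_star(tau: float) -> float:
    """Equation (53): dimensionless autocorrelation."""
    return (s ** 2 * R(tau) - k01b ** 2) / (k01b * k10b)

# Checks of Equations (48), (49) and (52):
r_inf = (k01b / s) ** 2          # limit of R for tau -> infinity
r_zero = R(0.0)                  # equals k01b / s, the square root of r_inf
```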

2.4.2. Assumption 2

In this case, the probability $P_1(t)$ is given by Equation (36), which is also valid for $P_1(0) = 1$. By assuming that the statistical properties of the function are independent of time shifts, the expression of $R(\tau)$ is simply $R(\tau) = R(t_1, t_2)$, where $\tau = t_2 - t_1$. This allows us to calculate the autocorrelation function for any time shift $\tau$ without having to consider all possible pairs of $t_1$ and $t_2$, as follows:
$$R(\tau) = \frac{A_{01}}{(A_{10}+A_{01})^2}\left(A_{01} + A_{10}\, e^{-\frac{A_{01}+A_{10}}{2-D}\,\tau^{2-D}}\right). \quad (56)$$
Note that the last equation is valid for $A_{10} + A_{01} > 0$ and $D_{10} = D_{01} = D < 2$.
If $A_{10} + A_{01} > 0$ and $D_{10} = D_{01} = D > 2$, the autocorrelation function is as follows:
$$R(\tau) = \frac{A_{01} + A_{10}\, e^{-\frac{A_{01}+A_{10}}{2-D}\,\tau^{2-D}}}{A_{01}+A_{10}}; \quad (57)$$
then $R(\tau) = P_1(\tau)$.
Although the values $D_{10}$ and $D_{01}$ were not initially restricted, the interpretation of $D_{ij} > 2$ is not physically consistent [5].
In the limit $\tau \to \infty$ in Equation (56), the autocorrelation is as follows:
$$R(\tau\to\infty) = \frac{A_{01}^2}{(A_{10}+A_{01})^2} \quad (58)$$
and for $\tau = 0$:
$$R(0) = \frac{A_{01}}{A_{10}+A_{01}} = \sqrt{R(\tau\to\infty)}. \quad (59)$$
The dimensionless autocorrelation function is expressed mathematically as in Equation (50). Combining Equations (58) and (59) with Equation (56), we obtain the following:
$$R^*(\tau) = \frac{R(\tau) - R(\tau\to\infty)}{R(0) - R(\tau\to\infty)} = \frac{(A_{01}+A_{10})^2\, R(\tau) - A_{01}^2}{A_{01}\, A_{10}}. \quad (60)$$
There are differences between the autocorrelation functions computed with Assumptions 1 and 2. One distinction is the exponent of $\tau$, which is 1 for Assumption 1 and depends on $D_{10} = D_{01}$ for Assumption 2. Another distinction is the applicability of Equations (47) and (56): Equation (56) is only valid when $D_{10} = D_{01}$, whereas this constraint does not apply to Equation (47). Additionally, as mentioned earlier, Assumption 1 incorporates the statistics of the waiting times in the probability evaluation, whereas Assumption 2 does not.

2.5. Integral Time Scale and Mean Crossing Frequency

Once the non-dimensional autocorrelation function $R^*(\tau)$ is obtained, it is possible to calculate the corresponding integral time scale $\hat{T}$, as follows:
$$\hat{T} = \int_0^\infty R^*(\tau)\, d\tau. \quad (61)$$
To solve this integral, we use Assumptions 1 and 2 previously described.

2.5.1. Assumption 1

Assuming that $k_{ij} = \bar{k}_{ij}$ with i and j equal to 0 or 1, the dimensionless autocorrelation function given by Equation (53) has a mathematical structure similar to that obtained for a Markov signal. However, the relationships between the mean frequency $\nu_f$ and the integral time scale $\hat{T}$ are different for the two types of signals. For the signal analyzed in this paper, the relation between $\nu_f$ and $\bar{k}_{ij}$ differs from that of the Markov signal [14].
Then, $\hat{T}$ can be calculated directly by integration of $R^*(\tau)$ given by Equation (50), as follows:
$$\hat{T} = \int_0^\infty R^*(\tau)\, d\tau = \frac{1}{R(0)\,\left(1 - R(0)\right)} \int_0^\infty \left(R(\tau) - R^2(0)\right) d\tau, \quad (62)$$
where
$$\int_0^\infty R(\tau)\, d\tau = R^2(0)\,\tau\Big|_0^\infty + \frac{\bar{k}_{01}\,\bar{k}_{10}}{(\bar{k}_{01}+\bar{k}_{10})^3}, \quad (63)$$
where Equation (47) determines the autocorrelation $R(\tau)$. Introducing the last equation into Equation (62), we obtain the following:
$$\hat{T} = \frac{1}{R(0)\,\left(1 - R(0)\right)}\, \frac{\bar{k}_{01}\,\bar{k}_{10}}{(\bar{k}_{01}+\bar{k}_{10})^3}. \quad (64)$$
Introducing Equation (49) in Equation (64), we obtain the following:
$$\hat{T} = \frac{1}{\bar{k}_{01}+\bar{k}_{10}}. \quad (65)$$
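The closed form of Equation (65) can be checked by direct quadrature of the definition in Equation (61); the sketch below uses arbitrary rates and a long but finite integration window (both are our own choices):

```python
import math

k01b, k10b = 0.6, 0.4            # illustrative averaged rates
s = k01b + k10b

def R_star(tau: float) -> float:
    """Dimensionless autocorrelation (Equation (53)) built from Equation (47)."""
    R = (k01b / s ** 2) * (k01b + k10b * math.exp(-s * tau))
    return (s ** 2 * R - k01b ** 2) / (k01b * k10b)

# Trapezoidal quadrature of Equation (61) over [0, T], T large enough
# for the exponential tail to be negligible.
n, T = 200_000, 50.0
h = T / n
t_hat = h * (0.5 * R_star(0.0)
             + sum(R_star(i * h) for i in range(1, n))
             + 0.5 * R_star(T))
# Compare with Equation (65): T_hat = 1 / (k01_bar + k10_bar).
```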
The correct evaluation of the mean crossing frequency is essential in some physical phenomena, such as in the flamelet model for combustion processes [16]. The mean frequency can be calculated as follows:
$$\nu_f = \frac{2}{\bar{t}_0 + \bar{t}_1}, \quad (66)$$
where $\bar{t}_0$ and $\bar{t}_1$ are the average waiting times. If we introduce Equations (15) and (16) into the last equation, it results in the following:
$$\nu_f = \frac{2}{\Gamma\!\left(\frac{D_{10}-3}{D_{10}-2}\right)\left(\frac{A_{10}}{2-D_{10}}\right)^{\frac{1}{D_{10}-2}} + \Gamma\!\left(\frac{D_{01}-3}{D_{01}-2}\right)\left(\frac{A_{01}}{2-D_{01}}\right)^{\frac{1}{D_{01}-2}}}. \quad (67)$$
On the other hand, if we use the definition of average probability transition rates, the average waiting times can be written as follows:
$$\bar{t}_1 = \left(\frac{\bar{k}_{10}}{A_{10}}\right)^{\frac{1}{1-D_{10}}}, \qquad \bar{t}_0 = \left(\frac{\bar{k}_{01}}{A_{01}}\right)^{\frac{1}{1-D_{01}}}. \quad (68)$$
If we introduce the last equation in Equation (66), we obtain the following:
$$\nu_f = \frac{2}{\left(\frac{\bar{k}_{10}}{A_{10}}\right)^{\frac{1}{1-D_{10}}} + \left(\frac{\bar{k}_{01}}{A_{01}}\right)^{\frac{1}{1-D_{01}}}}. \quad (69)$$
Now, we can use Equation (65) to relate the mean crossing frequency with the integral time scale as follows:
$$\nu_f = \frac{2}{\left(\frac{1/\hat{T} - \bar{k}_{01}}{A_{10}}\right)^{\frac{1}{1-D_{10}}} + \left(\frac{\bar{k}_{01}}{A_{01}}\right)^{\frac{1}{1-D_{01}}}} = \frac{2\,(\hat{T}A_{10})^{\frac{1}{1-D_{10}}}}{\left(1 - A_{01}\,\bar{t}_0^{\,1-D_{01}}\,\hat{T}\right)^{\frac{1}{1-D_{10}}} + \bar{t}_0\,(\hat{T}A_{10})^{\frac{1}{1-D_{10}}}}, \quad (70)$$
where we use the following:
$$\bar{k}_{10} = A_{10}\,\bar{t}_1^{\,1-D_{10}}, \qquad \bar{k}_{01} = A_{01}\,\bar{t}_0^{\,1-D_{01}}. \quad (71)$$
Equation (70) provides an expression for the mean crossing frequency as a function of the integral time scale and the parameters defining the probability transition rates: $A_{10}$, $A_{01}$, $D_{10}$, and $D_{01}$.

2.5.2. Assumption 2

For this assumption, the integral
$$\int_0^\infty R(\tau)\, d\tau = \int_0^\infty \frac{A_{01}}{(A_{10}+A_{01})^2}\left(A_{01} + A_{10}\, e^{-\frac{A_{01}+A_{10}}{2-D}\,\tau^{2-D}}\right) d\tau \quad (72)$$
does not converge: the constant term makes it infinite.
To address this issue, we propose a new hypothesis: $D \cong 1$. This hypothesis reduces the exponent of $\tau$ in Equation (72) to $\cong 1$. As a result, we can use the same approach as employed in Assumption 1. Following this method, we can approximate the integral time scale as
$$\hat{T} \cong \frac{1}{A_{10}+A_{01}} \quad (73)$$
and the mean frequency is as follows:
$$\nu_f = \frac{2}{\bar{t}_1 + \bar{t}_0}. \quad (74)$$
On the other hand, from Equations (15) and (16), we can obtain the coefficients $A_{ij}$ as functions of $\bar{t}_1$ and $\bar{t}_0$, as follows:
$$A_{10} = (2-D_{10})\left(\frac{\bar{t}_1}{\Gamma\!\left(\frac{D_{10}-3}{D_{10}-2}\right)}\right)^{D_{10}-2}, \quad (75)$$
$$A_{01} = (2-D_{01})\left(\frac{\bar{t}_0}{\Gamma\!\left(\frac{D_{01}-3}{D_{01}-2}\right)}\right)^{D_{01}-2}. \quad (76)$$
By introducing Equations (75) and (76) into Equation (73), we obtain the following:
$$\hat{T} \cong \left[(2-D_{10})\left(\frac{\bar{t}_1}{\Gamma\!\left(\frac{D_{10}-3}{D_{10}-2}\right)}\right)^{D_{10}-2} + (2-D_{01})\left(\frac{\bar{t}_0}{\Gamma\!\left(\frac{D_{01}-3}{D_{01}-2}\right)}\right)^{D_{01}-2}\right]^{-1}; \quad (77)$$
also, from Equation (74), we have the following:
$$\bar{t}_1 = \frac{2}{\nu_f} - \bar{t}_0. \quad (78)$$
Combining the last two equations, we obtain the integral time scale as a function of the mean crossing frequency, as follows:
$$\hat{T} \cong \left[(2-D_{10})\left(\frac{2/\nu_f - \bar{t}_0}{\Gamma\!\left(\frac{D_{10}-3}{D_{10}-2}\right)}\right)^{D_{10}-2} + (2-D_{01})\left(\frac{\bar{t}_0}{\Gamma\!\left(\frac{D_{01}-3}{D_{01}-2}\right)}\right)^{D_{01}-2}\right]^{-1}, \quad (79)$$
or the mean crossing frequency as a function of the integral time scale, as follows:
$$\nu_f \cong \frac{2}{\Gamma\!\left(\frac{D_{10}-3}{D_{10}-2}\right)\left[\frac{1}{(2-D_{10})\hat{T}} - \frac{2-D_{01}}{2-D_{10}}\left(\frac{\bar{t}_0}{\Gamma\!\left(\frac{D_{01}-3}{D_{01}-2}\right)}\right)^{D_{01}-2}\right]^{\frac{1}{D_{10}-2}} + \bar{t}_0}. \quad (80)$$
It is worth noting from Equations (70) and (80) that the mean crossing frequency has two finite limit values. The first is $\nu_m$, which represents the maximum mean frequency, obtained for $\hat{T} = 0$. The second is $\nu_f = 0$, obtained when the integral time scale reaches its maximum $\hat{T}_m$. This behavior contrasts with a Markov process, where $\nu_m$ and $\hat{T}_m$ tend to infinity.

2.6. Spectral Density

Once the autocorrelation function is determined, we can evaluate the spectral density using the Wiener–Khinchin theorem. Here, we follow Ref. [12] and use the Fourier cosine transform as follows:
$$S(f) = \int_{-\infty}^{\infty} R(\tau)\,\cos(2\pi f \tau)\, d\tau, \quad (81)$$
where $S(f)$ is the spectral density and f is the frequency. We use the definition of $S(f)$ given in https://mathworld.wolfram.com/FourierTransform.html (accessed on 24 August 2024).
Also, we highlight that Equation (81) can be used because the autocorrelation function is symmetric for positive and negative values of $\tau$; that is, the autocorrelation is an even function:
$$E(\tau) = \frac{1}{2} \left( R(\tau) + R(-\tau) \right) = R(\tau). \qquad (82)$$
The spectral density calculation depends on the assumptions previously considered. We use Assumption 1, which is physically consistent, together with the autocorrelation function given by Equation (47), obtaining
$$S(f) = \frac{2 \pi\, \bar{k}_{10}\, \bar{k}_{01}}{(\bar{k}_{01} + \bar{k}_{10}) \left( f^2 + (\bar{k}_{01} + \bar{k}_{10})^2 \right)} + \frac{2 \pi\, \bar{k}_{01}^2\, \delta(f)}{(\bar{k}_{01} + \bar{k}_{10})^2}. \qquad (83)$$
After Fourier transforming the autocorrelation function, the $\delta$ function arises from the constant term $\bar{k}_{01}^2 / (\bar{k}_{01} + \bar{k}_{10})^2$; discarding this last term yields a modified spectral density:
$$S(f) = \frac{2 \pi\, \bar{k}_{10}\, \bar{k}_{01}}{(\bar{k}_{01} + \bar{k}_{10}) \left( f^2 + (\bar{k}_{01} + \bar{k}_{10})^2 \right)}. \qquad (84)$$
As with the autocorrelation function, the last two equations are equivalent to those obtained for a Markov signal, albeit with $k_{ij}$ replaced by $\bar{k}_{ij}$ [4,14].
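A minimal sketch of Equation (84) in Python (the mean rates used below are placeholder values, not results from the paper): the modified spectral density is a Lorentzian in f, so its half-power point sits at $f = \bar{k}_{01} + \bar{k}_{10}$.

```python
import math

def spectral_density(f, k10_bar, k01_bar):
    # Modified (delta-free) spectral density, Equation (84).
    s = k01_bar + k10_bar
    return 2.0 * math.pi * k10_bar * k01_bar / (s * (f * f + s * s))

# Placeholder mean rates, for illustration only
k10_bar, k01_bar = 18.3, 18.3
s = k10_bar + k01_bar

# Lorentzian shape: half of the zero-frequency value at f = s
print(spectral_density(0.0, k10_bar, k01_bar))
print(spectral_density(s, k10_bar, k01_bar))
```

The half-power property follows directly from the Lorentzian denominator: at $f = \bar{k}_{01} + \bar{k}_{10}$ the denominator doubles, halving $S(f)$.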

3. Numerical Experiments

In this section, we numerically generate synthetic random telegraphic signals without introducing any of the simplifications or assumptions made above. The numerical results presented subsequently therefore assess the accuracy of the expressions derived in the preceding sections. All the numerical tests were performed on a personal computer with an AMD Ryzen 7 3700X 8-core processor (Santa Clara, CA, USA) and an NVIDIA GeForce RTX 2060 SUPER (TU106) graphics card (Santa Clara, CA, USA). For a typical numerical test, the CPU time is of the order of minutes (less than an hour).
We want to create a process that takes only two states, c = 0 and c = 1. Such a process can be constructed in two ways: either step by step, using infinitesimally small time intervals (dt), or globally, by generating the random time intervals ($t_0$ and $t_1$) during which the system remains in states c = 0 and c = 1, respectively. For memoryless processes, the two methods are equivalent, but only the first allows memory to be included in the process. This is why we use this method in our numerical simulations.
The signal's state at each time is determined by its state a time $\Delta t$ earlier. If the series is in state i at time $l \Delta t$, then the probability that it is in state $j \neq i$ at time $(l+1) \Delta t$ is $k_{ij} \Delta t$, where $k_{ij}$ is the probability per unit time that the system transitions from state i to state j (the probability transition rate). The time step $\Delta t$ is selected so that $k_{ij} \Delta t \ll 1$. Correspondingly, the probability that the series remains in state i is $1 - k_{ij} \Delta t$. Here, $i, j \in \{0, 1\}$, with states 1 and 0 representing c = 1 and c = 0, respectively.
The sum of the probabilities of c = 0 and c = 1 must satisfy $P_1(t) + P_0(t) = 1$. Therefore, we can delineate line segments on the interval [0, 1], each corresponding to the transition from state i to state j and of length equal to the probability of the transition $i \to j$. We then use a generator of pseudo-random real numbers with uniform probability on [0, 1]; the segment in which the drawn number falls determines the state j of the series at time $(l+1) \Delta t$ [5].
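The step-by-step generation procedure described above can be sketched as follows (Python; the function name and parameter values are illustrative, not taken from the paper's code). The time already spent in the current state is tracked because the fractal-like rates $k_{ij}(t) = A_{ij}\,t^{1-D_{ij}}$ depend on it, which is precisely how memory enters the process.

```python
import random

def generate_signal(A10, D10, A01, D01, dt, n_steps, seed=0):
    # Step-by-step construction: at each step, switch with probability
    # k_ij(t) * dt, where t is the time already spent in the current
    # state and k_ij(t) = A_ij * t**(1 - D_ij).
    rng = random.Random(seed)
    c = 0                 # current state (0 or 1)
    t_state = dt          # time spent so far in the current state
    signal = []
    for _ in range(n_steps):
        signal.append(c)
        if c == 1:
            k = A10 * t_state ** (1.0 - D10)   # rate of 1 -> 0
        else:
            k = A01 * t_state ** (1.0 - D01)   # rate of 0 -> 1
        # dt must be small enough that k * dt << 1
        if rng.random() < k * dt:
            c = 1 - c     # transition: reset the in-state clock
            t_state = dt
        else:
            t_state += dt
    return signal

# Symmetric Case 1: the time average of c should approach 0.5
sig = generate_signal(10.0, 1.2, 10.0, 1.2, dt=1e-4, n_steps=200_000, seed=1)
print(sum(sig) / len(sig))
```

For the symmetric parameters of Case 1, the time average of c over a long enough signal fluctuates around the theoretical value 0.5.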
To validate the theoretical expressions derived in this work, we conducted numerical simulations for several sets of probability transition rate parameters. We focus on two specific cases, although similar results were obtained in the other tests. The first case has symmetric probability transition rates, with $D_{10} = D_{01} = 1.2$ and $A_{10} = A_{01} = 10$. The second test uses non-symmetric probability transition rates, with $A_{10} = 15$, $A_{01} = 8$, $D_{10} = 1.261859507$, and $D_{01} = 1.35$. Note that $D_{10} = 1.261859507 \cong \ln 4 / \ln 3$ is the similarity dimension of the von Koch curve; therefore, the probability transition rate $k_{10}$ is determined by the von Koch fractal curve [6].

3.1. Case 1: Symmetric Probability Transition Rates

Here, we study a symmetric test with $A_{10} = A_{01} = 10$ and $D_{10} = D_{01} = 1.2$, which gives the following probability transition rates:
$$k_{01} = 10\, t^{-0.2}, \qquad k_{10} = 10\, t^{-0.2}. \qquad (85)$$
The resulting signal displays the following parameters.
Processes:
-Total signal time = 149.9658 seconds
-Number of c = 1 plus c = 0 cycles = 1542
Average c:
-Numerical or generated average value of c = 0.499145
-Theoretical average value of c = 0.5
-Percentage error in the average value of c = 0.171039
Average 1 − c:
-Numerical or the generated average value of 1 − c = 0.500855
-Theoretical average value of 1 − c = 0.5
-Percentage error in the average value of 1 − c = 0.171039
Waiting time of c:
-Numerical or generated average waiting time for c = 0.048543 seconds
-Theoretical average waiting time for c = 0.048205 seconds
-Percentage error in the average waiting time for c = 1.0477
Waiting time of 1 − c:
-Numerical or generated average waiting time for 1 − c = 0.04871 seconds
-Theoretical average waiting time for 1 − c = 0.0482 seconds
-Percentage error in average waiting time for 1 − c = 1.0477
Mean crossing frequency:
-Numerical or generated crossing frequency = 20.56469 1/s
-Theoretical crossing frequency = 20.74466 1/s
-Percentage error in crossing frequency = 0.867577
Probability transition rate k 10 :
-Numerical or generated average probability transition rate k 10 = 18.31357
-Theoretical average probability transition rate k 10 = 18.33923
-Percentage error in average probability transition rate k 10 = 0.1399373
Probability transition rate k 01 :
-Numerical or generated average probability transition rate k 01 = 18.30105
-Theoretical average probability transition rate k 01 = 18.33923
-Percentage error in average probability transition rate k 01 = 0.208234
The previous data show that the theoretical results accurately reproduce the numerical values.
Figure 3 and Figure 4 show the conditional probabilities $P_{00}(dt)$ and $P_{11}(dt)$, i.e., the likelihood that the signal remains at c = 0 or c = 1 during a time interval dt. They are calculated using Equation (10). The theoretical values obtained from Equation (10) agree very closely with the numerical data.
The autocorrelation function is shown in Figure 5. The results calculated with the “Assumption 1” model show better agreement with the numerical ones. However, neither approximate model accurately captures the autocorrelation function for intermediate values of $\tau$; for short and large $\tau$, the Assumption 1 model accurately reproduces the numerical values.
To quantify how well the Assumption 1 and 2 models approximate the autocorrelation function, we calculate the error
$$e_r = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{R_i^t - R_i^n}{R_i^n} \right|, \qquad (86)$$
where $|a|$ is the absolute value of a, and the superscripts t and n indicate theoretical and numerical values, respectively. From this equation, we obtain $e_r = 0.02486$ for the Assumption 1 model and $e_r = 0.03754$ for the Assumption 2 model. These errors confirm that the Assumption 1 model performs better than the Assumption 2 model.
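The error metric of Equation (86) is straightforward to implement; a sketch in Python follows (the array names and sample values are illustrative, not the paper's data).

```python
def relative_error(R_theory, R_num):
    # Mean absolute relative error between theoretical and numerical
    # autocorrelation samples, Equation (86).
    assert len(R_theory) == len(R_num)
    return sum(abs((rt - rn) / rn)
               for rt, rn in zip(R_theory, R_num)) / len(R_num)

# Illustrative values only (not the paper's data)
print(relative_error([1.10, 0.88, 0.50], [1.00, 0.90, 0.50]))
```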

3.2. Case 2: Non-Symmetric Probability Transition Rates

A non-symmetric test is studied here, with $A_{10} = 15$, $A_{01} = 8$, $D_{10} = 1.261859507$, and $D_{01} = 1.35$. We highlight that $D_{10} = 1.261859507$ is the similarity dimension of the von Koch curve. The resulting probability transition rates are as follows:
$$k_{01} = 8\, t^{-0.35}, \qquad k_{10} = 15\, t^{-0.261859507}. \qquad (87)$$
Therefore, we obtain the following:
Processes:
-Total signal time = 149.8895 seconds
-Number of c = 1 plus c = 0 cycles = 2507
Average c:
-Numerical or generated average value of c = 0.1843728
-Theoretical average value of c = 0.1859916
-Percentage error in the average value of c = 0.8703348
Average 1 − c:
-Numerical or generated average value of 1 − c = 0.81562715
-Theoretical average value of 1 − c = 0.8140084
-Percentage error in the average value of 1 − c = 0.19886
Waiting time of c:
-Numerical or generated average waiting time of c = 0.01102337 seconds
-Theoretical average waiting time of c = 0.01092298 seconds
-Percentage error in the average waiting time of c = 2.00758
Waiting time of 1 − c:
-Numerical or generated average waiting time of 1 − c = 0.04876512 seconds
-Theoretical average waiting time of 1 − c = 0.04780538 seconds
-Percentage error in average waiting time of 1 − c = 2.00758
Mean crossing frequency:
-Numerical or generated crossing frequency = 33.45125 1/s
-Theoretical crossing frequency = 34.05509 1/s
-Percentage error in crossing frequency = 1.773
Probability transition rate k 10 :
-Numerical or generated average probability transition rate k 10 = 72.65762
-Theoretical average probability transition rate k 10 = 72.89065
-Percentage error in average probability transition rate k 10 = 0.31967
Probability transition rate k 01 :
-Numerical or generated average probability transition rate k 01 = 17.64497
-Theoretical average probability transition rate k 01 = 17.73705
-Percentage error in average probability transition rate k 01 = 0.1263
Similar to the previous test, the data show good agreement between theoretical and numerical values.
Figure 6 and Figure 7 show the probabilities $P_{00}(dt)$ and $P_{11}(dt)$ for the non-symmetric test, given by Equation (10). The theoretical probabilities agree very well with the numerical ones.
The autocorrelation function is displayed in Figure 8. Better agreement with the numerical results is obtained with the “Assumption 1” model; the “Assumption 2” model does not correctly capture the numerical autocorrelation. This result is consistent with the restriction of Equation (57), which is only valid for $D_{10} = D_{01}$.
We use Equation (86) to evaluate the error introduced by the Assumption 1 and 2 models, obtaining $e_r = 0.0897813$ for the Assumption 1 model and $e_r = 2.146914$ for the Assumption 2 model.

4. Conclusions

In this study, a random telegraphic signal with fractal-like probability transition rates was analyzed, exhibiting more complex behavior than a Markov series. Theoretical formulas were derived for the unconditional and conditional probabilities, waiting times, average phase duration, dimensional and non-dimensional autocorrelation functions, spectral density, mean crossing frequency, and integral time scale, including the relationships between the mean crossing frequency, the integral time scale, and the average waiting times. The formulas were validated against numerical simulation results with either symmetric or non-symmetric probability transition rates, the latter represented by a von Koch curve for $k_{10}$. Furthermore, a detailed derivation of all the mentioned parameters was provided, and it was demonstrated that the autocorrelation function previously used for this type of signal (referred to as the “Assumption 2” model) is only an approximation and not an exact expression. A new model, referred to as “Assumption 1”, was introduced and shown to yield a more accurate expression for the autocorrelation function. In conclusion, the new theoretical expressions proved to be quite accurate when compared to their numerically obtained counterparts for both symmetric and non-symmetric probability transition rates.
Finally, there are two potential areas for future research. The first involves evaluating the statistical variables in chaotic intermittency using a random telegraph signal, similar to the one examined here. This would involve a signal with probability transition rates that depend on the waiting or sojourn times [17,18,19]. The second area of research involves turbulent combustion in the flamelet regime. A highly successful model in this context is the BML model [16,20,21], which uses probability transition rates dependent on the Gamma 2 function. An alternative approach could involve extending the model to include fractal probability transition rates.

Author Contributions

Conceptualization, S.E., P.B. and L.G.M.; methodology, S.E. and P.B.; software, S.E., P.B. and L.G.M.; validation, S.E., P.B. and L.G.M.; formal analysis, S.E. and P.B.; investigation, S.E., P.B. and L.G.M.; resources, L.G.M.; writing—original draft preparation, S.E., P.B. and L.G.M.; writing—review and editing, S.E., P.B. and L.G.M.; visualization, S.E., P.B. and L.G.M.; supervision, S.E.; project administration, S.E. and P.B.; funding acquisition, S.E., P.B. and L.G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by PIP-GI-CONICET “Efectos de viento, explosiones y fuego en tanques de almacenamiento de combustibles” and Proyecto Consolidar, SECyT-Universidad Nacional de Córdoba “Desarrollo y aplicación de estudios teóricos, numéricos y códigos computacionales en mecánica de fluidos e intermitencia caótica”. The authors have also received support from the Facultad de Ingeniería y Ciencias Básicas, Fundación Universitaria los Libertadores, CNRS and University Pau and Pays Adour, LMAP, Inria Cagire Team.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liebovitch, L. The fractal random telegraphic signal: Signal analysis and applications. Ann. Biomed. Eng. 1988, 16, 483–494. [Google Scholar] [CrossRef] [PubMed]
  2. Wu, X.; Yang, Q.; Li, J.; Hou, F. Investigation on the prediction of cardiovascular events based on multi-scale time irreversibility analysis. Symmetry 2021, 13, 2424. [Google Scholar] [CrossRef]
  3. Hu, W. Stochastic finite-time stability for stochastic nonlinear systems with stochastic impulses. Symmetry 2022, 14, 817. [Google Scholar] [CrossRef]
  4. Machlup, S. Noise in semiconductors: Spectrum of a two-parameter random signal. J. Appl. Phys. 1954, 25, 341–343. [Google Scholar] [CrossRef]
  5. Liebovitch, L.; Fischbarg, J.; Koniarek, J. Ion channel kinetics: A model based on fractal scaling rather than multistate Markov processes. Math. Biosci. 1987, 84, 37–68. [Google Scholar] [CrossRef]
  6. Mandelbrot, B. Fractal Geometry of Nature; W.H. Freeman: New York, NY, USA, 1983. [Google Scholar]
  7. Roy, A.; Sujith, R. Fractal dimension of premixed flames in intermittent turbulence. Combust. Flame 2021, 226, 412–418. [Google Scholar] [CrossRef]
  8. Nikan, O.; Avazzadeh, Z.; Tenreiro Machado, J.A. Localized kernel-based meshless method for pricing financial options underlying fractal transmission system. Math. Methods Appl. Sci. 2024, 47, 3247–3260. [Google Scholar] [CrossRef]
  9. Mandelbrot, B. Self-affine fractals and fractal dimension. Phys. Scr. 1985, 32, 257–260. [Google Scholar] [CrossRef]
  10. Taylor, R.; Pilgrim, I. Fractal analysis of time-series data sets: Methods and challenges. In Fractal Analysis; Ouadfeul, S.-A., Ed.; IntechOpen: London, UK, 2018. [Google Scholar]
  11. Papoulis, A.; Pillai, U. Probability, Random Variables, and Stochastic Processes, 4th ed.; McGraw Hill: New York, NY, USA, 2002. [Google Scholar]
  12. Van Kampen, N. Stochastic Processes in Physics and Chemistry, 3rd ed.; North-Holland: Amsterdam, The Netherlands, 1984. [Google Scholar]
  13. Abramowitz, M.; Stegun, I. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th ed.; U.S. Government Printing Office: Dover, NJ, USA, 1972.
  14. Elaskar, S.; Bruel, P.; Frías, M. Proceso de Markov aplicado al estudio de llamas premezcladas turbulentas. In Proceedings of the IEEE Biennial Congress of Argentina (Argencon 2024), San Nicolás de los Arroyos, Argentina, 18–20 September 2024. [Google Scholar]
  15. Pugachev, V. Introducción a la Teoría de las Probabilidades; Ed. Mir: Moscow, Russia, 1973. (In Spanish) [Google Scholar]
  16. Bray, K.; Champion, M.; Libby, P. Scalar dissipation and mean reaction rates in premixed turbulent combustion. In Proceedings of the Symposium (International) on Combustion, Seattle, WA, USA, 14–19 August 1989; Elsevier: Amsterdam, The Netherlands, 1989; Volume 22. No. 1. [Google Scholar]
  17. Elaskar, S.; del Río, E. New Advances on Chaotic Intermittency and Its Applications; Springer: New York, NY, USA, 2017. [Google Scholar]
  18. Elaskar, S.; del Río, E.; Elaskar, S. Intermittency Reinjection in the Logistic Map. Symmetry 2022, 14, 481. [Google Scholar] [CrossRef]
  19. Elaskar, S.; del Río, E. Review of Chaotic Intermittency. Symmetry 2023, 15, 1195. [Google Scholar] [CrossRef]
  20. Shepherd, I.; Moss, J. Characteristic scales for density fluctuations in a turbulent premixed flame. Combust. Sci. Technol. 1983, 33, 231–243. [Google Scholar] [CrossRef]
  21. Lipatnikov, A. Fundamentals of Premixed Turbulent Combustion; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
Figure 1. $\bar{t}_1$ versus $D_{10}$ from Equation (15) with $A_{10} = 1$.
Figure 2. Generic representation of the waiting times $t_1$ and $t_0$ and the overall time t.
Figure 3. Symmetric signal with $A_{10} = A_{01} = 10$ and $D_{10} = D_{01} = 1.2$. Probability that c remains at 0 during dt. Red points: numerical result. Blue line: theoretical values. The curves are superimposed.
Figure 4. Symmetric signal with $A_{10} = A_{01} = 10$ and $D_{10} = D_{01} = 1.2$. Probability that c remains at 1 during dt. Red points: numerical result. Blue line: theoretical values. The curves are superimposed.
Figure 5. Symmetric signal with $A_{10} = A_{01} = 10$ and $D_{10} = D_{01} = 1.2$. Autocorrelation function. Red line: numerical result. Black line: Assumption 1 model. Blue line: Assumption 2 model. The curves are partially superimposed.
Figure 6. Non-symmetric signal with $A_{10} = 15$, $A_{01} = 8$, $D_{10} = 1.261859507$, and $D_{01} = 1.35$. Probability that c remains at 0 during dt. Red points: numerical result. Blue line: theoretical values. The curves are superimposed.
Figure 7. Non-symmetric signal with $A_{10} = 15$, $A_{01} = 8$, $D_{10} = 1.261859507$, and $D_{01} = 1.35$. Probability that c remains at 1 during dt. Red points: numerical result. Blue line: theoretical values. The curves are superimposed.
Figure 8. Signal with $A_{10} = 15$, $A_{01} = 8$, $D_{10} = 1.261859507$, and $D_{01} = 1.35$. Autocorrelation function. Red line: numerical result. Black line: Assumption 1 model. Blue line: Assumption 2 model.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
