
RV'04 Preliminary Version

Simulation of Simultaneous Events in Regular Expressions for Run-Time Verification
Usa Sammapun, Arvind Easwaran, Insup Lee, Oleg Sokolsky 1,2
    Department of Computer and Information Science University of Pennsylvania Philadelphia, PA, USA

Abstract
When specifying system requirements, we want a language that can express the requirements in the simplest and most intuitive form. Although the MaC system provides an expressive language, called MEDL, it is generally awkward in MEDL to express certain features that are inherent in safety properties, such as the temporal ordering of complex events, timing constraints, and frequencies of events. MEDL-RE extends the MEDL language with regular expressions to easily specify timing dependencies and timing constraints. Because of simultaneous events generated by the MaC system, monitoring regular expressions by simulating DFAs poses a potential problem: the simulation must follow multiple paths concurrently, which results in exponential running time. To handle simultaneous events inexpensively, we generate a dependency graph to identify possible simultaneous events. Further, we augment the original DFAs with alternative transitions, which substitute for the multi-path simulations.

1 Introduction

The monitoring, checking and steering (MaC) framework [9,10,11] has been designed to ensure that the execution of a real-time system is consistent with its requirements at run-time. It provides a language, called MEDL, to specify safety properties based on LTL [13]. The safety properties include both computational and timing requirements. The safety properties are defined in terms of events, conditions, auxiliary variables, and auxiliary functions. Events are instantaneous incidents such as variable updates or the start/end of a method call. Conditions are propositions about the program that may be true or false for a duration of time.
1 This research was supported in part by NSF CCR-9988409, NSF CCR-0086147, NSF CCR-0209024, and ARO DAAD19-01-1-0473.
2 Email: {usa,arvinde,lee,sokolsky}@saul.cis.upenn.edu
This is a preliminary version. The final version will be published in Electronic Notes in Theoretical Computer Science, URL: www.elsevier.nl/locate/entcs


Those events and conditions can also be composed using the connectives described in Section 2. Auxiliary variables are temporary storage, which allows us, for example, to count the number of occurrences of an event. Auxiliary functions return values and time stamps of events. The MEDL language provides an elegant and intuitive way to specify computational requirements. It does not, however, provide an intuitive way to specify timing requirements, such as temporal ordering of events with complex timing dependencies, timing constraints, or counting of specific events in a time interval. The extension of MEDL called MEDL-RE [14] adds the ability to specify the ordering of events in the form of regular expressions (RE) over a customized set of events, which offers users clearer and less error-prone specifications. In this paper, we propose an efficient simulation of the corresponding DFAs at runtime. By observing a sequence of events occurring in a target system, a DFA generated by MEDL-RE matches the sequence of events against a specific regular expression. However, because composite events can be triggered simultaneously and cannot be temporally ordered in any way, the DFA must recognize these events under any ordering of the events. This means that if a regular expression has some inherent ordering of simultaneous events, the DFA must accept all different permutations of such an order. We refer to those permutations as linearizations. To build such a DFA, we augment the original DFA with alternative transitions to provide paths from one linearization to another. We then prove that the original DFA and the augmented DFA are equivalent. Only those DFAs whose underlying regular expressions have candidate simultaneous events in their relevant sets are augmented. The candidate simultaneous events can be statically detected by building and traversing a dependency graph, as described in Section 4.
The paper is organized as follows. Section 2 briefly gives an overview of the MaC framework. Section 3 introduces the extension MEDL-RE. Section 4 discusses the construction of the dependency graph. Section 5 presents and proves our augmented DFA algorithm. Section 6 presents related work. Lastly, Section 7 concludes the paper.

2 MaC Overview

2.1 MaC Architecture
The MaC system has been developed to ensure that a program runs correctly with respect to its formal requirement. Fig. 1 shows the MaC architecture. The system works as follows. A user specifies a requirement of a target program in a formal language. Given a target program and the requirement, the MaC system inserts a collection of probes, or a filter, into the target program. During run-time, the execution of the probed target program is monitored and checked by the MaC system. An event recognizer detects primitive events and conditions from state information received from the filter.

[Figure: block diagram in which the Monitoring Script, Requirement Specification, and Steering Script are compiled into the Event Recognizer, Checker, and Injector; these exchange low-level information, events and condition changes, and steering action invocations with the Filter instrumented into the running system.]

Fig. 1. Overview of the MaC architecture

The primitive events are changes of value (update(object)), entering a method (startM(method)), and leaving a method (endM(method)). The primitive conditions are boolean variables or boolean statements composed of primitively typed variables in the target program. These events and conditions are then sent to a run-time checker, which determines whether or not the current execution history satisfies the requirement specification. The execution history is captured from the sequence of events sent by the event recognizer. If the run-time checker detects any violation, it notifies the user and triggers an injector to take a steering action specified in a steering script and steer the target program back to a safe state.

2.2 MaC Languages
The MaC system provides three languages. The requirement specification, or the Meta-Event Definition Language (MEDL), based on an extension of linear temporal logic (LTL) [13], allows us to express a large subset of safety properties of systems, including real-time properties. The monitoring script, expressed in the Primitive Event Definition Language (PEDL), is used to define what information is sent from a filter to the MaC system and how it is transformed into the events and conditions used in MEDL. The steering script, written in the Steering Action Definition Language (SADL), is used to specify actions to be invoked when violations occur. See [9,10] for PEDL and SADL details.

2.2.1 Events and Conditions
Events occur instantaneously during the system execution, whereas conditions represent information that holds for a duration of time. For example, an event denoting the call to method init occurs at the instant control is passed to the method, while a condition (angle < 30) holds as long as the value of the variable angle does not exceed 30.


The syntax of events and conditions is shown below.
E ::= e | start(C) | end(C) | E && E | E || E | E when C
C ::= c | defined(C) | [E,E) | !C | C && C | C || C | C => C
The boolean connectives used in events and conditions are defined in the usual way. Events start(c) and end(c) are triggered when c becomes true and false, respectively. An event e when c is triggered when e is triggered and c is true. A condition defined(c) is true when c is defined. A condition [e1, e2) is true between the occurrences of events e1 and e2, where e1 is included but e2 is not. For formal semantics of events and conditions, see [9,11].

2.2.2 Meta Event Definition Language (MEDL)
MEDL includes events and conditions imported from PEDL, definitions of composite events and conditions, safety properties, auxiliary variables, and auxiliary functions. A safety property can be expressed as a condition or as an event called an alarm. A safety property condition must always be true during the execution, whereas an alarm must never be raised. Auxiliary variables can be used to define events and conditions. Auxiliary variables allow us, for example, to count the number of occurrences of an event. Auxiliary functions are time(e) and value(e), which return the time stamp and the value of an event e, respectively.
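To make the semantics of these connectives concrete, here is a minimal Python sketch of how an event recognizer could evaluate start(C), end(C), and E when C from a single observation step. The Step encoding, field names, and example are illustrative assumptions for this sketch, not the actual MaC implementation.

from dataclasses import dataclass

@dataclass
class Step:
    prev_conds: dict        # condition name -> bool in the previous state
    curr_conds: dict        # condition name -> bool in the current state
    events: set             # primitive events observed at this step

def start(step, c):
    # start(c) fires when condition c changes from false to true
    return not step.prev_conds.get(c, False) and step.curr_conds.get(c, False)

def end(step, c):
    # end(c) fires when condition c changes from true to false
    return step.prev_conds.get(c, False) and not step.curr_conds.get(c, False)

def when(e_fired, step, c):
    # "e when c" fires only if e fires while c currently holds
    return e_fired and step.curr_conds.get(c, False)

# Example: e1 = update(A.x) when A.c1
step = Step(prev_conds={"A.c1": False}, curr_conds={"A.c1": True},
            events={"update(A.x)"})
e1 = when("update(A.x)" in step.events, step, "A.c1")   # True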

3 Syntax and Semantics of MEDL-RE

This section describes the extension of the MEDL language to include the specification of the ordering of events, by expressing them as a regular expression, and of a frequency of events in a time interval.

3.1 Syntax
Let R be a set of regular expression names, e.g., R, R1, R2, etc. Let s be statements in MEDL-RE, s_MEDL be the existing MEDL statements, Ē be a list of events, r be regular expressions, and C be the conditions described in Section 2.2.1. We define the syntax of the MEDL-RE extension as follows.
s ::= s_MEDL                                     Existing MEDL statements
    | RE R {Ē} = < r >                           New MEDL-RE statement
r ::= E | r . r | r + r | r*                     Regular expressions
E ::= e | start(C) | end(C)                      Existing events
    | E && E | E || E | E when C
    | startRE(r) | success(r) | fail(r)          Regular expression events
f ::= time(E) | value(E) | occur(E, C)           Auxiliary functions

The MEDL-RE extension includes regular expressions, the events startRE(r), success(r), and fail(r), and an auxiliary function occur(E, C).


Σ_E = {E}
Σ_{R1 + R2} = Σ_{R1} ∪ Σ_{R2}
Σ_{R1 . R2} = Σ_{R1} ∪ Σ_{R2}
Σ_{R*} = Σ_R

Fig. 2. A function Σ_R, which returns the relevant event set of R

The regular expressions, ranged over a set Σ of events, consist of events (E), concatenation (r . r), union (r + r), and Kleene star (r*). The event startRE(r) indicates that we start observing the regular expression. The event success(r) indicates that we have found a sequence of events that matches the regular expression, and the event fail(r) indicates that we have started observing but failed to finish finding such a sequence. For an event E and a condition C, the auxiliary function occur(E, C) returns a frequency, i.e., the number of occurrences of the event E during the time interval in which C holds true.

3.2 Semantics
Let Σ_R in Fig. 2 denote the set of events specified in a RE R and let Σ_V denote the customized set of events specified as Ē in the new MEDL-RE statement shown in the syntax section. Then, the relevant event set of R is Σ = Σ_R ∪ Σ_V. We modify the model M defined in [9] as follows. A model M is a tuple (S, τ, L_C, L_E, o), where S = {s0, s1, ...} is a set of states, τ is a mapping from S to the time domain, L_C is a total function from S × C to {true, false, Λ}, where C denotes a set of condition names and Λ denotes undefined, and L_E is a partial function from S × E to a value domain, where E denotes a set of event names. For all e_k where L_E(s_i, e_k) is defined, there is an order o(s_i, e_k) such that at time τ(s_i) the order of each occurrence of e_k is distinct. The order o is a total and injective function that maps e_k and s_i to an ordered set of positive integers, and o(s_{i-1}, e_k) < o(s_i, e_l) for all i and any k, l. The semantics of MEDL-RE is defined using the derivative of a RE [5]. For any RE R and any alphabet symbol a, the derivative of R with respect to a, denoted by D_a(R), is the RE D_a(R) = {x ∈ Σ* | ax ∈ R}. Fig. 3 and Fig. 4 show the semantics of the derivative of a RE and the function E(R) from Σ* to boolean, which tests whether ε ∈ R, respectively, where R1, R2 are regular expressions, a, b ∈ Σ, and a ≠ b. Besides the derivatives, we also need to define a function FIRST(R) and a function Φ_M^o(R). FIRST(R) returns a set containing all events that can appear as the first event in the RE R. Formally, FIRST(R) = {a ∈ Σ ∪ {ε} | ax ∈ R where x ∈ Σ*}. Φ_M^o(R) represents the remainder of the RE R at an order o after a sequence of derivatives. We define Φ_M^o(R) as follows.
Φ_M^{o(s_i,e)}(R) = R                           if M, τ(s_i) |= e where e ∈ FIRST(R)
Φ_M^{o(s_i,e)}(R) = D_e(Φ_M^{o(s_i,e)-1}(R))    if M, τ(s_i) |= e

We also define the language L(R) of R as follows.
L(∅) = ∅
L(ε) = {ε}
L(a) = {a}
L(R1 . R2) = {x1 . x2 | x1 ∈ L(R1) ∧ x2 ∈ L(R2)}
L(R1 + R2) = L(R1) ∪ L(R2)
L(R*) = (L(R))*


D_a(a) = ε
D_a(b) = Λ
D_a(Λ) = Λ
D_a(ε) = Λ
D_a(R1 + R2) = D_a(R1) + D_a(R2)
D_a(R*) = (D_a(R)) . R*
D_a(R1 . R2) = (D_a(R1)) . R2                   if E(R1) = False
D_a(R1 . R2) = (D_a(R1)) . R2 + D_a(R2)         if E(R1) = True

Fig. 3. A derivative of a regular expression R with respect to a (D_a(R))

E(a) = False
E(Λ) = False
E(ε) = True
E(R1 + R2) = E(R1) ∨ E(R2)
E(R1 . R2) = E(R1) ∧ E(R2)
E(R*) = True

Fig. 4. A function E(R), which tests whether ε ∈ R
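As a concrete illustration of Figs. 3 and 4, the following Python sketch implements the derivative D_a(R) and the test E(R) over a small regular-expression AST. The class names and encoding are assumptions made for this sketch, not the authors' implementation.

from dataclasses import dataclass

class RE: pass
class Empty(RE): pass                      # Lambda: matches nothing
class Eps(RE): pass                        # epsilon: matches the empty string

@dataclass
class Sym(RE): a: str                      # a single event symbol
@dataclass
class Cat(RE): r1: RE; r2: RE              # r1 . r2
@dataclass
class Alt(RE): r1: RE; r2: RE              # r1 + r2
@dataclass
class Star(RE): r: RE                      # r*

def E(r):                                  # does R accept the empty string?
    if isinstance(r, (Empty, Sym)): return False
    if isinstance(r, (Eps, Star)):  return True
    if isinstance(r, Alt): return E(r.r1) or E(r.r2)
    if isinstance(r, Cat): return E(r.r1) and E(r.r2)

def D(a, r):                               # the derivative D_a(R)
    if isinstance(r, (Empty, Eps)): return Empty()
    if isinstance(r, Sym): return Eps() if r.a == a else Empty()
    if isinstance(r, Alt): return Alt(D(a, r.r1), D(a, r.r2))
    if isinstance(r, Star): return Cat(D(a, r.r), r)
    if isinstance(r, Cat):
        left = Cat(D(a, r.r1), r.r2)
        return Alt(left, D(a, r.r2)) if E(r.r1) else left

# Example: R = a.a.a; after taking the derivative for "a" three times,
# the remainder accepts the empty string.
R = Cat(Sym("a"), Cat(Sym("a"), Sym("a")))
r = R
for ev in ["a", "a", "a"]:
    r = D(ev, r)
print(E(r))   # True: the sequence a.a.a matches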

Using FIRST(R) and Φ_M^o(R), we define the startRE(R) event, the success(R) event, and the fail(R) event.
M, t |= startRE(R)  iff  M, t |= e where e ∈ FIRST(R)
M, t |= success(R)  iff  M, t |= e and ε ∈ FIRST(D_e(Φ_M^{o(s_i,e)-1}(R))) where t = τ(s_i)
M, t |= fail(R)     iff  M, t |= e and D_e(Φ_M^{o(s_i,e)-1}(R)) = Λ where t = τ(s_i)

The event startRE(R) is triggered when an occurring event is one of the events that can appear as the first event in the RE R. The event success(R) is triggered when an occurring event e causes the derivative of the remainder to contain the empty string, and the event fail(R) is triggered when an occurring event e causes the derivative of the remainder to become undefined. Next, we define the frequency function occur(e, c). The function occur(e, c) returns the number of occurrences of an event e during the time interval in which a condition c holds true. occur(e, c) is defined as follows. Let f(s_i, e, c) denote the frequency of e in c at time τ(s_i). At time τ(s_i), occur(e, c) = f(s_i, e, c).
f(s_i, e, c) = Λ                       if M, τ(s_i) ⊭ c
f(s_i, e, c) = 0                       if M, τ(s_i) |= start(c)
f(s_i, e, c) = f(s_{i-1}, e, c)        if M, τ(s_i) |= c and M, τ(s_i) ⊭ e
f(s_i, e, c) = f(s_{i-1}, e, c) + 1    if M, τ(s_i) |= c and M, τ(s_i) |= e
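The frequency function above can be realized as a small counter that is updated once per observed state. The following Python sketch (with assumed flag names, not the MaC code) mirrors the four cases: undefined while c does not hold, reset to 0 on start(c), and incremented when e occurs while c holds.

class OccurCounter:
    # count is None while the condition c does not hold (the Lambda case)
    def __init__(self):
        self.count = None

    def step(self, c_holds, c_started, e_occurred):
        if not c_holds:
            self.count = None                   # f = Lambda
        elif c_started:
            self.count = 0                      # f = 0 on start(c)
        elif e_occurred:
            self.count = (self.count or 0) + 1  # f = f_prev + 1
        # otherwise f = f_prev (unchanged)
        return self.count

# Example: c becomes true, e occurs twice, then c becomes false again.
occ = OccurCounter()
print([occ.step(*s) for s in [(True, True, False), (True, False, True),
                              (True, False, True), (False, False, False)]])
# -> [0, 1, 2, None]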

3.3 Examples
Two examples of requirements that need such timing dependencies are
(i) an event a must not occur three times in a row, and


(ii) the ordered events w, x, y, z can occur out of order fewer than ten times while a condition c holds true.
The events are a, b, w, x, y, and z, where the events a and b are not related to the events w, x, y, and z, and vice versa. To specify those requirements in the existing MEDL, we need a few auxiliary variables to keep track of when and how many times the events a, w, x, y, and z occur. The fragment of MEDL below shows how we specify the above requirements in the original MEDL.
var aCount, wxyzCount;
alarm a3 = a when aCount == 3;
event wxyz = ( y||z when [w,x) ) || ( z when [w, end([x,y))) ) when c;
property wxyz10 = wxyzCount < 10;
a -> { aCount' = aCount+1; }
b -> { aCount' = 0; }
wxyz -> { wxyzCount' = wxyzCount+1; }
The aCount variable keeps track of how many times an event a occurs. Whenever a occurs, we increment it, and whenever b occurs, we reset it. The alarm a3 alerts users when aCount becomes 3. The event wxyz is triggered only when c holds true and when y or z occurs between w and x, or when z occurs between x and y. The wxyzCount variable stores the number of times the event wxyz has occurred. The property wxyz10 alerts users when wxyzCount is no longer less than 10. When we need to specify the ordering of more than four events, an event such as wxyz can become very complicated because too many cases need to be considered. This shows that writing requirements using the existing MEDL is error-prone and difficult to understand. By adding regular expressions to the language, an ordering of events can be expressed more intuitively. The fragment of the specification below shows how to specify the example requirements in the extended MEDL-RE.
RE a3RE {b} = <a.a.a>;
RE wxyzRE {} = <w.x.y.z>;
alarm a3 = success(a3RE);
property wxyz10 = occur(fail(wxyzRE),c) < 10;

The regular expressions a3RE and wxyzRE denote the ordering of the event a occurring three times in a row and the ordering of the event w followed by the events x, y, and z. The relevant set {b} in a3RE indicates that if an event b occurs between two events a, then the sequence would fail to match a3RE. However, MEDL-RE would ignore an event x if it occurred between two events a and would not fail the sequence. The relevant set {} in wxyzRE works similarly. The alarm a3 alerts users when we successfully match the RE a3RE, while the property wxyz10 alerts users when the RE wxyzRE fails more than ten times. The requirements are now much simpler and easier to understand than in the original MEDL.


4 Implementation

An important property of monitoring is its ability to monitor target systems as efficiently and quickly as possible using minimal system resources. Hence, using a deterministic finite automaton (DFA) to monitor a RE is preferred over a non-deterministic finite automaton (NFA). Several existing algorithms have been proposed to efficiently construct and minimize a DFA from a given RE. We have chosen the algorithm by Aho, Sethi and Ullman [2] because it generates a DFA directly without generating an NFA. The empirical result by Watson [15] also suggests that this DFA construction is efficient. Simulation of a DFA involves the following steps. Identify a RE that has the current event in its relevant set. Then compare the current event with all the possible transitions from the current state of the DFA and take the appropriate one. If the current event initiates the simulation of a DFA, then it triggers a startRE event. If the automaton moves to a final state, a success event is triggered under certain conditions, as specified in Section 5. Similarly, an event causing the automaton to get stuck triggers a fail event. However, a problem in DFA simulation arises when there are simultaneous events generated by one primitive event or condition. Since there is no inherent order amongst these simultaneous events, the DFA simulation must evaluate multiple paths simultaneously for all possible permutations of these events. This is expensive in terms of resource utilization and may also be non-viable for certain applications, especially in real-time systems.

4.1 Causes of Simultaneous Events
The MaC language consists of high-level, or composite, events and conditions, and low-level, or primitive, events and conditions, described in Sections 2.1 and 2.2. When primitive events or conditions are detected, the filter sends them to the event recognizer in the order in which they occur. These primitive events and conditions can trigger composite events or change the values of conditions. While the primitive events and conditions are ordered, we cannot order the composite events triggered by one primitive event or condition because all of those composite events occur at the same time. Since REs can be defined on composite events that can occur individually and simultaneously, we need a way to recognize the events that can occur simultaneously. If two events can occur simultaneously, then their order is insignificant and their concatenation can be permuted without changing the meaning. To handle simultaneous events inexpensively, we propose the following steps. During the static phase, we determine if events used in the REs could occur simultaneously. If such events exist, we augment the original DFA with alternative transitions. At runtime, we try to take a step using the original transitions. If it is not possible, we try to take a step using the alternative transitions, but only if there exists a set of events from among the relevant set of this RE that have occurred simultaneously.
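To make the plain (unaugmented) simulation steps above concrete, the following Python sketch steps a single DFA over relevant events and reports startRE, success, and fail. The table encoding and the example DFA (for the RE <w.x.y.z>) are illustrative assumptions, not the MaC implementation.

class DFASimulator:
    def __init__(self, transitions, start, accepting):
        self.T = transitions          # dict: (state, event) -> state
        self.q0 = start
        self.qf = accepting
        self.state = None             # None: not currently matching

    def step(self, event):
        """Feed one relevant event; return 'startRE', 'success', 'fail', or None."""
        result = None
        if self.state is None:
            if (self.q0, event) in self.T:
                self.state = self.T[(self.q0, event)]
                result = "startRE"
            else:
                return None
        elif (self.state, event) in self.T:
            self.state = self.T[(self.state, event)]
        else:                         # the automaton gets stuck
            self.state = None
            return "fail"
        if self.state == self.qf:
            self.state = None
            return "success"
        return result

# Example DFA for the RE <w.x.y.z> over the relevant events {w, x, y, z}
dfa = DFASimulator({("S","w"):"A", ("A","x"):"B", ("B","y"):"C", ("C","z"):"F"},
                   start="S", accepting="F")
print([dfa.step(e) for e in ["w", "x", "y", "z"]])  # ['startRE', None, None, 'success']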


e1 = update(A.x) when A.c1;
e2 = update(A.x) when A.c1 && A.c2;
e3 = update(A.y) when A.c2;
RE test {} = e1 . e2 . e3;

Fig. 5. Monitoring script
[Figure: dependency graph with the primitive vertices update(A.x), A.c1, A.c2, and update(A.y) at the bottom, the when and && connective vertices in the middle, and the composite events e1, e2, and e3 at the top.]

Fig. 6. Dependency graph

4.2 Detecting Simultaneous Events
To detect possible simultaneous events statically, we construct a dependency graph from the syntax of events and conditions built with the connectives described in Section 2. The dependency graph G = (V, E) is a directed graph where the vertices V are connectives, events, and conditions, and the edges E represent the dependencies between them. Let e1, e2 be events or conditions. Then, (e1, e2) ∈ E if and only if e1 is used to compose e2. For example, if e3 = e1 || e2, then (e1, ||), (e2, ||), and (||, e3) are in E. Fig. 5 shows an example monitoring script, and Fig. 6 shows the corresponding dependency graph. In the figures, there are two primitive events, update(A.x) and update(A.y), and two primitive conditions, A.c1 and A.c2. All of them are vertices at the bottom of Fig. 6. Since the event update(A.x) and the condition A.c1 are used to compose a when connective, the edges (update(A.x), when) and (A.c1, when) are in E. Since the event e1 is composed of the when connective, an edge (when, e1) is also in E. The events e2 and e3 are constructed similarly. We say that an event e2 depends on an event e1 if and only if there exists a path from e1 to e2 in G. We also say that if there is a path to each of e1 and e2 from a common vertex in the graph, then they can occur simultaneously. Thus, a set of events that can be reached from the same primitive event in the dependency graph can all occur simultaneously. For example, in Fig. 6, the events e1 and e2 depend on the same primitive event update(A.x), and therefore they can occur simultaneously. To detect dependency, we traverse the graph in a bottom-up fashion using BFS. However, the analysis of the dependency graph can yield a false positive result.


This means that simultaneous events obtained from the graph might never occur in the actual system. For example, consider the events in Fig. 6. Based on the dependency graph, the events e1 and e2 can occur simultaneously because both depend on the event update(A.x). Suppose that the conditions A.c1 and A.c2 are boolean variables in the target program and they are never true at the same time. Then, the events e1 and e2 never occur simultaneously.
There are two special cases. The first case is when an event is composed using an and connective (e1 && e2). The event with the && connective is triggered if and only if both of its two arguments e1 and e2 occur simultaneously. Therefore, if there exists an && connective along the path, the two arguments of the && connective must depend on one common vertex. Otherwise, the event with the && connective will never be triggered and, therefore, can be eliminated from the possible simultaneous events. The second case is when an event is composed using a when connective (e when c), which consists of an event side e and a condition side c. The event with the when connective is triggered if and only if the event e occurs when the condition c is true. Assume the two events we are detecting are e1 and e2, and e1 is composed using a when connective. If the common vertex of e1 and e2 lies on the event side of the when connective in e1, then e1 and e2 can occur simultaneously. If it lies only on the condition side, then e1 and e2 cannot occur simultaneously. For example, the events e2 and e3 in Fig. 6 cannot occur simultaneously. This is the case because update(A.x) and update(A.y) cannot occur simultaneously, based on our assumption that all primitive events are always ordered. However, the events e1 and e2 can occur simultaneously if both A.c1 and A.c2 are true at the same time.
The set of possible simultaneous events obtained by the above process can be used to identify REs that would require augmentation. Only those regular expressions that have a subset of the simultaneous events in their relevant sets need to be augmented.
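The static check just described can be sketched in Python as follows: build the dependency graph from the composite-event definitions and report any group of composite events reachable from the same primitive event. The graph encoding and vertex names are assumptions for illustration, and the &&/when special-case pruning discussed above is deliberately omitted.

from collections import defaultdict, deque

def reachable_events(edges, primitive_events, composite_events):
    """edges: list of (src, dst) dependency pairs, e.g. ('update(A.x)', 'when1')."""
    succ = defaultdict(list)
    for src, dst in edges:
        succ[src].append(dst)
    sim_groups = []
    for p in primitive_events:                 # BFS upward from each primitive event
        seen, queue = set(), deque([p])
        while queue:
            v = queue.popleft()
            for w in succ[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        group = seen & composite_events        # composite events depending on p
        if len(group) > 1:
            sim_groups.append(group)           # these may occur simultaneously
    return sim_groups

# The example of Figs. 5 and 6: e1 and e2 both depend on update(A.x).
edges = [("update(A.x)", "when1"), ("A.c1", "when1"), ("when1", "e1"),
         ("update(A.x)", "when2"), ("A.c1", "&&1"), ("A.c2", "&&1"),
         ("&&1", "when2"), ("when2", "e2"),
         ("update(A.y)", "when3"), ("A.c2", "when3"), ("when3", "e3")]
print(reachable_events(edges, {"update(A.x)", "update(A.y)"}, {"e1", "e2", "e3"}))
# -> [{'e1', 'e2'}]   (set printing order may vary)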

5 Algorithm for Augmenting and Simulating DFA with Simultaneous Events

Based on the information about simultaneous events provided in Section 4, we incorporate additional information into the DFA statically. We claim that this additional information efficiently substitutes for simulating multiple paths of the DFA simultaneously. The following algorithm is proposed to incorporate this additional information into the DFA. Let the DFA be a minimal DFA generated using the algorithm for DFA generation and minimization described in Section 4. We call this DFA DFA_org and the augmented DFA generated by the algorithm DFA_aug. We construct DFA_aug under the following premises.


(i) The set of simultaneous events generated by the event recognizer does not constitute multiple occurrences of the same event. This assumption is valid because the MaC system does not record the number of occurrences of an event at any given time instant. It only records the presence or absence of the event occurrence.
(ii) DFA_org consists of a single accepting state. Since we detect only the shortest sequence accepted by DFA_org, we can remove all outgoing transitions from all the accepting states. The minimization procedure would then merge those states into one.
Let DFA_org = (Q, Σ, T, q0, qf), where Q is a finite set of states, Σ is the relevant event set of R as described in Section 3.2, T : Q × Σ → Q is a partial transition function, q0 ∈ Q is the start state, and qf ∈ Q is the single accepting state. Given DFA_org, the algorithm constructs DFA_aug = (Q, Σ, T, q0, qf, D, U, ES, Talt), where Q, Σ, T, q0, and qf are obtained from DFA_org, D : T → N ∪ {0} is the distance annotation of a transition, indicating its longest distance from qf, U : T → N ∪ {0} is the unique distance function that maps each transition of a state to a unique natural number or zero, ES : Q → P(Σ*) is an event string annotation of a state, and Talt : Q × Σ → Q is a partial alternative transition function. We denote T(q, e) = Λ iff (q, e) ∉ dom(T), and we denote (q, e, q′) ∈ T iff T(q, e) = q′. D, U, ES, and Talt are described in further detail in the following sections. The algorithm for DFA_aug construction has two phases: annotation of transitions and states, and generation of alternative transitions.

5.1 Phase 1: Annotation of Transitions and States
During this phase, we annotate each transition t ∈ T of DFA_org with a numeric value, denoted D(t), such that the value
(i) equals the distance of the transition t from the accepting state qf. This distance is equal to the number of transitions that must still be traversed to reach qf after taking this transition t, and
(ii) is the maximum of all such distances.
Transitions that constitute cycles or self loops are ignored during this phase. The result of this distance annotation is shown in Fig. 8. The annotation can be done by traversing the DFA using a BFS algorithm starting at qf. The distance information D(t) will assist in the ordering of transitions for simulation at runtime. We will traverse one of the outgoing transitions from the current state based on the distance values associated with the transitions, i.e., lower distance values first. The rationale behind this choice is that transitions that are closer to the accepting state should be attempted before those that are farther away from it. In case of a conflict between transitions (i.e., two transitions having the same distance value), we pick transitions based on the ordering of the corresponding events in the relevant set.

[Figure: the original minimized DFA, with start state S, accepting state qf, an upper path S --e1--> A --e2--> B --e4--> C --e5--> qf with a self loop on e3 at B, and a lower path S --e2--> D --e1--> E --e4--> F --e6--> qf.]

Fig. 7. Original minimized DFA_org
[Figure: the same DFA annotated with distances D and event strings ES: transitions e1/3, e2/2, e4/1, e5/0 along the upper path and e2/3, e1/2, e4/1, e6/0 along the lower path; event strings {e1} at A, {e1e2, e1e2e3} at B, {e1e2e4, e1e2e3e4} at C, {e2} at D, {e2e1} at E, and {e2e1e4} at F.]

Fig. 8. Distance D and event string ES annotations of DFA_aug

In Fig. 8, the start state S has two outgoing transitions, both numbered 3. We order these transitions based on the ordering of the events e1 and e2 in the relevant set. Let this unique ordering of transitions t be called U(t). During this phase, we also annotate each state of DFA_aug with additional information indicating the set of possible sequences of input events that can be encountered to reach this state from the start state q0. Each such input sequence is called an event string of that state. Note that there can be multiple event strings for each state. This annotation step is also shown in Fig. 8. The event string set of each state q, called ES(q), is equal to the concatenation of the set of event strings of all its predecessor states with the corresponding transitions. All states that have a self loop on an event e have the event e appended to each of their event strings after the event strings are replicated. As shown in Fig. 8, the state B has a self loop on the event e3. Its set of event strings is then given by the event string {e1e2} for the normal transition and the event string {e1e2e3} for the self loop. In case there is a cycle abca, then abc is appended to each of the event strings of the states that constitute the cycle, after replication of the event strings.
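One way to reproduce the distance annotation D(t) of Fig. 8 is sketched below in Python: for each non-self-loop transition, D is taken as the longest cycle-free distance from the transition's target state to qf. The encoding is an assumption made for illustration, not the authors' implementation.

def annotate_distances(transitions, qf):
    """transitions: dict (state, event) -> state. Returns dict (state, event) -> D."""
    outgoing = {}
    for (q, e), q1 in transitions.items():
        outgoing.setdefault(q, []).append((e, q1))

    def dist(q, visited=frozenset()):
        # longest distance from state q to qf, ignoring self loops and cycles
        if q == qf:
            return 0
        best = None
        for e, q1 in outgoing.get(q, []):
            if q1 == q or q1 in visited:          # skip self loops and cycles
                continue
            d = dist(q1, visited | {q})
            if d is not None:
                best = d + 1 if best is None else max(best, d + 1)
        return best

    return {(q, e): dist(q1)
            for (q, e), q1 in transitions.items()
            if q1 != q and dist(q1) is not None}

# The DFA of Fig. 7 (upper path S-A-B-C-qf with a self loop e3 on B,
# lower path S-D-E-F-qf):
T = {("S","e1"):"A", ("A","e2"):"B", ("B","e3"):"B", ("B","e4"):"C", ("C","e5"):"qf",
     ("S","e2"):"D", ("D","e1"):"E", ("E","e4"):"F", ("F","e6"):"qf"}
print(annotate_distances(T, "qf"))
# e.g. {('S','e1'): 3, ('A','e2'): 2, ('B','e4'): 1, ('C','e5'): 0, ('S','e2'): 3, ...}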


1   for each state q ∈ Q \ {q0, qf}
2     for each event string es ∈ ES(q)
3       for each event e ∈ Σ such that T(q, e) = Λ
4         for each state q1 ∈ Q \ {q0} such that
5           max_{ei ∈ Σ, T(q1,ei) ≠ Λ}(U(T(q1, ei))) ≥ min_{ei ∈ Σ, T(q,ei) ≠ Λ}(U(T(q, ei)))
6           for each event string es1 ∈ ES(q1)
7             if (Hashed(es1) == Hashed(es))
8             then Add (q, e, T(q1, e)) to Talt (if T(q1, e) ≠ Λ)
9             else if (Hashed(es) == Hashed(Remove(e, es1)))
10            then Add (q, e, q1) to Talt

Fig. 9. Algorithm to generate alternative transitions Talt

Please note that the determination of event strings proceeds to the next state only after it is completely determined for the present state, i.e., all self loops of the current state must be resolved before proceeding to the next state. Cycles are an exception here because we cannot identify a cycle until we proceed ahead and return back to the state. Even in this case, the algorithm proceeds to the states following the cycle only after all the event strings for all the states in the cycle have been determined.

5.2 Phase 2: Generation of Alternative Transitions
Before we proceed, we need to define the terms linearization and equivalence of linearizations.
Linearization A linearization is a total order of a set of events. If this set consists of events that have occurred simultaneously, then the ordering of such simultaneous events in the set is one linearization of this set. A different ordering of these simultaneous events then constitutes a different linearization of this set.
Equivalence Two linearizations are equivalent if and only if they are different permutations of the same set of events.
During this phase, we add alternative transitions to appropriate states in DFA_org. The alternative transitions take the DFA from a state representing one linearization to another state representing an equivalent linearization of some set of simultaneous events. Let Remove(e, es) be the event string obtained by removing one occurrence of e from an event string es. Let Hashed(es) be a hash value associated with an event string es in which the ordering of events in es is insignificant. This means that two event strings have the same hash value if and only if they are equivalent linearizations. For example, Hashed(abc) = Hashed(cab), but Hashed(aabc) ≠ Hashed(abc).
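A simple realization of such an order-insensitive Hashed(es) (one possible choice, not necessarily the authors') hashes the multiset of events in the event string:

from collections import Counter

def hashed(event_string):
    """event_string: a sequence of event names, e.g. ['e2', 'e1', 'e4']."""
    return hash(frozenset(Counter(event_string).items()))

assert hashed("abc") == hashed("cab")        # equivalent linearizations
assert hashed("aabc") != hashed("abc")       # different multisets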


The algorithm shown in Fig. 9 compares the event strings of two states to see whether they are different linearizations of the same input sequence. If so, we say that the two event strings are equivalent linearizations of the input sequence, and we can add alternative transitions from one state to another without changing the meaning of the DFA. Since we always choose a transition t from a state q with the lowest U(t) first, the order of paths taken is unique. Thus, we need to add alternative transitions only from a state with lower U(t) to states with higher U(t). This means that if q and q1 are states, then we add alternative transitions from q to q1 if and only if the maximum value of U(t1) among all transitions t1 of q1 is greater than the minimum value of U(t) among all transitions t of q (lines 4–5). When adding an alternative transition annotated with an event e from q to q1, we ensure that the existing transitions of q do not include a transition annotated with the event e (line 3). Thus, the resulting finite automaton is still deterministic. There are two different cases that the algorithm handles. Assume es ∈ ES(q) and es1 ∈ ES(q1) are two event strings such that q has lower U(t) than q1. The first case is when some event strings of the two states q and q1 are different but equivalent linearizations. For example, es = abc and es1 = bca. Here the algorithm adds an alternative transition on the current event e from the state q to the state T(q1, e) (lines 7–8). Such a transition is added for each event e for which there is currently no transition from q. This case is shown in Fig. 10. Here, q = F, q1 = C, es = e2e1e4, es1 = e1e2e4, and e = e5. The algorithm then adds a transition on e5 to T(C, e5), which is the accepting state qf. The second case is when Remove(e, es1) and es represent equivalent linearizations for the two states q and q1. For example, if es = ba, es1 = aeb, and the current event is e, then the removal of e from es1 returns an event string that is equivalent to es. Here the algorithm adds an alternative transition on the current event e from q to q1 (lines 9–10). Again, the transitions are added only for those events that currently do not have transitions from q. This case is shown in Fig. 11. Here, q = F, q1 = C, es = e2e1e4, es1 = e1e2e3e4, and e = e3. The algorithm then adds a transition on e3 to C. After completion of this phase, we can delete the event string information from all the states except the ones with D(t) values of 1. We call these states the penultimate states. Their event strings will be used to verify the acceptance of the input sequence. This process will be explained in the next section.
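For illustration, the algorithm of Fig. 9 can be rendered in Python roughly as follows; the dictionary encodings of T, U, and ES and the helper names are assumptions of this sketch, not the authors' code.

from collections import Counter

def hashed(es):
    # order-insensitive hash of an event string (a list of event names)
    return hash(frozenset(Counter(es).items()))

def remove_once(e, es):
    es = list(es)
    es.remove(e)                              # Remove(e, es)
    return es

def generate_alt_transitions(Q, sigma, T, U, ES, q0, qf):
    """Q: set of states; sigma: relevant events; T: dict (state, event) -> state;
       U: dict (state, event) -> unique number; ES: dict state -> list of event strings."""
    Talt = {}
    for q in Q - {q0, qf}:
        u_min_q = min((U[(q, ei)] for ei in sigma if (q, ei) in T), default=None)
        for es in ES.get(q, []):
            for e in sigma:
                if (q, e) in T:               # line 3: only events with no original transition
                    continue
                for q1 in Q - {q0}:
                    u_max_q1 = max((U[(q1, ei)] for ei in sigma if (q1, ei) in T), default=None)
                    if u_min_q is None or u_max_q1 is None or u_max_q1 < u_min_q:
                        continue              # lines 4-5
                    for es1 in ES.get(q1, []):
                        if hashed(es1) == hashed(es):                     # lines 7-8 (Case 1)
                            if (q1, e) in T:
                                Talt[(q, e)] = T[(q1, e)]
                        elif e in es1 and hashed(es) == hashed(remove_once(e, es1)):
                            Talt[(q, e)] = q1                             # lines 9-10 (Case 2)
    return Talt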

[Figure: the annotated DFA of Fig. 8 with an added alternative transition on e5 from state F to the accepting state qf.]

Fig. 10. Case 1 of the algorithm in Fig. 9
[Figure: the annotated DFA of Fig. 8 with an added alternative transition on e3 from state F to state C, in addition to the alternative transition on e5 from Fig. 10.]

Fig. 11. Case 2 of the algorithm in Fig. 9

5.3 DFA Simulation at Runtime
At runtime, we simulate DFA_aug without the alternative transitions as long as there is no occurrence of simultaneous events. On the first occurrence of simultaneous events, we activate all the alternative transitions. From the current state, we then take the path that is closest to the accepting state, i.e., the path with the lowest U(t). Next, we simulate DFA_aug normally, taking into consideration the alternative transitions and U(t). Because events that occur simultaneously can also occur individually, some linearizations that DFA_aug accepts might not be accepted by DFA_org. When DFA_aug reaches the accepting state, we therefore verify acceptance by comparing the input sequence with the event strings of the penultimate state that has an original transition on the last event of the input sequence to the accepting state. Because DFA_org is minimal, only one such penultimate state exists. If they match, DFA_aug accepts the input sequence. Otherwise, DFA_aug rejects it. To be able to verify, we need to keep a history of all events seen from the start state. The history also keeps information about which events have occurred simultaneously and which have occurred individually. We do so by storing the input sequence as an array of simultaneous sets. To handle Kleene star, we keep a flag indicating whether or not each state has been visited, and we record only events that take the simulation from one state to another, unvisited state. This way, the resulting history string resembles the way event strings are calculated, which simplifies our verification algorithm.


1  i = 0, j = 0
2  for each ei ∈ ES_elast
3    if ei ∈ hj
4    then Remove ei from hj
5    else
6    then Reject this input string
7    if hj is empty
8    then j = j + 1
9  Accept this input string (if it has not been rejected)

Fig. 12. Verifying Algorithm

Let
H = a history of the input sequence,
elast = the last event in the input sequence,
ES_elast = the set of event strings associated with the penultimate state that has a transition in DFA_org on elast to the accepting state,
ei = the event at position i in an event string, and
hj = a set of simultaneous events, hj ∈ H, where hj occurs before hj+1.
The verifying algorithm in Fig. 12 is straightforward. For each ei ∈ ES_elast (line 2), we check whether ei occurs in the current simultaneous set (line 3). If not, the input string is rejected (line 6). When the current set is completely parsed without being rejected, we move to the next set (lines 7–8). If we reach the end of ES_elast without rejecting, we accept the input sequence (line 9). For example, let the input sequence recorded when the DFA accepted be {e1e2, e4, e5}, where the events e1 and e2 have occurred simultaneously. Since the last event in the sequence is elast = e5, we have ES_elast = {e1e2e4, e1e2e3e4}. Now, since the event string {e1e2e4} in ES_elast matches the recorded input sequence {e1e2, e4}, DFA_aug will accept this input sequence.
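The verification step can be sketched in Python as below, where the recorded history is a list of simultaneous-event sets and is matched against one event string of the penultimate state. The names and example values follow the discussion above, but the encoding itself is an assumption of this sketch.

def verify(event_string, history):
    """event_string: list of events, e.g. ['e1','e2','e4'];
       history: list of sets of simultaneously occurring events, e.g. [{'e1','e2'}, {'e4'}]."""
    history = [set(h) for h in history]        # work on copies
    j = 0
    for e in event_string:
        if j >= len(history) or e not in history[j]:
            return False                       # Reject this input string
        history[j].remove(e)
        if not history[j]:
            j += 1                             # move to the next simultaneous set
    return True                                # Accept

# The example from the text: input {e1 e2}, {e4} recorded before the last event e5
# took the DFA to the accepting state; ES_elast contains e1e2e4 and e1e2e3e4.
print(verify(["e1", "e2", "e4"], [{"e1", "e2"}, {"e4"}]))        # True
print(verify(["e1", "e2", "e3", "e4"], [{"e1", "e2"}, {"e4"}]))  # False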


5.4 Analysis of the Algorithm
The augmentation algorithm has the following running times for each phase. For phase 1, we run BFS twice, and thus the running time is O(s + t), where s = |Q| is the number of states and t = |dom(T)| is the number of transitions. For phase 2, the running time is O(s^2 · e · es^2) in the worst case, where e = |Σ| is the number of events in the relevant set and es = Σ_{q∈Q} |ES(q)| is the number of possible event strings in the regular expression. This constitutes the compile-time penalty for augmenting the DFA. The table in Fig. 13 compares the running time of this algorithm with the naive algorithm that dynamically simulates all possible paths. Since there can be O(2^n) such paths, where n is the length of the input string, the dynamic simulation approach can impose an additional running time of O(2^n). In our algorithm, the O(n) added to the running time is for the verification of the input sequence when DFA_aug accepts. The additional space is only O(n^2) because we add alternative transitions only from lower-numbered to higher-numbered paths, which is Σ_{i=1}^{n} i = O(n^2). Therefore, our algorithm is clearly preferred over the naive algorithm.
Algorithm                                    Running Time    Space
Dynamic simulation of all paths              O(2^n)          0
Augmenting with alternative transitions      O(n)            O(n^2)

    Fig. 13. Comparison between two algorithms

5.5 Equivalence of DFA_org and DFA_aug
In this section, we provide a proof that DFA_org and DFA_aug are equivalent.
Theorem 5.1 DFA_aug accepts if and only if DFA_org accepts.
Proof. Part 1: If DFA_org accepts, DFA_aug accepts. This statement can be restated as: if DFA_org accepts some linearization of the input sequence, then DFA_aug accepts all of its equivalent linearizations. We define a path and a set of paths as follows. A path in DFA_org is any path from the start state q0 to the accepting state qf. Every such path is uniquely ordered by the algorithm. The set of paths is an enumeration of all possible paths from q0 to qf of DFA_org. By the description of the algorithm, every alternative transition is a transition from one path to another path in the set, and the transitions are always taken from lower U(t) to higher U(t). Let a be the event currently being considered. Let q1 and q2 be the two states being considered. Let es1 and es2 be the two event strings associated with q1 and q2, respectively. Assume the current state is q1, and therefore es1 is one linearization of the input seen so far. The algorithm has the following alternatives.
(i) The event string es2 is a different, equivalent linearization of the input seen so far, i.e., Hashed(es1) = Hashed(es2). Then the algorithm adds an alternative transition from q1 to a state q3 on a such that a transition from q2 to q3 on a exists, i.e., T(q2, a) = q3. Thus, this alternative transition takes a step from one linearization to the other: Hashed(concat(es1, a)) = Hashed(concat(es2, a)).


(ii) The removal of a from es2 is a different, equivalent linearization of the input seen so far: Hashed(es1) = Hashed(Remove(a, es2)). Then the algorithm adds an alternative transition from q1 to q2 on a. Thus, again, this alternative transition takes a step from one linearization to the other: Hashed(concat(es1, a)) = Hashed(es2).
Now we prove that such alternative transitions exist for every pair of equivalent linearizations. Let states q1, q2, q3, q4 belong to paths p1, p2, p3, p4, respectively, and let the ordering of these paths be p1 < p2 < p3 < p4. Then the ordering of states is q1 < q2 < q3 < q4. If the event strings associated with each of these states are different, equivalent linearizations and the ordering is as given above, then it can be seen that the algorithm repeats itself for each pair of states (q1, q2), (q1, q3), (q1, q4), (q2, q3), (q2, q4), and (q3, q4), in that order. Thus, the algorithm provides a path from the current linearization of the input to any other different, equivalent linearization, governed by the unique ordering of linearizations (or paths). Since at runtime we simulate DFA_aug based on the ordering of the paths, from a given path with U(t) = k we always have a path, using the alternative transitions, to all paths with U(t) greater than k that have some equivalent linearization. Thus, DFA_aug will eventually reach the path that represents the linearization for which DFA_org accepts. Once it reaches this path, the process of picking original transitions first ensures that we will never leave this path.
Part 2: If DFA_aug accepts, DFA_org accepts. We prove this case by contradiction. Assume that DFA_aug accepts and DFA_org does not for some input sequence. By the definition of the algorithm, DFA_aug accepts if the following conditions hold.
(i) The simulation reaches the accepting state of DFA_aug.
(ii) The input sequence on which DFA_aug has accepted is equivalent to the concatenation of an event elast and some event string associated with the penultimate state that has a transition on elast to the accepting state, where elast is the last event that occurs in the input sequence.
From the definition of event strings at a state, we can claim that the event string stored at the penultimate state is equal to a string parsed by DFA_org to reach this state. This claim is valid because, while determining event strings, we use only the original transitions present in DFA_org. The transition taken from the penultimate state to reach the accepting state is also an original transition. Therefore, the event string concatenated with the last transition is accepted by DFA_org, contradicting the assumption above. □


6 Related Work

There are a few existing works that incorporate regular expressions into logic. The ForSpec Temporal Logic (FTL) [3], Intel's formal specification language, extends linear temporal logic [13] with the ability to specify all ω-regular properties. FTL allows a user to define temporal connectives over time windows and regular sequences of Boolean events, and to relate such events via special connectives. Sugar [4] adds an extensive set of operators, including regular expressions, as syntactic sugar on top of CTL [7]. The Property Specification Language [1] is a specification language for hardware modeling and verification that supports LTL [13] and extended regular expressions. Monitoring-oriented Programming (MoP) [6] provides a monitoring architecture based on LTL [13] and extended regular expressions. Temporal Rover [8] is an architecture that supports run-time monitoring of a system. Its specification language uses LTL [13] and MTL [12] with regular expressions and Time-Series. Time-Series observes temporal properties over time and is used for properties like stability, monotonicity, temporal average, sum, and max/min value. Compared to MEDL-RE, which is based on LTL and regular expressions, FTL [3], MoP [6], and Temporal Rover [8] provide similar languages based on LTL and/or MTL and regular expressions, while Sugar [4]'s language is based on CTL and similar regular expressions. However, none of them appears to face the issue of simultaneous events, and therefore no algorithm for checking dependency and augmenting a DFA has been proposed.

7 Conclusion

We have presented an algorithm to simulate an augmented DFA for the MEDL-RE extension. MEDL-RE incorporates regular expressions, which provide a more intuitive and less error-prone language for expressing complex dependencies between sequences of events, timing constraints, and the frequency of events during a time interval. The events associated with a regular expression offer the ability to detect the instant when the regular expression starts matching and the instant when we succeed or fail to find a match. The DFA is augmented by adding alternative transitions after we statically detect the possibility of simultaneous events. We also prove that the augmented DFA is equivalent to the original DFA.

    References
[1] Accellera Organization, Inc. Property Specification Language Reference Manual, Version 1.01, 2003. http://www.accellera.org/.
[2] A. V. Aho, R. Sethi, and J. D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, Reading, MA, 1986.


[3] R. Armoni, L. Fix, A. Flaisher, R. Gerth, B. Ginsburg, T. Kanza, A. Landver, S. Mador-Haim, E. Singerman, A. Tiemeyer, M. Y. Vardi, and Y. Zbar. The ForSpec Temporal Logic: A New Temporal Property-Specification Language. In Tools and Algorithms for the Construction and Analysis of Systems, pages 296–311, 2002.
[4] I. Beer, S. Ben-David, C. Eisner, D. Fisman, A. Gringauze, and Y. Rodeh. The Temporal Logic Sugar. Lecture Notes in Computer Science, 2102:363–367, 2001.
[5] J. A. Brzozowski. Derivatives of Regular Expressions. Journal of the ACM, 11(4):481–494, 1964.
[6] F. Chen and G. Rosu. Towards Monitoring-Oriented Programming: A Paradigm Combining Specification and Implementation. In Proceedings of the 3rd International Workshop on Run-time Verification, July 2003.
[7] E. M. Clarke and E. A. Emerson. Synthesis of Synchronization Skeletons for Branching Time Temporal Logic. In Logic of Programs: Workshop, Lecture Notes in Computer Science Vol. 131, D. Kozen (editor), Yorktown Heights, New York, 1981. Springer-Verlag.
[8] D. Drusinsky. Monitoring Temporal Rules Combined with Time Series. In Proceedings of the 2003 Computer Aided Verification Conference (CAV), July 2003.
[9] M. Kim. Information Extraction for Run-time Formal Analysis. PhD thesis, University of Pennsylvania, 2001.
[10] M. Kim, I. Lee, U. Sammapun, J. Shin, and O. Sokolsky. Monitoring, Checking, and Steering of Real-Time Systems. In Proceedings of the 2nd International Workshop on Run-time Verification, July 2002.
[11] M. Kim, M. Viswanathan, H. Ben-Abdallah, S. Kannan, I. Lee, and O. Sokolsky. Formally Specified Monitoring of Temporal Properties. In Proceedings of the European Conference on Real-Time Systems (ECRTS'99), pages 114–121, June 1999.
[12] R. Koymans. Specifying Real-Time Properties with Metric Temporal Logic. Real-Time Systems, 2(4):255–299, 1990.
[13] Z. Manna and A. Pnueli. The Temporal Logic of Reactive and Concurrent Systems. Springer-Verlag, 1992.
[14] U. Sammapun and O. Sokolsky. Regular Expressions for Run-Time Verification. In Proceedings of the 1st International Workshop on Automated Technology for Verification and Analysis, Dec. 2003.
[15] B. W. Watson. Taxonomies and Toolkits of Regular Languages Algorithms. PhD thesis, Eindhoven University of Technology, 1995.


