Keywords: agent reasoning; intelligent agents; goal-based reasoning; multiagent systems


Abstract

Intelligent agent systems are often used to implement complex software systems. A key aspect of agent systems is goals: a programmer or user defines a set of goals for an agent, and the agent is then left to determine how best to satisfy them. Such goals include both achievement goals and maintenance goals. An achievement goal describes a particular state the agent would like to bring about, such as being in a particular location or having a particular bank balance. A maintenance goal specifies a condition which is to be kept satisfied, such as ensuring that a vehicle stays below a certain speed, or that it has sufficient fuel. Current agent systems usually utilize only reactive maintenance goals, in that the agent takes action only after the maintenance condition has been violated. In this paper, we discuss methods by which maintenance goals can be made proactive, i.e., acting before a maintenance condition is violated, having predicted that it will be violated in the future. We provide a representation of proactive maintenance goals, reasoning algorithms, an operational semantics that realizes these algorithms, and an experimental evaluation of our approach.


1. Introduction

Intelligent agents are often used in complex dynamic environments, such as air traffic control (Ljungberg and Lucas 1992), on-board spacecraft diagnosis (Muscettola, Nayak, Pell, and Williams 1998), games (Evans 2002), and disaster recovery. Central to the design of all such systems is the balance between proactive behavior (i.e., seeking to achieve a particular outcome) and reactive behavior (i.e., responding to changes in the environment), as the uncertainty inherent in the environment makes it difficult for the agent to determine what to do.

Goals are an essential concept in agent systems. As agent systems (such as the Mars rover robot) are designed to work in dynamic environments, it is crucial for an agent to deliberate over its goals and manage them appropriately. In systems based on the Belief-Desire-Intention (BDI) model of Rao and Georgeff (1992), goals are often achievement goals, i.e., goals which are adopted by the agent to achieve a particular state (such as obtaining some particular soil samples from the surface of Mars, or gathering data on solar activity), and then dropped once this state has been achieved.

Another type of goal that is becoming increasingly important is the maintenance goal. A maintenance goal has a particular state of the world that the agent seeks to maintain, i.e., the state must be true, and kept this way indefinitely. For example, the Mars rover would be well-advised to ensure that wherever it travels, it always maintains sufficient fuel for the journey back to its base, or that it travels at no more than a specified maximum speed. Hence, the main task is to monitor the maintain condition, and to ensure that it never becomes false. Note also that maintenance goals are not dropped once any violation has been restored, and hence must be treated differently from achievement goals.

One way to incorporate maintenance goals is to do so reactively, i.e., to wait until the maintain condition becomes false before taking some action to restore it. In the case of the Mars rover, this would involve waiting until the fuel supply falls below a given level before returning to its base to refuel. This approach is taken in many implementations of maintenance goals, such as those in Jadex (Pokahr, Braubach, and Lamersdorf 2005b), JAM (Huber 1999), and JACK (Winikoff 2005).

However, it is often more rational to treat maintenance goals proactively as well, i.e., to anticipate when the maintain condition will be violated and act appropriately so that the maintain condition does not become false. In the case of the Mars rover, this would entail checking the estimated fuel use for each of its journeys, and if it anticipates that a particular journey will cause the fuel level to fall below the critical level, it is rational to return to base to recharge before setting out on the journey. A reactive approach to this problem would allow the Mars rover to commence its journey, only to find that the fuel level falls to below the critical value before the journey is complete. At this point, the robot would temporarily abandon its journey and return to base to refuel, thus wasting fuel on a journey whose failure was predictable.
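As a rough illustration of the difference, the two triggering styles can be sketched as a pair of checks over a simple linear fuel model. The threshold value, the fuel rate, and all function names below are illustrative assumptions, not part of any particular rover implementation.

```python
CRITICAL = 20.0  # illustrative refueling threshold, in fuel units

def predicted_fuel(fuel, distance, rate=1.0):
    """Fuel expected to remain if a journey of `distance` is attempted."""
    return fuel - distance * rate

def reactive_violated(fuel):
    """Reactive check: act only once the condition is already false."""
    return fuel < CRITICAL

def proactive_violation(fuel, distance, rate=1.0):
    """Proactive check: act if the planned journey is predicted to push
    the fuel level below the critical threshold."""
    return predicted_fuel(fuel, distance, rate) < CRITICAL
```

Under this sketch, a rover with 50 units of fuel facing a 40-unit journey raises no reactive alarm, but the proactive check flags the journey before departure.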

In this paper we investigate the use of maintenance goals with both reactive and proactive behavior. We provide a representation of maintenance goals that captures both reactive and proactive behavior and provide reasoning algorithms that can be used to implement both behaviors. We then give a formal semantics, based on the abstract agent language CAN (Winikoff, Padgham, Harland, and Thangarajah 2002; Sardina, de Silva, and Padgham 2006; Sardina and Padgham 2007), for both reactive and proactive maintenance goals. This is one of the main differences between this paper and our previous work (Duff, Harland, and Thangarajah 2006). Specifically, we separate the decision to act to maintain a maintenance goal from the management of the consequences of doing so. The reactive case is straightforward: action is taken to restore the maintenance goal only when it has been violated. In the proactive case, it is less obvious what the right mechanism for making the decision to act would be. One possibility is to use a formal semantics for predicting the effects of the agent's current plans. This requires not only determining an appropriate "window" in which to predict future states, but also addressing the issue of whether action should be triggered when one possible future path violates a maintenance goal, or only when all future paths lead to such a violation. This reflects a more general discussion about how bold or cautious the agent should be, in that a cautious agent would adopt the former method, and a bold agent the latter. A second possibility for making the decision to act could be based on probabilistic reasoning: rather than determining whether or not a violation will definitely occur, it is often sufficient to know that a violation is likely. For example, if the fuel level of the Mars rover drops to, say, 1% above the threshold to refuel, then any course of action is very likely to trigger a violation. A third possibility is to allow the agent designer to supply a hand-crafted procedure for predicting whether a violation will occur. This issue is discussed in more depth in Section 4.3.

For these reasons, in our semantics we provide an operator future which is used to determine when a (proactive) maintenance goal violation occurs, but we do not provide a definition of future. This means that the agent designer is free to choose whatever means of predicting maintenance goal violations is appropriate, and our semantics can be used to determine the consequences of this decision. This is similar to the semantics of van Riemsdijk, Dastani, and Winikoff (2008), in which there is a similar issue about the generation of plans for a particular (achievement) goal. In the semantics of van Riemsdijk et al. (2008), a procedure MER is used to generate a plan for a given goal, which could be a real-time planner, a lookup in an existing plan library, or some other approach. In our semantics, the future operator plays a similar role as a "plug-in".
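One way to picture the role of future is as a plug-in slot on a maintenance goal. The class below is a minimal sketch under our own illustrative names; the paper's actual semantics is defined over CAN, not over code like this.

```python
def never_violated(state, maintain):
    """Default plug-in: never predicts a violation. A designer might
    substitute look-ahead over current plans, probabilistic estimation,
    or a hand-crafted prediction procedure."""
    return False

class MaintenanceGoal:
    def __init__(self, maintain, future=never_violated):
        self.maintain = maintain  # predicate: state -> bool
        self.future = future      # plug-in: (state, maintain) -> bool

    def violated(self, state):
        """Reactive trigger: the maintain condition is false right now."""
        return not self.maintain(state)

    def predicted_violation(self, state):
        """Proactive trigger, delegated entirely to the plug-in."""
        return self.future(state, self.maintain)
```

Swapping in a different future amounts to passing a different callable, leaving the rest of the goal's reasoning untouched.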

This approach to the future operator reflects the uncertainty around the future states in which the agent will find itself. As noted above, agents are usually used in complex dynamic environments, i.e., those in which prediction of the precise future state is often difficult. In our case, this necessitates a more flexible approach to proactive maintenance goals than reactive ones.

A further issue with proactive maintenance goals is precisely what action should be taken when a violation is predicted. In particular, as is done in Hindriks and van Riemsdijk (2007), one possibility is to simply avoid current choices which lead to violations by eliminating them from consideration. Another possibility is to schedule preventative activities that will avoid the violation. In the case of the Mars rover, this would involve scheduling refueling between journeys, rather than avoiding certain journeys altogether, or dropping goals which must inevitably violate the maintain condition (such as attempting a journey to a point which is more than half of its effective range, as it cannot expect to return from such a journey). We believe it is fundamental that any framework for proactive maintenance goals must be sufficiently flexible to incorporate these various approaches.

It should be noted that using a proactive approach to maintenance goal violation does not make the reactive approach obsolete; in fact, it is entirely appropriate to always use the reactive approach in conjunction with the proactive one. If the prediction methods used by the agent are always perfectly accurate, then the proactive method for detecting maintenance goal violation will ensure that the reactive method is never needed. As this is highly unlikely to occur in practice, an agent should have both methods available. It is worth noting that if the reactive method is needed, this can be considered a failure of the proactive method, and so we can use the relative occurrences of each method as a rough measure of the effectiveness of the prediction methods used.
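The suggested measure can be phrased as a simple ratio. The function below is an illustrative sketch, including its boundary behavior; the paper does not define a specific formula.

```python
def prediction_effectiveness(proactive_triggers, reactive_triggers):
    """Fraction of maintenance interventions that were anticipated
    proactively; each reactive trigger is counted as a failure of
    the prediction method."""
    total = proactive_triggers + reactive_triggers
    if total == 0:
        return 1.0  # no interventions were needed at all
    return proactive_triggers / total
```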

We conclude this paper with an experimental evaluation of the proposed approach, illustrating the benefits of maintenance goals with proactive behavior. This involves introducing the experimental case study and the components that are varied through the course of the experiments. In particular, we investigate the performance of both reactive and proactive approaches in noisy environments. We do so by using a particular scenario for the Mars rover, in which its ability to predict distances (and hence fuel consumption for a given journey) may be erroneous. As the error increases, the proactive approach becomes increasingly sub-optimal, whereas the reactive approach degrades more gracefully.

It should be noted that we do not deal with conflicts between goals (of any type) in this paper. There is a lot of existing work on the detection and resolution of conflicts between goals, all of which is orthogonal to the work presented here. In this paper we are interested in the management of maintenance goals, and in particular the combination of reactive and proactive triggering of these goals. Conflict detection and resolution is an essential part of the execution process, but is not something to which we make a contribution in this paper.

It should also be noted that we do not discuss issues such as computation in real time, concurrent computations, or a BDI deliberation cycle (i.e., a specific operational mechanism conforming to the general parameters of Rao and Georgeff (1991)). This is not because these topics are unimportant, but because we believe it is necessary to investigate the nature of maintenance goals and their properties before such issues can be properly considered. In other words, we have tried to be as general as possible, within reason, to identify the most appropriate mechanisms for maintenance goals. Once these are known, it will be important to address issues such as these.

This paper is organized as follows. In Section 2, we discuss the background to our work, and in Section 3 we discuss how maintenance goals are represented, including a detailed description of an example based on the Mars rover. In Section 4, we give our formal semantics, and again apply it to the Mars rover scenario. In Section 5, we present and discuss our experimental results, and in Section 6 we give our conclusions and possibilities for further work.


2. Background

In this section we give a brief introduction to agent systems relevant to this work, and the types of goals that are used in them. We also discuss how these goal types are implemented in current agent programming tools.

2.1. Agents and Goals

The approach for representing and reasoning about maintenance goals described in this paper, although generic, is most suited to Belief-Desire-Intention (BDI) agents (Rao and Georgeff 1991, 1995). Beliefs represent the information an agent has about itself and the environment (potentially including information about other agents). For example, a soccer playing robot may have the belief that it is located 10 m from the ball, that the ball is located 5 m from the goalkeeper, and that the goalkeeper has seen the ball. From these beliefs, it may determine that it cannot get to the ball before the goalkeeper does. Desires represent states the agent would like to have brought about. For example, the soccer playing robot desires the ball to be in the opponent's goal. Intentions act as commitments to realizing a particular desire. Many practical agent systems represent intentions as goals with associated plans. A plan represents some method of achieving a goal, and may be represented in many forms. When an agent selects a goal for pursuit, a plan will be instantiated which, when executed, should lead to the satisfaction of the goal.

Goals are similar to desires. They represent states of the world an agent would like to see brought about. A key difference is that the desires of an agent may be inconsistent with one another, whereas goals are required to be consistent. The most common type of goal requires a particular state of the world to be achieved, termed achievement goals (or perform goals where success is not checked). When pursuing an achievement goal, an agent aims to reach a particular state of the world in which some condition is satisfied. However, an agent may also aim to keep a particular state true. In the event that the state no longer holds, an agent with a maintenance goal acts to restore that condition. For example, a soccer robot may aim to keep the ball away from the opponent.

A maintenance goal is appropriate in situations such as safety, or where repeated action may be necessary. For example, a mobile robot (Pokahr, Braubach, and Lamersdorf 2005a) may use a maintenance goal to ensure that its battery's charge is always greater than 10%. When this is no longer the case, the maintenance goal activates and causes the robot to find the closest recharger and recharge.

Unlike an achievement goal, a maintenance goal is long-lived, in that it will not be dropped upon success. Success for a maintenance goal is the continued satisfaction of the condition it is maintaining.

There are a variety of ways that this may be achieved, but they can all be described as maintaining a particular state.

One simple approach to maintaining a state is through the use of guarded actions. A guarded action is often employed in conjunction with another goal, for example, an achievement goal. The purpose of the guard is to stop the achievement goal in the event that some condition (the guard condition) no longer holds.

If an agent aims to maintain a state, then this state can be used as the guard. While performing other actions in pursuit of the achievement goal, this guarded state should persist. In the event that it does not, the achievement goal is aborted. For example, (fuel > 10, moveTo(Location10)), indicates that an agent should attempt to achieve the goal moveTo(Location10) so long as the guard condition, fuel > 10 is satisfied. In the event that the guard condition no longer holds, the agent abandons the moveTo(Location10) goal.
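A minimal sketch of this guarded execution, assuming a dictionary-based state, a unit fuel cost per unit moved, and our own function names:

```python
def run_guarded(guard, step, state, max_steps=100):
    """Pursue an achievement goal via repeated `step` calls, but
    abort as soon as the guard condition no longer holds."""
    for _ in range(max_steps):
        if not guard(state):
            return "aborted"      # guard violated: abandon the goal
        if step(state):           # step returns True once achieved
            return "achieved"
    return "timeout"

# Illustrative guard/goal pair: moveTo(Location10) guarded by fuel > 10.
def move_step(state):
    state["pos"] += 1
    state["fuel"] -= 1            # one unit of fuel per unit moved
    return state["pos"] == 10

def fuel_guard(state):
    return state["fuel"] > 10
```

Starting with 25 units of fuel, the goal is achieved; starting with 12, the guard fails after two moves and the goal is abandoned.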

This approach achieves a similar effect to maintenance goals, but at the cost of requiring the agent designer to embed the appropriate checks for all maintenance goals into every part of the code where an action may occur. This would result in significantly more complex code, as well as the inconvenience of having to update large sections of the code when a maintenance goal changes. Hence we believe that it is impractical to use guarded actions as a substitute for maintenance goals.

An alternative to guarded actions are reactive maintenance goals. A reactive maintenance goal monitors a maintenance condition, and when this condition is no longer true, only then does it cause the agent to perform some actions, which are intended to make the maintenance condition true once more. If the maintenance condition is never violated, then the reactive maintenance goal will not influence the behavior of the agent.
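This monitoring behavior can be sketched as a simple cycle; the state representation and the counting of trigger events are illustrative assumptions.

```python
def reactive_maintenance_cycle(maintain, restore, state, steps):
    """Simulate a reactive maintenance goal: restore actions run only
    after the condition has been violated, and the goal is never
    dropped after a successful restoration."""
    triggered = 0
    for step in steps:
        step(state)               # ordinary agent activity
        if not maintain(state):   # condition violated: react
            restore(state)
            triggered += 1
    return triggered
```

If the maintenance condition is never violated over the run, the returned count is zero and the goal has had no influence on behavior.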

In some ways, reactive maintenance goals are very similar to achievement goals. Both types of goals are triggered and cause actions to be performed. The important difference between these two forms of goal is that the maintenance goal is not dropped once the maintenance condition has been restored, unlike an achievement goal which is dropped once the desired state has been obtained.

Practical agent systems such as Jadex support reactive maintenance goals. When some maintenance condition is violated, action is triggered, which in turn may lead to the suspension of other goals (and in this way provide behavior similar to that found with guarded actions). Once the maintenance condition is restored, the suspended goals may be resumed. For example, monitoring the battery level in a mobile robot could be realized with a maintenance goal. A reactive maintenance goal to perform such a task would involve having a condition that triggers the maintenance goal if the battery level was less than 10% (for example). Upon activation, the maintenance goal would cause the robot to stop what it was doing and recharge at the nearest base-station. However, this behavior is only triggered when a certain value is reached; it does not take into consideration the current actions or goals of the agent. This can lead the agent to perform inefficiently, or potentially even to become stranded.

Consider an agent located 1 m from a base station with its fuel at around 15%. It would not act to recharge at this point. If it were then given the task of traveling some long distance, it might trigger the maintenance goal before the journey is complete, causing it to return to the base station. This is an example of the inefficient behavior associated with reactive maintenance goals.

One possible improvement would be to have maintenance goals behave proactively: rather than waiting for the condition to no longer be satisfied, and then reacting to restore it, it may be better to act before the condition becomes unsatisfied.

Hindriks and van Riemsdijk (2007) provide one approach to proactive maintenance goals, which utilizes a method similar to a planning mechanism. If an agent determines that its currently selected course of action would cause a violation of one or more of its maintenance goals, it aborts that course of action. Thus, it proactively prevents its maintenance goals from being violated. However, this does not provide any preventative mechanism other than simply not pursuing goals that cause the violation of maintenance conditions. One improvement would be to allow an agent to introduce actions that enable it to achieve its goals while ensuring its maintenance conditions remain satisfied (e.g., forcing the agent to refuel its tank to allow it to complete a journey without violating a maintenance condition).

Kaminka, Yakir, Erusalimchik, and Cohen-Nov (2007) suggest a multiagent approach to proactive maintenance goals. A team maintenance goal may be to ensure that the distance between two mobile robots never exceeds a certain amount. Proactively, the robots could determine where other robots were heading and thus determine if the distance would violate their maintenance conditions. If so, the robots would alter their plans to prevent this from occurring. The work presented in this paper focuses on maintenance goals for a single agent. However, we believe that our findings are applicable to many multiagent domains.

Van Riemsdijk et al. (2008) address the issue of the representation and behavior of goals in agent systems. Rather than providing descriptions of various goal types, this work presents a generic representation of goals that is suitable for a large variety of goal types. However, it is noted that the representation presented is insufficient to enable maintenance goals to be treated in a proactive manner.

The work in this paper builds upon the framework by van Riemsdijk et al. (2008) to provide maintenance goals that exhibit reactive and proactive behavior.

The KAOS goal-oriented requirements engineering framework (Darimont, Delor, Massonet, and van Lamsweerde 1997) includes maintenance goals, as well as avoid goals. Whereas a maintenance goal aims to keep some condition satisfied, the purpose of an avoid goal is to not let some condition become satisfied. By modeling an avoid goal as a maintenance goal (by negating the condition to maintain), it becomes even more apparent how important it is to take the proactive approach to maintenance goals. If an agent wishes to prevent a condition from occurring, it will more than likely need to act in advance (i.e., behave proactively) rather than rely only on a reactive approach, particularly for safety-critical applications, or when legal requirements need to be met.

Nakamura, Baral, and Bjäreland (2000), Baral and Eiter (2004), and Baral et al. (2008) focus on defining exactly the behavior of a maintenance goal, utilizing temporal operators. They identify that the temporal formula always f is too strong: it does not describe the behavior of a maintenance goal, where it is expected that maintenance conditions may fail and then need to be repaired. Further, in many cases it is impossible for an agent to exert such a high degree of control over the environment as to guarantee always f.

An alternative proposed by Nakamura et al. (2000) is always eventually f. This encodes that if f becomes false at any point, it will eventually be (re)satisfied. This encoding is also dismissed by Nakamura et al. as too strong: an agent may be so overwhelmed with requests that it cannot restore the condition it is aiming to maintain. An example presented by Nakamura et al. is an agent that monitors a user's inbox, with the maintenance goal of keeping it empty. The environment is adversarial, keeping the inbox full and thus falsifying the maintenance goal. The agent removes and processes each email at a slower rate than the environment sends email. Therefore, despite its best efforts, the agent is unable to maintain the condition of keeping the inbox empty. Yet this behavior would still be described as rational, as the agent is attempting to maintain its goal.

Nakamura et al. (2000), Baral and Eiter (2004), and Baral et al. (2008) also define the notion of k-maintainability with respect to maintenance goals. k-maintainability describes the window of opportunity required for an agent to perform the actions that would restore the maintenance condition: a window of fewer than k steps does not guarantee that an agent can maintain the desired condition, whereas a window of at least k steps, granted occasionally, allows the agent to restore the maintenance condition. These authors go on to represent and solve the problem of determining k-maintainable controls using a SAT encoding, which is shown to run in polynomial time, and in linear time for small k.

2.2. Maintenance Goals in Practical Agent Systems

In this section, we discuss some of the popular families of agent platforms, and how goals, in particular, maintenance goals, behave and are represented in these frameworks.

Due to their shared heritage, the PRS family (PRS, dMARS, and JACK) share many similar notions concerning goals. In this family, goals are not explicitly represented, but are implicitly captured via events. When a particular event is received by the agent, plan selection occurs and a plan appropriate to this event is then pursued. In the default behavior, if the plan being pursued fails, an alternate plan is selected and the process repeats. The BDI-gap described by Winikoff et al. (2002) is partially a result of not having an explicit notion of goal.

While achievement goals are the most common form of goals, the PRS family includes several alternate goal types including maintain and query goals. The behavior of a maintain goal is to trigger an event (goal) when the maintain condition does not hold during the attempted achievement of the sub-goal. Upon completion of the sub-goal, the agent drops this maintenance goal. Hence, the maintain goal is dynamic, in that it may be adopted and dropped during runtime. If the maintain condition becomes unsatisfied during the execution of the sub-goal, the sub-goal is aborted and recovery actions may be pursued. However, the original goal is dropped, and must be explicitly re-adopted if the agent wishes to continue pursuit of this goal.

The JAM Agent language by Huber (1999) builds upon the UMPRS (Lee et al. 1994) and PRS (Georgeff and Ingrand 1989) implementations of the PRS agent framework. It supports a variety of goal types, including Achieve, Perform, and Maintain. However, the form of maintain goal represented in JAM is reactive in nature.

An example of a JAM agent described by Huber (1999) can be found in Figure 1. After executing the initialize action (which has the highest utility and is thus selected in preference to all other goals), the agent continually performs the wander_lobby action, ensuring that its charge level is always greater than 20% and keeping a safe distance from obstacles.


Figure 1. Example Goals in JAM.


While wandering the lobby, an agent with this goal set will disregard the maintenance goal of ensuring that its charge level is greater than 20%. It is only when this condition no longer holds that the maintain goal comes into effect and causes the activation of a plan that restores the condition. This is therefore a reactive maintenance goal.

Jadex is a Java-based agent language. Originally built upon JADE (Bellifemine, Poggi, and Rimassa 1999), a FIPA (O’Brien and Nicol 1998) compliant framework for hosting and developing multiagent systems, Jadex provides a framework for developing BDI-based agent systems. Jadex provides a comprehensive collection of goal types, including achieve, maintain, perform and query. Further details of Jadex’s goal types can be found in a comprehensive discussion by Pokahr, Braubach, and Lamersdorf (2003). This discussion also provides considerable detail concerning the representation of goals in the Jadex language.

Any goal in the Jadex framework is in one of three states at any given time.

  • Option:
    This corresponds to the case when this goal can be selected for pursuit, but has not yet been made active. This loosely corresponds to the concept of desire, in that it is a goal the agent would like to pursue, but is currently not taking action towards. This allows goals that are in the Option state to be conflicting.
  • Active:
    Once an agent decides to take action towards achieving a goal, the goal is then considered Active. This indicates that the agent is taking steps towards realizing this goal.
  • Suspended:
    A suspended goal indicates a goal that had been Active, but for some reason, cannot be allowed to continue. It is therefore moved to the Suspended state. After a certain condition is met, indicating the goal is once again applicable, it is moved to the Option state where it may be selected when the agent deems it appropriate.

When a maintenance goal is adopted by an agent, it begins in the Option state. In general, it becomes Active when there is no other Active goal that conflicts with it.

Once Active, it monitors particular beliefs of the agent, triggering a plan when its condition is no longer satisfied. In the event that the agent deems that the condition is no longer possible to maintain, the maintenance goal can be moved to Suspended, pending a possible change to the Option state when the maintenance goal is once again applicable.
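The lifecycle just described can be sketched as a small state machine. The state names follow the text, but the transition function and its guard arguments are our own illustrative simplification, not Jadex's actual API.

```python
OPTION, ACTIVE, SUSPENDED = "option", "active", "suspended"

def next_state(state, conflict_free=True, maintainable=True):
    """One step of the (simplified) Jadex goal lifecycle."""
    if state == OPTION and conflict_free:
        return ACTIVE          # no conflicting Active goal: pursue it
    if state == ACTIVE and not maintainable:
        return SUSPENDED       # condition cannot currently be maintained
    if state == SUSPENDED and maintainable:
        return OPTION          # applicable again: back to Option
    return state               # otherwise, remain where we are
```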

Goals are explicitly represented in Jadex, which enables more complex deliberation when compared with some other agent families. However, there are no provisions for reasoning about maintenance goals other than simple inhibition links, as discussed by Pokahr et al. (2003). Much like the behavior found in JAM agents, the (default) behavior of Jadex maintenance goals is to act as a trigger to action when the goal's maintenance condition is no longer satisfied.

We now move to discussing some of the logic-based agent frameworks.

2.2.1. AgentSpeak and Jason One of the motivations for AgentSpeak(L) (Rao 1996) is to provide a mechanism for overcoming the "BDI gap," caused by the fact that practical implemented agent systems often diverged from the theoretical approach.

AgentSpeak(L) aims to formalize the operation of existing practical agent languages, namely PRS (and to some extent, its successor, dMARS), which had "lacked a strong theoretical underpinning" (Rao 1996). It achieves this by representing much of the BDI model in a first-order logical language, containing events and actions.

In Rao (1996), only two forms of goal are considered: achieve-goals, which are the common form of goal found in almost all agent platforms, and test-goals, which allow an agent to determine if a particular formula is true or false relative to its belief set.

Achievement goals have a context associated with them, which must be satisfied before the body can be executed. This context could be utilized to prevent actions from being executed, but it does not support any method by which alternate actions could be performed if necessary.

Jason (Bordini and Hübner 2005) is a Java implementation of AgentSpeak(L), which supports triggering events to indicate when a plan should be executed. Utilizing this notion can provide support for a form of reactive maintenance goal in Jason.

It is possible to use AgentSpeak(L) to implement reactive maintenance goals (Hübner, Bordini, and Wooldridge 2006). An example of how to do this is given in Figure 2.


Figure 2. Example Goals in Jason.


These rules (and an associated plan that is not listed here) cause the agent to drop the belief that its battery is charged when the level falls below 20%. Dropping this belief can be used to trigger a goal or plan that directs the agent to recharge. Once the battery level reaches 100%, the second rule causes the adoption of the belief batterycharged, which can be used to stop the recharging plan.

This behavior is completely reactive, and thus Jason (and AgentSpeak(L)) share the same limitation expressed in discussions concerning other agent systems.

2.2.2. 3APL, GOAL, and Dribble The 3APL family of agent programming languages (Hindriks et al. 1999; Dastani, Dignum, and Meyer 2000) utilizes constructs such as an agent's beliefs and goals, in conjunction with a set of 'practical reasoning rules' which revise an agent's goal set. 3APL also has facilities for creating and modifying plans during the execution of an agent.

An initial extension called GOAL removed the planning ability found in 3APL, but can use declarative goals in selecting actions to perform. This was to address 3APL's inability to determine whether goals had been completed by alternate means, and to enhance an agent's ability to reason over its goals.

A later extension called Dribble (van Riemsdijk, van der Hoek, and Meyer 2003) aims to consolidate the procedural and declarative aspects of the agent languages GOAL and 3APL. It features the ability to plan with declarative goals, meaning that it can (in theory) perform more complex reasoning when compared with the original 3APL. However, Dribble is limited in that it is a propositional language (and so excludes variables), limiting its practical use (Dastani et al. 2003).

Incorporating the extensions developed in Dribble, 3APL was extended to include declarative goals and first-order features.

Maintenance goals in 3APL can be represented by means of practical reasoning rules that activate certain goals when their maintenance conditions are violated. Maintenance goals can be represented in GOAL with some small modifications (Hindriks and van Riemsdijk 2007); they act mainly to constrain the actions available to an agent, and so can be considered proactive. However, to achieve this proactivity, the agent requires a look-ahead operator (potentially with infinite look-ahead) to determine the consequences of its actions. This limits the usefulness of such an approach for practical agents, although it remains a useful approach for analysis.

This approach was later extended (Hindriks and van Riemsdijk 2008) to support hard and soft constraints, along with preferences that allow an agent developer to define which goals to pursue in favor of others. This rational action selection architecture (or RASA) realizes hard constraints with maintenance goals, and soft constraints via preferences.



In this section, we introduce proactive maintenance goals via case studies.

A Mars rover is a mobile robot capable of traversing a planet such as Mars. As the rover moves about the environment, it consumes fuel. For simplicity, we assume a linear relationship between the distance traveled and fuel consumed, that is, one unit of distance consumes one unit of fuel. Additionally, we assume that no fuel is consumed for braking or turning. While this is not realistic, it simplifies our experiments. To acquire more fuel, we assume that there is a refuelling depot on the planet, where the rover can go and refill its tank to maximum capacity.

As the rover accomplishes its goals, it will need to manage its fuel usage so that it never gets stranded and that it does not spend all of its time at the refueling station.

The simplest form of maintenance goal to ensure that the rover does not run out of fuel is to require that the fuel in its tank always be above a certain threshold, for example, 20% of capacity. Let us represent this by the following maintenance condition: fuel > 20. While this condition is satisfied, the rover can continue performing other actions. However, if this condition ceases to be satisfied, the rover will perform a recovery action to re-attain the condition. This is done by returning to the depot and refueling.

There are problems with this approach. The first is identifying a suitable threshold value. In this case, 20% has been selected arbitrarily, and is possibly inadequate: the rover may be located in a position that requires more than 20% of the maximum fuel to return to the depot. The rover could easily become stranded if it had just completed a long trip to a goal and was too far from the depot. To be safe, this triggering level should be set at 50%, so that the rover can guarantee that it is always within a safe distance of the depot. However, with a triggering level of 50%, the rover is likely to spend much of its time travelling to the depot and refueling. Clearly, this is inefficient.

This form of maintenance goal was presented in Pokahr et al. (2003) as an example of using maintenance goals to manage refueling for a (simulated) security robot.

An alternative to this approach is to use a more complex maintenance condition, such as fuel > distance(depot), which is satisfied if there is enough fuel to move to the depot. Instead of relying on the 20% value to trigger when to return to the depot, the rover can determine how much fuel is required to return to the base, and only refuel when it must do so (although in practice, this should include tolerance levels for safety).
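The two forms of maintenance condition can be written as simple predicates over the rover's state. This is a minimal sketch, assuming a point rover, a depot fixed at (0, 0), and our own illustrative helper name `distance_to_depot`:

```python
import math

# Hypothetical helper: straight-line distance from the rover to the depot at (0, 0).
def distance_to_depot(position):
    x, y = position
    return math.hypot(x, y)

# Fixed-threshold condition: fuel > 20 (20% of a 100-unit tank).
def threshold_condition(fuel):
    return fuel > 20

# Distance-based condition: fuel > distance(depot), optionally with a safety tolerance.
def distance_condition(fuel, position, tolerance=0.0):
    return fuel > distance_to_depot(position) + tolerance
```

With 15 units of fuel at position (15, 0), `distance_condition` fails, which is exactly the point at which the reactive recovery behavior is triggered.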

In these examples, we see that the behavior of the agent is reactive with respect to its maintenance goals, in that the maintenance goal only comes into effect after the maintenance condition has been violated.

For example, consider the rover located at the depot and needing to move 20 units away. It currently has 20 units of fuel. If the rover moves towards its goal, once it has moved 10 units, the maintenance goal will be triggered. It will return to the depot, refuel, and then resume moving to its location.

A better solution to this problem would have the agent recognize that attempting to move to the goal 20 units away would cause the maintenance goal to be violated in the current circumstance, and hence perform some preventative actions to avoid this situation. This introduces proactive behavior to maintenance goals.

Let us assume that the Mars rover adopts the proactive maintenance goal fuel > distance(depot). Before it begins to move to a new location, it determines how much fuel would remain when it arrives at the goal. If this is insufficient to return to the depot (hence violating the maintenance condition in the future), it first refuels, and then pursues the goal.

For example, consider a Mars Rover with 20 units of fuel in a fuel tank that has a capacity of 100. It is currently located 10 units away from the refuelling depot. The rover is about to adopt a goal to move to location A, which is 10 units away from its current location, C, and 20 units away from the refueling depot, D (refer to Figure 3).


Figure 3. Scenario Overview.


We first consider how a rover would handle this scenario when using reactive maintenance goals. The behavior of the agent is given in Figure 4. Here the rover has consumed 40 units of fuel—5 units moving towards the goal, 15 units moving back to the depot, and then 20 units moving toward the goal.


Figure 4. Step-by-Step Reactive Agent Example.


Now, we consider this scenario using proactive maintenance goals. The initial conditions are the same; however this time, proactive behavior will be used. The rover then behaves as in Figure 5. In this case, the rover has consumed 30 units of fuel, saving 10 units. It has moved directly from location 10 to the depot, and then from the depot to the goal. This represents a 25% reduction in the fuel usage when compared with only using the reactive maintenance goal.


Figure 5. Step-by-Step Proactive Agent Example.

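The fuel totals in the two scenarios (40 units reactively, 30 units proactively) can be checked with a small one-dimensional simulation. This is an illustrative sketch only; the depot sits at position 0 and the function names are ours:

```python
DEPOT = 0  # the refueling depot sits at position 0 on a 1-D line

# Reactive behavior: drive toward the goal until fuel > distance-to-depot fails,
# then return to the depot, refuel, and drive out to the goal.
# Returns the total fuel consumed.
def reactive_fuel_used(start, goal, fuel):
    pos = start
    while fuel > abs(pos - DEPOT) and pos < goal:  # maintenance condition still holds
        pos += 1                                   # move 1 unit toward the goal
        fuel -= 1                                  # consuming 1 unit of fuel
    out = pos - start          # units spent heading toward the goal
    back = abs(pos - DEPOT)    # units spent returning to the depot
    final = goal - DEPOT       # units spent moving depot -> goal after refueling
    return out + back + final

# Proactive behavior: the violation is predicted before moving, so the rover
# refuels first, then travels depot -> goal directly.
def proactive_fuel_used(start, goal):
    return abs(start - DEPOT) + (goal - DEPOT)
```

For the scenario in the text (start at 10, goal at 20, 20 units of fuel), `reactive_fuel_used(10, 20, 20)` gives 40 and `proactive_fuel_used(10, 20)` gives 30, matching the 25% saving reported.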

3.1. Representation

In this section, we describe the representation required to capture both the reactive and proactive behavior of maintenance goals. We do this by first describing the common attributes of achievement goals, and then extending these to maintenance goals.

3.1.1. Achievement Goals Achievement goals are goals that have a specific state that the agent is attempting to bring about. This state is referred to as the success condition. When this success condition is realized, the achievement goal is dropped. Following Winikoff et al. (2002), note that this can occur at any time while the goal is active, and may occur regardless of the state of any plan that is realizing this goal. The attributes of an achievement goal are illustrated in Figure 6.


Figure 6. Achievement Goal Definition.


The adopt condition is used to indicate when the agent may choose to pursue this goal. In other words, unless the adopt condition is true, no action will be taken to achieve this goal. The success condition indicates when a goal has been achieved. This decouples the success of the goal from the success of the plan—that is, the agent can check for successful completion of the goal irrespective of the state of the plan. In the same way, the failure condition indicates when the goal fails, not when the plan fails. Therefore, if a plan is executed and completes but the goal’s success condition is not met, the plan may be retried or a new plan attempted in its place. Similarly, if a plan fails during execution, the goal is not abandoned; it persists and a new plan is attempted in its place.

3.1.2. Maintenance Goals Utilizing the concepts outlined in the case study, we now describe the attributes of a maintenance goal that are illustrated in Figure 7.


Figure 7. Maintenance Goal Definition.


The recovery goal is an achievement goal that attempts to re-attain the maintenance condition in the event that it no longer holds. It has its own plans, success, and failure conditions. The preventative goal is an achievement goal, with the purpose of preventing a maintenance condition from becoming violated. When activated, it should cause the agent to perform actions that prevent the maintenance condition from being violated, for example, acquiring additional resources, modifying the environment, etc.

As argued in Duff et al. (2006), it is possible for the recovery goal and preventative goal to be different. However, it seems reasonable to require that the achievement of either goal will imply that the maintenance condition is satisfied. Note that in the Mars rover case, both the recovery and preventative goals are the same, which is to completely refuel the rover, and not just to increase the fuel level to be sufficient for its current journey.

It is possible that either one of these goals could be absent. The presence of these goals imposes limitations on the types of behavior possible for the maintenance goal. If the recovery goal is not present, then it is impossible for the maintenance goal to behave reactively, as it has no actions to respond with when the maintenance condition is violated. Similarly, if the preventative goal is absent, the maintenance goal cannot behave proactively, as it has no actions that can prevent failure.

If both goals are present, then the maintenance goal can act both proactively and reactively. If neither goal is present, then there are no actions available to support either behavior. Instead, it may be possible to perform deliberation that considers the presence of the maintenance goal, and avoids adopting other goals that would cause this maintenance goal’s maintenance condition to fail, much like a constraint.

It should also be noted that the use of a single goal here (either for the recovery goal or preventative goal) is not a fundamental restriction, in that as goals (and plans) can contain subgoals, we can specify arbitrarily complex behavior in response to the violation of a maintenance goal.
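The attributes discussed in this subsection can be collected into a simple record. This is a sketch of our own (it does not reproduce the exact layout of Figure 7), with conditions as predicates over a belief dictionary and goals named by strings:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# A sketch of a maintenance goal's attributes: conditions are predicates over a
# belief dictionary; the recovery and preventative goals are optional, and their
# absence rules out reactive or proactive behavior, respectively.
@dataclass
class MaintenanceGoal:
    adopt_condition: Callable[[dict], bool]
    maintain_condition: Callable[[dict], bool]
    failure_condition: Callable[[dict], bool]
    recovery_goal: Optional[str] = None
    preventative_goal: Optional[str] = None

    def supported_behaviors(self):
        behaviors = set()
        if self.recovery_goal is not None:
            behaviors.add("reactive")
        if self.preventative_goal is not None:
            behaviors.add("proactive")
        return behaviors

# The Mars rover's fuel goal: the recovery and preventative goals are the same.
maintain_fuel = MaintenanceGoal(
    adopt_condition=lambda b: not b.get("meteor_storm", False),
    maintain_condition=lambda b: b["fuel"] > b["dist_depot"],
    failure_condition=lambda b: b["fuel"] == 0 and b["loc"] != (0, 0),
    recovery_goal="RefuelRoverGoal",
    preventative_goal="RefuelRoverGoal",
)
```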

3.2. Reasoning Algorithms

A maintenance goal, like an achievement goal, begins in a pending state. In this state, it does not influence the behavior of an agent. It may exist in this state because it may conflict with other goals the agent currently has active, or simply because the agent has chosen not to activate it.

A maintenance goal enters the maintaining state once its adopt condition is satisfied. A maintenance goal in this state may influence the behavior of the agent, in that the agent should now monitor this goal’s maintenance condition.

The failure condition is used to indicate when an agent can “give up” on a goal and allow it to be dropped; once dropped, the goal is no longer part of the agent system. This condition may be used to indicate that a goal can no longer be maintained, or is no longer useful for the agent to consider.

In the interim, after the goal has been adopted and unless it has failed, the agent will monitor the maintain condition of the maintenance goal. The recovery goal will be activated if the maintain condition is violated, and the preventative goal will be activated if the agent predicts that the maintain condition will be violated. We now describe algorithms that facilitate these processes.

To exhibit reactive behavior, an agent must continually monitor the maintenance conditions of its maintenance goals. This could involve checking maintenance conditions every cycle through the agent interpreter, or at some frequent interval. In the event that a maintenance condition is not satisfied, the maintenance goal activates the recovery goal associated with this maintenance goal.
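Sketched in Python (the belief and goal dictionaries are illustrative shapes of our own, not any particular platform's API), the per-cycle reactive check is simply:

```python
# A per-cycle reactive check: test each maintaining goal's maintenance condition
# against the agent's current beliefs and return the recovery goals to activate.
def check_maintenance(beliefs, maintenance_goals):
    triggered = []
    for goal in maintenance_goals:
        if not goal["condition"](beliefs):
            triggered.append(goal["recovery"])
    return triggered

# The rover's fuel goal: condition fuel > distance-to-depot, recovery = refuel.
fuel_goal = {"condition": lambda b: b["fuel"] > b["dist_depot"],
             "recovery": "RefuelRoverGoal"}
```

With beliefs `{"fuel": 15, "dist_depot": 15}` the condition fails and `RefuelRoverGoal` is returned for activation.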

3.2.1. Proactive Behavior For proactive behavior, an agent must have some means of predicting the consequences of its actions. We introduce a new check, called proactive-check. Given the current beliefs of an agent, and the goals and plans it is currently pursuing, it is possible to predict the outcome of executing these goals. In practice, this prediction may not be perfect, due to changes in the environment, and the potential inability to perfectly predict the agent’s choices.

One possible method of achieving this is through the use of heuristics. One such heuristic is to utilize resource summaries (Thangarajah, Winikoff, Padgham, and Fischer 2002). Resource summaries attach minimum and maximum resource usage to each plan, computed by traversing all possible paths that the plan may take (see Thangarajah et al. (2002) for further details on this technique).
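A simplified, single-resource version of such a summary can be sketched as a recursion over a plan structure in which a step is either a primitive action with a known fuel cost or a choice among alternative sub-plans. This is in the spirit of the cited technique, not its exact algorithm:

```python
# Sketch of plan-level resource summaries for a single resource (fuel).
# A plan is a list of steps; a step is either an int (primitive action cost)
# or a list of alternative sub-plans (a choice point).
def summarize(plan):
    """Return (min, max) fuel usage over all paths through the plan."""
    lo = hi = 0
    for step in plan:
        if isinstance(step, int):                  # primitive action cost
            lo += step
            hi += step
        else:                                      # choice among sub-plans
            sub = [summarize(alt) for alt in step]
            lo += min(s[0] for s in sub)           # cheapest alternative
            hi += max(s[1] for s in sub)           # most expensive alternative
    return lo, hi

# Two routes to a goal: a short 10-unit route or a scenic 25-unit route,
# followed by a fixed 5-unit manoeuvre.
plan = [[[10], [25]], 5]
```

Here `summarize(plan)` yields `(15, 30)`: at least 15 units on the cheapest path, up to 30 on the most expensive.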

A proactive check that utilizes resource summary information could return one of three outcomes: consistent, inconsistent, or uncertain.

  • Consistent occurs when there are sufficient resources to execute the goals such that no resource-related maintenance condition is violated, irrespective of which plans are chosen to achieve the goals.
  • Inconsistent occurs when the available resources are less than the minimum required to guarantee execution of the goals such that the maintenance conditions are not violated. In this situation, the appropriate preventative goals should be activated.
  • Uncertain occurs when some possible executions of the goals lead to a safe situation, but others cause the maintenance condition to be violated.

Given the nature of BDI systems, the path of execution cannot be guaranteed. In the case where proactive-check returns uncertain, it is left to the agent developer to determine the most appropriate course of action. For example, a bold agent may adopt the goal, risking the violation of some maintenance condition; the reactive maintenance goal could then be triggered at some point in the future. Alternatively, a cautious agent may elect to adopt the preventative goals before pursuing the plan, even though the preventative goals may not be necessary.
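Given a goal's (min, max) resource summary and the resources currently available, the three-valued check can be sketched as:

```python
# Three-valued proactive check over a single resource: `summary` is the
# (minimum, maximum) usage over all execution paths, `available` what we hold.
def proactive_check(available, summary):
    lo, hi = summary
    if available >= hi:
        return "consistent"    # every possible path stays within resources
    if available < lo:
        return "inconsistent"  # no path avoids violating the condition
    return "uncertain"         # some paths are safe, others are not
```

A bold agent might proceed on `"uncertain"`, whereas a cautious one might treat it like `"inconsistent"` and activate the preventative goal first.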

3.3. Mars Rover Case Study Revisited

We return to our case study to offer an operational example of the representation and algorithms. This example will consist of two parts—operation with and without proactive maintenance goals. In this way, we aim to contrast the behavior when this goal type is available to the agent.

The Mars rover moves about some environment, consuming fuel as it moves, at the rate of 1 unit of fuel for every 1 unit of distance moved. It carries a limited supply, but a depot is present (at location (0,0)) where it can refill its tank to full capacity (100 units). In this case study, let us assume that the rover begins at location (10,0) with 20 units of fuel remaining.

An example achievement goal in this scenario is to move the rover to location 20. Let us assume that the agent can move 1 unit at a time—therefore, this requires the agent to move to location 11, then location 12, and so on until it reaches location 20. This means that in this case, a Plan for this achievement goal is

  • move11; move12; …; move20

The achievement goal structure is shown in Figure 8.


Figure 8. MoveTo20Goal Specification (Achievement goal).


In this example, we have stored the suitable plan in the Plans attribute. In practice however, it is possible that some form of planning will be necessary to generate the appropriate actions for this goal. Our representation and algorithms support such mechanisms, with a slightly altered representation. We therefore support features such as lookups in a plan library and first-order or HTN-style planning.

A suitable maintenance goal for this system is to make sure an agent always has sufficient fuel to return to the depot.

The maintenance goal is said to fail if the rover ever runs out of fuel while not located at position (0,0). Let us assume that this maintenance goal should always be active, except when there is a meteor storm; if there is a meteor storm, this goal should no longer be considered. This goal is illustrated in Figure 9.


Figure 9. MaintainFuelGoal Specification (Maintenance Goal).


In this situation, the recovery and preventative goals are identical—move the rover to the Refueling Depot and then refuel. This plan will be detailed further in the following example.

For convenience, i.e., to allow easier comparisons to be made between the two behaviors, let us treat this maintenance goal as two separate maintenance goals, each describing a single behavior. Conceptually, these behaviors correspond to the same maintenance goal.

We now illustrate how these goals operate, first, with just reactive behavior and then with both reactive and proactive behaviors.

3.3.1. Reactive Maintenance Goals Only Beginning with the rover in location 10 with 20 units of fuel, the agent only has the reactive maintenance goal adopted. This example begins with the agent adopting the achievement goal of moving to location (20,0). Adopting the moveTo20 Goal, a suitable plan is selected or generated. In this case, this plan is

  • move11; move12; …; move20

Before executing any actions, the rover tests the maintenance conditions of its maintenance goals. Being located at location 10 with 20 units of fuel, the maintenance goal is satisfied, so execution may continue.

Execution of this plan begins, with the rover performing the first step of the achievement goal’s plan, move11. This results in the rover now being located at location (11,0), with 19 units of fuel. Repeated execution will result in the rover eventually being located at location (15,0) with 15 units of fuel remaining. When the agent checks the maintenance condition, fuel > distanceToDepot, on this occasion, it fails. The achievement goal is then suspended and the maintenance goal’s recovery goal, RefuelRoverGoal, is then pursued, resulting eventually in the rover being positioned at location (0,0) with 100 units of fuel after refueling. The achievement goal is then resumed.

3.3.2. Reactive and Proactive Maintenance Goals We begin this example with the same initial conditions as before, the rover located at location (10,0) with 20 units of fuel. It only has the proactive maintenance goal adopted and currently maintaining, and the example begins with the adoption of the achievement goal moveTo20.

Adopting the achievement goal, a suitable plan is selected or generated. In this case (as before), this plan is:

  • move11; move12; …; move20

Before execution begins, the agent must determine if the maintenance condition for the proactive maintenance goal will hold at the conclusion of the achievement goal.

Moving from the current location to location (20,0) will consume 10 units of fuel. Given that the rover has 20 units of fuel initially, this will leave 10 units of fuel remaining after execution of the MoveTo20 Goal. At that time, it will be located at location (20,0) with 10 units of fuel remaining; hence the maintenance condition, fuel > distanceToDepot, will not hold. Therefore, the preventative goal, RefuelRoverGoal, is activated. Using existing goal conflict resolution strategies, the achievement goal is suspended until this preventative goal is satisfied. After this goal is achieved, the rover is located at location (0,0) with 100 units of fuel. The achievement goal moveTo20 is reactivated.

3.3.3. Potential Optimizations In some environments, continually checking the states of maintenance conditions may be avoided. In the case of static environments, we only need to check the future state of maintenance conditions when new goals are added to the agent system. As the environment does not change, the only change is the actions the agent will be performing, which is determined by the other goals in the agent system, and the plans that have been selected for it to execute.

It is also important to note that if the prediction model employed by the agent is perfect, reactive maintenance goals will never be pursued (assuming that there is a proactive maintenance goal active for the same maintenance condition). As the proactive maintenance goal is always correct, and no unexpected changes in the environment are possible (and all plans succeed as expected), it will always detect future violations of its maintenance conditions before the reactive maintenance goal acts. In practice, however, a perfect prediction model is unlikely. This is investigated further in our experimental evaluation.



In this section, we provide formal semantics for proactive maintenance goals that support the characteristics and behaviors discussed earlier. A framework for the operational semantics of agent systems was developed by van Riemsdijk et al. (2008). We use this framework as a means of developing our operational semantics, although there are some changes required to incorporate proactive behavior. We give some formal results about our framework, and illustrate our semantics on some examples from the Mars rover case study.

4.1. Background—Formalism of van Riemsdijk et al.

We commence with a brief discussion of the formalism of van Riemsdijk et al. (2008) before we introduce our variations of it.

The state of an agent is represented by 〈B, G〉, where B is the agent’s beliefs and G is the agent’s goals. A goal must be in either of two possible states: Suspended or Active (see Figure 10). A suspended goal is one that is not being actively pursued at present; an active goal is one that is currently being pursued. A key aspect of this framework is that transitions between these two states can only occur when a particular set of rules is satisfied, and these rules are defined for each goal. These rules are given in terms of condition/action pairs; if the condition is true, then the action is taken. The three possible actions are Activate, which changes the goal’s state from suspended to active, Suspend, which changes the goal’s state from active to suspended, and Drop, which removes the goal. Note that a goal can be dropped for various reasons, including because it has succeeded.

Some examples of condition/action pairs are below.

  • 〈fuel < 10, Suspend〉, 〈true, Activate〉, 〈location = (10, 10), Drop〉

In the first case, if the agent believes that the level of fuel is less than 10, the goal associated with this pair should be suspended. In the second case, the goal is always active. In the third case, the goal is dropped if the agent believes its location is (10,10).
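These three pairs can be sketched directly, with conditions as predicates over the agent's beliefs B (the dictionary shape is ours, not the formal syntax):

```python
SUSPEND, ACTIVATE, DROP = "Suspend", "Activate", "Drop"

# The three example condition/action pairs from the text.
pairs = [
    (lambda B: B["fuel"] < 10, SUSPEND),      # suspend when fuel is low
    (lambda B: True, ACTIVATE),               # always applicable: keep active
    (lambda B: B["loc"] == (10, 10), DROP),   # drop on reaching (10, 10)
]

# Return the actions whose conditions the agent currently believes.
def applicable_actions(B, pairs):
    return [action for condition, action in pairs if condition(B)]
```

With beliefs `{"fuel": 5, "loc": (0, 0)}` both the Suspend and Activate pairs fire; which action is actually applied is a deliberation choice left to the agent.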

Another key aspect of the framework is that the representation of the goal also includes the current plan being used to achieve the goal, usually denoted as π. There is no particular commitment to any mechanism for associating a goal with a plan; we only assume that there is a method for generating a plan for a given goal. This plan may be selected from a plan library, or generated on-the-fly by an online planner (or a combination of both). When no (further) plans can be found, the empty plan ε is returned. This method is denoted mer, for means-end reasoning.

Hence, the information about a goal must include its current state, the plan currently being used to achieve it, and the rules for changing the goal’s state. In the framework of van Riemsdijk et al. (2008), these are given as two separate sets C and E of condition/action pairs, in which those in C are used when the current plan π is not empty, and those in E are used when π is empty. The former is often the case when a plan has to be suspended; the latter can indicate (among other things) that there is no plan yet found for this goal. This means that a goal has the form g(C, E, S, π), where C and E are sets of condition/action pairs, S is the state of the goal, and π acts as a placeholder for a plan.

The transition rules from van Riemsdijk et al. (2008) are given in Figure 11. The initial state of the agent is given by its initial beliefs together with all goals which the agent has adopted, each of which commences in the suspended state with no current plan being used to achieve it. The transition rules for states are given for individual goals, which can then be applied in any order to any of the goals. This is reflected in rules 1 and 2 in Figure 11. Typically, a goal begins in the Suspended state, makes various transitions between the Active and Suspended states, before finally being dropped from the Active state.


Figure 11. Operational Semantics from van Riemsdijk et al.’s framework.


Rules 3–8 deal with each of the actions Activate, Suspend, and Drop, with two rules for each action, depending on whether the current plan is empty or not. To make the transition from Suspended to Active, the agent must believe some condition c to be true, where there is a condition/action pair 〈c, Activate〉 in C or E as appropriate. If so, the goal’s state is changed from Suspended to Active, but nothing else is changed (including the current plan for this goal). The transition from Active to Suspended is simply the reverse: the goal must be in the Active state when the agent believes a condition c from a condition/action pair 〈c, Suspend〉 to be true, and the goal’s state is changed from Active to Suspended with no other changes (including the current plan for this goal). For a goal to be dropped, we not only require the appropriate condition to be satisfied, but also that the goal be in the Active state (to prevent goals from being dropped from the Suspended state). When this transition occurs, the goal is deleted from the agent’s list of goals.

Rules 9–11 deal only with goals in the Active state. If there is no plan for the current goal, then provided that the goal remains in the Active state, the mer method is used to generate a plan. The goal will remain in the Active state unless there is a condition c in a condition/action pair in E (as the current plan is empty) which is satisfied and for which the action is either Suspend or Drop. Hence the premise of Rule 9 expresses that there is no 〈c, a〉 in E such that B ⊨ c and a is either Suspend or Drop. The only change that takes place is that the empty plan is replaced by one generated by mer.

If there is a plan for the current goal, then again provided that the goal remains active, the plan is executed. This is reflected in Rule 10. If for some reason the plan fails, then the plan is replaced by the empty plan (Rule 11). Assuming that no other conditions arise that cause the goal to be suspended or dropped, Rule 9 can then be used to generate another plan for the goal.
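The behavior of Rules 9–11 for a single Active goal can be sketched as one interpreter step; the dictionary shape and the `mer` and `execute` callables are illustrative stand-ins, not the formal rule syntax:

```python
EMPTY = ()  # the empty plan ε

# One interpreter step for a goal in the Active state:
#   Rule 9:  empty plan      -> ask means-end reasoning (mer) for a plan
#   Rule 10: plan exists     -> execute its next action
#   Rule 11: execution fails -> reset to the empty plan so Rule 9 can replan
def step_active_goal(goal, mer, execute):
    plan = goal["plan"]
    if plan == EMPTY:
        goal["plan"] = mer(goal)
    elif execute(plan[0]):
        goal["plan"] = plan[1:]
    else:
        goal["plan"] = EMPTY
    return goal
```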

4.2. Operational Semantics for Maintenance Goals

One issue with this formalism for the semantics of proactive maintenance goals is that there are three natural states for a maintenance goal:

  • when the maintenance condition is not being monitored,
  • when the maintenance condition is being monitored but no violation, present or future, has been detected, and
  • when the maintenance condition has been (or will be) violated.

In principle, it is possible to specify the behavior of a maintenance goal directly in the framework of van Riemsdijk et al. (2008). Indeed, this is very much the aim of that work, in that condition/action pairs can be used as a general mechanism for the specification of the behavior of goals. However, we believe that it is more natural to include one state for each of the above cases, rather than reducing the first two to both being in the Suspended state. In some ways this is very much a reflection of the way in which the states of a goal are to be used. If they are used principally as a means of specifying the execution of the agent, then two states will suffice (as no action is taken in either of the first two cases). If the role of states is to also reflect the “cognitive” state of the agent, which is very much in the original spirit of the BDI model, then it seems reasonable to distinguish between the first two cases above by ensuring that they reflect different states of the goal (and hence the agent). Moreover, some recent work on aborting, suspending, and resuming goals (Thangarajah et al. 2007, 2008) has shown that changing the status of a goal can involve a number of detailed transitions. Reducing these processes to ones involving only two states seems possible, but doing so would make it more difficult to understand the precise status of each goal. Note also that the types of goals typically used in agent systems have been classified into three main classes: perform, achieve, and maintenance goals, of which the first two really only differ in the criterion used for success (Braubach, Pokahr, Lamersdorf, and Moldt 2004; Dastani, van Riemsdijk, and Meyer 2006).

For these reasons, we feel justified in introducing a third state Maintaining, which is used exclusively for maintenance goals, and corresponds to when the maintenance condition is being monitored, but no action needs to be taken by the agent.

It should also be noted that we believe it is important to allow as much flexibility as possible in the way that maintenance goals are specified. One approach would be to impose the condition that maintenance goals always have a higher priority than achievement goals, and hence build this into the semantics. Whilst this would simplify the generation of formal results, it would come at the cost of excluding the possibility of some maintenance goals having lower priority than some achievement goals. Similarly, we could impose conditions on the condition/action rules used to change the states of goals, but again at the potential cost of some flexibility. For these reasons, we have designed our semantics as a framework in which the behavior of maintenance goals can be specified with as much flexibility as possible, so that we do not exclude possibilities of interest. This means that it is correspondingly more difficult to show formal results in our semantics, and requires the agent designer to be more explicit in the specification of how goals interact with each other. We see this as a feature rather than a “bug,” in that we feel it is important to provide a means for experimentation with various possible approaches, rather than a fixed interpretation of how maintenance goals are to act.

We also dispense with the separation of condition/action pairs into C and E, as we can build into the condition a test for whether the current plan is empty or not. This is not an essential difference, but simplifies some of the rules. We also rename the Suspended state to Pending, as this seems more intuitive.

We thus have three states: Pending, Active, and Maintaining. The state of each goal is then represented by a tuple g(name, G, CAP, S, π), where name is a unique identifier for the goal, G is the goal itself (as in Figures 6 and 7), CAP is a set of condition/action pairs, S is the current state, and π is the current plan.

The initial state of the agent is given by 〈B, G0〉, where for each goal G which is adopted by the agent there is an element of G0 of the form g(name, G, CAP, Active, ε) where CAP is an appropriate set of condition/action pairs for G.

Note that we do not have explicit counterparts to Rules 1 and 2 of the original framework. We also allow the Drop action to occur from any state.

Rules 1–6 in Figure 12 deal with state transitions. Note that all that is necessary for a transition to occur from one state to another is that the appropriate condition is true; hence, in order to ensure that the Maintaining state is only used by maintenance goals, it is necessary to ensure that only maintenance goals use the MAINTAIN action, i.e., that achievement goals do not contain condition/action pairs of the form 〈c, MAINTAIN〉.


Figure 12. Modified Operational Semantics.


Note that these rules are used in preference to either generating plans for a goal (Rules 8a and 8b) or executing plans (Rule 9). This is enforced by the presence of the premise active(B, CAP) in Rules 8a, 8b, and 9, which ensures that these rules can only be used when there is no applicable instance of Rules 1–6.

Note also that in Rules 1 and 3 we do not require the plan π to be empty. This means that it is possible to suspend an active goal (i.e., while its plan is still executing) and to have it resume the same plan later. This plan may fail (due to environmental or other changes), in which case Rule 8a can be used to find another plan (if one exists) and execution continues with this new plan. An alternative is to require that the second occurrence of π in Rule 3 be replaced by ε, so that whenever a goal enters the Pending state, it does so with an empty plan, which means that if the goal later enters the Active state, Rule 8a will be used immediately to find a new plan.

For maintenance goals, we insist that a goal which enters or leaves the Maintaining state have an empty plan, as goals will not be executing any plans whilst in this state, but actively monitoring the maintenance condition. This also means that such a goal entering the Active state from the Maintaining state will always do so with an empty plan, and hence require a plan to be found.

Rule 7 deals with the Drop action. As noted earlier, the Drop action can occur in any state.

Rules 8a and 8b deal with means-end reasoning. We have two rules, unlike van Riemsdijk et al., to allow for planning to fail. If a plan can be generated (or selected from the plan library), then the agent cycle continues as normal (Rule 8a). If no plan can be selected (Rule 8b), the goal is dropped. Arguably, this is rational behavior (to drop goals the agent has no means of achieving); however, alternative approaches may exist, such as holding onto the goal until such a time as a plan is available.

Rule 9 deals with normal plan execution. As in Rule 9 of van Riemsdijk et al., one of the premises here is that the goal is in the Active state and not about to change this. Hence we have the premise that ¬∃〈c, a〉 ∈ CAP such that B ⊨ c and a ≠ Activate, which we will abbreviate to active(B, CAP). We also have a further premise, which is that no maintenance goal is about to become Active. This means that if any maintenance goal is either violated or is predicted to be violated, then no normal plan execution can occur, so that appropriate action may be taken to deal with the maintenance goal violation. Hence denoting the maintenance goals in G by MG(G) and the condition/action pairs relevant to a goal G as rules(G) (so that in a goal representation g(name, G, CAP, S, π), rules(G) = CAP), the appropriate condition is ∀G′ ∈ MG(G), ¬∃〈c, Activate〉 ∈ rules(G′) such that B ⊨ c, i.e., that there is no maintenance goal in G which has been activated, which we will abbreviate as inactive(MG(G)).
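The two premises of Rule 9 can be sketched as guard functions. This is our own hypothetical rendering (the helper names follow the abbreviations above; a condition/action pair is a (condition, action) tuple):

```python
# Sketch of the Rule 9 premises active(B, CAP) and inactive(MG(G)).
ACTIVATE, PEND, MAINTAIN, DROP = "ACTIVATE", "PEND", "MAINTAIN", "DROP"

def active(beliefs, cap):
    # active(B, CAP): no pair <c, a> with B |= c and a != Activate fires,
    # so the goal is not about to leave the Active state.
    return not any(cond(beliefs) and act != ACTIVATE for cond, act in cap)

def inactive(beliefs, mg_caps):
    # inactive(MG(G)): no maintenance goal has a firing <c, Activate> pair,
    # i.e., no violation has been detected or predicted.
    return not any(cond(beliefs) and act == ACTIVATE
                   for cap in mg_caps for cond, act in cap)

def may_execute(beliefs, cap, mg_caps):
    # Normal plan execution (Rule 9) proceeds only when both premises hold.
    return active(beliefs, cap) and inactive(beliefs, mg_caps)
```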

Note that the decision to halt normal execution is decoupled from the decision that a maintenance goal violation has occurred, in that it is only the change of state of a maintenance goal from Maintaining to Active that is needed. In other words, each maintenance goal may have different ways of detecting or predicting violation, but each will signify that a violation has occurred in the same way.

Rule 10 deals with the case when there is a plan to execute, but the plan fails, in which case the plan is replaced with the empty plan.

Note that the way that Rule 9 is designed has decoupled the prediction of the violation of a maintenance goal from the actions associated with recovery and prevention. This is accomplished by the use of the condition/action pairs in CAP for each maintenance goal. This separation will be discussed further in the following section.

4.3. The future Operator

A crucial aspect of the proactive approach to maintenance goals is to be able to predict when a maintenance goal will be violated. This is generally not straightforward, as it involves projecting forwards from the agent’s current state, and reasoning about the effect of future actions in a changeable environment. Below we discuss some possible approaches to this problem.

One possibility is to use a formal semantics to predict the effect of the agent’s actions. This means using the CAN semantics above to predict the states that will result from a given plan or plans, and then testing each such state for maintenance goal violations. In principle, this is simple enough; given a particular set of plans, it is clearly possible to predict their effect, and hence whether or not a violation occurs. In practice, we need to consider how far ahead we should look. One extreme case is to perform model-checking, i.e., considering every possible action of the agent, and looking as far ahead as possible. Not only does this lead to an explosion in the number of possibilities to be considered, it is also not clear that it is entirely appropriate, as an agent’s choice of action is constrained by its beliefs and goals. In other words, if we performed a model-checking calculation and found that a maintenance goal violation was to occur, it is possible that the particular sequence of actions that leads to the violation is not one that the agent would perform. It also seems rational to assume that only the first violation of a maintenance goal is of interest, in that subsequent violations assume that nothing is done about the first violation. Hence we need to be able to restrict the possibilities to only those that the agent would actually choose.
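A minimal version of this bounded lookahead might look as follows. This is a sketch under two simplifying assumptions of ours: a deterministic effect model for actions, and a fixed lookahead horizon:

```python
def first_violation(beliefs, plan, maintain_cond, effect, horizon=20):
    """Project beliefs forward through the agent's current plan and return
    the index of the first step at which the maintenance condition fails,
    or None. Only the first violation is of interest, since later ones
    assume nothing is done about the first."""
    projected = dict(beliefs)
    for i, step in enumerate(plan[:horizon]):
        projected = effect(projected, step)
        if not maintain_cond(projected):
            return i
    return None

# Toy effect model: each unit move burns one unit of fuel.
def move(b, step):
    b = dict(b)
    b["pos"] += 1 if step == "fwd" else -1
    b["fuel"] -= 1
    return b

# Maintain: enough fuel to return to the depot at position 0.
enough_fuel = lambda b: b["fuel"] > abs(b["pos"])
```

For example, starting at position 2 with 6 units of fuel, a plan of four forward moves is predicted to violate the condition at its second step.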

One way to do this is to consider the current “planning horizon,” i.e., the agent’s current set of goals, and to consider only the actions which would lead to the achievement of these goals. This not only ensures that the possibilities considered are only ones that the agent would choose, it also provides a means of limiting the number of states to be explored to a finite number. However, this requires that the agent knows in advance the plans that will be used (such as the traditional practice of plan libraries in BDI implementations). This is not an overly restrictive assumption, but it is a little at odds with the mer mechanism of the semantics of van Riemsdijk et al. (2008), which makes no such assumption. Also, the planning horizon needs to take into account the dynamic nature of the goals of the agent, in that as a new goal is added, for instance, it will be necessary to recheck the planning horizon for violations.

A further issue is whether it is necessary to distinguish between the case when all possible choices of the agent lead to a violation (some possibly sooner than others) and the case when only some possible choices lead to a violation, but others do not. A bold agent may assume that as long as there is at least one path which avoids violation, then it will proceed without any further action. A cautious agent may assume that as long as there is at least one path which leads to violation, then it will adopt the appropriate preventive goal. A more indeterminate agent may conditionally schedule the preventive goal, but only at some point in the future (such as scheduling to refuel after several future trips of the Mars rover), or perhaps choose to avoid the earliest violation. A formal semantics for predicting violations would certainly allow such predictions to be made, but does not necessarily resolve all the issues to be considered in the decision to act or not.

A second way to predict violation is to use some form of domain knowledge. In the Mars rover example, it is clear that the maintenance goal of keeping the fuel above a certain level has some specific properties common to resource-based goals, in that the resources decrease over time, usually at a predictable rate, and can only be replenished by the agent taking specific action. The measurement of resource usage may also be imprecise (as seen in the possible and necessary properties in the resource summary approach of Thangarajah, Padgham, and Winikoff (2003a,b)). In such cases, it is generally simple to predict resource usage (within some limits), and the method to restore a violation is clear: refuel. Some more sophisticated possibilities include power management based on a model of demand that varies over time, where the goal is to maintain a slight surplus of supply over demand. If it is known that demand will have a sharp spike around 6.00 p.m., then an obvious preventative action is to increase supply ahead of this spike. A similar example is the scheduling of car maintenance. This is generally done based on how far the car has traveled, rather than a prediction of a specific mechanical fault at some specific point in the future. This kind of reasoning is more specific than can be provided in a necessarily general formal semantics.
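For resource-based goals consumed at a roughly constant rate, prediction reduces to linear projection. The following helper is an illustrative sketch of ours, not part of the semantics:

```python
def steps_until_violation(level, burn_rate, threshold):
    """Linear projection for a resource consumed at a (roughly) constant
    rate: returns the number of steps until level <= threshold, 0 if
    already violated, or None if the resource is not decreasing."""
    if burn_rate <= 0:
        return None  # resource not decreasing: no violation predicted
    surplus = level - threshold
    if surplus <= 0:
        return 0  # already violated
    return -(-surplus // burn_rate)  # ceiling division

# 60 units of fuel, 2 units burned per step, refuel threshold of 20:
# a violation is predicted 20 steps ahead, so the agent can schedule
# a preventative refuel before it occurs.
```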

A third possibility is to allow the agent designer to include their own prediction method. This seems particularly appropriate for sophisticated devices that may have specific service requirements to be undertaken, or for observing safety regulations (which tend to be overly cautious). This also allows for various techniques to be inserted as appropriate, such as machine learning, neural networks or Markov models.

For these reasons, we have included a function future in our semantics, which is not further specified. This is very much in the manner of the mer construct in the semantics of van Riemsdijk et al. (2008), in that it does not make a commitment in advance to any particular prediction method. As noted earlier, it is not obvious what the “right” method is, and it is entirely possible that there is no particular method that will suit all maintenance goals, and so we provide the semantics above for as broad a class of maintenance goals as possible.
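An implementation can treat the unspecified future function as an injection point, with a prediction strategy chosen per maintenance goal. The registry below is our own construction, meant only to illustrate this pluggability:

```python
class FutureRegistry:
    """Per-goal, pluggable implementations of the future function."""

    def __init__(self):
        self._predictors = {}

    def register(self, goal_name, predictor):
        # predictor(beliefs) -> bool: True if the maintain condition is
        # predicted to be violated at some future point.
        self._predictors[goal_name] = predictor

    def future(self, goal_name, beliefs):
        predictor = self._predictors.get(goal_name)
        # No registered strategy: predict nothing (a purely reactive goal).
        return predictor(beliefs) if predictor else False

registry = FutureRegistry()
# A domain-specific predictor for the rover's fuel goal (hypothetical
# belief names): will the planned trip leave too little fuel to return?
registry.register(
    "refuelP",
    lambda b: b["fuel"] - b["trip_cost"] <= b["distanceToDepot"])
```

The same registry could host a machine-learning or Markov-model predictor for one goal and a simple linear projection for another.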

4.4. Case Studies

We return to our example of the Mars rover, illustrating its behavior with reference to the formal semantics. We first describe the process using only reactive behavior, and then repeat the process with proactive behavior.

A maintenance goal with reactive behavior to manage an agent’s fuel can be represented as

  • g(refuelR, fuel > distanceToDepot, CAP, Maintaining, ε)

where CAP consists of the following condition/action pairs.

  • 〈state(refuelR) = Maintaining ∧ fuel ≤ distanceToDepot, ACTIVATE〉
  • 〈state(refuelR) = Active ∧ fuel = 100, MAINTAIN〉

The first rule states that if the rover is ever located at a point where the distance to the depot is equal to or exceeds the fuel remaining in the tank, it should activate the maintenance goal. This will cause a plan to be selected for this goal, with the aim being to have a state where the fuel level is 100%.

The second rule states that if the rover was trying to refuel (i.e., this goal’s state was Active), and the fuel level reached 100%, it should go back to Maintaining. This avoids the problem first expressed by Braubach et al. (2004) where a rover may only partially fill the tank due to satisfying the maintain condition.

An achievement goal in this example may be for the rover to move to location 6. This can be represented as the following goal.

  • g(to6, at(6), CAP, Pending, ε)

where CAP consists of the following condition/action pairs.

  • 〈state(to6) = Pending ∧ state(refuelR) ≠ Active, ACTIVATE〉
  • 〈state(refuelR) = Active ∧ ¬at(6), PEND〉
  • 〈at(6), DROP〉

In the first rule, the rover can activate this goal if the goal is Pending, and if the refuel goal is not Active. If the refuel goal is Active, and we attempt to adopt the to6 goal, the goals may interfere with one another, and so they are prevented from both being active simultaneously.

The second rule operates in a similar way, ensuring that if the refuel goal is adopted, the to6 goal transitions to the pending state, again to prevent interference (unless it is already at location 6, in which case the goal should be dropped).

The final rule allows the agent to drop the to6 goal once it is at location 6.

We show how the goal set of the agent evolves as it attempts to achieve the to6 goal. As the goal condition and CAP remain static for the life of each goal, they will not be repeated here. We begin with the rover located in location 2 with 6 units of fuel in its tank. The maintenance goal starts in the Maintaining state, while the to6 goal begins in the Pending state. For clarity, we will also keep explicit track of the fuel level as f(F), where F is the current fuel level. Hence we commence in the configuration below, where we write MG for g(refuelR, Maintaining, ε).

  • 〈{at(2), f(6)}, {MG, g(to6, Pending, ε)}〉

As the maintain condition is not triggered, the to6 goal is made active, a plan is found for it, and this plan commences execution.

  • 〈{at(2), f(6)}, {MG, g(to6, Active, m3; m4; m5; m6)}〉

At this point the fuel level is 4 with the rover at location 4. This means that the maintenance goal is activated, as the condition fuel ≤ distanceToDepot is now true. The to6 goal is then moved to Pending, the maintenance goal is made Active, and a plan is generated for it.

  • 〈{at(4), f(4)}, {g(refuelR, Active, m30), g(to6, Pending, m5; m6)}〉

where m30 is the sequence m3; m2; m1; m0. We now execute the plan for the maintenance goal, which refills the tank. The maintenance goal then returns to the Maintaining state and the original goal is reactivated.

  • 〈{at(0), f(100)}, {MG, g(to6, Active, m5; m6)}〉

At this point, the resumed plan fails, as the rover is no longer at position 4. A new plan is then found, which achieves the to6 goal, which is then dropped.

  • 〈{at(0), f(100)}, {MG, g(to6, Active, m1; m2; m3; m4; m5; m6)}〉
  • 〈{at(6), f(94)}, {MG}〉

4.5. Example of Maintenance Goals with Proactive Behavior

A maintenance goal with proactive behavior has the same structure as any other goal in the goal set, that is, g(name, goalcondition, CAP, state, π). Typically, CAP will include condition/action pairs that cause the maintenance goal to become Active when future predicts that its maintain condition will not hold, i.e., that future(¬goalcondition) holds. In these cases, the associated action is to ACTIVATE the maintenance goal.

We expect that achievement goals that should not run concurrently with this maintenance goal have a condition/action pair similar to the following

  • 〈state(refuelP) = Active, PEND〉

and an activation condition that is only satisfied if the maintenance goal is not active. The following example will clarify the behavior of maintenance and achievement goals in our system.

The previous example illustrated the rover performing useless actions, and then backtracking to recover from violating its maintenance goal. In this example, we illustrate how proactive behavior eliminates this backtracking and these useless actions.

A maintenance goal with proactive behavior to manage an agent’s fuel can be represented as the following.

  • g(refuelP, fuel > distanceToDepot, CAP, Maintaining, ε)

where CAP consists of the following condition/action pairs, and where we write willfail(F) for F ∧ future(¬F).

  • 〈state(refuelP) = Maintaining ∧ willfail(fuel > distanceToDepot), ACTIVATE〉
  • 〈state(refuelP) = Active ∧ fuel = 100, MAINTAIN〉

The first rule states that if the rover believes that in the future it will be at a point where the fuel is equal to or less than the distance to the depot, this maintenance goal should be activated. This is similar to the reactive behavior, with the difference that the agent anticipates that the maintain condition will fail. Note that we also require that the maintain condition is currently true, i.e., that fuel > distanceToDepot, to prevent the reactive behavior from being triggered at the same time as this one.
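The guard willfail(F) = F ∧ future(¬F) can be sketched directly. The helper below is our own illustration, with the rover's numbers from this example plugged in by hand:

```python
def willfail(holds_now, predicted_violation):
    """willfail(F) = F and future(not F): F holds now but is predicted
    to fail. Requiring F now keeps this proactive rule from firing at
    the same time as the reactive one (which requires not F)."""
    return holds_now and predicted_violation

# Rover at position 2 with 6 units of fuel, about to move toward 6:
fuel, dist = 6, 2
F = fuel > dist                    # maintain condition currently holds
# After two moves toward location 6: fuel 4, distance to depot 4.
future_not_F = (6 - 2) <= (2 + 2)  # future(not F) holds
```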

The second rule states that if the rover was trying to refuel (i.e., this goal’s state was Active), and the fuel level reached 100%, it should go back to the Maintaining state.

The achievement goal requires slight modification, only to indicate that it should not be active when this new maintenance goal is also active. The condition action pairs for this achievement goal are as follows.

  • 〈state(to6) = Pending ∧ state(refuelR) ≠ Active ∧ state(refuelP) ≠ Active, ACTIVATE〉
  • 〈state(refuelR) = Active ∧ ¬at(6), PEND〉
  • 〈state(refuelP) = Active ∧ ¬at(6), PEND〉
  • 〈at(6), DROP〉

The first rule states that if either maintenance goal is not active and the to6 goal is pending, it should be activated. The second and third rules state that if either maintenance goal is activated, this achievement goal should move to the pending state to avoid interference. The final rule remains the same, dropping this achievement goal once it has reached its desired location.

As in the previous example, we show how the goal set of the agent evolves as it attempts to achieve the to6 goal. The initial conditions are the same as before, the rover starting in location 2 with 6 units of fuel. The maintenance goals begin in the Maintaining state, while the to6 goal begins in the Pending state. To conserve space, we write MGR for g(refuelR, Maintaining, ε) and MGP for g(refuelP, Maintaining, ε). As above, we omit the goal conditions and CAP for each goal as they stay the same throughout execution.

As in the reactive case, we commence by making to6 active and generating a plan for it.

  • 〈{at(2), f(6)}, {MGR, MGP, g(to6, Active, m36)}〉

where we write m36 for the plan m3; m4; m5; m6.

At this point, willfail(fuel > distanceToDepot) becomes true: the current fuel level is 6 and the distance to the depot is 2 (so fuel > distanceToDepot holds), but future(fuel ≤ distanceToDepot) is satisfied. Hence the proactive maintenance goal MGP is activated and the to6 goal is moved to the Pending state while the maintenance goal executes, which results in the tank being filled.

  • 〈{at(2), f(6)}, {MGR, g(refuelP, Active, m1; m0), g(to6, Pending, m36)}〉

Once the tank is filled, the maintain condition is restored, and the maintenance goal goes back to the Maintaining state. The to6 goal is then resumed, and having found the original plan fails, it finds another plan, which succeeds and so the goal is dropped.

  • 〈{at(0), f(100)}, {MGR, MGP, g(to6, Active, m16)}〉
  • 〈{at(6), f(94)}, {MGR, MGP}〉

where we write m16 for the plan m1; m2; m3; m4; m5; m6.

4.6. Properties

The above example shows how the transition Rules 1–10 in Figure 12 work for the Mars rover. One question that may arise is whether these rules always specify a unique state for each goal, i.e., that there is at most one transition that is applicable at any point in execution. A full formal analysis of this and other similar properties is beyond the scope of this paper, but below we give an informal argument that will establish this property for the Mars rover example. Note that we need to consider not only Rules 1–10 themselves, but also the condition-action pairs CAP for each goal.

We will assume that each goal is initially in exactly one state. It is then not hard to see that as long as CAP is “well-behaved” (i.e., there is at most one action a such that 〈c, a〉 ∈ CAP and B ⊨ c), each goal will be in exactly one state, and there will be at most one of Rules 1–10 which is applicable.
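The “well-behaved” requirement is easy to check mechanically. The sketch below is our own helper for doing so, using (condition, action) tuples for CAP:

```python
def firing_actions(beliefs, cap):
    """Actions of all condition/action pairs whose condition B satisfies."""
    return [act for cond, act in cap if cond(beliefs)]

def well_behaved(beliefs, cap):
    """CAP is 'well-behaved' at B if at most one pair <c, a> has B |= c,
    so the goal's next state is uniquely determined."""
    return len(firing_actions(beliefs, cap)) <= 1
```

A goal designer could run this check over representative belief states when writing the rules in CAP, catching ambiguous pairs before execution.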

First, it is not hard to see that at most one of Rules 7–10 is applicable. To see this, first note that if Rule 7 is applicable (i.e., its premise is true), then none of Rules 8a, 8b, 9, or 10 can be applicable, as Rule 7 is only applicable if 〈c, DROP〉 ∈ CAP with B ⊨ c, in which case active(B, CAP) is false. For the same reason, the converse is also true, i.e., if any of Rules 8a, 8b, 9, and 10 is applicable, then Rule 7 is not. Clearly Rules 8a and 8b cannot be simultaneously applicable (as Π ≠ ε must hold for Rule 8a and Π = ε for Rule 8b), and similarly for Rules 9 and 10 (as Rule 9 requires 〈B, π〉 → 〈B′, π′〉 and Rule 10 requires the negation of this). Finally, as 8a and 8b both require π = ε and both of 9 and 10 require π ≠ ε, we can never have more than one of Rules 7, 8a, 8b, 9, or 10 being applicable.

Second, note that if any of Rules 8a, 8b, 9, or 10 are applicable, then none of Rules 1–6 are applicable. This is because for any of Rules 8a, 8b, 9, or 10 to be applicable, we must have that the goal is in the Active state and that active(B, CAP) holds. The former property means that none of Rules 1, 2, 5, or 6 is applicable, and the latter means that neither of Rules 3 or 4 is applicable. It then only remains to show that whatever state the goal is in, there is at most one action a such that 〈c, a〉 ∈ CAP and B ⊨ c. Hence, let us inspect the rules in CAP for each of the goals refuelR, refuelP and to6 in the above example.

  • refuelR: 〈state(refuelR) = Maintaining ∧ fuel ≤ distanceToDepot, ACTIVATE〉, 〈state(refuelR) = Active ∧ fuel = 100, MAINTAIN〉
  • refuelP: 〈state(refuelP) = Maintaining ∧ willfail(fuel > distanceToDepot), ACTIVATE〉, 〈state(refuelP) = Active ∧ fuel = 100, MAINTAIN〉
  • to6: 〈state(to6) = Pending ∧ state(refuelR) ≠ Active ∧ state(refuelP) ≠ Active, ACTIVATE〉, 〈state(refuelR) = Active ∧ ¬at(6), PEND〉, 〈state(refuelP) = Active ∧ ¬at(6), PEND〉, 〈at(6), DROP〉

For refuelR and refuelP, there are no DROP or PEND actions, and as there is only one rule for ACTIVATE and one for MAINTAIN, with each requiring the goal to be in a different state, it is clear that no more than one rule can be applicable at any time, and hence at most one action is performed.

For the to6 goal, the only nontrivial case is when the goal is in the Active state. If either (or both) of the maintenance goals become active, then one (or both) of the rules for PEND will become true, but only if the goal has not been achieved. Hence we can only ever have at most one of the PEND or DROP actions being applicable. So either no rule is applicable (in which case the state remains unchanged), or there is a unique new state for this goal.

Hence Rules 1–10 and the rules in CAP for each goal can be used to show that if the initial state contains exactly one state for each goal, then, as at most one transition rule is applicable at any time, each goal will be in exactly one state at every point in the execution. This may seem to be a trivial point, as clearly without this property the transition rules contain some ambiguity; however, when designing the rules in CAP for each goal, it is important to keep this in mind.


5. Experimental Evaluation

In this section, we provide empirical evidence of the benefits of proactive maintenance goals. We begin with an explanation of our Mars rover simulator and implementation, then detail several experiments and provide results that compare aspects of proactive maintenance goals. We conclude with a discussion of these results.

5.1. Overview

The experiment consists of a simulated Mars rover, based on the description provided in Section 3. The use of a Mars rover in experiments has been widely employed in the agent community (e.g., Steels 1990; Thangarajah et al. 2002; Meneguzzi and Luck 2007). Variations and scenarios similar to the Mars rover experiment also exist, for example, the carrier agent example from Hindriks and van Riemsdijk (2007). Although the specific details may differ between experiments and scenarios, most variants involve autonomous robots achieving goals with limited resources.

The objective of these experiments is to examine the behavior of maintenance goals, both reactive and proactive, in a variety of settings. To do so, the simulated rover will be given a list of locations to visit. These locations must be visited in the specified order. Our aim is not to determine how well an agent can find or optimize a particular route through the locations presented, which would be an instance of the well-known travelling salesman problem (see Schrijver (2005) for a comprehensive discussion of this problem). Instead, we will measure the behavior and performance of the maintenance goals in managing the rover’s fuel level. As described in the following section, the fuel used in each experiment will reflect the efficiency of the agent and the performance of the maintenance goal employed.

5.2. Experimental Setup

The experiment consists of a simulated Mars rover that moves about an environment. The environment is a flat surface, and we identify locations in this environment via their co-ordinates. For example, Figure 13 illustrates several locations in this environment.


Figure 13. Various locations.


All locations are within a 40-unit radius from the center of the map, where the single depot is located (at position (0,0)). The depot can be used by the Mars rover to refuel its fuel tank to maximum capacity.

The Mars rover can explore this environment by moving in any direction, 1 unit at a time, consuming some fuel in the process. This is a linear relationship, such that for each unit moved, 1 unit of fuel is consumed. For simplicity, there is no cost for turning or braking, and the rover moves at a fixed speed.

The rover always begins each experiment with a full tank, and starts at the depot. The rover is able to refill its fuel tank by moving to the depot and performing the refuel action. We assume that the depot has an unlimited supply of fuel for this experiment. The rover has the task of visiting several locations. The number of locations to visit varies for each experiment; in this case, it varies among 10, 100, 1000, and 10,000 randomly generated locations.

A location is a single point in the environment, represented as an (x, y) pair. The straight line distance between any location to visit and the depot is less than 40 units. This is to ensure that it is possible for the rover to visit any location with a full tank (100 units). As the maximum distance to a location is 40, a trip to the most remote location from the depot and back will consume at most 80 units of fuel.

Clearly the ability of the agent to predict its fuel usage is critically dependent on the accuracy of its estimation of distance. Hence we will include in our experiments an error rate in the agent’s estimation of distance. The agent estimates the distance between various locations, its current location, and the depot at various times. There are two parameters to defining the error rate, the upper and lower bounds, which are given in percentages of the correct value.

An error rate of plus or minus 20% has an upper bound of 120%, and a lower bound of 80%. Therefore, if an estimation is made on a distance that is 10 units, the estimation will return a value between 8 and 12 inclusive.

Fixed error rates can also be used by fixing the upper and lower bounds to be equal. For example, an error rate with an upper bound of 150% and a lower bound of 150% will always overestimate the correct distance by half: a 20-unit distance will have an estimate of 30 units. A similar approach can be used to always underestimate.
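This error model can be sketched in a few lines. The helper below is our own illustration of the bounded-estimation scheme just described:

```python
import random

def estimate(true_distance, lower_pct, upper_pct, rng=random):
    """Return a noisy distance estimate.

    lower_pct/upper_pct are percentages of the true value: an error rate
    of +/-20% uses (80, 120); fixing both bounds to 150 always
    overestimates by half (a 20-unit distance is estimated as 30)."""
    factor = rng.uniform(lower_pct, upper_pct) / 100.0
    return true_distance * factor
```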

Each new location presented to the rover is represented by an achievement goal, MoveTo, which moves the rover to a specified location. An appropriate plan for such a goal is to generate a sequence of unit steps in a straight line between the rover’s current location and the location it intends to reach. The rover processes only a single goal at a time, ensuring that the locations are visited in the presented order.

The agent has a maintenance goal to maintain its fuel level above 20%. We run the experiments with the maintenance goal that has only reactive behavior, and then the same experiments with the maintenance goal that has both reactive and proactive behavior. The reactive behavior becomes active when the fuel level is less than or equal to 20%, and the proactive behavior when the agent predicts, based on the currently adopted goals and plans, that pursuing the new task will violate the maintenance condition. In both cases, the appropriate plan for either behavior is to move to the depot (0,0) and refuel.

5.3. Definitions and Terminology

In our experiments we record the following measurements:

  • Goal-directed distance is the distance traveled when the rover moves from the location where it adopted the goal to the goal location, given that it does so uninterrupted. Essentially, this is the shortest distance between the location where it adopted the goal and the goal location.

  • Backtrack distance represents the distance the rover moves towards the refuelling depot.

  • Waste distance represents the additional distance the rover traveled that could have been avoided by moving directly to the depot.

    It is important to note that we consider both goal-directed and backtrack distances essential to normal operation of the rover. The objective is to minimize the waste distance.

Figure 14 illustrates these components. The Rover can either move toward the Goal, or return to the Depot to refuel. If the Rover moves toward the Goal, but has to backtrack to refuel before reaching it, the Waste distance is calculated as the distance traveled toward the Goal and back to its starting point, and not as the complete distance to the Depot.


Figure 14. Categorisation of movement types.


In the graphs presented in this section, goal-directed distances are shown in white, backtrack distances in gray, and waste in black.

Ideally, an agent should use as little fuel as possible in achieving its goals. To this end, the smaller the waste, the better the performance of the agent. Minimizing the goal-directed and backtracking distances, while possible, is a much harder problem.
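The accounting implied by Figure 14 can be sketched as follows; the bookkeeping helpers are ours and are only meant to make the three categories concrete:

```python
def waste_for_interrupted_leg(progress_toward_goal):
    """Per the Figure 14 rule: if the rover moves toward the goal but must
    return to its starting point before heading to the depot, only the
    out-and-back portion counts as waste; the trip to the depot itself is
    (essential) backtrack distance."""
    return 2 * progress_toward_goal

def summarize(goal_directed, backtrack, waste):
    """Combine the three distance categories for one trial."""
    total = goal_directed + backtrack + waste
    # The waste fraction is the quantity the agent should minimize.
    return {"total": total,
            "waste_fraction": waste / total if total else 0.0}
```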

Using the above measures we use the following terms to describe the results of each trial:

  • Complete indicates that the agent visited all the locations in the correct order, and did not run out of fuel at any time.

  • Stranded indicates that the agent failed to visit all the locations as it ran out of fuel sometime during its journey. We consider this a bad result, as it indicates that in a real situation, the rover would be stranded without fuel.

    This outcome is only possible if the agent underestimates how much fuel it requires to return to the depot to refuel. If it requires 10 units of fuel to return, but the agent believes only 5 units is required, it will only take action when 5 units remain, which in this case is too late. This outcome is only possible when error is introduced into the simulation.

  • Halted indicates an occasion where the agent believes that it is impossible to achieve a goal, and so does not attempt to pursue it. It therefore stops processing all goals and the simulation stops. Note the difference between this and being stranded. Here, the agent has not run out of fuel, but has rationally chosen not to perform any action, as it believes that the goal is impossible to achieve.

  • Looping indicates a trial where the rover is caught in a loop. In attempting to move to some location, a maintenance goal is triggered and so the agent moves to the depot to refuel. It then attempts to move to the original location, but again, the maintenance goal is triggered, and the cycle repeats. The agent cannot progress as it is trapped by its maintenance goal. This outcome is possible even in a situation without errors when using a naïve approach to reactive maintenance goals. If a goal is given that is impossible to reach, the agent will attempt to move toward the target, but require refueling when its fuel tank is half full. It returns to the depot to refuel, only to re-attempt the goal ad infinitum.

    In our experiments we stop any simulation that takes over 1,000,000 steps to complete. This is much larger than any possible route that does complete.
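The four outcomes above can be classified with some simple bookkeeping. This is a hypothetical sketch of ours; the simulator's own flags and counters are assumed:

```python
STEP_CAP = 1_000_000  # far larger than any route that actually completes

def classify_outcome(steps_taken, fuel, goals_remaining, halted):
    """Classify a trial into one of the outcomes described above."""
    if halted:
        return "Halted"      # agent rationally refuses an impossible goal
    if fuel <= 0 and goals_remaining > 0:
        return "Stranded"    # ran out of fuel mid-journey
    if steps_taken >= STEP_CAP:
        return "Looping"     # trapped by its maintenance goal
    if goals_remaining == 0:
        return "Complete"
    return "Running"         # trial still in progress
```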

5.4. Experimental Results

Each run of the experiment consists of a rover moving around a particular map. Each map consists of a fixed number of locations, ranging from 10 to 10,000. Each location is randomly generated, but its distance from (0,0) is always less than or equal to 40 units. The rover has a fuel tank of varying size, either 100 or 200 units in capacity.

We measure the goal-directed, backtrack and waste distances for each trial, as well as the overall outcome, which is one of complete, stranded, halted, or looping. At times, the rover will need to perform estimations; these estimations may be incorrect, influenced by the error rate of the particular experiment. We define the error rates for the reactive and proactive estimations separately, as they may use different algorithms in practice.

The purpose of these experiments is to verify the following statements:

  • In an error-free environment, if the proactive maintenance goal is present, the reactive maintenance goal is never activated.
  • In an error-prone environment, as the error rate increases, the reactive maintenance goal is activated more often.
  • In an error-prone environment, the performance of the proactive maintenance goal degrades gracefully.

We discuss our findings as follows: we first look at how maintenance goals behave in error-free environments, then in environments that consistently overestimate or underestimate the true distance to goals, and conclude with environments that are capable of both overestimation and underestimation.

5.4.1. Maintenance Goals in Error-Free Environments In this first situation (Figure 15), the estimation of the distance to be traveled by the rover is always correct. We compare how reactive maintenance goals and proactive maintenance goals behave in this environment for three different fuel tank capacities, and several different numbers of locations to visit.


Figure 15. Error-free environments.


As expected, the cases with proactive maintenance goals outperform the cases where only reactive maintenance goals are used. In all cases, all goals were achieved successfully. As the capacity of the fuel tank increased, the proportion of waste and backtracking distance relative to the overall distance traveled decreased.

On average, with 100 units of fuel, waste accounted for approximately 25% of the total distance traveled in the reactive case, and zero when proactive maintenance goals were also used. The total distance traveled when using proactive maintenance goals was approximately 75% of that when not using them; using proactive maintenance goals therefore saves (on average) 25% of the fuel.

5.4.2. Varying Errors in Maintenance Goals In the above experiment, the environment was error-free, and therefore estimations were always correct. In the real world, however, correct estimation cannot be relied upon. In this experiment, we randomly perturb each distance check the rover performs. The range of the error is limited to plus or minus 10, 20, 30, 40, 50, 60, 70, 80, 90, or 100%. For example, with an error rate of plus or minus 20% and a true distance of 10 units, the estimated distance could be any value between 8 and 12 units. The purpose of this experiment is to determine how well maintenance goals behave in more realistic environments.
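The bounded random error model above can be sketched as follows (a minimal illustration; the paper's actual noise implementation is not given, so the uniform distribution is an assumption):

```python
import random

def estimate_distance(true_distance, error_rate):
    """Return a noisy estimate: the true distance perturbed by up to ±error_rate
    (error_rate given as a fraction, e.g. 0.2 for ±20%)."""
    noise = random.uniform(-error_rate, error_rate)
    return true_distance * (1.0 + noise)

# With error_rate = 0.2 and a true distance of 10, estimates fall in [8, 12].
samples = [estimate_distance(10.0, 0.2) for _ in range(1000)]
assert all(8.0 <= s <= 12.0 for s in samples)
```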

The results are illustrated in Figure 16(a). As the error rates increase, the rover becomes unable to visit all the goals and complete a trial, due to becoming stranded at some point, as illustrated in Figures 16(b) and 16(c). Similar results hold for all map sizes, except that for larger maps failure occurs at every error rate other than 0%.


Figure 16. Environments with varying degrees of error.


When using proactive maintenance goals, some successful trials occurred for error rates up to ±20%, but only in the smallest map size. Trials using only reactive maintenance goals failed whenever errors were present.

It appears that in the cases with errors and a small map size, the rover was “lucky” and its estimations were generally safe. As the map size increased, and therefore more estimations were performed, it became more likely that a poor estimation would be generated, leading the rover to become stranded.

Even doubling the maximum capacity of the fuel tank did little to improve the success rate of the rover in the presence of errors. It did, however, reduce the amount of fuel used when successful.

In the next experiments, we will aim to determine the effects overestimation and underestimation have on maintenance goals separately.

5.4.3. Overestimation in Maintenance Goals In this experiment, the rover overestimates the distance to be traveled. For example, if the true distance to travel is 10 units and the error rate is 120%, the rover believes it needs to travel 12 units. Figure 17 summarizes the results of this experiment.
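The consistent-error model is simply a fixed scaling of the true distance by the error rate. A minimal sketch (the function name is ours, not the authors'):

```python
def estimated_distance(true_distance, rate_percent):
    """Scale the true distance by a fixed error rate given as a percentage.
    A rate above 100 models consistent overestimation."""
    return true_distance * rate_percent / 100.0

# Matches the example in the text: a true distance of 10 at 120% is believed to be 12.
assert estimated_distance(10.0, 120) == 12.0
```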


Figure 17. Overestimation.


In this experiment, we found that as error rates increased, successful completion of all goals decreased. This is most apparent in the case of a limited fuel tank. The reason is that moderate or high overestimation leads the agent to believe that some locations are too far away to visit and return from, even with a full fuel tank; the agent therefore halts all future goals. In the case where only reactive maintenance goals are used, these failed attempts often result in looping behavior. Some results are illustrated in Figure 18. The only possible outcomes are for the rover to visit all locations (complete), to loop in the case of only reactive maintenance goals, or to halt in the case when proactive maintenance goals are also used.


Figure 18. Overestimate outcome frequency.


In the case of the 200-unit fuel tank, all attempts were successful, even with an error rate of 200%. This is because the maximum distance a goal can be from the depot is 40 units; even with 200% error, the rover believes it to be at most 80 units from the depot, so a trip there and back remains under the 200-unit fuel cap.
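This worst-case bound can be checked by direct arithmetic, following the figures stated in the text:

```python
MAX_RADIUS = 40.0       # maximum distance of any location from the depot
ERROR_RATE = 2.0        # 200% consistent overestimation
TANK_CAPACITY = 200.0

believed_one_way = MAX_RADIUS * ERROR_RATE   # the rover believes at most 80 units
believed_round_trip = 2 * believed_one_way   # believed round trip: 160 units
# The believed round trip never exceeds the tank, so no goal is judged impossible.
assert believed_round_trip < TANK_CAPACITY
```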

Furthermore, it was generally the case that not using proactive maintenance goals allowed the rover to successfully visit all the goals at higher error rates than when using proactive maintenance goals. Beyond an error rate of 130%, cases using the proactive maintenance goal began to fail, whereas the reactive maintenance goal continued to achieve 100% completion until 160%.

Interestingly, as the error rates increased, the total distance traveled only increased slightly.

5.4.4. Underestimation in Maintenance Goals In this experiment, the rover continually underestimates distances. For example, if the true distance were 10 units and the error rate 80%, the rover would expect to travel only 8 units.

Here, underestimation severely limited the success rate of the rover. Once the number of locations to visit exceeded 10, it was rare that all goals were achieved. The results are shown in Figure 19.


Figure 19. Underestimation.


In this situation, using proactive maintenance goals in conjunction with reactive maintenance goals fared slightly better than using reactive maintenance goals alone. This is apparent in the experiments where the map size was 10, as shown in Figure 20(a–d). In these cases, reactive maintenance goals failed as soon as underestimation was introduced, while adding proactive maintenance goals allowed small amounts of underestimation (error rates down to 80%) to occur with some attempts still completing successfully.


Figure 20. Underestimate outcome frequency.


Once the number of goals increases beyond 10 locations, however, the reactive-only and combined reactive-plus-proactive configurations produce the same results, failing due to becoming stranded. Figures 20(e–f) illustrate this for a map size of 100, and identical results appear for the map sizes of 1000 and 10,000. Due to the underestimation, the maintenance goals are triggered later than required, so the rover’s fuel is often inadequate to return to the depot. In some cases, especially those with a small number of locations, the rover may be fortunate and not need to refuel to visit all locations. As the number of goals increases, however, it becomes more likely that refueling will be required but the rover’s fuel will be inadequate.
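Why underestimation strands the rover can be illustrated with a small hypothetical sketch of the reactive trigger (the function and its threshold rule are our illustration, not the paper's implementation): the condition compares remaining fuel against the estimated, not true, distance back to the depot.

```python
def should_refuel(fuel_remaining, true_distance_to_depot, rate_percent):
    """Reactive check: trigger refueling once remaining fuel no longer covers
    the *estimated* distance back to the depot."""
    estimated = true_distance_to_depot * rate_percent / 100.0
    return fuel_remaining <= estimated

# With 80% underestimation the trigger fires too late: the rover believes
# 8 units of fuel suffice for what is truly a 10-unit return trip.
assert not should_refuel(fuel_remaining=9.0, true_distance_to_depot=10.0, rate_percent=80)
# With an accurate estimate, refueling would already have been triggered.
assert should_refuel(fuel_remaining=9.0, true_distance_to_depot=10.0, rate_percent=100)
```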

Increasing the size of the fuel tank aided only slightly in increasing the number of successfully completed attempts with a map size of 10. Even with a 200-unit fuel tank, when underestimation error was present, no attempt completed all goals.

5.5. Discussion

We have demonstrated in Section 5.4.1 that in ideal settings proactive maintenance goals lead to more efficient behavior than using reactive maintenance goals alone. In this ideal setting, the reactive maintenance goal was never employed (that is, there was no waste) when proactive maintenance goals were present. As resource availability increases, the proportion of waste decreases.

When there is error in the estimation (Section 5.4.2), positive and negative error rates greatly affect the performance and success rate of the rover experiments. In almost all cases, the rover becomes stranded. Further experiments were required to determine the cause of these failures.

When overestimating (Section 5.4.3), problems can occur with both reactive and proactive maintenance goals. This is especially apparent in the proactive case. The agent becomes “overzealous”: it does not attempt goals that it believes it cannot achieve (which is rational, since its beliefs state that the goal is impossible). When goals are close to the limits of the agent’s capabilities, overestimation can cause the agent to believe these goals are impossible when they can in fact be achieved; the agent then halts instead of acting. When adequate resources are present, however (as in the case of the 200 and 300 capacity fuel tanks), no such problems arise, and an increase in error rate causes only a slight increase in the consumption of resources.

Underestimating (Section 5.4.4) can lead to problems for both reactive and proactive maintenance goals. In the case of reactive maintenance goals, looping can occur, leading to a huge consumption of resources. These problems could be addressed by including additional reasoning in the deliberation portion of the agent, to avoid repeated attempts at goals that are impossible.


6. Conclusion

The use of intelligent agents is increasing, especially where requirements include timely response and goal-directed behavior in environments that can change rapidly over time. Goal-based agents, such as those that follow the BDI paradigm, are particularly suited to these tasks.

Achievement goals are the most common form of goal found in agent systems, driving agents to perform actions to accomplish particular states. Maintenance goals are also common; rather than realizing some state, they cause an agent to perform actions to keep some state true.

Current implementations of the agent paradigm, such as Jadex, JACK, and Jam, have support for maintenance goals. However, in these frameworks, maintenance goals are utilized in a reactive manner.

Reactive maintenance goals have been shown to be limited in their ability. They generally act as triggers for plans or achievement goals, similar to the manner in which achievement goals are utilized. Reactive maintenance goals have no influence over the agent’s behavior until an associated maintenance condition is no longer met, at which point the agent attempts to repair the condition.

In contrast to this, we have presented proactive maintenance goals. Proactive maintenance goals influence the agent’s behavior continually, aiming to ensure that the maintenance condition is never violated. If the agent can anticipate that failure is imminent, based on its planned actions, it performs actions that aim to prevent the failure from occurring. In cases where no action can avoid the failure of maintenance goals, the original actions should not be pursued.
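The contrast between the two behaviors can be sketched as two predicates over a fuel resource (a simplified illustration under our own assumptions; the paper's reasoning algorithms are more general):

```python
def reactive_violated(fuel, reserve=0.0):
    """Reactive maintenance goal: act only once the maintenance
    condition (fuel above the reserve) is already violated."""
    return fuel <= reserve

def proactive_violation_anticipated(fuel, planned_cost):
    """Proactive maintenance goal: act if executing the currently planned
    actions would drive fuel below the maintenance condition."""
    return fuel - planned_cost <= 0.0

# The proactive check fires before any violation has actually occurred:
assert not reactive_violated(fuel=30.0)
assert proactive_violation_anticipated(fuel=30.0, planned_cost=35.0)
```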

Constraints are one example where no appropriate recovery or preventative action is possible; the only satisfactory solution is therefore to avoid actions that lead to maintenance condition failure.

After analysing the behavior of maintenance goals in several case studies, we developed a representation of maintenance goals that captures both reactive and proactive behaviors. Algorithms for reasoning about these maintenance goals were developed and then formalized in Section 4. Our formalism was used to illustrate and prove various ideas raised in the analysis of the case studies.

In Section 5, we took an experimental approach to analysing the effectiveness of maintenance goals. Several experiments illustrated that using proactive maintenance goals in addition to reactive maintenance goals outperformed reactive maintenance goals alone. One important variable was the error rate in the agent’s perception of the environment. This was altered to reflect occasions when the agent underestimated the true resource requirements, as well as when it overestimated these requirements.

Our experimental findings suggest that when an agent checks its maintenance conditions, the process employed with proactive maintenance goals should not underestimate, as this has the potential to lead the agent to abandon goals it believes to be impossible. This is particularly significant because the method of determining whether maintenance conditions will be violated by the agent’s planned actions is based on the agent’s beliefs, which can be inaccurate.

Similar findings hold for reactive maintenance goals: they should not overestimate, as this can cause the agent to run low on resources prematurely. As demonstrated in the experiments, the results of this can in some cases be severe.

While proactive maintenance goals have been identified and introduced in this paper, much work remains as to how maintenance goals will be utilized in agent systems in the future.

We have investigated the use of resource summaries as a heuristic. We believe that the experiments we performed using this heuristic can act as a guide to how alternative heuristics may perform when inaccurate. Alternative implementations could include planning-based approaches, history-based approaches, or perhaps neural networks; their suitability for this task could be investigated in the future.

One idea mentioned in this paper is the concept of pruning. At the moment, when an achievement goal conflicts with a maintenance condition, the achievement goal is suspended until the maintenance goal’s preventative (or recovery) goal is performed. An alternative approach may be to determine which particular portion of the goal (such as a sub-goal or plan) is causing the conflict, and to find an alternative to that portion. This is of particular importance for solutions that use a planning-based approach to plan generation and selection. Refinement of a goal, as found in Hindriks and van Riemsdijk (2007), is another possibility for integration with our approach. If a goal consisting of several parts is causing conflict, it may be possible to remove or weaken portions of the goal so that it no longer does so. The process for accomplishing this, as well as determining when this approach is warranted, should be investigated in the future.

One aspect not addressed in this work is agent design with proactive maintenance goals. We mentioned that at design time, a developer may consider a maintenance goal as a single unit with both reactive and proactive behaviors. In our representation, reasoning, and experimental implementation, these were treated as two separate maintenance goals with either reactive or proactive behavior. Comparing these approaches, and determining design patterns or “best-practice” models for maintenance goals, remains future work.


References
  • Baral, C., and T. Eiter. 2004. A polynomial-time algorithm for constructing k-maintainable policies. In Principles of Knowledge Representation and Reasoning: Proceedings of the Ninth International Conference (KR2004), Whistler, Canada, June 2–5, pp. 720–730.
  • Baral, C., T. Eiter, M. Bjäreland, and M. Nakamura. 2008. Maintenance goals of agents in a dynamic environment: Formulation and policy construction. Artificial Intelligence, 172(12–13):1429–1469.
  • Bellifemine, F., A. Poggi, and G. Rimassa. 1999. JADE – a FIPA-compliant agent framework. In Proceedings of the Practical Applications of Intelligent Agents, Vol. 99. The Practical Application Company Ltd.: London, UK, pp. 97–108.
  • Bordini, R. H., and J. F. Hübner. 2005. BDI agent programming in AgentSpeak using Jason (tutorial paper). In CLIMA VI. Edited by F. Toni and P. Torroni, Vol. 3900 of Lecture Notes in Computer Science. Springer: Berlin Heidelberg, pp. 143–164.
  • Braubach, L., A. Pokahr, W. Lamersdorf, and D. Moldt. 2004. Goal representation for BDI agent systems. In Second International Workshop on Programming Multiagent Systems: Languages and Tools, New York, pp. 9–20.
  • Darimont, R., E. Delor, P. Massonet, and A. van Lamsweerde. 1997. GRAIL/KAOS: An environment for goal-driven requirements engineering. In ICSE ’97: Proceedings of the 19th International Conference on Software Engineering. ACM Press: New York, pp. 612–613.
  • Dastani, M., F. Dignum, and J. Meyer. 2000. 3APL: A programming language for cognitive agents. ERCIM News, (53):28–29.
  • Dastani, M., M. B. van Riemsdijk, F. Dignum, and J.-J. Ch. Meyer. 2003. A programming language for cognitive agents: Goal directed 3APL. In Proceedings of the First International Workshop on Programming Multiagent Systems 2003. Springer: Berlin Heidelberg, pp. 111–130.
  • Dastani, M., M. B. van Riemsdijk, and J.-J. Meyer. 2006. Goal types in agent programming. In AAMAS ’06: Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 1285–1287.
  • Duff, S., J. Harland, and J. Thangarajah. 2006. On proactivity and maintenance goals. In AAMAS ’06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 1033–1040.
  • Evans, R. 2002. Varieties of learning. In AI Game Programming Wisdom, pp. 567–578.
  • Georgeff, M. P., and F. F. Ingrand. 1989. Decision-making in an embedded reasoning system. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Vol. 2. Morgan Kaufmann: San Francisco, pp. 972–978.
  • Hindriks, K. V., F. S. De Boer, W. van der Hoek, and J.-J. Ch. Meyer. 1999. Agent programming in 3APL. Autonomous Agents and Multi-Agent Systems, 2(4):357–401.
  • Hindriks, K. V., and M. B. van Riemsdijk. 2007. Satisfying maintenance goals. In Declarative Agent Languages and Technologies V, 5th International Workshop, DALT 2007, Honolulu, HI, USA, May 14, 2007, Revised Selected and Invited Papers. Edited by M. Baldoni, T. C. Son, M. B. van Riemsdijk, and M. Winikoff, Vol. 4897 of Lecture Notes in Computer Science. Springer: Berlin Heidelberg, pp. 86–103.
  • Hindriks, K. V., and M. B. van Riemsdijk. 2008. Using temporal logic to integrate goals and qualitative preferences into agent programming. In Declarative Agent Languages and Technologies VI: 6th International Workshop. Edited by M. Baldoni, T. C. Son, M. B. van Riemsdijk, and M. Winikoff, Vol. 5397. Springer-Verlag: Berlin Heidelberg, pp. 215–232.
  • Huber, M. J. 1999. Jam: A BDI-theoretic mobile agent architecture. In AGENTS ’99: Proceedings of the Third Annual Conference on Autonomous Agents. ACM Press: New York, pp. 236–243.
  • Hübner, J. F., R. H. Bordini, and M. Wooldridge. 2006. Plan patterns for declarative goals in AgentSpeak. In AAMAS ’06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 1291–1293.
  • Kaminka, G. A., A. Yakir, D. Erusalimchik, and N. Cohen-Nov. 2007. Towards collaborative task and team maintenance. In AAMAS ’07: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems. ACM: New York, pp. 1–8.
  • Lee, J., M. J. Huber, E. H. Durfee, and P. G. Kenny. 1994. UM-PRS: An implementation of the procedural reasoning system for multirobot applications. In Proceedings of the Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS), Vol. 94, pp. 842–849.
  • Ljungberg, M., and A. Lucas. 1992. The OASIS air-traffic management system. In Proceedings of the Second Pacific Rim International Conference on Artificial Intelligence (PRICAI ’92), Seoul, Korea.
  • Meneguzzi, F. R., and M. Luck. 2007. Motivations as an abstraction of meta-level reasoning. In CEEMAS 2007: Proceedings of the 5th International Central and Eastern European Conference on Multi-Agent Systems and Applications V. Edited by H.-D. Burkhard, G. Lindemann, R. Verbrugge, and L. Z. Varga, Vol. 4696 of Lecture Notes in Computer Science. Springer: Berlin Heidelberg, pp. 204–214.
  • Muscettola, N., P. Nayak, B. Pell, and B. Williams. 1998. Remote Agent: To boldly go where no AI system has gone before. Artificial Intelligence, 103(1–2):5–47.
  • Nakamura, M., C. Baral, and M. Bjäreland. 2000. Maintainability: A weaker stabilizability like notion for high level control. In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence. AAAI Press/The MIT Press: Cambridge, MA, pp. 62–67.
  • O’Brien, P. D., and R. C. Nicol. 1998. FIPA – Towards a standard for software agents. BT Technology Journal, 16(3):51–59.
  • Pokahr, A., L. Braubach, and W. Lamersdorf. 2003. Jadex: Implementing a BDI-infrastructure for JADE agents. EXP – in search of innovation (Special Issue on JADE), 3(3):76–85.
  • Pokahr, A., L. Braubach, and W. Lamersdorf. 2005a. A goal deliberation strategy for BDI agent systems. In Proceedings of the Third German Conference on Multi-Agent System TEchnologieS (MATES-2005). Springer-Verlag: Berlin Heidelberg, pp. 82–94.
  • Pokahr, A., L. Braubach, and W. Lamersdorf. 2005b. Jadex: A BDI reasoning engine. In Multi-Agent Programming. Edited by R. Bordini, M. Dastani, J. Dix, and A. E. F. Seghrouchni. Springer: New York.
  • Rao, A. S. 1996. AgentSpeak(L): BDI agents speak out in a logical computable language. In MAAMAW ’96: Proceedings of the 7th European Workshop on Modeling Autonomous Agents in a Multi-Agent World: Agents Breaking Away. Springer-Verlag: New York, pp. 42–55.
  • Rao, A. S., and M. P. Georgeff. 1991. Modeling rational agents within a BDI-architecture. In Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference. Morgan Kaufmann: San Mateo, CA, pp. 473–484.
  • Rao, A. S., and M. P. Georgeff. 1992. An abstract architecture for rational agents. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning. Edited by C. Rich, W. Swartout, and B. Nebel. Morgan Kaufmann Publishers: Cambridge, MA, pp. 439–449.
  • Rao, A. S., and M. P. Georgeff. 1995. BDI-agents: From theory to practice. In Proceedings of the First International Conference on Multiagent Systems (ICMAS’95). MIT Press: Cambridge, MA, pp. 312–319.
  • Sardina, S., L. De Silva, and L. Padgham. 2006. Hierarchical planning in BDI agent programming languages: A formal approach. In AAMAS ’06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 1001–1008.
  • Sardina, S., and L. Padgham. 2007. Goals in the context of BDI plan failure and planning. In AAMAS ’07: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 16–23.
  • Schrijver, A. 2005. On the history of combinatorial optimization (till 1960). In Handbook of Discrete Optimization. Edited by K. Aardal, G. Nemhauser, and R. Weismantel. Elsevier: Amsterdam, the Netherlands, pp. 1–68.
  • Steels, L. 1990. Cooperation between distributed agents through self-organization. In IEEE International Workshop on Intelligent Robots and Systems 1990. Edited by Y. Demazeau and J.-P. Müller. IEEE: New York, pp. 8–14 (suppl.).
  • Thangarajah, J., J. Harland, D. Morley, and N. Yorke-Smith. 2007. Aborting tasks in BDI agents. In AAMAS ’07: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 1–8.
  • Thangarajah, J., J. Harland, D. Morley, and N. Yorke-Smith. 2008. Suspending and resuming tasks in BDI agents. In AAMAS ’08: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 405–412.
  • Thangarajah, J., L. Padgham, and M. Winikoff. 2003a. Detecting and avoiding interference between goals in intelligent agents. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI2003). Academic Press: New York, pp. 721–726.
  • Thangarajah, J., L. Padgham, and M. Winikoff. 2003b. Detecting and exploiting positive goal interaction in intelligent agents. In AAMAS ’03: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 401–408.
  • Thangarajah, J., M. Winikoff, L. Padgham, and K. Fischer. 2002. Avoiding resource conflicts in intelligent agents. In Proceedings of the 15th European Conference on Artificial Intelligence 2002 (ECAI 2002). IOS Press: Amsterdam, the Netherlands, pp. 18–22.
  • van Riemsdijk, B., W. van der Hoek, and J.-J. C. Meyer. 2003. Agent programming in Dribble: From beliefs to goals using plans. In AAMAS ’03: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems. ACM Press: New York, pp. 393–400.
  • van Riemsdijk, M. B., M. Dastani, and M. Winikoff. 2008. Goals in agent systems: A unifying framework. In AAMAS ’08: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, pp. 713–720.
  • Winikoff, M. 2005. JACK intelligent agents: An industrial strength platform. In Multi-Agent Programming, Vol. 15 of Multiagent Systems, Artificial Societies, and Simulated Organizations. Springer: New York, pp. 175–193.
  • Winikoff, M., L. Padgham, J. Harland, and J. Thangarajah. 2002. Declarative and procedural goals in intelligent agent systems. In Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002), Toulouse, France, pp. 470–481.