In their everyday lives, people flexibly handle many tasks. They work on one at a time, make quick decisions about which one to work on, and put aside a task whenever attending to it is not required to achieve its goal. This last capability is critical because, rather than fixate on a blocked task, a person can work on some other task. For example, a person making bean soup wouldn't watch the beans as they soak overnight. Instead she would take the inability to affect the progress of making bean soup as an opportunity to work on some other, possibly less important task where progress is possible. When taking advantage of these opportunities, people don't completely forget what they were doing. Instead, the put-aside tasks guide the selection of the new tasks. After the beans have soaked and the soup is simmering, a person might go out to get other chores done, but she probably would not forget the soup and eat out. By putting aside a blocked task and remembering it at the appropriate time, a person can complete many other tasks and still accomplish the blocked one in a timely manner.

My thesis work describes how an artificial agent, called Laureli, can maintain goal-directed behavior while suspending her blocked tasks in much the way described above. Laureli serves as an exemplar agent, grounding in detailed and understandable examples the problems that occur when she suspends her tasks for later reactivation. Laureli's suspension-reactivation mechanisms provide for interleaving more available tasks during a task's {\it slack} time (the time while the task is blocked). A task's {\it availability} is defined as how likely the agent expects working on the task is to make progress toward the task's goal. A task's availability changes over time, and depends on both the agent's actions and input from the environment. Laureli's suspension-reactivation mechanisms are her method for representing large changes in a task's availability over time.

Representing a task's availability to the agent is important because the agent can then better schedule its tasks as it executes them. If the task's availability over time is known in advance, the agent can use that knowledge to generate a schedule that it can simply follow at execution time. However, in many cases the agent doesn't know, or cannot easily determine, how a task's availability will change over time. The second step in making bean soup, "boiling the beans for an hour", is such an example. Perhaps Laureli could measure the water, look up the specific heat of the beans, and then, using the effective heat transfer from the stove, calculate how long the water will take to boil. However, she could also just put the pan of beans and water on the stove, stay in the area, and occasionally check whether the water is boiling. My thesis advocates this second approach: when the agent doesn't have all the availability knowledge, it monitors a task's execution rather than attempting to schedule the task.

Suspending and reactivating tasks is similar to deliberately excluding the suspended tasks from Laureli's decisions about what action to take. However, some decisions based on this smaller set of tasks will be functionally different from similar decisions that consider all the tasks. Since these decisions affect the agent's external behavior, suspending tasks can affect the agent's apparent {\it rationality} in achieving its goals.
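The following Python sketch makes the monitor-rather-than-schedule behavior described above concrete. It is only a minimal illustration written for this proposal, not Laureli's actual mechanisms: the names (\verb|Task|, \verb|agent_loop|, \verb|work_on|), the availability tests, and the policy of giving reactivated tasks priority are simplifying assumptions.

\begin{verbatim}
import time

class Task:
    def __init__(self, name, is_available):
        self.name = name
        self.is_available = is_available   # callable: True when progress is possible

def work_on(task):
    print("working on", task.name)         # stand-in for one step of progress

def agent_loop(tasks, cycles=10):
    """Each cycle: suspend blocked tasks, reactivate tasks whose
    availability has returned, then work on one available task."""
    suspended, active = [], list(tasks)
    for _ in range(cycles):
        for t in list(active):              # suspend tasks that are blocked
            if not t.is_available():
                active.remove(t)
                suspended.append(t)
        for t in list(suspended):           # reactivate tasks no longer blocked,
            if t.is_available():            # giving them priority so they are
                suspended.remove(t)         # not forgotten (a simplification)
                active.insert(0, t)
        if active:
            work_on(active[0])
        time.sleep(0.1)                     # check availability again shortly

start = time.time()
soup = Task("bean soup", lambda: time.time() - start > 0.5)  # blocked at first
errands = Task("errands", lambda: True)                      # always available
agent_loop([soup, errands])
\end{verbatim}

In this toy run the agent works on errands while the soup task is blocked, then returns to the soup once its availability comes back, without ever having computed a schedule for it in advance.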
In many cases, Laureli can act similarly to an agent with access to all its tasks, because she can access all the tasks that affect the decision at hand. Conflicting tasks are accessible because, when Laureli suspends a task, she builds special conflict-detecting reactivation rules. These rules detect potentially selectable tasks that, if selected, would conflict with the suspended task. The rules monitor Laureli's choices, pointing out conflicts. At some level of detail, these are the same conflicts that would be considered if she were scheduling the tasks before execution. Similar rules are expected to make synergistic tasks accessible. In the cases where Laureli doesn't have access to all her tasks, I am investigating mechanisms that might enable her behavior to more closely approximate her behavior with access to all her tasks. I am also investigating when suspension is worth the extra effort of these special mechanisms.
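The flavor of a conflict-detecting reactivation rule can be illustrated with a small Python sketch. The rule representation and the stove-burner conflict test below are simplifying assumptions made for illustration, not the actual rule language: a rule is built when a task is suspended and then watches candidate selections for conflicts.

\begin{verbatim}
def make_conflict_rule(suspended_task, conflicts):
    """Build a rule that watches candidate task selections and reports
    any candidate that would conflict with the suspended task."""
    def rule(candidate):
        if conflicts(suspended_task, candidate):
            return "selecting '%s' conflicts with suspended '%s'" % (
                candidate, suspended_task)
        return None
    return rule

# Example conflict: two tasks that both need the single stove burner.
needs_stove = {"simmer soup", "fry onions"}
conflicts = lambda a, b: a in needs_stove and b in needs_stove

# Built at suspension time, checked at each selection.
rules = [make_conflict_rule("simmer soup", conflicts)]

for candidate in ["fry onions", "run errands"]:
    warnings = [r(candidate) for r in rules if r(candidate) is not None]
    print(candidate, "->", warnings or "no conflict")
\end{verbatim}

Here the rule flags frying onions as conflicting with the suspended soup task, while running errands passes through unflagged.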