Queueing network performance analysis (qn) style for Aesop

* The tar file should produce a directory "qn" containing the Aesop style
  definition. This should be a subdirectory of your aesop directory.
  aesop/qn/PerfTool contains the performance analysis tool. The tool uses
  the acme-library, which is not included since it is available separately.

* To use the qn style in Aesop, add the following line to
  aesop/lib/styles.dir:

      qn "Queueing Network Analysis Style"

* To build the performance analysis tool you will have to change the
  library and include paths in aesop/qn/PerfTool/Makefile; in particular,
  the path to the acme-library. They are currently

      IPATHS = -I. -I/afs/cs/project/able/libs/acme-library/include -I/usr/local/lib/g++-include
      LPATHS = -L/afs/cs/project/able/libs/acme-library/lib

* Using the qn style in Aesop

  This is a simple asynchronous message-passing style with one type of
  component (Server) and one type of connector (Async).

  From the point of view of performance analysis, a system described in
  this style processes requests. At any point in time, a request exists as
  one of two things: it may be a message in one Server's queue (or
  otherwise in transit to that Server), or it may be undergoing service at
  one Server. Service ends either with the request completing entirely or
  with the Server issuing one message to another Server; in the latter case
  the request is now represented by the new message in the new Server's
  queue, and the first Server begins processing the next message in its own
  queue. The Server's queueing discipline (FIFO, round-robin, etc.) is not
  important, but at any point in time a Server can be working on at most
  one request. Servers are assumed to run on separate machines and not to
  compete for computing resources.

  As with the pipe-and-filter style, an input and an output to the outside
  exist at the top level. (They could be removed for a style in which no
  requests originate outside the system.) Requests which arrive from
  outside are assumed to occur with a memoryless probability distribution,
  the average rate of which is specified in the System workshop (accessed
  from the Design menu, Open System Props).

  A Server is allowed multiple output ports, but only one input port, as a
  reminder of the underlying performance-analysis restriction that each
  server is treated as having a single input queue. (The style could be
  modified to allow multiple input ports on a Server, so long as this
  restriction is kept in mind.)

  A Server can both service requests and spontaneously generate requests.
  (The style could be modified to add component types which can do only
  one, or neither.) Request generation is assumed to occur with a
  memoryless probability distribution, the average rate of which is
  specified in the Server workshop. Generating a request is assumed to take
  the Server no time (to simplify analysis).

  A Server takes some amount of time to service a request (process a
  message). This "service time" is assumed to have a memoryless probability
  distribution, with the average specified in the Server workshop.

  Any Server can be replicated N times, where N is specified in the
  workshop. The replicated Server still has a single queue, and a message
  at the head of the queue is removed from the queue by whichever of the N
  replicas is free. (For example, a database replicated 3 times on separate
  machines.) Replicating a Server which generates requests has the effect
  of multiplying the rate at which it issues requests. (For example, a
  Server which represents a user who issues database requests at a rate R
  could be replicated 100 times to represent 100 such users, who altogether
  issue requests at a rate 100*R.)
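  The arrival rates, service times, and replication counts above are the
  inputs to a standard open queueing network calculation. As a rough sketch
  of the per-Server part of such a calculation (in Python, with hypothetical
  helper names; it assumes each N-fold replicated Server is modelled as an
  M/M/N queue, which is an assumption here, not a documented property of
  the tool):

      from math import factorial

      def erlang_c(c, a):
          # Erlang C: probability that an arriving request must queue,
          # for c identical replicas and offered load a = lambda/mu (a < c).
          queued = a**c / (factorial(c) * (1.0 - a / c))
          return queued / (sum(a**k / factorial(k) for k in range(c)) + queued)

      def residence_time(arrival_rate, service_time, replicas=1):
          # Mean time a request spends at one Server (waiting + service),
          # treating the N-fold replicated Server as an M/M/N queue.
          mu = 1.0 / service_time
          a = arrival_rate * service_time              # offered load, lambda/mu
          assert a < replicas, "Server is saturated"
          wait = erlang_c(replicas, a) / (replicas * mu - arrival_rate)
          return wait + service_time

  For an unreplicated Server this reduces to the familiar S / (1 - lambda*S):
  the service time inflated by queueing as utilization grows.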
  An Async connector represents asynchronous message passing. In the Async
  workshop, a propagation delay can be associated with the connector.

* Walkthrough #1

  A simple system in which you already know most of the performance
  behavior. Create two components and one connector arranged in a pipeline.

  Double-click on a component to open its workshop. Assign it a service
  time of 5 (milliseconds) and an "optional visits per task" of 1; this
  means that each request coming from the outside will visit this component
  an average of once. Do the same to the other component.

  Open the connector workshop. Assign it a delay time of 1 (ms) and a
  visits per task of 1.

  Pull down the Design menu and select Open System Props to open the System
  workshop. Assign it an arrival rate of 1 (per second); this means that on
  average one request will enter the system per second.

  The average request will visit each of the two components once,
  experiencing a delay of 5 ms at each and 1 ms at the connector, so we
  expect an overall system response time, from the time the request enters
  the system to the time it leaves, of around 11 ms.

  Pull down the Tools menu and select Performance. This will output the
  design as ACME, run it through the performance analysis tool, and import
  the changed ACME as a new design. [Actually, right now it will probably
  get a weird error importing the ACME, which I need to fix tonight.] It
  will prompt you for a name for the new design. You can see the system
  response time (11.050251 ms) and the average number of outstanding
  requests (messages) in the system (0.011050) in the System workshop. The
  Server workshops also have some calculated properties. (A sanity check of
  these two numbers appears below.)

  [more...]
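  The reported 11.050251 ms and 0.011050 are consistent with treating each
  Server as an M/M/1 queue and applying Little's law. A quick check of the
  arithmetic for Walkthrough #1 under that assumption (the tool's actual
  model may differ):

      # Walkthrough #1, assuming each Server behaves as an M/M/1 queue.
      arrival_rate = 1.0 / 1000.0        # 1 request per second = 0.001 per ms
      service_time = 5.0                 # ms at each Server
      delay        = 1.0                 # ms at the Async connector

      rho = arrival_rate * service_time            # Server utilization, 0.005
      per_server = service_time / (1.0 - rho)      # waiting + service, ~5.0251 ms

      response = 2 * per_server + delay            # ~11.050251 ms
      outstanding = arrival_rate * response        # Little's law, ~0.011050

      print(response, outstanding)

  The extra ~0.05 ms over the naive 11 ms estimate is the small amount of
  time requests spend queued behind other requests at the two Servers.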