Homework 2: QA Techniques
Due Date: Tuesday, February 6th at 10:30am
From this assignment, students will learn:
- Common QA techniques
- How each QA technique fits into the software lifecycle
- What kind of issues each technique catches best, and the potential cost of those issues
- The cost of using the technique, both startup and maintenance costs
- The benefits gained from the technique, such as additional information about the issues found, the regressability of tests, or the time when the bugs are caught
You will study different QA techniques by testing the Othello game. You have been provided with the specification, source code, and some tests for this game. However, it has bugs! Be warned, bugs may be ANYWHERE in the project, including the spec and tests. Your task is to use the QA techniques to find these bugs. Some bugs may be found through multiple techniques. When a bug is discovered, add it to your bug table (described later on). After you find the bugs, you'll need to answer questions about the QA techniques used.
We have seeded approximately 25 bugs into the project. We have provided requirements to help you understand the goals of the program.
For this assignment, you will be allowed to work in teams of two. We recommend that you divide and conquer, since this is a larger assignment. In the text file questions.txt, which you will be including as part of your deliverables, you should list the members of your team.
Deliverables and Grading
You will turn in the following documents. Each is described in detail later in this document. All of these files should be turned in together in a file called [username]17654-A2.zip, where [username] should be replaced with one team member's Andrew ID. Submission for this assignment is through Blackboard's assignment facility. The Blackboard help pages provide further information on this feature. Do not use the Drop Box for submitting your assignment solution.
- 40 pts - questions.txt. Grading based upon the analysis of the QA techniques. This file should also list your team members.
- 20 pts - bugs.html. Grading based upon number of bugs found and analysis of which techniques found them.
- 10 pts - your automated system tests. Grading based upon how well the scripts test the system.
- 10 pts - manual.html. Grading based upon the readability and repeatability of the instructions and choice of priority and severity levels.
- 10 pts - JUnit tests. The tests that are given as part of the assignment are all located inside a source folder named "src-test" which is separate from the actual source of the game. You should put your tests here and include this entire folder with your deliverables. You must achieve greater than 80% coverage of the OthelloBoard, OthelloMove, and Options classes to get full credit. Please do not include binaries or the actual source of the game in your deliverables.
- 10 pts - review.txt. Grading based upon checklist and analysis of what should have been changed.
You will need the following to do this assignment:
- Install Eclipse 3.2. This version is required to ensure that the plug-ins work properly.
Download the Othello Eclipse project we have provided. This project can be
found on the "Assignments" section of the Blackboard course site.
To set up the project, do the following:
- Launch Eclipse
- In Eclipse, go to the "File" menu and select "Import."
- On the "Import" dialog box, under the "General" folder, select "Existing Projects into Workspace" and select "Next."
- On this page, make sure "Select root directory" is selected, go to "Browse" and navigate to the project folder that you downloaded. Select this folder (but do not open it) and hit "OK."
- Select "Finish." This should bring the entire project into your workspace.
- Install EclEmma, a test coverage tool.
- In order to do this, you should follow the instructions given on the EclEmma download site.
- It's a good idea to follow the "Update Site" instructions. They are much simpler than the "Manual Installation" instructions.
You will need to answer several questions about the QA techniques. Please answer the following questions for each of the four QA techniques:
- Overall Cost: What are the costs of this technique? Consider both sunk costs and recurring costs. Sunk costs are costs for purchasing tools and setting them up. Recurring costs are the cost of using the tool and maintaining any resources it needs. Remember, software development is a dynamic process. The code WILL change. Consider what costs will accrue from changes or to make the technique more resilient to change.
- Bugs Found: What classes of bugs does this technique find? What's the relative importance of classes of bugs? What is their relative expense if they leaked into production code? A security bug could be VERY expensive (your company can get sued!), but a minor UI flaw will only annoy customers.
- Bugs Missed: What classes of bugs did it miss? What caused it to miss these bugs?
- Technique Overlap: What techniques found similar classes of bugs? How are the techniques different from this one?
- Cost of Fix: What are the costs of fixing a bug once it is found with this technique? Consider how the developer will reproduce the bug, find the root cause of the bug, and then fix the issue.
- Lifecycle: How does this technique fit into the software lifecycle, and how does it work with other QA techniques? Be specific: the answer "during QA" will not count. Think about WHO might run this QA technique, WHEN they would do it, and WHAT tools/resources they will need.
- When to stop: How did you decide when you were done with the testing process? (Keep in mind that while the impending deadline may have had a lot to do with when or why you stopped testing, it is not usually a good idea to stop testing while a given technique is still regularly exposing bugs.)
We suggest writing your answers after you have finished the rest of the assignment, as many of the questions will have you compare techniques.
Think carefully about your answers. The hands-on part of this assignment is meant to give you familiarity with these techniques, but what you will take out of the assignment will come from your analysis of the techniques.
A note about grading: Keep each answer within a few sentences. Bulleted lists are appreciated where they make sense. Also, be sure to include the names of the members of your team in this file.
Turn in your answers in a file named questions.txt.
Please use the template bugs.html provided to create a bug list. This will contain all the bugs you found, regardless of how you found them. Notice that you may find some of these bugs through more than one technique. This file is included in the docs sub-directory of the project.
Turn in the file bugs.html.
Manual System Testing
The purpose of manual system testing is to use the system the way a user would. Start up Othello and make sure that all of the features work as the user expects. You've been lucky that someone created a requirements document for you! It is called requirements.html and is included in the "docs" directory of the project.
When you find a bug through manual system testing, you must be able to tell the developer what the problem is. For each bug you find using this technique, create a bug report. Every company has its own set of fields that must be in a bug report. We will use a common subset, as defined in the example below. We have provided the template for you in a file called manual.html, also included in the "docs" directory of the project. Please use this template.
Title: A brief description of the bug
Reproduction steps: List exactly what the developer must do to reproduce the bug.
Expected output: What the user expected to happen
Actual output: What actually happened
Priority: How important the bug is to functionality, on a scale from 1-5 (1 is high priority)
Severity: How severe the bug is with respect to what happened, on a scale from 1-5 (1 is very severe)
Priority, Severity, and Impact
Here is a brief explanation of priority and severity. Priority is how much a bug affects the customer. For example, if the main feature of the application isn't working, the bug has a high priority; if a rarely used feature isn't working, it has a low priority. Severity is how bad the bug itself is. If there is an obvious workaround for the user, the severity is low; if the application throws an exception and shuts down, the severity is high. It is possible to have a high-priority, low-severity bug or a low-priority, high-severity bug. To make matters more interesting, some companies have a third field, impact, that is a function of priority and severity. Impact is used to make final decisions about which bugs will get fixed and which will not. We will not use it here.
Here's a sample for a bug in a calculator:
Title: Multiplying 3 numbers does not work
Reproduction steps:
- Start application
- Click a number
- Click *
- Click another number
- Click *
- Click a third number
- Click =
Expected output: The multiplication of all three numbers
Actual output: The multiplication of the last two numbers only
The priority was 1 since this is a main feature of a calculator. The severity is 2 because users do have a workaround of multiplying separately, annoying as it is. A less annoying workaround would cause the severity to go down.
Turn in your list of manual bug reports in manual.html. Notice that bugs listed here must also be in bugs.html.
HINT: Some bugs are obvious errors in the feature. Others are subtle in that the user interface is misleading for the user. All of these are bugs!
HINT: Remember that different users will accomplish the same task in a different way. Don't go about your normal application usage. Test all possibilities!
Automated System Testing
The board game project comes with its own testing scaffold. We will use this to do automated system tests. The test scaffold is a command line application that will take a series of commands. We can compare the output to our expected output using a shell script. The existing system tests and the shell script are located in the test folder.
The Shell Script
To run the shell script, do the following:

Unix machines (at a terminal):

$ ./runAll.sh n

where n is the number of system tests created.

Windows machines (at the command line):
The shell script will check for differences between the actual and expected output. It ignores differences in whitespace. If they differ otherwise, it prints out a failure for that test. Otherwise, it prints out a pass for that test.
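The whitespace-insensitive comparison the script performs can be sketched in Java. This is our own illustration, not the project's actual script; the class and method names are invented for the example.

```java
// A sketch of the comparison the grading script performs: two outputs match
// if they are equal after collapsing every run of whitespace.
public class OutputCompare {

    // Trim, then collapse any whitespace run to a single space.
    static String normalize(String s) {
        return s.trim().replaceAll("\\s+", " ");
    }

    // A test passes when actual and expected output agree up to whitespace.
    static boolean sameIgnoringWhitespace(String actual, String expected) {
        return normalize(actual).equals(normalize(expected));
    }

    public static void main(String[] args) {
        System.out.println(sameIgnoringWhitespace("Applied move  d3\n", "Applied move d3")); // true
        System.out.println(sameIgnoringWhitespace("Undid move", "Applied move d3"));         // false
    }
}
```

The same idea is what `diff` with a whitespace-ignoring option gives you in a shell script: a pass when the normalized outputs agree, a failure otherwise.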
Creating System Tests
The system tests are each located in their own numbered directory. We have provided 2 system tests to start you off with. Each directory contains four files:
- in The input commands for the test
- correct The correct output
- out The output from the last time the test was run
- desc A plain text description of the test
The following commands are available for the system tests.

- Displays the board, with `.' for empty spaces. Also displays the turn on the last line.
- Displays a list of the available moves, with one move per line.
- Displays a list of the moves made. The moves will be displayed in pairs, according to turn.
- Takes a move as input and applies it to the board. The move is assumed to be valid. Displays "Applied move MOVE" when finished.
- Takes the level at which minimax should run. It runs the algorithm and applies the resulting move. Displays "Made move MOVE" when finished.
- Undoes the last move made. Displays "Undid move" when finished.
- Displays the value of the current board. The value of the board is how the minimax algorithm tells who is currently winning. If black is winning, the number is negative, and if white is winning, the number is positive. A value of 0 designates a tie. The bounds of this value are +/- MAX_INT.
- Displays "Black Wins" or "Tie" depending on the current status of the game.
- Displays the options for the board. The options will be in the same grid as the board, and they will display the weight given to each position. There is whitespace between each grid position so that the numbers can be differentiated from each other. These weights are used to calculate the current value of the board.
- setOptions CORNER EDGE NEAR CENTER: Sets the options for the four major sections of the board. Displays "Options Set" when complete.
- Takes the file name for where to save the board. Displays "Saved board" when finished.
- Reloads the board stored in FILENAME. Displays "Loaded board" when finished.
- Ends the script.
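The command descriptions above say the board's value comes from per-position weights, with negative values favoring black and positive values favoring white. A minimal sketch of that sign convention follows; the class name, method, and cell encoding (+1 white, -1 black, 0 empty) are our assumptions for illustration, not the project's actual API.

```java
public class BoardValueSketch {
    // Assumed cell encoding for this sketch: +1 = white, -1 = black, 0 = empty.
    // By the convention in the spec: value > 0 means white is ahead,
    // value < 0 means black is ahead, and 0 is a tie.
    static int value(int[][] cells, int[][] weights) {
        int total = 0;
        for (int r = 0; r < cells.length; r++)
            for (int c = 0; c < cells[r].length; c++)
                total += cells[r][c] * weights[r][c];
        return total;
    }

    public static void main(String[] args) {
        int[][] cells   = {{ 1, -1 }, { 0, 1 }};  // two white discs, one black
        int[][] weights = {{ 4,  4 }, { 1, 1 }};  // corner-like cells weighted higher
        System.out.println(value(cells, weights)); // 4 - 4 + 0 + 1 = 1, white slightly ahead
    }
}
```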
Turn in: The automated-tests folder containing all of your scripts.
HINT: As before, run the existing tests and check for unusual situations.
HINT: The purpose of system tests is to make sure all the parts work together as expected. Think about how to mix up commands in ways that would be difficult to test through the UI.
JUnit Testing
The unit tests should all go into the test package inside the "src-test" folder of the project. You'll see that there are already some existing tests. You will need to add more tests to the existing files and create new test files. When creating a unit test, target a single method. Read the documentation for that method. This documentation should reflect the purpose of the method, and acts as an informal 'contract.' Your unit tests should ensure that these methods are living up to their contracts!
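To illustrate testing a method against its documented contract, here is the shape such a test takes. The `clamp` method is invented for this sketch so it is self-contained, and we use a plain main with explicit checks; your real tests should be JUnit test methods targeting the project's actual classes (OthelloBoard, OthelloMove, Options).

```java
public class ContractTestSketch {
    // Hypothetical method under test. Its informal contract: return v
    // limited to the inclusive range [lo, hi].
    static int clamp(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Fail loudly when a contract clause is violated.
    static void check(boolean ok, String clause) {
        if (!ok) throw new AssertionError("contract violated: " + clause);
    }

    public static void main(String[] args) {
        // One check per clause of the contract, including the boundaries.
        check(clamp(5, 0, 10) == 5,   "in-range values pass through unchanged");
        check(clamp(-3, 0, 10) == 0,  "values below the range clamp to lo");
        check(clamp(99, 0, 10) == 10, "values above the range clamp to hi");
        check(clamp(10, 0, 10) == 10, "upper boundary is inclusive");
        System.out.println("all contract checks passed");
    }
}
```

The point is that each test case maps to one sentence of the method's documentation; when a case fails, the failure message tells you exactly which part of the contract is broken.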
Writing a JUnit test
The easiest way to make a JUnit test is via the wizard. (You want to make sure that all of your tests are going somewhere inside the "src-test" folder, so make sure this directory is selected before taking the next steps.)
- Once you know which class you want to test, go to the "File" menu, select "New" and then "JUnit Test Case."
- At this dialog, you should give an appropriate package and class name.
- You should also select the "Class under test" to be the class you want to test with this test case.
- Select "Next." This will take you to a list of all the methods in that class. You can select all of the methods that you want to test, and then select "Finish."
One unit testing convention is a 1:1 mapping from class under test to test class, and likewise from method under test to test method. However, with complex code it is easier to write multiple test cases for a single method and put the tests for that method in one test class. We have used the latter convention here. There are already some test classes for the methods OthelloBoard.applyMove(), Options(), and Options.clone().
Using JUnit and EclEmma
JUnit 4 will already be installed in Eclipse. You will need to install EclEmma for this part. Use the "Update Site" instructions for the easiest install.
EclEmma can be run on any Java application, so we could also do coverage testing of our functional and system tests. For this assignment, we will only require you to check coverage of the unit tests.
To run EclEmma, go to "Run | Coverage As | JUnit Test". (If this menu item does not appear, try highlighting the project in the Package Explorer.) This will open a new output window called "Coverage." If you click the down arrow in the right corner of the window (third icon from the right), you will get options to change the coverage counters. The default is instruction counters; for this assignment, we will use line counters. After running EclEmma, the code will be highlighted in green, yellow, and red. Green lines were fully covered, yellow lines were only partly covered (for example, only one branch of a condition was taken), and red lines were never run.
There is also a window called "JUnit." The Failures tab will show only the failures and will give you the trace of how they failed. The Hierarchy tab will show all of the JUnit tests.
We expect your unit tests to have 80% line coverage on the OthelloBoard, OthelloMove, and Options classes. Your unit tests will be run to check this.
Turn in the entire "src-test" directory. All of your tests must compile.
HINT: You might try running the existing tests and see what happens...
HINT: Also notice that the existing tests have a lot of missing coverage. Use this to guide further tests.
Code Review
Many companies that do code reviews will have a checklist for what to look for. For this assignment, you will also create a checklist before doing the code review. While you do the code review, you may find that your checklist was incomplete. Keep track of the changes, additions, or deletions that you would make to your checklist.
There are bugs all over the project. We suggest reviewing everything to get experience with this technique. In practice, you may code review all code or only pick key components of your project.
Here are some general guidelines for a code review:
- Do not limit yourself to only investigating those things on your checklist, but use it as a baseline.
- When doing a code review, don't just read the code. Think about what you could send to the code to make it break.
- Dirty, poorly styled code tends to have bugs. Sometimes it helps to fix minor style issues (like white-spacing) in order to read the code.
- Complex code tends to have bugs. If a piece of code is frightening, don't avoid it. It's likely to have some good bugs in it. Make sure you understand what it's trying to do.
- Think about issues like concurrency and security. Are there race conditions? Can a hacker get into this system?
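To make the race-condition guideline concrete, here is a minimal sketch (not from the Othello code) of the kind of unsynchronized shared update a review should flag, alongside an AtomicInteger fix:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceSketch {
    static int unsafeCount = 0;                           // shared, unsynchronized
    static AtomicInteger safeCount = new AtomicInteger(); // atomic alternative

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                // read-modify-write: interleaving threads can lose updates
                safeCount.incrementAndGet();  // atomic: never loses an update
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // unsafeCount is often less than 200000; safeCount is always exactly 200000.
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount.get());
    }
}
```

In a review, the tell-tale sign is any `x++`, check-then-act, or lazy initialization on shared state that is not protected by a lock, `synchronized` block, or atomic type.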
Turn in: The file review.txt that contains your checklist and your analysis of what you would have changed on it.
HINT: Bugs don't exist only in the project code! You might check other artifacts (specifications, code documentation, tests) as well...
HINT: There is a race condition. We leave it up to you to find it.
HINT: We promise that minimax algorithm is correct by itself. However, there are a couple of bugs in the general vicinity.