Course Assignment 2

Assignment Objectives

The objectives of this assignment are three-fold. First, you will get familiar with search-based testing techniques through Evosuite, a state-of-the-art testing tool. You are required to use Evosuite to improve the test suite that you constructed in assignment 1; the newly constructed test cases should improve test coverage for effective fault detection. Second, you will implement a fault localization tool on top of Soot. Your tool should be able to locate our injected faults so that you can eventually fix them. Third, you will select test cases to improve the ranking produced by fault localization.

Assignment Material
Program under Test (Subject Program)

Please download the Java program for this assignment here. It is the same as that used in assignment 1. The program contains injected faults.

Assignment Tasks
Please complete the following tasks using Evosuite and Soot.

Task 1: Test Case Enhancement with Evosuite (25%)

In this task, please use Evosuite (quick start) to construct a test suite (in JUnit 4 style) that achieves higher test coverage than the one from assignment 1. Your constructed test suite should achieve high code coverage (e.g., over 75% line coverage and 100% public method coverage). You may try different Evosuite parameters to improve coverage (see the example command below). Your mark on this part depends on the coverage of your test suite.

Submissions: Five test suites generated by you, i.e., the folders testUtils5, testUtils6, testUtils7, testUtils8, and testUtils9. For each suite, please submit a screenshot showing the line coverage and a screenshot showing the branch coverage; in total, you need to submit a folder containing 10 screenshots. Each screenshot should be properly named so that it is clear whether it shows statement coverage or branch coverage. The submission should also include a readme file that records the commands (including the parameters) you used to generate each test suite.
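For reference, an Evosuite invocation following the quick-start guide looks roughly like the sketch below. The class name, classpath, coverage criteria, and search budget are placeholders/assumptions that you will need to adapt to the subject program; please double-check the option names and values against the Evosuite documentation.

    # Example only: adjust the class name, classpath, criteria, and budget to your setup.
    java -jar evosuite.jar -class org.example.Utils -projectCP bin \
         -criterion LINE:BRANCH -Dsearch_budget=120

Evosuite typically writes the generated JUnit tests to an evosuite-tests directory, which you can then copy into the required testUtils[5-9] folders; raising the search budget and combining coverage criteria are the usual knobs for pushing coverage higher.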
Grading Scheme:
1. Test suites and readme files (5%): each test suite accounts for 1% if it can be successfully executed.
2. Statement coverage (10%): score = (statement coverage of your test suite / highest statement coverage achieved by your classmates) * 10%.
3. Branch coverage (10%): score = (branch coverage of your test suite / highest branch coverage achieved by your classmates) * 10%.

Task 2: Fault Localization based on Soot (50%)

In this task, you need to design and implement an effective fault localization algorithm by yourself. You can leverage the Soot instrumenter that you implemented in assignment 1 to instrument our subject program and collect the statements executed during each test run. Based on this execution information, you can implement the classic fault localization algorithm Ochiai to calculate the fault likelihood of each statement and generate a fault localization report. This research paper describes the Ochiai algorithm and points you to the original papers on the algorithm.

Your program should output a spectrum report of potentially faulty statements. Each spectrum report should be in CSV format, and each line should follow the format "method signature,statement,suspicious score,ranking". The report should be sorted in descending order of suspicious scores. If multiple statements have the same score, please sort them alphabetically by method signature and statement. The method signature can be obtained using the Soot API getSignature(). The ranking of a statement with suspicious score a should be computed as (N+M+1)/2, where N is the number of statements whose suspicious scores are strictly higher than a and M is the number of statements whose suspicious scores are higher than or equal to a. For example, if a sequence of suspicious scores is (0.9, 0.8, 0.8, 0.7), their rankings are (1, 2.5, 2.5, 4), respectively.
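The following is a minimal sketch of the Ochiai scoring, the tie-aware ranking, and the CSV output described above. It assumes that your assignment 1 instrumenter already records, for each test, the set of executed statements keyed as "methodSignature,statement" (the signature part being what Soot's getSignature() returns); the class, method, and field names here are illustrative only, not a required design.

    import java.io.PrintWriter;
    import java.util.*;

    // Illustrative sketch only: the input layout and names are assumptions, not a required design.
    public class OchiaiReport {

        // One row of the spectrum report: key = "methodSignature,statement".
        static class Row {
            String key; double score; double rank;
            Row(String key, double score) { this.key = key; this.score = score; }
        }

        /**
         * coveragePerTest maps each test name to the set of statement keys it executed;
         * failingTests contains the names of the failing tests.
         */
        public static List<Row> ochiai(Map<String, Set<String>> coveragePerTest,
                                       Set<String> failingTests) {
            Map<String, Integer> failedCov = new HashMap<>(); // failing tests covering s
            Map<String, Integer> passedCov = new HashMap<>(); // passing tests covering s
            int totalFailed = 0;
            for (Map.Entry<String, Set<String>> e : coveragePerTest.entrySet()) {
                boolean failed = failingTests.contains(e.getKey());
                if (failed) totalFailed++;
                for (String stmt : e.getValue()) {
                    (failed ? failedCov : passedCov).merge(stmt, 1, Integer::sum);
                }
            }

            List<Row> rows = new ArrayList<>();
            Set<String> allStmts = new HashSet<>(failedCov.keySet());
            allStmts.addAll(passedCov.keySet());
            for (String stmt : allStmts) {
                int ef = failedCov.getOrDefault(stmt, 0);
                int ep = passedCov.getOrDefault(stmt, 0);
                double denom = Math.sqrt((double) totalFailed * (ef + ep));
                double score = denom == 0 ? 0.0 : ef / denom;   // Ochiai suspiciousness
                rows.add(new Row(stmt, score));
            }

            // Descending score, then alphabetical by "methodSignature,statement".
            rows.sort(Comparator.comparingDouble((Row r) -> r.score)
                                .reversed()
                                .thenComparing(r -> r.key));

            // Ranking = (N + M + 1) / 2: N statements strictly above, M at or above this score.
            for (Row r : rows) {
                long n = rows.stream().filter(o -> o.score > r.score).count();
                long m = rows.stream().filter(o -> o.score >= r.score).count();
                r.rank = (n + m + 1) / 2.0;
            }
            return rows;
        }

        /** Writes the report in the required "method signature,statement,suspicious score,ranking" format. */
        public static void write(List<Row> rows, String path) throws Exception {
            try (PrintWriter out = new PrintWriter(path)) {
                for (Row r : rows) out.println(r.key + "," + r.score + "," + r.rank);
            }
        }
    }

The quadratic ranking loop is fine at the scale of this subject program; sorting once and walking over groups of tied scores would be the faster alternative.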
Based on the report, you can check the source code and locate the faults. Multiple bugs are injected in the subject program, and each of them lies on a single line. You need to locate and fix as many bugs as you can. To help you with this task, we provide three test suites (available here), namely testUtils10, testUtils11, and testUtils12. The program will fail on some tests in these suites. Note that it is impossible to locate faults without failing tests. You need to use these test suites for fault localization.

Submissions:
1. The source code of your program, which should be in the folder comp5111asg02/src/faultlocate/. You can reuse your code from assignment 1. Scripts to run your program and a readme are required.
2. The spectrum reports of potentially faulty statements obtained by running against the 3 test suites provided by us. In total, you need to submit 3 reports. Please name each of them in the format "spectrum_ochiai_test[10-12].csv".
3. The reports of the faults found and fixed by you. For each fault, you should create a .txt file named in the format "fault_[line-no].txt", where [line-no] is the line number of the fault. In each text file, please provide: 1) its location in the source code; 2) the corresponding fixing patch; 3) an explanation of why it is a fault; and 4) its suspicious score and ranking in the reports generated by you.

Grading Scheme:
1. Correctness of your program (30%): we will check your implementation and the reports generated by your program.
2. Fault localization (20%): if you successfully locate and fix M of our N injected faults, your mark will be (M/N) * 20%.

Task 3: Test Case Selection (25%)

You may find that some highly suspicious statements reported by your program are not real faults. In such circumstances, you may consider refining the provided test suites (e.g., by removing irrelevant or redundant test cases). Please discuss your refinement approach in the assignment report. Note: you are recommended to write code to implement your selection approach; one possible starting point is sketched after the grading scheme below.

Submissions:
1. The three refined test suites (as Java files) and a readme explaining how you selected them.
2. The spectrum reports of potentially faulty statements obtained by running against the refined test suites. In total, you need to submit three reports. Please name each of them in the format "spectrum_ochiai_selected_test[10-12].csv".
3. A short report explaining how you refined the test cases and comparing the results before and after refinement.

Grading Scheme: Effectiveness of test cases (25%). An effective test case requires a good oracle to judge whether the test passes or fails. For each of the N seeded faulty statements, your score will depend on the ranking of the faulty statement (the better the ranking, the higher the score):
1. If it is ranked highest, your score = (25 / N) * 100%.
2. Else, if it is ranked in the top 5 faulty statements, your score = (25 / N) * 90%.
3. Else, if it is ranked in the top 10 faulty statements, your score = (25 / N) * 80%.
4. Else, if it is ranked in the top 20 faulty statements, your score = (25 / N) * 60%.
5. Else, if it is ranked in the top 50 faulty statements, your score = (25 / N) * 40%.
6. Else, your score = (25 / N) * 30%.

NOTE: We will run your refined test suites using our own fault localization tool, so your score on this task will not be affected by the correctness of your Task 2 implementation. Further, the ranking will be the average ranking over the 3 test suites.
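As one illustration of a selection approach (a sketch under the same data-layout assumptions as the Ochiai example above, not the required method), the code below keeps every failing test and keeps a passing test only if it covers at least one statement not yet covered by the tests selected so far.

    import java.util.*;

    // Illustrative greedy redundancy filter; the coveragePerTest/failingTests layout is an
    // assumption carried over from the Ochiai sketch, not something mandated by the assignment.
    public class TestSelector {

        /** Returns the names of the tests to keep in the refined suite. */
        public static Set<String> select(Map<String, Set<String>> coveragePerTest,
                                         Set<String> failingTests) {
            Set<String> selected = new LinkedHashSet<>();
            Set<String> covered = new HashSet<>();

            // Failing tests are essential: without them every Ochiai score is zero.
            for (String test : failingTests) {
                selected.add(test);
                covered.addAll(coveragePerTest.getOrDefault(test, Collections.emptySet()));
            }

            // Visit larger passing tests first so that small redundant ones get dropped.
            coveragePerTest.entrySet().stream()
                    .filter(e -> !failingTests.contains(e.getKey()))
                    .sorted((a, b) -> b.getValue().size() - a.getValue().size())
                    .forEach(e -> {
                        if (!covered.containsAll(e.getValue())) {
                            selected.add(e.getKey());
                            covered.addAll(e.getValue());
                        }
                    });
            return selected;
        }
    }

Whether such a filter actually improves the rankings depends on which passing tests it drops, so regenerate the spectrum reports on the refined suites and compare them against the originals, as the short report for this task requires.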
Bonus Task (10%)

Besides Ochiai, there are other fault localization algorithms. This research paper also covers two of them: Tarantula and Dstar (their commonly cited formulas are sketched below). You are encouraged to implement them as well.

Submissions:
1. The source code of your implementation.
2. The spectrum reports of potentially faulty statements obtained by running against the selected test suites. Since there are 2 algorithms, you are required to submit 6 reports in total (3 test suites * 2 algorithms). Please name each of them in the format "spectrum_[algorithmname]_selected_test[10-12].csv".

Grading Scheme: Correctness of each algorithm (10%): we will check your implementation and the reports generated by your program. Each algorithm accounts for 5%.
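For reference, using ef and ep for the numbers of failing and passing tests that cover a statement, and tf and tp for the total numbers of failing and passing tests, the commonly cited forms of the two bonus metrics are sketched below; please verify them against the referenced paper before implementing, and note that the edge-case handling here is an assumption.

    // Commonly cited formulas (verify against the referenced paper):
    //   Tarantula = (ef/tf) / (ef/tf + ep/tp)
    //   Dstar     = ef^star / (ep + (tf - ef)), with star usually set to 2
    public class BonusMetrics {

        static double tarantula(int ef, int ep, int tf, int tp) {
            if (tf == 0 || ef == 0) return 0.0;            // never covered by a failing test
            double failRatio = (double) ef / tf;
            double passRatio = tp == 0 ? 0.0 : (double) ep / tp;
            return failRatio / (failRatio + passRatio);
        }

        static double dstar(int ef, int ep, int tf, double star) {
            double denominator = ep + (tf - ef);
            // A zero denominator means the statement is covered by all failing tests and by no
            // passing test; it is conventionally given the highest possible score.
            return denominator == 0 ? Double.MAX_VALUE : Math.pow(ef, star) / denominator;
        }
    }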
Submission Requirements

You are required to submit the following to Canvas by 23:55, 28th Apr (to ease grading, please properly name all submitted files and folders). Your submission should use the following folder structure and contain the corresponding content:
1. (Task 1 submission 1) Put the test cases generated by you and their readme file into comp5111asg02/testUtils[5-9].
2. (Task 1 submissions 2 & 3) Put your screenshots into comp5111asg02/screenshot.
3. (Task 2 submission 1) Put your code into comp5111asg02/src/faultlocate.
4. (Task 2 submission 1) Put your running scripts and readme under comp5111asg02.
5. (Task 2 submission 2) Put your fault localization spectrum reports using the provided tests (Task 2) under comp5111asg02/spectrum_test.
6. (Task 2 submission 3) Put the reports of the faults found and fixed by you under comp5111asg02/fault.
7. (Task 3 submission 1) Put your selected test cases into comp5111asg02/selected_test.
8. (Task 3 submission 2) Put your fault localization spectrum reports using the selected tests (Task 3) under comp5111asg02/spectrum_selected_test.
9. (Task 3 submission 3) Put your short report for Task 3 at comp5111asg02/report.pdf.
10. (Bonus Task submissions 1 & 2) Put your source code under comp5111asg02/src/faultlocate and your spectrum reports under comp5111asg02/spectrum_test.
