A large software corporation generates so much change in its software code that only an automated system can thoroughly test each changed version. However, evaluating the test results still requires human effort and cannot be automated, or only at a basic level. Over the years of test evaluation, the test analysts have collected a large amount of information, including short descriptions of the test environment, test results, messages, and more. These collected data may help categorize the results and thus support the analysts' work during the evaluation process.
The goal is to develop a system capable of gathering both the old and the new data, including any necessary information about the environment, descriptions, results, and anything else that may help to properly categorize the test cases. The system should be able to provide a prediction or estimate of the category of a new, currently uncategorized test case. As the automated test system runs the tests daily, the implemented system has to be prepared to update its database on a daily basis. Finally, to support the test analysts' work, the test evaluation system should provide a quick and user-friendly interface for examining the predictions, together with detailed information about the accuracy of the estimates.
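The category prediction described above can be sketched as a simple text classification task. The following is a minimal illustration, not the actual system: the history data, category names, and scoring scheme are all hypothetical, and a real implementation would likely use a proper machine learning library and far richer features (environment descriptions, result codes, etc.).

```python
from collections import Counter, defaultdict

# Hypothetical labeled history: (test log message, category assigned by an analyst).
HISTORY = [
    ("connection timeout while opening socket", "environment"),
    ("assertion failed expected 4 got 5", "product_bug"),
    ("disk full during test setup", "environment"),
    ("null pointer dereference in module", "product_bug"),
]

def train(history):
    """Build a bag-of-words model: word frequency per category."""
    counts = defaultdict(Counter)
    for text, label in history:
        counts[label].update(text.lower().split())
    return counts

def predict(model, text):
    """Score each category by overlapping word counts; return the best match."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(HISTORY)
print(predict(model, "socket timeout in test setup"))  # → environment
```

Because the model is retrained from stored history, a daily database update simply means appending the newest analyst-confirmed results to the history before the next training run; prediction accuracy can then be measured by comparing predictions against the analysts' final categories.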