

Introduction to Unit Testing in Java

Author: 禅与计算机程序设计艺术

1. Introduction

Unit testing is an indispensable part of the software development life cycle. A unit test is a modular piece of testing work whose goal is to verify that a particular function, module, or class behaves as designed. By running and checking independent test cases against the code, it uncovers defects and reports them to the people concerned. Unit testing involves a few basic concepts, such as the test case, the test plan, and the test environment; these concepts and terms are briefly introduced below.
1. Test Case
A test case usually targets one specific feature or scenario. It defines the inputs, outputs, expected results, and expected behavior of the object under test, and it is the smallest unit of testing.
There are two kinds of test cases: unit test cases and integration test cases. A unit test case focuses on a single independent function, module, or class, whereas an integration test case is more concerned with the interaction among multiple modules or classes and aims to uncover issues of overall system stability, compatibility, and reliability.
2. Test Plan
A test plan is a document written by testers that describes how and when the test cases will be executed, which methods and tools will be used, the priority of each test case, and similar information. It should spell out the details of every test case and also account for the relationships between test cases at different levels.
3. Test Environment
The test environment typically covers the hardware and software configuration and the execution environment for the test cases, for example a virtual machine with all the components that the system under test depends on installed.
4. Test Data
Test data can be text, images, video, audio, or records in a database, as well as input data in many other forms; it is not limited to these.
5. Test Report
A test report summarizes the results of the testing process. It mainly gives an overview of the results: which test cases passed, which failed, how long the run took, and so on. It can also list detailed information such as the name of each test case, its result, the cause of any error, and screenshots.
6. Test Coverage
Test coverage is an important measure of software quality; it is the percentage of statements in the software that have been exercised by tests out of the total number of statements. For example, if the tests execute 450 of 600 statements, coverage is 75%. Coverage directly affects software quality: if coverage is very low, many defects may remain hidden; on the other hand, chasing excessively high coverage can exhaust developers and bog the project down.
The following sections explain unit testing along the lines of the topics above.

2. Basic Concepts and Terms

2.1 What is a Test?

A test is an automated procedure that verifies whether the software component or module being tested meets its specified requirements. It involves performing certain actions on the system under test (SUT) to verify that it performs as expected, and identifying any discrepancies between actual results and expected outcomes. The purpose of testing is to ensure the reliability and functionality of the SUT during the development process. In addition, unit tests can also be used for regression testing and debugging. Here are some key points about what makes up a test:

  • Input/Output Validation: Feed input data into the SUT and compare its output with predefined expected results.
  • Edge Case Handling: If the SUT has boundary conditions or edge cases, they should be taken care of when writing test cases.
  • Error Injection: Tests should be designed to inject errors in order to assess the robustness of the code. Random error-injection techniques such as fuzz testing and fault injection can be used here.
  • Configuration Settings: Different configuration settings for the SUT need to be considered when designing test cases, for example how the system behaves when given specific characters, invalid inputs, or different languages.
  • Reproducibility: Tests must be written so that they can be executed repeatedly with consistent results, which helps avoid regressions.
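To make the first two points concrete, here is a minimal JUnit 4 sketch. The `reverse` helper is a hypothetical SUT invented for illustration, not something defined in this article:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class StringReverseTest {

    // Hypothetical SUT: returns the input string with its characters reversed.
    static String reverse(String input) {
        return new StringBuilder(input).reverse().toString();
    }

    @Test
    public void reversesTypicalInput() {
        // Input/output validation: compare the actual output with the expected result.
        assertEquals("cba", reverse("abc"));
    }

    @Test
    public void handlesEmptyStringEdgeCase() {
        // Edge case handling: the empty string should reverse to the empty string.
        assertEquals("", reverse(""));
    }
}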

2.2 Types of Tests

There are two types of tests:

Functional Testing

Functional testing is a type of black-box testing in which we only check whether the program behaves correctly according to the specifications provided by the client. These tests require a high-level understanding of the problem domain; they may not cover every possible scenario, but instead focus on the critical functionality of the application.
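As an illustration, the hedged sketch below tests a hypothetical discount rule purely against its (assumed) specification, without any knowledge of the implementation's internals:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountFunctionalTest {

    // Hypothetical implementation of the (assumed) specification:
    // "Orders over 100.00 receive a 10% discount."
    static double finalPrice(double orderTotal) {
        return orderTotal > 100.0 ? orderTotal * 0.9 : orderTotal;
    }

    @Test
    public void appliesDiscountAboveThreshold() {
        // Derived purely from the specification, not from the code's internals.
        assertEquals(135.0, finalPrice(150.0), 1e-9);
    }

    @Test
    public void noDiscountAtOrBelowThreshold() {
        assertEquals(100.0, finalPrice(100.0), 1e-9);
    }
}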

Nonfunctional Testing

Nonfunctional testing refers to various testing methods which include performance testing, security testing, usability testing, and load testing. These tests evaluate the quality of the product based on various non-functional factors such as response time, scalability, availability, stability, and fault tolerance. These tests provide more insights into the overall quality and usability of the product beyond functional requirements.
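As one small, hedged example of a nonfunctional check, a JUnit 4 timeout can act as a crude response-time guard; the `handleRequest` method here is a hypothetical stand-in for a real service call:

import org.junit.Test;

public class ResponseTimeTest {

    // Hypothetical operation standing in for a real service call.
    static void handleRequest() {
        // ... work whose duration is being checked ...
    }

    // A crude performance guard: the test fails if handleRequest()
    // does not complete within 200 milliseconds.
    @Test(timeout = 200)
    public void respondsWithinTwoHundredMilliseconds() {
        handleRequest();
    }
}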

2.3 Why Write Tests?

Writing tests provides several benefits, including faster feedback loops during development, increased confidence in changes, reduced risk, improved maintainability, and earlier detection of bugs before they impact end users. Writing good tests requires a deep understanding of both the code being tested and the problem at hand, so well-defined requirements, clear expectations, and proper documentation of your test plans are essential for effective testing practices.

2.4 How do I write Good Tests?

To write good tests, you have to consider several principles and guidelines:

  1. Understandable: Make sure your tests are easy to understand so that other developers can easily follow along and run them locally or on their build server.
  2. Repeatable: Your tests should be repeatable since they need to pass every time after making code changes. You want to make sure there are no flakiness issues that cause intermittent failures.
  3. Automated: Use automation frameworks like Selenium WebDriver or JUnit to minimize manual steps and speed up the testing process.
  4. Thorough: Run tests against a wide range of inputs, outputs and scenarios to find corner cases, unexpected behaviors, and potential vulnerabilities.
  5. Specific: Focus on creating small, targeted tests that exercise individual functions, classes or modules. This way, each bug or failure can be identified quickly and fixed efficiently.
  6. Timely: Prioritize fixing the most important bugs first to reduce the chances of introducing new ones later.
  7. Documented: Keep track of any assumptions made while writing tests and update them regularly as the codebase evolves.
  8. Positive: Avoid negative or trivial language in your test descriptions, comments, and names. It’s better to use descriptive words instead of slang terms to communicate intent.
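Here is a short sketch of what principles 1, 2, and 5 can look like in practice; the `totalItems` helper is a hypothetical SUT used only for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ShoppingCartTest {

    // Hypothetical SUT kept local so the test needs no external state
    // and therefore stays repeatable.
    static int totalItems(int[] quantities) {
        int sum = 0;
        for (int quantity : quantities) {
            sum += quantity;
        }
        return sum;
    }

    @Test
    public void totalItemsSumsAllQuantities() {
        // Small, targeted, and named after the exact behavior it checks.
        assertEquals(6, totalItems(new int[] {1, 2, 3}));
    }
}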

2.5 How Can I Measure Code Quality using Unit Tests?

One way to measure code quality through unit tests is "code coverage". Code coverage measures the percentage of lines of code that have been executed during testing; the higher the coverage, the greater our confidence in the correctness of the code base. Tools like JaCoCo, OpenCover, and Clover can generate reports showing the percentage of code covered by our tests. We can set thresholds for the minimum acceptable coverage level and fail builds that fall below it, and we can integrate coverage analysis into our continuous integration pipeline to monitor code quality over time.
Another way to measure code quality is "mutation testing". A mutation tool generates mutants (modified versions of the original code) by systematically changing statements, expressions, variables, and parameters, and then runs the existing test suite against each mutant. If the tests still pass against a mutant, that mutant "survives", which reveals areas of the code whose behavior is not actually pinned down by the tests. Tools like PIT (pitest) automate this process and produce useful reports on the effectiveness and quality of our tests.
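To make the idea concrete, here is a hedged sketch (the class and values are illustrative, not taken from any real project): a mutation tool such as PIT might change `>` to `>=` in the condition below, and the boundary-value test "kills" that mutant because its outcome differs between the original and the mutated code:

import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class EligibilityTest {

    // Original logic: eligible only when the score is strictly greater than 80.
    // A typical mutant would change '>' to '>='.
    static boolean isEligible(int score) {
        return score > 80;
    }

    @Test
    public void boundaryScoreIsNotEligible() {
        // Kills the '>=' mutant: the original code returns false at exactly 80,
        // while the mutated code would return true and fail this assertion.
        assertFalse(isEligible(80));
    }
}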

3. Algorithmic Principles and Operations

3.1 Mock Objects

Mock objects are pre-programmed objects that simulate the behavior of real objects in controlled ways. They’re typically used in situations where we don’t have access to the production implementation of the object, but we need to test its interaction with another part of the system.
Typical uses of mock objects include:

  • Simulating database queries
  • Preventing network requests from actually hitting external resources
  • Faking out expensive services like email-sending APIs

The simplest form of mocking creates an instance of the class being mocked, replaces its dependencies with fake instances, and calls the desired method. However, this approach becomes tedious and hard to scale when dealing with complex dependency graphs. Luckily, there are more advanced frameworks available, such as Mockito, JMock, EasyMock, etc. Each one implements a slightly different syntax for creating mocks and setting expectations. In general, however, using mock objects allows you to isolate your code from third-party libraries and improve your ability to reason about your own logic.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

interface Car {
    void drive();
    void honk();
}

// A hand-written mock that records how it was used.
class MockCar implements Car {
    boolean driven;
    boolean honking;

    @Override
    public void drive() {
        driven = true;
    }

    @Override
    public void honk() {
        assertFalse("Already honking!", honking);
        honking = true;
    }
}

// Example usage of a simple mock object
public class CarTest {
    @Test
    public void testDriveAndHonk() throws Exception {
        MockCar car = new MockCar();
        car.drive();
        assertTrue("drive() should have been recorded", car.driven);
        car.honk();
        assertTrue("honk() should have been recorded", car.honking);
    }
}

3.2 Dependency Injection

Dependency injection (DI) is a technique whereby objects define their dependencies independently of any concrete implementations or instances of those dependencies. Rather than relying on default constructors or static factory methods, objects receive references to their dependencies via constructor arguments or setter methods. DI promotes loose coupling and separation of concerns by allowing components to be reused in different contexts and composed together in flexible ways.
Using DI can simplify testing by decoupling your code from dependencies and enabling easier swapping of dependencies with stubs or fakes during testing. By following standard patterns like Dagger or Guice for managing your DI configuration, you can create modular, extensible, and testable systems.

public class UserService {
    private final UserRepository userRepository;

    @Inject
    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    public List<User> getAllUsers() {
        return userRepository.getAll();
    }
}

@Singleton
public class UserRepository {
    public List<User> getAll() { ... }
}

// Example usage of dependency injection
@RunWith(MockitoJUnitRunner.class)
public class UserServiceTest {
    @Mock
    UserRepository userRepository;

    @InjectMocks
    UserService userService;

    @Before
    public void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void testGetAllUsers() throws Exception {
        List<User> users = Arrays.asList(new User(), new User());
        when(userRepository.getAll()).thenReturn(users);
        assertEquals(users, userService.getAllUsers());
    }
}

3.3 Integration Testing

Integration testing involves combining multiple modules or subsystems of a software application to validate their interactions and collaborations. It usually involves simulating a complete environment where multiple modules work together, communicating with each other asynchronously and exchanging messages. The objective is to determine if the integrated software component works as expected in practice.
Integration tests rely heavily on collaboration among multiple modules and are therefore often slow and cumbersome to execute. To shorten execution times, many companies choose to break down larger integrations into smaller units that can be tested independently. This leads to the creation of so-called contract tests, which specify the external interfaces of modules and establish formal contracts between them. Contract tests serve as a basis for defining the behavior of the whole system.
Integration testing has several advantages:

  1. Verifying the full stack: Integrated software components are much harder to debug compared to isolated units because of the complexity introduced by communication across multiple layers. Therefore, integration testing ensures that the entire integrated system operates smoothly.
  2. Improving code cohesion: Integration testing reveals architectural weaknesses, such as tight couplings between modules, that could otherwise go unnoticed until runtime.
  3. Validating cross-cutting concerns: Cross-cutting concerns are aspects of the system that affect multiple parts of the architecture, such as logging, transactions, security, caching, and monitoring. Integration testing helps to detect and address cross-cutting concerns that would otherwise go untested.
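The hedged sketch below shows the flavor of an integration test: two hypothetical modules are wired together with their real implementations (no mocks), so the test exercises their actual collaboration:

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class GreetingIntegrationTest {

    // Two hypothetical modules wired together with their real implementations.
    static class NameRepository {
        private final Map<Long, String> names = new HashMap<>();
        NameRepository() { names.put(1L, "Alice"); }
        String findName(long id) { return names.get(id); }
    }

    static class GreetingService {
        private final NameRepository repository;
        GreetingService(NameRepository repository) { this.repository = repository; }
        String greet(long id) { return "Hello, " + repository.findName(id) + "!"; }
    }

    @Test
    public void serviceAndRepositoryWorkTogether() {
        // No mocks: the service talks to the real repository.
        GreetingService service = new GreetingService(new NameRepository());
        assertEquals("Hello, Alice!", service.greet(1L));
    }
}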

3.4 End-to-End Testing

End-to-end (e2e) testing is a category of software testing where the software is fully integrated with its environment and external systems. The primary objective of e2e testing is to verify if the entire software system behaves as intended from start to finish, including all required components, processes, and workflows.
E2E tests involve building a simulated environment consisting of all relevant components and infrastructure, and executing the test scenario manually or automatically. The resulting logs, metrics, and traces can be analyzed to identify and diagnose any problems that arise. Although e2e testing takes longer to execute than traditional unit or integration tests, it offers the greatest degree of confidence in the completeness and accuracy of the entire system.
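As a hedged sketch, an e2e-style check might drive a fully deployed system over HTTP; the base URL and the `/health` endpoint below are assumptions made for illustration (Java 11+ `HttpClient`):

import static org.junit.Assert.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class HealthCheckE2ETest {

    // Hypothetical base URL of the fully deployed system under test.
    private static final String BASE_URL = "http://localhost:8080";

    @Test
    public void healthEndpointReportsOk() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/health"))
                .GET()
                .build();

        // Drives the real, running system end to end over HTTP.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}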

4. Code Examples and Explanations

Now let’s look at some examples to explain the concept further.

Example 1: HelloWorld Program With Simple Unit Tests

  1. Create a basic java project in Eclipse IDE.
  2. Copy the below code into a file named “HelloWorld.java”.
package com.example;

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
  3. Right-click on the file and select New -> JUnit Test Case class.
  4. Add an assert statement inside the test method to verify the printed message.
package com.example;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

import org.junit.Assert;
import org.junit.Test;

public class HelloWorldTest {
    @Test
    public void printMessage() {
        // Capture standard output so the printed message can be verified.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.setOut(new PrintStream(out));
        HelloWorld.main(new String[0]);
        Assert.assertEquals("Hello World", out.toString().trim());
    }
}
  5. Compile and run the test suite to see the result in the Console Output window.
  6. Output: “Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec” means all the assertions passed successfully.

Example 2: Calculator Program With Complex Unit Tests

  1. Create a basic java project in Eclipse IDE.
  2. Copy the below code into a file named “Calculator.java”.
package com.example;

public class Calculator {
    public int add(int num1, int num2) {
        return num1 + num2;
    }

    public int subtract(int num1, int num2) {
        return num1 - num2;
    }

    public double divide(double dividend, double divisor) {
        if (divisor == 0) {
            throw new IllegalArgumentException("Division by zero exception!");
        }
        return dividend / divisor;
    }
}
  3. Right-click on the file and select New -> JUnit Test Case class.
  4. Add the sample test cases below to cover the implemented operations.
package com.example;

import org.junit.Assert;
import org.junit.Test;

public class CalculatorTest {

    @Test
    public void testAddMethod() {
        Calculator calculator = new Calculator();
        int result = calculator.add(5, 10);
        Assert.assertEquals(15, result);
    }

    @Test
    public void testSubtractMethod() {
        Calculator calculator = new Calculator();
        int result = calculator.subtract(10, 5);
        Assert.assertEquals(5, result);
    }

    @Test(expected = IllegalArgumentException.class)
    public void testDivideByZeroException() {
        Calculator calculator = new Calculator();
        calculator.divide(10, 0);
    }
}
  5. Update the test case annotations according to the requirement.
  6. Compile and run the test suite to see the result in the Console Output window.
  7. Output: “Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec” means all the assertions passed successfully.

Example 3: Authentication Module With Integration and Contract Tests

  1. Let’s assume we have a web service that handles user authentication and authorization.
  2. Copy the below code into files named “AuthenticationService.java”, “UserService.java”, and “InMemoryUserRepository.java”.
package com.example;

public interface AuthenticationService {
    boolean authenticate(String username, String password);
}

package com.example;

public interface UserService {
    User getUserByUsername(String username);
}

package com.example;

import java.util.HashMap;
import java.util.Map;

public class InMemoryUserRepository implements UserService {

    private final Map<String, User> userMap;

    public InMemoryUserRepository() {
        // Seed one user; the password placeholder is kept from the original snippet.
        userMap = new HashMap<>();
        User user = new User(1L, "John Doe", "<PASSWORD>");
        userMap.put("John Doe", user);
    }

    @Override
    public User getUserByUsername(String username) {
        return userMap.get(username);
    }
}
  3. Implement the necessary business logic inside each of the above-mentioned files.
package com.example;

public class AuthenticationServiceImpl implements AuthenticationService {

    @Override
    public boolean authenticate(String username, String password) {
        // Get the user from the in-memory repository
        UserService userService = new InMemoryUserRepository();
        User user = userService.getUserByUsername(username);
        // Compare passwords
        return user != null && user.getPassword().equals(password);
    }
}
  4. Finally, right-click on the “test” folder and select New -> JUnit Test Case class.
  5. Define the test cases for integration and contract testing as follows (a hedged sketch of one contract test appears after these steps).

Integration Testing:
  • Check if a login request returns a successful status code for valid credentials
  • Check if a login request fails for invalid credentials
  • Check if a logout request revokes an existing token

Contract Testing:
  • Check if the API endpoint accepts POST requests with a JSON payload containing the credentials
  • Check if the API endpoint responds with HTTP status code 200 for valid credentials
  • Check if the API endpoint rejects invalid credentials with HTTP status code 401
  6. Populate the test case annotations according to the requirement.
  7. Save the test cases, then compile and run the test suite.
  8. Output: all the test cases pass successfully. Note: the authentication module would typically contain other features such as forgot password, reset password, and change password; corresponding test cases can be implemented for those features as required.
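As a hedged sketch of one of the contract tests listed above, the test below posts JSON credentials to an assumed /api/login endpoint and checks the HTTP status codes; the URL, payload shape, and passwords are illustrative assumptions (Java 11+ `HttpClient`):

import static org.junit.Assert.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class AuthenticationContractTest {

    // Hypothetical endpoint and payload shape; adjust to the real API contract.
    private static final String LOGIN_URL = "http://localhost:8080/api/login";

    private HttpResponse<String> postCredentials(String json) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(LOGIN_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Test
    public void validCredentialsReturnStatus200() throws Exception {
        HttpResponse<String> response =
                postCredentials("{\"username\":\"John Doe\",\"password\":\"correct-password\"}");
        assertEquals(200, response.statusCode());
    }

    @Test
    public void invalidCredentialsReturnStatus401() throws Exception {
        HttpResponse<String> response =
                postCredentials("{\"username\":\"John Doe\",\"password\":\"wrong-password\"}");
        assertEquals(401, response.statusCode());
    }
}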
