These manual testing interview questions will familiarize you with the questions interviewers actually ask and help you get through the interview process.
Best Manual Testing Interview Questions and Answers
Q1. What Is Requirement Traceability Matrix?
Ans: Requirement Traceability Matrix (RTM) is a document that records the mapping between the high-level requirements and the test cases in the form of a table. It ensures that the Test Plan covers all the requirements and links each one to its latest version.
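The mapping described above can be sketched as a simple data structure. A minimal sketch in Python, with hypothetical requirement and test-case IDs:

```python
# Minimal Requirement Traceability Matrix: each (hypothetical) requirement ID
# maps to the test cases that cover it.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no coverage yet
}

def uncovered_requirements(matrix):
    """Return requirement IDs that no test case covers."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(rtm))  # ['REQ-003']
```

Scanning the matrix for empty rows is exactly how an RTM shows that the test plan misses a requirement.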
Q2. Explain The Difference Between Pilot And Beta Testing?
Ans: Read the following points to know the difference between Pilot and Beta testing.
We do the beta test when the product is about to be released to the customer whereas pilot testing takes place in the earlier phase of the development cycle.
In a beta test, the application is given to a few users to make sure that it meets the customer requirements and does not contain any showstopper bug. In a pilot test, a few members of the testing team work at the customer site to set up the product. They also give feedback to improve the quality of the end product.
Q3. What do you understand by software testing?
Ans: Software testing is a validation process that confirms that a system works as per the business requirements. It qualifies a system on various aspects such as usability, accuracy, completeness, efficiency, etc. ANSI/IEEE 1059 is the global standard that defines the basic principles of testing.
Q4. When should you stop the testing process?
Ans: The testing activity ends when the testing team completes the following milestones.
Test case execution
The successful completion of a full test cycle after the final bug fix marks the end of the testing phase.
Reaching the end date of the validation stage also marks the closure of testing, provided no critical or high-priority defects remain in the system.
Code Coverage(CC) ratio
It is the amount of code covered by automated tests. If the team achieves the intended code coverage (CC) ratio, it can choose to end the validation.
Mean Time Between Failure (MTBF) rate
Mean time between failure (MTBF) refers to the average amount of time that a device or product functions before failing. This measurement includes only the operational time between failures and excludes repair times, assuming the item is repaired and begins functioning again. MTBF figures are often used to project how likely a single unit is to fail within a certain period of time.
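The MTBF calculation described above is just the average of the uptime periods between failures. A minimal sketch, with hypothetical uptime figures:

```python
def mtbf(uptimes_hours):
    """Mean Time Between Failure: average operational time between failures.
    Repair time is excluded; each entry is the uptime before one failure."""
    return sum(uptimes_hours) / len(uptimes_hours)

# Three uptime periods (in hours) between failures of a hypothetical device:
print(mtbf([500, 620, 380]))  # 500.0
```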
Q5. What do verification and validation mean in software testing?
Ans: In software testing, verification is a process to confirm that product development is taking place as per the specifications and using the standard development procedures. The process comprises the following activities:
- Reviews
- Walkthroughs
- Inspections
Validation is a means to confirm that the developed product doesn’t have any bugs and is working as expected. It comprises the following activities:
- Functional testing
- Non-functional testing
Q6. What is static testing? When does it start and what does it cover?
Ans: Static testing is a white-box testing technique that directs developers to verify their code against a checklist to find errors in it. Developers can start static testing without actually executing the application or program. Static testing is more cost-effective than dynamic testing because it covers more areas in less time.
Q7. Define Black-box testing.
Ans: It is a standard software testing approach that requires testers to assess the functionality of the software as per the business requirements. The software is treated as a black box and validated as per the end user’s point of view.
Q8. What is a test plan and what does it include?
Ans: A test plan stores all possible testing activities to ensure a quality product. It gathers data from the product description, requirement, and use case documents.
The test plan document includes the following:
- Testing objectives
- Test scope
- Test strategy
- Reason for testing
- Entry and exit criteria
- Risk factors
Q9. What is meant by test coverage?
Ans: Test coverage is a quality metric to represent the amount (in percentage) of testing completed for a product. It is relevant for both functional and non-functional testing activities. This metric is used to add missing test cases.
Q10. Is it possible to achieve 100% testing coverage? How would you ensure it?
Ans: It is considered impossible to test any product 100%, but you can follow the steps below to come closer.
Set a hard limit on the following factors:
- Percentage of test cases passed
- Number of bugs found
Set a red flag if:
- Test budget is depleted
- Deadlines are breached
Set a green flag if:
- The entire functionality gets covered in test cases
- All critical and major bugs have a ‘CLOSED’ status
Q11. What are unit testing and integration testing?
Ans: Unit testing has many names such as module testing or component testing.
Many times, it is the developers who test individual units or modules to check if they are working correctly.
Whereas, integration testing validates how well two or more units of software interact with each other.
There are three ways to validate integration:
- Big Bang approach
- Top-down approach
- Bottom-up approach
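The unit/integration distinction above can be illustrated with a small sketch. The discount and checkout functions below are hypothetical examples, not from the source:

```python
# Unit under test: a stand-alone discount calculator.
def discount(price, percent):
    return price - price * percent / 100

# A second unit that integrates with the first.
def checkout_total(prices, percent):
    return sum(discount(p, percent) for p in prices)

# Unit test: one module validated in isolation.
assert discount(100, 10) == 90

# Integration test: how the two units work together.
assert checkout_total([100, 200], 10) == 270
print("unit and integration checks passed")
```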
Q12. Can we do system testing at any stage?
Ans: No. System testing should start only if all modules are in place and they work correctly. However, it should be performed before UAT (user acceptance testing).
Q13. Mention the different types of software testing.
Ans: Various testing types used by manual testers are as follows:
- Unit testing
- Integration testing
- Regression testing
- Shakeout testing
- Smoke testing
- Functional testing
- Performance testing
1. Load testing
2. Stress testing
3. Endurance testing
- White-box and Black-box testing
- Alpha and Beta testing
- System testing
Q14. What is the difference between a test driver and a test stub?
Ans: The test driver is a section of code that calls a software component under test. It is useful in testing that follows the bottom-up approach.
The test stub is a dummy program that integrates with an application to complete its functionality. It is relevant for testing that uses the top-down approach.
- Let’s assume a scenario where we have to test the interface between Modules A and B, but only Module A has been developed. We can still test Module A if we have either the real Module B or a dummy module standing in for it. That dummy module is called the test stub.
- The dummy Module B can’t send or receive data from Module A on its own. In such a scenario, we have to move data from one module to the other using some external code, called the test driver.
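The Module A/B scenario can be sketched in code. Everything here is a hypothetical illustration: the stub returns a canned value in place of the unfinished Module B, and the driver is the external code that calls Module A and checks the result:

```python
# Module A (real, under test) depends on Module B via a callable.
def module_a(fetch_rate):
    return 100 * fetch_rate()

# Test stub: a dummy stand-in for the unfinished Module B.
def stub_module_b():
    return 1.5  # canned response instead of real logic

# Test driver: external code that invokes Module A and verifies the output.
def driver():
    result = module_a(stub_module_b)
    assert result == 150.0
    return result

print(driver())  # 150.0
```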
Q15. What is agile testing and why is it important?
Ans: Agile testing is a software testing process that evaluates software from the customers’ point of view. It is favourable as it does not require the development team to complete coding for starting QA. Instead, both coding and testing go hand in hand. However, it may require continuous customer interaction.
Q16. What do you know about data flow testing?
Ans: It is one of the white-box testing techniques.
Data flow testing emphasizes designing test cases that cover control flow paths around variable definitions and their uses in the modules. It expects test cases to have the following attributes:
- The input to the module
- The control flow path for testing
- A pair of an appropriate variable definition and its use
- The expected outcome of the test case
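A minimal def-use example may make these attributes concrete. The grade function is hypothetical; each test case pairs a definition of the variable label with its use at the return statement, along a specific control flow path:

```python
def grade(score):
    label = "fail"      # definition of `label`
    if score >= 50:
        label = "pass"  # redefinition on the true branch
    return label        # use of `label`

# Input 75 exercises the def-use pair through the if-branch:
assert grade(75) == "pass"
# Input 30 exercises the pair that skips the if-branch:
assert grade(30) == "fail"
print("both def-use paths covered")
```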
Q17. What is the purpose of end-to-end testing?
Ans: End-to-end testing is a testing strategy to execute tests that cover every possible flow of an application from its start to finish. The objective of performing end-to-end tests is to discover software dependencies and to assert that the correct input is getting passed between various software modules and sub-systems.
Q18. The probability that a server-class application hosted on the cloud is up and running for six months without crashing is 99.99 percent. Which type of testing would you perform to analyze this scenario?
Ans: Reliability testing
Q19. What will you do when a bug turns up during testing?
Ans: When a bug occurs, we can follow the below steps.
- We can run more tests so that the problem can be described clearly.
- We can also run a few more tests to ensure that the same problem doesn’t exist with different inputs.
- Once we are certain of the full scope of the bug, we can add details and report it.
Q20. Why is it impossible to test a program thoroughly?
Ans: Here are the two principal reasons that make it impossible to test a program entirely.
- Software specifications can be subjective and can lead to different interpretations.
- A software program may require too many inputs, outputs, and path combinations.
Q21. How do you test a product if the requirements are yet to be frozen?
Ans: If the required specifications are not available for a product, then a test plan can be created based on the assumptions made about the product. But we should get all assumptions well-documented in the test plan.
Q22. If a product is in the production stage and one of its modules gets updated, then is it necessary to retest the whole application?
Ans: It is suggested to perform regression testing and run tests for all the other modules as well. Finally, the QA should also carry out system testing.
Q23. How will you overcome the challenges faced due to the unavailability of proper documentation for testing?
Ans: If the standard documents like System Requirement Specification or Feature Description Document are not available, then QAs may have to rely on the following references, if available.
- A previous version of the application
Another reliable way is to have discussions with the developer and the business analyst. These discussions help resolve doubts and open a channel for bringing clarity to the requirements. The emails exchanged could also be useful as a testing reference.
Smoke testing is yet another option that would help verify the main functionality of the application. It would reveal some very basic bugs in the application. If none of these work, then we can just test the application from our previous experiences.
Q24. Is there any difference between retesting and regression testing?
Ans: Possible differences between retesting and regression testing are as follows:
- We perform retesting to verify defect fixes, whereas regression testing assures that a bug fix does not break other parts of the application.
- Regression test cases verify the functionality of some or all modules.
- Regression testing ensures the re-execution of passed test cases. Whereas, retesting involves the execution of test cases that are in a failed state.
- Retesting has a higher priority over regression. But in some cases, both get executed in parallel.
Q25. As per your understanding, list down the key challenges of software testing.
Ans: Following are some of the key challenges of software testing:
- The lack of availability of standard documents to understand the application
- Lack of skilled testers
- Understanding the requirements: Testers need good listening and comprehension skills to discuss the application requirements with customers.
- The decision-making ability to analyze when to stop testing
- Ability to work under time constraints
- Ability to decide which tests to execute first
- Testing the entire application using an optimized number of test cases
Q26. What are the different types of functional testing?
Ans: Functional testing covers the following types of validation techniques:
- Unit testing
- Smoke testing
- Sanity testing
- Interface testing
- Integration testing
- System testing
- Regression testing
Q27. What are the functional test cases and non-functional test cases?
- Functional testing: It tests the ‘functionality’ of the software or application under test, i.e., its behaviour. A document based on the client’s requirements, called a software specification or requirement specification, is used as a guide to test the application.
- Non-functional testing: An application is considered reliable when it works as per the user’s expectations, smoothly and efficiently, under any condition. Testing these quality parameters is critical, and this type of testing is called non-functional testing.
Q28. What do you understand by STLC?
Ans: The software testing life cycle (STLC) proposes the test execution in a planned and systematic manner. In the STLC model, many activities occur to improve the quality of the product.
The STLC model lays down the following steps:
- Requirement Analysis
- Test Planning
- Test Case Development
- Environment Setup
- Test Execution
- Test Cycle Closure
Q29. In software testing, what does a fault mean?
Ans: A fault is a condition that causes the software to fail while performing its required function.
Q30. Difference between Bug, Defect, and Error.
Ans: A slip in coding is called an error. An error spotted by a manual tester becomes a defect. A defect which the development team accepts is known as a bug. If the built code fails to meet the requirements, it is a functional failure.
Q31. How do severity and priority relate to each other?
Ans: Severity: It represents the gravity/depth of a bug. It describes the bug from the application’s point of view.
Priority: It specifies which bug should get fixed first. It describes the bug from the user’s point of view.
Q32. List the different types of severity.
Ans: The criticality of a bug can be low, medium, or high depending on the context.
- User interface defects – Low
- Boundary related defects – Medium
- Error handling defects – Medium
- Calculation defects – High
- Misinterpreted data – High
- Hardware failures – High
- Compatibility issues – High
- Control flow defects – High
- Load conditions – High
Q33. What do you mean by defect detection percentage in software testing?
Ans: Defect detection percentage (DDP) is a type of testing metric. It indicates the effectiveness of a testing process by measuring the ratio of defects discovered before the release and reported after the release by customers.
For example, let’s say, the QA has detected 70 defects during the testing cycle and the customer reported 20 more after the release. Then, DDP would be: 70/(70 + 20) = 77.8%
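The DDP formula can be sketched directly, using the counts from the example:

```python
def ddp(found_in_testing, found_after_release):
    """Defect Detection Percentage: share of all defects caught before release."""
    total = found_in_testing + found_after_release
    return round(found_in_testing / total * 100, 1)

print(ddp(70, 20))  # 77.8
```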
Q34. What does defect removal efficiency mean in software testing?
Ans: Defect removal efficiency (DRE) is one of the testing metrics. It is an indicator of the efficiency of the development team to fix issues before the release.
It is measured as the ratio of defects fixed to the total number of issues discovered.
For example, let’s say, there were 75 defects discovered during the test cycle while 62 of them got fixed by the development team at the time of measurement. The DRE would be 62/75 = 82.7%
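Similarly for DRE, with the counts from the example:

```python
def dre(fixed, discovered):
    """Defect Removal Efficiency: share of discovered defects already fixed."""
    return round(fixed / discovered * 100, 1)

print(dre(62, 75))  # 82.7
```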
Q35. What is the average age of a defect in software testing?
Ans: Defect age is the time elapsed between the day the tester discovered a defect and the day the developer got it fixed.
While estimating the age of a defect, consider the following points:
- The day of birth of a defect is the day it got assigned and accepted by the development team.
- The issues which got dropped are out of the scope.
- Age can be both in hours or days.
- The end time is the day the defect got verified and closed, not just the day it got fixed by the development team.
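Following the points above, defect age is a simple date difference from acceptance to verified closure. A minimal sketch with hypothetical dates:

```python
from datetime import date

def defect_age_days(accepted_on, closed_on):
    """Age counted from acceptance by the dev team to verified closure."""
    return (closed_on - accepted_on).days

print(defect_age_days(date(2023, 3, 1), date(2023, 3, 15)))  # 14
```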
Q36. How do you perform automated testing in your environment?
Ans: Automation testing is a process of executing tests automatically. It reduces the human intervention to a great extent. We use different test automation tools like QTP, Selenium, and WinRunner. Testing tools help in speeding up the testing tasks. These tools allow you to create test scripts to verify the application automatically and also to generate the test reports.
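A minimal automated test can also be written with nothing more than Python’s built-in unittest runner; the slugify function here is a hypothetical unit under test, not one of the tools named above:

```python
import unittest

def slugify(title):
    """Hypothetical function under test."""
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Manual Testing"), "manual-testing")

    def test_surrounding_whitespace_is_trimmed(self):
        self.assertEqual(slugify("  QA  "), "qa")

# Load and run the suite; the runner executes the tests and prints a report.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
unittest.TextTestRunner().run(suite)
```

Scripts like this are what a scheduler or CI job re-executes on every build, which is where automation saves the manual effort.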
Q37. Is there any difference between quality assurance, quality control, and software testing. If so, what is it?
Ans: Quality Assurance (QA) refers to the planned and systematic way of monitoring the quality of the process which is followed to produce a quality product. QA tracks the test reports and modifies the process to meet the expectation.
Quality Control (QC) is relevant to the quality of the product. QC not only finds the defects but suggests improvements too. Thus, a process that is set by QA is implemented by QC. QC is the responsibility of the testing team.
Software testing is the process of ensuring that the product which is developed by developers meets the users’ requirements. The aim of performing testing is to find bugs and make sure that they get fixed. Thus, it helps to maintain the quality of the product to be delivered to the customer.
Q38. Tell me about some of the essential qualities an experienced QA or Test Lead must possess.
Ans: A QA or Test Lead should have the following qualities:
- Well-versed in software testing processes
- Ability to accelerate teamwork to increase productivity
- Improve coordination between QA and Dev engineers
- Provide ideas to refine QA processes
- Skill to conduct RCA meetings and draw conclusions
- Excellent written and interpersonal communication skills
- Ability to learn fast and to groom the team members
Q39. What is a Silk Test and why should you use it?
Ans: Here are some facts about the Silk Test tool:
- The Silk Test tool is developed for performing regression and functionality testing of an application.
- It is used for testing Windows-based, Java, web, and traditional client/server applications.
- Silk Test helps in preparing and managing the test plan, and provides direct database access and field validation.
Q40. On the basis of which factors you would consider choosing automated testing over manual testing?
Ans: Choosing automated testing over manual testing depends on the following factors:
- Tests require periodic execution.
- Tests include repetitive steps.
- Tests execute in a standard runtime environment.
- Automation is expected to take less time.
- Automation increases reusability.
- Automation reports are available for every execution.
- Small releases like service packs include a minor bug fix. In such cases, executing the regression test is sufficient for validation.
Q41. Tell me the key elements to consider while writing a bug report.
Ans: An ideal bug report should consist of the following key points:
- A unique ID
- Defect Description: A short description of the bug
- Steps to reproduce: They include the detailed test steps to emulate the issue. They also provide the test data and the time when the error has occurred
- Environment: Add any system settings that could help in reproducing the issue
- Module/section of the application in which the error has occurred
- Responsible QA: This person is a point of contact in case you want to follow-up regarding this issue
Q42. Is there any difference between bug leakage and bug release?
Ans: Bug leakage: Bug leakage occurs when a bug that the testing team missed during testing is discovered by the end-user/customer. It is a defect that exists in the application, goes undetected by the tester, and is eventually found by the customer/end-user.
Bug release: A bug release is when a particular version of the software is released with a set of known bug(s). These bugs are usually of low severity/priority. It is done when a software company can afford the existence of bugs in the released software but not the time/cost of fixing them in that particular version.
Q43. What is the difference between performance testing and monkey testing?
Ans: Performance testing checks the speed, scalability, and/or stability characteristics of a system. Performance is identified with achieving response time, throughput, and resource-utilization levels that meet the performance objectives for a project or a product.
Monkey testing is a technique in software testing where the user tests the application by providing random inputs, checking the behaviour of the application (or trying to crash the application).
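Monkey testing as described can be sketched in a few lines: feed random garbage inputs to a function and confirm that only expected, handled failures occur. The parse_age function is a hypothetical unit under test:

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: parse a human age from user input."""
    value = int(text)  # raises ValueError on non-numeric garbage
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

# Monkey test: hammer the function with random inputs; anything other than
# the expected ValueError (e.g. a crash) would surface here.
random.seed(42)  # reproducible randomness
for _ in range(1000):
    garbage = "".join(random.choices(string.printable, k=5))
    try:
        parse_age(garbage)
    except ValueError:
        pass  # expected, handled failure
print("survived 1000 random inputs")
```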