
Software Testing Interview Questions - 2022

  • Q 1) What is the PDCA cycle, and where does testing fit in?

    Answer: There are four steps in a normal software development process. In short, these steps are referred to as PDCA.

    PDCA stands for Plan, Do, Check, Act.

    Plan: It defines the goal and the plan for achieving that goal.


    Do/Execute: The work is executed according to the strategy and plan decided during the planning stage.


    Check: This is the testing part of the software development phase. It is used to ensure that we are moving according to plan and getting the desired result.


    Act: If any issue is found during the check phase, this step takes appropriate corrective action and revises the plan accordingly.


    The developers do the "planning and building" of the project while testers do the "check" part of the project.

    Q 2) Explain the role of testing in software development?

    Answer: Software testing comes into play at different times in different software development methodologies. There are two main methodologies in software development, namely Waterfall and Agile.


    In a traditional waterfall software development model, requirements are gathered first. Then a specification document is created based on those requirements, which drives the design and development of the software. Finally, the testers conduct the testing at the end of the software development life cycle, once the complete software system is built. In Agile development, by contrast, testing is a continuous activity performed in every iteration alongside development.

    Q 3) What is the difference between the white box, black box, and gray box testing?

    Black box Testing: The strategy of black box testing is based on requirements and specifications. It requires no knowledge of the internal paths, structure, or implementation of the software being tested.


    White box Testing: White box testing is based on the internal paths, code structure, and implementation of the software being tested. It requires detailed programming knowledge.


    Gray box Testing: This is another type of testing in which we look inside the box being tested, but only to understand how it has been implemented. After that, we close the box and use black box testing techniques.


    The differences among black box, gray box, and white box testing are:


    | Black box testing | Gray box testing | White box testing |
    |---|---|---|
    | Does not need knowledge of a program's implementation. | Requires only limited knowledge of the program's internals. | Implementation details of the program are fully required. |
    | It has a low granularity. | It has a medium granularity. | It has a high granularity. |
    | Also known as opaque box, closed box, input-output, data-driven, behavioral, and functional testing. | Also known as translucent testing. | Also known as glass box or clear box testing. |
    | Includes user acceptance testing, i.e., it is done by end users. | Can also include user acceptance testing. | Mainly done by testers and programmers. |
    | Test cases are derived from the functional specifications, as internal details are not known. | Test cases are derived from partial knowledge of the program's internals. | Test cases are derived from the internal details of the program. |
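    The distinction can be sketched with a small example. `classify_triangle` below is a hypothetical function under test; the black-box tests come purely from its specification, while the white-box tests target branches visible only in the code:

```python
# Hypothetical function under test: classifies a triangle by its side lengths.
def classify_triangle(a, b, c):
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: derived purely from the specification (inputs and outputs).
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box tests: derived from the code itself, aiming to exercise every
# branch, including the triangle-inequality guard an outside tester may miss.
assert classify_triangle(1, 2, 3) == "invalid"   # a + b == c branch
assert classify_triangle(0, 4, 5) == "invalid"   # non-positive side branch
assert classify_triangle(5, 5, 8) == "isosceles"
```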
    Q 4) How much testing is sufficient? Or, is it possible to do exhaustive testing of the software?

    Answer: It is impossible to exhaustively test software or prove the absence of errors, no matter how specific your test strategy is.


    An extensive test that finds hundreds of errors doesn’t imply that it has discovered them all. There could be many more errors that the test might have missed. The absence of errors doesn’t mean there are no errors, and the software is perfect. It could easily mean ineffective or incomplete tests. To prove that a program works, you’d have to test all possible inputs and their combinations.


    Consider a simple program that takes a ten-character string as input. To test it with every possible input, assuming only the 26 lowercase letters, you'd have to enter 26^10 (over 141 trillion) strings, which is impossible. Since exhaustive testing is not practical, your best strategy as a tester is to pick the test cases that are most likely to find errors. Testing is sufficient when you have enough confidence to release the software and assume it will work as expected.
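    The size of that input space is easy to check:

```python
# Input space for a 10-character string drawn from the 26 lowercase letters.
inputs = 26 ** 10
print(inputs)  # 141167095653376

# Even at a million tests per second, running them all would take years.
seconds = inputs / 1_000_000
print(seconds / (3600 * 24 * 365))  # roughly 4.5 years
```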

    Q 5) What are the types of defects?

    Answer: There are three types of defects: Wrong, missing, and extra.


    Wrong: These defects occur when a requirement has been implemented incorrectly.


    Missing: A specified requirement was not implemented, or the customer's requirement was not noted properly.


    Extra: This is a facility incorporated into the product that was not requested by the end customer. It is always a variance from the specification, though it may be an attribute the customer desired. It is still considered a defect because it varies from the user requirements.

    Q 6) When should exploratory testing be performed?

    Answer: Exploratory testing is performed as a final check before the software is released. It is a complementary activity to automated regression testing.

    Q 7) What is exploratory testing?

    Answer: Simultaneous test design and execution against an application is called exploratory testing. In this testing, the tester uses his domain knowledge and testing experience to predict where and under what conditions the system might behave unexpectedly.

    Q 9) What are the different types of testing?

    Answer: You can test the software in many different ways. Some types of testing are conducted by software developers and some by specialized quality assurance staff. Here are a few different kinds of software testing, along with a brief description of each.


    | Type | Description |
    |---|---|
    | Unit Testing | A programmatic test that tests the internal working of a unit of code, such as a method or a function. |
    | Integration Testing | Ensures that multiple components of the system work as expected when they are combined to produce a result. |
    | Regression Testing | Ensures that existing features/functionality that used to work are not broken by new code changes. |
    | System Testing | Complete end-to-end testing done on the complete software to make sure the whole system works as expected. |
    | Smoke Testing | A quick test performed to ensure that the software works at the most basic level and doesn't crash when started. The name originates from hardware testing, where you plug in the device and see if smoke comes out. |
    | Performance Testing | Ensures that the software performs according to the user's expectations by checking the response time and throughput under a specific load and environment. |
    | User-Acceptance Testing | Ensures the software meets the requirements of the clients or users. This is typically the last step before the software goes live, i.e., to production. |
    | Stress Testing | Ensures that the performance of the software doesn't degrade when the load increases. In stress testing, the tester subjects the software to heavy loads, such as a high number of requests or stringent memory conditions, to verify that it works well. |
    | Usability Testing | Measures how usable the software is. Typically performed with a sample set of end users, who use the software and provide feedback on how easy or complicated it is to use. |
    | Security Testing | Now more important than ever. Security testing tries to break a software's security checks to gain access to confidential data. Crucial for web-based applications or any application that involves money. |
    Q 10) Why shouldn't developers test the software they wrote?

    Answer: Developers make poor testers. Here are some reasons why:


    1. They try to test the code to make sure that it works, rather than testing all the ways in which it doesn't work.
    2. Since they wrote it themselves, developers tend to be very optimistic about the software and don't have the correct attitude needed for testing: to break software.
    3. Developers skip the more sophisticated tests that an experienced tester would perform to break the software. They follow the happy path, executing the code from start to finish with proper inputs, which is often not enough to gain the confidence needed to ship software to production.
    Q 11) Tell me about risk-based testing.

    Answer: Risk-based testing is a testing strategy that prioritizes tests by risk. It is based on a detailed risk analysis that categorizes risks by priority; the highest-priority risks are tested first.

    Q 12) What is acceptance testing?

    Answer: Acceptance testing is done to enable a user/customer to determine whether to accept a software product. It also validates whether the software follows a set of agreed acceptance criteria. At this level, the system is tested for user acceptability.


    User acceptance testing: It is also known as end-user testing. This type of testing is performed after the product has been tested by the testers. User acceptance testing is performed against user needs, requirements, and business processes to determine whether the system satisfies the acceptance criteria.


    Operational acceptance testing: Operational acceptance testing is performed after user acceptance testing but before the product is released to the market.


    Contract and regulation acceptance testing: In the case of contract acceptance testing, the system is tested against certain criteria and the criteria are made in a contract. In the case of regulation acceptance testing, the software application is checked whether it meets the government regulations or not.


    Alpha and beta testing: Alpha testing is performed in the development environment before the product is released to customers. Input is taken from the alpha testers, and the developers then fix the bugs to improve the quality of the product. Unlike alpha testing, beta testing is performed in the customer's environment. Customers perform the testing and provide feedback, which is then implemented to improve the quality of the product.

    Q 13) What is the software testing life cycle?

    Answer: Similar to software development, testing has its life cycle. During the testing, a tester goes through the following activities.


    Understand the requirements: Before testing software or a feature, the tester must first understand what it is supposed to do. If they don’t know how the software is supposed to work, they can’t test it effectively.


    Test Planning and Case Development: Once the tester has a clear understanding of the requirements, they can create a test plan. It includes the scope of testing, i.e., part of software under test and objectives for testing. Various activities are involved in planning the test, such as creating documentation, estimating the time and efforts involved, deciding the tools and platforms, and the individuals who will be conducting the tests.


    Prepare a test environment: The development happens in a development environment, i.e., on a developer’s computer that might not represent the actual environment that the software will run in production. A tester prepares an environment with the test data that mimics the end user’s environment. It assists with realistic testing of the software.


    Generate the test data: Though it is impossible to do exhaustive testing of the software, the tester tries to use realistic test data to give them the confidence that the software will survive the real world if it passes the tests.


    Test Execution: Once the tester has a complete understanding of the software and has a test environment set up with the test data, they execute the test. Here, execution means that the tester runs the software or the feature under test and verifies the output with the expected outcome.


    Test Closure: At the end of the test execution, there can be two possible outcomes. First, the tester finds a bug in the part of the software under test. In this case, they create a test record/bug report. Second, the software works as expected. Both these events indicate the end of the test cycle.
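    The execution and closure steps above can be sketched as a minimal automated check; `calculate_discount` is a hypothetical feature under test, not part of any real system:

```python
# Hypothetical feature under test: a 10% discount for orders of 100 or more.
def calculate_discount(order_total):
    return order_total * 0.9 if order_total >= 100 else order_total

def run_test(name, actual, expected):
    # Test execution: compare the actual output with the expected outcome,
    # and record either a pass or a finding that would go into a bug report.
    if actual == expected:
        return f"PASS: {name}"
    return f"FAIL: {name} (expected {expected}, got {actual})"

print(run_test("discount applied at threshold", calculate_discount(100), 90.0))
print(run_test("no discount below threshold", calculate_discount(99), 99))
```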

    Q 14) What qualities a software tester should have?

    Answer: Any software tester's goal is to find out as many bugs and problems in the system so that the customers don't have to. Hence, a good software tester should have a keen eye for detail. They should know the ins and outs of the software they are testing and push every aspect of the software to its limits, to identify bugs that are hard to find with the software's regular use.


    Having the domain knowledge of the application is essential. If a tester doesn't understand the specific problems the software is trying to solve, they won't be able to test it thoroughly.


    A good tester should keep the end-user in mind when they are testing. Having empathy with the end-user helps the tester ensure that the software is accessible and usable. Simultaneously, the tester should possess basic programming skills to think from a developer's perspective, which allows them to notice common programming errors such as null-references, out-of-memory errors, etc.


    Communication, both written and verbal, is an essential skill for a tester. A tester will frequently have to interact with both the developers and the management. They should be able to explain the bugs and problems found during testing to the developers. For each bug found, a good tester should provide a detailed bug report consisting of all the information a developer would need to fix that problem. They should be able to make a good case to the management if they are uncomfortable releasing the software if it contains unresolved issues.

    Q 15) What is accessibility testing?

    Answer: Accessibility testing is used to verify whether a software product is accessible to people with disabilities (visual, hearing, cognitive, motor, or other impairments).

    Q 16) What is Adhoc testing?

    Answer: Ad-hoc testing is a testing phase where the tester tries to 'break' the system by randomly trying the system's functionality.

    Q 17) What is Agile testing?

    Answer: Agile testing is a testing practice that follows agile principles, e.g., the test-first design paradigm.

    Q 18) What is API (Application Programming Interface)?

    Answer: Application Programming Interface is a formalized set of software calls and routines that can be referenced by an application program to access supporting system or network services.

    Q 19) What do you mean by automated testing?

    Answer: Testing using software tools that execute tests without manual intervention is known as automated testing. Automated testing can be used for GUI, performance, API testing, etc.

    Q 20) What is Bottom-up testing?

    Answer: Bottom-up testing is an integration testing approach in which the lowest-level components are tested first, and then the higher-level components are tested. The process is repeated until the top-level component has been tested.
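    A minimal sketch of the bottom-up order, using two hypothetical components (`parse_price` as the low-level unit, `cart_total` built on top of it):

```python
# Low-level component: tested first, in isolation.
def parse_price(text):
    return float(text.strip().lstrip("$"))

# Higher-level component: built on top of parse_price, tested next.
def cart_total(price_texts):
    return sum(parse_price(t) for t in price_texts)

# Bottom-up order: verify the lowest layer first...
assert parse_price("$4.50") == 4.5

# ...then the layer that integrates it.
assert cart_total(["$4.50", "$1.50"]) == 6.0
```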

    Q 21) What is functional testing?

    Answer: Functional testing is a form of black-box testing. As the name suggests, it focuses on the software's functional requirements rather than its internal implementation. A functional requirement refers to required behavior in the system, in terms of its input and output.


    It validates the software against the functional requirements or the specification, ignoring the non-functional attributes such as performance, usability, and reliability.


    Functional testing aims to answer the following questions, in particular:


    1. Does the software fulfill its functional requirements?
    2. Does it solve its intended users' problems?
    Q 22) What is a bug report?

    Answer: During testing, a tester records their observations, findings, and other information useful to the developers or the management. All this data belongs to a test record, also called a bug report.


    A detailed bug report is an important artifact produced during testing. It helps the team members with:


    1. Understand the problem,
    2. Reproduce it using the recorded steps,
    3. Identify the environment and the specific conditions under which it happens, and
    4. Track the resolution if/when the developers fix the problem.

    Here are a few fields of information that a good bug report should contain.


    | Field | Description |
    |---|---|
    | Title | A short headline that summarizes the problem. It should give the reader just the right information; it should be specific and accurate. |
    | Description | Answers the questions not covered by the title. It contains a detailed summary of the bug, its severity and impact, the steps to reproduce, and the expected vs. actual results. |
    | Version | A lot of time can be wasted trying to reproduce a bug in the wrong version of the product. Knowing the exact product version or build number on which the bug was found is very useful to the developer reproducing it. |
    | Status | At any point, a bug can be 'Active', 'Ready for Testing', or 'Closed'. A bug becomes active when it is found, and is ready for testing once the developer fixes it. A tester marks it closed if the fix works, or active again if not. |
    | Steps to Reproduce | Though the steps can be provided in the description, a distinct field forces the tester to think about them. They include each step one must take to reproduce the problem. |
    | Assigned To | Name of the developer or tester to whom this bug is assigned. |
    | Resolution | When a developer fixes the bug, they should record the cause of the bug and its resolution. This helps the team when a similar bug resurfaces in the future. |

    For example, Jira, a popular bug-tracking tool, records bugs with similar fields.

    Q 23) What is non-functional testing?

    Answer: Non-functional testing tests the system's non-functional requirements, which refer to an attribute or quality of the system explicitly requested by the client. These include performance, security, scalability, and usability.


    Non-functional testing comes after functional testing. It tests the general characteristics unrelated to the functional requirements of the software. Non-functional testing ensures that the software is secure, scalable, high-performance, and won't crash under heavy load.

    Q 24) What are some important testing metrics?

    Answer: Testing metrics provide a high-level overview to the management or the developers on how the project is going and the next action steps.


    Here are some of the metrics derived from a record of the tests and failures:


    1. Total number of defects found, ordered by their severity
    2. Total number of bugs fixed
    3. Total number of problems caused by an error in the source code vs. configuration or external environmental factors
    4. Bug find and fix rate over time
    5. Bugs by product/feature area
    6. Average time between a bug being found and fixed
    7. Total time spent on new feature development vs. time spent on resolving bugs and failures
    8. Number of outstanding bugs before a release
    9. Bugs/failures reported by the customers vs. those found by the testers
    Q 25) What is Baseline Testing?

    Answer: In baseline testing, a set of tests is run to capture performance information. The information collected is used to make changes that improve the performance and capabilities of the application. Baseline testing compares the application's present performance with its previous performance.

    Q 26) What is Benchmark Testing?

    Answer: Benchmark testing is the process of comparing application performance against an industry standard set by another organization.


    It is a standard test that specifies where our application stands with respect to others.

    Q 27) Which types of testing are important for web testing?

    Answer: There are two types of testing which are very important for web testing:


    Performance testing: Performance testing is a technique that measures a system's quality attributes, such as responsiveness, speed under different load conditions, and scalability. It identifies which attributes need to be improved before the product is released to the market.


    Security testing: Security testing is a testing technique that verifies that data and resources are protected from intruders.

    Q 28) What is the difference between web application and desktop application in the scenario of testing?

    Answer: The difference is that a web application is open to the world, with potentially many users accessing it simultaneously at various times, so load testing and stress testing are important. Web applications are also prone to all forms of attacks, especially DDoS, so security testing is very important as well.

    Q 29) What is the difference between verification and validation?

    Answer: Difference between verification and validation:


    | Verification | Validation |
    |---|---|
    | Verification is static testing. | Validation is dynamic testing. |
    | Verification occurs before validation. | Validation occurs after verification. |
    | Verification evaluates plans, documents, requirements, and specifications. | Validation evaluates the product itself. |
    | In verification, the inputs are checklists, issue lists, walkthroughs, and inspections. | In validation, the actual product is tested. |
    | The output of verification is a set of documents, plans, specifications, and requirements. | In validation, the actual product is the output. |
    Q 30) What is the difference between Retesting and Regression Testing?

    Answer: A list of differences between Retesting and Regression Testing:


    | Regression testing | Retesting |
    |---|---|
    | A type of software testing that checks that a code change does not affect the current features and functions of an application. | The process of re-running the test cases that failed in the previous execution. |
    | Its main purpose is to ensure that changes made to the code do not break existing functionality. | It is applied to defect fixes. |
    | Defect verification is not an element of regression testing. | Defect verification is an element of retesting. |
    | Regression testing can be automated; manual regression testing would be expensive and time-consuming. | Retesting cannot be automated. |
    | Also known as generic testing. | Also known as planned testing. |
    | Executes test cases that passed in earlier builds, and can be performed in parallel with retesting. | Executes test cases that failed earlier; retesting has higher priority than regression testing. |
    Q 31) What is Test-Driven-Development?

    Answer: Test-Driven-Development (TDD) is a popular software development technique, popularized by Kent Beck in his book Test-Driven Development: By Example, published in 2002.


    In TDD, a developer working on a feature first writes a failing test, then writes just enough code to make that test pass. Once they have a passing test, they add another failing test and then write just enough code to pass the failing test. This cycle repeats until the developer has the fully working feature. If the code under the test has external dependencies such as database, files, or network, you can mock them to isolate the code.


    Benefits of TDD:


    1. Writing tests first forces you to think about the feature you are trying to build, helping you produce better code.
    2. As you always have a working set of tests at hand, a failing test indicates that the problem is with the code you just added, reducing the time spent in debugging.
    3. Writing tests help the developer to clarify the requirements and specification. It’s challenging to write good tests for a poor set of requirements.
    4. It’s tough to produce high-quality software unless you can test the software after each new change. You can never be sure that your new code didn’t break the working software. TDD gives you the confidence to add new code, as you already have a test in place.
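    The red-green cycle can be sketched in a few lines; `slugify` is a hypothetical feature being driven out by tests:

```python
# Step 1 (red): write a failing test for a feature that doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slugify()  # now passes

# Step 3: add the next failing test (trimming whitespace), then extend the
# code minimally, and repeat the cycle.
def test_slugify_strips_whitespace():
    assert slugify("  Hello World  ") == "hello-world"

def slugify(title):  # redefined after the new test drove a change
    return title.strip().lower().replace(" ", "-")

test_slugify_strips_whitespace()
test_slugify()  # the earlier test still passes
```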
    Q 32) What is the difference between preventative and reactive approaches to testing?

    Answer: Preventative tests are designed earlier, and reactive tests are designed after the software has been produced.

    Q 33) What is the purpose of exit criteria?

    Answer: The exit criteria are used to define the completion of the test level.

    Q 34) Why is the decision table testing used?

    Answer: A decision table lists combinations of input conditions in its upper rows and the corresponding outputs (actions) in the rows below them; each column represents a rule.


    Decision table testing is used for testing systems whose specification takes the form of rules or cause-effect combinations. The rules in the table explore combinations of inputs and define the output produced for each.
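    A small sketch: the decision table below encodes a hypothetical login rule, and every rule (row) becomes one test case:

```python
# Decision table for a hypothetical login rule: each key is a combination of
# conditions (valid user?, valid password?) mapped to the expected action.
decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show error",
    (False, True):  "show error",
    (False, False): "show error",
}

# Hypothetical implementation under test.
def login_action(valid_user, valid_password):
    return "grant access" if valid_user and valid_password else "show error"

# Decision-table testing: check every rule in the table.
for (user_ok, pw_ok), expected in decision_table.items():
    assert login_action(user_ok, pw_ok) == expected
```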

    Q 35) What is alpha and beta testing?

    Answer: These are the key differences between alpha and beta testing:


    | No. | Alpha Testing | Beta Testing |
    |---|---|---|
    | 1) | It is done by developers and testers at the software development site. | It is performed by customers at their own site. |
    | 2) | It may also be performed by an independent testing team. | It is not performed by an independent testing team. |
    | 3) | It is not open to the market and public. | It is open to the market and public. |
    | 4) | It is performed in a virtual (simulated) environment. | It is performed in a real-time environment. |
    | 5) | It is used for software applications and projects. | It is used for software products. |
    | 6) | It involves both white box and black box testing. | It is only a kind of black box testing. |
    | 7) | It is not known by any other name. | It is also known as field testing. |
    Q 36) What are some common mistakes that lead to major issues?

    Answer: Some common mistakes include:


    1. Poor Scheduling
    2. Underestimating
    3. Ignoring small issues
    4. Not following the exact process
    5. Improper resource allocation
    Q 37) What is a user story?

    Answer: All software has a target user. A user story describes the user's motivations and what they are trying to accomplish by using the software. Finally, it shows how the user uses the application. It ignores the design and implementation details.


    A user story aims to focus on the value provided to the end-user instead of the exact inputs they might enter and the expected output.


    In a user story, the tester creates user personas with real names and characteristics and tries to simulate a real-life interaction with the software. A user story often helps fish out hidden problems that are often not revealed by more formal testing processes.

    Q 38) What is Random/Monkey Testing?

    Answer: Random testing is also known as monkey testing. In this testing, data is generated randomly, often using a tool or some other automated mechanism.


    Random testing has some limitations:

    1. Most of the random tests are redundant and unrealistic.
    2. It needs more time to analyze results.
    3. It is not possible to recreate the test if you do not record what data was used for testing.
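    A minimal sketch of random testing, with `safe_divide` as a hypothetical function under test. Note the recorded seed, which addresses the third limitation: without it, a failing run could not be recreated:

```python
import random

# Hypothetical function under test.
def safe_divide(a, b):
    return a / b if b != 0 else None

# Record the seed so any failing run can be reproduced later.
seed = 12345
random.seed(seed)

for _ in range(1000):
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    result = safe_divide(a, b)
    # Oracle: the result must be None (b == 0) or satisfy result * b == a.
    assert result is None or abs(result * b - a) < 1e-9, f"failed with seed={seed}"
```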
    Q 39) What is the benefit of test independence?

    Answer: Test independence is very useful because it avoids author bias in defining effective tests.

    Q 40) What is the boundary value analysis/testing?

    Answer: In boundary value analysis/testing, we test at the exact boundaries of input ranges rather than in the middle. For example, if a bank application allows a minimum withdrawal of 100 and a maximum of 25000, in boundary value testing we test just below, at, and just above each boundary (e.g., 99, 100, 101 and 24999, 25000, 25001), since defects cluster at the edges of input ranges.


    Test cases at the boundaries are sufficient to test all conditions for the bank; test cases in the middle of the valid range are duplicates/redundant and do not add any value to the testing. By applying proper boundary value fundamentals, we can avoid such redundant test cases.
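    The bank example above can be sketched directly; `can_withdraw` is a hypothetical implementation of the withdrawal rule:

```python
# Hypothetical withdrawal rule: minimum 100, maximum 25000.
def can_withdraw(amount):
    return 100 <= amount <= 25000

# Boundary value analysis: test at, just below, and just above each boundary.
assert not can_withdraw(99)     # just below lower boundary
assert can_withdraw(100)        # lower boundary
assert can_withdraw(101)        # just above lower boundary
assert can_withdraw(24999)      # just below upper boundary
assert can_withdraw(25000)      # upper boundary
assert not can_withdraw(25001)  # just above upper boundary
```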

    Q 41) What is a test environment?

    Answer: A test environment consists of a server/computer on which a tester runs their tests. It is different from a development machine and tries to represent the actual hardware the software will run on once it is in production.


    Whenever a new build of the software is released, the tester updates the test environment with the latest build and runs the regression tests suite. Once it passes, the tester moves on to testing new functionality.

    Q 42) How would you test the login feature of a web application?

    Answer: There are many ways to test the login feature of a web application:


    1. Sign in with valid credentials, close the browser, reopen it, and check whether you are still logged in.
    2. Sign in, log out, and go back to the login page to verify that you are truly logged out.
    3. Log in, then navigate back to the same page: do you see the login screen again?
    4. Session management is important: check how logged-in users are tracked, whether via cookies or server-side sessions.
    5. Sign in from one browser, then open another browser to see whether you need to sign in again.
    6. Log in, change the password, log out, and then check whether you can still log in with the old password.
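    A few of these checks can be expressed against a minimal in-memory session model. `AuthService` below is entirely hypothetical, a stand-in for a real web application's auth layer:

```python
import secrets

# Toy auth layer: a stand-in for a real application's session handling.
class AuthService:
    def __init__(self):
        self.users = {"alice": "old-pass"}
        self.sessions = {}

    def login(self, user, password):
        if self.users.get(user) == password:
            token = secrets.token_hex(8)   # opaque session token
            self.sessions[token] = user
            return token
        return None

    def logout(self, token):
        self.sessions.pop(token, None)

    def is_logged_in(self, token):
        return token in self.sessions

auth = AuthService()
token = auth.login("alice", "old-pass")
assert auth.is_logged_in(token)

# Logging out must truly invalidate the session (check 2).
auth.logout(token)
assert not auth.is_logged_in(token)

# After a password change, the old password must no longer work (check 6).
auth.users["alice"] = "new-pass"
assert auth.login("alice", "old-pass") is None
assert auth.login("alice", "new-pass") is not None
```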
    Q 43) What are the types of performance testing?

    Answer:


    Performance testing: Performance testing is a technique that measures a system's quality attributes, such as responsiveness, speed under different load conditions, and scalability. It identifies which attributes need to be improved before the product is released to the market.


    The types of performance testing are:


    A. Load testing:


    1. Load testing is a testing technique in which system is tested with an increasing load until it reaches the threshold value.
    2. Load testing is performed to make sure that the system can withstand a heavy load
    3. The main purpose of load testing is to check the response time of the system with an increasing amount of load.
    4. Load testing is a non-functional testing type, meaning that only non-functional requirements are tested.

    B. Stress testing:


    1. Stress testing is a testing technique to check the system when hardware resources are not enough such as CPU, memory, disk space, etc.
    2. In case of stress testing, software is tested when the system is loaded with the number of processes and the hardware resources are less.
    3. The main purpose of stress testing is to check how the system fails and how it recovers from that failure; this ability is known as recoverability.
    4. Stress testing is a non-functional testing type, meaning that only non-functional requirements are tested.

    C. Spike testing:


    1. Spike testing is a subset of load testing. It checks the stability of the application when the load suddenly increases or decreases (spikes).
    2. There are different cases to consider during this testing:
      1. The first case is to cap the number of users so that the system does not suffer a heavy load.
      2. The second case is to show a warning to the extra joiners, which slows down their response time but keeps the system stable.
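    The two cases above can be sketched together: a hypothetical server caps concurrent users (case 1) and warns the extra joiners instead of crashing (case 2) when the load suddenly spikes. The cap and the numbers are illustrative assumptions.

    ```python
    # Spike-testing sketch: the system admits users up to a cap and warns
    # the rest, so a sudden spike degrades service gracefully.

    MAX_USERS = 100

    def admit(active_users, new_joiners):
        admitted = min(new_joiners, MAX_USERS - active_users)
        warned = new_joiners - admitted   # extra joiners get a warning/queue
        return admitted, warned

    # normal load, then a sudden spike of 500 joiners
    for joiners in (10, 20, 500):
        admitted, warned = admit(active_users=50, new_joiners=joiners)
        print(f"{joiners:3d} joiners -> admitted {admitted}, warned {warned}")
    # the spike of 500 admits only 50 and warns the remaining 450
    ```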

    Q 44) What is the difference between the traceability matrix and the test case review process?

    Answer:


    Traceability matrix: In this, we make sure that each requirement has at least one test case.
    Test case review: In this, we check whether all the scenarios are covered for a particular requirement.
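    The traceability-matrix check, that every requirement maps to at least one test case, can be sketched as a simple lookup. The requirement and test-case IDs below are made up for illustration.

    ```python
    # Traceability-matrix sketch: flag any requirement with no test case.

    traceability = {
        "REQ-001": ["TC-01", "TC-02"],
        "REQ-002": ["TC-03"],
        "REQ-003": [],                 # no coverage yet -> should be flagged
    }

    uncovered = [req for req, tests in traceability.items() if not tests]
    print("requirements without test cases:", uncovered)
    # -> requirements without test cases: ['REQ-003']
    ```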
    Q 45) For which types of testing do we write test cases?

    Answer: We can write test cases for the following types of testing:


    Different types of testing and whether we write test cases for them:

    Smoke testing: We do not write separate test cases; we pull out the existing test cases that cover all the critical features.
    Functional/unit testing: Yes, we write test cases for unit testing.
    Integration testing: Yes, we write test cases for integration testing.
    System testing: Yes, we write test cases for system testing.
    Acceptance testing: Yes, but here the customer may write the test cases.
    Compatibility testing: We do not have to write new test cases because the same test cases as above are reused on different platforms.
    Adhoc testing: We do not write test cases because this testing uses random scenarios and ideas; however, if we identify a critical bug, we convert that scenario into a test case.
    Performance testing: We might not write test cases because this testing is performed with the help of performance tools.
    Usability testing: We use a regular checklist instead of test cases because we are only testing the look and feel of the application.
    Accessibility testing: In accessibility testing, we also use a checklist.
    Reliability testing: We do not write manual test cases because we use an automation tool to perform reliability testing.
    Regression testing: Yes, we reuse the test cases written for functional, integration, and system testing.
    Recovery testing: Yes, we write test cases for recovery testing and also check how the product recovers from a crash.
    Security testing: Yes, we write test cases for security testing.
    Globalization testing:
      Localization testing: Yes, we write test cases for L10N testing.
      Internationalization testing: Yes, we write test cases for I18N testing.
    Q 46) When do we stop the testing?

    Answer: We can stop testing in the following situations:


    1. Once the functionality of the application is stable.
    2. When time is short, we test only the necessary features and then stop.
    3. When the client's budget is exhausted.
    4. When the essential feature itself is not working correctly, as there is no point in testing further until it is fixed.
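    The stop criteria above can be sketched as an exit-criteria check. The thresholds here are illustrative assumptions, not fixed industry values.

    ```python
    # Exit-criteria sketch for the stop conditions listed above.

    def can_stop_testing(pass_rate, essential_broken, days_left, budget_left):
        if essential_broken:      # essential feature broken: testing is suspended
            return True
        if pass_rate >= 0.95:     # functionality is stable (assumed threshold)
            return True
        # time or budget exhausted: stop after covering the necessary features
        return days_left <= 0 or budget_left <= 0

    print(can_stop_testing(pass_rate=0.97, essential_broken=False,
                           days_left=5, budget_left=1000))   # stable -> True
    print(can_stop_testing(pass_rate=0.80, essential_broken=False,
                           days_left=5, budget_left=1000))   # keep testing -> False
    ```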
    Q 47) Why does an application have bugs?

    Answer: The software can have a bug for the following reasons:


    1. Software complexity
    2. Programming errors
    3. Miscommunication between the customer and the company about what the application should or should not do.
    4. Modification in requirements
    5. Time pressure.
    Q 48) When do we perform testing?

    Answer: We perform testing whenever we need to check that all requirements are implemented correctly and to make sure that we are delivering the right quality product.

    Q 49) Does the customer get a 100% bug-free product?

    Answer: No. When a customer finds a bug in the delivered product, the following explanations are commonly offered:


    1. The testing team is not good
    2. Developers are super
    3. Product is old
    4. All of the above

    Although "the testing team is not good" looks like the expected answer, the fundamentals of software testing state that no product has zero bugs, so the customer never gets a 100% bug-free product.

    Q 50) How many test cases can we write in a day?

    Answer: We can say anywhere between 2 and 5 test cases per day.


    1. First test case → 1st and 2nd day.
    2. Second test case → 3rd and 4th day.
    3. Fourth test case → 5th day.
    4. 9-10 test cases → 19th day.

    Initially, we write 2-5 test cases a day, but in later stages we write around 6-7 because by then we have better product knowledge, we start reusing test cases, and we have experience with the product.

    Q 51) How do we test a web application? What are the types of tests we perform on the web application?

    Answer: To test any web application such as Yahoo, Gmail, and so on, we will perform the following testing:


    1. Functional testing
    2. Integration testing
    3. System testing
    4. Performance testing
    5. Compatibility testing (test the application on various operating systems, multiple browsers, and different versions)
    6. Usability testing (check whether it is user-friendly)
    7. Ad-hoc testing
    8. Accessibility testing
    9. Smoke testing
    10. Regression testing
    11. Security testing
    12. Globalization testing (only if it is developed in multiple languages)
