Performance Testing Basics

1. What is Performance Testing?

Definition:
Performance testing is a type of non-functional testing that evaluates how a software application performs under a certain workload. It measures speed, responsiveness, stability, and scalability.

Key Goal: Ensure the application meets performance requirements under expected and peak conditions.

Why Important:

  • Identify bottlenecks in the system.
  • Ensure smooth user experience under heavy load.
  • Optimize resource usage (CPU, memory, bandwidth).
  • Prevent crashes, slowdowns, or failures in production.

2. Key Performance Metrics

Metric | Description
--- | ---
Response Time | Time taken to respond to a user request
Throughput | Number of transactions processed per second
Latency | Delay between sending a request and receiving a response
Concurrency | Number of users the system can handle simultaneously
Resource Utilization | CPU, memory, disk, and network usage
Error Rate | Number of failed requests under load
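
To make these metrics concrete, here is a minimal sketch in Python (using the third-party requests library) that fires concurrent requests at a hypothetical endpoint and derives response time, throughput, and error rate from the results. The URL and user count are illustrative assumptions, not a real benchmark setup.

```python
# Minimal load sketch: concurrent GETs against a hypothetical endpoint,
# then derive response time, throughput, and error rate from the results.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://api.example.com/health"  # hypothetical endpoint
CONCURRENT_USERS = 50                   # assumed load level

def hit(url):
    """Send one request; return (response time in seconds, success flag)."""
    start = time.perf_counter()
    try:
        resp = requests.get(url, timeout=5)
        return time.perf_counter() - start, resp.status_code == 200
    except requests.RequestException:
        return time.perf_counter() - start, False

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(hit, [URL] * CONCURRENT_USERS))
elapsed = time.perf_counter() - wall_start

times = sorted(t for t, _ in results)
errors = sum(1 for _, ok in results if not ok)

print(f"avg response time: {sum(times) / len(times):.3f}s")
print(f"p95 response time: {times[int(len(times) * 0.95) - 1]:.3f}s")
print(f"throughput:        {len(results) / elapsed:.1f} requests/s")
print(f"error rate:        {errors / len(results):.1%}")
```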

3. Types of Performance Testing

  1. Load Testing
    • Measures system behavior under expected normal load.
    • Example: 1000 users logging in simultaneously.
  2. Stress Testing
    • Tests the system under extreme load to find its breaking point.
    • Example: 5000 users hitting the server.
  3. Spike Testing
    • Sudden increase or decrease in load to test stability.
    • Example: Traffic suddenly spikes from 100 to 1000 users.
  4. Endurance / Soak Testing
    • Tests system performance over a long, sustained period.
    • Checks for memory leaks or degradation.
  5. Scalability Testing
    • Measures how well the system scales with increased load.
    • Helps in planning server or cloud resources.

4. Performance Testing Process

  1. Requirement Gathering
    • Identify performance goals (response time, throughput, max users).
  2. Test Planning
    • Define scope, environment, tools, and schedule.
  3. Test Environment Setup
    • Prepare servers, network, databases, and monitoring tools.
  4. Test Script Development
    • Create scripts for user actions (login, search, checkout, etc.).
  5. Test Execution
    • Run tests using performance testing tools.
  6. Monitoring & Data Collection
    • Monitor CPU, memory, network, and logs during the test.
  7. Analysis & Reporting
    • Analyze metrics, identify bottlenecks, and suggest improvements.

5. Popular Performance Testing Tools

Tool | Key Features
--- | ---
JMeter | Open-source; supports load, stress, and functional testing
LoadRunner | Enterprise-level tool with detailed monitoring and reporting
Gatling | Open-source; good for high-performance scenarios
BlazeMeter | Cloud-based platform for running JMeter tests at large scale
NeoLoad | Supports web, mobile, and cloud applications

6. Best Practices in Performance Testing

  1. Test early and often – don’t wait for production.
  2. Use realistic test data and scenarios.
  3. Monitor all resources – CPU, memory, disk, network.
  4. Simulate real-world conditions – users, network speed, geographic location.
  5. Document performance benchmarks for future comparison.
  6. Collaborate with developers to fix bottlenecks quickly.
  7. Run tests on a staging environment similar to production.

7. Summary

  • Performance Testing ensures applications are fast, reliable, and scalable.
  • Key types include Load, Stress, Spike, Endurance, and Scalability testing.
  • Metrics like response time, throughput, and resource utilization are crucial.
  • Tools like JMeter, LoadRunner, and Gatling are widely used.
  • Following best practices prevents performance issues in production.

SDLC & STLC

1. SDLC (Software Development Life Cycle)

Definition

The Software Development Life Cycle (SDLC) is a structured process used to develop software. It defines the steps to plan, create, test, and deploy software efficiently and with quality.

Key Goal: Deliver high-quality software that meets user requirements, on time and within budget.


Phases of SDLC

Phase | Description
--- | ---
1. Requirement Gathering & Analysis | Collect and analyze business/user requirements; create the requirement specification document (SRS).
2. Feasibility Study | Check technical, operational, and economic feasibility; decide if the project is viable.
3. Design | Create software architecture and design documents, including UI/UX design, database design, and system design.
4. Development / Implementation | Write code based on the design documents; usually done in iterations in Agile methodologies.
5. Testing | Verify the software's functionality, performance, and security; identify and fix bugs.
6. Deployment | Release the software to the production or client environment.
7. Maintenance | Bug fixing, updates, and adding new features based on user feedback.

SDLC Models

  1. Waterfall Model – Sequential, one phase at a time.
  2. Agile Model – Iterative, incremental development with continuous feedback.
  3. V-Model – Testing activities aligned with development phases.
  4. Spiral Model – Risk-driven iterative approach.
  5. Iterative Model – Develop software in repeated cycles.

Key Points About SDLC

  • Focuses on the development process.
  • Ensures proper planning, design, implementation, and maintenance.
  • Helps reduce cost, risk, and time.

2. STLC (Software Testing Life Cycle)

Definition

The Software Testing Life Cycle (STLC) is a sequence of steps performed to ensure the quality of the software. It focuses specifically on testing activities in a systematic way.

Key Goal: Detect defects early and ensure the software meets requirements and quality standards.


Phases of STLC

Phase | Description
--- | ---
1. Requirement Analysis | Understand functional and non-functional requirements to identify testable features.
2. Test Planning | Define scope, strategy, resources, schedule, and tools for testing.
3. Test Case Design / Test Scenario Creation | Create detailed test cases, scenarios, and test data.
4. Test Environment Setup | Prepare the hardware, software, network, and database needed for testing.
5. Test Execution | Run test cases and report defects if actual results differ from expected results.
6. Defect Reporting & Tracking | Log defects in a tool (like JIRA), track status, and retest after fixes.
7. Test Closure | Prepare test summary reports, evaluate quality, and document lessons learned.

Key Points About STLC

  • Focuses only on testing activities.
  • Ensures quality, reliability, and correctness of software.
  • Helps in finding defects early, reducing cost of fixes.

3. Differences Between SDLC and STLC

Feature | SDLC | STLC
--- | --- | ---
Full Form | Software Development Life Cycle | Software Testing Life Cycle
Focus | Software development process | Software testing process
Purpose | Deliver functional software | Ensure software quality
Phases Covered | Requirement, Design, Coding, Testing, Deployment, Maintenance | Requirement Analysis, Test Planning, Test Design, Execution, Closure
Performed By | Developers, Designers, Business Analysts | Testers / QA team
Outcome | Working software | Tested and defect-free software
Start Point | Requirement gathering | After requirement analysis
End Point | Maintenance of software | Test closure / sign-off

4. How SDLC and STLC Work Together

  • SDLC includes development + testing, while STLC is only about testing.
  • Testing starts in parallel with SDLC phases (especially in V-Model or Agile).
  • Good collaboration between SDLC & STLC ensures high-quality software delivered on time.

5. Summary

  • SDLC: End-to-end software development process.
  • STLC: Systematic testing process within SDLC.
  • Both ensure quality, efficiency, and defect-free software.
  • Testers and developers work together for successful software delivery.


JIRA / Bug Tracking Tools

1. What is a Bug Tracking Tool?

A Bug Tracking Tool is a software application used to report, track, and manage defects or issues in a project. It helps teams organize, prioritize, and resolve bugs efficiently.

Key Features:

  • Log and manage bugs.
  • Track progress and status.
  • Assign bugs to team members.
  • Generate reports and metrics.

Benefits:

  • Ensures no bug is overlooked.
  • Improves communication between testers and developers.
  • Provides historical record of defects.
  • Helps in measuring software quality.

2. What is JIRA?

  • JIRA is one of the most popular bug tracking and project management tools developed by Atlassian.
  • Originally made for bug tracking, now widely used for Agile project management (Scrum/Kanban).

Key Features:

  1. Issue Tracking: Track bugs, tasks, stories, and improvements.
  2. Workflows: Customizable workflows for different types of tasks.
  3. Reporting: Burndown charts, velocity charts, and other Agile reports.
  4. Integration: Connects with tools like Confluence, Bitbucket, GitHub, Slack.
  5. Agile Boards: Scrum and Kanban boards for project management.
  6. Permissions & Roles: Control access based on roles (Admin, Developer, Tester, etc.).

3. JIRA Terminology

Term | Meaning
--- | ---
Issue | Any task, bug, or story tracked in JIRA
Project | Collection of issues grouped together
Epic | Large feature broken into smaller stories/tasks
Story | Requirement or user functionality
Task | Work item or job to be done
Bug | Defect in software
Workflow | Life cycle of an issue (To Do → In Progress → Done)
Sprint | Time-boxed iteration for Agile development

4. How Bug Tracking Works in JIRA

Step 1: Create an Issue

  • Go to the project → Click Create Issue.
  • Choose Issue Type: Bug, Task, Story, Epic.
  • Fill in details:
    • Summary: Short title
    • Description: Detailed steps, expected vs actual results
    • Severity/Priority
    • Assignee
    • Attachments: Screenshots, logs

Step 2: Track Issue Status

  • Default workflow: Open → In Progress → Resolved → Closed
  • Status changes show progress and accountability.

Step 3: Comment & Collaborate

  • Testers and developers can comment for clarification.
  • Updates and changes are logged in the issue history.

Step 4: Resolve and Close

  • Developer fixes the bug → marks as Resolved.
  • Tester retests → marks as Closed if verified.
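
Bug creation can also be scripted against JIRA's REST API rather than the web UI. Below is a minimal sketch using Python's requests library and JIRA's v2 create-issue endpoint; the base URL, credentials, and project key are placeholder assumptions (JIRA Cloud expects an email plus API token for basic auth).

```python
# Sketch: file a bug in JIRA via its REST API (v2 "create issue" endpoint).
# Base URL, project key, and credentials below are placeholders.
import requests

JIRA_BASE = "https://your-domain.atlassian.net"   # hypothetical instance
AUTH = ("tester@example.com", "api-token-here")   # JIRA Cloud: email + API token

payload = {
    "fields": {
        "project": {"key": "QA"},                 # assumed project key
        "issuetype": {"name": "Bug"},
        "summary": "Login button not responding on Android app",
        "description": "Steps: open app, enter valid credentials, tap Login. "
                       "Expected: dashboard. Actual: button is unresponsive.",
        "priority": {"name": "High"},
    }
}

resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()
print("Created issue:", resp.json()["key"])       # e.g. "QA-123"
```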

5. Other Popular Bug Tracking Tools

Tool | Key Features
--- | ---
Bugzilla | Open-source, customizable, widely used in large projects
Redmine | Open-source, supports issue tracking and project management
MantisBT | Web-based, simple, lightweight bug tracker
YouTrack | Agile-friendly, supports workflow automation
Zoho BugTracker | Cloud-based, integrates with other Zoho tools
Trello (with Power-Ups) | Visual Kanban board, simple bug/task tracking

6. Advantages of Using JIRA for Bug Tracking

  1. Centralized Tracking: All issues in one place.
  2. Custom Workflows: Define issue life cycle as per project needs.
  3. Prioritization: Assign severity and priority to manage resources.
  4. Reporting & Analytics: Generate reports for stakeholders.
  5. Integration: Connects with CI/CD, version control, chat, and documentation tools.
  6. Collaboration: Team members can comment, attach files, and update statuses.

7. Best Practices for Bug Tracking in JIRA

  1. Write Clear Summary: Short, descriptive titles.
  2. Provide Steps to Reproduce: Make it easy for developers to reproduce the bug.
  3. Attach Screenshots/Logs: Visual evidence helps fix bugs faster.
  4. Set Severity and Priority: Helps manage critical vs minor bugs.
  5. Use Labels/Components: Organize issues by modules or features.
  6. Regularly Update Status: Keep workflow up to date for visibility.
  7. Close Only After Verification: Tester should retest before closing a bug.

Summary

  • Bug Tracking Tools help organize and resolve software defects efficiently.
  • JIRA is a widely used tool for bug tracking and Agile project management.
  • Testers create issues, track status, and collaborate with developers.
  • Proper bug tracking improves software quality, transparency, and team productivity.


Test Case & Bug Reporting

1. Test Case

Definition

A test case is a set of conditions, steps, and expected results used to verify that a software feature works as intended. Test cases help testers systematically check functionality and ensure quality.

Key points:

  • Each test case should have a unique ID.
  • Should be clear, concise, and reproducible.
  • Can be manual or automated.

Components of a Test Case

Component | Description
--- | ---
Test Case ID | Unique identifier (e.g., TC001)
Test Scenario | High-level description of what to test
Preconditions | Conditions that must be met before execution
Test Steps | Step-by-step actions to perform
Test Data | Input data required for testing
Expected Result | What the system should do
Actual Result | What the system actually does (filled after testing)
Status | Pass / Fail / Blocked
Remarks / Notes | Any additional info

Example of a Test Case

Scenario: Test login functionality of a mobile app

Field | Value
--- | ---
Test Case ID | TC001
Test Scenario | Verify user can log in with valid credentials
Preconditions | App is installed; user has a valid account
Test Steps | 1. Open app, 2. Enter username and password, 3. Click the Login button
Test Data | Username: testuser; Password: Test@123
Expected Result | User logs in successfully and sees the dashboard
Actual Result | (To be filled during execution)
Status | Pass / Fail
Remarks | N/A
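
For teams that automate, the same test case translates almost line-for-line into code. A pytest-style sketch follows; the login() helper is a hypothetical stand-in for the app's real driver or API client, stubbed here so the sketch runs.

```python
# TC001 as an automated pytest check. `login` is a hypothetical helper
# standing in for the app's real driver or API client.

def login(username: str, password: str) -> dict:
    """Stub for the real login flow; replace with app integration."""
    if username == "testuser" and password == "Test@123":
        return {"page": "dashboard"}
    return {"page": "login", "error": "invalid credentials"}

def test_tc001_login_with_valid_credentials():
    result = login("testuser", "Test@123")   # test data from TC001
    assert result["page"] == "dashboard"     # expected result

def test_login_with_invalid_password():
    result = login("testuser", "wrong")      # negative scenario
    assert result["page"] == "login"
```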

Best Practices for Test Cases

  1. Keep test cases simple and clear.
  2. Cover positive and negative scenarios.
  3. Use unique IDs for tracking.
  4. Include preconditions and test data.
  5. Make them reusable for regression testing.

2. Bug Reporting

Definition

A bug (or defect) is an error, flaw, or unexpected behavior in the software that prevents it from working as intended.

Bug reporting is the process of documenting the issue so that developers can reproduce and fix it.


Components of a Bug Report

Component | Description
--- | ---
Bug ID | Unique identifier (e.g., BUG001)
Summary / Title | Short description of the bug
Description | Detailed explanation of the issue
Steps to Reproduce | Step-by-step instructions to replicate the bug
Test Data | Input data used when the bug occurred
Environment | OS, browser, device, app version
Severity | Impact on the system (Critical, Major, Minor)
Priority | Urgency to fix (High, Medium, Low)
Expected Result | What should happen
Actual Result | What actually happened
Status | Open, In Progress, Fixed, Closed
Screenshots / Attachments | Evidence of the bug

Example of a Bug Report

Title: Login button not responding on Android app

Field | Value
--- | ---
Bug ID | BUG001
Description | When clicking the login button, nothing happens and the user cannot log in.
Steps to Reproduce | 1. Open app, 2. Enter valid username and password, 3. Click the Login button
Test Data | Username: testuser; Password: Test@123
Environment | Android 12, app version 2.3.1
Severity | Critical
Priority | High
Expected Result | User should be logged in and redirected to the dashboard
Actual Result | Nothing happens; the button is unresponsive
Status | Open
Screenshots | [Attach screenshot of unresponsive login button]

Best Practices for Bug Reporting

  1. Be clear and concise in description.
  2. Include steps to reproduce carefully.
  3. Specify environment details.
  4. Attach screenshots or logs for evidence.
  5. Use severity and priority to help developers prioritize fixes.
  6. Avoid ambiguous terms like “sometimes” or “it doesn’t work”.
  7. Update status as bug progresses (Open → In Progress → Fixed → Closed).

3. Difference Between Test Case and Bug

Feature | Test Case | Bug Report
--- | --- | ---
Purpose | To verify software works correctly | To document defects in the software
Created By | Tester | Tester (or QA)
When Used | Before execution | After finding an issue during testing
Content | Steps, data, expected results | Issue description, steps to reproduce, actual vs expected results
Outcome | Pass/Fail | Open/Fixed/Closed

Summary

  • Test Case: Planned steps to check if software behaves as expected.
  • Bug Report: Documented issue when software fails.
  • Test cases ensure coverage, bug reports ensure issues get fixed.
  • Both are essential parts of software quality assurance.


API Testing with Postman

1. What is API Testing?

  • API (Application Programming Interface):
    A set of rules that allows different software applications to communicate with each other.
  • API Testing:
    Checking if APIs work as expected. Unlike UI testing, API testing focuses on the logic, functionality, reliability, and performance of the API endpoints.

Key points:

  • No GUI needed.
  • Faster and more stable than UI testing.
  • Validates responses, status codes, headers, and data integrity.

2. Why Use Postman for API Testing?

Postman is a popular tool for testing APIs because it allows you to:

  • Send HTTP requests (GET, POST, PUT, DELETE, PATCH).
  • Receive and validate responses.
  • Automate tests with scripting.
  • Organize tests into collections and environments.
  • Generate documentation automatically.

3. Types of API Tests in Postman

  1. Functional Testing – Verify API performs its function correctly.
    • Example: Check if /login returns a valid token.
  2. Integration Testing – Test interaction between multiple APIs or services.
    • Example: /create-user and /get-user endpoints working together.
  3. Regression Testing – Ensure API changes do not break existing functionality.
  4. Load/Performance Testing – Check API performance under high traffic (Postman's Collection Runner and Newman can help drive this).
  5. Security Testing – Verify authentication, authorization, and data encryption.

4. HTTP Methods Commonly Tested

Method | Purpose
--- | ---
GET | Retrieve data from the server
POST | Send data to create a new resource
PUT | Update an existing resource
PATCH | Partially update a resource
DELETE | Remove a resource
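
These methods behave the same from any HTTP client, not just Postman. A quick Python sketch using the requests library against a hypothetical API (endpoint paths and payloads are assumptions):

```python
# Sketch: the common HTTP methods via Python's requests library.
import requests

BASE = "https://api.example.com"                                 # hypothetical API

r = requests.get(f"{BASE}/users/1")                              # GET: retrieve
r = requests.post(f"{BASE}/users", json={"name": "Asha"})        # POST: create
r = requests.put(f"{BASE}/users/1", json={"name": "Asha Rao"})   # PUT: full update
r = requests.patch(f"{BASE}/users/1", json={"name": "A. Rao"})   # PATCH: partial update
r = requests.delete(f"{BASE}/users/1")                           # DELETE: remove
print(r.status_code)
```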

5. Postman Interface Basics

  1. Request Tab: Send requests and see responses.
  2. Collections: Group related API requests.
  3. Environments: Set variables for different environments (dev, staging, production).
  4. Tests Tab: Write scripts to validate API responses.
  5. Pre-request Scripts: Run code before a request is sent (e.g., to generate tokens).

6. Steps to Test an API in Postman

Step 1: Create a Request

  • Choose HTTP method (GET/POST/PUT/DELETE).
  • Enter API endpoint URL.
  • Add headers (e.g., Content-Type: application/json).
  • Add body data (for POST/PUT/PATCH requests).

Step 2: Send Request

  • Click Send.
  • View the response: status code, headers, body, and response time.

Step 3: Validate Response

  • Check HTTP status codes:
    • 200 OK – Success
    • 201 Created – Resource created
    • 400 Bad Request – Client error
    • 401 Unauthorized – Invalid authentication
    • 404 Not Found – Resource missing
    • 500 Internal Server Error – Server error
  • Validate response body (JSON/XML).

Step 4: Write Tests in Postman

Postman allows JavaScript-based tests. Example:

```javascript
// Status code check
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

// Response time check
pm.test("Response time is less than 500ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});

// Response body check
pm.test("Response has userId", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.userId).to.eql(1);
});
```

Step 5: Use Variables & Environments

  • Store base URL, tokens, or dynamic values as variables.
  • Use {{variableName}} in requests for easier maintenance.

Step 6: Organize Requests in Collections

  • Group similar requests.
  • Run multiple tests using Collection Runner.
  • Automate with Newman CLI to run collections from terminal or CI/CD.

7. Postman Automation Features

  1. Collection Runner – Run multiple requests sequentially.
  2. Tests & Scripts – Write assertions for responses.
  3. Pre-request Scripts – Generate dynamic values like timestamps, tokens.
  4. Monitors – Schedule API tests to run periodically.
  5. Newman – CLI tool for running Postman collections in pipelines (CI/CD).

8. Best Practices for API Testing in Postman

  1. Validate status codes, headers, and response body.
  2. Test with valid and invalid inputs.
  3. Use environments and variables to avoid hardcoding.
  4. Organize requests in collections.
  5. Include pre-request scripts for dynamic data like tokens.
  6. Automate tests with Collection Runner/Newman.
  7. Test for performance and security where possible.

9. Summary

  • API testing ensures endpoints work correctly without relying on UI.
  • Postman is a widely used tool for sending requests, validating responses, and automating tests.
  • Key steps: Create request → Send → Validate → Automate → Organize.
  • Postman supports functional, integration, regression, and performance testing.


Automation Testing (Selenium) — Detailed Breakdown

1. Introduction to Automation Testing

  • What is automation testing?
  • Difference between manual and automation testing
  • Benefits: speed, reliability, reusability, CI/CD integration
  • When to automate & when not to

2. Overview of Selenium

  • What is Selenium?
  • Selenium Suite Components:
    • Selenium WebDriver
    • Selenium IDE
    • Selenium Grid
  • Advantages & limitations of Selenium

3. Selenium WebDriver Basics

  • WebDriver architecture
  • Launching browsers (Chrome, Firefox, Edge, Safari)
  • Locating elements:
    • ID, Name, ClassName, TagName, LinkText, PartialLinkText, CSS Selector, XPath
  • Types of waits:
    • Implicit Wait
    • Explicit Wait
    • Fluent Wait
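
A minimal Python sketch pulling these basics together: launching Chrome, locating elements, and combining an implicit wait with an explicit wait. The URL and locators are illustrative assumptions.

```python
# Sketch: launch a browser, locate elements, and wait explicitly.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.implicitly_wait(5)                      # implicit wait (seconds)
driver.get("https://example.com/login")        # hypothetical page

# Explicit wait: block up to 10s until the username field is visible
username = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "username"))
)
username.send_keys("testuser")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

driver.quit()
```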

4. Web Elements & User Interactions

  • Click, sendKeys, clear
  • Handling dropdowns (Select class)
  • Handling checkboxes & radio buttons
  • Actions class:
    • Mouse hover
    • Drag and drop
    • Right-click, double-click
  • Keyboard actions
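
A short Python sketch of these interactions, again with assumed page and locators:

```python
# Sketch: dropdowns via Select, checkboxes, and mouse actions via ActionChains.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import Select

driver = webdriver.Chrome()
driver.get("https://example.com/form")                 # hypothetical page

# Dropdown handling with the Select class
Select(driver.find_element(By.ID, "country")).select_by_visible_text("India")

# Checkbox: click only if not already selected
terms = driver.find_element(By.ID, "accept-terms")
if not terms.is_selected():
    terms.click()

# Mouse hover, then double-click, via the Actions class
menu = driver.find_element(By.ID, "menu")
ActionChains(driver).move_to_element(menu).perform()
ActionChains(driver).double_click(menu).perform()

driver.quit()
```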

5. Working with Web Page Components

  • Alerts (accept, dismiss, prompt)
  • Frames & iFrames
  • Windows & Tabs handling
  • File upload & download
  • Handling dynamic elements
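
A sketch of the common switch_to patterns for alerts, frames, and windows (page and element names are assumptions):

```python
# Sketch: alerts, frames, and window/tab handling.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/popups")        # hypothetical page

# Alert: switch to it, read the text, accept
driver.find_element(By.ID, "show-alert").click()
alert = driver.switch_to.alert
print(alert.text)
alert.accept()

# Frame: switch in by name/id, interact, switch back out
driver.switch_to.frame("payment-frame")
driver.find_element(By.ID, "card-number").send_keys("4111111111111111")
driver.switch_to.default_content()

# New window/tab: switch by window handle
original = driver.current_window_handle
driver.find_element(By.LINK_TEXT, "Open help").click()
for handle in driver.window_handles:
    if handle != original:
        driver.switch_to.window(handle)
driver.close()                                  # close the new tab
driver.switch_to.window(original)

driver.quit()
```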

6. Selenium with Java / Python

  • Writing your first Selenium script
  • Project setup using Maven / PIP
  • Creating reusable functions
  • Exception handling
  • Logging with Log4j

7. TestNG / PyTest Framework

  • TestNG basics:
    • @Test, @BeforeMethod, @AfterMethod
    • Groups, Priority, Dependency
  • Assertions
  • Test suite XML
  • Reporting
  • PyTest (for Python):
    • Fixtures, markers, plugins
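
On the Python side, a minimal pytest sketch with a driver fixture and a marker; the page under test is an assumption:

```python
# Sketch: pytest fixture that provides a WebDriver to each test.
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv            # hand the browser to the test
    drv.quit()           # teardown runs even if the test fails

@pytest.mark.smoke       # custom marker; register "smoke" in pytest.ini
def test_homepage_title(driver):
    driver.get("https://example.com")          # hypothetical app
    assert "Example" in driver.title
```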

8. Page Object Model (POM)

  • What is POM?
  • Benefits of POM
  • PageFactory & @FindBy
  • Implementing modular, scalable framework
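
A compact Page Object sketch in Python (using plain locator tuples rather than Java's PageFactory/@FindBy); the page URL and locators are assumptions:

```python
# Sketch: a Page Object for a login page. Locators live in one place,
# so tests call readable methods instead of raw find_element calls.
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "https://example.com/login"          # hypothetical page
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username: str, password: str):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# Usage in a test:
#   LoginPage(driver).open().login("testuser", "Test@123")
```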

9. Data-Driven Testing

  • Reading data from:
    • Excel
    • CSV
    • JSON
    • Database
  • Apache POI for Excel
  • Parameterization in TestNG
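
A data-driven sketch using pytest's parametrize; the try_login helper is a hypothetical stand-in, and the same rows could come from Excel, CSV, JSON, or a database:

```python
# Sketch: data-driven login test. The rows could equally be read from
# Excel (Apache POI in Java, openpyxl in Python), CSV, JSON, or a database.
from types import SimpleNamespace

import pytest

def try_login(username, password):
    """Hypothetical stand-in for the real login call."""
    ok = username == "testuser" and password == "Test@123"
    return SimpleNamespace(ok=ok)

@pytest.mark.parametrize("username,password,should_succeed", [
    ("testuser", "Test@123", True),     # valid credentials
    ("testuser", "wrong-pass", False),  # wrong password
    ("", "", False),                    # empty input
])
def test_login(username, password, should_succeed):
    assert try_login(username, password).ok == should_succeed
```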

10. Behavior-Driven Testing (BDD)

  • Introduction to Cucumber
  • Gherkin language — Given/When/Then
  • Creating feature files
  • Step definitions
  • Runner classes

11. Selenium Grid

  • What is Selenium Grid?
  • Node & Hub architecture
  • Running tests in parallel
  • Cross-browser testing
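
Pointing tests at a Grid instead of a local browser is a small change in Python; the hub URL below assumes a Grid running locally:

```python
# Sketch: run against a Selenium Grid hub instead of a local browser.
from selenium import webdriver

options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # hub address (assumption)
    options=options,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()
```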

12. CI/CD Integration

  • Running Selenium with:
    • Jenkins
    • GitHub Actions
    • GitLab CI
  • Automated job scheduling

13. Reporting Tools

  • Extent Reports
  • Allure Reports
  • TestNG default reports

14. Advanced Selenium Concepts

  • Handling Shadow DOM
  • Working with WebDriverManager
  • Headless browser automation
  • Capturing screenshots & logs
  • Smart waits & synchronization strategies
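
A headless run with a screenshot capture, sketched in Python:

```python
# Sketch: headless Chrome with a screenshot on completion.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")      # modern headless mode flag
driver = webdriver.Chrome(options=options)

driver.get("https://example.com")
driver.save_screenshot("homepage.png")      # capture evidence for reports
driver.quit()
```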

15. Common Interview Questions

  • Framework design questions
  • XPath vs CSS selector
  • StaleElementReferenceException solutions
  • Page load strategy
  • Test flakiness handling


🧪 Manual Testing – Overview

Manual Testing is a software testing process where test cases are executed manually without using automation tools.
The tester plays the role of an end-user and verifies the application to ensure it works as expected.

Manual testing helps identify:

  • Functional issues
  • UI problems
  • User experience bugs
  • Logical errors
  • Performance glitches (basic)

🎯 Why is Manual Testing Important?

Because:

  • It helps catch bugs early
  • It ensures the application meets user requirements
  • Some tests cannot be automated (UI, usability)
  • It provides human insight into usability and user experience

🔑 1. Software Testing Basics

What is Testing?

Testing is the process of evaluating software to detect differences between expected and actual behavior.

Goal of Testing

  • Improve product quality
  • Ensure reliability
  • Verify functionality
  • Prevent defects

🧩 2. Types of Manual Testing

1️⃣ Functional Testing

Tests the features and actions of software.

  • Smoke Testing
  • Sanity Testing
  • Integration Testing
  • System Testing
  • Regression Testing
  • User Acceptance Testing (UAT)

2️⃣ Non-Functional Testing

Checks performance, usability, and reliability.

  • Usability Testing
  • Performance (basic manual checks)
  • Compatibility Testing
  • Accessibility Testing

3️⃣ White Box vs. Black Box Testing

Black Box Testing

  • Tester does not know internal code
  • Focus on input/output
  • Example: Login verification

White Box Testing

  • Tester knows internal code
  • Validates logic, loops, conditions

Grey Box Testing

  • Partial internal knowledge

📝 3. Test Artifacts in Manual Testing

These are documents created during the testing process:

  • Test Plan – Testing strategy
  • Test Scenario – High-level testing idea
  • Test Case – Step-by-step test steps
  • Test Data – Values to test with
  • Test Report – Summary of testing results
  • Bug Report – Defect description for developers

🔍 4. Test Case Structure

A typical test case includes:

  • Test Case ID
  • Test Title
  • Pre-conditions
  • Test Steps
  • Expected Result
  • Actual Result
  • Status (Pass/Fail)

🐞 5. Bug Life Cycle (Defect Life Cycle)

  1. New
  2. Assigned
  3. Open
  4. Fixed
  5. Retested
  6. Closed
  7. Reopened (if still failing)

🛠 6. Key Techniques in Manual Testing

  • Equivalence Partitioning
  • Boundary Value Analysis (BVA)
  • Error Guessing
  • Exploratory Testing
  • Ad Hoc Testing
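
As a tiny illustration of Boundary Value Analysis, suppose a field accepts ages 18–60: the values worth testing cluster at the edges. The validate_age helper below is a hypothetical stand-in for the application's real rule.

```python
# Sketch: Boundary Value Analysis for an age field that accepts 18-60.
import pytest

def validate_age(age: int) -> bool:
    """Hypothetical validator standing in for the app's real rule."""
    return 18 <= age <= 60

# BVA picks values at and just around each boundary
@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True), (19, True),   # lower boundary
    (59, True), (60, True), (61, False),   # upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected
```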

⚙️ 7. Manual Testing vs Automation Testing

Feature | Manual Testing | Automation Testing
--- | --- | ---
Execution | Manual | Tool-based
Time | Slower | Faster
Cost | Low setup cost | High initial cost
Best For | UI, usability, exploratory | Repeated, regression, large data
Human Insight | High | Low

💼 8. Skills Required for Manual Testing

  • Understanding SDLC & STLC
  • Writing test cases
  • Bug reporting
  • Logical thinking
  • Communication skills
  • Basic SQL
  • Basic knowledge of UI/UX

🔄 9. Where is Manual Testing Used Today?

Even with automation, manual testing is essential in:

  • Mobile apps
  • Web applications
  • Game testing
  • Usability testing
  • Exploratory testing
  • Initial development stages

🏁 10. Roles Related to Manual Testing

  • QA Tester
  • Test Engineer
  • QA Analyst
  • UAT Tester
  • Functional Tester