Documentation

Looking for something in particular?

Quality Hub Setup and User Guide

Introduction to Provar Quality Hub

Provar Quality Hub (formerly known as Provar Manager) is a comprehensive software solution designed to streamline the entire software testing process, from planning and design to test execution and analysis. The software includes four modules: Release Management, Test Management, Test Operations, and User Experience Hub. Each has been meticulously crafted to help you manage your testing process more efficiently and effectively.

In this user guide, we will provide a step-by-step overview of how to set up and use the product for optimal performance. Whether you are new to Quality Hub or looking to take your existing testing process to the next level, this comprehensive guide is here to assist you in achieving your testing goals. In it, you will find information on how to install, configure, and utilize the features available in each module to make your software testing experience seamless and efficient.

Target Audience

This guide is intended for IT administrators tasked with setting up the application, testers, developers, and QA managers who will use it to organize their work, and DevOps engineers who will embed quality into their automated pipelines.

How to set up Quality Hub

Installation

What type of Salesforce environment to use

Before installing Quality Hub, the first decision is to choose what Salesforce org to use.

There are three main options:

  • Trial Org: This is a new separate Salesforce org created to trial Quality Hub, which comes pre-installed. They are usually available for a limited period (e.g., 90 days), after which you can upgrade it to a fully functioning org. Choose this option if you don’t already have a Salesforce org, do not need to integrate Quality Hub with any other Salesforce-native application, or do not want the trialing exercise to disturb the ongoing delivery activities in your existing Salesforce environments.
  • Sandbox Org: This is a copy of the Production environment, usually created for testing and development purposes. Choose this option if you want to trial Quality Hub for an indefinite period or want to understand how it could integrate and work in the context of your existing Salesforce implementation. Select “Try It Now” from the AppExchange listing to install in Sandbox Orgs.
  • Production Org: This is a fully functional version of your Salesforce organization’s implementation, where all customizations, configurations, data, and settings are deployed for use by end-users. Quality Hub can only be trialed for a limited period (e.g., 30 days), so only choose this option if you are certain that you want to use Quality Hub or if it’s the only way of proving that it can integrate with other Salesforce-native apps which cannot be set up in Sandbox environments.

Installing from the Success Portal

Customers can access Provar Quality Hub using the Success Portal Releases page.

  1. Click the version for Provar Quality Hub that you’d like to install.
  2. Click Release Downloads > Download Name.
  3. Here you’ll see links to download in either a Production/Developer edition or Sandbox/Scratch edition org:
  4. Use the https://login.salesforce.com/** link for Production/Developer editions and the https://test.salesforce.com/** link for Sandbox/Scratch editions.
  5. Select Install for Admins Only. This option is recommended because you can assign permission sets and licenses at a later time.
  6. Approve the Third-Party Access on the next page to allow Provar Quality Hub to connect to our execution platform.
  7. Once installation is complete, you should receive an email confirmation message.

Note: You can later disable the Remote Site Setting and CSP Trusted Site for any sites you are not using. No information is sent unless the integration has been set up.

Accessing from the AppExchange

If you are not currently a Provar customer or do not have access to the Success Portal, then you can navigate to the AppExchange listing, where you can initiate a Trialforce Org or reach out to our team for assistance.

  1. If you’d like a trial, select Try It. Alternatively, click Get It Now to be redirected to our Contact page, where you can get started with help from our dedicated team.
  2. (Try It only) Choose your trial type. A Trialforce Org is a newly provisioned, fresh org with the app pre-installed. It contains some sample data but no setup.
  3. If you select Trialforce Org, you are required to enter your email, and you will receive an invitation to a new org to continue setting up your user in approximately 10–15 minutes.

Upgrading from the Plugin Marketplace

The easiest way to upgrade Quality Hub is through the Plugin Marketplace page, which can be found in the Plugins section of the Provar Quality Hub Setup app.

Select the Provar Quality Hub app and click “Install” to start the upgrade process.

Configuration

Licensing

Every Quality Hub user must have a Quality Hub license assigned to them. To manage licenses in Salesforce, click Setup > Apps > Packaging > Installed Packages.

Under Package Name, you can see an entry for Quality Hub. Click Manage Licenses to add/remove licenses from users.

Permissions

Quality Hub includes a variety of permission sets and permission set groups that can be assigned to users depending on the way they may need to use the application.

Alternative permission sets with the “(Limited Access)” suffix are available for those users that do not need “View/Modify All” permissions.

Follow the Managing Permission Set Assignments article instructions to assign the permission set to your licensed users.

Permission Set Groups

  • Provar Quality Hub Admin: Provides complete access to Quality Hub.
  • QAOps: Provides complete access to Provar’s Test Management and Operations capabilities.
  • Release and Test Management: Provides complete access to Provar’s Release and Test Management capabilities.

Permission Sets

  • Provar Quality Hub – Release Management: Provides complete access to the Provar Release Manager app, projects, releases, sprints, and issues.
  • Provar Quality Hub – Test Management: Provides complete access to the Provar Test Management app, test projects and plans, test suites and cases, defects, test cycles, and executions.
  • Provar Quality Hub – Release and Test Management Link: Provides complete access to link Test Cases to Issues.
  • Provar Quality Hub – Test Operations: Provides complete access to the Provar Test Operations app, environments, systems, VCS, repositories, and schedules.
  • Provar Quality Hub – Settings: Provides full access to the Quality Hub Settings app and People.
  • Provar Quality Hub – API: Allows access to read and create API Requests.
  • Provar Quality Hub Reporting: Allows full access to the Provar Quality Hub app along with Quality Hub Dashboards.

Setup

Some settings and features can only be configured in the Provar Quality Hub Setup page. Read below for a description of each one.

NOTE: To manage Quality Hub’s settings, Salesforce connections, and VCS connections, make sure that the user has the Customize Application and the Modify Metadata Through Metadata API Functions permissions. These can be assigned using a permission set or directly on the user’s profile.

Connections

This tab facilitates the task of connecting Quality Hub to other systems.

Most importantly, it helps you connect Quality Hub to the Salesforce org where it’s installed so that Quality Hub can automatically perform tasks like connecting to other external services for integration purposes or determining which Quality Hub plugins can be updated.

To do this, please follow the steps outlined in the section titled “Provide Access to Salesforce Metadata” that appears at the top of the tab. Once completed (you may need to refresh the page), you should see this screen:

Quality Hub AI Settings

If you want to leverage your OpenAI account to use Quality Hub’s AI capabilities (e.g., test case generation, summaries, or root cause analysis report generation), enter your OpenAI API Key and click Save Settings.

Additionally, Provar Test Case Generation can be enabled directly from this tab starting in version 3.24.0 of Quality Hub.

Quality Hub API Settings

Even though external applications can leverage Salesforce’s API to interact with Quality Hub’s data, Quality Hub comes with a custom-built API to provide easier and smoother integrations.

By enabling the Quality Hub API, external systems can send requests, such as running Apex unit tests on the SIT sandbox or running Provar tests on the QA environment using Chrome and Edge. These requests are then picked up on an ongoing basis by a background process.

Subsequently, external systems can use the Quality Hub API to inquire about the status of their requests.

The API provides three authentication methods:

  • None – No authentication is required. Use this only if external systems cannot use any other method.
  • API Key – The most straightforward method uses an API Key generated by Quality Hub instead of user authentication.
  • OAuth 2.0 – This is the recommended option because of its superior security, but it requires external systems to perform more complex authentication flows.

See the section How to integrate with Quality Hub > Quality Hub API for more information on how to set it up.
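As a rough illustration of the API Key method, the sketch below assembles an authenticated request. The endpoint path, the x-api-key header name, and the payload fields are illustrative assumptions, not the documented Quality Hub API contract; see the Quality Hub API section for the real details.

```python
import json

# Hypothetical sketch only: the endpoint path, header name, and payload
# fields below are assumptions for illustration, not the actual Quality Hub API.
def build_api_request(instance_url, api_key, action, parameters):
    """Assemble the pieces of an API Key-authenticated Quality Hub API request."""
    return {
        "url": f"{instance_url}/services/apexrest/qualityhub/requests",  # assumed path
        "headers": {
            "x-api-key": api_key,  # assumed header for the API Key method
            "Content-Type": "application/json",
        },
        "body": json.dumps({"action": action, "parameters": parameters}),
    }

req = build_api_request(
    "https://mydomain.my.salesforce.com",
    "QH-EXAMPLE-KEY",
    "run_provar_tests",
    {"environment": "QA", "browsers": ["Chrome", "Edge"]},
)
print(req["url"])
```

An external system would send this with any HTTP client and then poll the API for the request’s status, as described above.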

Provar Grid Settings

If you want to use Provar Grid, Provar’s scalable cloud test execution platform, please enter the credentials you have been given. 

  • Provar Grid Username: This is your Grid username as provided by your Customer Success Manager when you onboarded with Provar.
  • Provar Grid Access Token: This is your Grid personal access token that you should also have received from your CSM during onboarding if you purchased Grid as part of your Provar package. 
  • Provar Automation License Key: In order to run Provar Grid executions remotely, you must also configure your Provar Automation Execution Licenses here. Please provide your Automation Execution License Key, rather than your Automation User License Key.

Once you’ve filled in all of the fields, click Test Connection to validate connectivity to Provar Grid servers. You should see the below Success toast message.

You may need to enable the Provar_Cloud Remote Site Setting in your Setup > Remote Site Settings in Salesforce first.

Deletion Policy Settings

If you want to limit the amount of data stored in Quality Hub to make sure your data storage stays under the limit, you can configure these data deletion policies and click Save Settings:

  • Delete Test Cycles Older Than: Set a number if you want Quality Hub to automatically delete test cycles (along with their test executions and test step executions) older than the set number of days.
  • Delete Test Step Executions Older Than: Set a number if you want Quality Hub to automatically delete test step executions (along with their attached files) older than the set number of days.
  • Delete Coverage Reports Older Than: Set a number if you want Quality Hub to automatically delete coverage reports older than the set number of days.
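To illustrate how a retention value translates into a cutoff date (Quality Hub performs this internally; the policy values below are examples, not defaults):

```python
from datetime import date, timedelta

# Illustrative sketch: records created strictly before the cutoff date
# fall under the deletion policy. The retention values are example numbers.
def deletion_cutoff(today, older_than_days):
    return today - timedelta(days=older_than_days)

policies = {
    "Test Cycles": 180,
    "Test Step Executions": 90,
    "Coverage Reports": 365,
}
for name, days in policies.items():
    print(name, deletion_cutoff(date(2024, 7, 1), days))
```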

Quality Hub Dashboard Metrics Processing

The Provar Quality Hub Dashboard page handles loading and presenting potentially 100k+ records at one time. Due to the heavy processing involved and Salesforce’s governor limits, there is a required back-end processing job that can be managed in this tab.

  • Cache Status: The platform cache partition name (searchable in Setup > Platform Cache), status (stale/empty or valid), and latest timestamp. The Platform Cache is org-wide and will apply to all of your users. Provar Quality Hub comes with 3 MB of managed package cache by default and will not impact the existing org’s cache capacity.
  • Manual Calculation: The option to manually process all Quality Hub Dashboard metrics for a given period of time, e.g., 12 months back.
  • Scheduled Jobs: For simplicity and convenience, the user can schedule nightly or weekly metrics processing jobs to happen on a recurring basis. The timings are based on your Org’s timezone settings.
  • Recent Batch Jobs: The recent history of metrics processing batch jobs that have run, their status, ID, and creation/completion dates.

Manually calculating the Quality Hub Dashboard metrics in this tab is not required: jobs run automatically and metrics are processed whenever the Quality Hub Dashboard page is loaded. However, if you’d like to schedule the processing to reduce page load times, you have that capability.

Plugins

Quality Hub acts as an Integration Hub, connecting to other essential tools used during the software release lifecycle, such as release management apps, DevOps tools, code quality scanning apps, or test automation tools.

Use the Plugins Marketplace to browse the available plugins by category, find more information about them, and install them in your org.

For more information on a specific plugin or integration, please refer to the respective documentation for that plugin. You can find these on Quality Hub | Provar Documentation > Plugins.

Quality Hub Dashboard Configuration

The Quality Hub Dashboard comes with many different metrics and KPIs that can be configured from within the Quality Hub Configuration tab.

You can find the full Quality Hub Dashboard Configuration documentation here.

How to use Quality Hub

This section will explain what Quality Hub is, what it can do, why it is valuable, and how to take advantage of its main features.

Overview of a Software Testing Lifecycle

Testing is an essential aspect of any software development lifecycle, involving various activities that can also be defined as a separate lifecycle running sequentially (i.e., Waterfall) or in parallel (i.e., Agile). Depending on the chosen approach to software development (e.g., Waterfall, Agile, etc.), these testing activities will occur at different times, with more or less importance and varying levels of detail.

Regardless of this, the activities themselves can be categorized as follows:

  • Requirement Analysis – The QA team tries to understand the software requirements in detail, often communicating with various stakeholders, and starts considering what sort of tests should be done and in what way (e.g., manual vs automated).
  • Test Planning – The QA team determines the strategy, resources, limitations, efforts, and costs involved in the testing endeavor.
  • Test Case Design – The QA team creates, verifies, and improves test cases and test scripts.
  • Test Environment Setup – The QA team decides the software and hardware conditions under which the applications will be tested and performs readiness checks of those environments.
  • Test Execution – The QA team tests based on the designed test cases and according to the test plans. These activities include the execution of test scripts, their maintenance, and the reporting of bugs.
  • Test Result Analytics – The results of the previous testing activities are collected so that the QA team can meet, discuss, and analyze the data to improve future executions.

To learn more about this, please check out the Quality Hub: Getting Started course at the University of Provar.

How the app is structured

The Salesforce Platform

The Salesforce platform offers several advantages to Quality Hub users, including:

  • Scalability: With Salesforce, you can quickly scale Quality Hub as needed without worrying about expensive hardware upgrades or complicated infrastructure management.
  • Customization: Salesforce provides a range of customization options so you can tailor Quality Hub to your business needs. This includes all of the packaged reports and dashboards.
  • Security: Salesforce takes security seriously and has a robust infrastructure with regular updates to protect your data from potential threats. Permission Sets within Salesforce also help to ensure that users who need access to Quality Hub do not have to be granted access to the wider application.
  • Integration: Salesforce can easily connect with other systems and tools, making it an excellent choice for businesses looking to streamline their operations.
  • Collaboration: Salesforce’s Chatter makes it easy for teams to work together, regardless of location or time zone.
  • Data analytics: With Salesforce, you can quickly analyze large amounts of data and gain valuable insights into your testing operations. 

Quality Hub Apps

Quality Hub consists of different apps accessible from the App Launcher that teams can use depending on their needs.

Provar Quality Hub

The Provar Quality Hub app is the home base for all things reporting and analytics. This is where users can access the Quality Hub Dashboards (executive and QA views), configure KPIs, and view all of the customized dashboards.

You can also create and manage people, risks, and defects in this application.

Provar Quality Hub Setup

The Provar Quality Hub Setup app is used for upgrading Quality Hub, installing plugins, and managing org-wide settings and configuration for Quality Hub.

Provar Release Management

The Provar Release Management app helps to align the project and release management artifacts with testing operations.

Teams can create projects, epics, and user stories to define the work needed and organize it into releases and sprints to track their progress. These can all be linked and related to test projects and test cases accordingly.

This app is primarily used to track imported and synced items from tools like JIRA and Azure DevOps.

Provar Test Management

The Provar Test Management app is designed to help your teams plan, execute, track, and report on their software testing activities.

Teams can create test projects, write test plans, create test cases, organize them into test suites, execute tests, and document their results for further analysis.

Provar Test Operations

The Provar Test Operations app allows teams to document the environments and systems under test, connect to Version Control Systems, and schedule the execution of automated tests on multiple platforms.

Provar Assistant

Provar Assistant is Provar’s AI-powered chatbot trained on all of Provar’s available documentation, ready to help you with any questions you may have about how to use Quality Hub. You can open it from the utility bar at the bottom of the window.

Release Management

Quality Hub includes a Release Management app, which can be used to track projects, issues (e.g., epics, user stories), releases, and sprints.

Teams can choose to use this module as-is if they don’t already have a release management tool, or integrate Quality Hub with external tools like Jira or Azure DevOps to bring in information relevant for software quality purposes.

You can open the Release Management app from the App Launcher in Salesforce:

Project Management

You can create new Projects or import existing ones into Quality Hub from the Projects tab.

Note: This document will not explain how to integrate with third-party release management tools. Release Management artifacts are primarily imported/synced from JIRA, ADO, or other similar tools. As such, this guide shows examples of how these objects might be managed from within Quality Hub.

For more information on ADO, see the full documentation here.

For more information on JIRA integration, see the full documentation here.

To create a new project, click the New button, enter a Project Name, select Provar as the Release Management Tool, select a Project Lead, and click Save.

From the Project record, you can manage its related issues, releases, and sprints. You can also find some reports and settings as well.

Project Issue Types

Provar projects come with a default list of issue types:

  • Epics track collections of related bugs, stories, and tasks.
  • Stories track functionality or features expressed as user goals.
  • Tasks track small, distinct pieces of work.
  • Bugs track problems or errors.

If you require Quality Hub to offer a different list of issue types, disable the Add Project Issue Types flow and replace it with your own.

Issue Status Categories

On release and sprints records, issues are summarized into three categories based on their status: Not Started, In Progress, and Done.

You can use the Issue Status Categories component within the Settings tab on any project record to map Issue Status custom values to one of the three categories so they get accounted for in those calculations.
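Conceptually, the mapping works like the sketch below; the custom status names shown are examples, not Quality Hub defaults:

```python
# Example mapping from custom Issue Status values to the three status
# categories. The status names here are hypothetical illustrations.
STATUS_CATEGORY_MAP = {
    "Backlog": "Not Started",
    "To Do": "Not Started",
    "In Development": "In Progress",
    "In Review": "In Progress",
    "Done": "Done",
    "Released": "Done",
}

def categorize(status):
    # In this sketch, unmapped statuses default to "Not Started".
    return STATUS_CATEGORY_MAP.get(status, "Not Started")

print(categorize("In Review"))  # In Progress
```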

Issue Management

You can create new Issues from the corresponding Project record or the Issues tab.

Every issue requires a Summary, Issue Type, and Project. Fill in the Parent Issue field with the appropriate epic record to relate stories to epics.

You can manage child or linked issues, sprints, and releases from the issue record page. You can also track the issue’s relationships with test cases, defects, and metadata components.

Issue Kanban View

If you want to see your issues in a Kanban View, please follow the steps below:

  • Go to the Issues tab
  • Click the cog icon on the right side of the search input, and select the option New.
  • Enter a List Name (e.g., your project name), select the appropriate level of visibility, and click Save.
  • (Optional) If you only want to see the issues on a particular project, use the Filters section on the right to filter by project.

Release Management

Releases are new versions of software products, often including bug fixes, enhancements, and new features. They mark a significant milestone in the software development lifecycle.


You can create releases from a project record or from the Releases tab, and define the following information:

  • Release Name & Description
  • Status
  • Start Date & Release Date
  • Release Notes

You can also easily add, remove, or edit the issues related to the release using the Issues component.

You will also find the release progress and total number of issues per status category just below the release name.

Finally, you can get an idea of the state of quality of a release by looking at the Related Test Cases component which shows all the test cases related to the issues in the release as well as their last execution results per Environment.

Sprint Management

Sprints are short, time-boxed periods during which teams work to complete a set amount of work, typically lasting between one and four weeks.


You can create sprints from a project record or from the Sprints tab, and define the following information:

  • Sprint Name: may include a number that increases with every sprint.
  • Start Date and End Date.
  • Status:
    • Future: the sprint is scheduled to start at a future date.
    • Active: the sprint is currently in progress.
    • Closed: the sprint has been completed.

Active sprints can be completed by clicking on the “Complete Sprint” button, which lets you move the open issues to another sprint or to the backlog.

You can also easily add, remove, or edit the issues related to the sprint using the Issues component.

You will also find the sprint progress and total number of issues per status category just below the sprint name.

Finally, you can get an idea of the state of quality of a sprint by looking at the Related Test Cases component which shows all the test cases related to the issues in the sprint as well as their last execution results per Environment.
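The progress roll-up described above can be sketched as follows; the sample issues are hypothetical:

```python
from collections import Counter

# Sketch of the sprint progress roll-up: count issues per status category
# and compute the percentage that are Done. Sample data is hypothetical.
def sprint_progress(issues):
    counts = Counter(issue["category"] for issue in issues)
    total = len(issues)
    percent_done = round(100 * counts.get("Done", 0) / total) if total else 0
    return counts, percent_done

issues = [
    {"key": "PROJ-1", "category": "Done"},
    {"key": "PROJ-2", "category": "In Progress"},
    {"key": "PROJ-3", "category": "Not Started"},
    {"key": "PROJ-4", "category": "Done"},
]
counts, percent_done = sprint_progress(issues)
print(counts["Done"], percent_done)  # 2 50
```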

Test Management

This section will teach you how to leverage Quality Hub to structure your testing activities and track relevant information.

Requirement Analysis

The Requirement Analysis phase is essential for understanding and defining the needs and expectations of stakeholders for the software being developed. In this phase, the development team works closely with the clients or end-users to gather, document, and analyze their requirements. This includes functional requirements that outline what the software should do and non-functional requirements that address performance, security, usability, and other quality attributes. Properly defining the requirements upfront helps ensure that everyone involved in the project understands what needs to be built and reduces the chances of misunderstandings or miscommunications down the line. This results in a more successful software development project with fewer revisions and changes, ultimately saving time and resources.

After analyzing the requirements, we recommend you start by creating a Test Project if you haven’t done so already.

Test Project Creation

To create a Test Project, start by opening the Test Projects tab and clicking the New button. Give your test project a name and a key to identify it, and don’t forget to fill in the Description field so everyone can understand what the test project is about.

The Test Project record page is divided into two main sections:

  • Test Project Navigation shows related test plans, suites, and test cases.
  • Test Project Details is where the primary information can be found.

Test Planning

The Test Planning phase ensures that effective and efficient testing activities are carried out throughout development. A test strategy is created in this phase to guide the overall testing effort. This includes determining which types of testing (e.g., functional, performance, security) will be required, setting testing objectives and priorities, defining test cases and test data, allocating resources, and establishing a schedule for testing activities. Proper planning also involves selecting appropriate testing tools and environments and identifying the roles and responsibilities of the testing team members. By creating a comprehensive test plan, organizations can ensure that they can identify and address any potential issues early in the development process, which ultimately leads to higher-quality software and greater customer satisfaction.

Quality Hub comes with a comprehensive but customizable Test Plan template that can be used to track all the information you need.

To create a Test Plan, go to your Test Project record, click the dropdown arrow on the Test Project Navigation component, and select Create Test Plan. Enter a name for it and click Create.

The Test Plan record page is divided into three main sections:

  • Documentation: This section includes space to define objectives, an overview of the system, definitions and acronyms that may be used across the plan, the scope and features to be tested, the environments under test, any assumptions or constraints, and the overall approach to testing, among other things.
  • Execution: This section includes Test Plan Schedules and Test Cycles.
  • Analytics: This section includes some charts related to daily executions and risk coverage.

At the top of the test plan record page you will also find the Test Plan Completion Tracker, which can help you quickly see how much of the test plan has already been filled in.

Test Plan Assistant

By clicking on the “Assistant” button on the Test Plan Completion Tracker, you can open the Test Plan Assistant, which can guide you through the different sections of the test plan and help you fill them in using AI-generated suggestions.

The better you describe the system and objectives, the better the AI-generated suggestions will be. You can click the small “refresh” icon on the pink messages to regenerate the suggestion, and also find generic examples for most steps.

Risk Management

Defining and managing risks is part of planning your testing activities, which can be done from the test plan record under the Documentation > Risks section. Risks can also be defined and managed from the Provar Quality Hub app. Risk Avoidance is one of the key metrics tracked on the Executive View of the Quality Hub Dashboard.

Every risk has a description, an owner, and an impact assessment (likelihood and impact), which will produce a Risk Level. You can also record the mitigation strategy, approach, and proposed solution.

Finally, you can link test cases to risks to track your risk coverage.
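To illustrate how likelihood and impact can combine into a Risk Level, here is a hypothetical 3×3 matrix; Quality Hub’s actual formula may differ:

```python
# Hypothetical 3x3 risk matrix: likelihood and impact are each rated
# 1 (low) to 3 (high), and their product maps to a Risk Level.
# This is an illustration only, not Quality Hub's actual formula.
LEVELS = {1: "Low", 2: "Low", 3: "Medium", 4: "Medium", 6: "High", 9: "High"}

def risk_level(likelihood, impact):
    return LEVELS[likelihood * impact]

print(risk_level(3, 3))  # High
print(risk_level(1, 2))  # Low
```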

Risk Mitigation Plan Generation

Quality Hub can leverage AI to generate Risk Mitigation Plans, including a strategy, approach, and proposed solution.

To use this feature, make sure you have saved your OpenAI API Key in Quality Hub’s Setup page (see the Setup section of this document).


To generate a mitigation plan, go to a Risk record and click the Generate Mitigation Plan button on the top right and then wait a few seconds for the plan to be generated.

Once the plan is generated, you can review it and save it into the Risk record by clicking the Save button, or click Generate to generate it again.

Exporting Test Plan as PDF

In order to export the Test Plan documentation as a PDF, click on the menu dropdown on the navigation component and select Export as PDF.

Test Case Design

The Test Case Design phase focuses on developing detailed test cases used during testing activities to verify a software application’s functionality, performance, and reliability. In this phase, the testing team works closely with developers, business analysts, and other stakeholders to gather information about the system under test and its requirements.
Test cases are designed based on the requirements specifications, user stories, use cases, and other relevant documentation. Each test case should cover a specific functional or non-functional requirement and include clear and concise steps to reproduce the test, expected results, and acceptance criteria. The design of test cases also considers various testing techniques, such as boundary value analysis, equivalence partitioning, and decision table testing, to ensure comprehensive coverage of the software’s functionality.
Effective test case design requires a solid understanding of the software and the testing process. The resulting test cases will provide a foundation for executing successful tests throughout the development lifecycle and help to identify any defects or issues early in the development process, ultimately leading to higher-quality software and greater customer satisfaction.
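As a quick illustration of one such technique, boundary value analysis for a numeric input range (e.g., a field that accepts 1–100) picks the values just below, at, and just above each boundary:

```python
# Boundary value analysis sketch for a numeric input range: test the
# values just below, at, and just above each boundary. The 1-100 range
# is an example input constraint.
def boundary_values(low, high):
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```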

Using Quality Hub, testers can create new test cases with ease from multiple places:

  • Test Cases tab > New button
  • Test Project Navigation component > Down arrow on Test Cases Not Linked To Test Suites > New Test Case option
  • Test Project Navigation component > Down arrow on a test suite item > Add New Test Case option.

The Test Case record page is divided into five main sections:

  • Summary: At the top of the page, this section shows a summary of what the test case does. You can manually edit the summary or use AI to generate it automatically using the test case name, persona, and test steps’ information.
  • Details: This section allows you to define information like test type, status, owner, priority, and persona.
    • Tags: you can add tags to test cases as another way of organizing information. You can also use AI to suggest tags to use.
  • Steps: This section allows you to define all test steps.
  • Executions: This section lets you set an estimated execution time, a maximum execution time, and see the latest result per environment, average execution time, and historical test executions.
  • Related: This section lets you see connected test suites, defects, issues, and risks.

You can relate Test Cases to risks to track Risk Coverage by clicking the New button on the Covered Risks related list under the Related section.

You can also link Test Cases to Metadata Components by clicking on the menu’s Link to Metadata Components option.

Managing Test Steps

You can easily add, remove, and reorder test steps using the Test Steps Manager on the test case record page.

  1. The Cancel button cancels any unsaved changes.
  2. The Add Step button adds a new step at the bottom of the list.
  3. The Save button saves any changes made to the list.
  4. The Up and Down arrow buttons change the order of the step within the same hierarchical level.
  5. The Bin button deletes the step.
  6. The Clip button shows/hides the section where you can manage the step’s attachments.
  7. The Down/Right arrow buttons expand/collapse the step’s child steps.
  8. The Level Down button creates a child step.
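Conceptually, the manager edits an ordered, hierarchical list of steps. The sketch below models the reorder and nesting behavior in Python purely as an illustration; the class and function names are hypothetical and do not reflect Quality Hub's actual data model.

```python
# Hypothetical model of an ordered, nested test-step list (illustration only;
# not Quality Hub's actual schema).

class Step:
    def __init__(self, text):
        self.text = text
        self.children = []  # child steps, in order

def move(steps, index, offset):
    """Move the step at `index` up (-1) or down (+1) within the same level."""
    target = index + offset
    if 0 <= target < len(steps):
        steps[index], steps[target] = steps[target], steps[index]

def level_down(steps, index):
    """Make the step at `index` a child of the step directly above it."""
    if index > 0:
        child = steps.pop(index)
        steps[index - 1].children.append(child)

steps = [Step("Open the login page"), Step("Enter credentials"), Step("Click Log In")]
move(steps, 2, -1)              # swap the last two steps
level_down(steps, 1)            # nest "Click Log In" under "Open the login page"
print([s.text for s in steps])  # ['Open the login page', 'Enter credentials']
```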

Test Case Generation

Quality Hub can leverage AI to generate test cases of varying output types, including Text, XML, Gherkin or Behaviour-Driven Development (BDD), Markdown, Java, Provar Automation, and Apex. Test cases can be generated from natural language input or from a user story, which can help you develop possible test scenarios. Users can also configure additional context options, such as metadata components and test persona.

To use the Standard Test Case Generation feature (non-Provar), make sure you have saved your OpenAI API Key in Quality Hub’s Setup page (see the Setup section of this document). To use the Provar Test Case Generation feature, make sure it is enabled in Provar Quality Hub Setup > Settings > Provar Quality Hub AI.

To learn more, view our Provar Quality Hub AI Setup & Usage Guide.


Exploratory Testing

Exploratory testing is an approach to software testing that emphasizes simultaneous learning, test design, and execution. Unlike traditional scripted testing, where test cases are predefined, exploratory testing allows testers to explore the system on the fly without creating test cases in advance.
Here are some key points about exploratory testing:

  • Simultaneous Learning and Testing: Testers learn about the system while actively testing it. They make spontaneous decisions about what to test based on their observations during the testing process.
  • Discovery-Oriented: Exploratory testing focuses on discovering defects, risks, and new features that might not be easily covered by other testing methods.
  • No Predefined Test Cases: Instead of following a strict test plan, testers jump straight into testing and adapt their approach as they explore the application.
  • Individual Tester Guidance: Exploratory testing relies on the tester’s intuition, creativity, and domain knowledge. It’s a thinking activity that encourages adaptability and learning.
  • Common Techniques: Testers use techniques like session-based test management (SBTM) cycles, where they create test charters and categorize common types of faults.

Quality Hub comes with an exploratory testing tool that helps testers document their sessions, raise defects, and log risks.

To use it, first create a test case whose testing type is “Exploratory Testing”. After creating the test case, you will see three new fields appear on the page:

  • Time Allowed: the time the tester will dedicate to exploring.
  • Goals: a summary of what the tester intends to accomplish.
  • Test Charter: outlines the objectives, scope, and approach for the session.

Click on the Red Record button on the navigation component to open the Exploratory Session Recorder.

A pop-up window will appear where you can set:

  • Goals
  • Test Charter
  • Time Allowed

As well as select:

  • Test Project
  • Test Plan
  • Test Environment

Finally, click on Start Session to begin your exploratory session.

Here is the list of elements you will find in the session recorder:

  • A timer is set so you know how much time is left.
  • Stop the session to discard it.
  • Save the session at any point.
  • Click the Info icon to see the session goals and test charter again.
  • Select steps in the timeline and delete them if no longer needed.
  • Pin steps to highlight them.
  • Use text notes, voice notes, screenshots, and files to document your session however you want.

Raise defects and identify risks from within the app for better convenience.
Once the session is saved, it will be found under the Sessions tab on the exploratory test case.

Test Case Approvals

Teams may want to configure an approval process so that each test case requires approval to progress from one stage to another (e.g., Design to Pending Implementation).

Your Salesforce admin should be able to implement a Salesforce approval process that fits your requirements as a team.

Organizing Test Cases into Test Suites

Test Suites are a great way of organizing test cases within a test project. Before you start creating test suites, make sure that the naming convention and overall structure are coherent and make sense to your team. There is no point in grouping test cases into test suites if your team members cannot get their heads around your logic for grouping them in that particular way.

As with test cases, you can also document what each test suite does using Tags and the Test Suite Summary component, and you can leverage AI to help you with both.

Test Execution

The Test Execution phase focuses on running the test cases designed during the Test Case Design phase to validate a software application’s functionality, performance, reliability, and other quality attributes. In this phase, the testing team executes the planned tests using appropriate tools and environments to identify any defects or issues in the software under test.
Test execution can be performed manually, where a tester follows the steps outlined in each test case and records the results, or automated, where specialized tools (e.g., Provar Automation) execute predefined test scripts and report the results. Manual testing is typically used for exploratory, usability tests requiring human judgment or interaction. Automated testing, on the other hand, is ideal for regression testing, performance testing, and other repetitive tests.
Test execution should be organized with clear documentation of test results and any identified defects or issues. The testing team must follow the established test plan and cover all the defined test cases to comprehensively cover the software’s functionality. This phase is crucial for identifying any issues or bugs early in the development process and providing feedback to the development team, allowing them to make improvements before the software is released to customers.
Effective test execution contributes to the overall success of a software project by ensuring that the software is functioning as intended, meeting its requirements, and providing an excellent user experience. This results in increased customer satisfaction, reduced time to market, and ultimately leads to higher quality software.

Quality Hub helps you record the execution (manual or automated) and the results of your tests. 

To start, go to a test plan record. Under its Execution section, you will find a list of test cycles, where you can create new ones.

The Test Cycle record page is split into multiple sections:

  • Test Cycle Summary: This section shows a summary of the test executions, and allows you to upload 3rd-party results or export the summary as PDF.
  • Details: This section lets you define the test environment status and track the start/end date times.
  • Test Executions: This section allows you to manage test executions.
  • Configuration: This section allows you to define the testing tool and test environment settings to be used.
  • External Test Results: This section allows you to track imported test results.

Managing Test Cycles

The Test Executions Manager component found on every test cycle record’s Test Executions section allows you to do the following:

  • Add test suites, automatically creating a test execution record for each test case in the test suite.
  • Add individual test executions.
  • Perform mass status changes on the selected test executions.
  • Delete the selected test executions.
  • Search test executions.
  • Retry all executions, just the failed ones, or only the selected executions.
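The first two capabilities above come down to simple bookkeeping: adding a suite creates one execution record per test case, and a mass status change touches only the selected executions. The sketch below illustrates that behavior with made-up dictionary field names, not Quality Hub's actual object model.

```python
# Sketch of the suite-expansion and mass-update behavior described above.
# Field names are illustrative, not Quality Hub's packaged objects.

def add_test_suite(cycle, suite):
    """Create one test execution per test case in the suite."""
    for case in suite["test_cases"]:
        cycle["executions"].append(
            {"test_case": case, "test_suite": suite["name"], "status": "Not Started"}
        )

def mass_set_status(cycle, selected_cases, status):
    """Change the status of the selected executions only."""
    for ex in cycle["executions"]:
        if ex["test_case"] in selected_cases:
            ex["status"] = status

cycle = {"executions": []}
suite = {"name": "Smoke", "test_cases": ["Login works", "Search works", "Logout works"]}
add_test_suite(cycle, suite)
mass_set_status(cycle, {"Login works", "Search works"}, "Passed")
print([ex["status"] for ex in cycle["executions"]])  # ['Passed', 'Passed', 'Not Started']
```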

Managing Test Executions

The Test Execution record page is split into various sections:

  • Details: This section allows you to define the test case being executed, a related test suite (if any), the tester, the execution status, and any test settings that may differ from the ones set in the test cycle.
  • Results: This section allows you to log and track each test step.
  • Defects: This section allows you to manage related defects.

The Test Execution Timeline component in the Results section helps you log the results of each test step, whether you run it manually or simply review what has been reported by test automation tools like Provar Automation.

  1. Use this dropdown to filter steps by their status (e.g., show only failed steps).
  2. Turn on Edit Mode to log the results for each test step.
  3. Refresh the timeline data.
  4. Click the Copy Test Case Steps option to copy all the test steps from the test case defined in the test execution record.
  5. Save changes made to every test step execution record.
  6. Exit Edit Mode.
  7. Click on each test step to see more information about it.
  8. View information about the test step or open the record in a new tab.
  9. View information about the test step execution or open the record in a new tab.
  10. Change the step execution status.
  11. Raise a defect if the status is Fail.
  12. Attach any relevant pieces of evidence (e.g., screenshots).

Quick Test Executions

You can also quickly execute manual test cases and test suites by clicking on the blue play button on test case and test suite records.

Defect Management

The Defect Management phase focuses on identifying, recording, tracking, prioritizing, and resolving defects or issues discovered during testing. In this phase, the testing team documents any identified defects, assigns them unique identifiers, and communicates them to the development team for resolution.
Effective defect management is critical in ensuring that the software under development meets its quality targets and is free of significant issues before its release. This phase involves collaboration between various stakeholders, including testers, developers, project managers, and other team members, as they work together to understand the root cause of each defect, prioritize their resolution based on business impact, and track their progress toward resolution.
When a defect is identified, it should be reported through Quality Hub. The report should include clear descriptions of the symptoms, steps to reproduce the defect, and any relevant context or additional information to help the development team understand the issue and prioritize its resolution.
As the development team resolves each defect, they update the testing team regarding their progress and the expected resolution date. Once a defect is resolved, the testing team must retest the software to verify that the issue has been addressed and hasn’t introduced any new issues or unintended side effects.
Effective defect management results in a high-quality software product with fewer issues and bugs, ultimately leading to increased customer satisfaction, reduced time to market, and lower maintenance costs.

Defects can be raised on Test Execution or Test Step Execution records, depending on the preferred level of granularity. 

They can be linked to Test Cases or Test Step records for traceability and reporting purposes, and you can define their impact and priority and track their status.

Root Cause Analysis Report Generation

Quality Hub can leverage AI to generate preliminary Root Cause Analysis reports for failed Salesforce Apex unit test executions. The report includes an introduction, background, description of the error, root cause analysis, recommendations, and a conclusion.

To use this feature, make sure you have saved your OpenAI API Key in Quality Hub’s Setup page (see the Setup section of this document).


To learn more about our RCA feature in Quality Hub, check out our Provar Quality Hub AI Setup & Usage Guide.

Test Cycle Closure

The Test Cycle Closure phase marks the end of the testing process, where the testing team reports on the overall test results and prepares documentation to support the release of a software application. In this phase, the testing team documents and communicates their findings from the various testing activities throughout the development lifecycle, including the number and types of defects identified, their impact on the software’s functionality, and any recommendations for improvement.
The test cycle closure report provides valuable information to stakeholders, such as project managers, developers, customers, and other team members, about the quality of the software under development and the testing process itself. This report includes an overview of the testing objectives, methodologies, and results, as well as details on any identified defects, their root causes, and the status of each defect at the time of release.
Additionally, during the test cycle closure phase, any documentation related to the testing process, such as test plans, test cases, and test scripts, is updated to maintain an accurate record of the testing activities for future reference. The testing team may also provide recommendations for improving the software quality and the testing process based on their experiences throughout the development lifecycle.
Effective test cycle closure ensures that all stakeholders are informed about the test results and the quality of the software being released. It provides valuable insights to customers or other users to support decision-making around its release. This phase plays a crucial role in maintaining the organization’s commitment to delivering high-quality software products and fostering continuous improvement within the development process.

Test Operations

The main goal of Test Operations is to ensure that software is released with high quality by executing effective and efficient tests throughout the development process. This includes functional testing, performance testing, security testing, regression testing, integration testing, and user acceptance testing.
Test Operations also involve maintaining test environments, managing test data, and providing test environment support for development teams and other stakeholders. Test Operations teams collaborate closely with various teams such as Development, Quality Assurance (QA), DevOps, and Project Management to ensure that testing is integrated seamlessly into the software development process and that testing activities are executed consistently, reliably, and efficiently.
Effective Test Operations help organizations minimize the risks of releasing software that doesn’t meet its intended quality standards. By continuously monitoring and improving testing processes, organizations can reduce the number of defects, improve time-to-market, and ultimately deliver high-quality software products that meet customer expectations and requirements.

Quality Hub’s Test Operations module lets teams manage environments, version control systems, and repositories and orchestrate test plan executions.

You can open the Test Operations app from the App drawer in Salesforce:

Test Environment Setup

Environments in Quality Hub are used to identify groups of systems with a standard level of production-ready features. For example, depending on their development process, teams may want to define their environments as Production, User Acceptance Testing, QA, System Integration Testing, and Development.

To define a new environment, open the Environments tab and click the New button.

Within an environment record, you can define systems of various types.

Salesforce System Setup

Quality Hub lets you set up connections to Salesforce orgs for integrations with 3rd-party tools or retrieving metadata components, including Apex Tests, Flows, Triggers, LWCs, Custom Objects & Fields, Visualforce Pages, and Aura Components. 

You can import your Apex Tests to schedule them from Quality Hub or use the imported metadata components to track overall metadata coverage across your Apex & Provar tests.

Follow these steps to set up a connection to Salesforce:

  1. Open the System record of System Type equal to Salesforce.
  2. Find the Salesforce Connection Assistant card on the right-hand side of the page.
  3. Select the right Org Type and click Request Authorization.
  4. Follow the instructions on the screen: open the link, enter the user code, and click Connect.
  5. Log into the Salesforce org and click Allow.
  6. Then click Continue.
  7. Once you have authorized Quality Hub to access the Salesforce org, the Salesforce Connection Assistant will automatically refresh to indicate that the connection has been established.

Org Browser

Salesforce system records let you browse the connected orgs to retrieve their Apex Classes, Apex Triggers, Apex Unit Tests, Aura Components, Custom Apps, Objects & Fields, Flows, Lightning Pages, LWCs, Profiles, and Visualforce Pages as Metadata Components to be used for metadata coverage reporting, Intelligent Test Generation, and test execution (Apex Tests).

To browse the Salesforce org and import its information, follow these steps:

  1. Click the Browse Org button on a connected Salesforce system record page.
  2. Click the corresponding button (e.g., Browse Apex Classes) to load the list of metadata items in the org.
  3. Select the metadata items you want to import. You can see if they have been imported in the Imported column.
  4. Click Import as Metadata Components to import the metadata items, or Import as Test Cases (only for Apex Unit Tests) to import the test cases.

Once imported, Metadata Components will be linked to the System record automatically, and users can now track metadata coverage and utilize metadata-aware test case generation.

Users can see which Metadata Components are related to the System on the Metadata Components tab, or by searching Metadata Components in the App Launcher, which provides pre-filtered list views for each Metadata Type.

Code Quality Scans

Quality Hub provides two ways of tracking code quality scans:

  1. Use a plugin to connect to 3rd-party code quality scanning tools like Clayton or QualityClouds and import their results on a scheduled basis.
  2. Manually import code quality results.

To manually import the results of a static code analysis performed with tools like PMD or Salesforce Code Analyzer, follow these steps:

  1. Go to a Salesforce system record.
  2. Click the New button on the Code Scans related list in its Code Quality section.
  3. Enter a scan date and time.
  4. Set the status as Draft.
  5. Click the Save button.
  6. Open the newly created Code Scan record.
  7. In its Files section, upload the code scan results. The file must be in SARIF format with the .sarif extension (e.g., myscan.sarif).
  8. Change the Code Scan status to Ready and save the changes.
  9. Quality Hub will start processing the file immediately, changing the status to In Progress and then to Completed or Failed once the file has been processed.
  10. Refresh the page after a few seconds to see the results.
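The uploaded file must follow the SARIF 2.1.0 structure that tools like PMD or Salesforce Code Analyzer emit. As a rough sketch of what such a file looks like, the snippet below writes a minimal SARIF document with a single placeholder result; the tool name, rule ID, and file path are invented for illustration.

```python
import json

# Minimal SARIF 2.1.0 document (placeholder tool/rule/path; real scanners
# such as PMD or Salesforce Code Analyzer generate this structure for you).
sarif = {
    "version": "2.1.0",
    "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
    "runs": [
        {
            "tool": {"driver": {"name": "ExampleScanner", "rules": [{"id": "EX0001"}]}},
            "results": [
                {
                    "ruleId": "EX0001",
                    "level": "warning",
                    "message": {"text": "Avoid SOQL queries inside loops."},
                    "locations": [
                        {
                            "physicalLocation": {
                                "artifactLocation": {"uri": "classes/AccountService.cls"},
                                "region": {"startLine": 42},
                            }
                        }
                    ],
                }
            ],
        }
    ],
}

with open("myscan.sarif", "w") as f:  # the file must use the .sarif extension
    json.dump(sarif, f, indent=2)
```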

Code Coverage Reports

Code coverage can be used to understand how much of the code is being tested, but it doesn’t measure how thorough or adequate those tests are.

Still, it can help your team progress toward a more thorough test suite, highlighting trouble spots requiring more attention (e.g., new code introduced but not covered or coverage % dropping after a deployment).
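The coverage percentage itself is a simple ratio. For Apex, Salesforce's Tooling API reports covered and uncovered line counts per class (the `NumLinesCovered`/`NumLinesUncovered` fields of `ApexCodeCoverageAggregate`); a sketch of rolling them up into an org-wide figure:

```python
# Sketch: aggregate org-wide Apex coverage from per-class line counts,
# mirroring the Tooling API's ApexCodeCoverageAggregate fields.
def org_coverage(records):
    covered = sum(r["NumLinesCovered"] for r in records)
    uncovered = sum(r["NumLinesUncovered"] for r in records)
    total = covered + uncovered
    return round(100 * covered / total, 1) if total else 0.0

records = [
    {"ApexClassOrTrigger": "AccountService", "NumLinesCovered": 180, "NumLinesUncovered": 20},
    {"ApexClassOrTrigger": "OrderTrigger", "NumLinesCovered": 45, "NumLinesUncovered": 55},
]
print(org_coverage(records))  # 75.0
```

Note how the org-wide 75% hides a 45%-covered trigger, which is exactly the kind of trouble spot the paragraph above describes.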

Quality Hub can monitor Salesforce Apex code coverage across multiple orgs. Follow the below instructions to set it up.

  1. Go to a System record of type Salesforce or create a new one.
  2. Ensure Quality Hub is connected to the Salesforce org (see Test Environment Setup > Salesforce System Setup).
  3. Click on the Coverage tab.

To schedule coverage reports to run regularly, create a Coverage Report Schedule record from the System record page (see above instructions).

Once the Coverage Report Schedule Status is set to Active, Quality Hub will regularly create a Coverage Report record under the selected System per the schedule settings.

API System Setup

With Quality Hub, you can define a System record of type API to represent an external API you may want to call at some point.

When setting up this type of System, it’s essential to specify the following:

  • API Endpoint
  • Authentication Type
    • No Authentication
    • Basic Authentication, which requires the fields Username and Password to be filled.

Set the Password to ‘NULL’ if you want it to be empty.

Set the Username to ‘USEPASSWORD’ if you want to store the password (e.g. API token) in the Password field but use it as if it were the username.

  • Named Credential (recommended) requires the field Named Credential to be filled with the name of a Salesforce Named Credential.

    Named Credentials allow you to authenticate using Username/Password, OAuth 2.0, JWT, JWT Token Exchange, and AWS Signature Version 4.

    Salesforce documentation on how to define a Named Credential.
  • API Paths

Note: remember to add the API Endpoint URL to the Salesforce Remote Site Settings if you’re not using the Named Credential authentication type.
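For the Basic Authentication option, the username and password end up in a standard HTTP Authorization header. The sketch below shows what that header looks like (the credentials and URL are placeholders; Quality Hub builds this for you from the System record, and no request is actually sent here):

```python
import base64
import urllib.request

# Build the HTTP Basic Authorization header the way a Basic-Auth callout
# would (hypothetical credentials and endpoint; illustration only).
username, password = "api-user", "s3cret"
token = base64.b64encode(f"{username}:{password}".encode()).decode()

request = urllib.request.Request(
    "https://api.example.com/v1/status",  # placeholder API Endpoint + Path
    headers={"Authorization": f"Basic {token}"},
)
print(request.get_header("Authorization"))  # Basic YXBpLXVzZXI6czNjcmV0
```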

As an example, if you wanted to define an API System that represents the OpenWeatherMap API, you would follow these steps:

  1. Create a system record of type API
  2. Set the API Endpoint to https://api.openweathermap.org/data/2.5
  3. Set the Version to 2.5
  4. Set the Authentication Type to No Authentication.
  5. Create an API Path record for the Current Weather Data API:
    1. Set the Path to “/weather”
    2. Set the Summary to “Current weather data”
    3. Set the HTTP Method to “GET”
    4. Set the Content-Type to “application/json”
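At execution time, the API Endpoint, the API Path, and any URL parameters are joined into a single request URL. A sketch of that assembly for the example above (the appid value is a placeholder for a real OpenWeatherMap API key, and no request is sent):

```python
from urllib.parse import urlencode

# Assemble the request URL from the System's endpoint, the API Path, and
# URL parameters, as in the OpenWeatherMap example (placeholder appid).
endpoint = "https://api.openweathermap.org/data/2.5"
path = "/weather"
params = {"q": "London", "appid": "YOUR_API_KEY"}

url = f"{endpoint}{path}?{urlencode(params)}"
print(url)  # https://api.openweathermap.org/data/2.5/weather?q=London&appid=YOUR_API_KEY
```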

Version Control System Setup

You can use Quality Hub to represent your version control systems, repositories, and branches, which you can later use to orchestrate test executions with tools like Provar Grid. 

Note: Currently, VCS integration in Quality Hub is only necessary for executing Provar tests on Provar Grid. Otherwise, this import/sync of VCS repositories and branches can be skipped.

Quality Hub currently supports the following VCS options:

  • GitHub, standard and enterprise
  • GitLab
  • BitBucket
  • Azure DevOps

To define a new VCS, open the VCS tab, click the New button, enter a name, select the appropriate platform (e.g., GitHub), and click Save.

The next step is to connect to the VCS by using the VCS Connection Assistant on the right side of the record page.

Depending on the platform selected, the assistant will ask for the appropriate information to establish a connection.

GitHub Enterprise VCS

If you are authenticating to GitHub Enterprise, follow the below steps:

  1. Enable the GitHub Enterprise Server option
  2. If you are using GitHub Enterprise Cloud, then you can likely enable the Use Default GitHub Hostname option. 
  3. Server URL is only required if you have an on-prem GitHub Server or if you access GitHub through a custom base URL (e.g., github.company.com instead of github.com). 
  4. SSL can be disabled if your authentication to GitHub does not require it.

Non-enterprise VCS

Most other VCS options require either a username and a PAT, or just a PAT, to authenticate.

The Personal Access Token used must have the relevant scopes to access your repositories (read is the only required scope for repository access in Quality Hub).

Once you have entered the required information, click the Connect button.

After establishing the connection, you can import Repositories by clicking the Import option on the related list.

A new tab will appear where you can view, search, and select the repositories you want to import.

After importing the repositories, you will find them under the VCS record.

You can now import its branches from the repository record and relate them to your existing environments by clicking the Import option on the related list.

A new tab will appear where you can view and select branches and choose which environment to import each branch under, since branches are a type of System record.

After importing the branches, you will find them under the Repository record.

Test Plan Scheduling

Test Plan Schedules are used to orchestrate the execution of tests and can be created from a test plan record under its Execution section or from the Test Plan Schedules tab in the Test Operations app.

When creating test plan schedules, you can:

  • Select one of the three supported testing tools:
    • Salesforce (Apex) to execute Salesforce Apex unit tests.
    • Provar Grid to execute Provar automated test cases in the cloud.
    • Generic API to call out external APIs to trigger jobs (e.g. Jenkins or GitHub Actions).
  • Select a Missing Test Case Strategy to decide what should happen when Quality Hub retrieves results for test cases that are not present in Quality Hub (e.g. Ignore or Create test cases).
  • Specify the frequency (one-off, daily, or weekly) and the time when the schedule should run.
    • Note: Time is set in UTC. You can see at what local time it will run in the info card below the Time field.
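The UTC-to-local conversion shown in that info card is a straightforward timezone shift. A sketch with Python's zoneinfo (the date and timezone are arbitrary examples; the date matters because of daylight saving time):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A schedule set for 14:00 UTC, viewed from New York (arbitrary example date;
# in mid-January, New York is on EST, i.e., UTC-5).
run_at_utc = datetime(2024, 1, 15, 14, 0, tzinfo=timezone.utc)
run_at_local = run_at_utc.astimezone(ZoneInfo("America/New_York"))
print(run_at_local.strftime("%H:%M %Z"))  # 09:00 EST
```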

Note: Versions 3.8.0 and below run an hourly job that picks up all the active test plan schedules that should have run in the previous hour and runs them. Versions 3.9.0 and above run the job every 15 minutes.

After creating a Test Plan Schedule record, make sure to:

  1. Configure its settings by clicking on the Configure Settings button.
  2. (Optional) Select the tests to run by clicking the Select Tests button, if you have configured the schedule to run test cases or test suites added to it.
  3. Set the status as Active.

A section below the header indicates the status of each one of the above conditions:

You can also click the Run button to run them immediately.

Schedule Job Alerts

Quality Hub can automatically send notifications when a job has succeeded or failed.

Under the Alerts section on a Test Plan Schedule, you can configure the channels and criteria to send alerts by checking the Enable Alerts checkbox.

Email Alerts

Select On Success and/or On Failure on the Email Alerts field, and add Alert Recipients in the related list so they can receive alerts.

Please verify the user’s email address in Salesforce, per this help article.

Slack Alerts

Select On Success and/or On Failure on the Slack Alerts field, and select a Slack Webhook to receive the alerts.

You can learn more about creating webhook-triggered workflows in this help article. The Webhook URL required on the Slack Webhook record should look something like this:

https://hooks.slack.com/triggers/T00000000/000000000/XXXXXXXXXXXXXXXXXXXXXXXX

These are the variables you can use in your webhook:

jobId, jobUrl, jobName, jobStatus, jobStartDate, jobEndDate, jobComments, jobTotalExecuted, jobTotalSkipped, jobTotalPassed, jobTotalFailed, jobPassRate, scheduleId, scheduleUrl, scheduleName, scheduleDescription, scheduleTestingTool, scheduleTestPlanName, scheduleSystemName, scheduleEnvironmentName

Your message could look something like this:

JOB INFORMATION [{jobName}] {jobUrl}
STATUS: {jobStatus}
START-END DATE: {jobStartDate} – {jobEndDate}
COMMENTS: {jobComments}
PASS RATE: {jobPassRate}
SCHEDULE INFORMATION: {scheduleName} {scheduleUrl}
DESCRIPTION: {scheduleDescription}
TEST PLAN: {scheduleTestPlanName}
ENVIRONMENT: {scheduleEnvironmentName}
SYSTEM: {scheduleSystemName}
TESTING TOOL: {scheduleTestingTool}
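Slack performs the variable substitution inside the workflow, but you can preview a message locally with the same placeholder syntax using Python's str.format (all the sample values below are made up):

```python
# Preview the Slack message locally by substituting sample job variables
# (values are made up; Slack performs this substitution in the real workflow).
template = (
    "JOB INFORMATION [{jobName}] {jobUrl}\n"
    "STATUS: {jobStatus}\n"
    "PASS RATE: {jobPassRate}\n"
    "TEST PLAN: {scheduleTestPlanName}"
)
variables = {
    "jobName": "Nightly Regression",
    "jobUrl": "https://example.my.salesforce.com/job/001",
    "jobStatus": "Completed",
    "jobPassRate": "96%",
    "scheduleTestPlanName": "Release 24.1",
}
message = template.format(**variables)
print(message.splitlines()[1])  # STATUS: Completed
```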

Salesforce Apex Unit Test Execution Scheduling

Create a Test Plan Schedule and select the testing tool to be Salesforce (Apex) if you want to leverage Quality Hub’s connections to your Salesforce orgs to run some Apex unit tests.

When configuring its settings, you will see the following form:

  • Tests to Run
    • Test cases/suites added to the Test Plan Schedule. This option runs the test cases you manually select on the schedule using the Select Tests button.
    • Test suites added to the Test Plan.
    • All tests in the org (excluding managed packages). This option is recommended if you haven’t imported any Apex unit tests into Quality Hub.
    • All tests in the org (including managed packages). This may run many irrelevant unit tests belonging to installed managed packages.
  • Skip Code Coverage Calculation. You won’t be able to retrieve the latest code coverage reports if selected, but your tests may run faster.
  • Raise Defects on Failures. Disable it if you don’t want Quality Hub to create Defect records for each failed unit test execution.

Provar Grid Execution Scheduling

To learn more about how to configure Provar Grid schedules, please refer to its documentation here.

Generic API Callout Scheduling

Create a Test Plan Schedule and select the testing tool to be Generic API if you want Quality Hub to make a callout to an API.

When configuring its settings, you will see the following form:

  1. Choose a System of type API.
  2. Choose a related API Path.
  3. (Optional) Send a file as a payload. You can upload a JSON or XML file containing the HTTP body that should be sent to the API.
  4. (Optional) Enter a JSON or XML text in the Payload field on the Test Plan Schedule if you want it to be sent on the request’s body.
  5. (Optional) URL Parameters. These will be appended to the URL like ?{key1}={param1}&{key2}={param2}.

Actionable Insights

Quality Hub provides several ways of gathering insights from your quality-related activities:

  • Quality Hub Dashboards (Executive & QA Views)
  • Quality Journey
  • Quality Center
  • Release Center
  • Packaged reports and dashboards

Because all the information is stored in Salesforce, your team can further leverage the Salesforce platform to create standard reports and dashboards or use BI and analytics tools to produce more profound insights.

Quality Hub Dashboard

The Quality Hub Dashboard page is the home page for the Provar Quality Hub app and the central hub for all key metrics surfaced by Quality Hub. This dashboard gives both executives and QA leaders an overview of their quality-related KPIs and DORA metrics.

It comes with two primary views, Executive and QA Coverage:

  • The Executive view focuses on and highlights quality maturity across 13 KPIs and metrics.
  • The QA Coverage view (coming soon) highlights coverage metrics and analytics for your testing activities across your organization.

To learn more about the Quality Hub Dashboard and how to configure it, see the full documentation here.

Quality Journey

The Quality Journey page is the landing page of the Provar Test Management app; its primary purpose is to help your team members continue with their work or highlight the latest test executions they may need to pay attention to.

This page has five main components:

  • Quick Create button helps you quickly create a test case, raise a defect, or create a test project or a test plan.
  • Your First Steps nudges you to complete some of the most important activities you can perform in Quality Hub.
  • Test Cases to Work On shows your test cases in Draft, Design, Pending Implementation, Implemented, Pending Activation, or Active.
  • Failing Test Cases You Own highlights recent executions where a test case you own failed.
  • Latest Test Cycles allows you to check what tests have been executed lately.

Quality Center

Quality Center is another high-level dashboard tailored to QA managers who want to learn more about the test executions, defects, test coverage, code quality, and test management metrics gathered from all the activities logged into Quality Hub.

You can access the Quality Center from either the Provar Quality Hub or Provar Test Management applications.

It offers three different views of the data:

  • Environments view lets you drill down into particular environments and systems.
  • Test Projects view lets you drill down into particular test projects and test plans.
  • Test Case Management view offers a view into how your teams manage their tests, as well as automation coverage metrics.

Note: it may take a few seconds to load the first time.
Note: Lightning Web Security must be enabled in the Salesforce org for the charts to load.

Release Center

The Release Center is a dashboard that highlights release activities within your testing lifecycle. It is accessible from either the Provar Quality Hub or Provar Release Management apps.

The Release Center gives users access to two primary views that surface insights and high-level data on their release and regression testing progress.

  • Release Progress shows the release timeline, open items for the release, and fixed items, based on the selected filters. You can filter by Project and Release.
  • Regression Testing shows the Test Cycles tracked in a given release for Smoke and Detailed test cycles. You can filter by Test Project and Test Plan.

Dashboards

Test Cases Dashboard

Above: Snapshot of the Test Cases dashboard.

The Test Cases dashboard shows information about your test cases, risks, and test owners.

  • Test Cases by Type is helpful to understand how your test cases are distributed by type, which can trigger questions like Do we have enough % of security tests? or What should our % target be for functional tests?
  • % of Automated Test Cases can be helpful if you aim to increase the number of automated tests, e.g., We want to automate 40% of our test cases by the end of Q2.
  • Risk Coverage shows you how many risks identified in each test plan have test cases against them, which can trigger questions like Why is the Risk Coverage below 50% on this test plan? or How can we test that we are mitigating more risks on this test plan?
  • Test Cases by Owner and Status shows you the test case ownership distribution, which can trigger questions like, Why do we have so many test cases without an owner? or How can we better distribute test cases amongst the team?
  • Test Cases by Status and Owner shows you how test cases are progressing through the pipeline and who is working on what test cases.
  • Test Cases Not Linked to a Test Suite can help you consider questions like A significant % of our test cases are not organized in test suites. How can we fix that?

Test Executions and Defects Dashboard

Above: Snapshot of the Test Executions and Defects dashboard.

The Test Executions and Defects dashboard shows information about the latest test executions and open defects and can be filtered by Environment and Test Execution Date.

  • Overall Pass Rate shows the percentage of test executions that passed each day, providing a high-level overview of the status of QA executions across the projects.
  • Pass Rate by Test Type can help you understand what tests uncover the most quality issues and where your team could focus their energies.
  • Test Executions by Tester is helpful to get an overview of how the daily test executions are distributed across your team.
  • Test Executions by Test Case Owner shows you who owns the tests being executed, which can help you distribute test case ownership more efficiently amongst the team.
  • Open Defects can be used to see if the number of open defects is between the set thresholds.
  • Open Defects by Priority provides information to help you understand how the most critical defects are being fixed.
  • Open Defects by Impact shows you the distribution of open defects depending on their impact on the business.

Test Coverage

Test coverage helps monitor testing quality and assists testers in creating tests that cover missing or unvalidated areas.

User Story Coverage

Above: Snapshot of the User Story Coverage report.

The User Story Coverage report lists all Jira Issues and whether they have associated test cases. This report can help you identify which user stories are missing test cases that validate their acceptance criteria.

Above: Snapshot of the Requirements Traceability Matrix report.

The Requirements Traceability Matrix report lists all the Jira Issues and associated test cases.

Risk Coverage

Above: Snapshot of the Risk Coverage report.

The Risk Coverage report lists all the identified risks on each test plan and whether test cases are associated with each risk. This report can help identify those risks that are missing test cases to ensure the mitigation plan works as expected.

Above: Snapshot of the Risk Coverage Matrix report.

The Risk Coverage Matrix report shows all the test plan risks and associated test cases.

Metadata Coverage

Above: Snapshot of a Coverage Report record.

Each Coverage Report record shows the number of components reported and the individual coverage of each component.

By clicking on an individual component, you can see its coverage evolution over time and the test cases related to that component.

Teams can easily see if coverage drops at a particular time or if the coverage % is lower than the permitted threshold.

Above: Snapshot of a Metadata Component Coverage chart.

How to integrate with Quality Hub

Quality Hub API

Quality Hub provides a custom-built API to process requests of different kinds and retrieve their status.

To learn how to enable the API, please refer to the section How to set up Quality Hub > Configuration > Setup > Quality Hub API Settings.

For more information on the available APIs, please refer to Appendix #4.

Authentication Methods

OAuth 2.0

If you chose OAuth 2.0 as the authentication method, please refer to Salesforce’s documentation on how to authenticate using the OAuth 2.0 protocol.

Other Authentication Methods

If you choose None or API Key as the authentication method, follow these extra steps to make the API available to external systems.

Create a Site
  1. Open the Salesforce Setup page by clicking the gear icon on the top right and selecting Setup.
  2. Click on User Interface > Sites and Domains > Sites on the left side menu.
  3. If there aren’t any other sites already, read and accept the Salesforce Site Terms of Use, and click Register My Salesforce Site Domain.
  4. Create a new Site by clicking on the New button.
  5. Enter the details for your new site. For example:
    1. Site Label: Quality Hub API
    2. Site Name: Provar_Manager_API
    3. Site Description: This site is used to expose the Quality Hub API to external systems.
    4. Active: Checked
    5. Active Site Home Page: InMaintenance
  6. Click Save.
  7. The site should now be visible in the list of sites.
  8. Note the site’s URL, which will be needed later to call the API from external systems. For example:
https://customization-efficiency-b4-dev-ed.scratch.my.salesforce-sites.com/provar
Configure the Public Access Settings
  1. Open the site created previously.
  2. Click the Public Access Settings button.
  3. (Optional) Scroll toward the bottom to define Login Hours or Login IP Ranges if you want to restrict usage of the Quality Hub API to particular hour ranges or IP addresses.
  4. Click on the View Users button.
  5. Click on the user’s Full Name to open it.
  6. Scroll down to the Permission Set Assignments section and click on Edit Assignments.
  7. Select the Quality Hub – API permission set and click Save.

Importing 3rd-Party Test Projects

Quality Hub is built on the Salesforce platform, meaning any tool used to perform ETL (extract, transform, load) tasks on Salesforce can also be used for Quality Hub.

Importing from TestRail

See Appendix #7 for how to import Test Projects and artifacts from 3rd-party tooling natively supported by Quality Hub.

Data Import Tools

Here are some of the most popular ETL tools for Salesforce:

  • Salesforce Data Loader is a free client application for the bulk import/export of data. It can be used to insert, update, delete, or export Salesforce records.
  • Dataloader.io is a popular cloud data loader for Salesforce that lets you quickly and securely import, export, and delete unlimited amounts of data.
  • Jitterbit Salesforce Data Loader is a cloud data loader that quickly and easily automates the import and export of Salesforce data.
  • Dataimporter.io is a cloud data loading tool that lets you connect, clean, and import data into Salesforce.

Importing Test Cases

When migrating from a test management application to Quality Hub, you must know how to import your existing test cases with their test steps.

In this section, we use Salesforce Data Loader, but you may want to use a different tool.

Step 1: Store your test cases in a CSV file

The CSV file should only contain test cases, not test steps. Depending on the information you have about your test cases, you may want to add more or fewer headers to the CSV file, but the following are required:

  • Test Case Name (80 characters max)
  • Test Type (must be one of the available ones in Quality Hub)
  • Status (must be one of the available ones in Quality Hub)
  • Test Project ID (not strictly required but highly recommended)
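As a reference, the required headers could be produced with a short script like the one below. The test type, status, and Test Project ID are placeholders: the type and status must match the picklist values available in your Quality Hub org, and the ID must be a real Test Project record Id from your own org.

```python
import csv

# Write a minimal test case import file with the required headers.
# "Functional testing", "Draft", and the Test Project ID are placeholders.
rows = [
    {
        "Test Case Name": "Lead conversion creates an Opportunity",
        "Test Type": "Functional testing",
        "Status": "Draft",
        "Test Project ID": "a0APv0000000001AAA",
    },
]

with open("test_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["Test Case Name", "Test Type", "Status", "Test Project ID"]
    )
    writer.writeheader()
    writer.writerows(rows)
```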

Step 2: Import the test cases

Once the CSV is ready, follow these steps to import it into Quality Hub:

  1. Open Salesforce Data Loader.
  2. Click the Insert option.
  3. Log into the Salesforce org where Quality Hub is located.
  4. Search for and select the Test Case (provar__Test_Case__c) Salesforce object.
  5. Choose the CSV file and click Next.
  6. Ignore the next screen and click Next.
  7. Click on Create or Edit a Map.
  8. Search for the test case fields and map them to your CSV file headers by dragging and dropping them.
  9. Click OK when you are done.
  10. Click Next.
  11. Select a folder to store the results. You will need the results file afterward.
  12. Click Finish and select Yes in the final prompt.
  13. Review the results and click OK.

Step 3: Import the test steps

Now that the test cases have been imported, it’s time to import their test steps by following the same approach but choosing the Test Step (provar__Test_Step__c) Salesforce object instead. 

The following headers are required:

  • Test Case ID can be found in the results CSV created when importing the test cases.
  • Action (Short) is limited to 80 characters. We recommend entering the full test step action in the Action (Long) field and the truncated version in Action (Short).
  • Sequence No defines the order in which test steps should be executed (e.g., 1, 2, 3, etc…)

Importing Test Suites

Test Suites are a great way of grouping and categorizing test cases. In Quality Hub, a test suite can have many test cases, and a test case can be part of many test suites.

Follow these steps to import your test suites and then relate test cases to them.

Step 1: Store your test suites in a CSV file

The CSV file should only contain test suites, not test cases. The following headers are required:

  • Test Suite Name (80 characters max)
  • Test Project ID (not strictly required but highly recommended)

Step 2: Import the test suites

Once the CSV is ready, follow these steps to import it into Quality Hub:

  1. Open Salesforce Data Loader.
  2. Click the Insert option.
  3. Log into the Salesforce org where Quality Hub is located.
  4. Search for and select the Test Suite (provar__Test_Suite__c) Salesforce object.
  5. Choose the CSV file and click Next.
  6. Ignore the next screen and click Next.
  7. Click on Create or Edit a Map.
  8. Search for the test suite fields and map them to your CSV file headers by dragging and dropping them.
  9. Click OK when you are done.
  10. Click Next.
  11. Select a folder to store the results. You will need the results file afterward.
  12. Click Finish and select Yes in the final prompt.
  13. Review the results and click OK.

Step 3: Relate test cases to their test suites

Now that the test suites have been imported, it’s time to relate the existing test cases to their test suites following the same approach as before, but choosing the Test Suite Case (provar__Test_Suite_Case__c) Salesforce object instead. 

These are the required headers in the CSV file:

  • Test Case Id. You can use Dataloader to export existing test cases with their IDs.
  • Test Suite Id. You can find this ID in the results CSV created when importing test suites.
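As a sketch, the junction file can be assembled from the two ID lists like this. The record IDs below are placeholders; in practice the test case IDs come from your Data Loader export and the test suite ID from the results CSV of the previous import.

```python
import csv

# Placeholder IDs: export test cases with Data Loader to get their IDs,
# and take the test suite ID from the import results CSV.
test_case_ids = ["a0bPv0000000001AAA", "a0bPv0000000002AAA"]
test_suite_id = "a0cPv0000000001AAA"

# Each row relates one test case to one test suite
# (the provar__Test_Suite_Case__c junction object).
with open("test_suite_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Test Case Id", "Test Suite Id"])
    writer.writeheader()
    for tc_id in test_case_ids:
        writer.writerow({"Test Case Id": tc_id, "Test Suite Id": test_suite_id})
```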

Uploading 3rd-Party Test Results

This section explains how to import test results in JUnit or TestNG format from other tools into Quality Hub.

You can use the assistant on the Test Cycle record or the API.

Option 1: Assistant on a Test Cycle

Follow these steps to use the assistant:

  1. Create a Test Cycle record to capture the test results.
  2. Open the test cycle record and click the Upload Test Results button.
  3. Select a tester, the format of the test results (e.g., JUnit), and the missing test case strategy (i.e., what to do when processing test cases that don’t exist in Quality Hub).
  4. Click the Next button.
  5. Select and upload the file in the selected format containing the test results.
  6. Click the Next button again.
  7. Click Finish.

You can check the progress of the job that processes the results by going to the External Test Results section and opening the Test Results Processing Job record created after finishing the assistant steps.

Once the processing job is finished, if there are errors, you can see them in the job record. Otherwise, you will find the generated Test Executions records in the Test Executions section.

Option 2: Through API

If you want to import test results via the API, follow the same approach: create the Test Cycle record first, then the Test Results Processing record, and then upload the file. Afterward, set the Status to Ready, and the results will be processed.

Appendix #1: Third-Party Access List

When installing Quality Hub, you will be asked to approve a list of third-party services that Quality Hub may need to use.

Even though they need to be approved initially, you can disable or remove them afterwards without affecting Quality Hub’s core functionalities.

See below how Quality Hub would interact with those external services.

Remote Site Settings

  • prod.provar.cloud – Proxy to connect to Provar Grid, the Plugin Marketplace, and Provar Assistant.
  • login.salesforce.com – Connect to Salesforce production environments to run tests and retrieve results, coverage, and metadata information.
  • test.salesforce.com – Connect to Salesforce sandbox environments to run tests and retrieve results, coverage, and metadata information.


Trusted URLs

  • assistant.provar.cloud – Interact with Provar Assistant (Provar’s own AI chatbot).
  • video-bundler.lambdatest.com – Used to provide video recordings of Provar Grid’s test executions.
  • provar.cloud – Report usage data analytics.

Appendix #2: JUnit Test Results Sample and Mapping

<?xml version="1.0" encoding="UTF-8"?>
<testsuites disabled="0" errors="1" failures="1" name="" tests="" time="">
    <testsuite errors="1" failures="5" name="nose2-junit" skips="1" tests="25" time="0.004">
        <testcase classname="pkg1.test.test_things" name="test_params_func:2" time="0.000098">
            <failure message="test failure">Traceback (most recent call last): File "nose2/plugins/loader/parameters.py", line 162
</failure>
        </testcase>
    </testsuite>
</testsuites>
XML Tag – Quality Hub Object

testsuites – N/A
    testsuite – N/A
        testcase.classname – Test Execution > Test Case (via External Id)
        testcase.name – Test Step (Action)
            failure.message – Defect (Name)
            failure.body – Defect (Description)
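To illustrate the mapping above, a minimal JUnit results file can be generated with the Python standard library. This is only a sketch: the external ID (TC-001) and step name are made-up values, chosen so that classname resolves to a test case External Id in Quality Hub.

```python
import xml.etree.ElementTree as ET

# classname maps to the test case's External Id in Quality Hub;
# name maps to the Test Step action. Both values here are illustrative.
testsuites = ET.Element("testsuites")
testsuite = ET.SubElement(
    testsuites, "testsuite", name="nightly-run", tests="1", failures="1"
)
testcase = ET.SubElement(
    testsuite, "testcase", classname="TC-001",
    name="Submit the lead form", time="0.2",
)
# failure.message becomes the Defect name; the body becomes its description.
failure = ET.SubElement(testcase, "failure", message="test failure")
failure.text = "AssertionError: expected Converted but found Open"

ET.ElementTree(testsuites).write(
    "results.xml", encoding="UTF-8", xml_declaration=True
)
```

The resulting results.xml can then be uploaded through the Test Cycle assistant described earlier.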

Appendix #3: TestNG Test Results Sample and Mapping

<?xml version="1.0" encoding="UTF-8"?>
<testng-results skipped="0" failed="2" ignored="0" total="8" passed="6">
  <suite name="TestAll" duration-ms="25" started-at="2018-03-06T17:22:48Z" finished-at="2018-03-06T17:22:48Z">
    <test name="calculator" duration-ms="25" started-at="2018-03-06T17:22:48Z" finished-at="2018-03-06T17:22:48Z">
      <class name="com.xpand.java.CalcTest">
        <test-method status="FAIL" name="CanAddNumbersFromGivenData" duration-ms="1" started-at="2018-03-06T17:22:48Z" finished-at="2018-03-06T17:22:48Z">
          <params>
            <param index="0">
              <value>
                <![CDATA[2]]>
              </value>
            </param>
          </params>
          <exception class="java.lang.AssertionError">
            <message>
              <![CDATA[expected [4] but found [5]]]>
            </message>
            <full-stacktrace>
              <![CDATA[java.lang.AssertionError: expected [4] but found [5]
    at org.testng.Assert.fail(Assert.java:93)
]]>
            </full-stacktrace>
          </exception>
          <attributes>
            <attribute name="test">
              <![CDATA[CALC-1]]>
            </attribute>
          </attributes>
        </test-method>
      </class>
    </test>
  </suite>
</testng-results>
XML Tag – Quality Hub Object

testng-results – N/A
    suite – N/A
        test – N/A
            class.name – Test Execution > Test Case (via External Id)
                test-method.name – Test Step (Action)
                    params – Test Step (added to Action)
                    exception – N/A
                        message – Defect (Name)
                        full-stacktrace – Defect (Description)
                    attributes – Test Step (Comments)

Appendix #4: Quality Hub API

Authentication

The Quality Hub API offers three different authentication methods:

  • None – This option allows external systems to call the Quality Hub API without any authentication restriction.

Still, public access can be configured to restrict access based on hour ranges or IP ranges.

  • API Key – This option allows external systems to call the Quality Hub API using an API Key generated by Quality Hub. API requests without an API Key will fail.

    Add the API Key to the request’s headers or query parameters:

Key: API-Key

Value: <your_API_key>


All API requests must be made over HTTPS. Calls made over plain HTTP will fail. 
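For instance, a request carrying the API key in a header could be built with the Python standard library as follows. The site URL, key, and payload values are placeholders for illustration.

```python
import json
import urllib.request

SITE_URL = "https://example.my.salesforce-sites.com/provar"  # placeholder
API_KEY = "your_API_key_here"  # placeholder

payload = {"id": "my-request-1", "testPlanSchedule": {"name": "Nightly run"}}

# The API key travels in the API-Key request header; it could also be
# passed as a query parameter instead. HTTPS is required.
request = urllib.request.Request(
    f"{SITE_URL}/services/apexrest/provar/RunTests",
    data=json.dumps(payload).encode("utf-8"),
    headers={"API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment to send the call
```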

Errors

Quality Hub uses conventional HTTP response codes to indicate the success or failure of an API request. In general, codes in the 2xx range indicate success. Codes in the 4xx range indicate an error that failed given the information provided (e.g., a required parameter was omitted, a request was not found, etc.). Codes in the 5xx range indicate an error with Quality Hub.
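A caller can branch on these ranges; a minimal helper might look like this (the category wording is illustrative, not part of the API):

```python
def describe_status(code: int) -> str:
    """Translate a Quality Hub API HTTP status code into a rough category."""
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "client error: check the request parameters"
    if 500 <= code < 600:
        return "server error: a problem on the Quality Hub side"
    return "unexpected status code"
```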

Resources

Run Tests

POST <site_url>/services/apexrest/provar/RunTests

Submits a request for Quality Hub to run tests as per the given configuration.

There are two main ways of using this API:

  • Run a particular test plan schedule, for which you must provide only a test plan schedule.
  • Run test cases, for which you must provide a test plan, tester, system under test, testing tool, and test case automation tool.

Parameters

  • id (String) – Id used by the external system to identify the request.
  • description (String) – An arbitrary string that you can attach to the test execution.
  • *testPlan (Map) – Test plan ID or name that contains the tests.
  • testPlanSchedule (Map) – ID or name of an existing Test plan schedule to run.
  • testCases (List) – List of test case IDs, names, or external IDs to run.
  • testSuites (List) – List of test suite IDs, names, or external IDs to run.
  • metadataComponentDeveloperNames (List) – List of metadata component developer names related to the test cases to run.
  • *systemUnderTest (Map) – System ID or name to target.
  • *tester (Map) – ID, name, or email address of the person triggering the tests.
  • *testingTool (String) – Tool used to run the tests. Possible values: Apex, ProvarGrid, GenericAPI.
  • *testCaseAutomationTool (String) – Tool used to automate the tests. Possible values: Apex, ProvarAutomation, Selenium, Other.
  • missingTestCaseStrategy (String) – Strategy to follow for test cases present in the test results but not in Quality Hub. Possible values: Ignore, Create. Default: Ignore.
  • missingTestCaseTestType (String) – Type of test to create when the strategy is to create missing test cases. Possible values: see Appendix #5.
  • groupByTestSuite (Boolean) – Enable to run the test suites related to the test cases instead of just the test cases. Default: false.
  • emailAlertSettings (Map) – List of recipient email addresses to notify when the tests succeed and/or fail.
  • slackAlertSettings (Map) – Slack channel to notify when the tests succeed and/or fail.
  • settings (Map) – Set of string key-value pairs containing the settings to be used with the testing tool. Possible values: see Appendix #6.

// Apex Example
{
    "id": "93ee414f-4925-47ee-90db-d6e7a7be737b",
    "description": "Run Lead form Apex unit tests in QA sandbox.",
    "testPlan": {
        "name": "Apex Unit Tests"
    },
    "testCases": [
        { "name": "LeadFormControllerTest" },
        { "name": "WebsiteIntegrationTest" }
    ],
    "metadataComponentDeveloperNames": ["LeadForm.page"],
    "systemUnderTest": {
        "name": "Salesforce QA Org"
    },
    "tester": {
        "externalId": "john.smith@testing.com"
    },
    "testingTool": "Apex",
    "testCaseAutomationTool": "Apex",
    "missingTestCaseStrategy": "Create",
    "missingTestCaseTestType": "Unit testing",
    "emailAlertSettings": {
        "onSuccess": true,
        "onFailure": true,
        "emailAddresses": ["john.smith@testing.com", "silvia.ramirez@testing.com"]
    },
    "slackAlertSettings": {
        "onSuccess": true,
        "onFailure": true,
        "webhookName": "Post to QA channel"
    },
    "settings": {
        "APEX_TEST_LEVEL": "RunSpecifiedTests",
        "APEX_SKIP_CODE_COVERAGE": "FALSE"
    }
}

// Provar Grid Example
{
    "description": "Run specific functional test case on Provar Grid",
    "testPlan": {"name": "Lead to Cash"},
    "systemUnderTest": {
        "name": "Salesforce QA Org"
    },
    "testCases": [
        { "id": "a0bPv000000NVeDIAW" }
    ],
    "groupByTestSuite": true,
    "tester": {"name": "John Smith"},
    "testingTool": "ProvarGrid",
    "testCaseAutomationTool": "ProvarAutomation",
    "missingTestCaseStrategy": "Ignore",
    "emailAlertSettings": {
        "onSuccess": true,
        "onFailure": true,
        "emailAddresses": ["john.smith@testing.com"]
    },
    "slackAlertSettings": {
        "onSuccess": true,
        "onFailure": true,
        "webhookName": "Test Plan Schedule Job Finished"
    },
    "settings": {
        "PROVAR_AUTOMATION_VERSION": "latest",
        "PROVAR_GRID_TEST_LEVEL": "RunSpecifiedTests",
        "TEST_PLAN_PDF": true,
        "TEST_PLAN_PIE_CHART": true,
        "PROVAR_GRID_CONCURRENCY": 4,
        "PROVAR_GRID_BROWSERS": "[\"Windows 10;Firefox;121.0\",\"Windows 11;Chrome;119.0\",\"macOS Monterey;Safari;15.0\",\"ubuntu 20;MicrosoftEdge;120.0\"]",
        "PROVAR_GRID_OVERRIDE_TEST_ENVIRONMENT": true,
        "PROVAR_GRID_TEST_ENVIRONMENTS": "a0APv000000gXu1MAE,a0APv000000gXu2MAE"
    }
}

Response

  • status (String) – Possible values: success, error.
  • message (String) – Describes the outcome of the request.
  • requestId (String) – UUID of the request.

{
    "requestId": "03fb97c5-716c-408a-a10f-962d8813fc27",
    "status": "success",
    "message": "The request has been queued for processing"
}

Get Request Status

GET <site_url>/services/apexrest/provar/GetRequestStatus

Queries the latest status for a particular request.

The request identifier must be passed as a URL parameter. Either of these two types of identifiers can be used:

  • externalId. This would be the “id” of the initial request.
  • requestId. This is the UUID returned by the API upon performing the initial request.

Response

  • status (String) – Possible values: error, Queued, Preparing, Processing, Completed, Failed.
  • message (String) – Describes the status of the request.
  • lastUpdated (Datetime) – Date time when the request was last updated.

{
    "message": "",
    "status": "Completed",
    "lastUpdated": "2024-02-01T21:06:41.000Z"
}
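Putting this endpoint to use, a client could poll until the request reaches a terminal status. This is a sketch with the Python standard library; the site URL is a placeholder, and the terminal-status set reflects the values listed above.

```python
import json
import time
import urllib.request

SITE_URL = "https://example.my.salesforce-sites.com/provar"  # placeholder

# Statuses after which no further change is expected.
TERMINAL_STATUSES = {"Completed", "Failed", "error"}

def status_url(request_id: str) -> str:
    """Build the GetRequestStatus URL for a given request UUID."""
    return (
        f"{SITE_URL}/services/apexrest/provar/GetRequestStatus"
        f"?requestId={request_id}"
    )

def wait_until_done(request_id: str, poll_seconds: int = 30) -> dict:
    """Poll the API until the request reaches a terminal status."""
    while True:
        with urllib.request.urlopen(status_url(request_id)) as response:
            result = json.load(response)
        if result["status"] in TERMINAL_STATUSES:
            return result
        time.sleep(poll_seconds)
```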

Abort Test Run

GET <site_url>/services/apexrest/provar/AbortTestRun

Aborts test runs.

The request identifier must be passed as a URL parameter. Either of these two types of identifiers can be used:

  • externalId. This would be the “id” of the initial request.
  • requestId. This is the UUID returned by the API upon performing the initial request.

Response

  • status (String) – Possible values: success, error.
  • message (String) – Describes the outcome of trying to abort the test run.

{
    "message": "The provided test run had already started so it will be aborted as soon as possible.",
    "status": "success"
}

Appendix #5: Test Case Types

These are the test types currently enabled in Quality Hub.

  • Acceptance testing
  • API testing
  • Automated testing
  • Continuous testing
  • Domain testing
  • End-to-end testing
  • Exploratory testing
  • Functional testing
  • Manual scripted testing
  • Non-functional testing
  • Regression testing
  • Sanity testing
  • Unit testing
  • Security testing

Appendix #6: Testing Tool Settings

Provar Grid

  • *PROVAR_GRID_TEST_LEVEL – Possible values: RunSpecifiedTests, RunPlanTests. Default: RunPlanTests.
  • *PROVAR_GRID_CONCURRENCY – Number of concurrent tests. Default: 1.
  • PROVAR_AUTOMATION_VERSION – Possible values: <empty>, custom, <provar_automation_build_number>.
  • PROVAR_ANT_URL – URL of the Provar Automation ANT ZIP file.
  • PROVAR_TEST_PROJECT_REL_PATH – Relative path to the Provar Test Project from the root of the repository, e.g., /MyTestProject.
  • PROVAR_GRID_DISABLE_SMART_AUTO_SPLIT – TRUE or FALSE.
  • PROVAR_GRID_SPLIT_BY_TEST_CASE – TRUE or FALSE.
  • PROVAR_GRID_RETRY_ON_FAILURE – TRUE or FALSE. Default: FALSE.
  • PROVAR_GRID_MAX_RETRIES – Maximum number of retries. Default: 0.
  • TEST_PLAN_PDF – TRUE or FALSE.
  • TEST_PLAN_PIE_CHART – TRUE or FALSE.
  • PROVAR_GRID_OVERRIDE_TEST_ENVIRONMENT – TRUE or FALSE.
  • PROVAR_GRID_TEST_ENVIRONMENTS – Comma-separated list of Environment record IDs, without whitespace or enclosing square brackets, e.g., "a0AQz000000WKWgMAO,a0AQz000000WaV0MAK".
  • *PROVAR_GRID_BROWSERS – Comma-separated list of browser configurations, e.g., ["Windows 11;Chrome;latest","macOS Monterey;Chrome;latest"].
  • ENVVAR_<ENVIRONMENT_VARIABLE_NAME> – Any value you’d like to set to the environment variable.
  • CUSART_<ARTIFACT_NAME> – Path of the artifact, relative to the test project folder.
  • JAVA_VERSION – 21 (2.15.2+ and all 3.0+ versions), 11 (prior to 2.15.2).
  • PROVAR_GRID_CONSOLIDATE_TEST_REULTS – TRUE or FALSE.

Salesforce (Apex)

  • *APEX_TEST_LEVEL – Possible values: RunSpecifiedTests, RunPlanTests, RunLocalTests, RunAllTestsInOrg. Default: RunPlanTests.
  • APEX_SKIP_CODE_COVERAGE – TRUE or FALSE. Default: FALSE.
  • APEX_RAISE_DEFECTS_ON_FAILURE – TRUE or FALSE. Default: TRUE.

Generic API

  • *API_ENDPOINT – Id of a System record of type “API”.
  • *API_PATH – Id of an API Path record related to the above System.
  • API_BODY_FILENAME – Name of the file to be sent as the body of the request, without its file extension.
  • URLPARAM_<KEY> – Value of the parameter to be appended to the URL.

Appendix #7: Migrating Test Projects from 3rd-Party Tools

You can migrate your test projects from other supported test management tools using Quality Hub’s Test Project Import Assistant. For example, you can import TestRail projects and test artifacts natively using the Import option on the Test Projects tab in the Provar Test Management app.

Firstly, you need to establish a connected System record to access the TestRail APIs.

  1. Navigate to the Provar Test Operations app and create a new System record.
  2. Configure the Authentication type and credentials to establish the connection to TestRail.

Note: If you use a Named Credential, follow the Salesforce guidelines for creating Named Credentials for accessing External Systems via API Callouts. You can also follow this article for the basic steps: Using Named Credentials for Secure API Calls in Salesforce – Jeet Singh

  1. Open the Provar Test Management app and click on the Test Projects tab.
  2. Click on the Import button.
  3. Select a test management tool and click Next.
  4. Select an Environment (e.g., Production), either select an existing system that represents the connection to the test management tool or create a new one, and click Next.
  5. Click Next. You will either see a list of projects to import or a failure message. If you see a failure message, add your TestRail Base URL endpoint as a Remote Site Setting in the Setup menu, as indicated in the message.
  6. Once the Remote Site is added, select a test project to import and click Next.
  7. Define the mapping between the test management tool’s test case fields and Quality Hub’s test case fields. Click Save Mapping to save your changes and then click Import.
  8. Review the results and click Open Test Project to open the imported test project.
