
2. Crown Platform Test Strategy

Version 0.1

05/12/2017

Chris Kibble

3. Document History, Approval & Contributors

3.1. Revision History

Version | Revised By | Date | Status | Release Notes
0.1 | Chris Kibble | 05/12/2017 | | Creation

3.2. Internal Review

Role | Name

3.3. Document Approval

Role | Name | Signature | Date

3.4. Document Contributors

The following people contributed to the production of this document:

Role | Name
Test Manager | Chris Kibble

4. Glossary of Terms

The following acronyms and abbreviations are used in this document:

Abbreviation | Full Description
CRW | Crown Platform

Overview

The purpose of this document is to formally record the generic process to be adopted when testing Crown Platform's product suite.

It is acknowledged that each product's testing requirements are individual and should be approached pragmatically. The testing approach will be agreed on a project-by-project basis and outlined in the appropriate test plan.

For each proposal, or release within a proposal, the test activity for each test phase should involve:

  • The creation of a test plan and test script, with supporting test cases, for each area of functionality
  • The execution of the test scripts and recording of results (a minimal sketch of such a record follows this list)
  • Verification that the agreed success criteria have been met
  • The identification and recording of new defects and re-testing of 'fixed' defects
  • The regression testing of the final proposal to be delivered
  • Formal validation of the agreed exit criteria
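
Purely as an illustration of what recording of results might look like in practice (the structures and field names below are assumptions, not an existing CRW tool), a test script and its recorded results could be represented along these lines:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TestCase:
    case_id: str                       # e.g. "CRW-WALLET-001" (hypothetical naming scheme)
    description: str
    result: Optional[str] = None       # "pass" / "fail", recorded at execution
    defect_ref: Optional[str] = None   # reference to a raised defect, if any


@dataclass
class TestScript:
    functional_area: str
    cases: List[TestCase] = field(default_factory=list)

    def record(self, case_id: str, result: str, defect_ref: Optional[str] = None) -> None:
        """Record an execution result (and any defect reference) against one case."""
        for case in self.cases:
            if case.case_id == case_id:
                case.result = result
                case.defect_ref = defect_ref
                return
        raise KeyError(f"Unknown test case: {case_id}")
```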

The objective of the test plan is to recommend and document the most effective and appropriate testing to support the proposal. The test plan details all phases of testing required, with particular emphasis on:

  • the order in which the testing should be performed
  • entry and exit criteria for each test
  • the objectives of testing
  • creation and maintenance of test plans and scripts
  • defect management
  • test metrics and measurements
  • test environments and ownership

Entry Criteria

It is a standard prerequisite of the testing that all technical specifications have been completed, agreed and signed off by all relevant members of the proposal team. This allows the test manager sufficient time to become familiar with the use cases, draw up a test plan or testing mind map, and highlight any potential risks to the delivery of the proposal.

Other, project-specific, test entry criteria are agreed and outlined within the appropriate test plan and must be confirmed as still valid at the start of each phase of testing.

Test Metrics

Various statistical techniques can be used to analyse measurements of process capability or software characteristics (e.g. reliability, usability), with the objective of producing data that can be used to evaluate software quality and process capability.

As standard, CRW monitors basic metrics covering the total number of tests, the number attempted and the number passed/failed for each test iteration.

Extended metrics can be provided as agreed within the test plan to make sure the proposal solution is robust.
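
As a minimal sketch (not part of the CRW toolchain; the `Outcome` names and function below are assumptions for illustration), these basic per-iteration metrics could be collected as follows:

```python
from enum import Enum


class Outcome(Enum):
    NOT_RUN = "not run"
    PASSED = "passed"
    FAILED = "failed"


def iteration_metrics(outcomes: list) -> dict:
    """Summarise one test iteration: total tests, number attempted, passed and failed."""
    attempted = [o for o in outcomes if o is not Outcome.NOT_RUN]
    return {
        "total": len(outcomes),
        "attempted": len(attempted),
        "passed": sum(1 for o in attempted if o is Outcome.PASSED),
        "failed": sum(1 for o in attempted if o is Outcome.FAILED),
    }
```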

Exit Criteria

The exit criteria of testing depend upon the success of the test execution, the number of outstanding issues to be resolved, the proposal deadline, resource availability and the commitment to deliver by the proposal dates.

Other, project-specific, test exit criteria are agreed and outlined within the appropriate test plan and are validated on completion of each phase of testing.

Upon completion of testing, a test exit report can be provided by the test manager. This report summarises the testing conducted together with the defects identified during testing. Additional information, such as performance, load and security results, can be provided to give confidence in the completed proposal.
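
Purely as an illustration (the record shapes and severity field below are assumptions, not a defined CRW report format), an exit-report summary might aggregate results and outstanding defects along these lines:

```python
from collections import Counter


def exit_report_summary(results, defects):
    """Summarise testing and defects for a test exit report.

    results: list of (test_id, outcome) pairs, outcome in {"passed", "failed", "not run"}.
    defects: list of (defect_id, severity, status) tuples.
    Both shapes are assumptions for illustration only.
    """
    outcomes = Counter(outcome for _, outcome in results)
    open_defects = Counter(
        severity for _, severity, status in defects if status != "closed"
    )
    return {
        "tests_total": len(results),
        "tests_passed": outcomes["passed"],
        "tests_failed": outcomes["failed"],
        "open_defects_by_severity": dict(open_defects),
    }
```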

Where possible, testing should continue past 'go-live' to proactively identify and mitigate any issues which may affect the system once it is live with the CRW community.
