
Performance Testing : PerformanceRequirement

Most recent edit on 2008-10-14 16:27:25 by Admin

|=|Question|=|When|=|Why do we ask this?||
||User Base||||
||Total number of users?||BARR||Defines the size of the user information database (the logins that need to be created for the simulation).||
||How many concurrent users?||BARR||Defines the load that the system should be able to handle. Concurrent means: how many users are busy with the system at the same time.||
||How many peak users?||BARR||Defines the maximum load that the system should be able to handle.||
||How often do the peaks occur, and when?||BARR||Defines how often the maximum number of users occurs and when the peaks occur.||
||Roles of the users?||BARR||Defines the breakdown of the users into roles that could have a different impact on the system resources. For instance: ESR, data entry, management, supervisor, etc.||
||Which roles support which business functions/user scenarios?||BARR||Defines the breakdown of the users into roles/activities that could have a different impact on the system resources.||
||Location of the users?||BARR, TARC||Defines potential remote connections and therefore specific test requirements.||
||Network connection methods of the users?||BARR, TARC||Defines whether the users need to connect through alternative mechanisms (instead of the LAN). This could result in specific test requirements.||
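The concurrency and peak questions above are usually turned into a numeric load model before scripting begins. One standard way to relate concurrent users, throughput, and response time (a common queueing identity, not part of the original checklist) is Little's Law; a minimal Python sketch with illustrative numbers:

```python
def required_concurrency(throughput_per_sec: float,
                         response_time_sec: float,
                         think_time_sec: float) -> float:
    """Little's Law for a closed workload: N = X * (R + Z).

    N: concurrent users, X: completed user actions per second,
    R: average system response time, Z: average user think time.
    Returns the number of simulated users needed to sustain X.
    """
    return throughput_per_sec * (response_time_sec + think_time_sec)

# Illustrative numbers: sustaining 10 actions/sec with a 2 s response
# time and 18 s of think time needs about 200 concurrent users.
print(required_concurrency(10, 2, 18))  # 200.0
```

The same identity also works in reverse: given the answer to "how many concurrent users?", it bounds the throughput the simulation can be expected to produce.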



||Activities||||
||Supported business functions?||BARR||Defines the distinct business functions and enables the test group to assess the resource usage per business function.||
||Business functions ranked by frequency||BARR||Defines how often certain business functions are used, in order to assess the overall resource usage.||
||User scenarios||BARR||Defines how the system will be used by the users. It is necessary to simulate this in a performance test.||
||User scenarios ranked by frequency||BARR||Defines how often certain scenarios are executed, so that a correct simulation can be defined. This results in information about the throughput of the system.||
||Batch run times||CSD||Defines how much time is set aside for batch programs to run, and therefore how much time in a day is available for the online system. This is needed to estimate the daily throughput.||
||User scenarios ranked by response sensitivity (e.g. talking on the phone with a client)||BARR||Defines which scenarios belong to the front office and which to the back office. In the simulation, special attention can be given to the front office activities.||
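The frequency ranking and the batch window together determine the throughput the simulation must reproduce: daily scenario counts spread over whatever part of the day is left for the online system. A minimal sketch of that arithmetic, with a purely hypothetical scenario mix:

```python
def hourly_rates(daily_mix: dict[str, int], online_hours: float) -> dict[str, float]:
    """Spread each scenario's daily execution count evenly over the
    online window (24 h minus the batch run time).

    daily_mix maps a scenario name to its executions per day. Real
    workloads are not uniform across the day, so peak-hour rates
    should additionally be scaled up from the peak-usage answers.
    """
    return {name: count / online_hours for name, count in daily_mix.items()}

# Hypothetical mix: batch jobs claim 6 h, leaving an 18 h online window.
mix = {"information request": 9000, "save/submit": 3600, "search": 1800}
print(hourly_rates(mix, online_hours=24 - 6))
# {'information request': 500.0, 'save/submit': 200.0, 'search': 100.0}
```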



||Environmental Constraints||||
||Operating hours?||TARC, CSD||Defines the requirements related to the stability of the application. If the application is supposed to run around the clock, then tests related to this requirement should be executed to assess the stability of the system components.||
||Dependencies on other systems?||BARR, TARC, CSD||Defines how other systems could potentially influence the performance results of the system under test. If a significant influence is expected, then this influence should be simulated in the tests.||
||Dependencies on system resources?||TARC, CSD||Defines how specific system resources could potentially influence the performance results of the system under test. If a significant influence is expected, then this influence should be simulated in the tests.||
||System resources are shared with?||TARC, CSD||Defines how specific shared system resources could potentially influence the performance results of the system under test. If a significant influence is expected, then this influence should be simulated in the tests.||
||Security requirements that might impact performance (SSL, firewalls, SSO)||TARC||Defines how specific security requirements could potentially influence the performance results of the system under test. If a significant influence is expected, then this influence should be quantified in the tests.||
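For systems that must run around the clock, stability is typically assessed with a long-running soak test: drive a representative scenario continuously and count failures over time. A minimal sketch of such a harness (the `action` callback and the tiny demonstration duration are placeholders, not a real tool's API — a real run would last hours or days):

```python
import time

def soak_test(action, duration_sec: float) -> tuple[int, int]:
    """Repeatedly run `action` until `duration_sec` elapses.

    Returns (successes, failures). A rising failure count or a
    slowdown over the run is the stability signal this checklist
    question is asking about.
    """
    ok = failed = 0
    deadline = time.monotonic() + duration_sec
    while time.monotonic() < deadline:
        try:
            action()
            ok += 1
        except Exception:
            failed += 1
    return ok, failed

# Tiny demonstration run with a no-op action standing in for a scenario.
ok, failed = soak_test(lambda: None, duration_sec=0.05)
print(failed)  # 0
```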



||Response times||||
||Information request||BARR, CSD||Defines the general performance expectation for this activity. Information request is the retrieval of information based on a user-supplied key.||
||Value lookup||BARR, CSD||Defines the general performance expectation for this activity. Value lookup is the retrieval of a code value description.||
||Save/Submit entered information||BARR, CSD||Defines the general performance expectation for this activity. Save/Submit is storing the new or changed information in the database.||
||Search information||BARR, CSD||Defines the general performance expectation for this activity. Search information is using a search facility in the application that does not necessarily use key values.||
||Start to print||BARR, CSD||Defines the general performance expectation for this activity. Start to print is the time between giving the print command and the moment the printer starts to print.||
||Time to print||BARR, CSD||Defines the general performance expectation for this activity. Time to print is the time it takes for the print action to finish, measured from the moment the printer starts.||
||Window/Page appearance||BARR, CSD||Window/Page appearance refers to the time it takes for a new window/page to show up and become usable.||
||Window/Page operational||BARR, CSD||Window/Page operational refers to the time it takes to operate features in the window, such as push buttons, list boxes, etc.||
||Front office/Back office values for all of the above||BARR, CSD||Defines the difference between the expected response times of a front office function and a back office function.||
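Response-time expectations like the ones above are usually stated and verified as percentiles rather than averages, because an average hides the slow outliers that front-office users actually feel. A minimal nearest-rank percentile sketch, with made-up measurements:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the value at or below which p percent
    of the measured response times fall. Targets are commonly phrased
    this way, e.g. "95% of information requests under 2 seconds".
    """
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Ten hypothetical response-time measurements, in seconds.
times = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.9, 3.5]
print(percentile(times, 90))  # 1.9
```

Note how the mean of these samples (about 1.5 s) would mask the 3.5 s outlier that a p95 or p99 target catches.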



||Risks||||
||What are the risks to the scenarios, the system, and the WCB if performance criteria are not met?||Charter, BARR||Defines the risks that will drive the plan for Performance Testing.||
||Common risk factors (listed below)||Charter, BARR||These are the most common risk factors related to performance.||

Common risk factors:
  • Large user base
  • New technology
  • Unproven technology
  • New use of existing technology
  • Inexperienced team
  • High peak usage
  • Large portion of front office activities
  • External exposure
  • Stringent security measures
  • Resource-intensive scenarios



Oldest known version of this page was edited on 2004-02-13 16:22:22 by Roland Stens
Page History :: 2008-10-14 16:27:25 :: Owner: Roland Stens
Powered by Wikka Wakka Wiki 1.1.6.0