Analytical modeling uses queuing theory formulas and algorithms to predict response times and utilization from workload characterization data and essential system relationships. Analytic models require input data such as arrival rates, user profiles, and the service demands each workload places on various system resources. System monitors and accounting logs can provide most or all of the required information. Most analytic models rely on simplifying assumptions to keep the solution simple and efficient. The effect of these assumptions on the accuracy of your conclusions must be considered when making recommendations.
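As a sketch of the kind of closed-form relationship such models use, the classic M/M/1 single-server queue (a standard queuing-theory result, used here purely as an illustration) relates arrival rate and service rate directly to utilization and mean response time:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 queue: Poisson arrivals, exponential service times,
    one server. Returns (utilization, mean response time)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable system: arrival rate >= service rate")
    utilization = arrival_rate / service_rate             # rho = lambda / mu
    response_time = 1.0 / (service_rate - arrival_rate)   # R = 1 / (mu - lambda)
    return utilization, response_time

# Example: 8 transactions/sec arrive; the server can handle 10/sec.
rho, r = mm1_metrics(8.0, 10.0)
print(f"utilization={rho:.0%}, mean response time={r:.2f}s")
# utilization=80%, mean response time=0.50s
```

Note how quickly response time degrades as utilization approaches 100% — the same formula with 9.5 arrivals/sec predicts a 2-second response time.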
Analytical modeling is best used when a numeric prediction is needed, when workload forecasts are fairly accurate, when a best-case/worst-case situation holds, or when an accurate answer can save significant money, time, or headaches.
For more, see the following article.
Analytical and Simulation Modeling for Network Design and Planning
By Lou Breit
There are significant differences between analytical modeling and simulation modeling, particularly in the areas of scalability and complexity.
Analytic modeling, a mathematical representation of a computer or other system for performance analysis, has limited ability to accurately model complex or dynamic environments. Single hosts or nodes in a simple platform can be modeled quickly through mathematical means, for example by having a large number of users generate an "average"-size transaction against a single server. Queuing theory equations are used to estimate the performance or response time of an existing or planned system by relating input parameters to output statistics.
While these types of models can be created quickly, they cannot accurately handle the following types of systems:
- Concurrent access to internal resources (memory, disk, etc.)
- Prioritization of traffic streams or processes
- Background or less significant communication protocols between components
- Interrupts and the ability of one process to block or impede another
- Complex events which may shift or vary over a known time interval
- Validation of interdependent functions or processes
- Systems with variable loads, spikes, or unpredictable client interaction
These limitations force the use of approximation and estimation
techniques, increasing the likelihood of inaccurate results. In addition, analytical models cannot be effectively used for the following types of project work:
- Where a new design is under consideration and no baseline model exists;
- When a component upgrade or change is under review and its effect on performance must be determined;
- When many interdependent transactions exist within a system and their combined effect on response time as they compete must be measured.
An analytic model represents all traffic at a single level, which is very effective for simple client/server or mainframe environments where load and traffic volumes remain fairly consistent. These narrowly focused, simple models are well suited to existing environments that remain static with little variation.
The Simulation Side
Simulation modeling, in contrast, offers the ability to create a valid representation of an entire complex system consisting of all computers, routers, bridges, applications, and database servers. In the simulation, the interaction of all these components can be exercised and the results measured under various scenarios. Simulation modeling can be used for any system, but it is particularly well suited for complex environments. In detail, it can be used to do the following:
- Maintain accuracy by scaling and reacting to system or platform variations.
- Provide valid results for all types of software applications, both in-house and commercial.
- Address capacity planning questions at both the hardware and software resource levels, while allowing the connection between the two to influence the results.
- Represent workload fluctuations on a minute-by-minute basis, with batch arrivals and processing placed in parallel with interactive traffic flow.
- Create accurate models of truly concurrent systems where multiple threads may exist within one or several components.
- Represent database locking and I/O processing with greater accuracy.
- Provide an essentially unlimited approach to modeling any system, regardless of complexity.
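To illustrate the difference in approach, here is a minimal, hypothetical event-driven simulation of a single FIFO server in Python. Unlike a closed-form queuing formula, it accepts any arrival pattern, including bursty workloads:

```python
def simulate_fifo_server(arrival_times, service_times):
    """Step through requests at a single FIFO server, tracking when the
    server next becomes free. Returns the response time of each request."""
    server_free_at = 0.0
    response_times = []
    for arrive, service in zip(arrival_times, service_times):
        start = max(arrive, server_free_at)    # wait if the server is busy
        server_free_at = start + service       # server busy until this time
        response_times.append(server_free_at - arrive)
    return response_times

# Bursty workload: four requests arrive almost simultaneously, one later.
arrivals = [0.0, 0.1, 0.1, 0.2, 5.0]
services = [1.0] * 5
rts = simulate_fifo_server(arrivals, services)
print([round(t, 1) for t in rts])  # [1.0, 1.9, 2.9, 3.8, 1.0]
```

The burst at the start produces response times nearly four times worse than the isolated request at t=5.0 — exactly the kind of spike behavior an "average"-load analytic model smooths away.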
Since the majority of today's computer networks are dynamic in design, one small change in a single component or process can trigger a much larger change and have a negative effect on end user response time and system availability. These small errors or miscalculations may be compounded through the planning and design process, until the cost of correction within the finished system becomes excessive. Simulation modeling can highlight these errors and eliminate them before the system or platform is finalized.
Another method of performance analysis is through the use of load or stress testing, where system behavior is observed and recorded under predicted user volume. This type of testing usually involves the use of tools which can create groups of virtual users or agents which mimic real clients running actual application tasks.
For example, if a product manager needs to know whether 100 users can update a Web server inventory database at the same time, the manager would request that a load test matching those requirements be run against the server and application.
During the test, server and client resources, response time, etc. would be recorded for analysis.
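As an illustrative sketch only (a real load test would use a dedicated tool driving the actual application), the virtual-user idea can be mimicked with threads; the task below is a stand-in sleep, not a real inventory update:

```python
import statistics
import threading
import time

def virtual_user(task, n_requests, latencies, lock):
    """One simulated client: run the task repeatedly, recording latency."""
    for _ in range(n_requests):
        start = time.perf_counter()
        task()                                  # stand-in application task
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

def load_test(task, n_users=100, n_requests=5):
    """Launch n_users concurrent virtual users and summarize latencies."""
    latencies, lock = [], threading.Lock()
    users = [threading.Thread(target=virtual_user,
                              args=(task, n_requests, latencies, lock))
             for _ in range(n_users)]
    for u in users:
        u.start()
    for u in users:
        u.join()
    return statistics.mean(latencies), max(latencies)

# 100 virtual users, each issuing 5 requests against a 10ms stand-in task.
mean_rt, worst_rt = load_test(lambda: time.sleep(0.01), n_users=100)
print(f"mean={mean_rt * 1000:.1f}ms  worst={worst_rt * 1000:.1f}ms")
```

In a real test the lambda would be replaced by the scripted application transaction, and server-side resources would be monitored alongside the client-side latencies.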
While load or stress testing can be helpful, the process does have several limitations.
- The system or platform under review must have already been designed and implemented into production.
- The scripts simulating the user transactions must be carefully designed and tested, to ensure they closely match the actual application tasks in size and structure.
- Numerous details surrounding the actual production environment may reduce the accuracy of the testing. For example, actual user workstations may have different settings than the machine used for the load test, skewing the results.
Most important, load
or stress testing
cannot be used as a means to improve system performance or predict how to change the system to meet varying workloads. Instead, a combination of analytic and simulation modeling combined with a load testing initiative (for existing systems) may be the best approach for capacity planning and performance analysis.
Network Tip of the Month
Instead of taking an "average" calculation for analysis, use a geometric mean (found in many spreadsheets as a defined formula). In many instances, this will provide a more accurate view of trends when reviewing network utilization, system metrics, etc.
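A quick sketch in Python (the utilization samples are made up for illustration) shows how a single spike pulls the arithmetic mean up far more than the geometric mean:

```python
import math

def geometric_mean(values):
    """Geometric mean: the n-th root of the product, computed via logs
    to avoid overflow. All values must be positive."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Network utilization samples (%): one spike skews the arithmetic mean.
samples = [10, 12, 11, 13, 90]
print(sum(samples) / len(samples))        # arithmetic mean: 27.2
print(round(geometric_mean(samples), 1))  # geometric mean: 17.3
```

The geometric mean stays closer to the typical 10-13% range, which is usually a better summary of a trend than an average distorted by one outlier.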
Lou Breit is a senior IT specialist in the network and application performance analysts division of First Data Merchant Services.
Contact him at email@example.com.