
WHITE PAPER

When Performance is Critical: Application Performance Optimization with Compuware Vantage

Introduction and Scope

Most optimization efforts begin with a phone call. The stated goal: improve end-user experience--fast. As application infrastructures grow in complexity, the groups that manage them become increasingly specialized. Most infrastructure management tools don't detect the problems end users report, so there is no clear path to a fix. Without the right approach to bring everyone to the same focus, teams risk following different marching orders and fighting different battles.

In one such engagement, a mission-critical client-server application supporting over 18,000 users needed quick, effective troubleshooting for issues affecting performance. Our consulting firm led the testing effort for the optimization project and was in the hot seat to jumpstart the effort and keep it moving in the right direction.

This white paper presents a Vantage solution for application performance optimization, drawing from our experience with large-scale performance tuning efforts. The solution helps companies with mission-critical internal- or external-facing applications establish a process that aligns with the Service Delivery components of ITIL. The paper follows a phased approach: discovery, planning, application verification, transaction analysis, and findings identification and implementation. This approach assumes knowledge of the components of ITIL Service Management and embeds that knowledge within the process. The topic of application optimization is broad; our goal here is to provide a repeatable process for performance tuning that we have successfully employed to find effective solutions. Application optimization is covered only to the extent of the team's essential efforts to understand the application, choose the toolset, and fine-tune solutions for short-term implementations.

The paper is targeted at technical staff tasked with leading a team to performance-tune applications. It is also beneficial for managers and customers who want to develop an understanding of the process. Depending on your organization, the effort could include one team or a few individuals. Throughout our discussion, we will refer to the optimization "team" and the groups the team might include. Whatever the size of the team, stakeholders in application optimization all require the same focus: improved performance.


The Approach

When end users are vocal about application issues affecting productivity, executives send out a call to battle--a race to optimize application performance and improve the quality of IT services. Executives and managers lead the optimization effort, while the project team includes multiple groups that are highly specialized (end-user representatives, vendors, developers, and so on). Team members may only speak the same language in that they want to see improvement, fast. In one room, the optimization "war room," all of these perspectives must agree on one coordinated approach to resolve performance slowdowns and improve end-user experience. To focus the effort, our process follows a phased approach in line with the cycle of "plan--do--check--act." This phased approach (shown in Table 1) enables the team to find, verify, and implement solutions that lead to the most noticeable performance gains.

Table 1 Phases of Application Optimization

Optimization Phase: Process Steps

Discovery:
- Identify known problems.
- Develop an understanding of how the application works.
- Identify actionable items.
- Control the situation.

Planning:
- Identify and implement the toolset.
- Develop the test plan.
- Create an end-user experience rating system.

Application Verification:
- Verify existing knowledge about the application.
- Test processing times for the most-used transactions.

Transaction Analysis:
- Identify and analyze the worst-performing transactions.

Findings Identification and Implementation:
- Collaborate to find fixes to performance problems.
- Implement the fixes; observe and verify their impact.

Discovery

In the discovery phase, the goal is to understand the application and performance problem areas. The optimization team gathers in a kick-off meeting to share knowledge. Several key questions will begin to narrow down the problems affecting performance:

- What problems are reported?
- What are the symptoms?
- Who is experiencing the symptoms?
- What has changed in the environment?
- Are the problems getting worse?


Knowledge tracking systems bring everyone to the same page. The core documentation set includes mechanisms for analyzing problems and interpreting application flows. The team will use tools in the Vantage suite to verify technical information, and most importantly, to develop a shared understanding of the application and performance problems. For example, a problem identification table as shown in Table 2 is useful for listing the known problems, finding commonalities and tying together separate points for analysis.

Table 2 Problem Identification Table

Symptom:
- Slow transaction
- Application freezes
- Application closes abruptly
- Connection timeout
- Server refuses connection

Frequency:
- Any time the transaction executes
- During a specific time of day
- Infrequently

Site(s):
- All locations
- Specific site
- Multiple sites

Access Type(s):
- VPN client
- VPN site-to-site
- T1
- DSL
- Frame Relay
- Dialup

In parallel with problem analysis, the team needs to clarify how the application works through data gathering. Key questions can include:

- Application type: Is it multi-tiered? Is it a client-server application? Is it web-based?
- Infrastructure components: What servers are involved? Are there load balancers? Are there firewalls? What are their IP addresses? Where are application components located on the network?
- Application access: How do users access the application--directly, through a proxy server, VPN, WAN/LAN?
- Application transactions: What are the transaction flows?

The optimization effort will begin to take shape as the team identifies actionable items. Team leaders must freeze all changes to the application to isolate troubleshooting and alleviate any existing crisis that might be affecting performance. End-user representatives will poll users to glean more information about the problem areas identified in the initial problem analysis. Tools and testing methods become the next goals for the process.
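Before moving on, a minimal sketch in Python of the commonality analysis Table 2 supports: tally each dimension of the reported problems to surface patterns worth investigating first. This is an illustration only; all report fields and data here are hypothetical.

    from collections import Counter
    from dataclasses import dataclass

    # Fields mirror the Table 2 columns; all values are hypothetical.
    @dataclass
    class ProblemReport:
        symptom: str
        frequency: str
        site: str
        access_type: str

    reports = [
        ProblemReport("Slow transaction", "Any time the transaction executes",
                      "Site A", "Frame Relay"),
        ProblemReport("Slow transaction", "Any time the transaction executes",
                      "Site B", "T1"),
        ProblemReport("Connection timeout", "During a specific time of day",
                      "Site A", "VPN client"),
    ]

    # Tally each dimension to find commonalities across reports.
    for field in ("symptom", "frequency", "site", "access_type"):
        counts = Counter(getattr(r, field) for r in reports)
        print(field + ":", counts.most_common(2))

Even at this simple level, a tally makes shared symptoms and affected sites jump out, which is exactly what the problem identification table is for.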

Case Study: Focus on the End User

It is important in the initial effort to take immediate action that will benefit the end user. For example, in our case study managers asked staff to show a physical presence at specific field sites, even though it was unlikely that this would produce critical data. End users could see the optimization effort underway and take part in resolving issues affecting their productivity.


Planning

In the planning phase, the team implements the optimization toolset and prepares for baseline testing. The goal is to solidify the test plan while focusing on solutions with the most benefit to the end user. Each group brings its own set of tools to the table: the server group will use performance monitor counters, the database group proprietary database tools, and so on. These tools are valuable in their separate areas of the application infrastructure, but they don't provide an all-encompassing view of the IT environment and end-user experience.

ClientVantage Agentless is the solution for real-time monitoring and fact-finding. The Agentless monitoring solution features a centralized dashboard that lets the entire team see client, server, database, and network data under one umbrella. Agentless monitoring consists of two components: a report server, called the Vantage Analysis Server (VAS), and an Agentless Monitoring Device (AMD). The AMD collects the data; the VAS pulls this information from the AMD and stores it in a database, from which it can create reports on virtually any network connection or application transaction for servers and users.

Other tools in the Vantage suite can be advantageous to the team. NetworkVantage can reveal bandwidth issues. To supplement existing proprietary tools, ServerVantage is effective for monitoring server performance.

ClientVantage is one of the most helpful tools in the Vantage suite. ClientVantage uses synthetic transactions to simulate users of the application. Its agents operate on standard workstations and continuously execute application transactions 24 hours a day, 7 days a week, reporting information back to the VantageView web server. You can now track end-user experience without taking users away from their work. ClientVantage can also be used in the test environment, repeating the same process over and over. After you have recorded ClientVantage transactions, you can use them with QALoad, the user-load simulator in the Vantage suite. QALoad is meant for the test environment, not for production; the best results are obtained in a quiet environment.

Network specialists on the team need to assist with implementing the Vantage toolset. ClientVantage Agentless devices have monitoring interfaces that record information from packets passing through those interfaces. ClientVantage Agentless uses this information to create statistics for data analysis, presented in an operational dashboard. Agentless monitoring interfaces should be strategically connected to the network where they will see the fewest duplicate packets. We recommend deploying ClientVantage Agentless before beginning baseline testing.

ApplicationVantage (AV) should also be part of the strategic plan for locating the tools in the environment. AV enables baseline testing, data capture, and transaction analysis. AV agents capture network traffic down to the packet level, gathering performance data in the various layers of the application environment; with that data, AV builds a transaction conversation map. We recommend installing AV on laptops and deploying the laptops in the data center to assist in multi-point captures and analysis.
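To make the synthetic-transaction concept described above concrete, here is a minimal conceptual sketch in Python of what such an agent does: replay a recorded transaction on a schedule and report each timing. This is an illustration only, not ClientVantage's actual mechanics; run_transaction() is a hypothetical stand-in for one recorded business function.

    import time

    def run_transaction() -> float:
        """Replay one recorded business transaction; return elapsed seconds."""
        start = time.perf_counter()
        # ... the scripted transaction steps would execute here ...
        return time.perf_counter() - start

    def agent_loop(interval_seconds: int = 300, max_runs: int = 3) -> None:
        """Execute the transaction on a fixed schedule and report each timing."""
        for _ in range(max_runs):  # a real agent would loop 24x7
            elapsed = run_transaction()
            # In practice, timings would be reported to a central collector.
            print(f"transaction completed in {elapsed:.3f}s")
            time.sleep(interval_seconds)

    agent_loop(interval_seconds=1)

The design point is simply that the same scripted steps run on the same schedule, so any change in the recorded timings reflects a change in the environment, not in user behavior.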


Figure 1 provides an example of the required logical and physical diagrams to understand the application environment so that AMD interfaces and AV agents can be strategically placed on the network. The logical diagram contains only Layer 3 devices, servers, and any Layer 2 device that could manipulate or drop Layer 3 network traffic--for instance, intrusion prevention systems. The physical diagram includes all connections for network devices and servers.

Figure 1 Physical and Logical Diagram

Test Plan

While the team is implementing the toolset, parallel efforts should be underway to develop a detailed testing strategy and test plan. The focus of testing is to benchmark performance times for the most-used transactions, and the test plan should prioritize the problem areas having the most impact on end-user experience. How are the most-used transactions identified? End-user representatives go into the field to determine the "top ten business functions" according to their impact on user productivity. These top ten business functions become the transactions measured in baseline testing. Baseline testing provides the data to benchmark application performance over the network in a best-case scenario. Testing includes both the test environment and the production environment. The test environment is critical because access is restricted; the environment is quiet and therefore optimal for benchmarking performance.


Baseline testing with multi-point captures in the test environment provides the end-to-end picture of each transaction. The test environment should simulate production as closely as possible, including the login process. For example, users at the test site can log in against production domain controllers, but the client and application servers should reside in an environment absent of other activity that could skew results. Test users perform the transactions on PCs that match standard end-user equipment. LANs and WANs may be tested independently. For WAN testing, choose an average circuit size and then simulate that circuit size in the test lab. This can easily be accomplished with two routers connected back-to-back with a T1 crossover cable; the circuit speed can be adjusted for the specific test by changing the clock-rate setting on the router.

To keep the project focused on the end user, the test plan should also include an end-user experience (EUE) rating system. With this system, the team can prioritize findings revealed through the testing process. One approach is to poll users to identify the major pain points they are experiencing. These become the key areas of concern for performance tuning.
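As a quick worked illustration of the circuit-sizing decision discussed above, the arithmetic below estimates best-case transfer time over a simulated circuit. The 512 kbps "average circuit" and the 2 MB payload are hypothetical examples, not figures from the case study.

    def transfer_seconds(payload_bytes: int, circuit_bps: int) -> float:
        """Best-case serialization time, ignoring protocol overhead and latency."""
        return payload_bytes * 8 / circuit_bps

    # A 2 MB response over a simulated 512 kbps circuit vs. a full T1:
    print(f"512 kbps: {transfer_seconds(2_000_000, 512_000):.1f} s")          # ~31.2 s
    print(f"T1 (1.544 Mbps): {transfer_seconds(2_000_000, 1_544_000):.1f} s")  # ~10.4 s

Numbers like these help the team sanity-check whether a slow transaction is explained by the circuit itself before looking deeper in the application.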

Case Study: End-User Experience Rating

Our EUE rating system was built around the EUE matrix shown in Figure 2. The EUE matrix was an instant reference for visualizing the impact of potential solutions on the key areas of concern. Statistical analysis is not required to create the matrix; its primary purpose is organizational.

Figure 2 Example: End-User Experience Matrix

As you can see in the "Area of Concern" column, slow response time was most likely to have the biggest impact on user productivity, so this area was given the highest priority for troubleshooting efforts and solutions. This system enabled us to prioritize findings for short-term implementations that would be most beneficial for both end users and the project as a whole.


Application Verification

Does the existing understanding of the application match what's actually occurring? How are the most-used transactions performing? In the verification phase, the team uses the toolset to verify existing knowledge about the application and benchmark processing times for the most-used transactions. Monitoring the initial data in ClientVantage Agentless will point out any inconsistencies in the understanding of the application. This verification process will provide a solid foundation for analyzing the results gleaned from testing.

One of the true strengths of ClientVantage Agentless is the capability to easily obtain information. Using ClientVantage Agentless for application verification is a simple three-step process:

1. Collect IP addresses for all the servers used in the application.
2. On the VAS web interface, select the User Diagnostics option from the Network tab, and enter these IP addresses.
3. Use the results from step 2 to verify the existing documentation.

After you obtain this information, you can create powerful reports from the VAS data-mining interface to analyze the performance of transactions to and from each of the application components.
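As a minimal illustration of step 3, the Python sketch below diffs the documented server list against the servers actually observed on the wire. It assumes the observed list has been exported (for example, from the VAS results) to a plain text file with one IP address per line; the file name and all addresses are hypothetical.

    # Server IPs from the existing documentation (hypothetical).
    documented = {"10.1.10.21", "10.1.10.22", "10.1.20.5"}

    # Server IPs actually observed, exported to a text file (one per line).
    with open("observed_servers.txt") as f:
        observed = {line.strip() for line in f if line.strip()}

    # Either difference is an inconsistency worth chasing down.
    print("Documented but never observed:", sorted(documented - observed))
    print("Observed but undocumented:    ", sorted(observed - documented))

Servers that appear on only one side of the diff are exactly the inconsistencies the verification phase is meant to expose.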

Case Study: Data Gathering

Initial data gathering with the Vantage toolset involves identifying the application components that play the biggest roles in the transaction flows. Visual representations enable transaction analysis and discussion by presenting information in a format that everyone will understand. Using the toolset in tandem with these references helps to separate what's real from what's imagined--what's actually going on in the application. For instance, first we developed an application component diagram to show the separate components in the application flow (Figure 3 on the following page). Then we used the diagram to create a transaction flow table, an ordered list with numbers assigned to each component in a specific transaction flow (Table 3 on the following page).


Figure 3 Example: Application Component Diagram

Table 3 Example: Transaction Flow Table for Print Detail Report

Transaction Name: PRINT DETAIL REPORT

Pre-Discovery flow:  1, 3, 2, 3, 5, 4, 7, 8
Post-Discovery flow: 1, 10, 3, 10, 3, 5, 4, 7, 8

The "Pre-Discovery" column lists the data flow the team had originally documented for the "Print Detail Report" transaction. "Post-Discovery" shows what we verified with AV conversation mapping and ClientVantage Agentless reports. Figure 4 below is an example of interpreting the transaction flow table. We found that our initial understanding was skewed. It was believed that the proxy servers were not used when in fact they were a key part of the flow. Using numbers made it much easier to explain the flow of transactions.

Figure 4 Example: Transaction Flow Post-Discovery

1 -> 10: User initiates connection to proxy server.
10 -> 3 -> 5: Proxy server connects to Citrix Secure Gateways; Citrix Secure Gateways connect to web server.
3 -> 4 -> 7 -> 8: Citrix Secure Gateways connect to Citrix Terminal Servers; Citrix Terminal Servers connect to component load balancers; component load balancers connect to database servers.
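The numbering convention is easy to put to work programmatically. The short Python sketch below renders flows as named paths; the number-to-component mapping is a hypothetical assignment inferred from Figure 4, not the case study's actual key.

    # Hypothetical number-to-component mapping inferred from Figure 4.
    components = {
        1: "end-user workstation",
        3: "Citrix Secure Gateway",
        4: "Citrix Terminal Server",
        5: "web server",
        7: "component load balancer",
        8: "database server",
        10: "proxy server",
    }

    front_end_flow = [1, 10, 3, 5]   # user -> proxy -> gateway -> web
    back_end_flow = [3, 4, 7, 8]     # gateway -> Citrix -> balancer -> database

    # Print each flow as a readable named path.
    for flow in (front_end_flow, back_end_flow):
        print(" -> ".join(components[n] for n in flow))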


Benchmarking Performance

After implementing ClientVantage Agentless, it's time to get into the lab with AV and benchmark performance for the most-used transactions that end users identified. The baseline testing team includes the testing lead, assistants, and sample users. The testing lead directs one assistant in capturing the transaction data in AV, and directs the sample users to start and stop each transaction being tested. Another assistant records the start and stop times (transaction processing times) as sample users perform the mock transactions; a transaction may include several steps that are summed to compute its total processing time.

The team should repeat the test four times and discard the first test; the purpose of the first test is to align everyone in the same sequence for precise reporting. The test assistant records processing times in a spreadsheet where the data can be averaged across the tests, and fact-checks the recorded data to ensure accuracy. In the next phase, baseline testing results are presented to the team for analysis. The testing team should construct charts and graphs to compare the processing-time averages for transactions in the different test environments. Presenting performance data visually provides a constructive takeaway for all of the optimization stakeholders.
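A minimal sketch of the spreadsheet arithmetic--discard the first run, average the rest--follows. Transaction names and times are hypothetical.

    import statistics

    # Four timed runs per transaction (seconds); the first run is the
    # alignment pass and is discarded before averaging.
    runs = {
        "Print Detail Report": [71.2, 64.8, 65.3, 66.1],
        "Submit Claim":        [12.9, 11.4, 11.8, 11.6],
    }

    for name, times in runs.items():
        scored = times[1:]  # discard the first test
        print(f"{name}: avg {statistics.mean(scored):.1f}s "
              f"(stdev {statistics.stdev(scored):.2f}s, n={len(scored)})")

Reporting the spread alongside the average is a cheap safeguard: a large standard deviation suggests the "quiet" environment wasn't quiet, and the runs should be repeated.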

Transaction Analysis

In the transaction analysis phase, the team identifies and analyzes the worst-performing transactions derived from baseline testing. Focusing on the worst-performing transactions will bring noticeable benefits to the end user in the fastest time frame possible. AV will help you find where bottlenecks are occurring for specific transactions in the various layers of the application infrastructure. You're looking for "a-ha" moments--recognizing what doesn't make sense.

First, merge a transaction from the multi-point capture taken by directing the AV agents. Open the conversation map in AV to identify how the infrastructure components are communicating. Then use AV for additional root-cause analysis.

Thread analysis is one of the most critical analysis functions of AV. An application thread is any single request from client to server, or from server to backend application components; an "HTTP get," for example, is an application thread. The response to the request is automatically grouped with the thread for easier analysis. A Gantt chart within AV reports the time to execute for each thread. The real power here is the visual representation of functions and processes within a transaction that operate synchronously, each depending on the previous thread before it can finish. Another powerful part of thread analysis is the sorting function: you can sort the time field, descending, and look for the most time-consuming processes. After sorting, you can split the AV window horizontally and show a packet decode for any thread that you highlight. Each time a thread is highlighted, only the packets for that thread are displayed. Competing "sniffer" tools do not provide this correlation.
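Conceptually, the sorting step is nothing more than ranking threads by elapsed time. A small illustrative Python sketch follows; this is the concept, not AV's internals, and all thread labels and times are hypothetical.

    # Per-thread elapsed times (request label, seconds) -- hypothetical data.
    threads = [
        ("HTTP GET /report/detail", 0.92),
        ("SQL SELECT claim_lines",  4.35),
        ("HTTP GET /login",         0.11),
    ]

    # Sort descending on time, as you would sort AV's time field.
    for label, seconds in sorted(threads, key=lambda t: t[1], reverse=True):
        print(f"{seconds:7.2f}s  {label}")

The most time-consuming thread rises to the top, which is where root-cause analysis should start.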


Other features with AV include error analysis, CNS breakdown (client-network-server), node processing, node sending detail, and response time prediction. You can use all of these features for transaction analysis and troubleshooting, and relate the information to what you're seeing in ClientVantage Agentless.

Case Study: Parallel Efforts

It's important to remember that you're always moving in a parallel effort with the toolset. If there is a great deal of instability in the application, ClientVantage Agentless can spot anomalies that may not be related to lab testing. Everyone on the team should be drilling down within the ClientVantage Agentless dashboard--from the server view, and so on--to pinpoint issues for further investigation. For example, a specific transaction would bring our application to a screeching halt. Because we knew we had a problem, we went into ClientVantage Agentless to examine root causes. We saw statistics showing transactions from the front end of the application to the middle tier taking up to one second, when they should have been executing in less than 100 milliseconds. We knew that this was a key issue, and after working with the team, we identified the cause.

Findings Identification and Implementation

Findings identification and implementation may be the most crucial phase of the entire optimization project. The information you've gleaned from ClientVantage Agentless and AV is powerful. With this information in hand, team members come together, offering solutions to problems. Implementation of these solutions follows the ITIL Release Management cycle.

In our terminology, a "finding" is an issue that is causing a bottleneck. Findings should be categorized along with possible solutions and the group responsible for investigating the solution--for example, the application development group or the server group. The goal is to stabilize application performance. If a finding has a high impact on performance, it receives a high priority for a solution, or "fix." There are two types of problems for which solutions can be identified:

1. Consistent problems that are not load-dependent; for example, the problem is still visible with only a single user in a quiet test environment.
2. Problems discovered under load, in production or in the test lab.

Figure 5 on the following page outlines the process for working with findings and identifying and implementing solutions. This process includes the build, test, working, and back-out components of the Release Management cycle. Each finding is added to the pre-build cycle based on its priority. When a finding receives a high priority, the team begins to work on a fix, and the team continues this cycle until solutions are identified for the highest-priority findings. The team responsible for implementing a change first creates a solution for the test environment; for instance, the application developers create a fix and add it to the application code for the test cycle.
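As a minimal sketch of the categorization and prioritization just described--not any Vantage feature--the following Python snippet orders hypothetical findings into a pre-build queue by impact.

    # Hypothetical findings, categorized as the text describes: the issue,
    # the responsible group, and the impact that drives priority.
    findings = [
        {"finding": "Chatty front-to-middle-tier calls", "group": "development", "impact": "high"},
        {"finding": "Undersized database buffer pool",   "group": "database",    "impact": "medium"},
        {"finding": "Duplex mismatch on one access link", "group": "network",    "impact": "low"},
    ]

    # Highest-impact findings enter the pre-build cycle first.
    rank = {"high": 0, "medium": 1, "low": 2}
    for f in sorted(findings, key=lambda f: rank[f["impact"]]):
        print(f'{f["impact"].upper():6}  {f["finding"]}  ->  {f["group"]}')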


The first test involves no simulated user load: the "best-case scenario" environment in which each transaction can perform at its peak. If the solution produces positive results, the team tests it with simulated user load. QALoad provides this vehicle: a set of transaction scripts is entered into QALoad, which then simulates any number of users executing transactions in the application. There are two things to watch for when testing potential solutions with QALoad: 1) Was the fix successful? 2) Did it cause other problems? If any of the tests fail, the findings should be updated with this information and moved back into the pre-build cycle. During simulated user-load testing, it is critical to use ClientVantage Agentless to monitor transaction statistics.
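Those two checks reduce to a before/after comparison across all measured transactions, not just the one the fix targeted. A minimal Python sketch follows; the transaction names, times, and 10% significance threshold are hypothetical.

    # Average transaction times (seconds) before and after a fix -- hypothetical.
    baseline = {"Print Detail Report": 720.0, "Submit Claim": 11.6, "Search Member": 3.2}
    post_fix = {"Print Detail Report":  48.0, "Submit Claim": 11.5, "Search Member": 5.9}

    THRESHOLD = 0.10  # treat changes beyond +/-10% as significant

    for name, before in baseline.items():
        after = post_fix[name]
        if after < before * (1 - THRESHOLD):
            verdict = "improved"
        elif after > before * (1 + THRESHOLD):
            verdict = "REGRESSED -- investigate"
        else:
            verdict = "unchanged"
        print(f"{name}: {before:.1f}s -> {after:.1f}s ({verdict})")

Comparing every transaction, rather than only the fixed one, is what catches the "did it cause other problems?" case before the fix reaches production.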

Figure 5 Findings Identification and Implementation Process

Case Study: Collateral Damage

When investigating findings and fixes, the team needs to focus on problems that involve collateral damage. Collateral damage can have a spiraling effect on other areas of application performance and can dramatically restrict end-user productivity. During baseline testing, we found that one transaction could take up to 12 minutes for users to complete in a quiet environment. This transaction was synchronous, so the user could not start another transaction until it completed. The field engineer from Microsoft reported that this synchronous bottleneck was expected behavior. Because of the collateral damage involved, we weren't convinced.


End users had changed their behavior as a result of this extremely slow transaction that was one of their top ten business functions. They were waiting until Friday afternoon to perform the transaction, leaving the application (and their critical work with it) idle. Using ClientVantage Agentless and AV, we were able to reduce this transaction to less than a minute. For end users, this was a huge productivity gain.

Summary and Conclusions

This paper presented a phased approach to application performance optimization, drawing from success stories in the field of large-scale application performance tuning. The phased approach gives the optimization effort the necessary focus for the most benefit to end users in the fastest time frame possible. In the discovery and planning phases, the optimization team comes together to develop an understanding of the application and its problem areas, to implement the toolset, and to plan for testing. For application verification, the team uses the toolset to verify existing knowledge about the application and benchmark processing times for the most-used transactions. In the transaction analysis phase, the team identifies and analyzes the worst-performing transactions derived from baseline testing data. In the final phase, the team finds potential solutions, implements the fixes, and observes and verifies their impact. The goal is to eliminate collateral damage, which can have a spiraling effect on application performance and stability.

Several aspects are crucial to the approach:

- Accurate information sharing among the team
- Methods for responding to end-user experience
- The right tools and a quiet test environment for effective data analysis
- Parallel efforts to reveal and mitigate performance slowdowns
- The ITIL Release Management cycle for findings identification and implementation

The path to optimization isn't always clear when various perspectives and technologies interact. The Vantage toolset provides the necessary data to align these perspectives and troubleshoot key problem areas. By focusing on the team, the tools, and the importance of end-user experience, stakeholders in application performance can drive efforts toward an effective, "tried-and-true" solution for tuning mission-critical applications.

187 Wolf Road, Suite 302, Albany, NY 12205 | www.qosnetworking.com | Tel: 518 435-8060 | Fax: 518 435-8079

© 2007 QoS Networking, Inc. All Rights Reserved.
