
Understanding J2EE performance.



Businesses rely on Java 2, Enterprise Edition (J2EE) application servers to deliver highly reliable, mission-critical applications. These applications include self-service catalog services, real-time portfolio management, and 24x7 customer service. If these systems are not available, customers and money can be lost. Monitoring solutions for J2EE applications need to aggregate data from multiple sources, provide a user-friendly interface to that data, and identify performance issues before customers experience poor response times or an outage. This paper provides an overview of how J2EE addresses application management, identifies common hot spots to watch, and offers a solution to maximize the availability of your application by keeping you aware of potential problems.

Background

J2EE was designed to provide an extensible platform for mission-critical business applications. Developing n-tier J2EE applications provides much-needed scalability, redundancy, and a separation between the customer interface and the business logic of an application. However, these application environments also introduce additional complexity and new challenges for those charged with maximizing availability and performance. Software development groups are at different stages of integrating J2EE technology. Customer Relationship Management (CRM) companies are extending their applications by adding HTML-based interfaces. A J2EE application server is often introduced at this time to host server-side Java (servlets) and Java Server Pages (JSPs). When the product architect determines that the application requires a greater degree of management, he can design the application to take advantage of the persistence and transactional framework provided by the J2EE application server. The server manages the resources used by J2EE applications, including memory, database connections, thread pools, and caching. Once deployed, the J2EE application consists of several tiers. Clients use Web browsers to access a Web server. The Web server may process the request independently, or it may pass the request back to an application server. If the necessary data is cached locally, the application server sends a response back through the Web server to the client. Otherwise, the application server queries remote databases and legacy systems in order to aggregate a response to the customer query.

Data Collection

Fundamental to every monitoring solution is the data. Without quality information available to the system, warnings and alerts cannot be generated. Performance monitors for J2EE applications need to aggregate data from multiple sources. The first data source is the application server itself. In addition to providing a platform for applications, the server also provides a management framework. Recent application servers from all the major vendors, including BEA, IBM, Oracle, and Sun, implement the standard framework called Java Management Extensions (JMX).

The JMX framework consists of three levels. At the first level are Managed Beans (MBeans). MBeans expose configuration data and methods to change configuration values. MBeans also provide current resource usage. For example, an MBean can tell you the maximum size of an EJB (Enterprise JavaBeans) cache, report the current size of the cache, and let you change the maximum size.

MBeans are managed by the MBean Server, which resides at the second level of the JMX framework. The MBean Server contains the registry for MBeans and provides services to manipulate them. In the above example, remote applications go through the MBean Server to inspect and manage the EJB cache size.
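To make this concrete, the sketch below uses the standard JMX Remote API to read and change an MBean attribute through the MBean Server. It is a minimal illustration only: the service URL, object name, and attribute names are placeholders that differ between application servers, and vendors of this era also shipped their own proprietary connectors.

import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class EjbCacheProbe {
    public static void main(String[] args) throws Exception {
        // Connect to the application server's MBean Server (the URL is a placeholder).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://appserver:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Object and attribute names are illustrative; the real names vary by vendor.
            ObjectName cache = new ObjectName("MyDomain:Type=EJBCacheRuntime,Name=CustomerBean");
            Number max = (Number) mbs.getAttribute(cache, "MaxBeansInCache");
            Number current = (Number) mbs.getAttribute(cache, "CachedBeansCurrentCount");
            System.out.println("EJB cache holds " + current + " of " + max + " beans");

            // The same MBean can expose a setter to resize the cache at runtime.
            mbs.setAttribute(cache, new Attribute("MaxBeansInCache", new Integer(2000)));
        } finally {
            connector.close();
        }
    }
}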

The final layer consists of JMX adapters that enable external applications to access the MBeans. This layer is specified as part of the JMX standard, but its implementation is not required. As such, application server and software vendors write adapters to meet their specific needs. For example, a Web-based console used to monitor the EJB cache size would use the HTTP adapter to access the MBean Server.

Through the JMX framework, application server vendors provide access to the current resource usage and configuration of the EJB and Servlet containers. While this is a standard framework, application server vendors are free to choose which attributes of their application servers are exposed. Resources commonly monitored through JMX include EJB usage, transactions, thread pools, servlet pools, JMS thresholds, and cache sizes.

In addition to variations between application servers, there is a significant amount of information pertaining to the application that is not typically available through JMX. The process known as instrumentation is necessary to obtain information not provided by the vendors, including details about the individual methods that make up the application. There are two types of instrumentation used to obtain method-level data. The first is designed by the application architect and implemented by the programmer during the development of the application. Once compiled, the application exposes performance information as specified by the architect.

Software monitoring tools do not usually have access to application source code during the design stage. The monitoring solution is typically selected as the application enters the load-testing stage prior to deployment. Adding instrumentation to the source code at this time would introduce risk to the project and considerable delay to the release while the changes are made and tested. Instead of modifying the source code, monitoring tools typically apply a second form of instrumentation to the byte-code of the compiled application. By adding a small amount of additional byte-code around compiled methods, the necessary performance information is exposed to the monitoring system.

While JMX can be used to determine many attributes of the EJBs, it cannot expose everything that happens inside of the EJB. Byte-code instrumentation occurs within the bean, allowing for a deeper level of runtime data than when JMX is used alone. Once instrumented, class and method data is available, including response time distributions, usage counts, and thrown exceptions.
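As a purely conceptual illustration (not any vendor's actual weaving), the effect of byte-code instrumentation is roughly what you would get by hand-wrapping each compiled method with timing and exception-recording code, as in this hypothetical sketch:

// Conceptual sketch only: byte-code instrumentation effectively rewrites each
// compiled method so that it is surrounded by timing and exception counters.
// The class, registry, and method names below are hypothetical.
public class InstrumentedOrderBean {

    // Minimal stand-in for a monitoring agent's metrics registry.
    static final class PerfRegistry {
        static void record(String method, long millis, boolean failed) {
            System.out.println(method + " took " + millis + " ms"
                    + (failed ? " (threw exception)" : ""));
        }
    }

    // What the instrumented version of a business method conceptually looks like.
    public String findOrder(String id) {
        long start = System.currentTimeMillis();
        boolean failed = false;
        try {
            return findOrderOriginal(id);           // the original method body
        } catch (RuntimeException e) {
            failed = true;                          // exceptions are counted too
            throw e;
        } finally {
            PerfRegistry.record("OrderBean.findOrder",
                    System.currentTimeMillis() - start, failed);
        }
    }

    private String findOrderOriginal(String id) {
        return "order-" + id;                       // placeholder business logic
    }

    public static void main(String[] args) {
        System.out.println(new InstrumentedOrderBean().findOrder("42"));
    }
}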

Individually, JMX and byte-code instrumentation provide valuable insight into the J2EE application server and the hosted applications. Using data from both sources, monitoring tools are able to report an accurate picture of the availability and performance of J2EE applications and alert administrators of potential problems.

Common Hot Spots

Garbage Collection

The manner in which your application uses memory can greatly affect its performance; something as simple as creating a new instance of an event object and passing it between the Web-tier and EJB-tier can appear harmless during development and unit testing, but under load testing the memory impact can be significant. For example, if each request uses 10K of memory, multiply that by 500 simultaneous users making requests on average every 5 seconds: after running for 5 minutes, the memory allocated for this 10K object is 300MB. Combine this with all of the other objects you are creating and suddenly garbage collection becomes a major issue.
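The arithmetic behind that figure is worth spelling out, since it is easy to underestimate; the small sketch below simply reproduces the numbers quoted above.

public class AllocationEstimate {
    public static void main(String[] args) {
        int users = 500;                  // simultaneous users
        int secondsBetweenRequests = 5;   // each user issues a request every 5 seconds
        int minutes = 5;                  // observation window
        int bytesPerRequest = 10 * 1024;  // 10K allocated per request

        long requests = (long) users * (minutes * 60 / secondsBetweenRequests);
        long bytes = requests * bytesPerRequest;
        // 30,000 requests of 10K each is roughly 293MB, which the article rounds to 300MB.
        System.out.println(requests + " requests allocate about "
                + (bytes / (1024 * 1024)) + " MB");
    }
}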

One of the benefits of Java is that the virtual machine manages all of your memory for you: this is both a blessing and a curse, because while you are not burdened with the task of memory management, you cannot explicitly manage it either. Thus the Java Virtual Machine (JVM) maintains a thread that watches memory and reclaims it as needed. There are various virtual machines, but for the purposes of this discussion we will focus on the Sun Java Virtual Machine version 1.3.1 (as it is still shipping with most production application servers).

The Sun JVM manages memory by maintaining two separate generational spaces in which objects can be allocated: the young generation and the old generation. By managing a young generation, the JVM can take advantage of the fact that most objects are very short-lived and are eligible for garbage collection very shortly after they are created; collection of the young generation runs very quickly and efficiently as it either reclaims unused memory or moves longer-lived objects to the old generation using a copying mechanism. There are three types of garbage collection that it supports:

* Copying (or scavenge): efficiently moves objects between generations; default for minor collections

* Mark-compact: collects memory in place but significantly slower than copying; default for major collections

* Incremental (or train): collects memory on an ongoing basis to minimize the amount of time spent in a single collection; you must explicitly enable it using the "-Xincgc" command line argument

The default behavior of garbage collection is to reclaim the memory that it can by copying objects between generations (minor collections), and when the memory usage approaches the maximum configured size it performs a mark-compact operation (major collection). In a single-application environment a major collection slows down the application but runs very rarely; in an enterprise application, however, we saw that operations performed on a simple request generated 300MB of memory usage in a 5-minute period. A major collection is catastrophic to the performance of your application server. Some major collections can take upwards of a few minutes to run, and during that time your server is unresponsive and may flat out reject incoming connection requests.

So how can you avoid major collections? Or if you cannot, how can you minimize their impact on your system?

Tuning the JVM involves a couple of steps:

1. Choose a heap size that supports your application running your transactions under your user load

2. Size your generations to maximize minor collections

The default behavior of the JVM works great for stand-alone applications, but abysmally for enterprise applications; the default sizes vary from operating system to operating system, but regardless, the performance is not tuned for the enterprise. Consider the JVM running under Solaris: it has a maximum heap size of 64MB, of which 32MB is allocated to the young generation. Consider allocating 300MB in 5 minutes, or 60MB per minute, with a total heap size of 64MB--and recall that your requests are not the only thing running in the virtual machine. The bottom line is that this heap is far too small. A rule of thumb is to give the virtual machine all of the memory that you can afford to give it.

When sizing the generations we must take a closer look at the structure of the young generation: it has a region where objects are created, called Eden, and defines two survivor spaces; one survivor is always empty and is the target for the subsequent copy--objects are copied between the survivor spaces until they age enough to be tenured to the old generation. Properly sizing the survivor spaces is very important because if they are too small, the minor collection cannot copy all of the required objects from Eden to the survivors, which causes it to run much more frequently and forces it to prematurely tenure objects. Under Solaris the default size of the survivor spaces is 1/27th of the entire size of the young generation, which is probably too small for most enterprise applications.

The next question is how to size the young generation itself. Unless you are experiencing problems with frequent major collections, the best practice is to allocate as much memory to the young generation as possible up to half the size of the heap.

After making these changes, watch the heap and the behavior of garbage collection while the system is under load and adjust the sizes accordingly.
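In practice these sizes are passed as JVM startup flags. The line below is only a sketch: the flag names follow the Sun HotSpot conventions of that era, but the values and the server class being launched are placeholders that must come from your own load testing.

java -Xms512m -Xmx512m \
     -XX:NewSize=192m -XX:MaxNewSize=192m \
     -XX:SurvivorRatio=8 \
     -verbose:gc \
     com.example.ServerMain

Here -Xms and -Xmx fix the heap size, -XX:NewSize and -XX:MaxNewSize size the young generation (kept under half the heap), -XX:SurvivorRatio sets the Eden-to-survivor ratio, and -verbose:gc prints a line per collection so you can watch minor and major collection frequency while the load test runs.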

Entity Bean Cache

Entity Beans define an object model running between your application and your persistent data; Entity Beans usually communicate with databases, although they are not limited to only database communication. For the purposes of this discussion let us assume that our Entity Beans are communicating with a database, but the discussion is equally applicable to legacy systems or other implementations.

In a properly built enterprise Java application, you will delegate your data persistence to Entity Beans, not only because that was their intended purpose from the inception of J2EE, but because application servers provide a caching mechanism that manages your Entity Beans for you. The lifecycle of a data request is as follows:

1. A component requests an Entity Bean

2. The Entity Bean is loaded and initialized with the persistent information it represents

3. The Entity Bean is stored in the Entity Bean cache for future reference (activated to the cache)

4. The Entity Bean's remote (or local) interface is returned to the requesting component

Subsequent requests will be serviced directly from the Entity Bean Cache and will bypass the creation of the object and its initialization (a query to the database); thus database access is minimized, yielding significantly enhanced performance. The nature of a cache is that it has a predefined size specifying how many objects it can hold, and it then manages the lifetimes of those objects based on an internal algorithm. For example, it might keep the most recently accessed objects in the cache and remove objects that are seldom accessed to make room for new objects. Since caches have predefined sizes, the sizing of the cache has a significant impact on performance.

When referring to Entity Bean Caches, the term for loading an object from persistent storage into the cache is activation, and the term for persisting an object from the cache to persistent storage is passivation. Entity Beans are activated into the cache until the cache is full and then, when new Entity Beans are requested, some existing beans must be passivated so that the new beans can be activated. Activating and passivating beans creates an overhead on the system and, if performed excessively, actually eliminates all of the benefits of having a cache. This state of excessive activations and passivations is referred to as thrashing. When tuning the size of your Entity Bean Cache, the goal is to size it to service most of the requests from the cache, thus minimizing thrashing. As with every tunable parameter there is a trade-off: a large cache requires more system resources (memory) than a small cache. So the other facet of your goal is to ensure that the cache is as large as it needs to be, but not much larger.

The tuning process is to load test your application using representative transactions and observe, using a monitoring tool, the behavior of your Entity Bean Caches, paying particular attention to:

* Number of Requests

* Number of Requests serviced by the cache (hit count)

* Number of Passivations

* Number of Activations

If you see excessive activations and passivations, increase your cache size until they are few or non-existent. The result will be shorter request response times and diminished database accesses.
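For example, a monitoring rule might combine those four counters into a simple thrashing check along these lines; the class and the thresholds below are hypothetical rules of thumb, not vendor guidance:

public final class EntityCacheHealth {

    // Returns true when the counters suggest the Entity Bean cache is thrashing.
    public static boolean isThrashing(long requests, long cacheHits,
                                      long activations, long passivations) {
        double hitRatio = (requests == 0) ? 1.0 : (double) cacheHits / requests;
        // A low hit ratio, or passivations tracking activations closely, means beans
        // are being evicted almost as fast as they are loaded.
        return hitRatio < 0.80 || (passivations > 0 && passivations * 2 >= activations);
    }

    public static void main(String[] args) {
        // Example: 10000 requests, 6000 served from the cache, heavy eviction.
        System.out.println(isThrashing(10000, 6000, 4000, 3500)); // prints true
    }
}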

Segmentation

Segmentation of resources is the act of assigning specific application components to use specific sets of resources. Segmentation can be applied to both thread pools and JDBC connection pools.

All application servers maintain thread pools that are used to service incoming requests; in WebLogic these thread pools are called execute threads and are contained within one or more execute queues, while WebSphere terms them thread pools. Regardless of the implementation, a request that arrives at the application server is queued to wait for, or directly dispatched to, a thread for processing. When the thread is available, it takes the request, performs some business logic, and returns a response to the caller.

In most application servers, the default behavior is to assign all applications and components to be serviced by the same pool of threads. If all components in your application are of equal business value and one business process is permitted to wait for an indefinite amount of time because another business process is in use, then this is an acceptable model. In practical application, however, certain business processes have more intrinsic value than others, and regardless, one business process should not inhibit the performance of another business process. Consider deploying the following set of components:

* E-Commerce Store Front-End

* E-Commerce Checkout and Billing Component

* Administration Component

Each of these three components has a specific purpose and value to the business: the front-end allows customers to browse the company's products, compare prices, and add items to their shopping cart; the checkout and billing component is responsible for gathering customer demographic and credit card information, connecting to a credit card billing server to obtain purchase confirmation and debit the customer's credit card, and storing the record in the database for reference; the administration component is your gateway to manage your e-store, track orders, and manage inventory. Since each has a specific purpose and value to the business, each must have the opportunity to execute in an efficient manner. For example, a customer browsing the store should not inhibit another customer from placing an order, and likewise the customer placing an order should not slow down customers browsing the store; browsers must become buyers and buyers must complete their transactions for the company to be able to make any money. Finally, the administration component must be able to run concurrently with both of the other components in case problems develop; for example, if the credit card billing company changes its online address or if the company's account is renewed, the update must be seamless to the customer.

How can you ensure that each component has its fair access to the system resources so that it can complete its task?

The answer is to create individual thread pools for each component that only it has access to at run-time. The size of each thread pool needs to be tuned to meet the requirements of the system and the user load: more frequently accessed components, such as the front-end, will need more threads than checkout and billing, which will probably need more threads than the administration component. If the system is under severe load and the front-end is running at capacity, orders can still be placed and the store administrator can always connect to the administration component.
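In WebLogic 8.1-era servers, for instance, this is typically done by defining named execute queues and pointing each component at its queue through a dispatch policy. The fragments below are a sketch only: the element names follow WebLogic's config.xml and weblogic.xml descriptors, but the queue names and thread counts are placeholders, and WebSphere achieves the same effect through its own thread pool configuration.

<!-- config.xml: dedicated execute queues on the server (names and counts are placeholders) -->
<Server Name="ecommerceServer">
  <ExecuteQueue Name="StoreFrontQueue" ThreadCount="30"/>
  <ExecuteQueue Name="CheckoutQueue"   ThreadCount="15"/>
  <ExecuteQueue Name="AdminQueue"      ThreadCount="5"/>
</Server>

<!-- weblogic.xml of the store-front Web application: route its requests to its own queue -->
<weblogic-web-app>
  <dispatch-policy>StoreFrontQueue</dispatch-policy>
</weblogic-web-app>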

This concept can be further extrapolated and addressed at the database connection pool level. Even if all of your underlying persistence is stored in one database, consider creating multiple connection pools to that database, each categorized by its business function. Components that are browsing the store should be serviced by one connection pool while another services customers placing orders. This will avoid contention for database resources between two components vying to share a single connection pool; again, you do not want your store browsers to inhibit other customers from paying you. Segmentation of threads and database connection pools into logical business functions can help ensure your application's response under load and guarantee that critical processes are properly serviced. The result is that the tuning overhead is greater, because you have more things to tune, but the end user experience, which is the true goal of performance tuning, will be greatly enhanced.
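From the component side, segmentation of connection pools simply means each component looks up its own DataSource. The fragment below is an illustration; the JNDI names are placeholders for whatever pools you configure on the server.

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CatalogDao {

    // Browse requests draw connections only from the store-front pool,
    // so a burst of browsing cannot starve the checkout component's pool.
    public Connection getBrowseConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource storeFrontPool = (DataSource) ctx.lookup("jdbc/StoreFrontPool");
        return storeFrontPool.getConnection();
    }
}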


Quest's Solutions-Foglight

Problems typically originate when an end user initiates a trouble ticket because they are unable to complete a transaction. With little more than the trouble ticket, the administrator needs to isolate the problem and expedite a recovery process. To find the problem, the administrator begins with a bird's-eye view of the application environment. This consists of many nodes including Web servers, application servers, databases, and a variety of legacy systems.

Foglight, Quest Software's application performance monitor, gives the Application Administrator instant access to the status of each of these nodes.

Foglight's cartridges for J2EE application servers convert the numerous data points collected through JMX and instrumentation into actionable information and present the results in a user-friendly interface. To expedite problem identification, Foglight for J2EE enables the administrator to drill down through domains and clusters to individual servers in order to identify whether there is a problem with a server or group of servers. Foglight cartridges come with domain-specific charts and data views to further isolate performance bottlenecks.

Intuitive Alerts

Foglight cuts through the clutter to provide 24x7 unattended monitoring. Customized rules for J2EE applications filter through the data collected through JMX, instrumentation, and log files to alert the Application Administrator when a problem is detected. The rule editor is based on performance variables that are set to meet customer requirements and can be reused across multiple rules. Rules trigger alerts across a range of conditions. By default, alerts appear on the Foglight console. Alerts may also be sent through email to one or more recipients based on the severity of the alert. Each alert message contains the severity and description of the problem. From the alert, the administrator can drill down to the alert detail and access the help system for further information.

Standard rules in Foglight monitor the availability and performance of J2EE applications and application server resources. Rules can watch for a threshold to be crossed such as when the number of active threads reaches 100 or the number of available application servers drops from ten down to four. Instead of fixed values, a rule could trigger an alert when 90% of the JDBC connections are in use or when the number of available servers drops to 40%. A third type of rule can compare the current resource usage to the average resource usage and trigger an alert when necessary. For example, Foglight can send an email alert when the average response time of a servlet is 50% longer than the running average. This type of rule learns the typical behavior of the application and alerts the administrator when the application responds abnormally.

Chart and Data Views

Foglight views of the data collected through JMX and instrumentation are available for J2EE applications. Easy-to-read charts provide quick access to the resource usage, availability, and performance of EJBs and Servlets. Additional charts can be created, stored, and shared. Any chart can be formatted as a report and sent via email.

Below are 20 common questions answered by Foglight charts and data views.

1. What has server availability been for the past week?

2. How many servers in my cluster (workgroup) are available?

3. How many Entity Beans are active (cached) in each application?

4. How many Session Beans are active in each application?

5. How many threads are in use in each execute queue?

6. How much memory is being reclaimed during Garbage Collection?

7. How many JDBC connections are currently in use?

8. What is the overall usage of the JMS server?

9. What is the message and byte usage for all JMS Topics and Queues?

10. How many HTTP Sessions are currently active for each application?

11. What has the JVM heap availability been for the past 24 hours?

12. How many transactions have run through the system?

13. Have there been a large number of application rollbacks today?

14. How many rollbacks were caused by a JTA Resource?

15. What is the average response time for each monitored Class?

16. What is the average response time for each monitored Method?

17. Which method is causing my class to respond abnormally?

18. For a given method, how do the latest response times compare to the average?

19. What is the average response time for each monitored Servlet?

20. What is the number of invocations for each monitored Servlet?

www.quest.com

JAVA UPDATE

Free JProbe Profiler

Quest Software (UK) Ltd. has released a freeware edition of its Java development solution free of charge to the entire Java development community. JProbe Profiler, a Java profiling tool, helps developers diagnose performance bottlenecks in Java code. Quest JProbe Profiler is part of Quest JProbe Suite, a performance toolkit for Java code tuning that helps developers diagnose performance bottlenecks, memory leaks, excessive garbage collection, threading issues, and coverage deficiencies in their J2EE and J2SE applications.

J2EE Application Server Diagnostic Tools

Quest Software has announced the release of new freeware editions of its Spotlight tools for J2EE application servers: Quest Spotlight for WebLogic Server and a new product, Quest Spotlight on WebSphere. By providing graphical views of real-time WebLogic and WebSphere application performance, the new Quest products complement the performance management tools found natively within BEA WebLogic and IBM WebSphere application servers. Quest Spotlight on WebSphere and Spotlight for WebLogic Server detect bottlenecks on the application server tier and enable administrators to quickly determine which server or application component is experiencing performance degradation. Once a problem is detected, colour-coded alarms guide the user to the root cause, where expert advice provides insight into the problem and how to resolve it.

www.quest.com

Hugh Docherty & Steven Haines, Quest Software
COPYRIGHT 2004 A.P. Publications Ltd.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2004, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.


