At the end of this chapter you will be able to:
- Tune the Operating System
- Optimize Your Database
- Tune WebLogic Server Performance Parameters
- Tune the Application
- Tune Java Virtual Machines (JVMs)
Performance tuning WebLogic Server and your WebLogic Server application is a complex and iterative process. These tuning techniques are applicable to nearly all WebLogic applications. Although we highly recommend performing these tasks in the sequence they are presented, this isn’t a requirement.
Gather information about the level of activity expected on your server: the anticipated number of users, the number of requests, acceptable response time, and an optimal hardware configuration (fast CPU, disk size and speed, sufficient memory, and so on).
There is no single formula for determining your hardware requirements. The process of determining what type of hardware and software configuration is required to meet application needs adequately is called capacity planning. Capacity planning requires assessment of your system performance goals and an understanding of your application. Capacity planning for server hardware should focus on maximum performance requirements.
Tune the Operating System
Each operating system sets default tuning parameters differently. For Windows platforms, the default settings are usually sufficient. However, the UNIX and Linux operating systems usually need to be tuned appropriately.
UNIX Tuning Parameters
Use the following guidelines when tuning UNIX operating systems supported by WebLogic Server.
Solaris TCP Tuning Parameters
For better TCP (transmission control protocol) socket performance, set the tcp_time_wait_interval parameter as follows:
ndd -set /dev/tcp tcp_time_wait_interval 60000
This parameter determines the time interval that a TCP socket is kept alive after issuing a close call. The default value of this parameter on Solaris is four minutes. When a large number of clients connect for a short amount of time, holding these socket resources can have a significant negative impact on performance. Setting this parameter to a value of 60000 (60 seconds) has shown a significant throughput enhancement when running benchmark JSP tests on Solaris. You might want to reduce this setting further if the server gets backed up with a queue of half-opened connections.
Note: Prior to Solaris 2.7, the tcp_time_wait_interval parameter was called tcp_close_wait_interval.
Optimize Your Database
Your database can be a major enterprise-level bottleneck. Configure your database for optimal performance by following the tuning guidelines.
Here are some general database tuning suggestions:
- Good database design — Distribute the database workload across multiple disks to avoid or reduce disk overloading. Good design also includes proper sizing and organization of tables, indexes, logs, and so on.
- Disk I/O optimization — Disk I/O optimization is related directly to throughput and scalability. Access to even the fastest disk is orders of magnitude slower than memory access. Whenever possible, optimize the number of disk accesses. In general, selecting a larger block/buffer size for I/O reduces the number of disk accesses and might substantially increase throughput in a heavily loaded production environment.
- Checkpointing — This mechanism periodically flushes all dirty cache data to disk, which increases I/O activity and system resource usage for the duration of the checkpoint. Although frequent checkpointing can increase the consistency of on-disk data, it can also slow database performance. Most database systems have checkpointing capability, but not all provide user-level controls. Oracle, for example, allows administrators to set the frequency of checkpoints, while users have no control over SQL Server 7.x checkpoints. For recommended settings, see the product documentation for the database you are using.
Here are some basic tuning suggestions for Oracle, SQL Server, and Sybase. Again, you should also check the tuning guidelines in your database-specific vendor documentation.
- When using a JDBC Connection Pool, modify the following attributes:
- DriverName: Use the thin driver or jDriver.
- InitialCapacity: Set this to equal the MaxCapacity value.
- MaxCapacity: Set the MaxCapacity value to at least equal the Thread Count value, and then, if necessary, increase it again until you find the right number.
- Set the connection pool size to equal the execute queue’s Thread Count.
- Set the statement cache.
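The pool settings above can be sketched as a JDBCConnectionPool entry in config.xml. This is a minimal illustration, not a definitive configuration: the pool name, URL, user, and target server name are placeholders, and the capacities follow the production-mode guideline of matching the execute queue’s Thread Count of 25.

```xml
<!-- Hypothetical JDBCConnectionPool entry; name, URL, user, and target
     are placeholders. InitialCapacity equals MaxCapacity, which equals
     the production-mode Thread Count (25), and a statement cache is set. -->
<JDBCConnectionPool Name="myPool"
    DriverName="oracle.jdbc.driver.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:mydb"
    Properties="user=scott"
    InitialCapacity="25"
    MaxCapacity="25"
    StatementCacheSize="10"
    Targets="myserver"/>
```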
Establishing a JDBC connection with a DBMS can be very slow. If your application requires database connections that are repeatedly opened and closed, this can become a significant performance issue. WebLogic connection pools offer an efficient solution to the problem.
When WebLogic Server starts, connections from the connection pools are opened and are available to all clients. When a client closes a connection from a connection pool, the connection is returned to the pool and becomes available for other clients; the connection itself is not closed. There is little cost to opening and closing pool connections.
How many connections should you create in the pool? A connection pool can grow and shrink according to configured parameters, between a minimum and a maximum number of connections. The best performance occurs when the connection pool has as many connections as there are concurrent client sessions.
Tuning JDBC Connection Pool Initial Capacity
The InitialCapacity attribute of the JDBCConnectionPool element enables you to set the number of physical database connections to create when configuring the pool. If the server cannot create this number of connections, the creation of this connection pool will fail.
During development, it may be convenient to set the InitialCapacity attribute to a low number to help the server start up faster. In production systems, consider setting InitialCapacity equal to MaxCapacity, whose default production mode setting is 25. This way, all database connections are acquired during server start-up. If you later tune the MaxCapacity value, set InitialCapacity to match it.
If InitialCapacity is less than MaxCapacity, the server needs to create additional database connections when its load is increased. When the server is under load, all resources should be working to complete requests as fast as possible, rather than creating new database connections.
Tuning JDBC Connection Pool Maximum Capacity
The MaxCapacity attribute of the JDBCConnectionPool element allows you to set the maximum number of physical database connections that a connection pool can contain. Different JDBC drivers and database servers might limit the number of possible physical connections.
The default settings for development and production mode are equal to the default number of execute threads: 15 for development mode; 25 for production mode. However, in production, it is advisable that the number of connections in the pool equal the number of concurrent client sessions that require JDBC connections. The pool capacity is independent of the number of execute threads in the server. There may be many more ongoing user sessions than there are execute threads.
Caching Prepared and Callable Statements
When you use a prepared statement or callable statement in an application or EJB, there is considerable processing overhead for the communication between the application server and the database server and on the database server itself. To minimize the processing costs, WebLogic Server can cache prepared and callable statements used in your applications. When an application or EJB calls any of the statements stored in the cache, WebLogic Server reuses the statement stored in the cache. Reusing prepared and callable statements reduces CPU usage on the database server, improving performance for the current statement and leaving CPU cycles for other tasks.
Using the statement cache can dramatically increase performance, but you must consider its limitations before you decide to use it.
Tune WebLogic Server Performance Parameters
The WebLogic Server configuration file (config.xml) contains a number of OOTB (out-of-the-box) performance-related parameters that can be fine-tuned depending on your environment and applications. Tuning these parameters based on your system requirements (rather than running with default settings) can greatly improve both single-node performance and the scalability characteristics of an application.
Try experimenting with the following WebLogic Server configuration tuning parameters to determine your system’s “sweet spot” for optimal performance:
- Modify the value of the execute queue’s Thread Count.
- If possible, use native performance packs (NativeIOEnabled=true).
- Use application-specific execute queues.
- Use multiple execute queues for servlets and JSPs.
- Consider switching the JSP compiler from the default, javac, to a significantly faster compiler such as jikes or sj.
Monitor Disk and CPU Utilization
After following the previous steps, run your application under a high load while monitoring the:
- Application server (disk and CPU utilization)
- Database server (disk and CPU utilization)
To check your disk utilization on Solaris or Linux, use the iostat -D <interval> command, where the interval value determines how many seconds you want to elapse between monitoring cycles. To check your CPU utilization, simply leave off the -D flag (iostat <interval>).
For Windows, use the Performance Monitor tool (perfmon) to monitor both your disk and CPU utilization.
The goal is to get to a point where the application server is 100 percent utilized. If you find that the application server CPU is not close to 100 percent, check whether the database is the bottleneck. If the database CPU is 100 percent utilized, examine the query plans of your application’s SQL calls. For example, are your SQL calls using indexes or doing linear searches? Also, confirm whether your application uses so many ORDER BY clauses that they are affecting the database CPU.
Monitor Data Transfers across the Network
Check the amount of data transferred between the application and the application server, and between the application server and the database server. This amount should not exceed your network bandwidth; otherwise, your network becomes the bottleneck. To verify this, monitor the network statistics for retransmission and duplicate packets. This can be done using the following command:
netstat -s -P tcp
Locate Bottlenecks in Your Applications
If you determine that neither the network nor the database server is the bottleneck, start looking at your WebLogic Server applications. Most importantly, is the machine running WebLogic Server able to get 100 percent CPU utilization with a high client load? If the answer is no, then check if there is any locking taking place in the application. You should profile your application using a commercially available tool (for example, JProbe or OptimizeIt) to pinpoint bottlenecks and improve application performance.
Tuning the Application
This section contains recommended application-specific tuning suggestions for performance improvement.
- Stateless session beans and MDBs (message-driven beans) — For maximum concurrency, the pool sizes should be at least as large as the thread count of the execute queue that handles requests to such beans.
- Use concurrency strategy.
- Experiment with EJB pool settings.
- Use Call-by-reference.
- Cache EJBs.
- Increase the MDB pool size for asynchronous message consumption.
- Disable JSP page checking and servlet reloading.
- Use in-memory session persistence.
- Precompile JSPs.
- Avoid JMS message selectors and use multiple queues/topics to do message selection.
- Use asynchronous (onMessage) JMS consumers instead of synchronous receivers.
- Defer JMS acknowledgments and commits.
- Tune your JDBC connection pool’s Initial Capacity and Max Capacity settings to complete database requests as fast as possible, rather than creating new connections.
- Cache prepared and callable statements used in your applications to minimize processing costs.
- Make your transactions single-batch: collect a set of data operations and submit them as a single update transaction in one statement.
Tuning Java Virtual Machines (JVMs)
The Java virtual machine (JVM) is a virtual “execution engine” instance that executes the byte codes in Java class files on a microprocessor. How you tune your JVM affects the performance of WebLogic Server and your applications.
Identify the Best JVM Settings
Tune your JVM’s heap garbage collection and heap size parameters to get the best performance out of your JVM. The Sun HotSpot and WebLogic JRockit JVM parameters that most significantly affect performance are listed below.
When using the HotSpot VM option (-server or -client), experiment with the following garbage collection parameters:
- -Xms and -Xmx (use equal settings at start up)
- -XX:NewSize and -XX:MaxNewSize
- -XX:+UseISM -XX:+AggressiveHeap
When using JRockit’s JVM, experiment with the following garbage collection parameters:
- -Xms and -Xmx (use equal settings at startup)
- -Xgc:parallel
The following sections discuss JVM tuning options for WebLogic Server:
Which JVM for Your System?
This section focuses on Sun Microsystems’ J2SE 1.4 JVM for the Windows, UNIX, and Linux platforms. Note, however, that the BEA WebLogic JRockit JVM was developed expressly for server-side applications and is optimized for Intel architectures to ensure reliability, scalability, manageability, and flexibility for Java applications.
JVM Heap Size and Garbage Collection
Garbage collection is the JVM’s process of freeing up unused Java objects in the Java heap. The Java heap is where the objects of a Java program live. It is a repository for live objects, dead objects, and free memory. When an object can no longer be reached from any pointer in the running program, it is considered “garbage” and ready for collection.
The JVM heap size determines how often and how long the VM spends collecting garbage. An acceptable rate for garbage collection is application-specific and should be adjusted after analyzing the actual time and frequency of garbage collections. If you set a large heap size, full garbage collection is slower but occurs less frequently. If you set a small heap size, full garbage collection is faster but occurs more frequently.
The goal of tuning your heap size is to minimize the time that your JVM spends doing garbage collection while maximizing the number of clients that WebLogic Server can handle at a given time. To ensure maximum performance during benchmarking, you might set high heap size values to ensure that garbage collection does not occur during the entire run of the benchmark.
You might see the following Java error if you are running out of heap space:
java.lang.OutOfMemoryError <<no stack trace available>>
java.lang.OutOfMemoryError <<no stack trace available>>
Exception in thread "main"
Using Verbose Garbage Collection to Determine Heap Size
The HotSpot VM’s verbose garbage collection option (verbosegc) enables you to measure exactly how much time and resources are put into garbage collection. To determine the most effective heap size, turn on verbose garbage collection and redirect the output to a log file for diagnostic purposes.
The following steps outline this procedure:
- Monitor the performance of WebLogic Server under maximum load while running your application.
- Use the -verbosegc option to turn on verbose garbage collection output for your JVM and redirect both the standard error and standard output to a log file.
This places thread dump information in the proper context with WebLogic Server informational and error messages, and provides a more useful log for diagnostic purposes.
For example, on Windows and Solaris, enter the following:
% java -ms32m -mx200m -verbosegc -classpath $CLASSPATH >> logfile.txt 2>&1
where >> logfile.txt 2>&1 redirects both the standard error and standard output streams to the log file.
On HP-UX, use the following option to redirect both stderr and stdout to a single file:
-Xverbosegc:file=/tmp/gc$$.out
where $$ maps to the process ID (PID) of the Java process. Because the output includes timestamps for when garbage collection ran, you can infer how often garbage collection occurs.
- Analyze the following data points:
- How often is garbage collection taking place? In the log file, compare the time stamps around the garbage collection.
- How long is garbage collection taking? Full garbage collection should not take longer than 3 to 5 seconds.
- What is your average memory footprint? In other words, what does the heap settle back down to after each full garbage collection? If the heap always settles to 85 percent free, you might set the heap size smaller.
- If you are using the 1.4 Java HotSpot JVM, set the New generation heap sizes.
- Make sure that the heap size is not larger than the available free RAM on your system.
Use as large a heap size as possible without causing your system to “swap” pages to disk. The amount of free RAM on your system depends on your hardware configuration and the memory requirements of running processes on your machine. See your system administrator for help in determining the amount of free RAM on your system.
- If you find that your system is spending too much time collecting garbage (your allocated “virtual” memory is more than your RAM can handle), lower your heap size.
Typically, you should use 80 percent of the available RAM (not taken by the operating system or other processes) for your JVM.
- If you find that you have a large amount of available free RAM remaining, run more instances of WebLogic Server on your machine.
Remember, the goal of tuning your heap size is to minimize the time that your JVM spends doing garbage collection while maximizing the number of clients that WebLogic Server can handle at a given time.
Specifying Heap Size Values
You must specify Java heap size values each time you start an instance of WebLogic Server. This can be done either from the java command line or by modifying the default values in the sample startup scripts that are provided with the WebLogic distribution for starting WebLogic Server.
For example, when you start a WebLogic Server instance from a java command line, you could specify the HotSpot VM heap size values as follows:
$ java -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8 -Xms512m -Xmx512m
These values are measured in bytes by default. Append the letter k or K to the value to indicate kilobytes, m or M to indicate megabytes, and g or G to indicate gigabytes. The example above allocates 128 megabytes of memory to the New generation and maximum New generation heap sizes, and 512 megabytes of memory to the minimum and maximum heap sizes for the WebLogic Server instance running in the JVM.
Using WebLogic Startup Scripts to Set Heap Size
Sample startup scripts are provided with the WebLogic Server distribution for starting the server and for setting the environment to build and run the server:
- startWLS.cmd and setEnv.cmd (Windows).
- startWLS.sh and setEnv.sh (UNIX and Windows. On Windows, these scripts support the MKS and Cygnus BASH UNIX shell emulators.)
Java Heap Size Options
You achieve best performance by individually tuning each application. However, configuring the Java HotSpot VM heap size options listed in Table 3-2 when starting WebLogic Server increases performance for most applications.
These options may differ depending on your architecture and operating system. See your vendor’s documentation for platform-specific JVM tuning options.
Table 3-2 Java Heap Size Options

|Task|Option|Comments|
|---|---|---|
|Setting the New generation heap size|-XX:NewSize|Set this value to a multiple of 1024 that is greater than 1MB. As a general rule, set -XX:NewSize to one-fourth the size of the maximum heap size. Increase the value of this option for larger numbers of short-lived objects. Be sure to increase the New generation as you increase the number of processors; memory allocation can be parallel, but garbage collection is not.|
|Setting the maximum New generation heap size|-XX:MaxNewSize|Set this value to a multiple of 1024 that is greater than 1MB.|
|Setting New heap size ratios|-XX:SurvivorRatio|The New generation area is divided into three sub-areas: Eden, and two survivor spaces that are equal in size. Configure the ratio of the Eden/survivor space size. Try setting this value to 8, and then monitor your garbage collection.|
|Setting minimum heap size|-Xms|Set the minimum size of the memory allocation pool. Set this value to a multiple of 1024 that is greater than 1MB. As a general rule, set minimum heap size (-Xms) equal to the maximum heap size (-Xmx) to minimize garbage collections.|
|Setting maximum heap size|-Xmx|Set the maximum size of the memory allocation pool. Set this value to a multiple of 1024 that is greater than 1MB.|
WebLogic Server enables you to automatically log low memory conditions observed by the server.
You configure the low memory detection process using the Administration Console:
- Access the Administration Console for the domain and expand the Servers node in the navigation tree to display the servers configured in your domain.
- Click the name of the server instance that you want to configure. Note that you configure low memory detection on a per-server basis.
- Select the Configuration > Tuning tab in the right pane.
- On the Advanced Options bar, click Show to display additional attributes.
- Modify the following Memory Option attributes as necessary: Low Memory GCThreshold, Low Memory Sample Size, and Low Memory Time Interval.
- Reboot the server to use the new low memory detection attributes.
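These settings can also be sketched as attributes on the Server element in config.xml. This is an illustration under the assumption that the corresponding ServerMBean attributes are LowMemoryGCThreshold, LowMemorySampleSize, and LowMemoryTimeInterval; the server name and values shown are placeholders, not recommendations.

```xml
<!-- Hypothetical low memory detection settings on a Server element;
     the server name and values are illustrative placeholders. -->
<Server Name="myserver"
    LowMemoryGCThreshold="5"
    LowMemorySampleSize="10"
    LowMemoryTimeInterval="3600"/>
```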
Tuning WebLogic Server
Development vs. Production Mode Default Tuning Values
You can indicate whether a domain is to be used in a development environment or a production environment. WebLogic Server uses different default values for various services depending on the type of environment you specify.
|Tuning Parameter|Development Mode Default|Production Mode Default|
|---|---|---|
|Execute Queue: ThreadCount|15 threads|25 threads|
|JDBC Connection Pool: MaxCapacity|15 connections|25 connections|
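Which set of defaults applies is controlled by the domain’s production-mode flag. As a sketch, this can be expressed as the ProductionModeEnabled attribute on the Domain element in config.xml (the domain name is a placeholder):

```xml
<!-- Hypothetical fragment; "mydomain" is a placeholder. Setting
     ProductionModeEnabled="true" selects the production-mode defaults. -->
<Domain Name="mydomain" ProductionModeEnabled="true">
    <!-- servers, clusters, and other domain configuration -->
</Domain>
```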
Using WebLogic Server “Native IO” Performance Packs
Benchmarks show major performance improvements when you use native performance packs on machines that host WebLogic Server instances. Performance packs use a platform-optimized, native socket multiplexor to improve server performance. For example, the native socket reader multiplexor threads have their own execute queue and do not borrow threads from the default execute queue, which frees up default execute threads to do application work.
However, if you must use the pure-Java socket reader implementation for host machines, you can still improve the performance of socket communication by configuring the proper number of socket reader threads for each server instance and client machine.
Tuning the Default Execute Queue Threads
The value of the ThreadCount attribute of an ExecuteQueue element in the config.xml file equals the number of simultaneous operations that can be performed by applications that use the execute queue. As work enters an instance of WebLogic Server, it is placed in an execute queue. This work is then assigned to a thread that does the work on it. Threads consume resources, so handle this attribute with care—you can degrade performance by increasing the value unnecessarily.
By default, a new WebLogic Server instance is configured with a development mode execute queue, weblogic.kernel.default, that contains 15 threads. In addition, WebLogic Server provides two other pre-configured queues:
- weblogic.admin.HTTP—Available only on Administration Servers, this queue is reserved for communicating with the Administration Console; you cannot reconfigure it.
- weblogic.admin.RMI—Both Administration Servers and Managed Servers have this queue; it is reserved for administrative traffic; you cannot reconfigure it.
Unless you configure additional execute queues, and assign applications to them, Web applications and RMI objects use weblogic.kernel.default.
Note: If native performance packs are not being used for your platform, you may need to tune the default number of execute queue threads and the percentage of threads that act as socket readers to achieve optimal performance.
Should You Modify the Default Thread Count?
Adding more threads to the default execute queue does not necessarily imply that you can process more work. Even if you add more threads, you are still limited by the power of your processor. Because threads consume memory, you can degrade performance by increasing the value of the ThreadCount attribute unnecessarily. A high execute thread count causes more memory to be used and increases context switching, which can degrade performance.
The value of the ThreadCount attribute depends very much on the type of work your application does. For example, if your client application is thin and does a lot of its work through remote invocation, that client application will spend more time connected — and thus will require a higher thread count — than a client application that does a lot of client-side processing.
If you do not need to use more than 15 threads (the development default) or 25 threads (the production default) for your work, do not change the value of this attribute. As a general rule, if your application makes database calls that take a long time to return, you will need more execute threads than an application that makes calls that are short and turn over very rapidly. For the latter case, using a smaller number of execute threads could improve performance.
Creating Execute Queues
An execute queue represents a named collection of execute threads that are available to one or more designated servlets, JSPs, EJBs, or RMI objects. An execute queue is represented in the domain config.xml file as part of the Server element. For example, an execute queue named CriticalAppQueue with four execute threads appears in the config.xml file as follows:
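A sketch of that entry, with the server name and listen port as placeholders:

```xml
<!-- Hypothetical Server element; only the ExecuteQueue child is the
     point of the example. CriticalAppQueue gets four dedicated threads. -->
<Server Name="myserver" ListenPort="7001">
    <ExecuteQueue Name="CriticalAppQueue" ThreadCount="4"/>
</Server>
```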
Assigning Servlets and JSPs to Execute Queues
You can assign a servlet or JSP to a configured execute queue by identifying the execute queue name in the initialization parameters. Initialization parameters appear within the init-param element of the servlet’s or JSP’s deployment descriptor file, web.xml. To assign an execute queue, enter the queue name as the value of the wl-dispatch-policy parameter, as in the example:
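A sketch of such a web.xml entry, in which the servlet name and class are illustrative placeholders and CriticalAppQueue is the execute queue configured earlier:

```xml
<!-- Hypothetical servlet entry; servlet-name and servlet-class are
     placeholders. wl-dispatch-policy routes requests to CriticalAppQueue. -->
<servlet>
    <servlet-name>MainServlet</servlet-name>
    <servlet-class>com.acme.MainServlet</servlet-class>
    <init-param>
        <param-name>wl-dispatch-policy</param-name>
        <param-value>CriticalAppQueue</param-value>
    </init-param>
</servlet>
```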
Assigning EJBs and RMI Objects to Execute Queues
To assign an EJB object to a configured execute queue, use the new dispatch-policy element in weblogic-ejb-jar.xml.
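A sketch, assuming a bean named AccountEJB (a placeholder) and the CriticalAppQueue defined earlier:

```xml
<!-- Hypothetical weblogic-ejb-jar.xml fragment; AccountEJB is a
     placeholder. dispatch-policy assigns the bean to CriticalAppQueue. -->
<weblogic-enterprise-bean>
    <ejb-name>AccountEJB</ejb-name>
    <dispatch-policy>CriticalAppQueue</dispatch-policy>
</weblogic-enterprise-bean>
```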
Scenarios for Modifying the Default Thread Count
To determine the ideal thread count for an execute queue, monitor the queue’s throughput while all applications in the queue are operating at maximum load. Increase the number of threads in the queue and repeat the load test until you reach the optimal throughput for the queue. (At some point, increasing the number of threads will lead to enough context switching that the throughput for the queue begins to decrease.)
Assigning Applications to Execute Queues
Although you can configure the default execute queue to supply the optimal number of threads for all WebLogic Server applications, configuring multiple execute queues can provide additional control for key applications. By using multiple execute queues, you can guarantee that selected applications have access to a fixed number of execute threads, regardless of the load on WebLogic Server.
Allocating Execute Threads to Act as Socket Readers
For best socket performance, BEA recommends that you use the native socket reader implementation, rather than the pure-Java implementation, on machines that host WebLogic Server instances. However, if you must use the pure-Java socket reader implementation for host machines, you can still improve the performance of socket communication by configuring the proper number of execute threads to act as socket reader threads for each server instance and client machine.
The ThreadPoolPercentSocketReaders attribute sets the maximum percentage of execute threads that are set to read messages from a socket. The optimal value for this attribute is application-specific. The default value is 33, and the valid range is 1-99.
Allocating execute threads to act as socket reader threads increases the speed and the ability of the server to accept client requests. It is essential to balance the number of execute threads that are devoted to reading messages from a socket and those threads that perform the actual execution of tasks in the server.
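As a sketch, this percentage corresponds to the ThreadPoolPercentSocketReaders attribute on the Server element in config.xml (the server name and value shown are illustrative placeholders, not recommendations):

```xml
<!-- Hypothetical fragment; "myserver" and the value 50 are placeholders.
     Half of the execute threads would then act as socket readers. -->
<Server Name="myserver" ThreadPoolPercentSocketReaders="50"/>
```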
Tuning the Execute Thread Detection Behavior
WebLogic Server automatically detects when a thread in an execute queue becomes “stuck.” Because a stuck thread cannot complete its current work or accept new work, the server logs a message each time it diagnoses a stuck thread. If all threads in an execute queue become stuck, the server changes its health state to either “warning” or “critical” depending on the execute queue:
- If all threads in the default queue become stuck, the server changes its health state to “critical.” (You can set up the Node Manager application to automatically shut down and restart servers in the critical health state.)
- If all threads in weblogic.admin.HTTP, weblogic.admin.RMI, or a user-defined execute queue become stuck, the server changes its health state to “warning.”
WebLogic Server diagnoses a thread as stuck if it is continually working (not idle) for a set period of time. You can tune a server’s thread detection behavior by changing the length of time before a thread is diagnosed as stuck, and by changing the frequency with which the server checks for stuck threads.
- Stuck Thread Max Time: The number of seconds that a thread must be continually working before the server diagnoses it as stuck. By default, WebLogic Server considers a thread to be “stuck” after 600 seconds of continuous use.
- Stuck Thread Timer Interval: The interval, in seconds, at which WebLogic Server periodically scans threads to see whether they have been continually working for the length of time specified by Stuck Thread Max Time. By default, WebLogic Server sets this interval to 600 seconds.
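Both settings can be sketched as Server attributes in config.xml. Under the assumption that the attribute names are StuckThreadMaxTime and StuckThreadTimerInterval (both in seconds), a fragment with the defaults made explicit would look like:

```xml
<!-- Hypothetical fragment; "myserver" is a placeholder and the values
     shown are the documented defaults (600 seconds each). -->
<Server Name="myserver"
    StuckThreadMaxTime="600"
    StuckThreadTimerInterval="600"/>
```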
Using a Free Pool to Improve Stateless Session Bean Performance
WebLogic Server uses a free pool to improve performance and throughput for stateless session EJBs. The free pool stores unbound stateless session EJBs. Unbound EJB instances are instances of a stateless session EJB class that are not processing a method call.
The following figure illustrates the WebLogic Server free pool, and the processes by which stateless EJBs enter and leave the pool. Dotted lines indicate the “state” of the EJB from the perspective of WebLogic Server.
[Figure: WebLogic Server free pool, showing the stateless session EJB life cycle]
Allocating Pool Size for Entity Beans
WebLogic Server also maintains a pool of anonymous entity beans (that is, beans without a primary key class), which is used to invoke finders and home methods and to create other entity beans. The max-beans-in-free-pool element also controls the size of this pool.
If you are running many finders or home methods or creating many beans, you may want to tune the max-beans-in-free-pool element so that there are enough beans available for use in the pool.
Tuning Pool Size for Stateless Session Beans at Startup
Use the initial-beans-in-free-pool element of the weblogic-ejb-jar.xml file to specify the number of stateless session bean instances in the free pool at startup.
If you specify a value for initial-beans-in-free-pool, WebLogic Server populates the free pool with the specified number of bean instances at startup. Populating the free pool in this way improves initial response time for the EJB, because initial requests for the bean can be satisfied without generating a new instance.
initial-beans-in-free-pool defaults to 0 if the element is not defined.
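A weblogic-ejb-jar.xml sketch combining the pool elements discussed above; the bean name and the sizes are illustrative placeholders:

```xml
<!-- Hypothetical fragment; MailerEJB and the pool sizes are placeholders.
     Twenty instances are created at startup; the pool is capped at 100. -->
<weblogic-enterprise-bean>
    <ejb-name>MailerEJB</ejb-name>
    <stateless-session-descriptor>
        <pool>
            <initial-beans-in-free-pool>20</initial-beans-in-free-pool>
            <max-beans-in-free-pool>100</max-beans-in-free-pool>
        </pool>
    </stateless-session-descriptor>
</weblogic-enterprise-bean>
```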
Setting Caching Size for Stateful Session and Entity Beans
You can configure the number of active bean instances that are present in the EJB cache (the in-memory space where beans exist).
Use the max-beans-in-cache element of the weblogic-ejb-jar.xml file to specify the maximum number of objects of this bean class that are allowed in memory. When max-beans-in-cache is reached, WebLogic Server passivates some EJBs that have not been recently used by a client. The max-beans-in-cache element also affects when EJBs are removed from the WebLogic Server cache.
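For a stateful session bean, the cache limit can be sketched in the stateful-session-descriptor stanza of weblogic-ejb-jar.xml; the bean name and value here are illustrative placeholders:

```xml
<!-- Hypothetical fragment; TraderEJB and the limit of 500 are placeholders.
     Beyond 500 cached instances, least-recently-used beans are passivated. -->
<weblogic-enterprise-bean>
    <ejb-name>TraderEJB</ejb-name>
    <stateful-session-descriptor>
        <stateful-session-cache>
            <max-beans-in-cache>500</max-beans-in-cache>
        </stateful-session-cache>
    </stateful-session-descriptor>
</weblogic-enterprise-bean>
```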