When an application server crash occurs, try to capture a log set before restarting, because FusionReactor rotates the log files after a restart. Zip up the files in the FusionReactor log folder, or at least copy and save them. Your application server log files should also be saved.
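If you want to script this capture step, a directory can be zipped with a few lines of standard Java. This is a minimal sketch; the `FusionReactor/instance/logs` default path is an assumption, so substitute your instance's actual log directory:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class LogArchiver {
    // Zip every regular file under logDir into zipFile, preserving relative paths.
    public static void archive(Path logDir, Path zipFile) throws IOException {
        try (OutputStream out = Files.newOutputStream(zipFile);
             ZipOutputStream zip = new ZipOutputStream(out);
             var paths = Files.walk(logDir)) {
            for (Path p : (Iterable<Path>) paths.filter(Files::isRegularFile)::iterator) {
                zip.putNextEntry(new ZipEntry(logDir.relativize(p).toString()));
                Files.copy(p, zip);
                zip.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical default location; adjust for your installation.
        Path logDir = Path.of(args.length > 0 ? args[0] : "FusionReactor/instance/logs");
        archive(logDir, Path.of("fr-logs.zip"));
    }
}
```

Run it once for the FusionReactor log folder and once for your application server's log folder so both sets survive the restart.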
FusionReactor has several settings pages that let you do everything from selecting how much metric data to view on the Y axis to defining what percentage of memory usage you consider critical. Below is a list of configuration guides for these settings, which can help you diagnose problems.
- Request Settings
- Enterprise Settings
- Metrics Page
- Resource Settings
- Protection Restrictions
- Protection Settings
- JDBC Settings
- Stack Trace Filter
- Compression Settings
- MIME Type Restrictions
- Exclude URLs
- Log Settings
- Search and Replace
- Filter Restrictions
- Filter Settings
- Content Filter Restrictions
FusionReactor stores a number of logs in memory in order to produce useful metrics and graphs. These metrics should be your first point of observation when addressing issues on an ailing server. The Resource and Request logs are particularly important for diagnosing server problems, since they record the system's memory/CPU usage and the running requests respectively. They are especially useful in an unstable environment, where restarts cause FusionReactor to lose its in-memory data. All logging within FusionReactor is computationally inexpensive; the limiting factor is available disk space. For detailed information about how the logging system works, please read the Overview of FusionReactor Logs.
Memory should be your first observation. If the dark orange portion of the graph is consistently near the top, consider making more memory available to the application server JVM. If there is insufficient memory on the system to increase the JVM allocation, consider reducing the size of any caches in the application server.
A sawtooth pattern in the dark orange section of the memory graph is normal: it shows Java periodically garbage collecting objects. When used memory approaches the allocated memory value, you may see one or more sawtooth (garbage collection) patterns as Java attempts to reclaim memory before asking for more. If insufficient memory is reclaimed, you will see the allocated memory bound (the lightest orange section) increase as Java demands memory from the operating system. If the dark orange portion rises steadily over the course of an hour, this often indicates a memory leak; these are becoming more common as application complexity increases.
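The values FusionReactor graphs correspond to the JVM's own heap figures, which you can read directly with the standard management API. This is a minimal sketch for cross-checking the graph (the mapping of JMX names to graph colors is an assumption based on the descriptions above):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // "used" ~ the dark orange portion; "committed" ~ the allocated memory
        // bound (lightest orange); "max" ~ the -Xmx ceiling (-1 if undefined).
        System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
    }
}
```

If `used` repeatedly climbs toward `committed` and `committed` toward `max`, you are seeing the same pressure the sawtooth pattern describes.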
If the used memory (dark orange) portion of the graph is growing rapidly, you can ask Java to perform garbage collection yourself by clicking the garbage collection button in the lower right-hand corner of the graphs section on the Metrics -> Web Metrics page. However, Java's memory management is sophisticated, and a manual collection is unlikely to have any significant effect.
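The standard JVM mechanism behind such a manual collection is `System.gc()`, and it is only advisory (presumably what the button invokes under the hood; that is an assumption). The collector may run immediately, later, or not at all, e.g. when the JVM is started with `-XX:+DisableExplicitGC`:

```java
public class ManualGc {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory();
        System.gc(); // advisory only: a hint, not a guaranteed collection
        long after = rt.totalMemory() - rt.freeMemory();
        // Whether "after" is lower depends entirely on the collector and heap state.
        System.out.printf("used before=%dKB after=%dKB%n", before >> 10, after >> 10);
    }
}
```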
CPU usage is another useful metric. If the instance's CPU is consistently busy (see the CPU Usage graph) while load is low (see the Request Activity graph), this may indicate a problem with the pages being run (see Request History), such as infinite loops or runaway queries.
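To sanity-check the CPU figure against the operating system's own view, the JVM exposes the one-minute system load average. A minimal sketch (note that `getSystemLoadAverage()` returns -1.0 on platforms, such as Windows, where it is unavailable):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class CpuCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // A load average well above the processor count with few active
        // requests is the "busy but idle" pattern described above.
        System.out.printf("processors=%d loadAvg=%.2f%n",
                os.getAvailableProcessors(), os.getSystemLoadAverage());
    }
}
```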
The Transactions -> Longest Requests page shows the longest running requests, and FusionReactor also flags long-running requests with an appropriate label in the Request History table. You can configure what FusionReactor considers a long-running request, and how many requests to store in the Longest Requests table, on the Metrics Page settings page.