
NOTE

When an application server crash occurs, try to capture a log set before restarting, because FusionReactor rotates its log files after a restart. The files in the FusionReactor log folder should be zipped up, or at least copied and saved. Your application server log files should also be saved.
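If you want to script that capture, the standard java.util.zip API can archive the log folder. The sketch below is illustrative only; the log directory and output paths are assumptions and should be adjusted to your own installation.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipFusionReactorLogs {
    public static void main(String[] args) throws IOException {
        // Assumed locations -- adjust both paths to your own installation.
        Path logDir = Paths.get("/opt/fusionreactor/instance/myinstance/log");
        Path zipFile = Paths.get("/tmp/fusionreactor-logs.zip");

        try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(zipFile));
             Stream<Path> files = Files.walk(logDir)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                try {
                    // Store each log file under its path relative to the log folder.
                    zip.putNextEntry(new ZipEntry(logDir.relativize(p).toString()));
                    Files.copy(p, zip);
                    zip.closeEntry();
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}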

Configurations


FusionReactor has several settings pages that let you configure everything from how much metric data to show on the Y axis to what percentage of memory usage you consider critical. Below is a list of configuration guides for these settings, which can help you diagnose problems.

Instant Diagnosis


FusionReactor stores a number of logs in memory, in order to produce useful metrics and graphs. These metrics will be your first point of observation to address issues on an ailing server.

The Resource and Request logs are important for diagnosing server problems, since they record the memory/CPU usage of the system and the running requests respectively. They are particularly useful in an unstable environment, because restarts cause FusionReactor to lose its in-memory data while the log files persist on disk. All logging within FusionReactor is computationally inexpensive; the limiting factor is available disk space. For detailed information about how the logging system works, please read the Overview of FusionReactor Logs.

Memory Observation

Memory should be your first observation.

  • If the dark orange portion of the graph is consistently near the top, consider making more memory available to the application server JVM.
  • If there is insufficient memory on the system to increase the JVM memory, consider reducing the size of any caches in the application server.
  • A sawtooth pattern in the dark orange section of the memory graph is normal: it shows Java periodically garbage collecting objects. When used memory begins to approach the allocated memory value, you may see one or more sawtooth (garbage collection) patterns as Java attempts to reclaim memory before asking for more. If insufficient memory is reclaimed, you will see the lightest orange section (the allocated memory bound) increase, as Java demands more memory from the operating system.
  • If the dark orange portion steadily rises over the course of an hour, this often indicates a memory leak. These are becoming more common as the complexity of applications increases.
  • If the used memory (dark orange) portion of the graph is growing rapidly, you can ask Java to perform garbage collection yourself by clicking the garbage collection button in the lower right-hand corner of the graphs section on the Metrics -> Web Metrics page. However, Java's memory management is very sophisticated and it's unlikely a manual collection will have any significant effect. If you want to cross-check the figures from inside the JVM, see the sketch after this list.
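The used, allocated and maximum heap figures can also be read from inside the JVM through the standard java.lang.management API. The sketch below is illustrative only and is not part of FusionReactor; the 90% threshold is an arbitrary assumption.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();

        // "used" corresponds to the dark orange portion of the graph, "committed"
        // to the allocated memory bound, and "max" to the ceiling set by -Xmx.
        System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

        // Equivalent in spirit to pressing the garbage collection button: this is
        // only a hint, and the JVM is free to ignore it.
        if (heap.getMax() > 0 && heap.getUsed() > heap.getMax() * 0.9) {
            memory.gc();   // the 90% threshold above is an assumption
        }
    }
}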

 

Please note that when a JVM runs out of heap memory, the engine also slows down massively, sometimes by a factor of hundreds or even thousands. Using the day and week views of memory and CPU usage, you should be able to spot any long-term resource consumption.

Requests also record what the memory usage was when the request ran. This isn't an exact measure of the memory used by the request, but it can be an indication of memory-intensive requests.

CPU Observation

CPU Usage is another useful metric.

  • If the instance is consistently busy (see the dark orange section on the Memory Usage chart) with low load (see Request Activity graph), this might indicate a problem with the pages being run (see Request History), such as infinite loops or runaway queries.
  • Look for requests that are getting "stuck" in the list of running requests. Underneath the Execution Time (ms) of a request you will see a second number, typically in light grey: the time that the request has spent on the CPU. If that number keeps going up, you have a request consuming CPU. Take a stack trace of that request, see what it's doing, and then take another one; you should quickly be able to see the problem. A stand-alone way to sample the same kind of information is sketched after this list.
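FusionReactor gathers these stack traces and CPU times for you, but the JVM's standard ThreadMXBean exposes similar per-thread information if you want an independent cross-check. The sketch below uses only the JDK and is illustrative; run it twice a few seconds apart and compare the CPU times.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BusyThreadDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        for (long id : threads.getAllThreadIds()) {
            long cpuNanos = threads.getThreadCpuTime(id);    // -1 if unsupported or disabled
            ThreadInfo info = threads.getThreadInfo(id, 10); // top 10 stack frames

            if (info == null || cpuNanos < 0) {
                continue;
            }
            // Print each thread's CPU time and a short stack trace; a thread whose
            // CPU time keeps climbing between runs is your suspect.
            System.out.printf("%s cpu=%dms%n", info.getThreadName(), cpuNanos / 1_000_000);
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}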

The Transactions > Longest Requests page shows the longest-running requests. FusionReactor also flags long-running requests with an appropriate label in the Request History table. You can configure what FusionReactor considers a long-running request, and how many requests to store in the Longest Requests table, on the Metrics Page.

