Code instrumentation

A considerable portion of the library code serves diagnostic, investigative and didactic purposes. This manual page discusses all of these features and - since they are useful mainly when running small instances in the GUI - how to enable or disable them.


Compile time configuration

The core library comes with a header file configuration.h to control the overhead of code instrumentation at compile time. Enabling a macro does not necessarily activate the respective functionality: there are also runtime configuration variables, but these take effect only if the code has been compiled accordingly.


Modules

An optimization module denotes a set of C++ methods, either forming the implementation of an optimization algorithm or the public interface to a series of alternative methods for the same optimization problem. In every module entry function, a local moduleGuard object is defined to publish the module information to the user interface.

By means of module guards, the progress of a computation can be measured in two ways: either by measuring and estimating the computation times (progress counting), or by keeping track of lower and upper bounds on the objective value.

Module guards may run both nested and iterated. There is an implicit stack of module executions, and this has some impact on the timer, bounding and progress counting functionalities.

Once constructed, module guards become active. Basically, guard objects need not be deactivated explicitly - this is usually done by the destructor running when the hosting method or code block is left. This saves many lines of shutdown code for the instrumentation functionalities, and is much more exception safe than explicit instrumentation code. In a few situations, it is useful to call moduleGuard::Shutdown() manually.
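
The destructor-based activation scheme is plain RAII. The following self-contained sketch illustrates the pattern - it is not the actual moduleGuard class, and the module name and log mechanism are purely illustrative:

    #include <string>
    #include <vector>

    // Event log, so the effect of scoped activation can be observed
    static std::vector<std::string> guardLog;

    // Minimal stand-in for the moduleGuard idea: activation in the
    // constructor, automatic deactivation in the destructor. The real
    // class publishes module information to the user interface; this
    // sketch only records events.
    class ScopedGuard
    {
    public:
        explicit ScopedGuard(const std::string& module)
            : moduleName(module), active(true)
        {
            guardLog.push_back("enter " + moduleName);
        }

        // Counterpart of an explicit Shutdown(): early deactivation
        void Shutdown()
        {
            if (active)
            {
                guardLog.push_back("leave " + moduleName);
                active = false;
            }
        }

        ~ScopedGuard() { Shutdown(); }   // runs even if an exception unwinds

    private:
        std::string moduleName;
        bool active;
    };

    void SolverEntryPoint()
    {
        ScopedGuard guard("minCostFlow");   // hypothetical module name
        guardLog.push_back("work");
    }   // guard destructed here: "leave" is logged without explicit code

Because Shutdown() is idempotent, calling it manually before the end of the scope is harmless - the destructor then does nothing.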

All modules are listed in the module data base listOfModules[]. This structure determines authors, the bibliography and a goblinTimer object.

Introducing a new module to the library means adding an enum value to TModule and extending the listOfModules[] data base. On occasion, the bibliography data base listOfReferences[] and the authors data base listOfAuthors[] also have to be extended. If the new module is not just an alternative method for an existing solver, it might further be necessary to extend the list of global timers listOfTimers[].


Timers

When the macro _TIMERS_ is set, the library compiles with support for runtime statistics. This particular functionality is available on UNIX-like platforms only, as it requires the POSIX headers sys/times.h and unistd.h.

The goblinTimer class computes and memorizes minimum, maximum, average, previous and accumulated run times. If it is supplied with a global timer list, it also keeps track of time measurements in nested function calls.

Timers can be started both nested and iterated:

For many use cases, the existing global timers suffice, and those are orchestrated automatically, namely by moduleGuard objects. If the goblinController::logTimers flag is set, timer reports are written whenever module guards are destructed.
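
The statistics kept per timer can be sketched as follows. This is an illustrative reimplementation of the bookkeeping described above, not the goblinTimer class itself - method and member names are assumptions:

    #include <algorithm>

    // Sketch of the bookkeeping a run time statistics timer performs:
    // each measured interval updates the previous, accumulated, minimum
    // and maximum run times, and the number of measurements from which
    // the average follows.
    class StatTimer
    {
    public:
        void Report(double seconds)
        {
            prevTime  = seconds;
            accTime  += seconds;
            minTime   = (nMeasurements == 0) ? seconds : std::min(minTime, seconds);
            maxTime   = (nMeasurements == 0) ? seconds : std::max(maxTime, seconds);
            ++nMeasurements;
        }

        double PrevTime() const { return prevTime; }
        double AccTime()  const { return accTime; }
        double MinTime()  const { return minTime; }
        double MaxTime()  const { return maxTime; }
        double AvgTime()  const
        {
            return (nMeasurements > 0) ? accTime / nMeasurements : 0.0;
        }

    private:
        double prevTime = 0.0, accTime = 0.0, minTime = 0.0, maxTime = 0.0;
        unsigned long nMeasurements = 0;
    };

In the library, the measured intervals come from the POSIX times() call; the sketch only shows how the five statistics relate to each other.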

[See API]


Filtering the logging output

Most high-level methods generate some text output, either for debugging or exemplification. The logging information is classified inline by one of the following enum values:

Except for the classes LOG_RES2 and LOG_METH2, the amount of output should not depend on the instance size. Code for these two classes is only conditionally compiled, depending on the macro _LOGGING_.

    // Write all nodes adjacent to node u on a single log line:
    // LogStart() opens the line, LogAppend() extends it, LogEnd() closes it
    THandle LH = LogStart(LOG_METH2,"Adjacent nodes: ");
    TArc a = First(u);

    do
    {
        sprintf(CT.logBuffer,"%ld ", EndNode(a));
        LogAppend(LH,CT.logBuffer);
        a = Right(a,u);
    }
    while (a != First(u));

    LogEnd(LH);
    

There are runtime variables to filter the effective output:

Feasible values are 0 and 1, except for goblinController::logMeth and goblinController::logRes, where a value of 2 requests more detailed output.

Before routing a message to file, the goblinController attaches the module ID and the module nesting / indentation level.
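
The filtering and decoration step can be sketched as follows. Member names, the prefix layout and the indentation width are illustrative assumptions, not the actual goblinController format:

    #include <sstream>
    #include <string>

    // A message passes the filter only if its class is enabled at a
    // sufficient level (0 = off, 1 = on, 2 = detailed)
    struct LogFilter
    {
        int logMeth = 1;   // methodological output
        int logRes  = 1;   // result output

        bool Passes(bool isMethClass, int requiredLevel) const
        {
            return (isMethClass ? logMeth : logRes) >= requiredLevel;
        }
    };

    // Accepted messages are prefixed by the module ID and indented
    // according to the module nesting depth
    std::string DecorateLine(unsigned moduleID, unsigned nestingDepth,
                             const std::string& message)
    {
        std::ostringstream out;
        out << "[" << moduleID << "] "
            << std::string(2 * nestingDepth, ' ')
            << message;
        return out.str();
    }

With this decoration, nested module output is visually grouped under its parent in the log file.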

[See API]


Tracing of method executions

Tracing a computation means saving certain intermediate object states to file. It is possible to use trace points as break points. In that case, the trace file is written by the method execution thread, re-read by the user interface, and the method execution thread waits for feedback from the GUI.

Activated tracing produces considerable computational overhead - at least for writing the trace files. Some trace objects are not generated explicitly in regular computations (e.g. branch trees in the branch & bound method), so the overhead may also include the generation and layout of data objects.

The following trace levels are implemented:

A trace point is either conditional - when goblinController::Trace() is called - or unconditional - when managedObject::Display() is called.

[See API]


Upper and lower optimization bounds

When a module guard is generated, the lower and upper bound are initialized to -InfFloat and +InfFloat respectively, and can be improved by using moduleGuard::SetBounds(), moduleGuard::SetLowerBound() and moduleGuard::SetUpperBound(). If goblinController::logGaps is set, every improvement of bounds is logged when one of these methods is called. Each of these methods checks that the bounds really improve, and that the lower bound does not overrun the upper bound.

By using the moduleGuard::SYNC_BOUNDS specifier, the bounds of a module guard are identified with the respective bounds of the parent module guard. This makes it possible to share lower and upper bounding code among different modules, which is especially useful when one method determines potential solutions and another method determines dual bounds for the same optimization problem. Note that this synchronization only works if the parent module refers to the same timer!
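
The monotone bound maintenance described above can be sketched like this. In this illustration, rejected updates are signalled by the return value; the actual library may instead raise an error - class and method names here are assumptions:

    #include <limits>

    // A lower bound may only increase, an upper bound only decrease,
    // and the lower bound must never overrun the upper bound.
    class BoundTracker
    {
    public:
        BoundTracker()
            : lower(-std::numeric_limits<double>::infinity()),
              upper( std::numeric_limits<double>::infinity()) {}

        // Returns true if the bound actually improved
        bool SetLowerBound(double value)
        {
            if (value <= lower || value > upper) return false;
            lower = value;
            return true;
        }

        bool SetUpperBound(double value)
        {
            if (value >= upper || value < lower) return false;
            upper = value;
            return true;
        }

        double Lower() const { return lower; }
        double Upper() const { return upper; }

    private:
        double lower, upper;
    };

Once lower and upper bound coincide, the optimum objective value is proven.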

[See API]


Progress counting

This feature is enabled by the macro _PROGRESS_. Unlike timers, which return absolute values, goblinController::ProgressCounter() returns the estimated fraction of executed computation steps relative to the total. The feature is useful either in a multi-threaded application or if a special trace event handler has been registered to report on the computational progress.

In the simplest use case, an algorithm cyclically publishes this fraction by calling moduleGuard::SetProgressCounter(). Things are somewhat more complicated if there is a predefined number of computation steps, or on nested module calls:

Every module guard stores three values for the progress counting functionality:

When moduleGuard::ProgressStep() is called without a value, it realizes the forecasted improvement, and this value is maintained for subsequent steps.

The forecasting feature becomes important on nested module calls: Evaluating a global progress value means scanning up the stack of module executions, ending at the top-level module. At every level, the fractional progress value is computed and multiplied with the forecasted improvement in the parent module. Then the current progress of the parent module is added, and one tracks back in the module execution stack.

In one extreme case, none of the modules on the execution stack has been instrumented explicitly, except for the innermost one. Then the fractional progress value of this module is raised to the top level (as both the forecasted and the maximum progress value are 1.0 in the ancestor modules).
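
The evaluation over the execution stack can be sketched as follows. The frame layout and field names are assumptions made for illustration; they mirror the three stored values mentioned above:

    #include <vector>

    // One frame per module execution on the implicit stack
    struct ProgressFrame
    {
        double progress;      // steps done, in this module's own scale
        double maxProgress;   // total steps, in this module's own scale
        double forecast;      // share of maxProgress assigned to the active child
    };

    // stack.front() is the top-level module, stack.back() the innermost one
    double GlobalProgress(const std::vector<ProgressFrame>& stack)
    {
        double fraction = 0.0;

        for (auto it = stack.rbegin(); it != stack.rend(); ++it)
        {
            // The child's fraction is scaled by the forecast reserved
            // for it, then the parent's own progress is added on top
            fraction = (it->progress + fraction * it->forecast) / it->maxProgress;
        }

        return fraction;   // in [0,1], relative to the top-level module
    }

An uninstrumented ancestor corresponds to a frame with progress 0.0, maxProgress 1.0 and forecast 1.0 - such a frame passes the child's fraction through unchanged, which is exactly the extreme case described above.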

If a module sets fractional progress values by using moduleGuard::SetProgressCounter(), this should be preceded by a call InitProgressCounter(1.0,0.0) so that nested modules cannot influence the global progress value.

[See API]


Heap Monitoring

This feature is enabled by the macro _HEAP_MON_. Then new and delete are reimplemented such that applications can keep track of memory usage, namely via the global variables goblinHeapSize, goblinMaxSize, goblinNFragments, goblinNAllocs and goblinNObjects.

If the feature is enabled, the size of every memory fragment increases by one pointer. If it is disabled, the global variables still exist, but all with zero values. Needless to say, this feature has a considerable impact on running times.
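
The technique can be sketched as follows: the global allocation operators are replaced, and each fragment is enlarged by one header word recording its size, so that the counters can be decremented on deallocation. The counter names mirror the globals mentioned above, but the code is an illustration, not the library implementation (in particular, alignment handling is simplified):

    #include <cstdlib>
    #include <new>

    static long heapSize   = 0;   // currently allocated bytes
    static long maxSize    = 0;   // high-water mark
    static long nAllocs    = 0;   // total number of allocations
    static long nFragments = 0;   // currently live fragments

    void* operator new(std::size_t size)
    {
        // Reserve one extra word in front of the fragment for its size
        std::size_t* raw =
            static_cast<std::size_t*>(std::malloc(size + sizeof(std::size_t)));
        if (!raw) throw std::bad_alloc();

        *raw = size;
        heapSize += static_cast<long>(size);
        if (heapSize > maxSize) maxSize = heapSize;
        ++nAllocs;
        ++nFragments;

        return raw + 1;
    }

    void operator delete(void* ptr) noexcept
    {
        if (!ptr) return;

        std::size_t* raw = static_cast<std::size_t*>(ptr) - 1;
        heapSize -= static_cast<long>(*raw);
        --nFragments;
        std::free(raw);
    }

    // Sized deallocation (C++14 and later) must be forwarded as well
    void operator delete(void* ptr, std::size_t) noexcept
    {
        ::operator delete(ptr);
    }

Since the default array forms new[] and delete[] forward to these operators, array allocations are covered as well.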

[See API]


[See manual page index]