

Extending the library

Writing your own sinks
Writing your own sources
Writing your own attributes
Extending library settings support

Writing your own sinks

#include <boost/log/sinks/basic_sink_backend.hpp>

As was described in the Design overview section, sinks consist of two parts: frontend and backend. Frontends are provided by the library and usually do not need to be reimplemented. Thanks to frontends, implementing backends is much easier than it would be otherwise: all filtering, formatting and thread synchronization are done there.

In order to develop a sink backend, you derive your class from either basic_sink_backend or basic_formatted_sink_backend, depending on whether your backend requires formatted log records or not. Both base classes define a set of types that are required to interface with sink frontends. One of these types is frontend_requirements.

Frontend requirements
#include <boost/log/sinks/frontend_requirements.hpp>

In order to work with sink backends, frontends use the frontend_requirements type defined by all backends. The type combines one or several requirement tags:

  • synchronized_feeding. If the backend has this requirement, it expects log records to be passed from the frontend in a synchronized manner (i.e. only one thread should be feeding a record at a time). Note that different threads may be feeding different records; the requirement merely states that there will be no concurrent feeds.
  • concurrent_feeding. This requirement extends synchronized_feeding by allowing different threads to feed records concurrently. The backend implements all necessary thread synchronization in this case.
  • formatted_records. The backend expects formatted log records. The frontend implements formatting to a string with character type defined by the char_type typedef within the backend. The formatted string will be passed along with the log record to the backend. The basic_formatted_sink_backend base class automatically adds this requirement to the frontend_requirements type.
  • flushing. The backend supports flushing its internal buffers. If the backend indicates this requirement, it has to implement the flush method taking no arguments; this method will be called by the frontend when the sink is flushed.
[Tip]

By choosing either of the thread synchronization requirements you effectively allow or prohibit certain sink frontends from being used with your backend.

Multiple requirements can be combined into the frontend_requirements type with the combine_requirements metafunction:

typedef sinks::combine_requirements<
    sinks::synchronized_feeding,
    sinks::formatted_records,
    sinks::flushing
>::type frontend_requirements;

It must be noted that synchronized_feeding and concurrent_feeding should not be combined, as that would make the synchronization requirement ambiguous. synchronized_feeding is a stricter requirement than concurrent_feeding: a backend that allows concurrent feeding is also capable of handling synchronized feeding.

The has_requirement metafunction can be used to test for a specific requirement in the frontend_requirements typedef.
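For example, user code that sets up sinks can check at compile time that a backend provides a particular capability. A minimal sketch, assuming the frontend_requirements typedef composed above and the <boost/static_assert.hpp> header:

// Verify at compile time that the combined requirements include flushing
BOOST_STATIC_ASSERT((sinks::has_requirement<
    frontend_requirements,
    sinks::flushing
>::value));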

Minimalistic sink backend

As an example of the basic_sink_backend class usage, let's implement a simple statistical information collector backend. Assume we have a network server and we want to monitor how many incoming connections are active and how much data was sent or received. The collected information should be written to a CSV-file every minute. The backend definition could look something like this:

// The backend collects statistical information about network activity of the application
class stat_collector :
    public sinks::basic_sink_backend<
        sinks::combine_requirements<
            sinks::synchronized_feeding,                                        1
            sinks::flushing                                                     2
        >::type
    >
{
private:
    // The file to write the collected information to
    std::ofstream m_csv_file;

    // Here goes the data collected so far:
    // Active connections
    unsigned int m_active_connections;
    // Sent bytes
    unsigned int m_sent_bytes;
    // Received bytes
    unsigned int m_received_bytes;

    // The number of collected records since the last write to the file
    unsigned int m_collected_count;
    // The time when the collected data has been written to the file last time
    boost::posix_time::ptime m_last_store_time;

public:
    // The constructor initializes the internal data
    explicit stat_collector(const char* file_name);

    // The function consumes the log records that come from the frontend
    void consume(logging::record_view const& rec);
    // The function flushes the file
    void flush();

private:
    // The function resets statistical accumulators to initial values
    void reset_accumulators();
    // The function writes the collected data to the file
    void write_data();
};

1

we will have to store internal data, so let's require the frontend to synchronize feeding calls to the backend

2

also enable flushing support
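The constructor was only declared in the class definition above; a possible implementation (a sketch, the exact initialization is an assumption) simply opens the file and resets the counters:

// A sketch of the constructor: open the CSV file and reset the counters
stat_collector::stat_collector(const char* file_name) :
    m_csv_file(file_name, std::ofstream::out),
    m_active_connections(0),
    m_last_store_time(boost::posix_time::microsec_clock::universal_time())
{
    reset_accumulators();
}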

As you can see, the public interface of the backend is quite simple. Only the consume and flush methods are called by frontends. The consume function is called every time a logging record passes filtering in the frontend. The record, as was stated before, contains a set of attribute values and the message string. Since we have no need for the record message, we will ignore it for now. But from the other attributes we can extract the statistical data to accumulate and write to the file. We can use attribute keywords and value visitation to accomplish this.

BOOST_LOG_ATTRIBUTE_KEYWORD(sent, "Sent", unsigned int)
BOOST_LOG_ATTRIBUTE_KEYWORD(received, "Received", unsigned int)

// The function consumes the log records that come from the frontend
void stat_collector::consume(logging::record_view const& rec)
{
    // Accumulate statistical readings
    if (rec.attribute_values().count("Connected"))
        ++m_active_connections;
    else if (rec.attribute_values().count("Disconnected"))
        --m_active_connections;
    else
    {
        namespace phoenix = boost::phoenix;
        logging::visit(sent, rec, phoenix::ref(m_sent_bytes) += phoenix::placeholders::_1);
        logging::visit(received, rec, phoenix::ref(m_received_bytes) += phoenix::placeholders::_1);
    }
    ++m_collected_count;

    // Check if it's time to write the accumulated data to the file
    boost::posix_time::ptime now = boost::posix_time::microsec_clock::universal_time();
    if (now - m_last_store_time >= boost::posix_time::minutes(1))
    {
        write_data();
        m_last_store_time = now;
    }
}

// The function writes the collected data to the file
void stat_collector::write_data()
{
    m_csv_file << m_active_connections
        << ',' << m_sent_bytes
        << ',' << m_received_bytes
        << std::endl;
    reset_accumulators();
}

// The function resets statistical accumulators to initial values
void stat_collector::reset_accumulators()
{
    m_sent_bytes = m_received_bytes = 0;
    m_collected_count = 0;
}

Note that we used Boost.Phoenix to automatically generate visitor function objects for attribute values.
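For reference, the code that emits such records could attach the readings with the add_value manipulator; a sketch, where the logger lg and the bytes_transferred variable are assumptions for illustration:

// Hypothetical logging call on the networking side: attach the number of
// bytes just sent so that the collector can accumulate it
BOOST_LOG(lg) << logging::add_value(sent, static_cast< unsigned int >(bytes_transferred));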

The last bit of implementation is the flush method. It is used to flush all buffered data to the external storage, which is a file in our case. The method can be implemented in the following way:

// The function flushes the file
void stat_collector::flush()
{
    // Store any data that may have been collected since the last write to the file
    if (m_collected_count > 0)
    {
        write_data();
        m_last_store_time = boost::posix_time::microsec_clock::universal_time();
    }

    m_csv_file.flush();
}
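Before the backend can receive any records it has to be wrapped into a sink frontend and registered in the logging core. A minimal sketch, where the file name is an assumption:

// Construct the backend and wrap it into a frontend that serializes record feeding
boost::shared_ptr< stat_collector > backend(new stat_collector("stat.csv"));
boost::shared_ptr< sinks::synchronous_sink< stat_collector > > sink(
    new sinks::synchronous_sink< stat_collector >(backend));

// Register the sink in the logging core
logging::core::get()->add_sink(sink);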

You can find the complete code of this example here.

Formatting sink backend

As an example of a formatting sink backend, let's implement a sink that will display text notifications for every log record passed to it.

[Tip]

Real world applications would probably use some GUI toolkit API to display notifications, but GUI programming is out of the scope of this documentation. In order to display notifications we shall use an external program which does just that. In this example we shall employ the notify-send program, which is available on Linux (Ubuntu/Debian users can install it with the libnotify-bin package; other distros should also have it available in their package repositories). The program takes the notification parameters on the command line, displays the notification in the current desktop environment and then exits. Other platforms may have similar tools.

The definition of the backend is very similar to what we have seen in the previous section:

// The backend starts an external application to display notifications
class app_launcher :
    public sinks::basic_formatted_sink_backend<
        char,                                                                   1
        sinks::synchronized_feeding                                             2
    >
{
public:
    // The function consumes the log records that come from the frontend
    void consume(logging::record_view const& rec, string_type const& command_line);
};

1

target character type

2

in order not to spawn too many application instances we require records to be processed serially

The first thing to notice is that the app_launcher backend derives from basic_formatted_sink_backend rather than basic_sink_backend. This base class accepts the character type in addition to the requirements. The specified character type defines the target string type the formatter will compose in the frontend and it typically corresponds to the underlying API the backend uses to process records. It must be mentioned that the character type the backend requires is not related to the character types of string attribute values, including the message text. The formatter will take care of character code conversion when needed.

The second notable difference from the previous example is that the consume method takes an additional string parameter besides the log record. This is the result of formatting. The string_type type is defined by the basic_formatted_sink_backend base class and corresponds to the requested character type.

We don't need to flush any buffers in this example, so we didn't specify the flushing requirement and omitted the flush method in the backend. Although we don't need any synchronization in our backend, we specified the synchronized_feeding requirement so that we don't spawn multiple instances of the notify-send program and cause a "fork bomb".

Now, the consume implementation is trivial:

// The function consumes the log records that come from the frontend
void app_launcher::consume(logging::record_view const& rec, string_type const& command_line)
{
    std::system(command_line.c_str());
}

So the formatted string is expected to actually be a command line to start the application. The exact application name and arguments are to be determined by the formatter. This approach adds flexibility because the backend can be used for different purposes and updating the command line is as easy as updating the formatter.

The sink can be configured with the following code:

BOOST_LOG_ATTRIBUTE_KEYWORD(process_name, "ProcessName", std::string)
BOOST_LOG_ATTRIBUTE_KEYWORD(caption, "Caption", std::string)

// Custom severity level formatting function
std::string severity_level_as_urgency(
    logging::value_ref< logging::trivial::severity_level, logging::trivial::tag::severity > const& level)
{
    if (!level || level.get() == logging::trivial::info)
        return "normal";
    logging::trivial::severity_level lvl = level.get();
    if (lvl < logging::trivial::info)
        return "low";
    else
        return "critical";
}

// The function initializes the logging library
void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    typedef sinks::synchronous_sink< app_launcher > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t());

    const std::pair< const char*, const char* > shell_decorations[] =
    {
        std::pair< const char*, const char* >("\"", "\\\""),
        std::pair< const char*, const char* >("$", "\\$"),
        std::pair< const char*, const char* >("!", "\\!")
    };

    // Make the formatter generate the command line for notify-send
    sink->set_formatter
    (
        expr::stream << "notify-send -t 2000 -u "
            << boost::phoenix::bind(&severity_level_as_urgency, logging::trivial::severity.or_none())
            << expr::if_(expr::has_attr(process_name))
               [
                    expr::stream << " -a '" << process_name << "'"
               ]
            << expr::if_(expr::has_attr(caption))
               [
                    expr::stream << " \"" << expr::char_decor(shell_decorations)[ expr::stream << caption ] << "\""
               ]
            << " \"" << expr::char_decor(shell_decorations)[ expr::stream << expr::message ] << "\""
    );

    core->add_sink(sink);

    // Add attributes that we will use
    core->add_global_attribute("ProcessName", attrs::current_process_name());
}

The most interesting part is the sink setup. The synchronous_sink frontend (as well as any other frontend) will detect that the app_launcher backend requires formatting and enable the corresponding functionality. The set_formatter method becomes available and can be used to set the formatting expression that composes the command line to start the notify-send program. We used attribute keywords to identify particular attribute values in the formatter. Notice that string attribute values have to be preprocessed so that special characters interpreted by the shell are escaped in the command line. We achieve that by applying the char_decor decorator with our custom replacement map. After the sink is configured we also add the current process name attribute to the core so that we don't have to add it to every record.

After all this is done, we can finally display some notifications:

void test_notifications()
{
    BOOST_LOG_TRIVIAL(debug) << "Hello, it's a simple notification";
    BOOST_LOG_TRIVIAL(info) << logging::add_value(caption, "Caption text") << "And this notification has caption as well";
}

The complete code of this example is available here.

