hdfs: Store messages on the Hadoop Distributed File System (HDFS)

Starting with version 3.7, AxoSyslog can store log messages as plain-text files on the Hadoop Distributed File System (HDFS), allowing you to keep your log data on a distributed, scalable file system. This is especially useful if you have large volumes of log messages that would be difficult to store otherwise, or if you want to process your messages using Hadoop tools (for example, Apache Pig).

Note the following limitations when using the AxoSyslog hdfs destination:

  • Since AxoSyslog uses the official Java HDFS client, the hdfs destination has significant memory usage (about 400 MB).

  • You cannot set when log messages are flushed. Hadoop performs this action automatically, depending on its configured block size and the amount of data received. The AxoSyslog application has no way to influence when the messages are actually written to disk, so it cannot guarantee that a message sent to HDFS has actually been persisted. When using flow-control, AxoSyslog acknowledges a message as written to disk as soon as it passes the message to the HDFS client (see the example log path after this list). This method is only as reliable as your HDFS environment.
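
The following minimal sketch shows such a flow-controlled log path. The source name s_network and the destination name d_hdfs are placeholders, not names defined by the hdfs driver itself:

    log {
        source(s_network);
        destination(d_hdfs);
        flags(flow-control);
    };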

Declaration:

    @include "scl.conf"

    hdfs(
        client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:<path-to-preinstalled-hadoop-libraries>")
        hdfs-uri("hdfs://NameNode:8020")
        hdfs-file("<path-to-logfile>")
    );

Example: Storing log files on HDFS

The following example defines an hdfs destination using only the required parameters.

    @include "scl.conf"

    destination d_hdfs {
        hdfs(
            client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/libs")
            hdfs-uri("hdfs://10.140.32.80:8020")
            hdfs-file("/user/log/logfile.txt")
        );
    };
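
If your AxoSyslog version supports macros in the hdfs-file() option, you can also split the output by message properties. The following sketch is a hypothetical variant of the example above: verify macro support in your version's documentation before relying on it.

    destination d_hdfs_per_host {
        hdfs(
            client-lib-dir("/opt/syslog-ng/lib/syslog-ng/java-modules/:/opt/hadoop/libs")
            hdfs-uri("hdfs://10.140.32.80:8020")
            # ${HOST} expands to the sending host, creating one log file per host
            hdfs-file("/user/log/${HOST}/logfile.txt")
        );
    };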

The hdfs() driver is actually a reusable configuration snippet that receives log messages through the Java language-binding of AxoSyslog. For details on using or writing such configuration snippets, see Reusing configuration blocks. You can find the source of the hdfs configuration snippet on GitHub.