FilterX
FilterX is an experimental feature currently under development. Feedback is most welcome on Discord and GitHub.
Available in AxoSyslog 4.8.1 and later.
Note
FilterX (developed by Axoflow) is a replacement for syslog-ng filters, parsers, and rewrite rules. It has its own syntax, allowing you to filter, parse, manipulate, and rewrite variables and complex data structures, and also compare them with various operators.
FilterX is a consistent and comprehensive reimplementation of several core features with improved performance, proper typing support, and the ability to handle multi-level typed objects.
FilterX helps you to route, parse, and modify your logs: a message passes through the FilterX block in a log path only if all the FilterX statements evaluate to true for the particular message. If a log statement includes multiple FilterX blocks, the messages are sent to the destinations only if they pass all FilterX blocks of the log path. For example, you can select only the messages originating from a particular host, or create complex filters using operators, functions, and logical expressions.
FilterX blocks consist of a list of FilterX statements, each statement evaluates either to truthy or falsy. If a message matches all FilterX statements, it passes through the FilterX block to the next element of the log path, for example, the destination.
- Truthy values are:
  - complex values (for example, a datetime object),
  - non-empty lists and objects,
  - non-empty strings,
  - non-zero numbers,
  - the `true` boolean object.
- Falsy values are:
  - empty strings,
  - the `false` value,
  - the `0` value,
  - `null`.
Statements that result in an error (for example, if a comparison cannot be evaluated because of type error, or a field or a dictionary referenced in the statement doesn’t exist or is unset) are also treated as falsy.
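For example, a minimal block illustrating these rules (the statements shown are illustrative):

```
filterx {
    ${HOST};             # truthy only if ${HOST} exists and isn't empty
    ${MESSAGE} != "";    # truthy for non-empty messages
    ${nonexistent} == 5; # falsy: comparing an unset field is an error,
                         # so the message is dropped from this log path
};
```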
Define a filterx block
You can define `filterx` blocks inline in your log statements. (If you want to reuse `filterx` blocks, see Reuse FilterX blocks.)
For example, the following FilterX statement selects the messages that contain the word `deny` and come from the host `example`.
log {
    source(s1);
    filterx {
        ${HOST} == "example";
        ${MESSAGE} =~ "deny";
    };
    destination(d1);
};
You can use `filterx` blocks together with other blocks in a log path, for example, use a parser before or after the `filterx` block if needed.
FilterX statements
A FilterX block contains one or more FilterX statements. The order of the statements is important, as they are processed sequentially. If any of the statements is falsy (or results in an error), AxoSyslog drops the message from that log path.
FilterX statements can be one of the following:
- A comparison, for example, `${HOST} == "my-host";`. This statement is true only for messages where the value of the `${HOST}` field is `my-host`. Such simple comparison statements can be the equivalents of traditional filter functions.
- A value assignment for a name-value pair or a local variable, for example, `${my-field} = "bar";`. The left-side variable automatically gets the type of the right-hand expression. Assigning the `false` value to a variable (`${my-field} = false;`) is a valid statement that doesn't automatically cause the FilterX block to return as false.
- Existence of a variable or field. For example, the `${HOST};` expression is true only if the `${HOST}` macro exists and isn't empty.
- A conditional statement (`if (expr) { ... } elif (expr) { ... } else { ... };`), which allows you to evaluate complex decision trees.
- A declaration of a pipeline variable, for example, `declare my_pipeline_variable = "something";`.
- A FilterX action. This can be one of the following:
  - `drop;`: Intentionally drop the message. This means that the message was successfully processed, but discarded. Processing the dropped message stops at the `drop` statement; subsequent sections or other branches of the FilterX block won't process the message. For example, you can use this to discard unneeded messages, like debug logs. Available in AxoSyslog 4.9 and later.
  - `done;`: Don't execute the rest of the FilterX block, and return truthy. This is an early return that you can use to avoid unnecessary processing, for example, when the message matches an early classification in the block. Available in AxoSyslog 4.9 and later.
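A sketch of how `drop` and `done` can short-circuit processing (the match strings and the `${fields.device}` classification field are illustrative, not prescribed names):

```
filterx {
    if (${MESSAGE} =~ "debug") {
        drop;    # discard debug messages; processing stops here
    };
    if (startswith(${MESSAGE}, "%ASA-")) {
        ${fields.device} = "cisco-asa";  # hypothetical classification field
        done;    # early return: skip the rest of the block
    };
    # more expensive processing for unclassified messages continues here
};
```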
Note
- The `true;` and `false;` literals are also valid as statements. They can be useful in complex conditional (if/elif/else) statements.
- A name-value pair or a variable in itself is also a statement, for example, `${HOST};`. If the name-value pair or variable is empty or doesn't exist, the statement is considered falsy.
When you assign the value of a variable using another variable (for example, `${MESSAGE} = ${HOST};`), AxoSyslog copies the current value of the `${HOST}` variable. If a statement later changes the value of the `${HOST}` field, the `${MESSAGE}` field won't change. For example:
filterx {
    ${HOST} = "first-hostname";
    ${MESSAGE} = ${HOST}; # The value of ${MESSAGE} is first-hostname
    ${HOST} = "second-hostname"; # The value of ${MESSAGE} is still first-hostname
};
The same is true for complex objects, like JSON, for example:
js = json({
    "key": "value",
    "second-key": "another-value"
});
${MESSAGE} = js;
js.third_key = "third-value-not-available-in-MESSAGE";
You can use FilterX operators and functions.
Data model and scope
Each FilterX block can access data from the following elements.
- Macros and name-value pairs of the message being processed (for example, `$PROGRAM`). The names of macros and name-value pairs begin with the `$` character. If you define a new variable in a FilterX block and its name begins with the `$` character, it's automatically added to the name-value pairs of the message.
Note
Using curly braces around macro names is not mandatory, and the `"$MESSAGE"` and `"${MESSAGE}"` formats are equivalent. If the name contains only alphanumeric characters and the underscore character, you don't need the curly braces. If it contains any other characters (like a hyphen (`-`) or a dot (`.`)), you need to add the curly braces; therefore, it's best to always use curly braces.
Names are case-sensitive, so `"$message"` and `"$MESSAGE"` are not the same.
- Local variables. These have a name that doesn't start with a `$` character, for example, `my_local_variable`. Local variables are available only in the FilterX block they're defined in.
- Pipeline variables. These are similar to local variables, but must be declared before first use, for example, `declare my_pipeline_variable = 5;`. Pipeline variables are available in the current and all subsequent FilterX blocks. They're global in the sense that you can access them from multiple FilterX blocks, but note that they're still attached to the particular message that is processed, so the values of pipeline variables aren't preserved between messages.
  If you don't need to pass the variable to another FilterX block, use local variables, as pipeline variables have a slight performance overhead.
Note
- If you want to pass data between two FilterX blocks of a log statement, use pipeline variables. That has better performance than name-value pairs.
- Local and pipeline variables aren’t available in destination templates. For details, see FilterX variables in destinations.
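For example, a sketch of passing a value between two FilterX blocks of a log path with a pipeline variable (the source, destination, and variable names are illustrative):

```
log {
    source(s1);
    filterx {
        declare my_pipeline_variable = "classified";  # declared before first use
    };
    filterx {
        ${MESSAGE} = my_pipeline_variable;  # accessible in subsequent blocks
    };
    destination(d1);
};
```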
Variable names
FilterX variable names have more restrictions than generic name-value pair names. They:
- can contain alphanumeric characters and the underscore character (`_`), but cannot contain hyphens,
- cannot begin with numbers,
- can begin with an underscore.
Note
Although you can re-use type names and function names as variable names, that’s not considered good practice and should be avoided.
Variable types
Variables can have the following types. All of these types have a matching function that can be used to type cast something into the specific type.
Assign values
To assign value to a name-value pair or a variable, use the following syntax:
<variable-name> = <value-of-the-variable>;
In most cases you can omit the type, and AxoSyslog automatically assigns the type based on the syntax of the value, for example:
mystring = "string-value";
myint = 3;
mydouble = 2.5;
myboolean = true;
When needed, you can explicitly specify the type of the variable, and AxoSyslog attempts to convert the value to the specified type:
<variable-name> = <variable-type>(<value-of-the-variable>);
For example:
filterx {
${MESSAGE} = string("Example string message");
};
You can also assign the value of other name-value pairs, for example:
filterx {
${MESSAGE} = ${HOST};
};
When processing RFC5424-formatted (IETF-syslog) messages, you can modify the SDATA part of the message as well. The following example sets the sequenceId:
filterx {
${.SDATA.meta.sequenceId} = 55555;
};
Note
When assigning values to name-value pairs, you cannot modify hard macros.
Template functions
You can use the traditional template functions of AxoSyslog to access and format name-value pairs. For that you must enclose the template function expression between double-quotes, for example:
${MESSAGE} = "$(format-json --subkeys values.)";
However, note that template functions cannot access the local and pipeline variables created in FilterX blocks.
Delete values
To delete a value without deleting the object itself (for example, a name-value pair), use the `null` value, for example:
${MY-NV-PAIR-KEY} = null;
To delete the name-value pair (or a key from an object), use the `unset` function:
unset(${MY-NV-PAIR-KEY});
unset(${MY-JSON}["key-to-delete"]);
To unset every empty field of an object, use the `unset_empties` function.
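For example, a minimal sketch of `unset_empties` (the `recursive` option is an assumption; verify the option names against the FilterX function reference for your version):

```
unset_empties(${MY-JSON});                  # remove empty fields from the object
unset_empties(${MY-JSON}, recursive=true);  # assumed option: also process nested objects
```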
Add two values
The plus operator (`+`) adds two arguments, if possible. (For example, you can't add two datetime values.)
- You can use it to add two numbers (two integers, or two double values). If you add a double to an integer, the result is a double.
- Adding two strings concatenates the strings. Note that if you want to have spaces between the added elements, you have to add them manually, like in Python, for example:
  ${MESSAGE} = ${HOST} + " first part of the message," + " second part of the message" + "\n";
- Adding two lists merges the lists. Available in AxoSyslog 4.9 and later.
- Adding two dicts updates the dict with the values of the second operand. Available in AxoSyslog 4.9 and later. For example:
  x = {"key1": "value1", "key2": "value1"};
  y = {"key3": "value1", "key2": "value2"};
  ${MESSAGE} = x + y; # ${MESSAGE} value is {"key1": "value1", "key3": "value1", "key2": "value2"};
Complex types: lists, dicts, and JSON
The list and dict types are similar to their Python counterparts. FilterX uses JSON to represent generic dictionary and list types, but you can create other, specific dictionary and list types as well (currently for OTEL, for example, `otel_kvlist` or `otel_array`). All supported dictionary and list types are compatible with each other, and you can convert them to and from each other, copy values between them (retaining the type), and so on.
For example:
my_list = []; # Creates an empty list (which defaults to a JSON list)
my_dict = {}; # Creates an empty dictionary (which defaults to a JSON object)
my_list2 = json_array(); # Creates an empty JSON list
my_dict2 = json(); # Creates an empty JSON object
You can add elements to lists and dictionaries like this:
list = json_array(); # Create an empty JSON list
#list = otel_array(); # Create an OTEL list
list += ["first_element"]; # Append entries to the list
list += ["second_element"];
list += ["third_element"];
${MESSAGE} = list;
You can also create the list and assign values in a single step:
list = json_array(["first_element", "second_element", "third_element"]);
${MESSAGE} = list;
You can refer to the elements using an index (starting with `0`):
list = json_array(); # Create an empty JSON list
list[0] = "first_element"; # Add entries by index
list[1] = "second_element";
list[2] = "third_element";
${MESSAGE} = list;
In all three cases, the value of `${MESSAGE}` is the same JSON array: `["first_element", "second_element", "third_element"]`.
You can define JSON objects using the `json()` type, for example:
js1 = json();
js1 += {
    "body": "mystring",
    "time_unix_nano": 123456789,
    "attributes": {
        "int": 42,
        "flag": true
    }
};
js2 = json({"key": "value"});
Naturally, you can assign values from other variables to an object, for example:
list = json_array(["foo", "bar", "baz"]);
${MESSAGE} = json({
    "key": "value",
    "list": list
});
or
js = json({
    "key": ${MY-NAME-VALUE-PAIR},
    "key-from-expression": isset(${HOST}) ? ${HOST} : "default-hostname",
    "list": list
});
Within a FilterX block, you can access the fields of complex data types by using indexes and the dot notation, for example:
- dot notation: `js.key`
- indexing: `js["key"]`
- or mixed mode if needed: `js.list[1]`
When referring to the field of a name-value pair (which begins with the `$` character), place the dot or the square bracket outside the curly brackets surrounding the name of the name-value pair, for example: `${MY-LIST}[2]` or `${MY-OBJECT}.mykey`. If the name of the key contains characters that are not permitted in FilterX variable names, for example, a hyphen (`-`), use the bracketed syntax and enclose the key in double quotes: `${MY-LIST}["my-key-name"]`.
You can add two lists or two dicts using the Plus operator.
Operators
FilterX has the following operators.
- Comparison operators: `==`, `<`, `<=`, `>=`, `>`, `!=`, `===`, `!==`, `eq`, `lt`, `le`, `gt`, `ge`, `ne`.
- Boolean operators: `not`, `or`, `and`.
- Dot operator (`.`) to access fields of an object, like JSON.
- Indexing operator (`[]`) to access fields of an object, like JSON.
- Plus (`+`) operator to add values and concatenate strings.
- Plus equal (`+=`) operator to add the right operand to the left.
- Ternary conditional operator: `?:`.
- Null coalescing operator: `??`.
- Regular expression (regexp) match: `=~` and `!~`.
For details, see FilterX operator reference.
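For example, the ternary and null coalescing operators can express compact conditional assignments (the `${fields.*}` names are illustrative):

```
filterx {
    ${fields.severity} = ${LEVEL_NUM} > 5 ? "low" : "high";  # ternary conditional
    ${fields.host} = ${HOST} ?? "unknown-host";              # fallback if ${HOST} is unset
};
```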
Functions
FilterX has the following built-in functions.
- `cache_json_file`: Loads an external JSON file to look up contextual information.
- `endswith`: Checks if a string ends with the specified value.
- `flatten`: Flattens the nested elements of an object.
- `format_csv`: Formats a dictionary or a list into a comma-separated string.
- `format_json`: Dumps a JSON object into a string.
- `format_kv`: Formats a dictionary into key=value pairs.
- `get_sdata`: Returns the SDATA part of an RFC5424-formatted syslog message as a JSON object.
- `has_sdata`: Checks whether the message contains RFC5424-style structured data (SDATA).
- `includes`: Checks if a string contains a specific substring.
- `isodate`: Parses a string as a date in ISODATE format.
- `is_sdata_from_enterprise`: Checks if the message contains the specified organization ID.
- `isset`: Checks that the argument exists and its value is not empty or null.
- `istype`: Checks the type of an object.
- `len`: Returns the length of an object.
- `lower`: Converts a string into lowercase characters.
- `parse_csv`: Parses a comma-separated or similar string.
- `parse_kv`: Parses a string consisting of whitespace- or comma-separated key=value pairs.
- `parse_leef`: Parses a LEEF-formatted string.
- `parse_xml`: Parses an XML object into a JSON object.
- `parse_windows_eventlog_xml`: Parses a Windows Event Log XML object into a JSON object.
- `regexp_search`: Searches a string using regular expressions.
- `regexp_subst`: Rewrites a string using regular expressions.
- `startswith`: Checks if a string begins with the specified value.
- `strptime`: Converts a string containing a date/time value, using a specified format string.
- `unset`: Deletes a name-value pair, or a field from an object.
- `unset_empties`: Deletes empty fields from an object.
- `update_metric`: Updates a labeled metric counter.
- `upper`: Converts a string into uppercase characters.
- `vars`: Lists the variables defined in the FilterX block.
For details, see FilterX function reference.
Use cases and examples
The following list shows you some common tasks that you can solve with FilterX:
- To set message fields (like macros or SDATA fields) or replace message parts: you can assign values to change parts of the message, or use one of the FilterX functions to rewrite existing values.
- To delete or unset message fields, see Delete values.
- To rename a message field, assign the value of the old field to the new one, then unset the old field. For example:
  $my_new_field = $my_old_field;
  unset($my_old_field);
- To use conditional rewrites, use conditional statements (if/elif/else) or the ternary conditional operator.
Create an iptables parser
The following example shows you how to reimplement the iptables parser in a FilterX block. The following is a sample iptables log message (with line-breaks added for readability):
Dec 08 12:00:00 hostname.example kernel: custom-prefix:IN=eth0 OUT=
MAC=11:22:33:44:55:66:aa:bb:cc:dd:ee:ff:08:00 SRC=192.0.2.2 DST=192.168.0.1 LEN=40 TOS=0x00
PREC=0x00 TTL=232 ID=12345 PROTO=TCP SPT=54321 DPT=22 WINDOW=1023 RES=0x00 SYN URGP=0
This is a normal RFC3164-formatted message logged by the kernel (where iptables logging messages originate from), and contains space-separated key-value pairs.
- First, create some filter statements to select iptables messages only:
  block filterx parse_iptables() {
      ${FACILITY} == "kern"; # Filter on the kernel facility
      ${PROGRAM} == "kernel"; # Sender application is the kernel
      ${MESSAGE} =~ "PROTO="; # The PROTO key appears in all iptables messages
  }
- To make the parsed data available under macros beginning with `${.iptables}`, like in the case of the original `iptables-parser()`, create the `${.iptables}` JSON object.
  block filterx parse_iptables() {
      ${FACILITY} == "kern"; # Filter on the kernel facility
      ${PROGRAM} == "kernel"; # Sender application is the kernel
      ${MESSAGE} =~ "PROTO="; # The PROTO key appears in all iptables messages
      ${.iptables} = json(); # Create an empty JSON object
  }
- Add a key=value parser to parse the content of the messages into the `${.iptables}` JSON object. The key=value pairs are space-separated, while an equal sign (=) separates the values from the keys.
  block filterx parse_iptables() {
      ${FACILITY} == "kern"; # Filter on the kernel facility
      ${PROGRAM} == "kernel"; # Sender application is the kernel
      ${MESSAGE} =~ "PROTO="; # The PROTO key appears in all iptables messages
      ${.iptables} = json(); # Create an empty JSON object
      ${.iptables} = parse_kv(${MESSAGE}, value_separator="=", pair_separator=" ");
  }
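Once defined, the block can be used like any other FilterX block in a log path, for example (the source and destination names are placeholders):

```
log {
    source(s_local);
    filterx(parse_iptables());
    destination(d_file);
};
```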
FilterX variables in destinations
If you’re modifying messages using FilterX (for example, you extract a value from the message and add it to another field of the message), note the following points:
- Macros and name-value pairs (variables with names beginning with the `$` character) are included in the outgoing message if the template of the destination includes them. For example, if you change the value of the `${MESSAGE}` macro, it's automatically sent to the destination if the destination template includes this macro.
- Local and pipeline variables are not included in the message; you must assign their value to a macro or name-value pair that's included in the destination template to send them to the destination.
- When sending data to `opentelemetry()` destinations, if you're modifying messages received via the `opentelemetry()` source, then you must explicitly update the original (raw) data structures in your FilterX block, otherwise the changes won't be included in the outgoing message. For details, see Modify incoming OTEL.
1 - Boolean operators in FilterX
When a log statement includes multiple filter statements, AxoSyslog sends a message to the destination only if all filters are true for the message. In other words, the filters are connected by logical `AND` operators. In the following example, no message arrives at the destination, because the filters are mutually exclusive (the hostname of a client cannot be `example1` and `example2` at the same time):
log {
    source(s1); source(s2);
    filterx { ${HOST} == "example1"; };
    filterx { ${HOST} == "example2"; };
    destination(d1); destination(d2);
};
To select the messages that come from either host `example1` or `example2`, use a single filter expression:
log {
    source(s1); source(s2);
    filterx { ${HOST} == "example1" or ${HOST} == "example2"; };
    destination(d1); destination(d2);
};
Use the `not` operator to invert boolean filters, for example, to select messages that weren't sent by host `example1`:
filterx { not ( ${HOST} == "example1" ); };
However, to select the messages that weren't sent by host `example1` or `example2`, you have to use the `and` operator (that's how boolean logic works, see De Morgan's laws for details):
filterx { not (${HOST} == "example1") and not (${HOST} == "example2"); };
Alternatively, you can use parentheses and the `or` operator to avoid this confusion:
filterx { not ( (${HOST} == "example1") or (${HOST} == "example2") ); };
The following filter statement selects the messages that contain the word `deny` and come from the host `example`.
filterx {
    ${HOST} == "example";
    ${MESSAGE} =~ "deny";
};
Note
FilterX blocks are often used together with log path flags. For details, see Log path flags.
2 - Comparing values in FilterX
In AxoSyslog you can compare macro values, templates, and variables as numerical and string values. String comparison is alphabetical: it determines if a string is alphabetically greater than or equal to another string. For details on macros and templates, see Customize message format using macros and templates.
Use the following syntax to compare macro values or templates.
filterx {
    "<macro-or-variable-or-expression>" operator "<macro-or-variable-or-expression>";
};
String and numerical comparison
You can use mathematical symbols as operators (like `==`, `!=`, `>=`), and based on the type of the arguments AxoSyslog automatically determines how to compare them. The logic behind this is similar to JavaScript:
- If both sides of the comparison are strings, the comparison is string.
- If one of the arguments is numeric, the comparison is numeric.
- Literal numbers (numbers not enclosed in quotes) are numeric.
- You can explicitly type-cast an argument into a number.
- The `bytes`, `json`, and `protobuf` types are always compared as strings.
- Currently you can't compare dictionaries and lists.
For example:
- `if (${.apache.httpversion} == 1.0)`: The right side of the `==` operator is 1.0, which is a floating-point literal (double), so the comparison is numeric.
- `if (double(${.apache.httpversion}) == "1.0")`: The left side is explicitly type-cast into double, the right side is a string (because of the quotes), so the comparison is numeric.
- `if (${.apache.request} == "/wp-admin/login.php")`: The left side is not type-cast, the right side is a string, so the comparison is string.
Note
You can still use the string operators (for example, `eq`, `ne`) if you want to; they remain available for backwards compatibility.
Example: Compare macro values
The following expression selects log messages that contain a PID (that is, the `${PID}` macro is not empty):
filterx { ${PID}; };
(It is equivalent to using the `isset()` function: `isset(${PID});`.)
The following expression selects log messages where the priority level is not `emerg`:
filterx { ${LEVEL} != "emerg"; };
The following example selects messages with priority level higher than 5.
filterx {
    ${LEVEL_NUM} > 5;
};
Make sure to:
- Enclose literal strings and templates in double quotes. For macros and variables, do not use quotes.
- Use the `$` character before macros.
Note that you can use:
- type casting anywhere where you can use templates, to apply a type to the result of the template expansion,
- any macro in the expression, including user-defined macros from parsers and classifications,
- boolean operators to combine comparison expressions.
Compare the type (strict equality)
To compare the values of operands and verify that they have the same type, use the `===` (strict equality) operator. The following example defines a string variable with the value "5" as string and uses it in different comparisons:
filterx {
    mystring = "5"; # Type is string
    mystring === 5; # false, because the right side is an integer
    mystring === "5"; # true
};
To compare only the types of variables and macros, you can use the `istype` function.
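For example, a sketch of type checks with `istype` (the type name strings, like "string" and "json_object", are assumptions based on the FilterX type names; verify them against the function reference):

```
filterx {
    mystring = "5";
    istype(mystring, "string");   # truthy: mystring is a string
    if (istype(${MY-JSON}, "json_object")) {
        ${MESSAGE} = "got a JSON object";
    };
};
```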
Strict inequality operator
Compares the values of the operands and returns `true` if they are different. Also returns `true` if the values of the operands are the same, but their types are different. For example:
"example" !== "example"; # false, because they are the same and both are strings
"1" !== 1; # true, because one is a string and the other an integer
Comparison operators
The following numerical and string comparison operators are available.

| Numerical or string operator | String operator | Meaning |
|---|---|---|
| `==` | `eq` | Equals |
| `!=` | `ne` | Not equal to |
| `>` | `gt` | Greater than |
| `<` | `lt` | Less than |
| `>=` | `ge` | Greater than or equal |
| `<=` | `le` | Less than or equal |
| `===` | | Equals and has the same type |
| `!==` | | Not equal to or has a different type |
3 - String search in FilterX
Available in AxoSyslog 4.9 and later.
You can check if a string contains a specified string using the `includes` FilterX function. The `startswith` and `endswith` functions check the beginning and ending of the strings, respectively. For example, the following expression checks if the message (`$MESSAGE`) begins with the `%ASA-` string:
startswith($MESSAGE, '%ASA-')
By default, matches are case-sensitive. For case-insensitive matches, use the `ignorecase=true` option:
startswith($MESSAGE, '%ASA-', ignorecase=true)
All three functions (`includes`, `startswith`, and `endswith`) can take a list with multiple search strings, and return true if any of them matches. This is equivalent to combining the individual searches with logical OR operators. For example, the following two expressions are equivalent:
${MESSAGE} = "%ASA-5-111010: User ''john'', running ''CLI'' from IP 0.0.0.0, executed ''dir disk0:/dap.xml"
includes($MESSAGE, ['%ASA-','john','CLI'])
includes($MESSAGE, '%ASA-') or includes($MESSAGE, 'john') or includes($MESSAGE, 'CLI')
For more complex searches, or if you need to match a regular expression, use the `regexp_search` FilterX function.
4 - Parsing data in FilterX
4.1 - CEF
Available in AxoSyslog 4.9 and later.
The `parse_cef` FilterX function parses messages formatted in the Common Event Format (CEF).
Declaration
Usage: `parse_cef(<input-string>, value_separator="=", pair_separator=" ")`
The first argument is the input message. Optionally, you can set the `pair_separator` and `value_separator` arguments to override their default values.
The `value_separator` must be a single-character string. The `pair_separator` can be a regular string.
Example
The following is a CEF-formatted message including mandatory and custom (extension) fields:
CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|KLPRCI_TaskState|Completed successfully|1|foo=foo bar=bar baz=test
The following FilterX expression parses it and converts it into JSON format:
filterx {
${PARSED_MESSAGE} = json(parse_cef(${MESSAGE}));
};
The content of the JSON object for this message will be:
{
    "version": "0",
    "device_vendor": "KasperskyLab",
    "device_product": "SecurityCenter",
    "device_version": "13.2.0.1511",
    "device_event_class_id": "KLPRCI_TaskState",
    "name": "Completed successfully",
    "agent_severity": "1",
    "extensions": {
        "foo": "foo",
        "bar": "bar",
        "baz": "test"
    }
}
4.1.1 - Options of CEF parsers
The `parse_cef` FilterX function has the following options.
pair_separator
Specifies the character or string that separates the key-value pairs in the extensions. Default value: ` ` (space).
value_separator
Specifies the character that separates the keys from the values in the extensions. Default value: `=`.
4.2 - Comma-separated values
The `parse_csv` FilterX function can separate parts of log messages (that is, the contents of the `${MESSAGE}` macro) along delimiter characters or strings into lists, or into key-value pairs within dictionaries, using the CSV (comma-separated values) parser.
Usage: `parse_csv(<input-string>, columns=json_array, delimiter=string, string_delimiters=json_array, dialect=string, strip_whitespace=boolean, greedy=boolean)`
Only the input parameter is mandatory.
If the `columns` option is set, `parse_csv` returns a dictionary with the column names (as keys) and the parsed values. If the `columns` option isn't set, `parse_csv` returns a list.
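A short sketch of the two return shapes (the input string and column names are illustrative):

```
parts = parse_csv("first,second,third");
# parts is a list: ["first", "second", "third"]

fields = parse_csv("first,second,third", columns=["a", "b", "c"]);
# fields is a dictionary: {"a": "first", "b": "second", "c": "third"}
```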
The following example separates hostnames like `example-1` and `example-2` into two parts.
block filterx p_hostname_segmentation() {
    cols = json_array(["NAME","ID"]);
    HOSTNAME = parse_csv(${HOST}, delimiter="-", columns=cols);
    # HOSTNAME is a JSON object containing parts of the hostname
    # For example, for example-1 it contains:
    # {"NAME":"example","ID":"1"}
    # Set the important elements as name-value pairs so they can be referenced in the destination template
    ${HOSTNAME_NAME} = HOSTNAME.NAME;
    ${HOSTNAME_ID} = HOSTNAME.ID;
};
destination d_file {
    file("/var/log/${HOSTNAME_NAME:-examplehost}/${HOSTNAME_ID}/messages.log");
};
log {
    source(s_local);
    filterx(p_hostname_segmentation());
    destination(d_file);
};
Parse Apache log files
The following parser processes the log of Apache web servers and separates them into different fields. Apache log messages can be formatted like:
"%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %T %v"
Here is a sample message:
192.168.1.1 - - [31/Dec/2007:00:17:10 +0100] "GET /cgi-bin/example.cgi HTTP/1.1" 200 2708 "-" "curl/7.15.5 (i4 86-pc-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8c zlib/1.2.3 libidn/0.6.5" 2 example.mycompany
To parse such logs, the delimiter character is set to a single whitespace (`delimiter=" "`). Excess leading and trailing whitespace characters are stripped.
block filterx p_apache() {
    ${APACHE} = json();
    cols = [
        "CLIENT_IP", "IDENT_NAME", "USER_NAME",
        "TIMESTAMP", "REQUEST_URL", "REQUEST_STATUS",
        "CONTENT_LENGTH", "REFERER", "USER_AGENT",
        "PROCESS_TIME", "SERVER_NAME"
    ];
    ${APACHE} = parse_csv(${MESSAGE}, columns=cols, delimiter=" ", strip_whitespace=true, dialect="escape-double-char");
    # Set the important elements as name-value pairs so they can be referenced in the destination template
    ${APACHE_USER_NAME} = ${APACHE}.USER_NAME;
};
The results can be used, for example, to separate log messages into different files based on the APACHE.USER_NAME field. If the field is empty, the `nouser` string is assigned as the default.
log {
    source(s_local);
    filterx(p_apache());
    destination(d_file);
};
destination d_file {
    file("/var/log/messages-${APACHE_USER_NAME:-nouser}");
};
Segment a part of a message
You can use multiple parsers in a layered manner to split parts of an already parsed message into further segments. The following example splits the timestamp of a parsed Apache log message into separate fields. Note that the scoping of FilterX variables is important:
- If you add the new parser to the FilterX block used in the previous example, every variable is available.
- If you use a separate FilterX block, only global variables and name-value pairs (variables with names starting with the `$` character) are accessible from the block.
block filterx p_apache_timestamp() {
    cols = ["TIMESTAMP.DAY", "TIMESTAMP.MONTH", "TIMESTAMP.YEAR", "TIMESTAMP.HOUR", "TIMESTAMP.MIN", "TIMESTAMP.SEC", "TIMESTAMP.ZONE"];
    ${APACHE}.TIMESTAMP = parse_csv(${APACHE}.TIMESTAMP, columns=cols, delimiter="/: ", dialect="escape-none");
    # Set the important elements as name-value pairs so they can be referenced in the destination template
    ${APACHE_TIMESTAMP_DAY} = ${APACHE}.TIMESTAMP["TIMESTAMP.DAY"];
};
destination d_file {
file("/var/log/messages-${APACHE_USER_NAME:-nouser}/${APACHE_TIMESTAMP_DAY}");
};
log {
source(s_local);
filterx(p_apache());
filterx(p_apache_timestamp());
destination(d_file);
};
4.2.1 - Options of CSV parsers
The parse_csv FilterX function has the following options.
columns
Synopsis: columns=["1st","2nd","3rd"]
Default value: N/A
Description: Specifies the names of the columns, and correspondingly the keys in the resulting object.
- If the columns option is set, parse_csv returns a dictionary with the column names as keys and the parsed values as values.
- If the columns option isn't set, parse_csv returns a list.
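As a minimal sketch of the two return shapes (the input string and variable names here are hypothetical):

```
# With column names: the result is a dictionary
fields = parse_csv("alice,bob,charlie", columns=["A", "B", "C"]);
# fields should be {"A": "alice", "B": "bob", "C": "charlie"}

# Without column names: the result is a list
items = parse_csv("alice,bob,charlie");
# items should be ["alice", "bob", "charlie"]
```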
delimiter
Synopsis: delimiter="<string-with-delimiter-characters>"
Default value: ,
Description: The delimiter option contains the characters that separate the columns in the input string. If you specify multiple characters, every character is treated as a delimiter. Note that the delimiters aren't included in the column values. For example:
- To split the text at every hyphen (-) and colon (:) character, use delimiter="-:".
- To split the columns along the tabulator (tab character), specify delimiter="\t".
- To use strings instead of characters as delimiters, see string_delimiters.
Multiple delimiters
If you use more than one delimiter, note the following points:
- AxoSyslog will split the message at the nearest possible delimiter. The order of the delimiters in the configuration file does not matter.
- You can use both string delimiters and character delimiters in a parser.
- The string delimiters may include characters that are also used as character delimiters.
- If a string delimiter and a character delimiter both match at the same position of the input, AxoSyslog uses the string delimiter.
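For instance, combining two character delimiters from the list above (a sketch with a hypothetical input):

```
# Split at every colon and hyphen; both characters act as delimiters
parts = parse_csv("10:30-45", columns=["HOUR", "MIN", "SEC"], delimiter=":-");
# parts should be {"HOUR": "10", "MIN": "30", "SEC": "45"}
```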
dialect
Synopsis: dialect="<dialect-name>"
Default value: escape-none
Description: Specifies how to handle escaping in the input strings. The following values are available:
- escape-backslash: The parsed message uses the backslash (\) character to escape quote characters.
- escape-backslash-with-sequences: The parsed message uses the backslash (\) character to escape quote characters, and also supports C-style escape sequences, like \n or \r. Available in AxoSyslog version 4.0 and later.
- escape-double-char: The parsed message repeats the quote character when the quote character is used literally. For example, to escape a comma (,), the message contains two commas (,,).
- escape-none: The parsed message does not use any escaping for using the quote character literally.
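For example, if the quoted columns of a message escape quote characters with backslashes (such as "a \"quoted\" part",second), selecting the matching dialect is a one-option change (a sketch):

```
cols = parse_csv(${MESSAGE}, columns=["FIRST", "SECOND"], dialect="escape-backslash");
```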
greedy
Synopsis: greedy=true
Default value: false
Description: If the greedy option is enabled, AxoSyslog adds the remaining part of the message to the last column, ignoring any delimiters that may appear in this part of the message. You can use this option to process messages where the number of columns varies from message to message.
For example, you receive the following comma-separated message: example1, example2, example3, and you segment it with the following parser:
my-parsed-values = parse_csv(${MESSAGE}, columns=["COLUMN1", "COLUMN2", "COLUMN3"], delimiter=",");
The COLUMN1, COLUMN2, and COLUMN3 variables will contain the strings example1, example2, and example3, respectively. If the message looks like example1, example2, example3, some more information, then any text appearing after the third comma (that is, some more information) is not parsed, and thus possibly lost if you use only the parsed columns to reconstruct the message (for example, if you send the columns to different columns of a database table).
Setting greedy=true assigns the remainder of the message to the last column, so the COLUMN1, COLUMN2, and COLUMN3 variables will contain the strings example1, example2, and example3, some more information.
my-parsed-values = parse_csv(${MESSAGE}, columns=["COLUMN1", "COLUMN2", "COLUMN3"], delimiter=",", greedy=true);
strip_whitespace
Synopsis: strip_whitespace=true
Default value: false
Description: Remove leading and trailing whitespace from all columns.
string_delimiters
Synopsis: string_delimiters=["first-string","2nd-string"]
Description: If you have to use a string as a delimiter, list your string delimiters as a JSON array in the string_delimiters=["<delimiter_string1>", "<delimiter_string2>", ...] option.
By default, the parse_csv FilterX function uses the comma as a delimiter. If you want to use only strings as delimiters, you have to disable the default comma delimiter, for example: delimiter="", string_delimiters=["<delimiter_string>"]
Otherwise, AxoSyslog uses the string delimiters in addition to the default character delimiter, so for example, string_delimiters=["=="] is actually equivalent to delimiter=",", string_delimiters=["=="], and not delimiter="", string_delimiters=["=="].
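For example, to split a hypothetical input only at the == strings, disable the default comma delimiter as described above (a sketch):

```
cols = parse_csv(${MESSAGE}, delimiter="", string_delimiters=["=="]);
```

Leaving the delimiter option at its default would also split the input at every comma.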
Multiple delimiters
If you use more than one delimiter, note the following points:
- AxoSyslog will split the message at the nearest possible delimiter. The order of the delimiters in the configuration file does not matter.
- You can use both string delimiters and character delimiters in a parser.
- The string delimiters may include characters that are also used as character delimiters.
- If a string delimiter and a character delimiter both match at the same position of the input, AxoSyslog uses the string delimiter.
4.3 - key=value pairs
The parse_kv FilterX function can split a string consisting of whitespace or comma-separated key=value pairs (for example, Postfix log messages). You can also specify other value separator characters instead of the equal sign, for example, the colon (:) to parse MySQL log messages. The AxoSyslog application automatically trims any leading or trailing whitespace characters from the keys and values, and also parses values that contain unquoted whitespace.
Note
If a log message contains the same key multiple times (for example, key1=value1, key2=value2, key1=value3, key3=value4, key1=value5), then AxoSyslog only stores the last (rightmost) value for the key. Using the previous example, AxoSyslog will store the following pairs: key1=value5, key2=value2, key3=value4.
Warning
By default, the parser discards sections of the input string that are not key=value pairs, even if they appear between key=value pairs that can be parsed. To store such sections, see stray_words_key.
The names of the keys can contain only the following characters: numbers (0-9), letters (a-z,A-Z), underscore (_), dot (.), hyphen (-). Other special characters are not permitted.
Declaration
Usage: parse_kv(<input-string>, value_separator="=", pair_separator=",", stray_words_key="stray_words")
The value_separator must be a single-character string. The pair_separator can be a regular string.
Example
In the following example, the source is a Postfix log message consisting of comma-separated key=value pairs:
Jun 20 12:05:12 mail.example.com <info> postfix/qmgr[35789]: EC2AC1947DA: from=<me@example.com>, size=807, nrcpt=1 (queue active)
filterx {
${PARSED_MESSAGE} = parse_kv(${MESSAGE});
};
You can set the value separator character (the character between the key and the value) to parse, for example, key:value pairs, like MySQL logs:
Mar 7 12:39:25 myhost MysqlClient[20824]: SYSTEM_USER:'oscar', MYSQL_USER:'my_oscar', CONNECTION_ID:23, DB_SERVER:'127.0.0.1', DB:'--', QUERY:'USE test;'
filterx {
${PARSED_MESSAGE} = parse_kv(${MESSAGE}, value_separator=":", pair_separator=",");
};
4.3.1 - Options of key=value parsers
The parse_kv FilterX function has the following options.
pair_separator
Specifies the character or string that separates the key-value pairs from each other. Default value: , (comma).
For example, to parse key1=value1;key2=value2 pairs, use:
${MESSAGE} = parse_kv("key1=value1;key2=value2", pair_separator=";");
stray_words_key
Specifies the key where AxoSyslog stores any stray words that appear before or between the parsed key-value pairs. If multiple stray words appear in a message, AxoSyslog stores them in a list. Default value: N/A
For example, consider the following message:
VSYS=public; Slot=5/1; protocol=17; source-ip=10.116.214.221; source-port=50989; destination-ip=172.16.236.16; destination-port=162;time=2016/02/18 16:00:07; interzone-emtn_s1_vpn-enodeb_om; inbound; policy=370;
This is a list of key-value pairs, where the value separator is = and the pair separator is ;. However, before the last key-value pair (policy=370), there are two stray words: interzone-emtn_s1_vpn-enodeb_om and inbound. If you want to store or process these, specify a key to store them, for example:
${MESSAGE} = "VSYS=public; Slot=5/1; protocol=17; source-ip=10.116.214.221; source-port=50989; destination-ip=172.16.236.16; destination-port=162;time=2016/02/18 16:00:07; interzone-emtn_s1_vpn-enodeb_om; inbound; policy=370;";
${PARSED_MESSAGE} = parse_kv(${MESSAGE}, stray_words_key="stray_words");
The value of ${PARSED_MESSAGE}.stray_words for this message will be: ["interzone-emtn_s1_vpn-enodeb_om", "inbound"]
value_separator
Specifies the character that separates the keys from the values. Default value: = (equals sign).
For example, to parse key:value pairs, use:
${MESSAGE} = parse_kv("key1:value1,key2:value2", value_separator=":");
4.4 - LEEF
Available in AxoSyslog 4.9 and later.
The parse_leef FilterX function parses messages formatted in the Log Event Extended Format (LEEF). Both LEEF versions (1.0 and 2.0) are supported.
Declaration
Usage: parse_leef(<input-string>, value_separator="=", pair_separator="|")
The first argument is the input message. Optionally, you can set the pair_separator and value_separator arguments to override their default values. The value_separator must be a single-character string. The pair_separator can be a regular string.
Example
The following is a LEEF-formatted message including mandatory and custom (extension) fields:
LEEF:1.0|Microsoft|MSExchange|4.0 SP1|15345|src=192.0.2.0 dst=172.50.123.1 sev=5cat=anomaly srcPort=81 dstPort=21 usrName=john.smith
The following FilterX expression parses it and converts it into JSON format:
filterx {
${PARSED_MESSAGE} = json(parse_leef(${MESSAGE}));
};
The content of the JSON object for this message will be:
{
"version":"1.0",
"vendor":"Microsoft",
"product_name":"MSExchange",
"product_version":"4.0 SP1",
"event_id":"15345",
"extensions": {
"src":"192.0.2.0",
"dst":"172.50.123.1",
"sev":"5cat=anomaly",
"srcPort":"81",
"dstPort":"21",
"usrName":"john.smith"
}
}
4.4.1 - Options of LEEF parsers
The parse_leef FilterX function has the following options.
pair_separator
Specifies the character or string that separates the key-value pairs in the extensions. Default value: \t (tab).
LEEF v2 can specify the separator per message. If you omit this option, the separator provided in the LEEF v2 message is used; setting this option overrides it during parsing.
value_separator
Specifies the character that separates the keys from the values in the extensions. Default value: = (equals sign).
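For example, if a LEEF v1 message separates its extension fields with the caret character instead of tabs, you can override the default (a sketch; the separator choice is hypothetical):

```
filterx {
    ${PARSED_MESSAGE} = parse_leef(${MESSAGE}, pair_separator="^");
};
```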
4.5 - Windows Event Log
Available in AxoSyslog 4.9 and later.
The parse_windows_eventlog_xml() FilterX function parses Windows Event Log XMLs. It's a specialized version of the parse_xml() parser.
The parser returns false in the following cases:
- The input isn't valid XML.
- The root element doesn't reference the Windows Event Log schema (<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>). Note that the parser doesn't validate the input data against the schema.
For example, the following converts the input XML into a JSON object:
filterx {
xml = "<xml-input/>";
$MSG = json(parse_windows_eventlog_xml(xml));
};
Given the following input:
<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
<System>
<Provider Name='EventCreate'/>
<EventID Qualifiers='0'>999</EventID>
<Version>0</Version>
<Level>2</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime='2024-01-12T09:30:12.1566754Z'/>
<EventRecordID>934</EventRecordID>
<Correlation/>
<Execution ProcessID='0' ThreadID='0'/>
<Channel>Application</Channel>
<Computer>DESKTOP-2MBFIV7</Computer>
<Security UserID='S-1-5-21-3714454296-2738353472-899133108-1001'/>
</System>
<RenderingInfo Culture='en-US'>
<Message>foobar</Message>
<Level>Error</Level>
<Task></Task>
<Opcode>Info</Opcode>
<Channel></Channel>
<Provider></Provider>
<Keywords>
<Keyword>Classic</Keyword>
</Keywords>
</RenderingInfo>
<EventData>
<Data Name='param1'>foo</Data>
<Data Name='param2'>bar</Data>
</EventData>
</Event>
The parser creates the following JSON object:
{
"Event": {
"@xmlns": "http://schemas.microsoft.com/win/2004/08/events/event",
"System": {
"Provider": {"@Name": "EventCreate"},
"EventID": {"@Qualifiers": "0", "#text": "999"},
"Version": "0",
"Level": "2",
"Task": "0",
"Opcode": "0",
"Keywords": "0x80000000000000",
"TimeCreated": {"@SystemTime": "2024-01-12T09:30:12.1566754Z"},
"EventRecordID": "934",
"Correlation": "",
"Execution": {"@ProcessID": "0", "@ThreadID": "0"},
"Channel": "Application",
"Computer": "DESKTOP-2MBFIV7",
"Security": {"@UserID": "S-1-5-21-3714454296-2738353472-899133108-1001"},
},
"RenderingInfo": {
"@Culture": "en-US",
"Message": "foobar",
"Level": "Error",
"Task": "",
"Opcode": "Info",
"Channel": "",
"Provider": "",
"Keywords": {"Keyword": "Classic"},
},
"EventData": {
"Data": {
"param1": "foo",
"param2": "bar",
},
},
},
}
4.6 - XML
Available in AxoSyslog 4.9 and later.
The parse_xml() FilterX function parses raw XMLs into dictionaries. This is a new implementation, so the limitations and options of the legacy xml-parser() do not apply.
There is no standardized way of converting XML into a dict. AxoSyslog creates the most compact dict possible. This means certain nodes will have different types and structures depending on the input XML element. Note the following points:
- Empty XML elements become empty strings.
  XML: <foo></foo>
  JSON: {"foo": ""}
- Attributes are stored in @attr key-value pairs, similarly to other converters (like the Python xmltodict module).
  XML: <foo bar="123" baz="bad"/>
  JSON: {"foo": {"@bar": "123", "@baz": "bad"}}
- If an XML element has both attributes and a value, we need to store them in a dict, and the value needs a key. We store the text value under the #text key.
  XML: <foo bar="123">baz</foo>
  JSON: {"foo": {"@bar": "123", "#text": "baz"}}
- An XML element can have both a value and inner elements. We use the #text key here, too.
  XML: <foo>bar<baz>123</baz></foo>
  JSON: {"foo": {"#text": "bar", "baz": "123"}}
- An XML element can have multiple values separated by inner elements. In that case, the values are concatenated.
  XML: <foo>bar<a></a>baz</foo>
  JSON: {"foo": {"#text": "barbaz", "a": ""}}
Usage
my_structured_data = parse_xml(raw_xml);
5 - Handle OpenTelemetry log records
AxoSyslog allows you to process, manipulate, and create OpenTelemetry log messages using FilterX. For example, you can:
- route your OpenTelemetry messages to different destinations based on the content of the messages,
- change fields in the message (for example, add missing information, or delete unnecessary data), or
- convert incoming syslog messages to OpenTelemetry log messages.
Route OTEL messages
To route OTEL messages (such as the ones received through the opentelemetry()
source) based on their content, configure the following:
- Map the OpenTelemetry input message to OTEL objects in FilterX, so AxoSyslog handles their type properly. Add the following to your FilterX block:
log {
source {opentelemetry()};
filterx {
# Input mapping
declare log = otel_logrecord(${.otel_raw.log});
declare resource = otel_resource(${.otel_raw.resource});
declare scope = otel_scope(${.otel_raw.scope});
};
destination {
# your opentelemetry destination settings
};
};
- Add FilterX statements that select the messages you need. The following example selects messages sent by the nginx application, received from the host called example-host.
log {
source {opentelemetry()};
filterx {
# Input mapping
declare log = otel_logrecord(${.otel_raw.log});
declare resource = otel_resource(${.otel_raw.resource});
declare scope = otel_scope(${.otel_raw.scope});
# FilterX statements that act as filters
resource.attributes["service.name"] == "nginx";
resource.attributes["host.name"] == "example-host";
};
destination {
# your opentelemetry destination settings
};
};
For details on the common keys in log records, see the otel_logrecord reference.
Modify incoming OTEL
To modify messages received via the OpenTelemetry protocol (OTLP), such as the ones received using the opentelemetry() source, you have to configure the following:
- Map the OpenTelemetry input message to OTEL objects in FilterX, so AxoSyslog handles their type properly. Add the following to your FilterX block:
log {
source {opentelemetry()};
filterx {
# Input mapping
declare log = otel_logrecord(${.otel_raw.log});
declare resource = otel_resource(${.otel_raw.resource});
declare scope = otel_scope(${.otel_raw.scope});
};
destination {
# your opentelemetry destination settings
};
};
- After the mapping, you can access the elements of the different data structures as FilterX dictionaries, for example, the body of the message (log.body), its attributes (log.attributes), or the attributes of the resource (resource.attributes).
The following example does two things:
- It checks if the hostname resource attribute exists, and sets it to the sender IP address if it doesn't.
if (not isset(resource.attributes["host.name"])) {
resource.attributes["host.name"] = ${SOURCEIP};
};
- It checks whether the Timestamp field (which is optional) is set in the log object, and sets it to the date AxoSyslog received the message if it isn't.
if (log.observed_time_unix_nano == 0) {
log.observed_time_unix_nano = ${R_UNIXTIME};
};
When inserted into the configuration, this will look like:
log {
source {opentelemetry()};
filterx {
# Input mapping
declare log = otel_logrecord(${.otel_raw.log});
declare resource = otel_resource(${.otel_raw.resource});
declare scope = otel_scope(${.otel_raw.scope});
# Modifying the message
if (not isset(resource.attributes["host.name"])) {
resource.attributes["host.name"] = ${SOURCEIP};
};
if (log.observed_time_unix_nano == 0) {
log.observed_time_unix_nano = ${R_UNIXTIME};
};
};
destination {
# your opentelemetry destination settings
};
};
For details on mapping values, see the otel_logrecord reference.
- Update the message with the modified objects so that your changes are included in the message sent to the destination:
log {
source {opentelemetry()};
filterx {
# Input mapping
declare log = otel_logrecord(${.otel_raw.log});
declare resource = otel_resource(${.otel_raw.resource});
declare scope = otel_scope(${.otel_raw.scope});
# Modifying the message
if (not isset(resource.attributes["host.name"])) {
resource.attributes["host.name"] = ${SOURCEIP};
};
if (log.observed_time_unix_nano == 0) {
log.observed_time_unix_nano = ${R_UNIXTIME};
};
# Update output
${.otel_raw.log} = log;
${.otel_raw.resource} = resource;
${.otel_raw.scope} = scope;
${.otel_raw.type} = "log";
};
destination {
# your opentelemetry destination settings
};
};
syslog to OTEL
To convert incoming syslog messages to OpenTelemetry log messages and send them to an OpenTelemetry receiver, you have to perform the following high-level steps in your configuration file:
- Receive the incoming syslog messages.
- Initialize the data structures required for OpenTelemetry log messages in a FilterX block.
- Map the key-value pairs and macros of the syslog message to appropriate OpenTelemetry log record fields. There is no universal mapping scheme available; it depends on the source message and the receiver as well. For some examples, see the Example Mappings page in the OpenTelemetry documentation, or check the recommendations and requirements of your receiver. For details on the fields that are available in the AxoSyslog OTEL data structures, see the otel_logrecord reference.
The following example includes a simple mapping for RFC3164-formatted syslog messages. Note that the body of the message is rendered as a string, not as structured data.
log {
source {
# Configure a source to receive your syslog messages
};
filterx {
# Create the empty data structures for OpenTelemetry log records
declare log = otel_logrecord();
declare resource = otel_resource();
declare scope = otel_scope();
# Set the log resource fields and map syslog values
resource.attributes["host.name"] = ${HOST};
resource.attributes["service.name"] = ${PROGRAM};
log.observed_time_unix_nano = ${R_UNIXTIME};
log.body = ${MESSAGE};
log.severity_number = ${LEVEL_NUM};
# Update output
${.otel_raw.log} = log;
${.otel_raw.resource} = resource;
${.otel_raw.scope} = scope;
${.otel_raw.type} = "log";
};
destination {
# your opentelemetry destination settings
};
};
otel_logrecord reference
OpenTelemetry log records can have the following fields. (Based on the official OpenTelemetry proto file.)
attributes
Attributes that describe the event. Attribute keys MUST be unique.
body
The body of the log record. It can be a simple string, or any number of complex nested objects, such as lists and arrays.
flags
Flags as a bit field.
observed_time_unix_nano
The time when the event was observed by the collection system, expressed as nanoseconds elapsed since the UNIX Epoch (January 1, 1970, 00:00:00 UTC).
severity_number
The severity of the message as a numerical value of the severity.
SEVERITY_NUMBER_UNSPECIFIED = 0;
SEVERITY_NUMBER_TRACE = 1;
SEVERITY_NUMBER_TRACE2 = 2;
SEVERITY_NUMBER_TRACE3 = 3;
SEVERITY_NUMBER_TRACE4 = 4;
SEVERITY_NUMBER_DEBUG = 5;
SEVERITY_NUMBER_DEBUG2 = 6;
SEVERITY_NUMBER_DEBUG3 = 7;
SEVERITY_NUMBER_DEBUG4 = 8;
SEVERITY_NUMBER_INFO = 9;
SEVERITY_NUMBER_INFO2 = 10;
SEVERITY_NUMBER_INFO3 = 11;
SEVERITY_NUMBER_INFO4 = 12;
SEVERITY_NUMBER_WARN = 13;
SEVERITY_NUMBER_WARN2 = 14;
SEVERITY_NUMBER_WARN3 = 15;
SEVERITY_NUMBER_WARN4 = 16;
SEVERITY_NUMBER_ERROR = 17;
SEVERITY_NUMBER_ERROR2 = 18;
SEVERITY_NUMBER_ERROR3 = 19;
SEVERITY_NUMBER_ERROR4 = 20;
SEVERITY_NUMBER_FATAL = 21;
SEVERITY_NUMBER_FATAL2 = 22;
SEVERITY_NUMBER_FATAL3 = 23;
SEVERITY_NUMBER_FATAL4 = 24;
severity_text
The severity of the message as a string, one of:
"SEVERITY_NUMBER_TRACE"
"SEVERITY_NUMBER_TRACE2"
"SEVERITY_NUMBER_TRACE3"
"SEVERITY_NUMBER_TRACE4"
"SEVERITY_NUMBER_DEBUG"
"SEVERITY_NUMBER_DEBUG2"
"SEVERITY_NUMBER_DEBUG3"
"SEVERITY_NUMBER_DEBUG4"
"SEVERITY_NUMBER_INFO"
"SEVERITY_NUMBER_INFO2"
"SEVERITY_NUMBER_INFO3"
"SEVERITY_NUMBER_INFO4"
"SEVERITY_NUMBER_WARN"
"SEVERITY_NUMBER_WARN2"
"SEVERITY_NUMBER_WARN3"
"SEVERITY_NUMBER_WARN4"
"SEVERITY_NUMBER_ERROR"
"SEVERITY_NUMBER_ERROR2"
"SEVERITY_NUMBER_ERROR3"
"SEVERITY_NUMBER_ERROR4"
"SEVERITY_NUMBER_FATAL"
"SEVERITY_NUMBER_FATAL2"
"SEVERITY_NUMBER_FATAL3"
"SEVERITY_NUMBER_FATAL4"
span_id
Unique identifier of a span within a trace, an 8-byte array.
time_unix_nano
The time when the event occurred, expressed as nanoseconds elapsed since the UNIX Epoch (January 1, 1970, 00:00:00 UTC). If 0
, the timestamp is missing.
trace_id
Unique identifier of a trace, a 16-byte array.
otel_resource reference
The resource describes the entity that produced the log record. It contains a set of attributes (key-value pairs) that must have unique keys. For example, it can contain the hostname and the name of the cluster.
otel_scope reference
Describes the instrumentation scope that sent the message. It may contain simple key-value pairs (strings or integers), but also arbitrary nested objects, such as lists and arrays. It usually contains a name
and a version
field.
6 - Handle SDATA in RFC5424 log records
Available in AxoSyslog 4.9 and later.
AxoSyslog FilterX has a few functions to handle the structured data (SDATA) part of RFC5424-formatted log messages. These functions allow you to filter messages based on their SDATA fields.
get_sdata()
Extracts the SDATA part of the message into a two-level dictionary, for example:
{"Originator@6876": {"sub": "Vimsvc.ha-eventmgr", "opID": "esxui-13c6-6b16"}}
filterx {
sdata_json = get_sdata();
};
has_sdata()
Returns true if the SDATA field of the current message is not empty:
filterx {
has_sdata();
};
is_sdata_from_enterprise()
Filters messages based on the enterprise ID in the SDATA field. For example:
filterx {
is_sdata_from_enterprise("6876");
};
7 - Metrics
Available in AxoSyslog 4.9 and later.
Updates a labeled metric counter, similarly to the metrics-probe() parser. For details, see Metrics.
You can use update_metric to count the processed messages, and to create labeled metric counters based on the fields of the processed messages.
You can configure the name of the counter to update and the labels to add. The name of the counter is an unnamed, mandatory option. Note that the name is automatically prefixed with the syslogng_ string. For example:
update_metric(
"my_counter_name",
labels={
"host": ${HOST},
"app": ${PROGRAM},
"id": ${SOURCE}
}
);
This results in counters like:
syslogng_my_counter_name{app="example-app", host="localhost", source="s_local_1"} 3
Options
increment
Type: integer or variable
Default: 1
An integer, or an expression that resolves to an integer, that defines the increment of the counter. The following example defines a counter called syslogng_input_event_bytes_total, and increases its value with the size of the incoming message (in bytes).
update_metric(
"input_event_bytes_total",
labels={
"host": ${HOST},
"app": ${PROGRAM},
"id": ${SOURCE}
},
increment=${RAWMSG_SIZE}
);
labels
The labels used to create separate counters, based on the fields of the messages processed by update_metric. Use the following format:
labels={
    "name-of-label1": "value-of-the-label1",
    ... ,
    "name-of-labelx": "value-of-the-labelx"
}
level
Type: integer (0-3)
Default: 0
Sets the stats level of the generated metrics.
Note: Drivers configured with internal(yes) register their metrics on level 3. That way, if you are creating an SCL, you can disable the built-in metrics of the driver and create metrics manually using update_metric.
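For example, a counter registered on stats level 3 (a minimal sketch based on the options above; the counter name is hypothetical):

```
update_metric(
    "my_scl_counter",
    labels={"host": ${HOST}},
    level=3
);
```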
8 - Update filters to FilterX
The following sections show you how to convert your existing filters and rewrite rules to FilterX statements. Note that:
- Many examples in the FilterX documentation were adapted from the existing filter, parser, and rewrite examples to show how you can achieve the same functionality with FilterX.
- Don’t worry if you can’t update something to FilterX. While you can’t use other blocks within a FilterX block, you can use both in a log statement, for example, you can use a FilterX block, then a parser if needed.
- There is no push to use FilterX. You can keep using the traditional blocks if they satisfy your requirements.
Update filters to FilterX
This section shows you how to update your existing filter expressions to filterx.
You can replace most filter functions with a simple value comparison of the appropriate macro, for example:
- facility(user) with ${FACILITY} == "user"
- host("example-host") with ${HOST} == "example-host"
- level(warning) with ${LEVEL} == "warning"
  If you want to check for a range of levels, use numerical comparison with the ${LEVEL_NUM} macro instead. For a list of numerical level values, see LEVEL_NUM.
- message("example") with ${MESSAGE} =~ "example" (see the equal tilde operator for details)
- program(nginx) with ${PROGRAM} == "nginx"
- source(my-source) with ${SOURCE} == "my-source"
You can compare values and use boolean operators similarly to filters.
Since all FilterX statements must match a message to pass the FilterX block, you can often replace complex boolean filter expressions with multiple, simple FilterX statements. For example, consider the following filter statement:
filter { host("example1") and program("nginx"); };
The following is the same FilterX statement:
filterx { ${HOST} == "example1" and ${PROGRAM} == "nginx"; };
which is equivalent to:
filterx {
${HOST} == "example1";
${PROGRAM} == "nginx";
};
Note that some filter functions have no FilterX equivalents yet.
Update rewrite rules
This section shows you how to update your existing rewrite expressions to filterx.
You can replace most rewrite rules with FilterX functions and value assignments.
9 - FilterX operator reference
This page describes the operators you can use in FilterX blocks.
Comparison operators
Comparison operators allow you to compare values of macros, variables, and expressions as numbers (==, <, <=, >=, >, !=) or as strings (eq, lt, le, gt, ge, ne). You can also check for type equality (===) and strict inequality (!==). For details and examples, see Comparing values in FilterX.
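A short sketch contrasting the two comparison modes (the filter values are illustrative):

```
filterx {
    ${LEVEL_NUM} <= 4;       # numeric comparison: warning or more severe
    ${PROGRAM} eq "nginx";   # string comparison
};
```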
Boolean operators
The not, or, and and operators allow you to combine any number of comparisons and expressions. For details and examples, see Boolean operators in FilterX.
Null coalescing operator
The null coalescing operator returns the result of the left operand if it exists and is not null, otherwise it returns the operand on the right.
left-operand ?? right-operand
You can use it to define a default value, or to handle errors in your FilterX statements: if evaluating the left-side operand returns an error, the right-side operand is evaluated instead.
For example, if a key of a JSON object doesn’t exist for every message, you can set it to a default value:
${MESSAGE} = json["BODY"] ?? "Empty message";
Plus operator
The plus operator (+) adds two arguments, if possible. (For example, you can't add two datetime values.)
- You can use it to add two numbers (two integers, or two double values). If you add a double to an integer, the result is a double.
- Adding two strings concatenates the strings. Note that if you want to have spaces between the added elements, you have to add them manually, like in Python, for example:
${MESSAGE} = ${HOST} + " first part of the message," + " second part of the message" + "\n";
- Adding two lists merges the lists. Available in AxoSyslog 4.9 and later.
- Adding two dicts updates the dict with the values of the second operand. For example:
x = {"key1": "value1", "key2": "value1"};
y = {"key3": "value1", "key2": "value2"};
${MESSAGE} = x + y; # ${MESSAGE} value is {"key1": "value1", "key3": "value1", "key2": "value2"};
Available in AxoSyslog 4.9 and later.
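List addition works analogously; a minimal sketch with illustrative values:

```
x = ["one", "two"];
y = ["three"];
${MESSAGE} = x + y; # ${MESSAGE} value is ["one", "two", "three"]
```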
Plus equal operator
The += operator increases the value of a variable with the value on the right. Exactly how the addition happens depends on the type of the variable.
- For numeric types (int and double), the result is the sum of the values. For example:
a = 3;
a += 4;
# a is 7
b = 3.3;
b += 4.1;
# b is 7.4
Adding a double value to an integer changes the integer into a double:
c = 3;
c += 4.1;
# c is 7.1 and becomes a double
- For strings (including string values in an object), it concatenates the strings. For example:
mystring = "axo";
mystring += "flow";
# mystring is axoflow
-
For lists, it appends the new values to the list. For example:
mylist = json_array(["one", "two"]);
mylist += ["let's", "go"];
# mylist is ["one", "two", "let's", "go"]
-
For datetime variables, it increments the time. Note that you can add only integer and double values to a datetime, and:
-
When adding an integer, it must be the number of microseconds you want to add. For example:
d = strptime("2000-01-01T00:00:00Z", "%Y-%m-%dT%H:%M:%S%z");
d += 3600000000; # 1 hour in microseconds
# d is "2000-01-01T01:00:00.000+00:00"
-
When adding a double, the integer part must be the number of seconds you want to add. For example:
d = strptime("2000-01-01T00:00:00Z", "%Y-%m-%dT%H:%M:%S%z");
d += 3600.000; # 3600 seconds, 1 hour
# d is "2000-01-01T01:00:00.000+00:00"
Regexp match (equal tilde)
To check if a value contains a string or matches a regular expression, use the =~ operator. For example, the following statement is true if the ${MESSAGE} contains the word error:
${MESSAGE} =~ "error";
Use the !~ operator to check if a literal string or variable doesn’t contain an expression. For example, the following statement is true if the ${MESSAGE} doesn’t contain the word error:
${MESSAGE} !~ "error";
Note
- If you want to process the matches of a search, use the regexp_search FilterX function.
- If you want to rewrite or modify the matches of a search, use the regexp_subst FilterX function.
Note the following points:
- Regular expressions are case sensitive by default. For case insensitive matches, add (?i) to the beginning of your pattern.
- You can use regexp constants (slash-enclosed regexps) within FilterX blocks to simplify escaping special characters, for example, /^beginning and end$/.
- FilterX regular expressions are interpreted in “leave the backslash alone mode”, meaning that a backslash in a string before a character that doesn’t need to be escaped is interpreted as a literal backslash character. For example, string\more-string is equivalent to string\\more-string.
Ternary conditional operator
The ternary conditional operator evaluates an expression, then returns the first argument if the expression is truthy, or the second argument if it’s falsy.
Syntax:
<expression> ? <return-if-true> : <return-if-false>
For example, the following expression checks the value of the ${LEVEL_NUM} macro and returns low if it’s lower than 5, or high otherwise.
(${LEVEL_NUM} < 5 ) ? "low" : "high";
You can also use it to check if a value is set, and set it to a default value if it isn’t, but for this use case we recommend using the Null coalescing operator:
${HOST} = isset(${HOST}) ? ${HOST} : "default-hostname"
10 - FilterX function reference
FilterX is an experimental feature currently under development. Feedback is most welcome on Discord and GitHub.
Available in AxoSyslog 4.8.1 and later.
This page describes the functions you can use in FilterX blocks.
Functions have arguments that can be either mandatory or optional.
- Mandatory arguments are always positional, so you need to pass them in the correct order. You cannot set them in the arg=value format.
- Optional arguments are always named, like arg=value. You can pass optional arguments in any order.
cache_json_file
Load the contents of an external JSON file in an efficient manner. You can use this function to look up contextual information. (Basically, this is a FilterX-specific implementation of the add-contextual-data() functionality.)
Usage: cache_json_file("/path/to/file.json")
For example, if your context-info-db.json
file contains the following:
{
"nginx": "web",
"httpd": "web",
"apache": "web",
"mysql": "db",
"postgresql": "db"
}
Then the following FilterX expression selects only “web” traffic:
filterx {
declare known_apps = cache_json_file("/context-info-db.json");
${app} = known_apps[${PROGRAM}] ?? "unknown";
${app} == "web"; # drop everything that's not a web server log
}
Note
AxoSyslog reloads the contents of the JSON file only when the AxoSyslog configuration is reloaded.
datetime
Cast a value into a datetime variable.
Usage: datetime(<string or expression to cast as datetime>)
For example:
date = datetime("1701350398.123000+01:00");
Usually, you use the strptime FilterX function to create datetime values. Alternatively, you can cast an integer, double, string, or isodate variable into datetime with the datetime() FilterX function. Note that:
- When casting from an integer, the integer is the number of microseconds elapsed since the UNIX epoch (00:00:00 UTC on 1 January 1970).
- When casting from a double, the double is the number of seconds elapsed since the UNIX epoch (00:00:00 UTC on 1 January 1970). (The part before the decimal point is the seconds, the part after it is the microseconds.)
- When casting from a string, the string (for example, 1701350398.123000+01:00) is interpreted as: <the number of seconds elapsed since the UNIX epoch>.<microseconds>+<timezone relative to UTC (GMT +00:00)>
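To illustrate the points above, the following sketch casts the same instant from the different input types (the variable names are illustrative):

```
# From an integer: microseconds elapsed since the UNIX epoch
d1 = datetime(1701350398123000);
# From a double: seconds (integer part) and microseconds (fractional part)
d2 = datetime(1701350398.123);
# From a string: <seconds-since-epoch>.<microseconds>+<timezone>
d3 = datetime("1701350398.123000+01:00");
```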
endswith
Available in AxoSyslog 4.9 and later.
Returns true if the input string ends with the specified substring. By default, matches are case sensitive. Usage:
endswith(input-string, substring);
endswith(input-string, [substring_1, substring_2], ignorecase=true);
For details, see String search in FilterX.
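For example, a minimal sketch (the substrings are illustrative):

```
# True if the message ends with the string "failed"
endswith(${MESSAGE}, "failed");
# True if the message ends with either substring, ignoring case
endswith(${MESSAGE}, ["failed", "denied"], ignorecase=true);
```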
flatten
Flattens the nested elements of an object using the specified separator, similarly to the format-flat-json()
template function. For example, you can use it to flatten nested JSON objects in the output if the receiving application cannot handle nested JSON objects.
Usage: flatten(dict_or_list, separator=".")
You can use multi-character separators, for example, =>
. If you omit the separator, the default dot (.
) separator is used.
sample-dict = json({"a": {"b": {"c": "1"}}});
${MESSAGE} = flatten(sample-dict);
The value of ${MESSAGE}
will be: {"a.b.c": "1"}
format_csv
Formats a dictionary or a list into a comma-separated string.
Usage: format_csv(<input-list-or-dict>, columns=<json-list>, delimiter=<delimiter-character>, default_value=<string>)
Only the input is mandatory; the other arguments are optional. Note that the delimiter must be a single character.
By default, the delimiter is the comma (delimiter=","
), the columns
and default_value
are empty.
If the columns
option is set, AxoSyslog checks that the number of fields or entries in the input data matches the number of columns. If there are fewer items, it adds the default_value
to the missing entries.
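For example, the following sketch formats a three-element list with the default comma delimiter (the variable names are illustrative):

```
mylist = json_array(["first", "second", "third"]);
# With the default delimiter, ${MESSAGE} becomes: first,second,third
${MESSAGE} = format_csv(mylist);
```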
format_kv
Formats a dictionary into a string containing key=value pairs.
Usage: format_kv(kvs_dict, value_separator="<separator-character>", pair_separator="<separator-string>")
By default, format_kv
uses =
to separate values, and ,
(comma and space) to separate the pairs:
filterx {
${MESSAGE} = format_kv(<input-dictionary>);
};
The value_separator
option must be a single character, the pair_separator
can be a string. For example, to use the colon (:) as the value separator and the semicolon (;) as the pair separator, use:
format_kv(<input-dictionary>, value_separator=":", pair_separator=";")
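For example, with the default separators a two-key dictionary (the key names are illustrative) is formatted like this:

```
mydict = json({"client": "192.0.2.1", "action": "deny"});
# With the default separators, the result is: client=192.0.2.1, action=deny
${MESSAGE} = format_kv(mydict);
```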
format_json
Formats any value into a raw JSON string.
Usage: format_json($data)
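For example, a minimal sketch that serializes a dictionary into a raw JSON string:

```
mydict = json({"key": "value"});
# ${MESSAGE} becomes the raw JSON string {"key":"value"}
${MESSAGE} = format_json(mydict);
```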
get_sdata
See Handle SDATA in RFC5424 log records.
has_sdata
See Handle SDATA in RFC5424 log records.
includes
Available in AxoSyslog 4.9 and later.
Returns true if the input string contains the specified substring. By default, matches are case sensitive. Usage:
includes(input-string, substring);
includes(input-string, [substring_1, substring_2], ignorecase=true);
For details, see String search in FilterX.
isodate
Parses a string as a date in ISODATE format: %Y-%m-%dT%H:%M:%S%z
is_sdata_from_enterprise()
See Handle SDATA in RFC5424 log records.
isset
Returns true if the argument exists and its value is not empty or null.
Usage: isset(<name of a variable, macro, or name-value pair>)
istype
Returns true if the object (first argument) has the specified type (second argument). The type must be a quoted string. (See List of type names.)
Usage: istype(object, "type_str")
For example:
obj = json();
istype(obj, "json_object"); # True
istype(${PID}, "string");
istype(my-local-json-object.mylist, "json_array");
If the object doesn’t exist, istype()
returns with an error, causing the FilterX statement to become false, and logs an error message to the internal()
source of AxoSyslog.
json
Cast a value into a JSON object.
Usage: json(<string or expression to cast to json>)
For example:
js_dict = json({"key": "value"});
Starting with version 4.9, you can use {} without the json() keyword as well. For example, the following creates an empty JSON object:
js_dict = {};
json_array
Cast a value into a JSON array.
Usage: json_array(<string or expression to cast to json array>)
For example:
js_list = json_array(["first_element", "second_element", "third_element"]);
Starting with version 4.9, you can use [] without the json_array() keyword as well. For example, the following creates an empty JSON list:
js_list = [];
len
Returns the number of items in an object as an integer: the length (number of characters) of a string, the number of elements in a list, or the number of keys in an object.
Usage: len(object)
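For example (a sketch, the variable names are illustrative):

```
mystring = "axoflow";
mylist = json_array(["one", "two"]);
len(mystring);  # 7, the number of characters
len(mylist);    # 2, the number of elements
```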
lower
Converts all characters of a string to lowercase.
Usage: lower(string)
otel_array
Creates a dictionary represented as an OpenTelemetry array.
otel_kvlist
Creates a dictionary represented as an OpenTelemetry key-value list.
otel_logrecord
Creates an OpenTelemetry log record object.
otel_resource
Creates an OpenTelemetry resource object.
otel_scope
Creates an OpenTelemetry scope object.
parse_csv
Split a comma-separated or similar string.
Usage: parse_csv(msg_str [columns=json_array, delimiter=string, string_delimiters=json_array, dialect=string, strip_whitespace=boolean, greedy=boolean])
For details, see Comma-separated values.
parse_kv
Split a string consisting of whitespace or comma-separated key=value
pairs (for example, WELF-formatted messages).
Usage: parse_kv(msg, value_separator="=", pair_separator=", ", stray_words_key="stray_words")
The value_separator
must be a single character. The pair_separator
can consist of multiple characters.
For details, see key=value pairs.
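For example, the following sketch parses a WELF-style string with the default separators (the key names are illustrative):

```
# With the default separators, parse_kv() returns a dictionary like:
# {"id": "firewall", "action": "deny"}
${MESSAGE} = parse_kv("id=firewall, action=deny");
```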
parse_leef
Parse a LEEF-formatted string.
Usage: parse_leef(msg)
For details, see LEEF.
parse_xml
Parse an XML object into a JSON object.
Usage: parse_xml(msg)
For details, see /axosyslog-core-docs/filterx/filterx-parsing/xml/
parse_windows_eventlog_xml
Parses a Windows Event Log XML object into a JSON object.
Usage: parse_windows_eventlog_xml(msg)
For details, see /axosyslog-core-docs/filterx/filterx-parsing/xml/
regexp_search
Searches a string and returns the matches of a regular expression as a list or a dictionary. If there are no matches, the list or dictionary is empty.
Usage: regexp_search("<string-to-search>", <regular-expression>)
For example:
# ${MESSAGE} = "ERROR: Sample error message string"
my-variable = regexp_search(${MESSAGE}, "ERROR");
You can also use unnamed match groups (()) and named match groups ((?<first>ERROR)(?<second>message)).
Note the following points:
- Regular expressions are case sensitive by default. For case insensitive matches, add (?i) to the beginning of your pattern.
- You can use regexp constants (slash-enclosed regexps) within FilterX blocks to simplify escaping special characters, for example, /^beginning and end$/.
- FilterX regular expressions are interpreted in “leave the backslash alone mode”, meaning that a backslash in a string before a character that doesn’t need to be escaped is interpreted as a literal backslash character. For example, string\more-string is equivalent to string\\more-string.
Unnamed match groups
${MY-LIST} = json(); # Creates an empty JSON object
${MY-LIST}.unnamed = regexp_search("first-word second-part third", /(first-word) (second-part) (third)/);
${MY-LIST}.unnamed is a list containing: ["first-word second-part third", "first-word", "second-part", "third"]
Named match groups
${MY-LIST} = json(); # Creates an empty JSON object
${MY-LIST}.named = regexp_search("first-word second-part third", /(?<one>first-word) (?<two>second-part) (?<three>third)/);
${MY-LIST}.named is a dictionary with the names of the match groups as keys, and the corresponding matches as values: {"0": "first-word second-part third", "one": "first-word", "two": "second-part", "three": "third"}
Mixed match groups
If you use mixed (some named, some unnamed) groups in your regular expression, the output is a dictionary, where AxoSyslog automatically assigns a key to the unnamed groups. For example:
${MY-LIST} = json(); # Creates an empty JSON object
${MY-LIST}.mixed = regexp_search("first-word second-part third", /(?<one>first-word) (second-part) (?<three>third)/);
${MY-LIST}.mixed is: {"0": "first-word second-part third", "one": "first-word", "2": "second-part", "three": "third"}
regexp_subst
Rewrites a string using regular expressions. This function implements the subst rewrite rule functionality.
Usage: regexp_subst(<input-string>, <pattern-to-find>, <replacement>, flags)
The following example replaces the first occurrence of the string IP in the text of the message with the string IP-Address.
regexp_subst(${MESSAGE}, "IP", "IP-Address");
To replace every occurrence, use the global=true
flag:
regexp_subst(${MESSAGE}, "IP", "IP-Address", global=true);
Note the following points:
- Regular expressions are case sensitive by default. For case insensitive matches, add (?i) to the beginning of your pattern.
- You can use regexp constants (slash-enclosed regexps) within FilterX blocks to simplify escaping special characters, for example, /^beginning and end$/.
- FilterX regular expressions are interpreted in “leave the backslash alone mode”, meaning that a backslash in a string before a character that doesn’t need to be escaped is interpreted as a literal backslash character. For example, string\more-string is equivalent to string\\more-string.
Options
You can use the following flags with the regexp_subst
function:
-
global=true
:
Replace every match of the regular expression, not only the first one.
-
ignorecase=true
:
Do case insensitive match.
-
jit=true
:
Enable just-in-time compilation function for PCRE regular expressions.
-
newline=true
:
When configured, it changes the newline definition used in PCRE regular expressions to accept either of the following:
- a single carriage-return
- linefeed
- the sequence carriage-return and linefeed (
\\r
, \\n
and \\r\\n
, respectively)
This newline definition is used when the circumflex and dollar patterns (^
and $
) are matched against an input. By default, PCRE interprets the linefeed character as indicating the end of a line. It does not affect the \\r
, \\n
or \\R
characters used in patterns.
-
utf8=true
:
Use Unicode support for UTF-8 matches: UTF-8 character sequences are handled as single characters.
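The flags can be combined in a single call. For example, the following sketch replaces every occurrence of the pattern, ignoring case:

```
# Replace all matches of "ip" (case insensitively) with "IP-Address"
${MESSAGE} = regexp_subst(${MESSAGE}, "ip", "IP-Address", global=true, ignorecase=true);
```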
startswith
Available in AxoSyslog 4.9 and later.
Returns true if the input string begins with the specified substring. By default, matches are case sensitive. Usage:
startswith(input-string, substring);
startswith(input-string, [substring_1, substring_2], ignorecase=true);
For details, see String search in FilterX.
string
Cast a value into a string. Note that currently AxoSyslog evaluates strings and executes template functions and template expressions within the strings. In the future, template evaluation will be moved to a separate FilterX function.
Usage: string(<string or expression to cast>)
For example:
myvariable = string(${LEVEL_NUM});
Sometimes you have to explicitly cast values to strings, for example, when you want to concatenate them into a message using the +
operator.
strptime
Creates a datetime object from a string, similarly to the date-parser() parser. The first argument is the string containing the date. The second argument is a format string that specifies how to parse the date string. Optionally, you can specify additional format strings that are applied in order if the previous one doesn’t match the date string.
Usage: strptime(time_str, format_str_1, ..., format_str_N)
For example:
${MESSAGE} = strptime("2024-04-10T08:09:10Z", "%Y-%m-%dT%H:%M:%S%z");
Note
If none of the format strings match,
strptime
returns the null value and logs an error message to the
internal()
source of AxoSyslog. If you want the FilterX block to explicitly return false in such cases, use the
isset
FilterX function on the result of
strptime
.
You can use the following format codes in the format string:
%% PERCENT
%a day of the week, abbreviated
%A day of the week
%b month abbr
%B month
%c MM/DD/YY HH:MM:SS
%C ctime format: Sat Nov 19 21:05:57 1994
%d numeric day of the month, with leading zeros (eg 01..31)
%e like %d, but a leading zero is replaced by a space (eg 1..31)
%f microseconds, leading 0's, extra digits are silently discarded
%D MM/DD/YY
%G GPS week number (weeks since January 6, 1980)
%h month, abbreviated
%H hour, 24 hour clock, leading 0's
%I hour, 12 hour clock, leading 0's
%j day of the year
%k hour
%l hour, 12 hour clock
%L month number, starting with 1
%m month number, starting with 01
%M minute, leading 0's
%n NEWLINE
%o ornate day of month -- "1st", "2nd", "25th", etc.
%p AM or PM
%P am or pm (Yes %p and %P are backwards :)
%q Quarter number, starting with 1
%r time format: 09:05:57 PM
%R time format: 21:05
%s seconds since the Epoch, UTC
%S seconds, leading 0's
%t TAB
%T time format: 21:05:57
%U week number, Sunday as first day of week
%w day of the week, numerically, Sunday == 0
%W week number, Monday as first day of week
%x date format: 11/19/94
%X time format: 21:05:57
%y year (2 digits)
%Y year (4 digits)
%Z timezone in ascii format (for example, PST), or in format -/+0000
%z timezone in ascii format (for example, PST), or in format -/+0000 (Required element)
Warning
When using the %z
and %Z
format codes, consider that while %z
strictly expects a specified timezone, and triggers a warning if the timezone is missing, %Z
does not trigger a warning if the timezone is not specified.
For further information about the %z and %Z format codes, see the ‘DESCRIPTION’ section of the strptime(3) NetBSD manual page.
For example, for the date 01/Jan/2016:13:05:05 PST
use the following format string: "%d/%b/%Y:%H:%M:%S %Z"
The isodate
FilterX function is a specialized variant of strptime
, that accepts only a fixed format.
unset
Deletes a variable, a name-value pair, or a key in a complex object (like JSON), for example: unset(${<name-value-pair-to-unset>});
You can also list multiple values to delete: unset(${<first-name-value-pair-to-unset>}, ${<second-name-value-pair-to-unset>});
See also Delete values.
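For example, the following sketch deletes one key of a JSON object (the variable and key names are illustrative):

```
js_dict = json({"key1": "value1", "key2": "value2"});
# Remove key1; js_dict is left as {"key2": "value2"}
unset(js_dict.key1);
```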
unset_empties
Deletes (unsets) the empty fields of an object, for example, a JSON object or list. By default, the object is processed recursively, so the empty values are deleted from inner dicts and lists as well. If you set the replacement option, you can also use this function to replace fields of the object with custom values.
Usage: unset_empties(object, options)
The unset_empties()
function has the following options:
ignorecase
: Set to false
to perform case-sensitive matching. Default value: true
. Available in AxoSyslog 4.9 and later.
recursive
: Enables recursive processing of nested dictionaries. Default value: true
replacement
: Replace the target elements with the value of replacement
instead of removing them. Available in AxoSyslog 4.9 and later.
targets
: A list of elements to remove or replace. Default value: ["", null, [], {}]
. Available in AxoSyslog 4.9 and later.
For example, to remove the fields with - and N/A values, you can use:
unset_empties(input_object, targets=["-", "N/A"], ignorecase=false);
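Similarly, with the replacement option you can substitute the target values instead of removing them; a hypothetical sketch:

```
# Replace empty values and "N/A" fields with the string "unknown"
# instead of deleting them
unset_empties(input_object, targets=["", "N/A"], replacement="unknown");
```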
update_metric
Updates a labeled metric counter, similarly to the metrics-probe()
parser. For details, see Metrics.
upper
Converts all characters of a string to uppercase.
Usage: upper(string)
vars
Returns the variables (including pipeline variables and name-value pairs) defined in the FilterX block as a JSON object.
For example:
filterx {
${logmsg_variable} = "foo";
local_variable = "bar";
declare pipeline_level_variable = "baz";
${MESSAGE} = vars();
};
The value of ${MESSAGE}
will be: {"logmsg_variable":"foo","pipeline_level_variable":"baz"}
11 - Reuse FilterX blocks
To use a FilterX block in multiple log paths, you have to define it as a separate block:
block filterx <identifier>() {
<filterx-statement-1>;
<filterx-statement-2>;
...
};
Then use it in a log path:
log {
source(s1);
filterx(<identifier>);
destination(d1);
};
For example, the following FilterX statement selects the messages that contain the word deny
and come from the host example
.
block filterx demo_filterx() {
${HOST} == "example";
${MESSAGE} =~ "deny";
};
log {
source(s1);
filterx(demo_filterx);
destination(d1);
};