Considerations for using an external syslog server
An external syslog server is a server outside of StorageGRID that you can use to collect system audit information in a single location. Using an external syslog server enables you to reduce network traffic on your Admin Nodes and manage the information more efficiently. For StorageGRID, the outbound syslog message packet format is compliant with RFC 3164.
The types of audit information you can send to the external syslog server include:
- Audit logs containing the audit messages generated during normal system operation
- Security-related events such as logins and escalations to root
- Application logs that might be requested if it is necessary to open a support case to troubleshoot an issue you have encountered
When to use an external syslog server
An external syslog server is especially useful if you have a large grid, use multiple types of S3 applications, or want to retain all audit data. Sending audit information to an external syslog server enables you to:
- Collect and manage audit information such as audit messages, application logs, and security events more efficiently.
- Reduce network traffic on your Admin Nodes because audit information is transferred directly from the various Storage Nodes to the external syslog server, without having to go through an Admin Node.
When logs are sent to an external syslog server, single logs greater than 8,192 bytes are truncated at the end of the message to conform with common limitations in external syslog server implementations. To maximize the options for full data recovery in the event of a failure of the external syslog server, up to 20 GB of local logs of audit records (localaudit.log) are maintained on each node.
How to configure an external syslog server
To learn how to configure an external syslog server, see Configure audit messages and external syslog server.
If you plan to use the TLS or RELP/TLS protocol, you must have the following certificates:
- Server CA certificates: One or more trusted CA certificates for verifying the external syslog server, in PEM encoding. If omitted, the default Grid CA certificate will be used.
- Client certificate: The client certificate for authentication to the external syslog server, in PEM encoding.
- Client private key: Private key for the client certificate, in PEM encoding.
If you use a client certificate, you must also use a client private key. If you provide an encrypted private key, you must also provide its passphrase. There is no significant security benefit to using an encrypted private key because the key and passphrase must be stored; using an unencrypted private key, if available, is recommended for simplicity.
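Before configuring StorageGRID, you can optionally confirm on any workstation that the PEM files load correctly and that an encrypted key's passphrase is right. The following Python sketch is illustrative only; the file names are placeholders for the certificates and key you plan to provide.

```python
import ssl

# Hypothetical file names -- substitute the PEM files you plan to upload.
SERVER_CA = "syslog-server-ca.pem"      # trusted CA certificate(s) for the syslog server
CLIENT_CERT = "syslog-client-cert.pem"  # client certificate
CLIENT_KEY = "syslog-client-key.pem"    # client private key (possibly encrypted)
KEY_PASSPHRASE = None                   # set to the passphrase if the key is encrypted

# Build a TLS client context that trusts the server CA certificate(s).
context = ssl.create_default_context(cafile=SERVER_CA)

# Loading the client certificate and key fails immediately if the key does not
# match the certificate, or if an encrypted key is given the wrong passphrase.
context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY,
                        password=KEY_PASSPHRASE)
print("Client certificate, private key, and passphrase load successfully.")
```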
How to estimate the size of the external syslog server
Normally, your grid is sized to achieve a required throughput, defined in terms of S3 operations per second or bytes per second. For example, you might have a requirement that your grid handle 1,000 S3 operations per second, or 2,000 MB per second, of object ingests and retrievals. You should size your external syslog server according to your grid's data requirements.
This section provides some heuristic formulas that help you estimate the rate and average size of log messages of various types that your external syslog server needs to be capable of handling, expressed in terms of the known or desired performance characteristics of the grid (S3 operations per second).
Use S3 operations per second in estimation formulas
If your grid was sized for a throughput expressed in bytes per second, you must convert this sizing into S3 operations per second to use the estimation formulas. To convert grid throughput, you must first determine your average object size, which you can do using the information in existing audit logs and metrics (if any), or by using your knowledge of the applications that will use StorageGRID. For example, if your grid was sized to achieve a throughput of 2,000 MB/second, and your average object size is 2 MB, then your grid was sized to be able to handle 1,000 S3 operations per second (2,000 MB / 2 MB).
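As a minimal sketch of this conversion, using the example values above (the throughput target and object size are illustrative, not recommendations):

```python
# Convert a throughput-based sizing target into S3 operations per second.
throughput_mb_per_sec = 2000   # grid sizing target in MB per second (example value)
average_object_size_mb = 2     # average object size, from audit logs, metrics, or application knowledge

s3_ops_per_sec = throughput_mb_per_sec / average_object_size_mb
print(f"Grid sizing corresponds to {s3_ops_per_sec:.0f} S3 operations per second")
# -> 1000 S3 operations per second
```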
The formulas for external syslog server sizing in the following sections provide common-case estimates (rather than worst-case estimates). Depending on your configuration and workload, you might see a higher or lower rate of syslog messages or volume of syslog data than the formulas predict. The formulas are meant to be used as guidelines only.
Estimation formulas for audit logs
If you have no information about your S3 workload other than number of S3 operations per second your grid is expected to support, you can estimate the volume of audit logs your external syslog server will need to handle using the following formulas, under the assumption that you leave the Audit Levels set to the default values (all categories set to Normal, except Storage, which is set to Error):
Audit Log Rate = 2 x S3 Operations Rate
Audit Log Average Size = 800 bytes
For example, if your grid is sized for 1,000 S3 operations per second, your external syslog server should be sized to support 2,000 syslog messages per second and should be able to receive (and typically store) audit log data at a rate of 1.6 MB per second.
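A short Python sketch that applies these default-level formulas to the example sizing above (the input value is illustrative):

```python
# Estimate external syslog load from audit logs at default audit levels.
s3_ops_per_sec = 1000                     # example grid sizing

audit_log_rate = 2 * s3_ops_per_sec       # syslog messages per second
audit_log_avg_size = 800                  # bytes per message
audit_bytes_per_sec = audit_log_rate * audit_log_avg_size

print(f"{audit_log_rate} messages/sec, "
      f"{audit_bytes_per_sec / 1e6:.1f} MB/sec of audit log data")
# -> 2000 messages/sec, 1.6 MB/sec
```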
If you know more about your workload, more accurate estimations are possible. For audit logs, the most important additional variables are the percentage of S3 operations that are PUTs (vs. GETs), and the average size, in bytes, of the following S3 fields (the 4-character abbreviations used in the table are audit log field names):
| Code | Field | Description |
| --- | --- | --- |
| SACC | S3 tenant account name (request sender) | The name of the tenant account for the user who sent the request. Empty for anonymous requests. |
| SBAC | S3 tenant account name (bucket owner) | The tenant account name for the bucket owner. Used to identify cross-account or anonymous access. |
| S3BK | S3 bucket | The S3 bucket name. |
| S3KY | S3 key | The S3 key name, not including the bucket name. Operations on buckets don't include this field. |
Let's use P to represent the percentage of S3 operations that are PUTs, where 0 ≤ P ≤ 1 (so for a 100% PUT workload, P = 1, and for a 100% GET workload, P = 0).
Let's use K to represent the average size of the sum of the S3 account names, S3 bucket, and S3 key. Suppose the S3 account name is always my-s3-account (13 bytes), buckets have fixed-length names like /my/application/bucket-12345 (28 bytes), and objects have fixed-length keys like 5733a5d7-f069-41ef-8fbd-13247494c69c (36 bytes). Then the value of K is 90 (13+13+28+36).
If you can determine values for P and K, you can estimate the volume of audit logs your external syslog server will need to handle using the following formulas, under the assumption that you leave the Audit Levels set to the defaults (all categories set to Normal, except Storage, which is set to Error):
Audit Log Rate = ((2 x P) + (1 - P)) x S3 Operations Rate
Audit Log Average Size = (570 + K) bytes
For example, if your grid is sized for 1,000 S3 operations per second, your workload is 50% PUTs, and your S3 account names, bucket names, and object names average 90 bytes, your external syslog server should be sized to support 1,500 syslog messages per second and should be able to receive (and typically store) audit log data at a rate of approximately 1 MB per second.
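The same example worked in Python; P, K, and the operations rate are the example values from this section:

```python
# Workload-aware audit log estimate (default audit levels assumed).
s3_ops_per_sec = 1000   # example grid sizing
p_put = 0.5             # fraction of S3 operations that are PUTs (P)
k_bytes = 90            # average combined size of account names, bucket, and key (K)

audit_log_rate = ((2 * p_put) + (1 - p_put)) * s3_ops_per_sec
audit_log_avg_size = 570 + k_bytes
audit_bytes_per_sec = audit_log_rate * audit_log_avg_size

print(f"{audit_log_rate:.0f} messages/sec, "
      f"{audit_bytes_per_sec / 1e6:.2f} MB/sec of audit log data")
# -> 1500 messages/sec, ~0.99 MB/sec
```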
Estimation formulas for non-default audit levels
The formulas provided for audit logs assume the use of default audit level settings (all categories set to Normal, except Storage, which is set to Error). Detailed formulas for estimating the rate and average size of audit messages for non-default audit level settings aren't available. However, the following table can be used to make a rough estimate of the rate; you can use the average size formula provided for audit logs, but be aware that it is likely to result in an over-estimate because the "extra" audit messages are, on average, smaller than the default audit messages.
| Condition | Formula |
| --- | --- |
| Replication: audit levels all set to Debug or Normal | Audit Log Rate = 8 x S3 Operations Rate |
| Erasure coding: audit levels all set to Debug or Normal | Use the same formula as for default settings |
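For example, a rough estimate for a replicated grid with all audit levels set to Debug or Normal, using the same example sizing and reusing the default 800-byte average size (which, as noted above, is likely an over-estimate):

```python
# Rough estimate for non-default audit levels (replication, all levels Debug or Normal).
s3_ops_per_sec = 1000                       # example grid sizing
audit_log_rate = 8 * s3_ops_per_sec         # syslog messages per second
audit_bytes_per_sec = audit_log_rate * 800  # reuses the default 800-byte average size,
                                            # which is likely an over-estimate (see above)
print(f"{audit_log_rate} messages/sec, {audit_bytes_per_sec / 1e6:.1f} MB/sec")
# -> 8000 messages/sec, 6.4 MB/sec
```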
Estimation formulas for security events
Security events aren't correlated with S3 operations and typically produce a negligible volume of logs and data. For these reasons, no estimation formulas are provided.
Estimation formulas for application logs
If you have no information about your S3 workload other than the number of S3 operations per second your grid is expected to support, you can estimate the volume of application logs your external syslog server will need to handle using the following formulas:
Application Log Rate = 3.3 x S3 Operations Rate
Application Log Average Size = 350 bytes
So, for example, if your grid is sized for 1,000 S3 operations per second, your external syslog server should be sized to support 3,300 application logs per second and be able to receive (and store) application log data at a rate of about 1.2 MB per second.
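The same default application log formulas in Python, with the example sizing as input:

```python
# Estimate external syslog load from application logs (no workload details known).
s3_ops_per_sec = 1000                    # example grid sizing

app_log_rate = 3.3 * s3_ops_per_sec      # syslog messages per second
app_bytes_per_sec = app_log_rate * 350   # 350 bytes average message size

print(f"{app_log_rate:.0f} messages/sec, {app_bytes_per_sec / 1e6:.1f} MB/sec")
# -> 3300 messages/sec, ~1.2 MB/sec
```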
If you know more about your workload, more accurate estimations are possible. For application logs, the most important additional variables are the data protection strategy (replication vs. erasure coding), the percentage of S3 operations that are PUTs (vs. GETs/other), and the average size, in bytes, of the following S3 fields (the 4-character abbreviations used in the table are audit log field names):
| Code | Field | Description |
| --- | --- | --- |
| SACC | S3 tenant account name (request sender) | The name of the tenant account for the user who sent the request. Empty for anonymous requests. |
| SBAC | S3 tenant account name (bucket owner) | The tenant account name for the bucket owner. Used to identify cross-account or anonymous access. |
| S3BK | S3 bucket | The S3 bucket name. |
| S3KY | S3 key | The S3 key name, not including the bucket name. Operations on buckets don't include this field. |
Example sizing estimations
This section explains example cases of how to use the estimation formulas for grids with the following methods of data protection:
- Replication
- Erasure coding
If you use replication for data protection
Let P represent the percentage of S3 operations that are PUTs, where 0 ≤ P ≤ 1 (so for a 100% PUT workload, P = 1, and for a 100% GET workload, P = 0).
Let K represent the average size of the sum of the S3 account names, S3 bucket, and S3 key. Suppose the S3 account name is always my-s3-account (13 bytes), buckets have fixed-length names like /my/application/bucket-12345 (28 bytes), and objects have fixed-length keys like 5733a5d7-f069-41ef-8fbd-13247494c69c (36 bytes). Then K has a value of 90 (13+13+28+36).
If you can determine values for P and K, you can estimate the volume of application logs your external syslog server will need to handle using the following formulas:
Application Log Rate = ((1.1 x P) + (2.5 x (1 - P))) x S3 Operations Rate
Application Log Average Size = (P x (220 + K)) + ((1 - P) x (240 + (0.2 x K))) bytes
So, for example, if your grid is sized for 1,000 S3 operations per second, your workload is 50% PUTs, and your S3 account names, bucket names, and object names average 90 bytes, your external syslog server should be sized to support 1,800 application logs per second and should be able to receive (and typically store) application log data at a rate of approximately 0.5 MB per second.
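The replication example worked in Python, using the same illustrative values for P, K, and the operations rate:

```python
# Application log estimate for a grid that uses replication.
s3_ops_per_sec = 1000   # example grid sizing
p_put = 0.5             # fraction of S3 operations that are PUTs (P)
k_bytes = 90            # average combined size of account names, bucket, and key (K)

app_log_rate = ((1.1 * p_put) + (2.5 * (1 - p_put))) * s3_ops_per_sec
app_log_avg_size = (p_put * (220 + k_bytes)) + ((1 - p_put) * (240 + 0.2 * k_bytes))
app_bytes_per_sec = app_log_rate * app_log_avg_size

print(f"{app_log_rate:.0f} messages/sec, {app_bytes_per_sec / 1e6:.2f} MB/sec")
# -> 1800 messages/sec, ~0.51 MB/sec
```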
If you use erasure coding for data protection
Let P represent the percentage of S3 operations that are PUTs, where 0 ≤ P ≤ 1 (so for a 100% PUT workload, P = 1, and for a 100% GET workload, P = 0).
Let K represent the average size of the sum of the S3 account names, S3 bucket, and S3 key. Suppose the S3 account name is always my-s3-account (13 bytes), buckets have fixed-length names like /my/application/bucket-12345 (28 bytes), and objects have fixed-length keys like 5733a5d7-f069-41ef-8fbd-13247494c69c (36 bytes). Then K has a value of 90 (13+13+28+36).
If you can determine values for P and K, you can estimate the volume of application logs your external syslog server will need to handle using the following formulas:
Application Log Rate = ((3.2 x P) + (1.3 x (1 - P))) x S3 Operations Rate
Application Log Average Size = (P x (240 + (0.4 x K))) + ((1 - P) x (185 + (0.9 x K))) bytes
So, for example, if your grid is sized for 1,000 S3 operations per second, your workload is 50% PUTs, and your S3 account names, bucket names, and object names average 90 bytes, your external syslog server should be sized to support 2,250 application logs per second and should be able to receive (and typically store) application data at a rate of 0.6 MB per second.
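The erasure coding example worked in Python, again with the illustrative values for P, K, and the operations rate:

```python
# Application log estimate for a grid that uses erasure coding.
s3_ops_per_sec = 1000   # example grid sizing
p_put = 0.5             # fraction of S3 operations that are PUTs (P)
k_bytes = 90            # average combined size of account names, bucket, and key (K)

app_log_rate = ((3.2 * p_put) + (1.3 * (1 - p_put))) * s3_ops_per_sec
app_log_avg_size = (p_put * (240 + 0.4 * k_bytes)) + ((1 - p_put) * (185 + 0.9 * k_bytes))
app_bytes_per_sec = app_log_rate * app_log_avg_size

print(f"{app_log_rate:.0f} messages/sec, {app_bytes_per_sec / 1e6:.2f} MB/sec")
# -> 2250 messages/sec, ~0.61 MB/sec
```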