Kubernetes Monitoring Operator configuration options
The "Kubernetes Monitoring Operator" configuration can be customized.
The following table lists the possible options for the AgentConfiguration file:
| Component | Option | Description |
| --- | --- | --- |
| agent | | Configuration options common to all components that the operator can install. These can be considered "global" options. |
| | dockerRepo | A dockerRepo override to pull images from a customer's private Docker repository instead of the Data Infrastructure Insights Docker repository. Default is the Data Infrastructure Insights Docker repository. |
| | dockerImagePullSecret | Optional: a secret for the customer's private repository. |
| | clusterName | Free-text field that uniquely identifies a cluster across all of a customer's clusters. This should be unique across a Data Infrastructure Insights tenant. Default is what the customer enters in the UI for the "Cluster Name" field. |
| | proxy (fields: server, port, username, password, noProxy, isTelegrafProxyEnabled, isAuProxyEnabled, isFluentbitProxyEnabled, isCollectorProxyEnabled) | Optional proxy settings. This is usually the customer's corporate proxy. |
| telegraf | | Configuration options that can customize the operator's telegraf installation. |
| | collectionInterval | Metrics collection interval, in seconds (Max=60s). |
| | dsCpuLimit | CPU limit for telegraf ds. |
| | dsMemLimit | Memory limit for telegraf ds. |
| | dsCpuRequest | CPU request for telegraf ds. |
| | dsMemRequest | Memory request for telegraf ds. |
| | rsCpuLimit | CPU limit for telegraf rs. |
| | rsMemLimit | Memory limit for telegraf rs. |
| | rsCpuRequest | CPU request for telegraf rs. |
| | rsMemRequest | Memory request for telegraf rs. |
| | runPrivileged | Run the telegraf DaemonSet's telegraf-mountstats-poller container in privileged mode. Set this option to true if SELinux is enabled on your Kubernetes nodes. |
| | runDsPrivileged | Set runDsPrivileged to true to run the telegraf DaemonSet's telegraf container in privileged mode. |
| | batchSize | Maximum number of records per output that telegraf writes in one batch (metric_batch_size). |
| | bufferLimit | Maximum number of records per output that telegraf caches pending a successful write (metric_buffer_limit). |
| | roundInterval | Collect metrics on multiples of the interval (round_interval). |
| | collectionJitter | Each plugin waits a random amount of time between the scheduled collection time and that time + collection_jitter before collecting inputs (collection_jitter). |
| | precision | Collected metrics are rounded to the precision specified; when set to "0s", precision is set by the units specified by interval (precision). |
| | flushInterval | Time telegraf waits between writing outputs (flush_interval). Max=collectionInterval. |
| | flushJitter | Each output waits a random amount of time between the scheduled write time and that time + flush_jitter before writing outputs (flush_jitter). |
| | outputTimeout | Timeout for writing to outputs (timeout). |
| | dsTolerations | telegraf-ds additional tolerations. |
| | rsTolerations | telegraf-rs additional tolerations. |
| | skipProcessorsAfterAggregators | Skip the second run of processors after aggregators. |
| | unprotected | See this "Telegraf known issue". Setting unprotected instructs the Kubernetes Monitoring Operator to run Telegraf without attempting to reserve locked memory pages (see the unprotected setting in the sample AgentConfiguration below). |
| kube-state-metrics | | Configuration options that can customize the operator's kube-state-metrics installation. |
| | cpuLimit | CPU limit for the kube-state-metrics deployment. |
| | memLimit | Memory limit for the kube-state-metrics deployment. |
| | cpuRequest | CPU request for the kube-state-metrics deployment. |
| | memRequest | Memory request for the kube-state-metrics deployment. |
| | resources | Comma-separated list of resources to capture. Example: cronjobs,daemonsets,deployments,ingresses,jobs,namespaces,nodes,persistentvolumeclaims,persistentvolumes,pods,replicasets,resourcequotas,services,statefulsets |
| | tolerations | kube-state-metrics additional tolerations. |
| | labels | Comma-separated list of labels that kube-state-metrics should capture. Example: cronjobs=[*],daemonsets=[*],deployments=[*],ingresses=[*],jobs=[*],namespaces=[*],nodes=[*],persistentvolumeclaims=[*],persistentvolumes=[*],pods=[*],replicasets=[*],resourcequotas=[*],services=[*],statefulsets=[*] |
| logs | | Configuration options that can customize the operator's log collection and installation. |
| | readFromHead | true/false: whether Fluent Bit should read the log from the head rather than the tail. |
| | timeout | Timeout, in seconds. |
| | dnsMode | TCP/UDP: protocol used for DNS. |
| | fluent-bit-tolerations | fluent-bit-ds additional tolerations. |
| | event-exporter-tolerations | event-exporter additional tolerations. |
| | event-exporter-maxEventAgeSeconds | event-exporter maximum event age. See https://github.com/jkroepke/resmoio-kubernetes-event-exporter |
| workload-map | | Configuration options that can customize the operator's workload map collection and installation. |
| | cpuLimit | CPU limit for the net-observer ds. |
| | memLimit | Memory limit for the net-observer ds. |
| | cpuRequest | CPU request for the net-observer ds. |
| | memRequest | Memory request for the net-observer ds. |
| | metricAggregationInterval | Metric aggregation interval, in seconds. |
| | bpfPollInterval | BPF poll interval, in seconds. |
| | enableDNSLookup | true/false: enable DNS lookup. |
| | l4-tolerations | net-observer-l4-ds additional tolerations. |
| | runPrivileged | true/false: set runPrivileged to true if SELinux is enabled on your Kubernetes nodes. |
| change-management | | Configuration options for Kubernetes change management and analysis. |
| | cpuLimit | CPU limit for change-observer-watch-rs. |
| | memLimit | Memory limit for change-observer-watch-rs. |
| | cpuRequest | CPU request for change-observer-watch-rs. |
| | memRequest | Memory request for change-observer-watch-rs. |
| | failureDeclarationIntervalMins | Interval in minutes after which a non-successful deployment of a workload is marked as failed. |
| | deployAggrIntervalSeconds | Frequency at which workload deployment in-progress events are sent. |
| | nonWorkloadAggrIntervalSeconds | Frequency at which non-workload deployments are combined and sent. |
| | termsToRedact | A set of regular expressions used against env names and data maps whose values will be redacted. Example: "pwd", "password", "token", "apikey", "api-key", "jwt" |
| | additionalKindsToWatch | A comma-separated list of additional kinds to watch, beyond the default set of kinds watched by the collector. |
| | kindsToIgnoreFromWatch | A comma-separated list of kinds to ignore from the default set of kinds watched by the collector. |
| | logRecordAggrIntervalSeconds | Frequency with which log records are sent to Data Infrastructure Insights from the collector. |
| | watch-tolerations | change-observer-watch-ds additional tolerations. Abbreviated single-line format only. Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}' |
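Several of the options above (dsTolerations, rsTolerations, tolerations, l4-tolerations, watch-tolerations) take additional tolerations in the abbreviated single-line format shown in the example. The following is a minimal sketch of how such a value might look inside the AgentConfiguration spec; fields are trimmed to the essentials (see the full sample below), and the taint keys taint1 and taint2 are placeholders for the taints actually used on your nodes.

```yaml
apiVersion: monitoring.netapp.com/v1alpha1
kind: AgentConfiguration
metadata:
  name: netapp-ci-monitoring-configuration
  namespace: "netapp-monitoring"
spec:
  agent:
    clusterName: "my_cluster"
  telegraf:
    # Abbreviated single-line tolerations format; taint1/taint2 are illustrative placeholders.
    dsTolerations: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
```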
Sample AgentConfiguration file
The following is a sample AgentConfiguration file.
```yaml
apiVersion: monitoring.netapp.com/v1alpha1
kind: AgentConfiguration
metadata:
  name: netapp-ci-monitoring-configuration
  namespace: "netapp-monitoring"
  labels:
    installed-by: nkmo-netapp-monitoring
spec:
  # # You can modify the following fields to configure the operator.
  # # Optional settings are commented out and include default values for reference
  # # To update them, uncomment the line, change the value, and apply the updated AgentConfiguration.
  agent:
    # # [Required Field] A uniquely identifiable user-friendly clustername.
    # # clusterName must be unique across all clusters in your Data Infrastructure Insights environment.
    clusterName: "my_cluster"
    # # Proxy settings. The proxy that the operator should use to send metrics to Data Infrastructure Insights.
    # # Please see documentation here: https://docs.netapp.com/us-en/cloudinsights/task_config_telegraf_agent_k8s.html#configuring-proxy-support
    # proxy:
      # server:
      # port:
      # noproxy:
      # username:
      # password:
      # isTelegrafProxyEnabled:
      # isFluentbitProxyEnabled:
      # isCollectorsProxyEnabled:
    # # [Required Field] By default, the operator uses the CI repository.
    # # To use a private repository, change this field to your repository name.
    # # Please see documentation here: https://docs.netapp.com/us-en/cloudinsights/task_config_telegraf_agent_k8s.html#using-a-custom-or-private-docker-repository
    dockerRepo: 'docker.c01.cloudinsights.netapp.com'
    # # [Required Field] The name of the imagePullSecret for dockerRepo.
    # # If you are using a private repository, change this field from 'netapp-ci-docker' to the name of your secret.
    dockerImagePullSecret: 'netapp-ci-docker'
    # # Allow the operator to automatically rotate its ApiKey before expiration.
    # tokenRotationEnabled: 'true'
    # # Number of days before expiration that the ApiKey should be rotated. This must be less than the total ApiKey duration.
    # tokenRotationThresholdDays: '30'
  telegraf:
    # # Settings to fine-tune metrics data collection. Telegraf config names are included in parenthesis.
    # # See https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#agent
    # # The default time telegraf will wait between inputs for all plugins (interval). Max=60
    # collectionInterval: '60s'
    # # Maximum number of records per output that telegraf will write in one batch (metric_batch_size).
    # batchSize: '10000'
    # # Maximum number of records per output that telegraf will cache pending a successful write (metric_buffer_limit).
    # bufferLimit: '150000'
    # # Collect metrics on multiples of interval (round_interval).
    # roundInterval: 'true'
    # # Each plugin waits a random amount of time between the scheduled collection time and that time + collection_jitter before collecting inputs (collection_jitter).
    # collectionJitter: '0s'
    # # Collected metrics are rounded to the precision specified. When set to "0s" precision will be set by the units specified by interval (precision).
    # precision: '0s'
    # # Time telegraf will wait between writing outputs (flush_interval). Max=collectionInterval
    # flushInterval: '60s'
    # # Each output waits a random amount of time between the scheduled write time and that time + flush_jitter before writing outputs (flush_jitter).
    # flushJitter: '0s'
    # # Timeout for writing to outputs (timeout).
    # outputTimeout: '5s'
    # # telegraf-ds CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # dsCpuLimit: '750m'
    # dsMemLimit: '800Mi'
    # dsCpuRequest: '100m'
    # dsMemRequest: '500Mi'
    # # telegraf-rs CPU/Mem limits and requests.
    # rsCpuLimit: '3'
    # rsMemLimit: '4Gi'
    # rsCpuRequest: '100m'
    # rsMemRequest: '500Mi'
    # # Skip second run of processors after aggregators
    # skipProcessorsAfterAggregators: 'true'
    # # telegraf additional tolerations. Use the following abbreviated single line format only.
    # # Inspect telegraf-rs/-ds to view tolerations which are always present.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # dsTolerations: ''
    # rsTolerations: ''
    # If telegraf warns of insufficient lockable memory, try increasing the limit of lockable memory for Telegraf in the underlying operating system/node. If increasing the limit is not an option, set this to true to instruct Telegraf to not attempt to reserve locked memory pages. While this might pose a security risk as decrypted secrets might be swapped out to disk, it allows for execution in environments where reserving locked memory is not possible.
    # unprotected: 'false'
    # # Run the telegraf DaemonSet's telegraf-mountstats-poller container in privileged mode. Set runPrivileged to true if SELinux is enabled on your Kubernetes nodes.
    # runPrivileged: '{{ .Values.telegraf_installer.kubernetes.privileged_mode }}'
    # # Set runDsPrivileged to true to run the telegraf DaemonSet's telegraf container in privileged mode
    # runDsPrivileged: '{{ .Values.telegraf_installer.kubernetes.ds.privileged_mode }}'
    # # Collect container Block IO metrics.
    # dsBlockIOEnabled: 'true'
    # # Collect NFS IO metrics.
    # dsNfsIOEnabled: 'true'
    # # Collect kubernetes.system_container metrics and objects in the kube-system|cattle-system namespaces for managed kubernetes clusters (EKS, AKS, GKE, managed Rancher). Set this to true if you want collect these metrics.
    # managedK8sSystemMetricCollectionEnabled: 'false'
    # # Collect kubernetes.pod_volume (pod ephemeral storage) metrics. Set this to true if you want to collect these metrics.
    # podVolumeMetricCollectionEnabled: 'false'
    # # Declare Rancher cluster as managed. Set this to true if your Rancher cluster is managed as opposed to on-premise.
    # isManagedRancher: 'false'
    # # If telegraf-rs fails to start due to being unable to find the etcd crt and key, manually specify the appropriate path here.
    # rsHostEtcdCrt: ''
    # rsHostEtcdKey: ''
  # kube-state-metrics:
    # # kube-state-metrics CPU/Mem limits and requests.
    # cpuLimit: '500m'
    # memLimit: '1Gi'
    # cpuRequest: '100m'
    # memRequest: '500Mi'
    # # Comma-separated list of resources to enable.
    # # See resources in https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
    # resources: 'cronjobs,daemonsets,deployments,ingresses,jobs,namespaces,nodes,persistentvolumeclaims,persistentvolumes,pods,replicasets,resourcequotas,services,statefulsets'
    # # Comma-separated list of metrics to enable.
    # # See metric-allowlist in https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
    # metrics: 'kube_cronjob_created,kube_cronjob_status_active,kube_cronjob_labels,kube_daemonset_created,kube_daemonset_status_current_number_scheduled,kube_daemonset_status_desired_number_scheduled,kube_daemonset_status_number_available,kube_daemonset_status_number_misscheduled,kube_daemonset_status_number_ready,kube_daemonset_status_number_unavailable,kube_daemonset_status_observed_generation,kube_daemonset_status_updated_number_scheduled,kube_daemonset_metadata_generation,kube_daemonset_labels,kube_deployment_status_replicas,kube_deployment_status_replicas_available,kube_deployment_status_replicas_unavailable,kube_deployment_status_replicas_updated,kube_deployment_status_observed_generation,kube_deployment_spec_replicas,kube_deployment_spec_paused,kube_deployment_spec_strategy_rollingupdate_max_unavailable,kube_deployment_spec_strategy_rollingupdate_max_surge,kube_deployment_metadata_generation,kube_deployment_labels,kube_deployment_created,kube_job_created,kube_job_owner,kube_job_status_active,kube_job_status_succeeded,kube_job_status_failed,kube_job_labels,kube_job_status_start_time,kube_job_status_completion_time,kube_namespace_created,kube_namespace_labels,kube_namespace_status_phase,kube_node_info,kube_node_labels,kube_node_role,kube_node_spec_unschedulable,kube_node_created,kube_persistentvolume_capacity_bytes,kube_persistentvolume_status_phase,kube_persistentvolume_labels,kube_persistentvolume_info,kube_persistentvolume_claim_ref,kube_persistentvolumeclaim_access_mode,kube_persistentvolumeclaim_info,kube_persistentvolumeclaim_labels,kube_persistentvolumeclaim_resource_requests_storage_bytes,kube_persistentvolumeclaim_status_phase,kube_pod_info,kube_pod_start_time,kube_pod_completion_time,kube_pod_owner,kube_pod_labels,kube_pod_status_phase,kube_pod_status_ready,kube_pod_status_scheduled,kube_pod_container_info,kube_pod_container_status_waiting,kube_pod_container_status_waiting_reason,kube_pod_container_status_running,kube_pod_container_state_started,kube_pod_container_status_terminated,kube_pod_container_status_terminated_reason,kube_pod_container_status_last_terminated_reason,kube_pod_container_status_ready,kube_pod_container_status_restarts_total,kube_pod_overhead_cpu_cores,kube_pod_overhead_memory_bytes,kube_pod_created,kube_pod_deletion_timestamp,kube_pod_init_container_info,kube_pod_init_container_status_waiting,kube_pod_init_container_status_waiting_reason,kube_pod_init_container_status_running,kube_pod_init_container_status_terminated,kube_pod_init_container_status_terminated_reason,kube_pod_init_container_status_last_terminated_reason,kube_pod_init_container_status_ready,kube_pod_init_container_status_restarts_total,kube_pod_status_scheduled_time,kube_pod_status_unschedulable,kube_pod_spec_volumes_persistentvolumeclaims_readonly,kube_pod_container_resource_requests_cpu_cores,kube_pod_container_resource_requests_memory_bytes,kube_pod_container_resource_requests_storage_bytes,kube_pod_container_resource_requests_ephemeral_storage_bytes,kube_pod_container_resource_limits_cpu_cores,kube_pod_container_resource_limits_memory_bytes,kube_pod_container_resource_limits_storage_bytes,kube_pod_container_resource_limits_ephemeral_storage_bytes,kube_pod_init_container_resource_limits_cpu_cores,kube_pod_init_container_resource_limits_memory_bytes,kube_pod_init_container_resource_limits_storage_bytes,kube_pod_init_container_resource_limits_ephemeral_storage_bytes,kube_pod_init_container_resource_requests_cpu_cores,kube_pod_init_container_resource_requests_memory_bytes,kube_pod_init_container_resource_requests_storage_bytes,kube_pod_init_container_resource_requests_ephemeral_storage_bytes,kube_replicaset_status_replicas,kube_replicaset_status_ready_replicas,kube_replicaset_status_observed_generation,kube_replicaset_spec_replicas,kube_replicaset_metadata_generation,kube_replicaset_labels,kube_replicaset_created,kube_replicaset_owner,kube_resourcequota,kube_resourcequota_created,kube_service_info,kube_service_labels,kube_service_created,kube_service_spec_type,kube_statefulset_status_replicas,kube_statefulset_status_replicas_current,kube_statefulset_status_replicas_ready,kube_statefulset_status_replicas_updated,kube_statefulset_status_observed_generation,kube_statefulset_replicas,kube_statefulset_metadata_generation,kube_statefulset_created,kube_statefulset_labels,kube_statefulset_status_current_revision,kube_statefulset_status_update_revision,kube_node_status_capacity,kube_node_status_allocatable,kube_node_status_condition,kube_pod_container_resource_requests,kube_pod_container_resource_limits,kube_pod_init_container_resource_limits,kube_pod_init_container_resource_requests'
    # # Comma-separated list of Kubernetes label keys that will be used in the resources' labels metric.
    # # See metric-labels-allowlist in https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
    # labels: 'cronjobs=[*],daemonsets=[*],deployments=[*],ingresses=[*],jobs=[*],namespaces=[*],nodes=[*],persistentvolumeclaims=[*],persistentvolumes=[*],pods=[*],replicasets=[*],resourcequotas=[*],services=[*],statefulsets=[*]'
    # # kube-state-metrics additional tolerations. Use the following abbreviated single line format only.
    # # No tolerations are applied by default
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # tolerations: ''
    # # kube-state-metrics shards. Increase the number of shards for larger clusters if telegraf RS pod(s) experience collection timeouts
    # shards: '2'
  # # Settings for the Events Log feature.
  # logs:
    # # Set runPrivileged to true if Fluent Bit fails to start, trying to open/create its database.
    # runPrivileged: 'false'
    # # If Fluent Bit should read new files from the head, not tail.
    # # See Read_from_Head in https://docs.fluentbit.io/manual/pipeline/inputs/tail
    # readFromHead: "true"
    # # Network protocol that Fluent Bit should use for DNS: "UDP" or "TCP".
    # dnsMode: "UDP"
    # # DNS resolver that Fluent Bit should use: "LEGACY" or "ASYNC"
    # fluentBitDNSResolver: "LEGACY"
    # # Logs additional tolerations. Use the following abbreviated single line format only.
    # # Inspect fluent-bit-ds to view tolerations which are always present. No tolerations are applied by default for event-exporter.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # fluent-bit-tolerations: ''
    # event-exporter-tolerations: ''
    # # event-exporter CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # event-exporter-cpuLimit: '500m'
    # event-exporter-memLimit: '1Gi'
    # event-exporter-cpuRequest: '50m'
    # event-exporter-memRequest: '100Mi'
    # # event-exporter max event age.
    # # See https://github.com/jkroepke/resmoio-kubernetes-event-exporter
    # event-exporter-maxEventAgeSeconds: '10'
    # # event-exporter client-side throttling
    # # Set kubeBurst to roughly match your events per minute and kubeQPS=kubeBurst/5
    # # See https://github.com/resmoio/kubernetes-event-exporter#troubleshoot-events-discarded-warning
    # event-exporter-kubeQPS: 20
    # event-exporter-kubeBurst: 100
    # # fluent-bit CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # fluent-bit-cpuLimit: '500m'
    # fluent-bit-memLimit: '1Gi'
    # fluent-bit-cpuRequest: '50m'
    # fluent-bit-memRequest: '100Mi'
  # # Settings for the Network Performance and Map feature.
  # workload-map:
    # # netapp-ci-net-observer-l4-ds CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # cpuLimit: '500m'
    # memLimit: '500Mi'
    # cpuRequest: '100m'
    # memRequest: '500Mi'
    # # Metric aggregation interval in seconds. Min=30, Max=120
    # metricAggregationInterval: '60'
    # # Interval for bpf polling. Min=3, Max=15
    # bpfPollInterval: '8'
    # # Enable performing reverse DNS lookups on observed IPs.
    # enableDNSLookup: 'true'
    # # netapp-ci-net-observer-l4-ds additional tolerations. Use the following abbreviated single line format only.
    # # Inspect netapp-ci-net-observer-l4-ds to view tolerations which are always present.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # l4-tolerations: ''
    # # Set runPrivileged to true if SELinux is enabled on your Kubernetes nodes.
    # # Note: In OpenShift environments, this is set to true automatically.
    # runPrivileged: 'false'
  # change-management:
    # # change-observer-watch-rs CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # cpuLimit: '1'
    # memLimit: '1Gi'
    # cpuRequest: '500m'
    # memRequest: '500Mi'
    # # Interval in minutes after which a non-successful deployment of a workload will be marked as failed
    # failureDeclarationIntervalMins: '30'
    # # Frequency at which workload deployment in-progress events are sent
    # deployAggrIntervalSeconds: '300'
    # # Frequency at which non-workload deployments are combined and sent
    # nonWorkloadAggrIntervalSeconds: '15'
    # # A set of regular expressions used in env names and data maps whose value will be redacted
    # termsToRedact: '"pwd", "password", "token", "apikey", "api-key", "api_key", "jwt", "accesskey", "access_key", "access-key", "ca-file", "key-file", "cert", "cafile", "keyfile", "tls", "crt", "salt", ".dockerconfigjson", "auth", "secret"'
    # # A comma separated list of additional kinds to watch from the default set of kinds watched by the collector
    # # Each kind will have to be prefixed by its apigroup
    # # Example: '"authorization.k8s.io.subjectaccessreviews"'
    # additionalKindsToWatch: ''
    # # A comma separated list of additional field paths whose diff is ignored as part of change analytics. This list in addition to the default set of field paths ignored by the collector.
    # # Example: '"metadata.specTime", "data.status"'
    # additionalFieldsDiffToIgnore: ''
    # # A comma separated list of kinds to ignore from watching from the default set of kinds watched by the collector
    # # Each kind will have to be prefixed by its apigroup
    # # Example: '"networking.k8s.io.networkpolicies,batch.jobs", "authorization.k8s.io.subjectaccessreviews"'
    # kindsToIgnoreFromWatch: ''
    # # Frequency with which log records are sent to CI from the collector
    # logRecordAggrIntervalSeconds: '20'
    # # change-observer-watch-ds additional tolerations. Use the following abbreviated single line format only.
    # # Inspect change-observer-watch-ds to view tolerations which are always present.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # watch-tolerations: ''
```