
Kubernetes Monitoring Operator configuration options


O "Operador de monitoramento do Kubernetes" Oferece amplas opções de personalização por meio do arquivo AgentConfiguration. Você pode configurar limites de recursos, intervalos de coleta, configurações de proxy, tolerâncias e configurações específicas de componentes para otimizar o monitoramento do seu ambiente Kubernetes. Use essas opções para personalizar o telegraf, o kube-state-metrics, a coleta de logs, o mapeamento de carga de trabalho, o gerenciamento de alterações e outros componentes de monitoramento.

The table below lists the possible options for the AgentConfiguration file:

Component | Option | Description

agent

Configuration options common to all components that the operator can install. These can be considered "global" options.

dockerRepo

A dockerRepo override to pull images from the customer's private docker repositories instead of the Data Infrastructure Insights docker repository. Default is the Data Infrastructure Insights docker repository.

dockerImagePullSecret

Optional: A secret for the customer's private repository.

clusterName

Free-text field that uniquely identifies a cluster across all of the customer's clusters. It must be unique within each Data Infrastructure Insights tenant. Default is what the customer enters in the UI for the "Cluster Name" field.

proxy

Format:
proxy:
  server:
  port:
  username:
  password:
  noProxy:
  isTelegrafProxyEnabled:
  isAuProxyEnabled:
  isFluentbitProxyEnabled:
  isCollectorProxyEnabled:

Optional for setting a proxy. This is typically the customer's corporate proxy.
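As a sketch, a populated proxy block under the agent section might look like the following; the host, port, and credentials are placeholders, and the key names follow the commented-out proxy block in the sample file at the end of this page:

spec:
  agent:
    proxy:
      server: 'proxy.example.com'        # placeholder corporate proxy host
      port: '3128'                       # placeholder port
      username: ''                       # leave empty for an unauthenticated proxy
      password: ''
      noproxy: ''                        # hosts to bypass, if any
      isTelegrafProxyEnabled: 'true'
      isFluentbitProxyEnabled: 'false'
      isCollectorsProxyEnabled: 'false'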

telegraf

Configuration options that can customize the Operator's telegraf installation.

collectionInterval

Metrics collection interval, in seconds (max = 60s)

dsCpuLimit

CPU limit for telegraf ds

dsMemLimit

Memory limit for telegraf ds

dsCpuRequest

CPU request for telegraf ds

dsMemRequest

Memory request for telegraf ds

rsCpuLimit

CPU limit for telegraf rs

rsMemLimit

Memory limit for telegraf rs

rsCpuRequest

CPU request for telegraf rs

rsMemRequest

Memory request for telegraf rs

runPrivileged

Run the telegraf DaemonSet's telegraf-mountstats-poller container in privileged mode. Set this to true if SELinux is enabled on your Kubernetes nodes.

runDsPrivileged

Set runDsPrivileged to true to run the telegraf DaemonSet's telegraf container in privileged mode.

batchSize

See the "Telegraf configuration documentation"

bufferLimit

See the "Telegraf configuration documentation"

roundInterval

See the "Telegraf configuration documentation"

collectionJitter

See the "Telegraf configuration documentation"

precision

See the "Telegraf configuration documentation"

flushInterval

See the "Telegraf configuration documentation"

flushJitter

See the "Telegraf configuration documentation"

outputTimeout

See the "Telegraf configuration documentation"

dsTolerations

telegraf-ds additional tolerations.

rsTolerations

telegraf-rs additional tolerations.

skipProcessorsAfterAggregators

See the "Telegraf configuration documentation"

unprotected

See this "Telegraf known issue". Setting unprotected will instruct the Kubernetes Monitoring Operator to run Telegraf with the --unprotected flag.

insecureK8sSkipVerify

If telegraf fails to verify the certificate due to missing IP SANs, try enabling skip verify
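As a sketch, a telegraf section overriding a few of the options above might look like this; the values shown are illustrative (the limits match the documented defaults) rather than recommendations:

spec:
  telegraf:
    collectionInterval: '60s'            # metrics collection interval, max 60s
    dsCpuLimit: '750m'                   # telegraf-ds resource limits
    dsMemLimit: '800Mi'
    runDsPrivileged: 'true'              # only needed when SELinux is enabled on the nodes
    # Abbreviated single-line toleration format:
    dsTolerations: '{key: taint1, operator: Exists, effect: NoSchedule}'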

kube-state-metrics

Configuration options that can customize the Operator's kube-state-metrics installation.

cpuLimit

CPU limit for the kube-state-metrics deployment

memLimit

Memory limit for the kube-state-metrics deployment

cpuRequest

CPU request for the kube-state-metrics deployment

memRequest

Memory request for the kube-state-metrics deployment

resources

A comma-separated list of resources to capture. Example: cronjobs,daemonsets,deployments,ingresses,jobs,namespaces,nodes,persistentvolumeclaims,persistentvolumes,pods,replicasets,resourcequotas,services,statefulsets

tolerations

kube-state-metrics additional tolerations.

labels

A comma-separated list of resources for which kube-state-metrics should capture labels. Example: cronjobs=[*],daemonsets=[*],deployments=[*],ingresses=[*],jobs=[*],namespaces=[*],nodes=[*],persistentvolumeclaims=[*],persistentvolumes=[*],pods=[*],replicasets=[*],resourcequotas=[*],services=[*],statefulsets=[*]
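For example, trimming kube-state-metrics down to a smaller set of resources and label captures might look like the following sketch; the shortened lists are illustrative only:

spec:
  kube-state-metrics:
    resources: 'deployments,namespaces,nodes,pods,statefulsets'   # illustrative subset
    labels: 'namespaces=[*],nodes=[*],pods=[*]'                   # capture labels only for these kinds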

logs

Configuration options that can customize the Operator's log collection and installation.

readFromHead

true/false, whether fluent-bit should read the log from the head

timeout

Timeout, in seconds

dnsMode

TCP/UDP, mode for DNS

fluent-bit-tolerations

fluent-bit-ds additional tolerations.

event-exporter-tolerations

event-exporter additional tolerations.

event-exporter-maxEventAgeSeconds

event-exporter maximum event age. See https://github.com/jkroepke/resmoio-kubernetes-event-exporter

fluent-bit-containerLogPath

By default, the Fluentbit DaemonSet will mount the /var/log and /var/lib/docker/containers host paths to access/read the Kubernetes container logs. If Kubernetes has been configured to place container logs in a non-default location, use this option to modify the Fluentbit DaemonSet to mount the non-default path.
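A sketch of a logs section that points Fluentbit at a non-default container log location; the path is a placeholder for wherever your container runtime actually writes logs, and the value syntax is assumed to be a plain host path string:

spec:
  logs:
    readFromHead: "true"
    dnsMode: "UDP"
    # Placeholder path; set this to the directory used by your runtime.
    fluent-bit-containerLogPath: '/var/lib/containers/logs'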

workload-map

Configuration options that can customize workload map collection and the Operator's installation.

cpuLimit

CPU limit for the net-observer ds

memLimit

Memory limit for the net-observer ds

cpuRequest

CPU request for the net-observer ds

memRequest

Memory request for the net-observer ds

metricAggregationInterval

Metric aggregation interval, in seconds

bpfPollInterval

BPF poll interval, in seconds

enableDNSLookup

true/false, enable DNS lookup

l4-tolerations

net-observer-l4-ds additional tolerations.

runPrivileged

true/false - Set runPrivileged to true if SELinux is enabled on your Kubernetes nodes.
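As a sketch, a workload-map section might look like this; the intervals shown are the defaults from the sample file, and runPrivileged only needs to be true on SELinux-enabled nodes:

spec:
  workload-map:
    metricAggregationInterval: '60'      # seconds; allowed range 30-120
    bpfPollInterval: '8'                 # seconds; allowed range 3-15
    enableDNSLookup: 'true'              # reverse DNS lookups on observed IPs
    runPrivileged: 'false'               # set to 'true' when SELinux is enabled on the nodes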

change-management

Configuration options for Kubernetes change management and analysis

cpuLimit

CPU limit for change-observer-watch-rs

memLimit

Memory limit for change-observer-watch-rs

cpuRequest

CPU request for change-observer-watch-rs

memRequest

Memory request for change-observer-watch-rs

workloadFailureDeclarationIntervalSeconds

Interval after which a non-successful deployment of a workload will be marked as failed, in seconds

workloadDeployAggrIntervalSeconds

Frequency at which workload deployments are combined and sent, in seconds

nonWorkloadDeployAggrIntervalSeconds

Frequency at which non-workload deployments are combined and sent, in seconds

termsToRedact

A set of regular expressions used in env names and data maps whose values will be redacted. Example terms: "pwd", "password", "token", "apikey", "api-key", "jwt"

additionalKindsToWatch

A comma-separated list of additional kinds to watch, beyond the default set of kinds watched by the collector

kindsToIgnoreFromWatch

A comma-separated list of kinds to ignore from watching, from the default set of kinds watched by the collector

logRecordAggrIntervalSeconds

Frequency with which log records are sent to CI from the collector

watch-tolerations

change-observer-watch-ds additional tolerations. Abbreviated single-line format only. Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
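As a sketch, a change-management section overriding a few of these options might look like the following; the redaction terms and the additional kind are examples drawn from the annotated sample below, not required values:

spec:
  change-management:
    workloadFailureDeclarationIntervalSeconds: '30'
    # Values whose env/data keys match these patterns are redacted:
    termsToRedact: '"pwd", "password", "token", "apikey"'
    # Kinds must be prefixed with their apigroup:
    additionalKindsToWatch: '"authorization.k8s.io.subjectaccessreviews"'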

Sample AgentConfiguration file

Below is a sample AgentConfiguration file.

apiVersion: monitoring.netapp.com/v1alpha1
kind: AgentConfiguration
metadata:
  name: netapp-ci-monitoring-configuration
  namespace: "netapp-monitoring"
  labels:
    installed-by: nkmo-netapp-monitoring

spec:
  # # You can modify the following fields to configure the operator.
  # # Optional settings are commented out and include default values for reference
  # #   To update them, uncomment the line, change the value, and apply the updated AgentConfiguration.
  agent:
    # # [Required Field] A uniquely identifiable user-friendly clustername.
    # # clusterName must be unique across all clusters in your Data Infrastructure Insights environment.
    clusterName: "my_cluster"

    # # Proxy settings. The proxy that the operator should use to send metrics to Data Infrastructure Insights.
    # # Please see documentation here: https://docs.netapp.com/us-en/cloudinsights/task_config_telegraf_agent_k8s.html#configuring-proxy-support
    # proxy:
    #   server:
    #   port:
    #   noproxy:
    #   username:
    #   password:
    #   isTelegrafProxyEnabled:
    #   isFluentbitProxyEnabled:
    #   isCollectorsProxyEnabled:

    # # [Required Field] By default, the operator uses the CI repository.
    # # To use a private repository, change this field to your repository name.
    # # Please see documentation here: https://docs.netapp.com/us-en/cloudinsights/task_config_telegraf_agent_k8s.html#using-a-custom-or-private-docker-repository
    dockerRepo: 'docker.c01.cloudinsights.netapp.com'
    # # [Required Field] The name of the imagePullSecret for dockerRepo.
    # # If you are using a private repository, change this field from 'netapp-ci-docker' to the name of your secret.
    dockerImagePullSecret: 'netapp-ci-docker'

    # # Allow the operator to automatically rotate its ApiKey before expiration.
    # tokenRotationEnabled: 'true'
    # # Number of days before expiration that the ApiKey should be rotated. This must be less than the total ApiKey duration.
    # tokenRotationThresholdDays: '30'

  telegraf:
    # # Settings to fine-tune metrics data collection. Telegraf config names are included in parenthesis.
    # # See https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#agent

    # # The default time telegraf will wait between inputs for all plugins (interval). Max=60
    # collectionInterval: '60s'
    # # Maximum number of records per output that telegraf will write in one batch (metric_batch_size).
    # batchSize: '10000'
    # # Maximum number of records per output that telegraf will cache pending a successful write (metric_buffer_limit).
    # bufferLimit: '150000'
    # # Collect metrics on multiples of interval (round_interval).
    # roundInterval: 'true'
    # # Each plugin waits a random amount of time between the scheduled collection time and that time + collection_jitter before collecting inputs (collection_jitter).
    # collectionJitter: '0s'
    # # Collected metrics are rounded to the precision specified. When set to "0s" precision will be set by the units specified by interval (precision).
    # precision: '0s'
    # # Time telegraf will wait between writing outputs (flush_interval). Max=collectionInterval
    # flushInterval: '60s'
    # # Each output waits a random amount of time between the scheduled write time and that time + flush_jitter before writing outputs (flush_jitter).
    # flushJitter: '0s'
    # # Timeout for writing to outputs (timeout).
    # outputTimeout: '5s'

    # # telegraf-ds CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # dsCpuLimit: '750m'
    # dsMemLimit: '800Mi'
    # dsCpuRequest: '100m'
    # dsMemRequest: '500Mi'

    # # telegraf-rs CPU/Mem limits and requests.
    # rsCpuLimit: '3'
    # rsMemLimit: '4Gi'
    # rsCpuRequest: '100m'
    # rsMemRequest: '500Mi'

    # # Skip second run of processors after aggregators
    # skipProcessorsAfterAggregators: 'true'

    # # telegraf additional tolerations. Use the following abbreviated single line format only.
    # # Inspect telegraf-rs/-ds to view tolerations which are always present.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # dsTolerations: ''
    # rsTolerations: ''


    # If telegraf warns of insufficient lockable memory, try increasing the limit of lockable memory for Telegraf in the underlying operating system/node.  If increasing the limit is not an option, set this to true to instruct Telegraf to not attempt to reserve locked memory pages.  While this might pose a security risk as decrypted secrets might be swapped out to disk, it allows for execution in environments where reserving locked memory is not possible.
    # unprotected: 'false'

    # # Run the telegraf DaemonSet's telegraf-mountstats-poller container in privileged mode.  Set runPrivileged to true if SELinux is enabled on your Kubernetes nodes.
    # runPrivileged: '{{ .Values.telegraf_installer.kubernetes.privileged_mode }}'

    # # Set runDsPrivileged to true to run the telegraf DaemonSet's telegraf container in privileged mode
    # runDsPrivileged: '{{ .Values.telegraf_installer.kubernetes.ds.privileged_mode }}'

    # # Collect container Block IO metrics.
    # dsBlockIOEnabled: 'true'

    # # Collect NFS IO metrics.
    # dsNfsIOEnabled: 'true'

    # # Collect kubernetes.system_container metrics and objects in the kube-system|cattle-system namespaces for managed kubernetes clusters (EKS, AKS, GKE, managed Rancher).  Set this to true if you want collect these metrics.
    # managedK8sSystemMetricCollectionEnabled: 'false'

    # # Collect kubernetes.pod_volume (pod ephemeral storage) metrics.  Set this to true if you want to collect these metrics.
    # podVolumeMetricCollectionEnabled: 'false'

    # # Declare Rancher cluster as managed.  Set this to true if your Rancher cluster is managed as opposed to on-premise.
    # isManagedRancher: 'false'

    # # If telegraf-rs fails to start due to being unable to find the etcd crt and key, manually specify the appropriate path here.
    # rsHostEtcdCrt: ''
    # rsHostEtcdKey: ''

  # kube-state-metrics:
    # # kube-state-metrics CPU/Mem limits and requests.
    # cpuLimit: '500m'
    # memLimit: '1Gi'
    # cpuRequest: '100m'
    # memRequest: '500Mi'

    # # Comma-separated list of resources to enable.
    # # See resources in https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
    # resources: 'cronjobs,daemonsets,deployments,ingresses,jobs,namespaces,nodes,persistentvolumeclaims,persistentvolumes,pods,replicasets,resourcequotas,services,statefulsets'

    # # Comma-separated list of metrics to enable.
    # # See metric-allowlist in https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
    # metrics: 'kube_cronjob_created,kube_cronjob_status_active,kube_cronjob_labels,kube_daemonset_created,kube_daemonset_status_current_number_scheduled,kube_daemonset_status_desired_number_scheduled,kube_daemonset_status_number_available,kube_daemonset_status_number_misscheduled,kube_daemonset_status_number_ready,kube_daemonset_status_number_unavailable,kube_daemonset_status_observed_generation,kube_daemonset_status_updated_number_scheduled,kube_daemonset_metadata_generation,kube_daemonset_labels,kube_deployment_status_replicas,kube_deployment_status_replicas_available,kube_deployment_status_replicas_unavailable,kube_deployment_status_replicas_updated,kube_deployment_status_observed_generation,kube_deployment_spec_replicas,kube_deployment_spec_paused,kube_deployment_spec_strategy_rollingupdate_max_unavailable,kube_deployment_spec_strategy_rollingupdate_max_surge,kube_deployment_metadata_generation,kube_deployment_labels,kube_deployment_created,kube_job_created,kube_job_owner,kube_job_status_active,kube_job_status_succeeded,kube_job_status_failed,kube_job_labels,kube_job_status_start_time,kube_job_status_completion_time,kube_namespace_created,kube_namespace_labels,kube_namespace_status_phase,kube_node_info,kube_node_labels,kube_node_role,kube_node_spec_unschedulable,kube_node_created,kube_persistentvolume_capacity_bytes,kube_persistentvolume_status_phase,kube_persistentvolume_labels,kube_persistentvolume_info,kube_persistentvolume_claim_ref,kube_persistentvolumeclaim_access_mode,kube_persistentvolumeclaim_info,kube_persistentvolumeclaim_labels,kube_persistentvolumeclaim_resource_requests_storage_bytes,kube_persistentvolumeclaim_status_phase,kube_pod_info,kube_pod_start_time,kube_pod_completion_time,kube_pod_owner,kube_pod_labels,kube_pod_status_phase,kube_pod_status_ready,kube_pod_status_scheduled,kube_pod_container_info,kube_pod_container_status_waiting,kube_pod_container_status_waiting_reason,kube_pod_container_status_running,kube_pod_container_state_started,kube_pod_container_status_terminated,kube_pod_container_status_terminated_reason,kube_pod_container_status_last_terminated_reason,kube_pod_container_status_ready,kube_pod_container_status_restarts_total,kube_pod_overhead_cpu_cores,kube_pod_overhead_memory_bytes,kube_pod_created,kube_pod_deletion_timestamp,kube_pod_init_container_info,kube_pod_init_container_status_waiting,kube_pod_init_container_status_waiting_reason,kube_pod_init_container_status_running,kube_pod_init_container_status_terminated,kube_pod_init_container_status_terminated_reason,kube_pod_init_container_status_last_terminated_reason,kube_pod_init_container_status_ready,kube_pod_init_container_status_restarts_total,kube_pod_status_scheduled_time,kube_pod_status_unschedulable,kube_pod_spec_volumes_persistentvolumeclaims_readonly,kube_pod_container_resource_requests_cpu_cores,kube_pod_container_resource_requests_memory_bytes,kube_pod_container_resource_requests_storage_bytes,kube_pod_container_resource_requests_ephemeral_storage_bytes,kube_pod_container_resource_limits_cpu_cores,kube_pod_container_resource_limits_memory_bytes,kube_pod_container_resource_limits_storage_bytes,kube_pod_container_resource_limits_ephemeral_storage_bytes,kube_pod_init_container_resource_limits_cpu_cores,kube_pod_init_container_resource_limits_memory_bytes,kube_pod_init_container_resource_limits_storage_bytes,kube_pod_init_container_resource_limits_ephemeral_storage_bytes,kube_pod_init_container_resource_requests_cpu_cores,kube_pod_init_container_resource_requests_memory_bytes,kube_pod_init_container_resource_requests_storage_bytes,kube_pod_init_container_resource_requests_ephemeral_storage_bytes,kube_replicaset_status_replicas,kube_replicaset_status_ready_replicas,kube_replicaset_status_observed_generation,kube_replicaset_spec_replicas,kube_replicaset_metadata_generation,kube_replicaset_labels,kube_replicaset_created,kube_replicaset_owner,kube_resourcequota,kube_resourcequota_created,kube_service_info,kube_service_labels,kube_service_created,kube_service_spec_type,kube_statefulset_status_replicas,kube_statefulset_status_replicas_current,kube_statefulset_status_replicas_ready,kube_statefulset_status_replicas_updated,kube_statefulset_status_observed_generation,kube_statefulset_replicas,kube_statefulset_metadata_generation,kube_statefulset_created,kube_statefulset_labels,kube_statefulset_status_current_revision,kube_statefulset_status_update_revision,kube_node_status_capacity,kube_node_status_allocatable,kube_node_status_condition,kube_pod_container_resource_requests,kube_pod_container_resource_limits,kube_pod_init_container_resource_limits,kube_pod_init_container_resource_requests'

    # # Comma-separated list of Kubernetes label keys that will be used in the resources' labels metric.
    # # See metric-labels-allowlist in https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
    # labels: 'cronjobs=[*],daemonsets=[*],deployments=[*],ingresses=[*],jobs=[*],namespaces=[*],nodes=[*],persistentvolumeclaims=[*],persistentvolumes=[*],pods=[*],replicasets=[*],resourcequotas=[*],services=[*],statefulsets=[*]'

    # # kube-state-metrics additional tolerations. Use the following abbreviated single line format only.
    # # No tolerations are applied by default
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # tolerations: ''

    # # kube-state-metrics shards.  Increase the number of shards for larger clusters if telegraf RS pod(s) experience collection timeouts
    # shards: '2'

  # # Settings for the Events Log feature.
  # logs:
    # # Set runPrivileged to true if Fluent Bit fails to start, trying to open/create its database.
    # runPrivileged: 'false'

    # # If Fluent Bit should read new files from the head, not tail.
    # # See Read_from_Head in https://docs.fluentbit.io/manual/pipeline/inputs/tail
    # readFromHead: "true"

    # # Network protocol that Fluent Bit should use for DNS: "UDP" or "TCP".
    # dnsMode: "UDP"

    # # DNS resolver that Fluent Bit should use: "LEGACY" or "ASYNC"
    # fluentBitDNSResolver: "LEGACY"

    # # Logs additional tolerations. Use the following abbreviated single line format only.
    # # Inspect fluent-bit-ds to view tolerations which are always present. No tolerations are applied by default for event-exporter.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # fluent-bit-tolerations: ''
    # event-exporter-tolerations: ''

    # # event-exporter CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # event-exporter-cpuLimit: '500m'
    # event-exporter-memLimit: '1Gi'
    # event-exporter-cpuRequest: '50m'
    # event-exporter-memRequest: '100Mi'

    # # event-exporter max event age.
    # # See https://github.com/jkroepke/resmoio-kubernetes-event-exporter
    # event-exporter-maxEventAgeSeconds: '10'

    # # event-exporter client-side throttling
    # # Set kubeBurst to roughly match your events per minute and kubeQPS=kubeBurst/5
    # # See https://github.com/resmoio/kubernetes-event-exporter#troubleshoot-events-discarded-warning
    # event-exporter-kubeQPS: 20
    # event-exporter-kubeBurst: 100

    # # fluent-bit CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # fluent-bit-cpuLimit: '500m'
    # fluent-bit-memLimit: '1Gi'
    # fluent-bit-cpuRequest: '50m'
    # fluent-bit-memRequest: '100Mi'

    # By default, the Fluentbit DaemonSet will mount the /var/log and /var/lib/docker/containers host paths to access/read the
    # Kubernetes container logs.  If Kubernetes has been configured to place container logs in a non-default location, use
    # this option to modify the Fluentbit DaemonSet to mount the non-default path.
    # fluent-bit-containerLogPath

  # # Settings for the Network Performance and Map feature.
  # workload-map:
    # # netapp-ci-net-observer-l4-ds CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # cpuLimit: '500m'
    # memLimit: '500Mi'
    # cpuRequest: '100m'
    # memRequest: '500Mi'

    # # Metric aggregation interval in seconds. Min=30, Max=120
    # metricAggregationInterval: '60'

    # # Interval for bpf polling. Min=3, Max=15
    # bpfPollInterval: '8'

    # # Enable performing reverse DNS lookups on observed IPs.
    # enableDNSLookup: 'true'

    # # netapp-ci-net-observer-l4-ds additional tolerations. Use the following abbreviated single line format only.
    # # Inspect netapp-ci-net-observer-l4-ds to view tolerations which are always present.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # l4-tolerations: ''

    # # Set runPrivileged to true if SELinux is enabled on your Kubernetes nodes.
    # # Note: In OpenShift environments, this is set to true automatically.
    # runPrivileged: 'false'

  # change-management:
    # # change-observer-watch-rs CPU/Mem limits and requests.
    # # See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    # cpuLimit: '1'
    # memLimit: '1Gi'
    # cpuRequest: '500m'
    # memRequest: '500Mi'

    # # Interval after which a non-successful deployment of a workload will be marked as failed, in seconds
    # workloadFailureDeclarationIntervalSeconds: '30'

    # # Frequency at which workload deployments are combined and sent, in seconds
    # workloadDeployAggrIntervalSeconds: '300'

    # # Frequency at which non-workload deployments are combined and sent, in seconds
    # nonWorkloadDeployAggrIntervalSeconds: '15'

    # # A set of regular expressions used in env names and data maps whose value will be redacted
    # termsToRedact: '"pwd", "password", "token", "apikey", "api-key", "api_key", "jwt", "accesskey", "access_key", "access-key", "ca-file", "key-file", "cert", "cafile", "keyfile", "tls", "crt", "salt", ".dockerconfigjson", "auth", "secret"'

    # # A comma separated list of additional kinds to watch from the default set of kinds watched by the collector
    # # Each kind will have to be prefixed by its apigroup
    # # Example: '"authorization.k8s.io.subjectaccessreviews"'
    # additionalKindsToWatch: ''

    # # A comma separated list of additional field paths whose diff is ignored as part of change analytics. This list in addition to the default set of field paths ignored by the collector.
    # # Example: '"metadata.specTime", "data.status"'
    # additionalFieldsDiffToIgnore: ''

    # # A comma separated list of kinds to ignore from watching from the default set of kinds watched by the collector
    # # Each kind will have to be prefixed by its apigroup
    # # Example: '"networking.k8s.io.networkpolicies,batch.jobs", "authorization.k8s.io.subjectaccessreviews"'
    # kindsToIgnoreFromWatch: ''

    # # Frequency with which log records are sent to CI from the collector
    # logRecordAggrIntervalSeconds: '20'

    # # change-observer-watch-ds additional tolerations. Use the following abbreviated single line format only.
    # # Inspect change-observer-watch-ds to view tolerations which are always present.
    # # Example: '{key: taint1, operator: Exists, effect: NoSchedule},{key: taint2, operator: Exists, effect: NoExecute}'
    # watch-tolerations: ''
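
Once edited, the AgentConfiguration is applied like any other Kubernetes resource, typically with kubectl apply -f <your-agentconfiguration-file>.yaml against the namespace where the monitoring operator is installed (netapp-monitoring by default).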