<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dan Johansson</title>
    <description>The latest articles on DEV Community by Dan Johansson (@danjoh74).</description>
    <link>https://dev.to/danjoh74</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1030518%2Fe5d9b651-25ac-4b8c-a9ad-ff76f2ed54a5.jpeg</url>
      <title>DEV Community: Dan Johansson</title>
      <link>https://dev.to/danjoh74</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/danjoh74"/>
    <language>en</language>
    <item>
      <title>Enable annotation based scraping in Azure Monitor managed service for Prometheus</title>
      <dc:creator>Dan Johansson</dc:creator>
      <pubDate>Mon, 20 Feb 2023 21:29:53 +0000</pubDate>
      <link>https://dev.to/danjoh74/enable-annotation-based-scraping-in-azure-monitor-managed-service-for-prometheus-2gjd</link>
      <guid>https://dev.to/danjoh74/enable-annotation-based-scraping-in-azure-monitor-managed-service-for-prometheus-2gjd</guid>
      <description>&lt;p&gt;The aim of these instructions is to provide a concise and easy-to-follow guide, without going into intricate details. By following these instructions, you can activate annotation-based scraping of pods in an AKS cluster using the &lt;a href="https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/prometheus-metrics-overview" rel="noopener noreferrer"&gt;Azure Monitor managed service for Prometheus&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These instructions assume that you already have an AKS cluster up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create resources
&lt;/h2&gt;

&lt;p&gt;Create the following resources if you don't already have them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/Microsoft.AzurePrometheus" rel="noopener noreferrer"&gt;Azure Monitor Managed Service for Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft.azure-managed-grafana" rel="noopener noreferrer"&gt;Azure Managed Grafana&lt;/a&gt; (Not required but makes it possible to visualize Prometheus metrics.)&lt;/li&gt;
&lt;/ul&gt;
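
&lt;p&gt;If you prefer the command line, both resources can also be created with the Azure CLI. This is a minimal sketch: the names and resource group are placeholders, and the &lt;code&gt;az grafana&lt;/code&gt; command requires the &lt;em&gt;amg&lt;/em&gt; extension.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Azure Monitor workspace (managed service for Prometheus)
az monitor account create --name my-monitor-workspace --resource-group my-rg

# Optional: Azure Managed Grafana (requires the amg extension)
az grafana create --name my-grafana --resource-group my-rg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;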

&lt;h2&gt;
  
  
  Step 2: Enable metrics collection in Azure Monitor
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://portal.azure.com/" rel="noopener noreferrer"&gt;Azure Portal&lt;/a&gt;, navigate to the &lt;em&gt;Azure Monitor workspace&lt;/em&gt; that was created in step 1. Select &lt;em&gt;Monitored clusters&lt;/em&gt; and enable monitoring of your AKS cluster. Select the &lt;em&gt;Azure Managed Grafana&lt;/em&gt; instance that was created in step 1 as &lt;em&gt;Linked Grafana instance&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;(This can also be done from the Insights/Monitor settings of the AKS cluster.)&lt;/p&gt;
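
&lt;p&gt;If you want to script this step instead of using the portal, something like the following Azure CLI call should work. The resource names are placeholders, and the flags may require a recent CLI version or the &lt;em&gt;aks-preview&lt;/em&gt; extension.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aks update --name my-aks-cluster --resource-group my-rg \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id "&amp;lt;workspace-resource-id&amp;gt;" \
  --grafana-resource-id "&amp;lt;grafana-resource-id&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;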

&lt;h2&gt;
  
  
  Step 3: Create a config map for the monitoring addon for the Azure Monitor agent
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml" rel="noopener noreferrer"&gt;Download sample manifest&lt;/a&gt; and change from &lt;code&gt;monitor_kubernetes_pods = false&lt;/code&gt; to &lt;code&gt;monitor_kubernetes_pods = true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The final &lt;em&gt;ConfigMap&lt;/em&gt; manifest file should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schema-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;#string.used by agent to parse config. supported versions are {v1}. Configs with other schema versions will be rejected by the agent.&lt;/span&gt;
    &lt;span class="s"&gt;v1&lt;/span&gt;
  &lt;span class="na"&gt;config-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;#string.used by customer to keep track of this config file's version in their source control/repository (max allowed 10 chars, other chars will be truncated)&lt;/span&gt;
    &lt;span class="s"&gt;ver1&lt;/span&gt;
  &lt;span class="na"&gt;log-data-collection-settings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;# Log data collection settings&lt;/span&gt;
    &lt;span class="s"&gt;# Any errors related to config map settings can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to.&lt;/span&gt;

    &lt;span class="s"&gt;[log_collection_settings]&lt;/span&gt;
       &lt;span class="s"&gt;[log_collection_settings.stdout]&lt;/span&gt;
          &lt;span class="s"&gt;# In the absense of this configmap, default value for enabled is true&lt;/span&gt;
          &lt;span class="s"&gt;enabled = true&lt;/span&gt;
          &lt;span class="s"&gt;# exclude_namespaces setting holds good only if enabled is set to true&lt;/span&gt;
          &lt;span class="s"&gt;# kube-system,gatekeeper-system log collection are disabled by default in the absence of 'log_collection_settings.stdout' setting. If you want to enable kube-system,gatekeeper-system, remove them from the following setting.&lt;/span&gt;
          &lt;span class="s"&gt;# If you want to continue to disable kube-system,gatekeeper-system log collection keep the namespaces in the following setting and add any other namespace you want to disable log collection to the array.&lt;/span&gt;
          &lt;span class="s"&gt;# In the absense of this configmap, default value for exclude_namespaces = ["kube-system","gatekeeper-system"]&lt;/span&gt;
          &lt;span class="s"&gt;exclude_namespaces = ["kube-system","gatekeeper-system"]&lt;/span&gt;

       &lt;span class="s"&gt;[log_collection_settings.stderr]&lt;/span&gt;
          &lt;span class="s"&gt;# Default value for enabled is true&lt;/span&gt;
          &lt;span class="s"&gt;enabled = true&lt;/span&gt;
          &lt;span class="s"&gt;# exclude_namespaces setting holds good only if enabled is set to true&lt;/span&gt;
          &lt;span class="s"&gt;# kube-system,gatekeeper-system log collection are disabled by default in the absence of 'log_collection_settings.stderr' setting. If you want to enable kube-system,gatekeeper-system, remove them from the following setting.&lt;/span&gt;
          &lt;span class="s"&gt;# If you want to continue to disable kube-system,gatekeeper-system log collection keep the namespaces in the following setting and add any other namespace you want to disable log collection to the array.&lt;/span&gt;
          &lt;span class="s"&gt;# In the absense of this configmap, default value for exclude_namespaces = ["kube-system","gatekeeper-system"]&lt;/span&gt;
          &lt;span class="s"&gt;exclude_namespaces = ["kube-system","gatekeeper-system"]&lt;/span&gt;

       &lt;span class="s"&gt;[log_collection_settings.env_var]&lt;/span&gt;
          &lt;span class="s"&gt;# In the absense of this configmap, default value for enabled is true&lt;/span&gt;
          &lt;span class="s"&gt;enabled = true&lt;/span&gt;
       &lt;span class="s"&gt;[log_collection_settings.enrich_container_logs]&lt;/span&gt;
          &lt;span class="s"&gt;# In the absense of this configmap, default value for enrich_container_logs is false&lt;/span&gt;
          &lt;span class="s"&gt;enabled = false&lt;/span&gt;
          &lt;span class="s"&gt;# When this is enabled (enabled = true), every container log entry (both stdout &amp;amp; stderr) will be enriched with container Name &amp;amp; container Image&lt;/span&gt;
       &lt;span class="s"&gt;[log_collection_settings.collect_all_kube_events]&lt;/span&gt;
          &lt;span class="s"&gt;# In the absense of this configmap, default value for collect_all_kube_events is false&lt;/span&gt;
          &lt;span class="s"&gt;# When the setting is set to false, only the kube events with !normal event type will be collected&lt;/span&gt;
          &lt;span class="s"&gt;enabled = false&lt;/span&gt;
          &lt;span class="s"&gt;# When this is enabled (enabled = true), all kube events including normal events will be collected&lt;/span&gt;
       &lt;span class="s"&gt;#[log_collection_settings.schema]&lt;/span&gt;
          &lt;span class="s"&gt;# In the absence of this configmap, default value for containerlog_schema_version is "v1"&lt;/span&gt;
          &lt;span class="s"&gt;# Supported values for this setting are "v1","v2"&lt;/span&gt;
          &lt;span class="s"&gt;# See documentation at https://aka.ms/ContainerLogv2 for benefits of v2 schema over v1 schema before opting for "v2" schema&lt;/span&gt;
          &lt;span class="s"&gt;# containerlog_schema_version = "v2"&lt;/span&gt;
       &lt;span class="s"&gt;#[log_collection_settings.enable_multiline_logs]&lt;/span&gt;
          &lt;span class="s"&gt;# fluent-bit based multiline log collection for go (stacktrace), dotnet (stacktrace)&lt;/span&gt;
          &lt;span class="s"&gt;# if enabled will also stitch together container logs split by docker/cri due to size limits(16KB per log line)&lt;/span&gt;
          &lt;span class="s"&gt;# enabled = "false"&lt;/span&gt;


  &lt;span class="na"&gt;prometheus-data-collection-settings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;# Custom Prometheus metrics data collection settings&lt;/span&gt;
    &lt;span class="s"&gt;[prometheus_data_collection_settings.cluster]&lt;/span&gt;
        &lt;span class="s"&gt;# Cluster level scrape endpoint(s). These metrics will be scraped from agent's Replicaset (singleton)&lt;/span&gt;
        &lt;span class="s"&gt;# Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to.&lt;/span&gt;

        &lt;span class="s"&gt;#Interval specifying how often to scrape for metrics. This is duration of time and can be specified for supporting settings by combining an integer value and time unit as a string value. Valid time units are ns, us (or µs), ms, s, m, h.&lt;/span&gt;
        &lt;span class="s"&gt;interval = "1m"&lt;/span&gt;

        &lt;span class="s"&gt;## Uncomment the following settings with valid string arrays for prometheus scraping&lt;/span&gt;
        &lt;span class="s"&gt;#fieldpass = ["metric_to_pass1", "metric_to_pass12"]&lt;/span&gt;

        &lt;span class="s"&gt;#fielddrop = ["metric_to_drop"]&lt;/span&gt;

        &lt;span class="s"&gt;# An array of urls to scrape metrics from.&lt;/span&gt;
        &lt;span class="s"&gt;# urls = ["http://myurl:9101/metrics"]&lt;/span&gt;

        &lt;span class="s"&gt;# An array of Kubernetes services to scrape metrics from.&lt;/span&gt;
        &lt;span class="s"&gt;# kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]&lt;/span&gt;

        &lt;span class="s"&gt;# When monitor_kubernetes_pods = true, replicaset will scrape Kubernetes pods for the following prometheus annotations:&lt;/span&gt;
        &lt;span class="s"&gt;# - prometheus.io/scrape: Enable scraping for this pod&lt;/span&gt;
        &lt;span class="s"&gt;# - prometheus.io/scheme: If the metrics endpoint is secured then you will need to&lt;/span&gt;
        &lt;span class="s"&gt;#     set this to `https` &amp;amp; most likely set the tls config.&lt;/span&gt;
        &lt;span class="s"&gt;# - prometheus.io/path: If the metrics path is not /metrics, define it with this annotation.&lt;/span&gt;
        &lt;span class="s"&gt;# - prometheus.io/port: If port is not 9102 use this annotation&lt;/span&gt;
        &lt;span class="s"&gt;monitor_kubernetes_pods = true&lt;/span&gt;

        &lt;span class="s"&gt;## Restricts Kubernetes monitoring to namespaces for pods that have annotations set and are scraped using the monitor_kubernetes_pods setting.&lt;/span&gt;
        &lt;span class="s"&gt;## This will take effect when monitor_kubernetes_pods is set to true&lt;/span&gt;
        &lt;span class="s"&gt;##   ex: monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]&lt;/span&gt;
        &lt;span class="s"&gt;# monitor_kubernetes_pods_namespaces = ["default1"]&lt;/span&gt;

        &lt;span class="s"&gt;## Label selector to target pods which have the specified label&lt;/span&gt;
        &lt;span class="s"&gt;## This will take effect when monitor_kubernetes_pods is set to true&lt;/span&gt;
        &lt;span class="s"&gt;## Reference the docs at https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors&lt;/span&gt;
        &lt;span class="s"&gt;# kubernetes_label_selector = "env=dev,app=nginx"&lt;/span&gt;

        &lt;span class="s"&gt;## Field selector to target pods which have the specified field&lt;/span&gt;
        &lt;span class="s"&gt;## This will take effect when monitor_kubernetes_pods is set to true&lt;/span&gt;
        &lt;span class="s"&gt;## Reference the docs at https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/&lt;/span&gt;
        &lt;span class="s"&gt;## eg. To scrape pods on a specific node&lt;/span&gt;
        &lt;span class="s"&gt;# kubernetes_field_selector = "spec.nodeName=$HOSTNAME"&lt;/span&gt;

    &lt;span class="s"&gt;[prometheus_data_collection_settings.node]&lt;/span&gt;
        &lt;span class="s"&gt;# Node level scrape endpoint(s). These metrics will be scraped from agent's DaemonSet running in every node in the cluster&lt;/span&gt;
        &lt;span class="s"&gt;# Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to.&lt;/span&gt;

        &lt;span class="s"&gt;#Interval specifying how often to scrape for metrics. This is duration of time and can be specified for supporting settings by combining an integer value and time unit as a string value. Valid time units are ns, us (or µs), ms, s, m, h.&lt;/span&gt;
        &lt;span class="s"&gt;interval = "1m"&lt;/span&gt;

        &lt;span class="s"&gt;## Uncomment the following settings with valid string arrays for prometheus scraping&lt;/span&gt;

        &lt;span class="s"&gt;# An array of urls to scrape metrics from. $NODE_IP (all upper case) will substitute of running Node's IP address&lt;/span&gt;
        &lt;span class="s"&gt;# urls = ["http://$NODE_IP:9103/metrics"]&lt;/span&gt;

        &lt;span class="s"&gt;#fieldpass = ["metric_to_pass1", "metric_to_pass12"]&lt;/span&gt;

        &lt;span class="s"&gt;#fielddrop = ["metric_to_drop"]&lt;/span&gt;

  &lt;span class="na"&gt;metric_collection_settings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;# Metrics collection settings for metrics sent to Log Analytics and MDM&lt;/span&gt;
    &lt;span class="s"&gt;[metric_collection_settings.collect_kube_system_pv_metrics]&lt;/span&gt;
      &lt;span class="s"&gt;# In the absense of this configmap, default value for collect_kube_system_pv_metrics is false&lt;/span&gt;
      &lt;span class="s"&gt;# When the setting is set to false, only the persistent volume metrics outside the kube-system namespace will be collected&lt;/span&gt;
      &lt;span class="s"&gt;enabled = false&lt;/span&gt;
      &lt;span class="s"&gt;# When this is enabled (enabled = true), persistent volume metrics including those in the kube-system namespace will be collected&lt;/span&gt;

  &lt;span class="na"&gt;alertable-metrics-configuration-settings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;# Alertable metrics configuration settings for container resource utilization&lt;/span&gt;
    &lt;span class="s"&gt;[alertable_metrics_configuration_settings.container_resource_utilization_thresholds]&lt;/span&gt;
        &lt;span class="s"&gt;# The threshold(Type Float) will be rounded off to 2 decimal points&lt;/span&gt;
        &lt;span class="s"&gt;# Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage&lt;/span&gt;
        &lt;span class="s"&gt;container_cpu_threshold_percentage = 95.0&lt;/span&gt;
        &lt;span class="s"&gt;# Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage&lt;/span&gt;
        &lt;span class="s"&gt;container_memory_rss_threshold_percentage = 95.0&lt;/span&gt;
        &lt;span class="s"&gt;# Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage&lt;/span&gt;
        &lt;span class="s"&gt;container_memory_working_set_threshold_percentage = 95.0&lt;/span&gt;

    &lt;span class="s"&gt;# Alertable metrics configuration settings for persistent volume utilization&lt;/span&gt;
    &lt;span class="s"&gt;[alertable_metrics_configuration_settings.pv_utilization_thresholds]&lt;/span&gt;
        &lt;span class="s"&gt;# Threshold for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage&lt;/span&gt;
        &lt;span class="s"&gt;pv_usage_threshold_percentage = 60.0&lt;/span&gt;

    &lt;span class="s"&gt;# Alertable metrics configuration settings for completed jobs count&lt;/span&gt;
    &lt;span class="s"&gt;[alertable_metrics_configuration_settings.job_completion_threshold]&lt;/span&gt;
        &lt;span class="s"&gt;# Threshold for completed job count , metric will be sent only for those jobs which were completed earlier than the following threshold&lt;/span&gt;
        &lt;span class="s"&gt;job_completion_threshold_time_minutes = 360&lt;/span&gt;
  &lt;span class="na"&gt;integrations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;[integrations.azure_network_policy_manager]&lt;/span&gt;
        &lt;span class="s"&gt;collect_basic_metrics = false&lt;/span&gt;
        &lt;span class="s"&gt;collect_advanced_metrics = false&lt;/span&gt;
    &lt;span class="s"&gt;[integrations.azure_subnet_ip_usage]&lt;/span&gt;
        &lt;span class="s"&gt;enabled = false&lt;/span&gt;

&lt;span class="c1"&gt;# Doc - https://github.com/microsoft/Docker-Provider/blob/ci_prod/Documentation/AgentSettings/ReadMe.md&lt;/span&gt;
  &lt;span class="na"&gt;agent-settings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;# prometheus scrape fluent bit settings for high scale&lt;/span&gt;
    &lt;span class="s"&gt;# buffer size should be greater than or equal to chunk size else we set it to chunk size.&lt;/span&gt;
    &lt;span class="s"&gt;# settings scoped to prometheus sidecar container. all values in mb&lt;/span&gt;
    &lt;span class="s"&gt;[agent_settings.prometheus_fbit_settings]&lt;/span&gt;
      &lt;span class="s"&gt;tcp_listener_chunk_size = 10&lt;/span&gt;
      &lt;span class="s"&gt;tcp_listener_buffer_size = 10&lt;/span&gt;
      &lt;span class="s"&gt;tcp_listener_mem_buf_limit = 200&lt;/span&gt;

    &lt;span class="s"&gt;# prometheus scrape fluent bit settings for high scale&lt;/span&gt;
    &lt;span class="s"&gt;# buffer size should be greater than or equal to chunk size else we set it to chunk size.&lt;/span&gt;
    &lt;span class="s"&gt;# settings scoped to daemonset container. all values in mb&lt;/span&gt;
    &lt;span class="s"&gt;# [agent_settings.node_prometheus_fbit_settings]&lt;/span&gt;
      &lt;span class="s"&gt;# tcp_listener_chunk_size = 1&lt;/span&gt;
      &lt;span class="s"&gt;# tcp_listener_buffer_size = 1&lt;/span&gt;
      &lt;span class="s"&gt;# tcp_listener_mem_buf_limit = 10&lt;/span&gt;

    &lt;span class="s"&gt;# prometheus scrape fluent bit settings for high scale&lt;/span&gt;
    &lt;span class="s"&gt;# buffer size should be greater than or equal to chunk size else we set it to chunk size.&lt;/span&gt;
    &lt;span class="s"&gt;# settings scoped to replicaset container. all values in mb&lt;/span&gt;
    &lt;span class="s"&gt;# [agent_settings.cluster_prometheus_fbit_settings]&lt;/span&gt;
      &lt;span class="s"&gt;# tcp_listener_chunk_size = 1&lt;/span&gt;
      &lt;span class="s"&gt;# tcp_listener_buffer_size = 1&lt;/span&gt;
      &lt;span class="s"&gt;# tcp_listener_mem_buf_limit = 10&lt;/span&gt;

    &lt;span class="s"&gt;# The following settings are "undocumented", we don't recommend uncommenting them unless directed by Microsoft.&lt;/span&gt;
    &lt;span class="s"&gt;# They increase the maximum stdout/stderr log collection rate but will also cause higher cpu/memory usage.&lt;/span&gt;
    &lt;span class="s"&gt;## Ref for more details about Ignore_Older -  https://docs.fluentbit.io/manual/v/1.7/pipeline/inputs/tail&lt;/span&gt;
    &lt;span class="s"&gt;# [agent_settings.fbit_config]&lt;/span&gt;
    &lt;span class="s"&gt;#   log_flush_interval_secs = "1"                 # default value is 15&lt;/span&gt;
    &lt;span class="s"&gt;#   tail_mem_buf_limit_megabytes = "10"           # default value is 10&lt;/span&gt;
    &lt;span class="s"&gt;#   tail_buf_chunksize_megabytes = "1"            # default value is 32kb (comment out this line for default)&lt;/span&gt;
    &lt;span class="s"&gt;#   tail_buf_maxsize_megabytes = "1"              # default value is 32kb (comment out this line for default)&lt;/span&gt;
    &lt;span class="s"&gt;#   tail_ignore_older = "5m"                      # default value same as fluent-bit default i.e.0m&lt;/span&gt;

    &lt;span class="s"&gt;# On both AKS &amp;amp; Arc K8s enviornments, if Cluster has configured with Forward Proxy then Proxy settings automatically applied and used for the agent&lt;/span&gt;
    &lt;span class="s"&gt;# Certain configurations, proxy config should be ignored for example Cluster with AMPLS + Proxy&lt;/span&gt;
    &lt;span class="s"&gt;# in such scenarios, use the following config to ignore proxy settings&lt;/span&gt;
    &lt;span class="s"&gt;# [agent_settings.proxy_config]&lt;/span&gt;
    &lt;span class="s"&gt;#    ignore_proxy_settings = "true"  # if this is not applied, default value is false&lt;/span&gt;

&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;container-azm-ms-agentconfig&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the manifest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f container-azm-ms-agentconfig.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Create a config map with a job for Annotation Based Scraping
&lt;/h2&gt;

&lt;p&gt;Create a new manifest file named &lt;em&gt;ama-metrics-prometheus-config-node.yaml&lt;/em&gt; with the following content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;prometheus-config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;scrape_configs:&lt;/span&gt;
    &lt;span class="s"&gt;- job_name: 'kubernetes-pods'&lt;/span&gt;

      &lt;span class="s"&gt;kubernetes_sd_configs:&lt;/span&gt;
      &lt;span class="s"&gt;- role: pod&lt;/span&gt;

      &lt;span class="s"&gt;relabel_configs:&lt;/span&gt;
      &lt;span class="s"&gt;# Scrape only pods with the annotation: prometheus.io/scrape = true&lt;/span&gt;
      &lt;span class="s"&gt;- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]&lt;/span&gt;
        &lt;span class="s"&gt;action: keep&lt;/span&gt;
        &lt;span class="s"&gt;regex: true&lt;/span&gt;

      &lt;span class="s"&gt;# If prometheus.io/path is specified, scrape this path instead of /metrics&lt;/span&gt;
      &lt;span class="s"&gt;- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]&lt;/span&gt;
        &lt;span class="s"&gt;action: replace&lt;/span&gt;
        &lt;span class="s"&gt;target_label: __metrics_path__&lt;/span&gt;
        &lt;span class="s"&gt;regex: (.+)&lt;/span&gt;

      &lt;span class="s"&gt;# If prometheus.io/port is specified, scrape this port instead of the default&lt;/span&gt;
      &lt;span class="s"&gt;- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]&lt;/span&gt;
        &lt;span class="s"&gt;action: replace&lt;/span&gt;
        &lt;span class="s"&gt;regex: ([^:]+)(?::\d+)?;(\d+)&lt;/span&gt;
        &lt;span class="s"&gt;replacement: $1:$2&lt;/span&gt;
        &lt;span class="s"&gt;target_label: __address__&lt;/span&gt;

      &lt;span class="s"&gt;# If prometheus.io/scheme is specified, scrape with this scheme instead of http&lt;/span&gt;
      &lt;span class="s"&gt;- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]&lt;/span&gt;
        &lt;span class="s"&gt;action: replace&lt;/span&gt;
        &lt;span class="s"&gt;regex: (http|https)&lt;/span&gt;
        &lt;span class="s"&gt;target_label: __scheme__&lt;/span&gt;

      &lt;span class="s"&gt;# Include the pod namespace as a label for each metric (namespace)&lt;/span&gt;
      &lt;span class="s"&gt;- source_labels: [__meta_kubernetes_namespace]&lt;/span&gt;
        &lt;span class="s"&gt;action: replace&lt;/span&gt;
        &lt;span class="s"&gt;target_label: namespace&lt;/span&gt;

      &lt;span class="s"&gt;# Include the pod name as a label for each metric&lt;/span&gt;
      &lt;span class="s"&gt;- source_labels: [__meta_kubernetes_pod_name]&lt;/span&gt;
        &lt;span class="s"&gt;action: replace&lt;/span&gt;
        &lt;span class="s"&gt;target_label: pod&lt;/span&gt;

      &lt;span class="s"&gt;# [Optional] Include all pod labels as labels for each metric&lt;/span&gt;
      &lt;span class="s"&gt;- action: labelmap&lt;/span&gt;
        &lt;span class="s"&gt;regex: __meta_kubernetes_pod_label_(.+)&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ama-metrics-prometheus-config-node&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the manifest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ama-metrics-prometheus-config-node.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Restart agent pods
&lt;/h2&gt;

&lt;p&gt;Restart all pods named &lt;code&gt;ama-metrics-node-…&lt;/code&gt; in the &lt;code&gt;kube-system&lt;/code&gt; namespace.&lt;/p&gt;
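
&lt;p&gt;Since these pods are managed by a DaemonSet, deleting them is enough; they will be recreated with the new configuration. One way to do this (assuming the default pod naming):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n kube-system get pods -o name | grep ama-metrics-node | xargs kubectl -n kube-system delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;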

&lt;h2&gt;
  
  
  Step 6: Annotate pods to be scraped
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Data type&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;prometheus.io/scrape&lt;/td&gt;
&lt;td&gt;Boolean&lt;/td&gt;
&lt;td&gt;true or false&lt;/td&gt;
&lt;td&gt;Enables scraping of the pod. Requires monitor_kubernetes_pods to be set to true.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;prometheus.io/scheme&lt;/td&gt;
&lt;td&gt;String&lt;/td&gt;
&lt;td&gt;http or https&lt;/td&gt;
&lt;td&gt;Defaults to scraping over HTTP. If necessary, set to https.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;prometheus.io/path&lt;/td&gt;
&lt;td&gt;String&lt;/td&gt;
&lt;td&gt;Path, for example /mymetrics&lt;/td&gt;
&lt;td&gt;The HTTP resource path from which to fetch metrics. If the metrics path isn't /metrics, define it with this annotation.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;prometheus.io/port&lt;/td&gt;
&lt;td&gt;String&lt;/td&gt;
&lt;td&gt;9102&lt;/td&gt;
&lt;td&gt;Specify a port to scrape from. If the port isn't set, it will default to 9102.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;prometheus.io/scrape&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# Enable scraping for this pod ​&lt;/span&gt;
    &lt;span class="na"&gt;prometheus.io/scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http'&lt;/span&gt; &lt;span class="c1"&gt;# If the metrics endpoint is secured then you will need to set this to `https`, if not default ‘http’​&lt;/span&gt;
    &lt;span class="na"&gt;prometheus.io/path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/mymetrics'&lt;/span&gt; &lt;span class="c1"&gt;# If the metrics path is not /metrics, define it with this annotation. ​&lt;/span&gt;
    &lt;span class="na"&gt;prometheus.io/port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8000&lt;/span&gt; &lt;span class="c1"&gt;# If port is not 9102 use this annotation​&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your pods are now being scraped, and their Prometheus metrics can be visualized in the Grafana instance created in step 1, using the automatically configured Prometheus data source.&lt;/p&gt;
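
&lt;p&gt;To confirm that the agent pods came back up after the restart, you can list them (the pod names below reflect the defaults at the time of writing):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n kube-system get pods | grep ama-metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;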

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-prometheus?tabs=cluster-wide" rel="noopener noreferrer"&gt;Collect Prometheus metrics with Container insights&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/prometheus-metrics-scrape-configuration" rel="noopener noreferrer"&gt;Customize scraping of Prometheus metrics in Azure Monitor&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
